{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:02:58.481408Z"
},
"title": "Narrative Detection and Feature Analysis in Online Health Communities",
"authors": [
{
"first": "Achyutarama",
"middle": [
"R"
],
"last": "Ganti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oakland University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Steven",
"middle": [
"R"
],
"last": "Wilson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oakland University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Zexin",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oakland University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Xinyan",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of North Carolina",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Rong",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Butler University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Narratives have been shown to be an effective way to communicate health risks and promote health behavior change, and given the growing amount of health information being shared on social media, it is crucial to study health-related narratives in social media. However, expert identification of a large number of narrative texts is a time-consuming process, and larger scale studies on the use of narratives may be enabled through automatic text classification approaches. Prior work has demonstrated that automatic narrative detection is possible, but modern deep learning approaches have not been used for this task in the domain of online health communities. Therefore, in this paper, we explore the use of deep learning methods to automatically classify the presence of narratives in social media posts, finding that they outperform previously proposed approaches. We also find that in many cases, these models generalize well across posts from different health organizations. Finally, in order to better understand the increase in performance achieved by deep learning models, we use feature analysis techniques to explore the features that most contribute to narrative detection for posts in online health communities.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Narratives have been shown to be an effective way to communicate health risks and promote health behavior change, and given the growing amount of health information being shared on social media, it is crucial to study health-related narratives in social media. However, expert identification of a large number of narrative texts is a time-consuming process, and larger scale studies on the use of narratives may be enabled through automatic text classification approaches. Prior work has demonstrated that automatic narrative detection is possible, but modern deep learning approaches have not been used for this task in the domain of online health communities. Therefore, in this paper, we explore the use of deep learning methods to automatically classify the presence of narratives in social media posts, finding that they outperform previously proposed approaches. We also find that in many cases, these models generalize well across posts from different health organizations. Finally, in order to better understand the increase in performance achieved by deep learning models, we use feature analysis techniques to explore the features that most contribute to narrative detection for posts in online health communities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Narrative forms of communication are widely used for conveying information and building connections. Broadly defined as a representation of someone's experience of a series of events (Bilandzic and Busselle, 2013) , narratives take on different formats, ranging from short anecdotes and testimonials to lengthy entertainment TV shows and movies (Kreuter et al., 2007) .",
"cite_spans": [
{
"start": 183,
"end": 213,
"text": "(Bilandzic and Busselle, 2013)",
"ref_id": "BIBREF4"
},
{
"start": 345,
"end": 367,
"text": "(Kreuter et al., 2007)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the health context, extensive research has found that narratives are more effective than nonnarratives (e.g., statistics, didactic arguments) in communicating health risks (Janssen et al., 2013; Ma, 2021) and promoting health behavior change (Kreuter et al., 2010) . Moreover, telling personal illness narratives helps patients to better cope with the illness (Carlick and Biley, 2004) and for health care professionals to better understand the illness (Kalitzkus and Matthiessen, 2009) . Given that social media has become a widely used platform for cancer patients and their caregivers to share stories and connect with others (Gage-Bouchard et al., 2017; Hale et al., 2020) , it is critical to understand what cancer narratives are told on social media and how they engage social media users.",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "(Janssen et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 198,
"end": 207,
"text": "Ma, 2021)",
"ref_id": "BIBREF21"
},
{
"start": 245,
"end": 267,
"text": "(Kreuter et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 363,
"end": 388,
"text": "(Carlick and Biley, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 456,
"end": 489,
"text": "(Kalitzkus and Matthiessen, 2009)",
"ref_id": "BIBREF16"
},
{
"start": 632,
"end": 660,
"text": "(Gage-Bouchard et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 661,
"end": 679,
"text": "Hale et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, in order to understand the impact of narratives in online communication, narratives must first be identified in social media datasets. Doing this often requires annotations from subject matter experts, which can be a costly process and difficult to scale up to massive datasets. In this work, we seek to understand the extent to which natural language processing methods, specifically, fine-tuned large language models, can be used to automatically detect narratives within social media posts in the health domain using only a relatively small number of expert annotations. Additionally, analyzing models that are able to successfully detect narratives can provide insights into the types of textual features that are most related to narrative text within a corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Toward these aims, we collect and annotate a dataset of social media posts created by breast cancer organizations and address the following research questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RQ1 Which text classification models provide the best performance for automatic narrative detection for social media texts posted by breast cancer organizations?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RQ2 How does the ability to detect narratives generalize across posts written by different organizations?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RQ3 Which features are most important for automatic narrative detection in this context?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To answer RQ1, we compare a range of text classification methods and find that transformer-based deep learning-based methods outperform classical approaches like support vector machines, as well as the previous state-of-the-art method for detecting narratives within health-related social media posts (Dirkson et al., 2019). To answer RQ2, we split our dataset so that the same organizations' accounts are not used for both train and test data, finding that in most cases, it is possible for our best models to generalize well across organizations. Finally, to answer RQ3, we use machine learning analysis tools to identify which features contribute most to the prediction of narratives, finding that references to people, such as pronouns and names, as well as state-of-being verbs like \"is\", contributed strongly to cases where models predicted that texts contained narratives.",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "(Dirkson et al., 2019",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results suggest that automatic detection of narratives in social media posts is a promising application of text classification, and can help ease the burden of manual annotation for researchers seeking to study the relationship between narrative and other variables of interest at scale. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online health communities have been computationally studied before in order to understand how users show social support for one another (Andy et al., 2021) , to automatically extract information needs of patients (Romberg et al., 2020) , and to identify linguistic patterns associated with anxiety (Rey-Villamizar et al., 2016) . Additionally, Antoniak et al. (2019) analyzed birth stories from an online forum and demonstrated the utility of these stories for computational work. Machine learning models have been trained using textual health forum data to predict attributes such as the sentiment (Ali et al., 2013) or cancer stage of the patients posting to forums (Jha and Elhadad, 2010 ). Yet, most work in the area of computational analysis of online medical forums has not considered the importance of narrative. At the same time, computational approaches incorporating and extracting narratives have led to advances in the study of corporate finance (Zmandar et al., 2021), environmental issues (Armbrust et al., 2020) , the analysis of clinical records (Jung et al., 2011) , and emotion classification within stories (Tanabe et al., 2020) .",
"cite_spans": [
{
"start": 136,
"end": 155,
"text": "(Andy et al., 2021)",
"ref_id": "BIBREF1"
},
{
"start": 213,
"end": 235,
"text": "(Romberg et al., 2020)",
"ref_id": "BIBREF27"
},
{
"start": 298,
"end": 327,
"text": "(Rey-Villamizar et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 599,
"end": 617,
"text": "(Ali et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 668,
"end": 690,
"text": "(Jha and Elhadad, 2010",
"ref_id": "BIBREF14"
},
{
"start": 1003,
"end": 1026,
"text": "(Armbrust et al., 2020)",
"ref_id": null
},
{
"start": 1062,
"end": 1081,
"text": "(Jung et al., 2011)",
"ref_id": "BIBREF15"
},
{
"start": 1126,
"end": 1147,
"text": "(Tanabe et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As NLP datasets, narratives are often directly collected by sampling data from sources that are already known to use narrative based on the genre of the corpus, such as literary works (Hammond et al., 2013) , doctors' notes (Elhadad et al., 2015) , or fan fiction (Yoder et al., 2021) . In the social media domain, data is often sampled in a way to ensure the presence of narratives, e.g., by collecting posts from specific subreddits which typically contain narrative style posts (Yan et al., 2019) .",
"cite_spans": [
{
"start": 184,
"end": 206,
"text": "(Hammond et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 224,
"end": 246,
"text": "(Elhadad et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 264,
"end": 284,
"text": "(Yoder et al., 2021)",
"ref_id": null
},
{
"start": 481,
"end": 499,
"text": "(Yan et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In other cases, the presence or location of narrative content is unknown beforehand and needs to be detected or extracted. This might be done using filtering criteria like the length of the post or the presence of predefined linguistic patterns (Vijayaraghavan and Roy, 2021) . However, some datasets contain a balanced mixture of both narrative and non-narrative content, and quick rule-based filtering is not adequate. In the domain of online health communities specifically, prior work has relied on expert annotations to determine what should or should not be considered a narrative (Dirkson et al., 2019; . In each of these works, text classification models were trained to automatically determine whether or not a given post contained narratives, and support vector machines (SVM) using bag-of-words or character n-gram features were found to be the best approach.",
"cite_spans": [
{
"start": 248,
"end": 278,
"text": "(Vijayaraghavan and Roy, 2021)",
"ref_id": "BIBREF32"
},
{
"start": 589,
"end": 611,
"text": "(Dirkson et al., 2019;",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We build upon this existing work by applying deep learning text classification models to the task of narrative detection in social media posts from breast cancer organizations, an example use case that includes personal narratives and texts for which narrative presence is unknown a priori, and that offers the potential to enable larger scale studies of the importance of narratives in health communication. We find that these approaches outperform SVM-based models similar to those used by Dirkson et al. (2019) 2 and Verberne et al. (2019), and we explore their effectiveness on our dataset throughout the rest of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A list of breast cancer non-profit organizations was identified from the Canadian cancer survivor network ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "(N = 8,580).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "The top 10% of posts in terms of total interactions were sampled for annotation. Following standard procedures in content analysis (Riff et al., 2014) , two expert coders annotated the presence of narratives (48.83%). All disagreements were resolved by discussion, and the consensus results were used for further analyses (i.e., the highest standard of intercoder reliability) (Krippendorff, 2004) . The overall agreement rate was above 0.9. For this study, we omit 9 posts which contained only videos or images and no text. The breakdown of the annotated dataset by non-profit organization account is presented in Table 1 .",
"cite_spans": [
{
"start": 128,
"end": 147,
"text": "(Riff et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 374,
"end": 394,
"text": "(Krippendorff, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Collection and Annotation",
"sec_num": "3"
},
{
"text": "Next, we set out to determine how well various text classification models could detect the presence of narratives given the expert annotations as training data. For this experiment, we appended data from all five non-profit organizations into a single dataset. All the data points were then shuffled and split, using 80% of the data for training and 10% each for the validation and test sets. The metrics used for model evaluation are the F1 score, precision, and recall of the narrative class. We consider two categories of models: classical machine learning models using bag-of-words features, and transformer-based deep learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting Narrative Style",
"sec_num": "4"
},
{
"text": "For the classical models, we experiment with various preprocessing schemes in terms of lowercasing, lemmatization, and stopword removal, and choose the approach that gave the best performance on our validation set. That process included: lowercasing, removing URLs, lemmatization using NLTK's WordNet (Miller, 1995) lemmatizer, and stopword removal using NLTK (Bird et al., 2009) . However, given the importance of pronouns in narrative detection as evidenced in prior work (Dirkson et al., 2019), we do not remove pronouns as part of our stopword removal step. The models that we consider are Naive Bayes, Logistic Regression, and an SVM classifier, using each model's scikit-learn (Pedregosa et al., 2011) Python implementation. Model-specific hyperparameters were also tuned using the validation set as described in Appendix A.",
"cite_spans": [
{
"start": 302,
"end": 316,
"text": "(Miller, 1995)",
"ref_id": "BIBREF22"
},
{
"start": 365,
"end": 384,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF5"
},
{
"start": 689,
"end": 713,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting Narrative Style",
"sec_num": "4"
},
{
"text": "Additionally, we consider the best reported approach from Dirkson et al. (2019), the previous best reported narrative detection model for online health forum data. We use the code provided by the authors to both preprocess the data and train the predictive model. The authors used an SVM classifier with a linear kernel and character-level trigram features as input, and so we refer to this model as SVM-trigram in our results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting Narrative Style",
"sec_num": "4"
},
{
"text": "For the deep learning models, we use DistilBERT, BERT (Devlin et al., 2019) , and RoBERTa (Liu et al., 2019 ) models based on the DistilBERT-Base-Uncased, BERT-Base-Uncased, and RoBERTa-Base checkpoints available from HuggingFace. The tokenizer for each model was automatically determined using the AutoTokenizer() class. We use the output representation of the [CLS] token as input to the classification layer (the default approach when using the HuggingFace Trainer class). Hyperparameters are described in Appendix A.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 92,
"end": 109,
"text": "(Liu et al., 2019",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Detecting Narrative Style",
"sec_num": "4"
},
{
"text": "The results of running each of these models are presented in Table 2 . It is evident that deep learning models are capable of distinguishing narratives from non-narratives in these posts, with BERT showing the best overall performance. Among the classical machine learning methods, the SVM model outperformed the others with an F1 score and accuracy of 0.901. Although our classical methods did not perform poorly, there is a substantial gain in F1 score when using the deep learning approaches. Therefore, for the generalization experiments in the next section, we only consider the best performing model, i.e., the BERT model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Detecting Narrative Style",
"sec_num": "4"
},
{
"text": "A model's ability to generalize to unseen data is key to a successful deployment. Our deep learning 5 models can successfully classify the presence of narratives in social media posts, but it is possible that they overfit to features that are specific to the set of organizations that generated the posts included in our dataset. To evaluate the generalizability of the BERT model to data from unseen organizations, we re-trained the model on data from only four organizations, leaving the fifth one out as test data. We repeat this process for each of the five organizations, so that each organization is used as the held-out test set once, and as part of the training set in all other cases. The results of this experiment are presented in Table 3 . The posts from the organization Breast Cancer Now held out as test data were the easiest to generalize to (F1 score of 0.991) compared to the other combinations. On the other hand, the model slightly under-performed when NBCF Australia was held out as the test set, with an F1 score of 0.900. Nonetheless, in all cases, these results show that there is good potential for models trained on a subset of organizations to generalize well to others.",
"cite_spans": [],
"ref_spans": [
{
"start": 753,
"end": 760,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Generalizing across accounts",
"sec_num": "4.1"
},
{
"text": "We then performed one slightly varied version of the same experiment to further determine model generalizability. Here, we chose a dataset from only one organization as the training set, and used the remaining four datasets as testing data. As before, we repeat this experiment five times, using each organization as training data once, and testing in all other cases. This experiment helps to determine the potential for cross-organization transfer when we have very limited data or data from a single source. Given the very small amount of data for some of the organizations, we found that the size of the training set was too small to learn effective models in some cases. Therefore, we chose to up-sample our training set by 200% (duplicating each training instance), which we found empirically to give better results in the low training data case. From the final result (Table 4) , we observe that the model trained on NBCF Australia performs the best overall, achieving an F1 score that is within a few points of the model trained on data from all organizations from Table 2 . On the other hand, the model trained only on Breast Cancer Now posts had poor generalization performance on the data from the other organizations, suggesting that having data from only a single organization is not always enough to guarantee good generalizability.",
"cite_spans": [],
"ref_spans": [
{
"start": 875,
"end": 884,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 1073,
"end": 1080,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Generalizing across accounts",
"sec_num": "4.1"
},
{
"text": "We have established that deep learning models are very effective at detecting narratives from social media data, substantially outperforming classical machine learning approaches. However, it is not immediately apparent why these models are able to achieve better F1 scores. Therefore, in this section, we use model interpretability tools to further examine which features contributed to the ability of our models to detect narratives. We chose the best performing model in each category, i.e., BERT for the deep learning approaches and SVM for the classical models, and use the explainable AI tool Local Interpretable Model-Agnostic Explanations (LIME; Ribeiro et al. (2016)) to understand the significance of text-based features to each model. In both cases, we use the LIME explainer function (from https://github.com/marcotcr/lime) to learn which features best explain the narrative class and non-narrative class. We chose 5000 samples and 25 features as parameters for the function, based on the suggested default values and our desire to include a reasonable number of features per example. Each instance in the test dataset is examined using LIME, which generates an importance score for each feature (token) in the input based on how much it contributes to predictions for the positive class (narrative) or negative class (non-narrative). For a given feature j in a given text i, a higher positive score W_{ij} denotes greater importance of that feature for the narrative class, and a lower positive score denotes weaker importance of that feature for the same class. 
Likewise, a greater negative value W_{ij} for a feature indicates a stronger association with predictions of the non-narrative class. Several examples of LIME explanations are presented in Figure 1 (orange (blue) shading indicates the token was found to be important for the \"narrative\" (\"non-narrative\") class by LIME, with the color intensity indicating the degree of importance; post 1 was correctly classified by both models, while posts 2 and 3 were correctly classified by BERT but incorrectly classified by the SVM model). We can see that for posts where both models made the correct prediction, the set of important features is approximately the same. However, when BERT made the correct prediction and SVM did not, we notice that BERT places a greater emphasis on first names in the case of narratives, and features like \"fatigue\" and \"common\", which refer to side effects of breast cancer, are correctly identified as important indicators that the post does not contain a narrative.",
"cite_spans": [],
"ref_spans": [
{
"start": 2070,
"end": 2078,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "While these qualitative results are highly useful, LIME only provides the W_{ij} score for a specific text i, yet we sought to quantitatively understand which features were important across the entire test set. Therefore, we use Global Aggregations of Local Explanations (GALE; van der Linden et al. (2019)) to aggregate the LIME scores. For the purposes of aggregation, we set a cut-off of \u03f5 = 0.001 and consider any W_{ij} < \u03f5 as a score of 0. A feature importance score of zero indicates that the feature does not explain much of either the narrative or the non-narrative class while making predictions. GALE suggests several different methods for aggregating scores, but we use the Global Average Importance I^{AVG} as it was found to correlate well with external measures of feature importance for model classification. The Global Average Importance I^{AVG}_j for a given feature j is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "I^{AVG}_j = \\frac{\\sum_{i=1}^{N} |W_{ij}|}{\\sum_{i: W_{ij} \\neq 0} 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "where N is the number of texts in the corpus. Table 5 shows the top and bottom 10 aggregated feature importance scores for both BERT and SVM. Both models put more emphasis on pronouns and first names, as these are more personal to the storyteller or subject of the narrative. Our feature analysis results align with those of Dirkson et al. (2019). (Table 5 caption: Top and bottom ten aggregated feature importance scores for BERT (left side) and SVM (right side) models trained for narrative detection. Larger positive values indicate a greater overall importance for the \"narrative\" class, while more negative values were more important for predicting the \"non-narrative\" class.)",
"cite_spans": [
{
"start": 326,
"end": 347,
"text": "Dirkson et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 5",
"ref_id": null
},
{
"start": 348,
"end": 355,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "In particular, both models rely heavily on personal pronouns. Also, since breast cancer is more common among women, it is more common to see feminine pronouns and first names related to women, with the only exception being the token \"his\", which appears as an important feature for the \"narrative\" class in the SVM model. Upon further inspection, we found that there are instances referring to women as \"his wife\" and \"his mother\", which further validates the model's choice for the token in the positive list. We also note verbs such as \"found\" (connected to \"lump\", which also had a positive score for both models but is not in the top ten for either) and \"is\". Considering the tokens with negative values, indicating that they were more relevant when predicting the \"non-narrative\" class, we found words related to scientific studies, sharing songs, and describing clinical procedures. Hashtags such as \"myreserachstory\" and \"mondaymotivation\" were also present, indicating posts that may have been trying to seek engagement through means other than the use of narrative. While our BERT model was successful in detecting narratives by learning associations between features like pronouns and first names, the SVM model failed to consistently learn these associations, as indicated by the placement of several first names at the non-narrative (negative valued) end of the list. (Table 6 caption: Top and bottom ten features that differed in importance the most between the BERT and SVM models. Scores with a larger value had more overall importance for the BERT model, while features with a smaller value had more importance for the SVM model.)",
"cite_spans": [],
"ref_spans": [
{
"start": 1336,
"end": 1343,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "While these results illustrate which features were important to each model, they do not directly quantify the difference between the BERT and SVM models. To investigate this further, we checked the extent to which the degree of importance I^{AVG}_j for each feature differed between BERT and the SVM model (Table 6 ). For each feature in the list obtained from SVM, we subtract the corresponding aggregated importance score from BERT for that feature. If the result is positive, it indicates that the BERT model puts more emphasis on that feature, whereas if the result is negative, it indicates that SVM gives more importance to that feature than BERT does when predicting the \"narrative\" class. We observe that BERT assigns a higher weight to first names, and that the pronoun \"she\" has a higher importance for BERT compared to SVM, whereas the pronoun \"her\" appears to be given greater importance by the SVM model compared to BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "(Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Narrative Detection Models",
"sec_num": "5"
},
{
"text": "In this paper, we show that deep learning models like BERT, DistilBERT and RoBERTa are effective at detecting narratives in social media data. Previous research focused on the use of classical machine learning models to understand narratives in online health discussion forums, but we demonstrate that deep learning models outperform these when detecting the presence of narratives. We studied the generalizability of the deep learning models across organizations, finding that overall, models are able to generalize well across accounts, suggesting that deep learning models provided with sufficient data can perform well on an unseen dataset with a similar distribution. We also analyze the performance of deep learning models with explainable AI methods, uncovering important features that contribute to narratives in a particular context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "However, there are certain limitations and challenges associated with these models. Although they are quite successful at detecting narratives, the performance of deep learning models depends heavily on the quality of the dataset, and they are highly susceptible to annotator and dataset bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "With the growing amount of health information being shared on social media, understanding narratives becomes extremely important to study public health behavior and estimate health risks. The work described in this paper is a step towards helping researchers automatically annotate narratives in social media posts, thus enabling larger scale studies of the impact of narratives on health conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Code and annotations are publicly available at https: //github.com/ou-nlp/NarativeDetection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We contacted the authors of these papers but they could not share their data due to user privacy restrictions. Therefore, we only use the same approach reported by the authors, rather than applying our proposed deep learning models on the same datasets that were used in those studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://survivornet.ca/connect/ partners 4 https://www.crowdtangle.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with our best performing classical ML model, SVM, in the same way, but the results were not as strong (Appendix B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "For Naive Bayes, we did not tune any hyperparameters. For the SVM classifier, we considered linear, polynomial, and rbf kernels, and found the polynomial kernel to work the best. We set the regularization parameter C = 2. For the Logistic Regression classifier, we tried various values for the regularization parameter C in the range of {0.01, 0.1, 0.2, 1, 2, 10} and found that C = 1 gave the best results. For the deep learning models, we use a batch size of 16 with a weight decay of 0.01 and a learning rate of 2e-5, training for 5 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Model Hyperparameters",
"sec_num": null
},
{
"text": "We performed the same experiments from section 4.1 using an SVM model (the best performing classical model from our experiments in section 4). The results are presented in Tables 7 and 8 . Table 8 : Generalization performance using the best classical ML model (SVM) by training on one account and testing on the remaining four target accounts.",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 186,
"text": "Tables 7 and 8",
"ref_id": null
},
{
"start": 189,
"end": 196,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Generalizability of SVM model",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Can I hear you? sentiment analysis on medical forums",
"authors": [
{
"first": "Tanveer",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Schramm",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Sokolova",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "667--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanveer Ali, David Schramm, Marina Sokolova, and Diana Inkpen. 2013. Can I hear you? sentiment anal- ysis on medical forums. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 667-673, Nagoya, Japan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Understanding social support expressed in a COVID-19 online forum",
"authors": [
{
"first": "Anietie",
"middle": [],
"last": "Andy",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Ramie",
"middle": [],
"last": "Fathy",
"suffix": ""
},
{
"first": "Barrington",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Stokes",
"suffix": ""
},
{
"first": "Sharath Chandra",
"middle": [],
"last": "Guntuku",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anietie Andy, Brian Chu, Ramie Fathy, Barrington Ben- nett, Daniel Stokes, and Sharath Chandra Guntuku. 2021. Understanding social support expressed in a COVID-19 online forum. In Proceedings of the 12th International Workshop on Health Text Mining and Information Analysis, pages 19-27, online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Narrative paths and negotiation of power in birth stories",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Antoniak",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. ACM Hum.-Comput. Interact",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3359190"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Antoniak, David Mimno, and Karen Levy. 2019. Narrative paths and negotiation of power in birth sto- ries. Proc. ACM Hum.-Comput. Interact., 3(CSCW).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "2020. A computational analysis of financial and environmental narratives within financial reports and its value for investors",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Armbrust",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation",
"volume": "",
"issue": "",
"pages": "181--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Armbrust, Henry Sch\u00e4fer, and Roman Klinger. 2020. A computational analysis of financial and environmental narratives within financial reports and its value for investors. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 181-194, Barcelona, Spain (Online). COLING.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Narrative persuasion. The Sage handbook of persuasion: Developments in theory and practice",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "Bilandzic",
"suffix": ""
},
{
"first": "Rick",
"middle": [],
"last": "Busselle",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "2",
"issue": "",
"pages": "200--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena Bilandzic and Rick Busselle. 2013. Narrative persuasion. The Sage handbook of persuasion: De- velopments in theory and practice, 2:200-219.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural language processing with Python: analyzing text with the natural language toolkit",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural language processing with Python: analyzing text with the natural language toolkit. \" O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Thoughts on the therapeutic use of narrative in the promotion of coping in cancer care",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Carlick",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Biley",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Cancer Care",
"volume": "13",
"issue": "4",
"pages": "308--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Carlick and Francis C Biley. 2004. Thoughts on the therapeutic use of narrative in the promotion of coping in cancer care. European Journal of Cancer Care, 13(4):308-317.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Narrative detection in online patient communities",
"authors": [
{
"first": "Suzan",
"middle": [],
"last": "Ar Dirkson",
"suffix": ""
},
{
"first": "Wessel",
"middle": [],
"last": "Verberne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kraaij",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jorge",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Jatowt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bhatia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Text2Story-Second Workshop on Narrative Extraction From Texts co-located with 41th European Conference on Information Retrieval (ECIR 2019)",
"volume": "",
"issue": "",
"pages": "21--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AR Dirkson, Suzan Verberne, Wessel Kraaij, AM Jorge, R Campos, A Jatowt, and S Bhatia. 2019. Narrative detection in online patient communities. In Proceed- ings of Text2Story-Second Workshop on Narrative Extraction From Texts co-located with 41th European Conference on Information Retrieval (ECIR 2019), pages 21-28. CEUR-WS.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SemEval-2015 task 14: Analysis of clinical text",
"authors": [
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Chapman",
"suffix": ""
},
{
"first": "Guergana",
"middle": [],
"last": "Savova",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "303--310",
"other_ids": {
"DOI": [
"10.18653/v1/S15-2051"
]
},
"num": null,
"urls": [],
"raw_text": "No\u00e9mie Elhadad, Sameer Pradhan, Sharon Gorman, Suresh Manandhar, Wendy Chapman, and Guergana Savova. 2015. SemEval-2015 task 14: Analysis of clinical text. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 303-310, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Cancer communication on social media: examining how cancer caregivers use facebook for cancer-related communication",
"authors": [
{
"first": "A",
"middle": [],
"last": "Elizabeth",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Gage-Bouchard",
"suffix": ""
},
{
"first": "Michelle",
"middle": [],
"last": "Lavalley",
"suffix": ""
},
{
"first": "Lynda",
"middle": [
"Kwon"
],
"last": "Mollica",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Beaupin",
"suffix": ""
}
],
"year": 2017,
"venue": "Cancer nursing",
"volume": "40",
"issue": "4",
"pages": "332--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth A Gage-Bouchard, Susan LaValley, Michelle Mollica, and Lynda Kwon Beaupin. 2017. Cancer communication on social media: examining how can- cer caregivers use facebook for cancer-related com- munication. Cancer nursing, 40(4):332-338.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Posting about cancer: Predicting social support in imgur comments",
"authors": [
{
"first": "J",
"middle": [],
"last": "Brent",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Danielle",
"middle": [
"K"
],
"last": "Collins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kilgo",
"suffix": ""
}
],
"year": 2020,
"venue": "Social Media+ Society",
"volume": "6",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent J Hale, Ryan Collins, and Danielle K Kilgo. 2020. Posting about cancer: Predicting social sup- port in imgur comments. Social Media+ Society, 6(4):2056305120965209.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A tale of two cultures: Bringing literary analysis and computational linguistics together",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Hammond",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Computational Linguistics for Literature",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Hammond, Julian Brooke, and Graeme Hirst. 2013. A tale of two cultures: Bringing literary anal- ysis and computational linguistics together. In Pro- ceedings of the Workshop on Computational Linguis- tics for Literature, pages 1-8, Atlanta, Georgia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The influence of narrative risk communication on feelings of cancer risk",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Janssen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liesbeth Van Osch",
"suffix": ""
},
{
"first": "Lilian",
"middle": [],
"last": "Hein De Vries",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lechner",
"suffix": ""
}
],
"year": 2013,
"venue": "British Journal of Health Psychology",
"volume": "18",
"issue": "2",
"pages": "407--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Janssen, Liesbeth van Osch, Hein de Vries, and Lilian Lechner. 2013. The influence of narrative risk communication on feelings of cancer risk. British Journal of Health Psychology, 18(2):407-419.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cancer stage prediction based on patient online discourse",
"authors": [
{
"first": "Mukund",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mukund Jha and No\u00e9mie Elhadad. 2010. Cancer stage prediction based on patient online discourse. In Pro- ceedings of the 2010 Workshop on Biomedical Nat- ural Language Processing, pages 64-71, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building timelines from narrative clinical records: Initial results based-on deep natural language understanding",
"authors": [
{
"first": "Hyuckchul",
"middle": [],
"last": "Jung",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Nate",
"middle": [],
"last": "Blaylock",
"suffix": ""
},
{
"first": "Lucian",
"middle": [],
"last": "William De Beaumont",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Swift",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of BioNLP 2011 Workshop",
"volume": "",
"issue": "",
"pages": "146--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyuckchul Jung, James Allen, Nate Blaylock, William de Beaumont, Lucian Galescu, and Mary Swift. 2011. Building timelines from narrative clinical records: Initial results based-on deep natural language under- standing. In Proceedings of BioNLP 2011 Workshop, pages 146-154, Portland, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Narrative-based medicine: potential, pitfalls, and practice",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Kalitzkus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Peter F Matthiessen",
"suffix": ""
}
],
"year": 2009,
"venue": "The Permanente Journal",
"volume": "13",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Kalitzkus and Peter F Matthiessen. 2009. Narrative-based medicine: potential, pitfalls, and practice. The Permanente Journal, 13(1):80.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Narrative communication in cancer prevention and control: a framework to guide research and application",
"authors": [
{
"first": "W",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Melanie",
"middle": [
"C"
],
"last": "Kreuter",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"N"
],
"last": "Green",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cappella",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Meg",
"middle": [
"E"
],
"last": "Slater",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Wise",
"suffix": ""
},
{
"first": "Eddie",
"middle": [
"M"
],
"last": "Storey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Deborah",
"middle": [
"O"
],
"last": "O'keefe",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Erwin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holmes",
"suffix": ""
}
],
"year": 2007,
"venue": "Annals of behavioral medicine",
"volume": "33",
"issue": "3",
"pages": "221--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew W Kreuter, Melanie C Green, Joseph N Cap- pella, Michael D Slater, Meg E Wise, Doug Storey, Eddie M Clark, Daniel J O'Keefe, Deborah O Erwin, Kathleen Holmes, et al. 2007. Narrative communica- tion in cancer prevention and control: a framework to guide research and application. Annals of behavioral medicine, 33(3):221-235.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Comparing narrative and informational videos to increase mammography in lowincome african american women",
"authors": [
{
"first": "W",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Kreuter",
"suffix": ""
},
{
"first": "Kassandra",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bindu",
"middle": [],
"last": "Alcaraz",
"suffix": ""
},
{
"first": "Suchitra",
"middle": [],
"last": "Kalesan",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Rath",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Richert",
"suffix": ""
},
{
"first": "Nikki",
"middle": [],
"last": "Mcqueen",
"suffix": ""
},
{
"first": "Lou",
"middle": [],
"last": "Caito",
"suffix": ""
},
{
"first": "Eddie",
"middle": [
"M"
],
"last": "Robinson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Patient education and counseling",
"volume": "81",
"issue": "",
"pages": "6--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew W Kreuter, Kathleen Holmes, Kassandra Al- caraz, Bindu Kalesan, Suchitra Rath, Melissa Richert, Amy McQueen, Nikki Caito, Lou Robinson, and Ed- die M Clark. 2010. Comparing narrative and infor- mational videos to increase mammography in low- income african american women. Patient education and counseling, 81:S6-S14.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Content analysis: An introduction to its methodology",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Krippendorff",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klaus Krippendorff. 2004. Content analysis: An intro- duction to its methodology. Sage publications.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The role of narrative pictorial warning labels in communicating alcohol-related cancer risks",
"authors": [
{
"first": "Zexin",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2021,
"venue": "Health Communication",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zexin Ma. 2021. The role of narrative pictorial warning labels in communicating alcohol-related cancer risks. Health Communication, pages 1-9.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Analysis of anxious word usage on online health forums",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Rey-Villamizar",
"suffix": ""
},
{
"first": "Prasha",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "Farig",
"middle": [],
"last": "Sadeque",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6105"
]
},
"num": null,
"urls": [],
"raw_text": "Nicolas Rey-Villamizar, Prasha Shrestha, Farig Sad- eque, Steven Bethard, Ted Pedersen, Arjun Mukher- jee, and Thamar Solorio. 2016. Analysis of anxious word usage on online health forums. In Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, pages 37-42, Aux- tin, TX. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "why should i trust you?\" explaining the predictions of any classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \" why should i trust you?\" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135- 1144.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Analyzing media messages: Using quantitative content analysis in research",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Riff",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Lacy",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Fico",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Riff, Stephen Lacy, and Frederick Fico. 2014. Analyzing media messages: Using quantitative con- tent analysis in research. Routledge.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Annotating patient information needs in online diabetes forums",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Romberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Dyczmons",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"Olivia"
],
"last": "Borgmann",
"suffix": ""
},
{
"first": "Jana",
"middle": [],
"last": "Sommer",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Vomhof",
"suffix": ""
},
{
"first": "Cecilia",
"middle": [],
"last": "Brunoni",
"suffix": ""
},
{
"first": "Ismael",
"middle": [],
"last": "Bruck-Ramisch",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Enders",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Romberg, Jan Dyczmons, Sandra Olivia Borgmann, Jana Sommer, Markus Vomhof, Cecilia Brunoni, Ismael Bruck-Ramisch, Luis Enders, An- drea Icks, and Stefan Conrad. 2020. Annotating pa- tient information needs in online diabetes forums. In Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task, pages 19-26, Barcelona, Spain (Online). Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.01108"
]
},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Exploiting narrative context and a priori knowledge of categories in textual emotion classification",
"authors": [
{
"first": "Hikari",
"middle": [],
"last": "Tanabe",
"suffix": ""
},
{
"first": "Tetsuji",
"middle": [],
"last": "Ogawa",
"suffix": ""
},
{
"first": "Tetsunori",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5535--5540",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.483"
]
},
"num": null,
"urls": [],
"raw_text": "Hikari Tanabe, Tetsuji Ogawa, Tetsunori Kobayashi, and Yoshihiko Hayashi. 2020. Exploiting narrative con- text and a priori knowledge of categories in textual emotion classification. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 5535-5540, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Global aggregations of local explanations for black box models",
"authors": [
{
"first": "Ilse",
"middle": [],
"last": "Van Der Linden",
"suffix": ""
},
{
"first": "Hinda",
"middle": [],
"last": "Haned",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Kanoulas",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilse van der Linden, Hinda Haned, and Evange- los Kanoulas. 2019. Global aggregations of lo- cal explanations for black box models. CoRR, abs/1907.03039.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Analyzing empowerment processes among cancer patients in an online community: A text mining approach",
"authors": [
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
},
{
"first": "Anika",
"middle": [],
"last": "Batenburg",
"suffix": ""
},
{
"first": "Remco",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "Mies",
"middle": [],
"last": "van Eenbergen",
"suffix": ""
},
{
"first": "Enny",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Mattijs",
"middle": [
"S"
],
"last": "Lambooij",
"suffix": ""
}
],
"year": 2019,
"venue": "JMIR cancer",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suzan Verberne, Anika Batenburg, Remco Sanders, Mies van Eenbergen, Enny Das, Mattijs S Lambooij, et al. 2019. Analyzing empowerment processes among cancer patients in an online community: A text mining approach. JMIR cancer, 5(1):e9887.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Modeling human motives and emotions from personal narratives using external knowledge and entity tracking",
"authors": [
{
"first": "Prashanth",
"middle": [],
"last": "Vijayaraghavan",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Web Conference 2021",
"volume": "",
"issue": "",
"pages": "529--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prashanth Vijayaraghavan and Deb Roy. 2021. Mod- eling human motives and emotions from personal narratives using external knowledge and entity track- ing. In Proceedings of the Web Conference 2021, pages 529-540.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Using functional schemas to understand social media narratives",
"authors": [
{
"first": "Xinru",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Yohan",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "22--33",
"other_ids": {
"DOI": [
"10.18653/v1/W19-3403"
]
},
"num": null,
"urls": [],
"raw_text": "Xinru Yan, Aakanksha Naik, Yohan Jo, and Carolyn Rose. 2019. Using functional schemas to understand social media narratives. In Proceedings of the Second Workshop on Storytelling, pages 22-33, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Huiming Jin, Hariharan Muralidharan, and Carolyn Ros\u00e9. 2021. FanfictionNLP: A text processing pipeline for fanfiction",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Yoder",
"suffix": ""
},
{
"first": "Sopan",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Huiming",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Hariharan",
"middle": [],
"last": "Muralidharan",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Ros\u00e9",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Third",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.nuse-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Yoder, Sopan Khosla, Qinlan Shen, Aakanksha Naik, Huiming Jin, Hariharan Muralidharan, and Car- olyn Ros\u00e9. 2021. FanfictionNLP: A text processing pipeline for fanfiction. In Proceedings of the Third",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "(a) Post 1: BERT predicts \"narrative\" (correct).(b) Post 1: SVM predicts \"narrative\" (correct).(c) Post 2: BERT predicts \"narrative\" (correct).(d) Post 2: SVM predicts \"non-narrative\" (incorrect).(e) Post 3: BERT predicts \"non-narrative\" (correct). (f) Post 3: SVM predicts \"narrative\" (incorrect).",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Feature importance visualization for three posts, one per row, that were classified by our top-performing deep learning model (BERT) and classical machine learning model (SVM)",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Annotated data set statistics.",
"content": "<table><tr><td>work partners page 3 . We selected five organiza-</td></tr><tr><td>tions with the most Facebook followers and span-</td></tr><tr><td>ning several different countries, including Susan G.</td></tr><tr><td>Komen For the Cure, National Breast Cancer Foun-</td></tr><tr><td>dation USA, the UK-based Breast Cancer Now, A</td></tr><tr><td>Future Without Breast Cancer (Canadian Cancer</td></tr><tr><td>Society), and the National Breast Cancer Founda-</td></tr><tr><td>tion Australia. Their Facebook posts and engage-</td></tr><tr><td>ment metrics from 2016 to 2021 were downloaded using CrowdTangle 4</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Narrative class F1, Precision, and Recall scores of the text classification models on the narrative detection task, separated into groups of classical ML and deep learning methods. The score of the performing model(s) for each metric is listed in bold. SVM-trigram is the best performing model from(Dirkson et al., 2019). Baseline-narrative is the score achieved by labeling all texts as narrative.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Generalization performance using the best classifier (BERT) by training on all accounts except for the target account, and testing on the target account.",
"content": "<table><tr><td>Train Susan G. Komen 0.917 0.852 F1 Prec Recall 0.993 Breast Cancer Now 0.777 0.979 0.645 NBCF Australia 0.953 0.961 0.945 NBCF USA 0.877 0.791 0.985 AFWBC Canada 0.914 0.976 0.859</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "",
"content": "<table><tr><td>: Generalization performance using the best clas-sifier (BERT) by training on one account and testing on the remaining four target accounts.</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "who noted that narratives in health forums are characterized by health related words and first",
"content": "<table><tr><td>BERT significant word celeste she latasha beautiful mother her barbe hall found is s don' round myresearchstory -0.05 mel score word 0.29 her 0.28 taylor 0.24 my 0.17 she 0.16 app 0.15 peace 0.14 becca 0.11 tip 0.09 rest 0.09 his -0.04 face -0.04 study -0.04 run -0.05 mammogram SVM awareness -0.05 addy free -0.06 steph it -0.06 listen \" -0.06 mondaymotivation -0.17 score 0.22 0.20 0.19 0.18 0.15 0.14 0.13 0.13 0.12 0.11 -0.10 -0.11 -0.11 -0.11 -0.12 -0.15 -0.15 -0.16 increase -0.08 song -0.19</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}