|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:30:53.365733Z" |
|
}, |
|
"title": "Emotionally-Informed Models for Detecting Moments of Change and Suicide Risk Levels in Longitudinal Social Media Data", |
|
"authors": [ |
|
{ |
|
"first": "Ulya", |
|
"middle": [], |
|
"last": "Bayram", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "\u00c7anakkale Onsekiz Mart University \u00c7anakkale", |
|
"location": { |
|
"country": "Turkey" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lamia", |
|
"middle": [], |
|
"last": "Benhiba", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "V University in Rabat Rabat", |
|
"location": { |
|
"region": "Mohammed", |
|
"country": "Morocco" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this shared task, we focus on detecting mental health signals in Reddit users' posts through two main challenges: A) capturing mood changes (anomalies) from the longitudinal set of posts (called timelines), and B) assessing the users' suicide risk-levels. Our approaches leverage emotion recognition on linguistic content by computing emotion/sentiment scores using pre-trained BERTs on users' posts and feeding them to machine learning models, including XGBoost, Bi-LSTM, and logistic regression. For Task-A, we detect longitudinal anomalies using a sequence-to-sequence (seq2seq) autoencoder and capture regions of mood deviations. For Task-B, our two models utilize the BERT emotion/sentiment scores. The first computes emotion bandwidths and merges them with n-gram features, and employs logistic regression to detect users' suicide risk levels. The second model predicts suicide risk on the timeline level using a Bi-LSTM on Task-A results and sentiment scores. Our results outperformed most participating teams and ranked in the top three in Task-A. In Task-B, our methods surpass all others and return the best macro and micro F1 scores.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this shared task, we focus on detecting mental health signals in Reddit users' posts through two main challenges: A) capturing mood changes (anomalies) from the longitudinal set of posts (called timelines), and B) assessing the users' suicide risk-levels. Our approaches leverage emotion recognition on linguistic content by computing emotion/sentiment scores using pre-trained BERTs on users' posts and feeding them to machine learning models, including XGBoost, Bi-LSTM, and logistic regression. For Task-A, we detect longitudinal anomalies using a sequence-to-sequence (seq2seq) autoencoder and capture regions of mood deviations. For Task-B, our two models utilize the BERT emotion/sentiment scores. The first computes emotion bandwidths and merges them with n-gram features, and employs logistic regression to detect users' suicide risk levels. The second model predicts suicide risk on the timeline level using a Bi-LSTM on Task-A results and sentiment scores. Our results outperformed most participating teams and ranked in the top three in Task-A. In Task-B, our methods surpass all others and return the best macro and micro F1 scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Tracking and identifying moments of change in a user's social media longitudinal data could be a possible identifier of their mental health deterioration and be especially useful for those with suicidal ideation (Tsakalidis et al., 2022b) . In this 2022 CLPsych shared task, the goal is to tackle two challenges. Task-A aims to identify mood shifts and gradual mood progressions from users' timelines, where each timeline has a list of longitudinal posts from a close time range. Meantime, Task-B aims to detect suicide risk levels of the users. We were allowed to provide three submissions for Task-A and two for Task-B. The second Task-B submission was expected to use the results from Task-A.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 238, |
|
"text": "(Tsakalidis et al., 2022b)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The dataset of this shared task is a mixture of three separate datasets: UMD from 2019 CLPsych (Shing et al., 2018; Zirikly et al., 2019) , E-Risk with some additional data (Losada and Crestani, 2016; Losada et al., 2020) , and a new collection called Reddit-New (Tsakalidis et al., 2022a) . The dataset has 255 timelines: 204 in training/51 in the unlabeled test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 115, |
|
"text": "(Shing et al., 2018;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 116, |
|
"end": 137, |
|
"text": "Zirikly et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 200, |
|
"text": "(Losada and Crestani, 2016;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 221, |
|
"text": "Losada et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 289, |
|
"text": "(Tsakalidis et al., 2022a)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our team (called WResearch for \"Women in Research\") decided to use emotionally-informed features for their ability to capture mood changes. In Task-A, we combine a seq2seq autoencoder and machine learning (ML) models to capture moments of change in a user's timeline. Meanwhile, in Task-B, we were partially influenced by the 2021 CLPsych results, which showed that merging longterm posts of a user could capture long-term suicidal ideation (Bayram and Benhiba, 2021; Macavaney et al., 2021) . We used the post-level features extracted in Task-A to compute user-level emotionbandwidth features and concatenated them with statistical n-gram features to detect suicidal risk levels. Additionally, we experimented with a timelinelevel prediction model using Bi-LSTM. The success of our results compared to the other teams and the baselines suggest that our emotionallyinformed models are advantageous for dealing with the tasks at hand.", |
|
"cite_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 467, |
|
"text": "(Bayram and Benhiba, 2021;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 491, |
|
"text": "Macavaney et al., 2021)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The training set in this challenge includes data on users with three suicide risk levels (Severe/Moderate/Low). A user can have multiple timelines, where a timeline is a chronologically ordered sequence of posts. Each post is labeled as IS for switches in mood (sudden mood shifts from positive to negative, or vice versa), IE for mood escalations (gradual mood changes from neutral or positive to a higher positive, or neutral, or negative to a higher negative), or O to represent the baseline (neutral) mood (Tsakalidis et al., 2022b) . In the implementations, for machine learning models, Scikit-learn (version 1.0.2) (Pedregosa et al., 2011) , for deep learning models, PyTorch (version 1.11.0+cu102) and Keras (version 2.7.0) libraries (Paszke et al., 2019) are used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 510, |
|
"end": 536, |
|
"text": "(Tsakalidis et al., 2022b)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 645, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 741, |
|
"end": 762, |
|
"text": "(Paszke et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Feature Extraction: The main set of features used in Task-A is obtained from three pre-trained BERT models. The first model is Bertweetbase-sentiment, trained with SemEval 2017 corpus (around 40k tweets) using a RoBERTa (P\u00e9rez et al., 2021) . It returns three sentiments {P ositive, N egative, N eutral} per text. The second model is EmoRoBERTa, trained with 58,000 Reddit comments and returns 28 emotion scores per text (Ghoshal, 2021) . The third model is Twitter-roberta-base-emotion (CardiffNLP, 2021), trained on58M tweets and fine-tuned for emotion recognition with the TweetEval benchmark ( Barbieri et al., 2020) . As shown in Figure 1 , we concatenate the sentiment and emotion scores into an emotionally-informed feature vector of length 35 for each post in the data collection. Mood Anomaly detection: Before feeding the emotionally-informed features to classifiers, we compute a feature vector that reflects abnormalities in the user-expressed mood based on past behavior. To compute the abnormality vector, we use a seq2seq learning model for multivariate time-series forecasting (Provotar et al., 2019) . We generate a series of (t-n) feature vectors for each post at time t, where n is the length of the look-back time window. This input is fed to the autoencoder. We aim to predict the emotionally-informed feature vector of the next step, i.e., the feature vector of the post at t+1. The error margin is thereafter calculated based on the outputs of the autoencoder and the actual emotionally-informed feature vectors. We follow the same methodology as Tran et al. (Tran et al., 2019) to compute the irregularities vector and use it as a proxy for identifying mood anomalies. Upon experimentation, we found that, while the abnormality vector helps detect escalations, it did not succeed for switches. We thus concatenated the emotionally-informed features, window-based abnormality vectors, and a feature vector denoting the emotional difference between a post and the previous one. We implement the seq2seq learning model in Keras with two LSTMs with 100 neurons and a final dense layer with 35 neurons. We use a Learning Rate Scheduler that decreases the learning rate (lr) with a factor of 1e-3 * 0.90 ** lr when the learning stagnates. We train using the Adam optimizer and Huber loss function with a batch size of 16 and early stopping (patience=3). Classification: We pass the output of the previous step as an input to ML classifiers to predict the label of a post (O, IE, IS). We experiment with three models: a Logistic Regression (LR) [class_weight=\"balanced\", multi_class=\"multinomial\", solver=\"saga\"], XG-Boost, and a stacked Ensemble of four classifiers: LR, Random Forest, XGBoost, and Extremely Randomized Trees. Being mindful of the data imbalance, we choose to assign a higher class weight to the minority classes (IE, IS) while reducing the weight of the majority class (O). We apply stratified 10-folds cross-validation and grid-search on the tree-based models (n_estimators=[400, 700, 1000], colsam-ple_bytree=[0.7,0.8], max_depth= [15, 20, 25] , sub-sample=[0.7,0.8,0.9]) to optimize the hyperparameters and avoid overfitting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 240, |
|
"text": "(P\u00e9rez et al., 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 436, |
|
"text": "(Ghoshal, 2021)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 620, |
|
"text": "Barbieri et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1093, |
|
"end": 1116, |
|
"text": "(Provotar et al., 2019)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1570, |
|
"end": 1601, |
|
"text": "Tran et al. (Tran et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 635, |
|
"end": 643, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task A", |
|
"sec_num": "2.1" |
|
}, |
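{

"text": "To illustrate the feature extraction step, the following is a minimal sketch using the Hugging Face transformers pipeline API. The hub checkpoint identifiers are the publicly released versions of the three models named above and should be treated as assumptions rather than a record of our exact setup: \n\nimport numpy as np\nfrom transformers import pipeline\n\n# assumed hub ids for the three pre-trained models described above\nsentiment = pipeline(\"text-classification\", model=\"finiteautomata/bertweet-base-sentiment-analysis\", top_k=None)\nemotions28 = pipeline(\"text-classification\", model=\"arpanghoshal/EmoRoBERTa\", top_k=None)\nemotions4 = pipeline(\"text-classification\", model=\"cardiffnlp/twitter-roberta-base-emotion\", top_k=None)\n\ndef post_features(text):\n    # concatenate the 3 sentiment, 28 emotion, and 4 emotion scores into one\n    # emotionally-informed vector of length 35, in a fixed label order\n    scores = []\n    for clf in (sentiment, emotions28, emotions4):\n        out = sorted(clf(text)[0], key=lambda d: d[\"label\"])\n        scores.extend(d[\"score\"] for d in out)\n    return np.array(scores)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task A",

"sec_num": "2.1"

},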
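{

"text": "The seq2seq forecaster itself can be summarized with the following minimal Keras sketch (two 100-neuron LSTMs, a 35-neuron dense output, Adam optimizer, Huber loss, batch size 16, and early stopping with patience=3, as described above). The look-back length n=5 and the reading of the decay rule as an epoch-indexed schedule are illustrative assumptions: \n\nimport numpy as np\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nLOOKBACK, DIM = 5, 35  # assumed window length; 35 emotion/sentiment features per post\n\nmodel = keras.Sequential([\n    layers.LSTM(100, return_sequences=True, input_shape=(LOOKBACK, DIM)),\n    layers.LSTM(100),\n    layers.Dense(DIM),  # forecast of the next post's feature vector\n])\nmodel.compile(optimizer=\"adam\", loss=keras.losses.Huber())\n\ndef schedule(epoch, lr):\n    # one possible reading of the decay rule 1e-3 * 0.90 ** lr quoted above\n    return 1e-3 * 0.90 ** epoch\n\ncallbacks = [\n    keras.callbacks.LearningRateScheduler(schedule),\n    keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),\n]\n\n# X: (num_windows, LOOKBACK, DIM) windows of past posts; y: (num_windows, DIM) next post\n# model.fit(X, y, batch_size=16, callbacks=callbacks)\n# abnormality vector per post: np.abs(model.predict(X) - y)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task A",

"sec_num": "2.1"

},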
|
{ |
|
"text": "In this task, we eliminate all users with suicide risk label N/A from the labeled set, thus work on a three-class classification problem: Low, Moderate, Severe suicide risk detection. Feature Extraction: For the first submission, we use two types of features. The first feature, n-grams, is selected due to their success in previous suicide risk detection research (Bayram and Benhiba, 2021; Pestian et al., 2020) . Our n-gram features consist of unigrams and bigrams (n \u2208 {1, 2}). To extract them, we perform lowercase conversion and punctuation removal, then use a spaCy library (en_core_web_lg) (Honnibal and Montani, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 391, |
|
"text": "(Bayram and Benhiba, 2021;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 413, |
|
"text": "Pestian et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 626, |
|
"text": "(Honnibal and Montani, 2017)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task B", |
|
"sec_num": "2.2" |
|
}, |
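{

"text": "A minimal sketch of this n-gram pipeline, assuming scikit-learn's CountVectorizer for the counting step (the weighting scheme of the statistical n-gram features is an assumption here): \n\nimport spacy\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nnlp = spacy.load(\"en_core_web_lg\")\n\ndef preprocess(text):\n    # lowercase conversion and punctuation removal with spaCy\n    return \" \".join(tok.lower_ for tok in nlp(text) if not tok.is_punct)\n\n# unigrams and bigrams, i.e., n in {1, 2}\nvectorizer = CountVectorizer(ngram_range=(1, 2))\n# merged_user_posts is a placeholder: one concatenated string per user\n# X = vectorizer.fit_transform(preprocess(doc) for doc in merged_user_posts)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task B",

"sec_num": "2.2"

},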
|
{ |
|
"text": "As the goal is to obtain user-level suicide risk, we perform the detection on the merged posts per user. However, the leave-one-out cross-validation experiments returned low results on the labeled set, so we decided to use/merge only the posts with \"IE\" or \"IS\" labels in training since they contain strong emotions that might be associated with suicidal ideation. In the test set, we merge all posts per person (since they lack IE and IS labels) and obtain the user's suicide risk-level prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task B", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The training set provides 5,808 n-gram features. Next, we train an LR to collect feature importance scores for performing feature elimination. Upon applying a leave-one-out cross-validation on the labeled set, also using LR, we exploit classification performance scores from the top features to find the optimal feature subset. Figure 2 shows a peak at top 900 n-gram features, corresponding to 300 top features per class. We save these features and use them as the final features on the test set. We also experiment with adding the emotionallyinformed features per post from Task A. Per user, we compute the minimum and the maximum of the emotion/sentiment scores from the emotionallyinformed features of all posts and calculate their absolute difference. Thus, in the new feature vector, each element reflects the range (bandwidth) of emotions/sentiments of that user. We hypothesize that these bandwidths of emotions/sentiments could help identify suicide risk. Next, we concatenate the n-gram feature vector and the obtained emotion bandwidth vector per user for classification. Classification: In the first submission of Task-B, we use simple methods that do not require a lot of training data and that can perform multiclass classification: LR (lbfgs, sag, saga, newton-cg solvers), non-linear support vector machines (SVM) (rbf, poly, and sigmoid ker-nels), random forest (RF), and XGBoost. We obtain leave-one-out results on the training set, where LR with lbfgs solver (weighted F1=0.718) and SVM with the sigmoid kernel (weighted F1=0.710) achieve the best performance, possibly due to their success in handling small datasets (RF's weighted F1=0.433, XGBoost's weighted F1=0.278). Thus, we select LR as the ML model to be used with ngrams+emotional bandwidth features (class_weight=\"balanced\", multi_class = \"multinomial\", solver=\"lbfgs\", random_state=7, remaining parameters are kept at default values (Pedregosa et al., 2011)). Timeline-level risk prediction: The second submission for Task-B leverages Task-A's mood change predictions and the emotionally-informed features to predict a user's suicide risk level. Since timelines (longitudinal posts) were obtained around a user's mood change-points during data collection (Tsakalidis et al., 2022b) , we predict the suicide risk on the timeline level. As was the case in the first model, we only include posts with IS or IE labels in our training set while also including O labels in the validation and test data. We use a Bi-LSTM to classify the suicide risk in the timeline by exploiting past and future emotional contexts of posts. To aggregate predictions on the user level, we experiment with computing average, majority voting, and argmax on the timeline-level results and select argmax due to its accuracy. The Bi-LSTM model is a gated recurrent unit (GRU) wrapped in a Bi-LSTM, followed by a dropout layer and two dense layers (Dropout_rate=0.1, Dense layer 1: 50-neurons with Relu, Dense layer 2: 3-neurons with softmax, batch_size=16, Rmsprop optimizer, categorical cross-entropy loss, and early-stopping with patience=3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 2236, |
|
"end": 2262, |
|
"text": "(Tsakalidis et al., 2022b)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 328, |
|
"end": 336, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task B", |
|
"sec_num": "2.2" |
|
}, |
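{

"text": "The emotion-bandwidth computation reduces to a per-feature range over a user's posts. A minimal sketch, assuming the 35 emotion/sentiment scores per post from Task-A are stacked into one array per user: \n\nimport numpy as np\n\ndef emotion_bandwidth(post_scores):\n    # post_scores: array of shape (num_posts, 35), one row per post\n    scores = np.asarray(post_scores)\n    # absolute difference between the per-feature maximum and minimum\n    return np.abs(scores.max(axis=0) - scores.min(axis=0))\n\n# ngram_vector is a placeholder for the user's selected n-gram features\n# user_features = np.concatenate([ngram_vector, emotion_bandwidth(post_scores)])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task B",

"sec_num": "2.2"

},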
|
{ |
|
"text": "In Tables 1, 2 , and 3, we present the test set results of Task-A obtained from three different evaluation techniques. Each table summarizes the results obtained on the three submissions: seq2seq + one of the selected classifiers (i.e., 1=LR, 2=XGBoost, and 3=the Ensemble method). Table 1 shows results at the post-level, while Table 2 and 3 report results on a timeline basis using the coverage metric and the window-based evaluation metric with window size = 3 (more details on the evaluation methods can be found in (Tsakalidis et al., 2022b) ). Table 4 shows results for Task-B where the first model (1) is the n-grams + emotion bandwidth features with LR classifier, and the second (2) is the Bi-LSTM model. The shared task provided two baselines from the mood change study (Tsakalidis et al., 2022b) . The first baseline (B1 in the tables) uses tf-idf features with LR. The second baseline (B2) uses BERT trained with Talklife website posts, treats each post as an instance (i.e., completely ignoring the timeline sequence), and is trained using the alpha-weighted focal loss. We also include the best (Max) and worst (Min) values for each metric obtained by competing submissions to allow better readability of the results. We add an asterisk (*) next to the results when the best performance is achieved by our models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 520, |
|
"end": 546, |
|
"text": "(Tsakalidis et al., 2022b)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 806, |
|
"text": "(Tsakalidis et al., 2022b)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 14, |
|
"text": "Tables 1, 2", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 289, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 342, |
|
"text": "Table 2 and 3", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 557, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In comparison to the submissions of other teams that participated in this shared task (Tsakalidis et al., 2022a) , our models achieved the top three Task-A: In the post-level, the seq2seq + XGBoost achieves robust performance by balancing between precision and recall. It outperforms the baseline methods on all macro-average evaluation metrics and achieves second best F1 scores in all categories (e.g., IE, IS, O, average). At the timeline level, the coverage metric demonstrates the ability of a model to capture regions of change. In this respect, the seq2seq + XGBoost strikes a balance between precision and recall again, and performs second best on the macro-average F1. In the window-based evaluation the seq2seq + LR achieves the third highest F1 performance overall and renders the best macro-average recall. The ensemble method achieves the best precision on the IS class but tends to over-predict, as demonstrated by its low coverage recall. Experimenting with various look-back time windows can provide more insight on the rationale behind the results. Task-B: In Task-B, we wanted to contrast the user suicide risk prediction performance when obtained at the user level in the n-grams+emotion band-width+LR model and at the timeline level using the Bi-LSTM model. The latter leverages Task A's moments-of-change results to help predict the user's suicide risk level. The n-grams+emotion bandwidth+LR model returns the best F1 scores in CLPsych'22 based on micro and macro average metrics in Table 4 , showing the viability of our approach. This outcome is also a good inspiration for future suicide risk detection studies in which mood change labels are available or obtainable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 112, |
|
"text": "(Tsakalidis et al., 2022a)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1505, |
|
"end": 1513, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The Bi-LSTM model was built on the premise that emotional context from past and future posts, including the moments of change, would allow better inference of the timeline's suicide risk level. While the model is slightly better than the baseline, we suppose that it might have rendered better results had it been trained on timeline-level rather than user-level labels. In an attempt to err on the side of safety, we chose argmax for aggregation. However, it biased the model in favor of moderate and severe risk levels. Other aggregation methods will be explored in the future to help address the prediction of low-level suicide risk.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this shared task, we tackled two problems: capturing mood changes from timelines of posts of Reddit users and detecting their suicide risk levels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The results reveal that our methods performed the highest macro and micro F1 scores in suicide risklevel detection and performed in the top three in mood-change detection. Our models can inspire future research for accurately detecting abrupt mood changes among social media users. These models also might shed light on users' suicide risk levels, thus enabling early mental-health intervention to prevent suicidal events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Secure access to the shared task dataset was provided with IRB approval under University of Maryland, College Park protocol 1642625 and approval by the Biomedical and Scientific Research Ethics Committee (BSREC) at the University of Warwick (ethical application reference BSREC 40/19-20). Before being granted access, we signed a Non-Disclosure Agreement (NDA) and a Data Enclave Use Agreement (DUA).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ethical Statement", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors are particularly grateful to the anonymous users of Reddit whose data feature in this year's shared task dataset, to the annotators of the data for Task A, to the clinical experts from Bar-Ilan University who annotated the data for Task B, the American Association of Suicidology, to NORC for creating and administering the secure infrastructure and providing researcher support and to UKRI for providing funding to the CLPsych 2022 shared task organisers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tweeteval: Unified benchmark and comparative evaluation for tweet classification", |
|
"authors": [ |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Barbieri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [], |
|
"last": "Camacho-Collados", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonardo", |
|
"middle": [], |
|
"last": "Neves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Espinosa-Anke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.12421" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Francesco Barbieri, Jose Camacho-Collados, Leonardo Neves, and Luis Espinosa-Anke. 2020. Tweet- eval: Unified benchmark and comparative eval- uation for tweet classification. arXiv preprint arXiv:2010.12421.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Determining a person's suicide risk by voting on the short-term history of tweets for the clpsych 2021 shared task", |
|
"authors": [ |
|
{ |
|
"first": "Ulya", |
|
"middle": [], |
|
"last": "Bayram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lamia", |
|
"middle": [], |
|
"last": "Benhiba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ulya Bayram and Lamia Benhiba. 2021. Determining a person's suicide risk by voting on the short-term history of tweets for the clpsych 2021 shared task. In Proceedings of the Seventh Workshop on Computa- tional Linguistics and Clinical Psychology: Improv- ing Access, pages 81-86.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Twitter-roBERTa-base for emotion recognition", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cardiffnlp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "CardiffNLP. 2021. Twitter-roBERTa-base for emotion recognition.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Honnibal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Montani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremental parsing. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A test collection for research on depression and language use", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Losada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Crestani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction -7th International Conference of the CLEF Association", |
|
"volume": "9822", |
|
"issue": "", |
|
"pages": "28--39", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-319-44564-9_3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David E. Losada and Fabio Crestani. 2016. A test collec- tion for research on depression and language use. In Experimental IR Meets Multilinguality, Multimodal- ity, and Interaction -7th International Conference of the CLEF Association, CLEF 2016, \u00c9vora, Portu- gal, September 5-8, 2016, Proceedings, volume 9822 of Lecture Notes in Computer Science, pages 28-39. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Overview of erisk 2020: Early risk prediction on the internet", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Losada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Crestani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Parapar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction -11th International Conference of the CLEF Association", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "272--287", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-030-58219-7_20" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David E. Losada, Fabio Crestani, and Javier Parapar. 2020. Overview of erisk 2020: Early risk predic- tion on the internet. In Experimental IR Meets Mul- tilinguality, Multimodality, and Interaction -11th International Conference of the CLEF Association, CLEF 2020, Thessaloniki, Greece, September 22-25, 2020, Proceedings, volume 12260 of Lecture Notes in Computer Science, pages 272-287. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Community-level research on suicidality prediction in a secure environment: Overview of the clpsych 2021 shared task", |
|
"authors": [ |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Macavaney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anjali", |
|
"middle": [], |
|
"last": "Mittu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Glen", |
|
"middle": [], |
|
"last": "Coppersmith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Leintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean Macavaney, Anjali Mittu, Glen Coppersmith, Jeff Leintz, and Philip Resnik. 2021. Community-level research on suicidality prediction in a secure environ- ment: Overview of the clpsych 2021 shared task. In Proceedings of the Seventh Workshop on Computa- tional Linguistics and Clinical Psychology: Improv- ing Access, pages 70-80.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Pytorch: An imperative style, high-performance deep learning library", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Massa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Bradbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Killeen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Gimelshein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Kopf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Raison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alykhan", |
|
"middle": [], |
|
"last": "Tejani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sasank", |
|
"middle": [], |
|
"last": "Chilamkurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benoit", |
|
"middle": [], |
|
"last": "Steiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "8024--8035", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Process- ing Systems 32, pages 8024-8035. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A machine learning approach to identifying changes in suicidal language", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pestian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Santel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Sorter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulya", |
|
"middle": [], |
|
"last": "Bayram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Connolly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tracy", |
|
"middle": [], |
|
"last": "Glauser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melissa", |
|
"middle": [], |
|
"last": "Del-Bello", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suzanne", |
|
"middle": [], |
|
"last": "Tamang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Suicide and Life-Threatening Behavior", |
|
"volume": "50", |
|
"issue": "5", |
|
"pages": "939--947", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Pestian, Daniel Santel, Michael Sorter, Ulya Bayram, Brian Connolly, Tracy Glauser, Melissa Del- Bello, Suzanne Tamang, and Kevin Cohen. 2020. A machine learning approach to identifying changes in suicidal language. Suicide and Life-Threatening Behavior, 50(5):939-947.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised anomaly detection in time series using lstm-based autoencoders", |
|
"authors": [ |
|
{

"first": "Oleksandr",

"middle": ["I"],

"last": "Provotar",

"suffix": ""

},

{

"first": "Yaroslav",

"middle": ["M"],

"last": "Linder",

"suffix": ""

},

{

"first": "Maksym",

"middle": ["M"],

"last": "Veres",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "2019 IEEE International Conference on Advanced Trends in Information Theory (ATIT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "513--517", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oleksandr I Provotar, Yaroslav M Linder, and Maksym M Veres. 2019. Unsupervised anomaly de- tection in time series using lstm-based autoencoders. In 2019 IEEE International Conference on Advanced Trends in Information Theory (ATIT), pages 513-517. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "2021. pysentimiento: A python toolkit for sentiment analysis and socialnlp tasks", |
|
"authors": [ |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Manuel P\u00e9rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [ |
|
"Carlos" |
|
], |
|
"last": "Giudici", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franco", |
|
"middle": [], |
|
"last": "Luque", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juan Manuel P\u00e9rez, Juan Carlos Giudici, and Franco Luque. 2021. pysentimiento: A python toolkit for sentiment analysis and socialnlp tasks.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Expert, crowdsourced, and machine assessment of suicide risk via online postings", |
|
"authors": [ |
|
{ |
|
"first": "Han-Chin", |
|
"middle": [], |
|
"last": "Shing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suraj", |
|
"middle": [], |
|
"last": "Nair", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ayah", |
|
"middle": [], |
|
"last": "Zirikly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meir", |
|
"middle": [], |
|
"last": "Friedenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iii", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the fifth workshop on computational linguistics and clinical psychology: from keyboard to clinic", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Frieden- berg, Hal Daum\u00e9 III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assessment of suicide risk via online postings. In Proceedings of the fifth workshop on computational linguistics and clinical psychology: from keyboard to clinic, pages 25-36.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Anomaly detection using long short term memory networks and its applications in supply chain management. IFAC-PapersOnLine", |
|
"authors": [ |
|
{ |
|
"first": "Huu", |
|
"middle": [], |
|
"last": "Kim Phuc Tran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Du Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Thomassey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "52", |
|
"issue": "", |
|
"pages": "2408--2412", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kim Phuc Tran, Huu Du Nguyen, and S\u00e9bastien Thomassey. 2019. Anomaly detection using long short term memory networks and its applications in supply chain management. IFAC-PapersOnLine, 52(13):2408-2412.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Overview of the CLPsych 2022 shared task: Capturing moments of change in longitudinal user posts", |
|
"authors": [ |
|
{

"first": "Adam",

"middle": [],

"last": "Tsakalidis",

"suffix": ""

},

{

"first": "Jenny",

"middle": [],

"last": "Chim",

"suffix": ""

},

{

"first": "Iman",

"middle": ["Munire"],

"last": "Bilal",

"suffix": ""

},

{

"first": "Ayah",

"middle": [],

"last": "Zirikly",

"suffix": ""

},

{

"first": "Dana",

"middle": [],

"last": "Atzil-Slonim",

"suffix": ""

},

{

"first": "Federico",

"middle": [],

"last": "Nanni",

"suffix": ""

},

{

"first": "Philip",

"middle": [],

"last": "Resnik",

"suffix": ""

},

{

"first": "Manas",

"middle": [],

"last": "Gaur",

"suffix": ""

},

{

"first": "Kaushik",

"middle": [],

"last": "Roy",

"suffix": ""

},

{

"first": "Becky",

"middle": [],

"last": "Inkster",

"suffix": ""

},

{

"first": "Jeff",

"middle": [],

"last": "Leintz",

"suffix": ""

},

{

"first": "Maria",

"middle": [],

"last": "Liakata",

"suffix": ""

}
|
], |
|
"year": 2022, |
|
"venue": "Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Tsakalidis, Jenny Chim, Iman Munire Bilal, Ayah Zirikly, Dana Atzil-Slonim, Federico Nanni, Philip Resnik, Manas Gaur, Kaushik Roy, Becky Inkster, Jeff Leintz, and Maria Liakata. 2022a. Overview of the CLPsych 2022 shared task: Capturing moments of change in longitudinal user posts. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology: Mental Health in the Face of Change.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Identifying moments of change from longitudinal user text", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Tsakalidis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Federico", |
|
"middle": [], |
|
"last": "Nanni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Hills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Chim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiayu", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Liakata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4647--4660", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Tsakalidis, Federico Nanni, Anthony Hills, Jenny Chim, Jiayu Song, and Maria Liakata. 2022b. Identi- fying moments of change from longitudinal user text. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4647-4660, Dublin, Ireland. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts", |
|
"authors": [ |
|
{ |
|
"first": "Ayah", |
|
"middle": [], |
|
"last": "Zirikly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristy", |
|
"middle": [], |
|
"last": "Hollingshead", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the sixth workshop on computational linguistics and clinical psychology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Pre- dicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24-33.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Task A Learning model", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "N-gram feature selection with weighted precision, recall and F1 scores.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">Sub. Precision Recall</td><td>F1</td></tr><tr><td>IS</td><td>1 2 3 B1</td><td>0.204 0.362 0.478 0.222</td><td colspan=\"2\">0.512 0.292 0.256 0.300 0.134 0.209 0.024 0.044</td></tr><tr><td/><td>B2 Max Min</td><td>0.091 0.500 0</td><td colspan=\"2\">0.012 0.021 0.585 0.376 0 0</td></tr><tr><td/><td>1 2 3</td><td>0.500 0.646 0.644</td><td colspan=\"2\">0.625 0.556 0.553 0.596 0.505 0.566</td></tr><tr><td>IE</td><td>B1 B2 Max Min</td><td>0.569 0.723 0.748 0.273</td><td colspan=\"2\">0.514 0.540 0.163 0.267 0.630 0.662 0.029 0.052</td></tr><tr><td>O</td><td>1 2 3 B1 B2 Max Min</td><td>0.944 0.868 0.838 0.844 0.753 0.954 0.729</td><td colspan=\"2\">0.726 0.820 0.929 0.897 0.953 0.892 0.947 0.893 0.983 0.853 0.968 0.910 0.647 0.771</td></tr><tr><td>Macro avg</td><td>1 2 3 B1 B2 Max Min</td><td>0.549 0.625 0.654 0.545 0.523 0.689 0.354</td><td colspan=\"2\">0.621 0.556 0.579 0.598 0.531 0.556 0.495 0.492 0.386 0.380 0.625 0.649 0.337 0.305</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Task-A post-level evaluation for seq2seq+classifier (resp. (1) Logistic Regression (LR), (2) XGBoost, (3) Ensemble). (B1) tf-idf LR and (B2) BERT are baselines. Max & Min results from all CLPsych'22 submissions are also included.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"3\">Sub. Precision Recall</td><td>F1</td></tr><tr><td>IS</td><td>1 2 3 B1</td><td>0.211 0.406 0.511 0.111</td><td>0.563 0.318 0.199 0.008</td><td>0.307 0.357 0.287 0.0148</td></tr><tr><td/><td>B2 Max Min</td><td>0.025 0.517 0</td><td>0.007 0.575 0</td><td>0.011 0.390 0</td></tr><tr><td>IE</td><td>1 2 3 B1 B2 Max Min</td><td>0.198 0.307 0.302 0.284 0.226 0.369 0.070</td><td>0.406 0.467* 0.452 0.504 0.094 0.467* 0.050</td><td>0.266 0.370 0.362 0.363 0.132 0.406 0.073</td></tr><tr><td/><td>1</td><td>0.520</td><td>0.537</td><td>0.528</td></tr><tr><td/><td>2</td><td>0.703</td><td>0.725</td><td>0.713</td></tr><tr><td>O</td><td>3 B1 B2</td><td>0.675 0.738 0.529</td><td>0.700 0.762 0.513</td><td>0.687 0.750 0.521</td></tr><tr><td/><td>Max</td><td>0.720</td><td>0.737</td><td>0.728</td></tr><tr><td/><td>Min</td><td>0.510</td><td>0.486</td><td>0.503</td></tr><tr><td>Macro avg</td><td>1 2 3 B1 B2 Max Min</td><td>0.310 0.472 0.496 0.378 0.260 0.521 0.220</td><td>0.502 0.503* 0.450 0.425 0.204 0.503* 0.186</td><td>0.383 0.487 0.472 0.400 0.229 0.504 0.202</td></tr><tr><td colspan=\"5\">macro average F1 scores for Task-A on all three</td></tr><tr><td colspan=\"5\">evaluation techniques. Meanwhile, in Task-B, the</td></tr><tr><td colspan=\"5\">first model returns the highest micro and macro</td></tr><tr><td colspan=\"4\">average F1 scores in Clpysch'22.</td><td/></tr></table>", |
|
"type_str": "table", |
|
"text": "Task-A coverage evaluation for seq2seq+classifier (resp. (1) Logistic Regression (LR), (2) XGBoost, (3) Ensemble). (B1) tf-idf LR and (B2) BERT are baselines. Max & Min results from all CLPsych'22 submissions are also included.", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"5\">: Task-A window-based (window size = 3) evaluation for seq2seq+classifier (resp. (1) Logistic Regression (LR), (2) XGBoost, (3) Ensemble). (B1) tf-idf LR and (B2) BERT are baselines. Max & Min results from all CLPsych'22 submis-sions are also included.</td></tr><tr><td/><td colspan=\"3\">Sub. Precision Recall</td><td>F1</td></tr><tr><td>IS</td><td>1 2 3 B1</td><td>0.368 0.525 0.711* 0.167</td><td>0.814 0.372 0.224 0.008</td><td>0.507 0.435 0.341 0.015</td></tr><tr><td/><td>B2 Max Min</td><td>0.450 0.711* 0.200</td><td>0.065 0.872 0.004</td><td>0.113 0.512 0.008</td></tr><tr><td/><td>1 2</td><td>0.429 0.566</td><td>0.748 0.620</td><td>0.545 0.592</td></tr><tr><td/><td>3</td><td>0.570</td><td>0.622</td><td>0.595</td></tr><tr><td>IE</td><td>B1</td><td>0.477</td><td>0.675</td><td>0.559</td></tr><tr><td/><td>B2 Max Min</td><td>0.612 0.630 0.371</td><td>0.158 0.773 0.010</td><td>0.251 0.637 0.168</td></tr><tr><td>O</td><td>1 2 3 B1 B2 Max Min</td><td>0.956* 0.881 0.854 0.875 0.762 0.956* 0.769</td><td>0.755 0.968 0.992 0.973 0.995 0.996 0.610</td><td>0.844 0.923* 0.918 0.922 0.863 0.923* 0.742</td></tr><tr><td>Macro avg</td><td>1 2 3 B1 B2 Max Min</td><td>0.584 0.657 0.712 0.506 0.608 0.723 0.523</td><td>0.773* 0.653 0.613 0.552 0.406 0.773* 0.399</td><td>0.665 0.655 0.658 0.528 0.487 0.697 0.455</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>Level</td><td colspan=\"3\">Sub. Precision Recall</td><td>F1</td></tr><tr><td/><td>1 2</td><td>0.200 0</td><td>0.333 0</td><td>0.250 0</td></tr><tr><td>Low</td><td>B1 Max Min</td><td>0 1 0</td><td>0 0.667 0</td><td>0 0.500 0</td></tr><tr><td>Moderate</td><td>1 2 B1 Max Min</td><td>0.533 0.545 0.429 0.625 0.250</td><td>0.571 0.429 0.214 0.714 0.071</td><td>0.552 0.480 0.286 0.588 0.111</td></tr><tr><td>Severe</td><td>1 2 B1 Max Min</td><td>0.667* 0.556 0.480 0.667* 0.478</td><td>0.533 0.667 0.800 0.867 0.467</td><td>0.593 0.606 0.600 0.684 0.500</td></tr><tr><td>Macro avg</td><td>1 2 B1 Max Min</td><td>0.467 0.367 0.303 0.618 0.306</td><td colspan=\"2\">0.479* 0.465* 0.365 0.362 0.338 0.295 0.479* 0.465* 0.365 0.298</td></tr><tr><td>Micro avg</td><td>1 2 B1 Max Min</td><td>0.565* 0.499 0.412 0.565* 0.359</td><td>0.531 0.500 0.469 0.562 0.344</td><td>0.543* 0.494 0.406 0.543* 0.315</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Task-B evaluation for the models (1) n-gram+emotion bandwith+Logistic Regression (LR), and (2) Bi-LSTM. A baseline (B1) tf-idf LR, and Max & Min results from all CLPsych'22 submissions are also included.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |