ACL-OCL/Base_JSON/prefixL/json/ltedi/2022.ltedi-1.17.json
{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:17.912632Z"
},
"title": "UMUTeam@LT-EDI-ACL2022: Detecting Signs of Depression from text",
"authors": [
{
"first": "Jos\u00e9",
"middle": [
"Antonio"
],
"last": "Garc\u00eda-D\u00edaz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Murcia",
"location": {
"addrLine": "Campus de Espinardo",
"postCode": "30100",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garc\u00eda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad de Murcia",
"location": {
"addrLine": "Campus de Espinardo",
"postCode": "30100",
"country": "Spain"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Depression is a mental condition related to sadness and a lack of interest in everyday tasks. In these working notes, we describe the proposal of the UMUTeam for the LT-EDI shared task (ACL 2022) concerning the identification of signs of depression in social network posts. This task is related to other relevant Natural Language Processing tasks, such as Emotion Analysis. In this shared task, the organisers challenged the participants to distinguish between moderate and severe signs of depression (or no signs of depression at all) in a set of social media posts written in English. Our proposal is based on the combination of linguistic features and several sentence embeddings using a knowledge integration strategy. It achieved 6th position on the official leaderboard, with a macro f1-score of 53.82.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Depression is a mental condition related to sadness and a lack of interest in everyday tasks. In these working notes, we describe the proposal of the UMUTeam for the LT-EDI shared task (ACL 2022) concerning the identification of signs of depression in social network posts. This task is related to other relevant Natural Language Processing tasks, such as Emotion Analysis. In this shared task, the organisers challenged the participants to distinguish between moderate and severe signs of depression (or no signs of depression at all) in a set of social media posts written in English. Our proposal is based on the combination of linguistic features and several sentence embeddings using a knowledge integration strategy. It achieved 6th position on the official leaderboard, with a macro f1-score of 53.82.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The automatic analysis of depression is a means of helping people support their mental health (Evans-Lacko et al., 2018). The shared task DepSign LT-EDI (ACL 2022) (Sampath et al., 2022) aims to measure the ability of neural networks and Natural Language Processing (NLP) tools to detect signs of depression in social media posts written in English. It is worth noting that this is not the first shared task concerning the identification of depression: in (Losada et al., 2017), the organisers of eRisk 2017 developed a pilot project whose main purpose was the early risk detection of depression.",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "(Evans-Lacko et al., 2018",
"ref_id": "BIBREF2"
},
{
"start": 171,
"end": 193,
"text": "(Sampath et al., 2022)",
"ref_id": null
},
{
"start": 465,
"end": 486,
"text": "(Losada et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this shared task, the organisers proposed a multi-class classification challenge that consists of identifying whether a short text shows moderate or severe signs of depression or, on the contrary, no signs of depression at all. The performance of all participants is ranked using the macro-averaged precision, recall, and f1-score. The details of the dataset compilation can be found in (Kayalvizhi and Thenmozhi, 2022). The dataset is distributed into three folds: training, validation, and testing. We decided to use this distribution and not to merge the training and validation splits into a custom training-validation split. Table 1 depicts the label distribution per split. We can observe that the dataset is imbalanced, with most instances reflecting moderate signs of depression. Our research group has experience in Emotion Analysis. Specifically, we participated in the EmoEvalEs shared task (Plaza-del Arco et al., 2021), organised in the IberLEF 2021 workshop, which concerned the multi-class identification of emotions in Spanish (based on Ekman's basic emotions). Our participation is detailed in (Garc\u00eda-D\u00edaz et al., 2021b). Besides, we released the Spanish MisoCorpus 2021 and evaluated it with different feature sets and neural network models. Along the same lines, in (Garc\u00eda-D\u00edaz et al., 2022) we evaluated how to combine different feature sets and state-of-the-art neural network architectures to improve automatic hate-speech detectors. Specifically, we tested two strategies for combining the features: knowledge integration and ensemble learning. In this work, we evaluate these strategies as well. Besides, as part of the doctoral thesis of one of the members of the team, we evaluate a subset of language-independent linguistic features in order to observe whether they contribute to improving the performance of state-of-the-art embeddings.",
"cite_spans": [
{
"start": 407,
"end": 439,
"text": "(Kayalvizhi and Thenmozhi, 2022)",
"ref_id": "BIBREF7"
},
{
"start": 1149,
"end": 1176,
"text": "(Garc\u00eda-D\u00edaz et al., 2021b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 640,
"end": 647,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our pipeline can be summarised as follows. First, documents are pre-processed by removing punctuation symbols, extra spaces, and emojis. Second, four feature sets are extracted from the documents: linguistic features (LF) and sentence embeddings from FastText (SE), BERT (BF), and RoBERTa (RF). Third, several neural networks with different combinations of the feature sets are trained using hyperparameter tuning. Fourth, two additional ensembles are created to combine the features. Finally, we use the best neural network to produce the final submission on the official test split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Next, we describe the feature extraction stage. The linguistic features (LF) are extracted with the UMUTextStats tool (Garc\u00eda-D\u00edaz and Valencia-Garc\u00eda, 2022). These features are related to stylometry (for instance, word and sentence length, or type-token ratio), Part-of-Speech, emojis, and generic social network jargon. The main advantage of linguistic features over state-of-the-art embeddings is that they are easy to interpret while still achieving promising results, especially in Author Analysis tasks (Garc\u00eda-D\u00edaz et al., 2021a). The sentence embeddings from FastText (SE) are extracted with the FastText tool (Mikolov et al., 2018). These sentence embeddings are not contextual; that is, the same word has the same representation regardless of its context. Finally, the sentence embeddings from BERT (BF) and RoBERTa (RF) are extracted from distilled models (Sanh et al., 2019). We use the distilled versions because they require fewer computational resources. To obtain the sentence embeddings from BERT or RoBERTa, a hyperparameter selection stage of 10 models is conducted to obtain a good configuration of the models, and the sentence embeddings are then obtained from the [CLS] token (using the approach described in (Reimers and Gurevych, 2019)). During the hyperparameter selection stage, we use the Tree of Parzen Estimators (TPE) algorithm (Bergstra et al., 2013) to determine the best parameters (weight decay, batch size, warm-up speed, number of epochs, and learning rate).",
"cite_spans": [
{
"start": 650,
"end": 672,
"text": "(Mikolov et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 901,
"end": 920,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 1284,
"end": 1312,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1398,
"end": 1421,
"text": "(Bergstra et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The next step is the training of several neural networks. We train a neural network for each feature set (LF, SE, BF, RF), and a neural network that combines all feature sets (LF + SE + BF + RF). All these neural networks are trained with hyperparameter selection, for which we rely on Ray Tune (Liaw et al., 2018). For each training, we evaluate different numbers of hidden layers and neurons, batch sizes, learning rates, and regularisation mechanisms. We distinguish between (1) shallow neural networks, which are simple neural networks composed of one or two hidden layers with the same number of neurons in each layer; and (2) deep neural networks, which have 3, 4, 5, 6, 7 or 8 hidden layers. Besides, the layers of the deep neural networks are evaluated with different numbers of neurons arranged in several shapes (brick, triangle, diamond, rhombus, and funnel). For the rest of the parameters, we evaluate large batch sizes due to class imbalance, a dropout mechanism for regularisation (at different ratios), and small and large learning rates.",
"cite_spans": [
{
"start": 295,
"end": 314,
"text": "(Liaw et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "The results of the hyperparameter optimisation stage are shown in Table 2 . We can observe that the best neural network that combines all features consists of a shallow neural network composed of 2 wide hidden layers with 128 neurons each. The batch size is large (512), the learning rate is large (0.01), and there is no activation function (it is linear). Besides, this network uses a small dropout ratio of 0.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "We report the results achieved with the validation split. Table 3 depicts the macro average precision, recall, and f1-score of each feature set separately, of their combination with knowledge integration, and of two ensemble learning strategies: one based on the mode of the predictions and another based on averaging them.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "3"
},
{
"text": "Among the feature sets evaluated separately, BF is the one that achieves the best results (77.27% f1-score). This result is similar to RF (76.91% f1-score) and largely outperforms SE and LF. With the knowledge integration strategy, the results outperform those achieved separately, with an f1-score of 77.90%. Besides, when the predictions are combined with ensembles, the results improve further with the average of the probabilities (mean), achieving a macro f1-score of 78.69%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "3"
},
{
"text": "We decided to use the predictions obtained with the knowledge integration strategy for the final submission. This decision was taken because in past competitions we have achieved better results with this strategy on the official test (that is, we suspect this strategy generalises better than ensemble learning). Accordingly, we show the classification report of the validation split in Table 4 and its confusion matrix in Figure 1 . (Table 2 : Results of the best hyperparameters for each feature set separately or combined using knowledge integration; we include the shape of the neural network, the number of layers, the number of neurons in the first hidden layer, the dropout ratio, the learning rate, and the activation function. Table 3 : Macro average precision (P), recall (R), and f1-score (F1) of each feature set (LF, SE, BF and RF), the knowledge integration strategy (K.I.) and the two ensemble learning strategies (mode and mean) with the validation split.) We can observe that the precision and recall of all labels are competitive, achieving a macro f1-score of 79.90% and a weighted f1-score of 81.41%. Moderate signs of depression (the majority label) is the label that achieves the best precision and recall. Concerning the confusion matrix, we can observe that most misclassifications occur between no depression and moderate depression, and between severe and moderate depression. This means that our system does not make severe errors, such as classifying severe signs of depression as no depression. (Table 4 : Classification report of the knowledge integration strategy with the validation split, showing the precision (P), recall (R) and f1-score (F1) of each label and the macro and weighted scores.) Next, Table 5 shows the official results on the leaderboard. We achieved 6th position in the task out of a total of 31 teams, with a macro f1-score of 53.82 (4.48% below the best result). (Table 5 : Official results, including the team name and rank, the recall (R), precision (P), and the macro f1-score (F1).)",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 2",
"ref_id": null
},
{
"start": 689,
"end": 696,
"text": "Table 3",
"ref_id": null
},
{
"start": 940,
"end": 948,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1507,
"end": 1514,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 1714,
"end": 1721,
"text": "Table 5",
"ref_id": null
},
{
"start": 1897,
"end": 1904,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and discussion",
"sec_num": "3"
},
{
"text": "4 Conclusions and promising research lines",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Team",
"sec_num": null
},
{
"text": "Here we have described the participation of the UMUTeam in the LT-EDI-ACL2022 shared task, concerning the identification of moderate and severe signs of depression in short texts. We achieved 6th position out of a total of 31 participants with a system that combines linguistic features and three forms of sentence embeddings using knowledge integration. We are proud of our participation, as it has allowed us to evaluate a subset of language-independent linguistic features. Accordingly, we will continue to adapt our methods to English. Specifically, we will include linguistic features from figurative language, such as those described in (del Pilar Salas-Z\u00e1rate et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Team",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is part of the research project LaTe4PSP (PID2019-107652RB-I00) funded by MCIN/AEI/10.13039/501100011033. This work is also part of the research project PDC2021-121112-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. In addition, Jos\u00e9 Antonio Garc\u00eda-D\u00edaz is supported by Banco Santander and the University of Murcia through the Doctorado Industrial programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Yamins",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cox",
"suffix": ""
}
],
"year": 2013,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bergstra, Daniel Yamins, and David Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International conference on machine learning, pages 115-123. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Review of english literature on figurative language applied to social networks",
"authors": [
{
"first": "Mar\u00eda",
"middle": [],
"last": "Del",
"suffix": ""
},
{
"first": "Pilar",
"middle": [],
"last": "Salas-Z\u00e1rate",
"suffix": ""
},
{
"first": "Giner",
"middle": [],
"last": "Alor-Hern\u00e1ndez",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"Luis"
],
"last": "S\u00e1nchez-Cervantes",
"suffix": ""
},
{
"first": "Mario",
"middle": [
"Andr\u00e9s"
],
"last": "Paredes-Valverde",
"suffix": ""
},
{
"first": "Jorge",
"middle": [
"Luis"
],
"last": "Garc\u00eda-Alcaraz",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garc\u00eda",
"suffix": ""
}
],
"year": 2020,
"venue": "Knowledge and Information Systems",
"volume": "62",
"issue": "6",
"pages": "2105--2137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mar\u00eda del Pilar Salas-Z\u00e1rate, Giner Alor-Hern\u00e1ndez, Jos\u00e9 Luis S\u00e1nchez-Cervantes, Mario Andr\u00e9s Paredes-Valverde, Jorge Luis Garc\u00eda-Alcaraz, and Rafael Valencia-Garc\u00eda. 2020. Review of english literature on figurative language applied to social networks. Knowledge and Information Systems, 62(6):2105-2137.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Socio-economic variations in the mental health treatment gap for people with anxiety, mood, and substance use disorders: results from the who world mental health (wmh) surveys",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Evans-Lacko",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Aguilar-Gaxiola",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Al-Hamzawi",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Alonso",
"suffix": ""
},
{
"first": "Corina",
"middle": [],
"last": "Benjet",
"suffix": ""
},
{
"first": "Ronny",
"middle": [],
"last": "Bruffaerts",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Florescu",
"suffix": ""
},
{
"first": "Oye",
"middle": [],
"last": "De Girolamo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gureje",
"suffix": ""
}
],
"year": 2018,
"venue": "Psychological medicine",
"volume": "48",
"issue": "9",
"pages": "1560--1571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Evans-Lacko, Sergio Aguilar-Gaxiola, Ali Al-Hamzawi, Jordi Alonso, Corina Benjet, Ronny Bruffaerts, WT Chiu, Silvia Florescu, Giovanni de Girolamo, Oye Gureje, et al. 2018. Socio-economic variations in the mental health treatment gap for people with anxiety, mood, and substance use disorders: results from the who world mental health (wmh) surveys. Psychological medicine, 48(9):1560-1571.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Psychographic traits identification based on political ideology: An author analysis study on spanish politicians' tweets posted in 2020",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Antonio Garc\u00eda-D\u00edaz",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Colomo-Palacios",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garc\u00eda",
"suffix": ""
}
],
"year": 2021,
"venue": "Future Generation Computer Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Antonio Garc\u00eda-D\u00edaz, Ricardo Colomo-Palacios, and Rafael Valencia-Garc\u00eda. 2021a. Psychographic traits identification based on political ideology: An author analysis study on spanish politicians' tweets posted in 2020. Future Generation Computer Systems.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Umuteam at emoevales 2021: Emotion analysis for spanish based on explainable linguistic features and transformers",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Antonio Garc\u00eda-D\u00edaz",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Colomo-Palacios",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garcia",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Antonio Garc\u00eda-D\u00edaz, Ricardo Colomo-Palacios, and Rafael Valencia-Garcia. 2021b. Umuteam at emoevales 2021: Emotion analysis for spanish based on explainable linguistic features and transformers.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Evaluating feature combination strategies for hate-speech detection in spanish using linguistic features and transformers",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Antonio Garc\u00eda-D\u00edaz",
"suffix": ""
},
{
"first": "Salud",
"middle": [],
"last": "Mar\u00eda Jim\u00e9nez-Zafra",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"Angel"
],
"last": "Garc\u00eda-Cumbreras",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garc\u00eda",
"suffix": ""
}
],
"year": 2022,
"venue": "Complex & Intelligent Systems",
"volume": "",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Antonio Garc\u00eda-D\u00edaz, Salud Mar\u00eda Jim\u00e9nez-Zafra, Miguel Angel Garc\u00eda-Cumbreras, and Rafael Valencia-Garc\u00eda. 2022. Evaluating feature combination strategies for hate-speech detection in spanish using linguistic features and transformers. Complex & Intelligent Systems, pages 1-22.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Compilation and evaluation of the spanish saticorpus 2021 for satire identification using linguistic features and transformers",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Antonio Garc\u00eda-D\u00edaz",
"suffix": ""
},
{
"first": "Rafael",
"middle": [],
"last": "Valencia-Garc\u00eda",
"suffix": ""
}
],
"year": 2022,
"venue": "Complex & Intelligent Systems",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Antonio Garc\u00eda-D\u00edaz and Rafael Valencia-Garc\u00eda. 2022. Compilation and evaluation of the spanish saticorpus 2021 for satire identification using linguistic features and transformers. Complex & Intelligent Systems, pages 1-14.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Data set creation and empirical analysis for detecting signs of depression from social media postings",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kayalvizhi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thenmozhi",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2202.03047"
]
},
"num": null,
"urls": [],
"raw_text": "S Kayalvizhi and D Thenmozhi. 2022. Data set creation and empirical analysis for detecting signs of depression from social media postings. arXiv preprint arXiv:2202.03047.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tune: A research platform for distributed model selection and training",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Liaw",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Nishihara",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Moritz",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"E"
],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Stoica",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.05118"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. 2018. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "eRisk 2017: CLEF lab on early risk prediction on the internet: experimental foundations",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "David E Losada",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Crestani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Parapar",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "346--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E Losada, Fabio Crestani, and Javier Parapar. 2017. erisk 2017: Clef lab on early risk prediction on the internet: experimental foundations. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 346-360. Springer.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Advances in pre-training distributed word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the emoevales task on emotion detection for spanish at iberlef 2021",
"authors": [
{
"first": "Flor",
"middle": [
"Miriam"
],
"last": "Plaza-Del Arco",
"suffix": ""
},
{
"first": "Salud M Jim\u00e9nez",
"middle": [],
"last": "Zafra",
"suffix": ""
},
{
"first": "Arturo",
"middle": [
"Montejo"
],
"last": "R\u00e1ez",
"suffix": ""
},
{
"first": "M Dolores Molina",
"middle": [],
"last": "Gonz\u00e1lez",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Flor Miriam Plaza-del Arco, Salud M Jim\u00e9nez Zafra, Arturo Montejo R\u00e1ez, M Dolores Molina Gonz\u00e1lez, Luis Alfonso Ure\u00f1a L\u00f3pez, and Mar\u00eda Teresa Mart\u00edn Valdivia. 2021. Overview of the emoevales task on emotion detection for spanish at iberlef 2021.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Findings of the shared task on Detecting Signs of Depression from Social Media",
"authors": [
{
"first": "Kayalvizhi",
"middle": [],
"last": "Sampath",
"suffix": ""
},
{
"first": "Thenmozhi",
"middle": [],
"last": "Durairaj",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kayalvizhi Sampath, Thenmozhi Durairaj, Bharathi Raja Chakravarthi, and Jerin Mahibha C. 2022. Findings of the shared task on Detecting Signs of Depression from Social Media. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. CoRR, abs/1910.01108.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Confusion matrix of knowledge integration strategy with the validation split",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>and its con-</td></tr></table>"
}
}
}
}