{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:23.670978Z"
},
"title": "Lasige-BioTM at ProfNER: BiLSTM-CRF and contextual Spanish embeddings for Named Entity Recognition and Tweet Binary Classification",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Ruas",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vitor",
"middle": [
"D T"
],
"last": "Andrade",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the participation of the Lasige-BioTM team in sub-tracks A and B of ProfNER, which was based on: i) a BiLSTM-CRF model that leverages contextual and classical word embeddings to recognize and classify the mentions, and ii) a rule-based module to classify tweets. In the Evaluation phase, our model achieved an F1-score of 0.917 (0.031 above the median) in sub-track A and an F1-score of 0.727 (0.034 below the median) in sub-track B.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the participation of the Lasige-BioTM team in sub-tracks A and B of ProfNER, which was based on: i) a BiLSTM-CRF model that leverages contextual and classical word embeddings to recognize and classify the mentions, and ii) a rule-based module to classify tweets. In the Evaluation phase, our model achieved an F1-score of 0.917 (0.031 above the median) in sub-track A and an F1-score of 0.727 (0.034 below the median) in sub-track B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The track \"ProfNER-ST: Identification of professions & occupations in Health-related Social Media\" (Miranda-Escalada et al., 2021b) occurred in the context of the \"Social Media Mining for Health Applications (#SMM4H) Shared Task 2021\" (Magge et al., 2021) , and included two different sub-tracks that focused on Spanish Twitter data:",
"cite_spans": [
{
"start": 99,
"end": 131,
"text": "(Miranda-Escalada et al., 2021b)",
"ref_id": "BIBREF13"
},
{
"start": 235,
"end": 255,
"text": "(Magge et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Track A - Tweet binary classification: to determine whether a given tweet contains a mention of an occupation or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Track B - Named Entity Recognition (NER) offset detection and classification: to recognise the spans of mentions of occupations and classify them into the respective categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes the participation of the Lasige-BioTM team in the aforementioned sub-tracks. We applied 7 different NER models (4 supervised models based on the BiLSTM-CRF architecture and 3 rule-based models) to predict entities for sub-track B, and explored the impact of performing data augmentation on the training set. For sub-track A, we developed a rule-based model for tweet classification that was based on the NER output for sub-track B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to Goyal et al. (2018), NER approaches can be divided into two categories: rule-based and machine learning-based, the latter being further subdivided into supervised, semi-supervised, and unsupervised; other approaches combine aspects of the two categories and are thus designated as hybrid. Models with an architecture consisting of a bidirectional Long Short-Term Memory (BiLSTM) network and a Conditional Random Field (CRF) decoding layer are among the state-of-the-art approaches for the NER task (Huang et al., 2015). For a comprehensive overview of existing NER approaches, please refer to Goyal et al. (2018) and, specifically for the biomedical domain, to Lamurias and Couto (2019).",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "Goyal et al. (2018)",
"ref_id": "BIBREF5"
},
{
"start": 513,
"end": 533,
"text": "(Huang et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 612,
"end": 631,
"text": "Goyal et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "1.1"
},
{
"text": "The ProfNER corpus (Miranda-Escalada et al., 2020) contains 10,000 health-related tweets in Spanish that were annotated by expert linguists with entities relating to professions, employment statuses, and other work-related activities, spanning four categories: \"PROFESION\", \"SITUACION_LABORAL\", \"ACTIVIDAD\", and \"FIGURATIVA\". For sub-track A, a given tweet was assigned the label \"1\" if it included at least one entity belonging to any category, but for sub-track B only entities belonging to the categories \"PROFESION\" and \"SITUACION_LABORAL\" were considered for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus description",
"sec_num": "2.1"
},
{
"text": "We performed data augmentation on the training set of the corpus using the Python library nlpaug (Ma, 2019). For example, considering the entity mention \"m\u00e9dico\" present in the training set, data augmentation consisted of substituting a random character by a keyboard character (i.e., replacing the character by a neighbouring character on the keyboard to simulate a typing error, since Twitter data is usually noisy: \"m\u00e9dico\" \u2192 \"m\u00e9dLco\"), substituting a random character by another random character (\"m\u00e9dico\" \u2192 \"m\u00e9dicB\"), and substituting the word by a synonym (i.e., replacing the word by a synonym from the Spanish WordNet: \"m\u00e9dico\" \u2192 \"dr.\"). The output of this step consisted of three additional training files besides the original training file, each one associated with one type of augmentation.",
"cite_spans": [
{
"start": 97,
"end": 107,
"text": "(Ma, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "2.2"
},
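The keyboard-neighbour substitution described above can be sketched in plain Python. This is an illustrative re-implementation, not the authors' code (they used nlpaug), and the neighbour map is a small hypothetical excerpt of a keyboard layout:

```python
import random

# Hypothetical excerpt of a QWERTY neighbour map; nlpaug ships a full one.
KEYBOARD_NEIGHBOURS = {
    "i": "uojk", "c": "xvdf", "o": "ipkl", "d": "sfer", "m": "njk",
}

def keyboard_augment(word: str, rng: random.Random) -> str:
    """Replace one random character by a keyboard neighbour to simulate a typo."""
    positions = [i for i, ch in enumerate(word) if ch.lower() in KEYBOARD_NEIGHBOURS]
    if not positions:
        return word
    i = rng.choice(positions)
    replacement = rng.choice(KEYBOARD_NEIGHBOURS[word[i].lower()])
    return word[:i] + replacement + word[i + 1:]

# Produces a one-character corruption such as the "m\u00e9dico" -> "m\u00e9dLco" example.
print(keyboard_augment("medico", random.Random(0)))
```

The same pattern extends to the other two augmentations: random-character substitution draws the replacement from the whole alphabet, and synonym substitution swaps the entire word using a WordNet lookup.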
{
"text": "The first approach was based on MER (Couto and Lamurias, 2018), a minimal NER tagger that recognizes entities and their respective spans in text according to a given lexicon. It is based on the text-processing command-line tools grep and awk, and on an inverted recognition technique that uses the words in the input text as patterns to match the lexicon words. Several lexicons were created and processed, including: 1) mentions in the \"PROFESION\" category in the training set and their WordNet synonyms, 2) mentions in the \"PROFESION\" category in the training set and their WordNet synonyms, jointly with entities present in the Occupations gazetteer provided by the organisation (Asensio et al., 2021), 3) mentions in the \"SITUACION_LABORAL\" category in the training set and their WordNet synonyms, 4) entities in the \"ACTIVIDAD\" category in the training set and their WordNet synonyms, and 5) entities in the \"FIGURATIVA\" category in the training set and their WordNet synonyms. The first model (\"MER 1\") included lexicons 1, 3, 4, and 5; the second model (\"MER 2\") included lexicons 2, 3, 4, and 5; the third model (\"MER 3\") was similar to the first one, but the mention \"sin\" was filtered out. During the Practice phase, we built the lexicons from the training set and used the validation set as the test set. For sub-track A, we developed a rule-based module to classify each tweet with the label \"1\" if at least one mention was recognized in the respective text, and with the label \"0\" otherwise.",
"cite_spans": [
{
"start": 654,
"end": 676,
"text": "(Asensio et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MER",
"sec_num": "2.3"
},
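The inverted recognition idea behind MER can be sketched as follows. This is an illustrative Python re-implementation (MER itself is a shell script built on grep and awk), and the lexicon contents are hypothetical:

```python
# Inverted recognition: instead of scanning the text once per lexicon entry,
# look up each word (and word bigram) of the input in the lexicon.
LEXICON = {"medico", "enfermera", "personal sanitario"}  # hypothetical entries

def recognize(text: str, lexicon=LEXICON):
    """Return (start, end, mention) spans for lexicon matches in `text`."""
    lowered = text.lower()
    words = lowered.split()
    # Compute the character offset where each word starts.
    starts, offset = [], 0
    for w in words:
        idx = lowered.index(w, offset)
        starts.append(idx)
        offset = idx + len(w)
    spans = []
    for i, w in enumerate(words):
        if w in lexicon:
            spans.append((starts[i], starts[i] + len(w), w))
        if i + 1 < len(words):
            bigram = f"{w} {words[i + 1]}"
            if bigram in lexicon:
                spans.append((starts[i], starts[i + 1] + len(words[i + 1]), bigram))
    return spans

print(recognize("El medico y el personal sanitario trabajan"))
```

A real lexicon built from the \"PROFESION\" mentions and their WordNet synonyms would simply replace the toy set above.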
{
"text": "To implement the second approach, we resorted to the FLAIR framework (Akbik et al., 2019) and created an object of the class SequenceTagger, which instantiates a NER model with an architecture consisting of a BiLSTM network and a CRF decoding layer. LSTMs are recurrent neural networks (RNNs) that include an input layer x representing features at time t, one or more hidden layers h, and an output layer y, which, in the case of the NER task, represents a probability distribution over labels or tags at time t. A CRF network focuses on the sentence level and also uses past and future tags/labels to predict the current one. The combination of a BiLSTM network with a CRF network has shown performance improvements over alternative architectures (Huang et al., 2015).",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Akbik et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 748,
"end": 767,
"text": "(Huang et al., 2015",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
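The role of the CRF decoding layer can be illustrated with a minimal Viterbi decoder over toy scores. In the real model the per-token emission scores come from the BiLSTM and the tag-transition scores are learned, so all values below are hypothetical:

```python
TAGS = ["O", "B-PROF", "I-PROF"]

def viterbi(emissions, transitions):
    """Best tag sequence. emissions: list of per-token dicts tag->score;
    transitions: dict (prev_tag, cur_tag) -> score."""
    score = {t: emissions[0][t] for t in TAGS}
    back = []
    for em in emissions[1:]:
        new_score, ptr = {}, {}
        for cur in TAGS:
            best_prev = max(TAGS, key=lambda p: score[p] + transitions[(p, cur)])
            new_score[cur] = score[best_prev] + transitions[(best_prev, cur)] + em[cur]
            ptr[cur] = best_prev
        score, back = new_score, back + [ptr]
    best = max(TAGS, key=score.get)
    path = [best]
    for ptr in reversed(back):  # follow back-pointers to recover the sequence
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy scores: transitions discourage I-PROF after O and reward it after B-PROF,
# which is how the CRF uses neighbouring tags to constrain the prediction.
transitions = {(p, c): 0.0 for p in TAGS for c in TAGS}
transitions[("O", "I-PROF")] = -10.0
transitions[("B-PROF", "I-PROF")] = 1.0
emissions = [
    {"O": 0.1, "B-PROF": 2.0, "I-PROF": 0.0},
    {"O": 0.5, "B-PROF": 0.0, "I-PROF": 1.0},
]
print(viterbi(emissions, transitions))  # ['B-PROF', 'I-PROF']
```

Without the transition scores, each token would be tagged independently from its emission scores alone; the CRF layer is what makes an invalid sequence like O followed by I-PROF unlikely.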
{
"text": "In the NER task, text needs to be tokenized and vectorized before being input to the neural network, which can be done by leveraging pre-trained embeddings. FastText embeddings (Bojanowski et al., 2017) are an improvement over classic word embeddings, more concretely the skip-gram model, by capturing sub-word information. FLAIR embeddings (Akbik et al., 2018) are contextual string embeddings that capture syntactic-semantic word features. We explored the integration of different types of embeddings in the BiLSTM-CRF model through the StackedEmbeddings class:",
"cite_spans": [
{
"start": 176,
"end": 201,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 339,
"end": 359,
"text": "(Akbik et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
{
"text": "\u2022 \"Base\": FLAIR embeddings (\"es-forward\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
{
"text": "and \"es-backward\") trained on Spanish Wikipedia (Akbik et al., 2018) + Spanish Fast-Text embeddings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
{
"text": "\u2022 \"Twitter\": FastText Spanish COVID-19 Twitter embeddings, provided by the organization (Miranda-Escalada et al., 2021a) (uncased version of the cbow model).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
{
"text": "\u2022 \"Medium\": FLAIR embeddings (\"es-forward\" and \"es-backward\") + Spanish FastText embeddings + FastText Spanish COVID-19 Twitter embeddings. For sub-track A, we applied a rule-based module similar to the one described in Section 2.3. If a model recognizes at least one entity in a given tweet in the context of sub-track B, the module assigns the label \"1\" to the respective tweet. If no entity is recognized in a given tweet, it receives the label \"0\". All the tweet IDs and respective labels are then output in the predictions file for sub-track A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BiLSTM-CRF",
"sec_num": "2.4"
},
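The rule-based module just described can be sketched in a few lines. The prediction data structures below are hypothetical placeholders for the actual NER output format:

```python
def classify_tweets(tweet_ids, ner_predictions):
    """Label a tweet "1" if the NER step recognized at least one entity in it,
    and "0" otherwise. ner_predictions maps tweet_id -> list of entity spans."""
    return {tid: "1" if ner_predictions.get(tid) else "0" for tid in tweet_ids}

# Hypothetical example: t1 had one recognized mention, t2 had none.
labels = classify_tweets(
    ["t1", "t2"],
    {"t1": [("PROFESION", 10, 16)], "t2": []},
)
print(labels)  # {'t1': '1', 't2': '0'}
```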
{
"text": "During the Practice phase, we trained the models \"Base\" and \"Twitter\" on the original training file, and additionally on the three files that resulted from the data augmentation step (\"Base-aug\" and \"Twitter-aug\"). During the Evaluation phase, we merged the training and validation annotations, resulting in a file composed of 14,674 sentences for training and 1,630 sentences for validation. The training parameters were set to: hidden size = 256, mini-batch size = 32, max epochs = 55, patience = 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.4.1"
},
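The patience parameter above controls early stopping: training reacts after a fixed number of epochs without improvement on the validation score. A minimal sketch of the mechanism (FLAIR's trainer actually anneals the learning rate at that point; here we simply stop, and the validation scores are hypothetical):

```python
def epochs_run(val_scores, patience=3, max_epochs=55):
    """Run through per-epoch validation scores, stopping after `patience`
    consecutive epochs without a new best score; return epochs completed."""
    best, bad, epochs = float("-inf"), 0, 0
    for score in val_scores[:max_epochs]:
        epochs += 1
        if score > best:
            best, bad = score, 0  # new best: reset the patience counter
        else:
            bad += 1
            if bad >= patience:
                break
    return epochs

# Hypothetical validation F1 per epoch: the best is at epoch 2, then three
# non-improving epochs exhaust patience=3, so training stops after epoch 5.
print(epochs_run([0.60, 0.65, 0.64, 0.64, 0.63]))  # 5
```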
{
"text": "3 Results and discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.4.1"
},
{
"text": "The performance of these models on the validation set for sub-tracks A and B is reported in Table 1 . The \"Base\" model trained on the original training file achieved the best performance in sub-tracks A and B: F1-scores (strict) of 0.908 and 0.716, respectively. Consequently, we selected this model for further training and application to the test set. The models trained on files resulting from data augmentation achieved lower performance than the respective versions trained exclusively on the original training file.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Practice phase",
"sec_num": "3.1"
},
{
"text": "The results achieved by our model in the Evaluation phase and the median results for all competing teams are shown in Table 2 . In sub-track A, our model achieved an F1-score of 0.917 (0.031 above the median), and in sub-track B it achieved an F1-score of 0.727 (0.034 below the median).",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation phase",
"sec_num": "3.2"
},
{
"text": "The model \"Base\", which uses contextual embeddings trained on general corpora, obtained higher performance than the model \"Twitter\", although the latter uses Twitter-specific embeddings, more concretely FastText embeddings trained on Twitter data. For instance, consider the following tweet from the validation set: \"Ya que est\u00e1n sesionando la importante pero NO prioritaria #LeyDeAmnistia,ser\u00e1 que tambi\u00e9n vean la cuesti\u00f3n de #Economia y #SaludParaTodos? Digo!Recuerden que su prioridad somos los millones que estamos indefensos ante el #COVID-19 y sin trabajo @MorenaSenadores #LeyDeAmnistiaNo https://t.co/DCiuqiBjEs\". The model \"Twitter\" recognizes the mention \"@MorenaSenadores\" and assigns the \"PROFESION\" category to it, whereas the model \"Base\" does not recognize any mention, since it is able to infer from this context that the mention corresponds not to a profession but to a Twitter handle. There is a mention with the string \"senadores\" classified as \"PROFESION\" in a tweet of the training set, which may lead the model \"Twitter\" to assume that the word \"@MorenaSenadores\" must also correspond to a mention, since the strings are similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "3.3"
},
{
"text": "During the Practice phase, we explored different approaches to participate in sub-tracks A and B of ProfNER: data augmentation on the training set, and the application of MER and a BiLSTM-CRF model for NER and subsequent tweet classification. For the Evaluation phase, we applied the BiLSTM-CRF model to the test set of the ProfNER corpus and achieved an F1-score of 0.917 in sub-track A (0.031 above the median) and an F1-score of 0.727 in sub-track B (0.034 below the median). The code to run the experiments is available on our GitHub page (https://github.com/lasigeBioTM/LASIGE-participation-in-ProfNER). For future work, we intend to perform hyper-parameter optimisation for the BiLSTM-CRF model, tuning, for example, the learning rate, the hidden size, and especially the number of training epochs, since we had limited time available to train the model. We will also explore the use of different contextualised embeddings, since the models using this type of embeddings seem to achieve better performance than those using classical word embeddings. In addition, to improve tweet classification, we will explore the application of Named Entity Linking tools to link the entities recognized in sub-track B to structured vocabularies that contain hierarchical relationships between concepts, such as MeSH or DBpedia. This way, it will be possible to know the ancestors of a given entity, which will provide the context to effectively determine whether the entity is associated with an occupation or not. Table 1 : Practice results for sub-track 7A (left) and sub-track 7B (right). P, R, and F1 refer to precision, recall, and F1-score (strict), respectively, and Rel-P, Rel-R, and Rel-F1 refer to relaxed precision, relaxed recall, and relaxed F1-score, respectively. Table 2 : Evaluation phase results for sub-tracks 7A and 7B. P, R, and F1 refer to precision, recall, and F1-score (strict), respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 1428,
"end": 1435,
"text": "Table 1",
"ref_id": null
},
{
"start": 1690,
"end": 1697,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Flair: An easy-to-use framework for state-of-the-art nlp",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Rasul",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schweter",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL 2019, 2019 Annual Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "54--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-the-art NLP. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextual string embeddings for sequence labeling",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Duncan",
"middle": [],
"last": "Blythe",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING 2018, 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING 2018, 27th International Con- ference on Computational Linguistics, pages 1638- 1649.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Occupations gazetteer -ProfNER & MEDDOPROF -occupations, professions and working status terms with their associated codes. Funded by the Plan de Impulso de las Tecnolog\u00edas del Lenguaje",
"authors": [
{
"first": "Alejandro",
"middle": [],
"last": "Asensio",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Aguero",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4524659"
]
},
"num": null,
"urls": [],
"raw_text": "Alejandro Asensio, Antonio Miranda-Escalada, Mar- vin Aguero, and Martin Krallinger. 2021. Occupa- tions gazetteer -ProfNER & MEDDOPROF -occu- pations, professions and working status terms with their associated codes. Funded by the Plan de Im- pulso de las Tecnolog\u00edas del Lenguaje (Plan TL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "MER: a shell script and annotation server for minimal named entity recognition and linking",
"authors": [
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Lamurias",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Cheminformatics",
"volume": "10",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1186/s13321-018-0312-9"
]
},
"num": null,
"urls": [],
"raw_text": "Francisco M. Couto and Andre Lamurias. 2018. MER: a shell script and annotation server for minimal named entity recognition and linking. Journal of Cheminformatics, 10(1):58.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recent Named Entity Recognition and Classification techniques: A systematic review",
"authors": [
{
"first": "Archana",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2018,
"venue": "Computer Science Review",
"volume": "29",
"issue": "",
"pages": "21--43",
"other_ids": {
"DOI": [
"10.1016/j.cosrev.2018.06.001"
]
},
"num": null,
"urls": [],
"raw_text": "Archana Goyal, Vishal Gupta, and Manish Kumar. 2018. Recent Named Entity Recognition and Classi- fication techniques: A systematic review. Computer Science Review, 29:21-43.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Text Mining for Bioinformatics Using Biomedical Literature",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Lamurias",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics and Computational Biology",
"volume": "1",
"issue": "",
"pages": "602--61",
"other_ids": {
"DOI": [
"10.1016/B978-0-12-809633-8.20409-3"
]
},
"num": null,
"urls": [],
"raw_text": "Andre Lamurias and Francisco M Couto. 2019. Text Mining for Bioinformatics Using Biomedical Literature. In S. Ranganathan, M. Gribskov, K. Nakai, and C. Sch\u00f6nbach, editors, Encyclopedia of Bioinformatics and Computational Biology, vol. 1, pages 602-61. Oxford: Elsevier.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "PPR-SSM: Personalized PageRank and semantic similarity measures for entity linking",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Lamurias",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Ruas",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"M"
],
"last": "Couto",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Bioinformatics",
"volume": "20",
"issue": "1",
"pages": "1--12",
"other_ids": {
"DOI": [
"10.1186/s12859-019-3157-y"
]
},
"num": null,
"urls": [],
"raw_text": "Andre Lamurias, Pedro Ruas, and Francisco M. Couto. 2019. PPR-SSM: Personalized PageRank and se- mantic similarity measures for entity linking. BMC Bioinformatics, 20(1):1-12.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nlp augmentation",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Ma. 2019. Nlp augmentation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the sixth social media mining for health applications",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Ari",
"middle": [
"Z"
],
"last": "Klein",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [
"Ali"
],
"last": "Al-Garadi",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Eulalia",
"middle": [],
"last": "Farre",
"suffix": ""
},
{
"first": "Salvador",
"middle": [],
"last": "Lima",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Banda",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Magge, Ari Z. Klein, Ivan Flores, Ilseyar Al- imova, Mohammed Ali Al-garadi, Antonio Miranda- Escalada, Zulfat Miftahutdinov, Eulalia Farre, Sal- vador Lima, Juan Banda, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Martin Krallinger, Davy Weissenbacher, and Graciela Gonzalez-Hernandez. 2021. Overview of the sixth social media mining for health applications (#smm4h) shared tasks at naacl 2021.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spanish covid-19 twitter embeddings in fasttext. Funded by the Plan de Impulso de las Tecnolog\u00edas del Lenguaje",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Aguero",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4449930"
]
},
"num": null,
"urls": [],
"raw_text": "Antonio Miranda-Escalada, Marvin Aguero, and Mar- tin Krallinger. 2021a. Spanish covid-19 twitter em- beddings in fasttext. Funded by the Plan de Impulso de las Tecnolog\u00edas del Lenguaje (Plan TL).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ProfNER corpus: gold standard annotations for profession detection in Spanish COVID-19 tweets. Funded by the Plan de Impulso de las Tecnolog\u00edas del Lenguaje",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Vicent",
"middle": [],
"last": "Briva-Iglesias",
"suffix": ""
},
{
"first": "Eul\u00e0lia",
"middle": [],
"last": "Farr\u00e9",
"suffix": ""
},
{
"first": "Salvador",
"middle": [
"Lima"
],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Aguero",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4563995"
]
},
"num": null,
"urls": [],
"raw_text": "Antonio Miranda-Escalada, Vicent Briva-Iglesias, Eu- l\u00e0lia Farr\u00e9, Salvador Lima L\u00f3pez, Marvin Aguero, and Martin Krallinger. 2020. ProfNER corpus: gold standard annotations for profession detection in Spanish COVID-19 tweets. Funded by the Plan de Impulso de las Tecnolog\u00edas del Lenguaje (Plan TL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The profner shared task on automatic recognition of occupation mentions in social media: systems, evaluation, guidelines, embeddings and corpora",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Eul\u00e0lia",
"middle": [],
"last": "Farr\u00e9-Maduell",
"suffix": ""
},
{
"first": "Salvador",
"middle": [
"Lima"
],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gasc\u00f3-S\u00e1nchez",
"suffix": ""
},
{
"first": "Vicent",
"middle": [],
"last": "Briva-Iglesias",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Ag\u00fcero-Torales",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixth Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Miranda-Escalada, Eul\u00e0lia Farr\u00e9-Maduell, Sal- vador Lima L\u00f3pez, Luis Gasc\u00f3-S\u00e1nchez, Vicent Briva-Iglesias, Marvin Ag\u00fcero-Torales, and Martin Krallinger. 2021b. The profner shared task on auto- matic recognition of occupation mentions in social media: systems, evaluation, guidelines, embeddings and corpora. In Proceedings of the Sixth Social Media Mining for Health Applications Workshop & Shared Task.",
"links": null
}
},
"ref_entries": {}
}
}