{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:54.132795Z"
},
"title": "System description for ProfNER -SMMH: Optimized fine tuning of a pretrained transformer and word vectors",
"authors": [
{
"first": "David",
"middle": [],
"last": "Fidalgo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Vila-Suero",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Francisco",
"middle": [],
"last": "Aranda",
"suffix": "",
"affiliation": {},
"email": "francisco]@recogn.ai"
},
{
"first": "Ignacio",
"middle": [],
"last": "Talavera",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This shared task system description depicts two neural network architectures submitted to the ProfNER track, among them the winning system that scored highest in the two subtasks 7a and 7b. We present in detail the approach, preprocessing steps and the architectures used to achieve the submitted results, and also provide a GitHub repository to reproduce the scores. The winning system is based on a transformer-based pretrained language model and solves the two sub-tasks simultaneously.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This shared task system description depicts two neural network architectures submitted to the ProfNER track, among them the winning system that scored highest in the two subtasks 7a and 7b. We present in detail the approach, preprocessing steps and the architectures used to achieve the submitted results, and also provide a GitHub repository to reproduce the scores. The winning system is based on a transformer-based pretrained language model and solves the two sub-tasks simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The identification of professions and occupations in Spanish (ProfNER 1 , is part of the Social Media Mining for Health Applications (SMM4H) Shared Task 2021 (Magge et al., 2021) . Its aim was to extract professions from social media to enable characterizing health-related issues, in particular in the context of COVID-19 epidemiology as well as mental health conditions.",
"cite_spans": [
{
"start": 158,
"end": 178,
"text": "(Magge et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "ProfNER was the seventh track of the task and focused on the identification of professions and occupations in Spanish tweets. It consisted of two sub-tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 task 7a: In this binary classification task, participants had to determine whether a tweet contains a mention of occupation, or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 task 7b: In this Named Entity Recognition (NER) task, participants had to find the beginning and end of occupation mentions and classify them into two categories: PROFESION (professions) and SITUACION_LABORAL (working status).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We submitted two systems to each of the tasks described above, which share the same basic structure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
{
"text": "1 https://temu.bsc.es/smm4h-spanish/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
{
"text": "\u2022 a backbone model that extracts and contextualizes the input features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
{
"text": "\u2022 a task head that performs task specific operations and computes the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
{
"text": "In the backbone of both systems we take advantage of pretrained components, such as a transformerbased language model or skip-gram word vectors. The task head of both systems is very similar in that it solves task 7a and 7b simultaneously, and returns the sum of both losses. For the first system we aimed to maximize the metrics of the competition with the constraint of using a single GPU environment. For the second system we tried to maximize the model's efficiency with respect to the model size and speed while maintaining acceptable performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
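{
    "text": "The shared structure can be sketched as follows in PyTorch (a minimal illustration only, not the actual biome.text implementation; module names, dimensions and the use of a plain cross-entropy loss for the NER head instead of a CRF are simplifying assumptions):\n\nimport torch\nimport torch.nn as nn\n\nclass MultiTaskModel(nn.Module):\n    # backbone: any module that returns a sequence of contextualized embeddings\n    def __init__(self, backbone, hidden_dim, num_ner_tags):\n        super().__init__()\n        self.backbone = backbone\n        self.clf_head = nn.Linear(hidden_dim, 2)             # task 7a: mention vs. no mention\n        self.ner_head = nn.Linear(hidden_dim, num_ner_tags)  # task 7b: per-token tag logits\n        self.loss_fn = nn.CrossEntropyLoss()\n\n    def forward(self, inputs, clf_labels=None, ner_labels=None):\n        sequence = self.backbone(inputs)  # (batch, seq_len, hidden_dim)\n        pooled = sequence[:, 0]           # e.g. the first position ([CLS] for system 1)\n        clf_logits = self.clf_head(pooled)\n        ner_logits = self.ner_head(sequence)\n        loss = None\n        if clf_labels is not None and ner_labels is not None:\n            # the two task losses are simply summed\n            loss = self.loss_fn(clf_logits, clf_labels) + self.loss_fn(ner_logits.transpose(1, 2), ner_labels)\n        return clf_logits, ner_logits, loss",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Our approach",
    "sec_num": "2"
},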
{
"text": "Both systems were designed and trained using biome.text 2 , a practical NLP open source library based on AllenNLP 3 (Gardner et al., 2017) and PyTorch 4 (Paszke et al., 2019) .",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Gardner et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 153,
"end": 174,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "2"
},
{
"text": "In a first step we transformed the given brat 5 annotations of task 7b to commonly used BIO NER tags (Ratinov and Roth, 2009) . For this we used spaCy 6 (Honnibal et al., 2020) and a customized tokenizer of its \"es_core_news_sm\" language model, to make sure that the resulting word tokens and annotations always aligned well. In this step we excluded the entity classes not considered during evaluation. The same customized tokenizer was used to transform the predicted NER tags of our systems back to brat annotations during inference time. To obtain the input data for our training pipeline, we added the tweet ID and the corresponding classification labels of task 7a to our word tokens and NER tags (see Table 1 for an example).",
"cite_spans": [
{
"start": 101,
"end": 125,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 708,
"end": 715,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
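{
    "text": "A hedged sketch of this conversion step (not our exact script; the span format and the use of spaCy v3 helpers such as offsets_to_biluo_tags are assumptions, and the tokenizer customization is omitted):\n\nimport spacy\nfrom spacy.training import offsets_to_biluo_tags, biluo_to_iob\n\nnlp = spacy.load(\"es_core_news_sm\")\n\ndef brat_to_bio(tweet_text, spans):\n    # spans: list of (start_char, end_char, label) tuples taken from the brat .ann file\n    doc = nlp(tweet_text)\n    biluo_tags = offsets_to_biluo_tags(doc, spans)  # misaligned spans come back as '-'\n    bio_tags = biluo_to_iob(biluo_tags)             # BILUO -> BIO encoding scheme\n    return [token.text for token in doc], bio_tags",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Preprocessing",
    "sec_num": "2.1"
},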
{
"text": "No data augmentation or external data was used for the training of our systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.1"
},
{
"text": "In our first system, the backbone model consists of a transformer-based pretrained language model. More precisely, we use BETO, a BERT model trained on a big Spanish corpus (Ca\u00f1ete et al., 2020) , which is distributed via Hugging Face's (Wolf et al., 2019) Model Hub 7 under the name \"dccuchile/bert-base-spanishwwm-cased\". For its usage we further tokenize the word tokens into word pieces with the corresponding BERT tokenizer, which also introduces the special BERT tokens [CLS] and [SEP] (Devlin et al., 2019). Since some of the word tokens cannot be processed by the tokenizer and are simply ignored (e.g. the newline character \"\\n\"), we replace those problematic word tokens with a dummy token \"ae\", which is not ignored, and that allows the correct transformation of NER tags to brat annotations at inference time. The output sequence of the transformer is then passed on to the task head of the system.",
"cite_spans": [
{
"start": 173,
"end": 194,
"text": "(Ca\u00f1ete et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 237,
"end": 256,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1: Transformer",
"sec_num": "2.2"
},
{
"text": "In the task head we first apply a non-linear tanh activation layer to the [CLS] token, which we initialize with its pretrained weights (Devlin et al., 2019) , before obtaining the logits of a linear classification layer that solves task 7a. The classification loss is calculated via the Cross Entropy loss function. To solve task 7b, we need to bridge the difference between the word piece features and predictions at a the level of word tokens. For this, we follow the approach of Devlin et al. (2019) who use a subword pooling in which the first word piece of a word token is used to represent the entire token, excluding the special BERT tokens. After the subword pooling we apply a linear classification layer and a subsequent Conditional Random Field (CRF) model that predicts a sequence of NER tags.",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 482,
"end": 502,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 1: Transformer",
"sec_num": "2.2"
},
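{
    "text": "The subword pooling step can be illustrated with the following minimal sketch (an illustrative simplification: it handles a single tweet and assumes a word_ids mapping from word pieces to word-token indices, such as the one exposed by Hugging Face fast tokenizers):\n\nimport torch\n\ndef first_wordpiece_pooling(hidden_states, word_ids):\n    # hidden_states: (num_word_pieces, hidden_dim) transformer output for one tweet\n    # word_ids: one entry per word piece, giving its word-token index or None for [CLS]/[SEP]\n    keep = []\n    seen = set()\n    for position, word_id in enumerate(word_ids):\n        if word_id is not None and word_id not in seen:\n            keep.append(position)  # first word piece of this word token\n            seen.add(word_id)\n    return hidden_states[torch.tensor(keep)]  # (num_word_tokens, hidden_dim)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "System 1: Transformer",
    "sec_num": "2.2"
},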
{
"text": "For the parameter updates we used the AdamW algorithm (Loshchilov and Hutter, 2019) and schedule the learning rate with warm-up steps and a linear decay afterwards. We optimized the training parameters listed in Table 2 by means of the Ray Tune library 8 (Liaw et al., 2018) which is tightly integrated with biome.text. Our Hyperparameter Optimization (HPO) consisted of 50 runs (see Figure 1) using a tree-structured Parzen Estimator 9 as search algorithm (Bergstra et al., 2011) and the ASHA trial scheduler to terminate low-performing trials (Li et al., 2018) . The reference metric for both algorithms was the overall F1 score of task 7b. The HPO lasted for about 6 hours on a g4dn.",
"cite_spans": [
{
"start": 54,
"end": 83,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 255,
"end": 274,
"text": "(Liaw et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 457,
"end": 480,
"text": "(Bergstra et al., 2011)",
"ref_id": "BIBREF0"
},
{
"start": 545,
"end": 562,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 212,
"end": 219,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 384,
"end": 393,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "2.2.1"
},
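{
    "text": "A condensed sketch of such an HPO setup (the search spaces below are illustrative examples rather than our exact values, train_fn is a placeholder trainable, and the Ray Tune API shown, with HyperOptSearch and ASHAScheduler, reflects the library versions available around 2021):\n\nfrom ray import tune\nfrom ray.tune.schedulers import ASHAScheduler\nfrom ray.tune.suggest.hyperopt import HyperOptSearch  # tree-structured Parzen Estimator\n\ndef train_fn(config):\n    # train with the sampled config and report the NER F1 back to Tune,\n    # e.g. via tune.report(f1_ner=...)\n    pass\n\nanalysis = tune.run(\n    train_fn,\n    config={\n        \"batch_size\": tune.choice([8, 16, 32]),\n        \"learning_rate\": tune.loguniform(1e-5, 1e-4),\n        \"weight_decay\": tune.loguniform(1e-4, 1e-2),\n        \"num_epochs\": tune.choice([3, 4, 5]),\n        \"warmup_steps\": tune.randint(0, 200),\n    },\n    num_samples=50,             # 50 trials\n    metric=\"f1_ner\",\n    mode=\"max\",\n    search_alg=HyperOptSearch(),\n    scheduler=ASHAScheduler(),  # terminates low-performing trials early\n)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "Training",
    "sec_num": "2.2.1"
},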
{
"text": "xlarge AWS machine with one Tesla T4 GPU. We took the best performing model of the HPO, performed a quick sweep across several random seeds for the initialization 10 and finally employed the best configuration to train the system on the combined train and validation data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.2.1"
},
{
"text": "In further experiments, we tried to improve the validation metrics by switching to BILOU tags (Ratinov and Roth, 2009) or by including the entity classes not considered for the final evaluation, but could not find any significance differences. Figure 1 : Distribution of the hyperparameters during the HPO for system 1. In total we executed 50 trials using a tree-structured Parzen Estimator as search algorithm and the ASHA trial scheduler to terminate low-performing trials early. The trial with the highest F1 NER score had a batch size of 8, a learning rate of 3.03e-05, a weight decay of 1.79e-3, was trained for 4 epochs and had 49 warm-up steps.",
"cite_spans": [
{
"start": 94,
"end": 118,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "2.2.1"
},
{
"text": "In our second system, the backbone model extracts word and character features, and combines them at a word token level. For the word feature we start from a cased version of skip-gram word vectors that were pretrained on 140 million Spanish tweets 11 . We concatenate these word vectors with the output of the last hidden state of a bidirectional Gated Recurrent Unit (GRU, Cho et al., 2014) that takes as input the lower cased characters of a word token. These embeddings are then fed into another larger bidirectional GRU, where we add contextual information to the encoding, and whose hidden states are passed on to the task head of the system.",
"cite_spans": [
{
"start": 368,
"end": 391,
"text": "(GRU, Cho et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 2: RNN",
"sec_num": "2.3"
},
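{
    "text": "A simplified PyTorch sketch of this backbone (dimensions and module names are assumptions, batching and padding/masking are omitted, a single tweet is processed at a time, and whether the pretrained embeddings are kept frozen is likewise an assumption):\n\nimport torch\nimport torch.nn as nn\n\nclass RnnBackbone(nn.Module):\n    def __init__(self, word_vectors, char_vocab_size, char_dim=32, char_hidden=64, enc_hidden=256):\n        super().__init__()\n        # pretrained cased skip-gram word vectors\n        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=False)\n        self.char_emb = nn.Embedding(char_vocab_size, char_dim)\n        self.char_gru = nn.GRU(char_dim, char_hidden, bidirectional=True, batch_first=True)\n        token_dim = word_vectors.size(1) + 2 * char_hidden\n        self.encoder = nn.GRU(token_dim, enc_hidden, bidirectional=True, batch_first=True)\n\n    def forward(self, word_ids, char_ids):\n        # word_ids: (num_tokens,); char_ids: (num_tokens, max_chars), lower-cased characters\n        words = self.word_emb(word_ids)                     # (num_tokens, word_vec_dim)\n        _, char_h = self.char_gru(self.char_emb(char_ids))  # char_h: (2, num_tokens, char_hidden)\n        chars = torch.cat([char_h[0], char_h[1]], dim=-1)   # last states of both directions\n        tokens = torch.cat([words, chars], dim=-1).unsqueeze(0)\n        encoded, _ = self.encoder(tokens)                   # (1, num_tokens, 2 * enc_hidden)\n        return encoded.squeeze(0)",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "System 2: RNN",
    "sec_num": "2.3"
},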
{
"text": "In the task head we pool the sequence by means of a bidirectional Long short-term memory (LSTM, Hochreiter and Schmidhuber, 1997) unit and pass the last hidden state to a classification layer to solver task 7a. The classification loss is calculated via the Cross Entropy loss function. To solve task 7b, we pass each embedding from the backbone sequence through a feedforward network with a linear classification layer on top. The outputs of the classification layer are fed into a CRF model that predicts a sequence of NER tags.",
"cite_spans": [
{
"start": 89,
"end": 129,
"text": "(LSTM, Hochreiter and Schmidhuber, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System 2: RNN",
"sec_num": "2.3"
},
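{
    "text": "The CRF tagging head shared by both systems can be sketched, for instance, with the pytorch-crf package (an illustrative choice on our part, not necessarily the CRF implementation used inside biome.text):\n\nimport torch.nn as nn\nfrom torchcrf import CRF  # pip install pytorch-crf\n\nclass CrfTaggingHead(nn.Module):\n    def __init__(self, hidden_dim, num_tags):\n        super().__init__()\n        self.projection = nn.Linear(hidden_dim, num_tags)\n        self.crf = CRF(num_tags, batch_first=True)\n\n    def forward(self, encoded, tags=None, mask=None):\n        emissions = self.projection(encoded)                # (batch, seq_len, num_tags)\n        best_paths = self.crf.decode(emissions, mask=mask)  # list of predicted tag-index sequences\n        loss = None\n        if tags is not None:\n            loss = -self.crf(emissions, tags, mask=mask)    # negative log-likelihood\n        return best_paths, loss",
    "cite_spans": [],
    "ref_spans": [],
    "eq_spans": [],
    "section": "System 2: RNN",
    "sec_num": "2.3"
},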
{
"text": "The architectural choice of using GRU or LSTM units was solved via an HPO as described in the following training subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System 2: RNN",
"sec_num": "2.3"
},
{
"text": "For the parameter updates we apply the same optimization algorithm and learning rate scheduler as for system 1. The comparatively small size of sys- tem 2 allowed us to perform extensive HPOs, not only for the training parameters but also for the architecture, and to some extent Neural Architecture Searches (NAS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.3.1"
},
{
"text": "In a first optimization run of 200 trials, we allowed wide ranges for almost all hyperparameters and tried out different RNN architectures, that is either LSTMs or GRUs. An example of a clearly preferred choice are the word embeddings pretrained with a skip-gram model over the ones pretrained with a a CBOW model (Mikolov et al., 2013) . In a second run, we fixed obviously preferred choices and narrowed down the search spaces to the most promising ones.",
"cite_spans": [
{
"start": 314,
"end": 336,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.3.1"
},
{
"text": "For both HPO runs we applied the same search algorithm and trial scheduler as for system 1, and proceeded the same way to obtain the submitted version of system 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "2.3.1"
},
{
"text": "The resulting best RNN architecture is detailed in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Training",
"sec_num": "2.3.1"
},
{
"text": "F1 Test F1 Test F1 Valid. F1 Valid. Model size Inference time * (task 7a) (task 7b) (task 7a) (task 7b) (nr of params) (for 1 prediction) 1: Transformer 0.93 0.839 0.92 0.834 \u223c 1.1 \u00d7 10 8 24.5 ms \u00b1 854 \u00b5s 2: RNN 0.88 0.764 0.85 0.731 \u223c 1.5 \u00d7 10 7 3.7 ms \u00b1 103 \u00b5s Table 4 : Results for the two systems. Test results are provided with the systems trained on the combined training and validation data set, while the validation metric is taken from the best performing HPO trial. System 1 was the winning system in both ProfNER sub-tracks, while system 2 still scored above the arithmetic median of 0.85 and 0.7605 in both tasks. * Mean value, computed on an i7-9750 H CPU with 6 cores.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "3 Results Table 4 presents the evaluation metrics of both systems on the validation and the test data sets, as well as the model size and its inference speed.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "With system 1 we managed to score highest on both ProfNER 7a and 7b sub-tracks (F1:0.93/P:0.9251/R:0.933 and F1:0.839/P:0.838/R:0.84, respectively), with an average of 8 points above the arithmetic median of all submissions. The much smaller and faster (by a factor of \u223c 7) system 2 still manages to score above the competitions median (F1:0.88/P:0.9083/R:0.8553 and F1:0.764/P:0.815/R:0.718, respectively), but performs significantly worse when compared to system 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "We find a clear correlation between the classification F1 score and the F1 score of the NER task in our HPO runs, which signals that the feedback loop between the two tasks is in general beneficial and advocates solving both tasks simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "When comparing system 1 and 2, it seems that the amount of training data provided to the RNN architecture was not sufficient to match the transfer capabilities of the pretrained transformer, even with dedicated architecture searches and extensive hyperparameter tuning. This is corroborated by the fact that adding the validation data to the training data led to a clear performance boost for system 2, while the performance of system 1 stayed almost the same (compare the F1 Test and Validation metrics for task 7b in Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 527,
"text": "Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "A possible path to improve system 1, which was not pursued due to time constraints, could be the inclusion of the gazetteers provided during the ProfNER track. We consider this path especially promising given the fact that the precision was always lower than the recall for both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "We conclude that the exploitation of the transfer capabilities of a pretrained language model and its optimized fine tuning to the target domain, provides an conceptually easy system architecture and seems to be the most straight forward method to achieve competitive performance, especially for tasks where training data is scarce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "To help to reproduce our results, we provide a GitHub repository at https://github.com/ recognai/profner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "https://www.recogn.ai/biome-text 3 https://allennlp.org/ 4 https://pytorch.org/ 5 http://brat.nlplab.org 6 https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://docs.ray.io/en/master/tune/ 9 https://github.com/hyperopt/hyperopt 10 In hindsight, it would have been better to perform this sweep before the HPO and include the best performing random seeds in the HPO itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Spanish Ministerio de Ciencia, Inonvacion y Universidades through its Ayuda para contratos Torres Quevedo 2018 program with the reference number PTQ2018-009909.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for Hyper-Parameter Optimization",
"authors": [
{
"first": "James",
"middle": [],
"last": "Bergstra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bardenet",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Bal\u00e1zs",
"middle": [],
"last": "K\u00e9gl",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems, Granada, Spain. Neural Information Processing Systems Foundation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Bergstra, R. Bardenet, Yoshua Bengio, and Bal\u00e1zs K\u00e9gl. 2011. Algorithms for Hyper-Parameter Optimization. In 25th Annual Conference on Neural Information Processing Systems (NIPS 2011), vol- ume 24 of Advances in Neural Information Process- ing Systems, Granada, Spain. Neural Information Processing Systems Foundation.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ca\u00f1ete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jou-Hui",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Hojin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9:1735-1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "spaCy: Industrial-strength Natural Language Processing in Python",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1212303"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial-strength Natural Language Processing in Python.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A System for Massively Parallel Hyperparameter Tuning. arXiv e-prints",
"authors": [
{
"first": "Liam",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Jamieson",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Rostamizadeh",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Gonina",
"suffix": ""
},
{
"first": "Moritz",
"middle": [],
"last": "Hardt",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
},
{
"first": "Ameet",
"middle": [],
"last": "Talwalkar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.05934"
]
},
"num": null,
"urls": [],
"raw_text": "Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Eka- terina Gonina, Moritz Hardt, Benjamin Recht, and Ameet Talwalkar. 2018. A System for Massively Parallel Hyperparameter Tuning. arXiv e-prints, page arXiv:1810.05934.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Tune: A Research Platform for Distributed Model Selection and Training. arXiv e-prints",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Liaw",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Nishihara",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Moritz",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"E"
],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Stoica",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1807.05118"
]
},
"num": null,
"urls": [],
"raw_text": "Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E. Gonzalez, and Ion Stoica. 2018. Tune: A Research Platform for Distributed Model Selection and Training. arXiv e-prints, page arXiv:1807.05118.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the sixth social media mining for health applications (# smm4h) shared tasks at naacl 2021",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Magge",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Flores",
"suffix": ""
},
{
"first": "Ilseyar",
"middle": [],
"last": "Alimova",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [
"Ali"
],
"last": "Al-Garadi",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Zulfat",
"middle": [],
"last": "Miftahutdinov",
"suffix": ""
},
{
"first": "Eul\u00e0lia",
"middle": [],
"last": "Farr\u00e9-Maduell",
"suffix": ""
},
{
"first": "Salvador",
"middle": [
"Lima"
],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Juan",
"middle": [
"M"
],
"last": "Banda",
"suffix": ""
},
{
"first": "Karen",
"middle": [
"O"
],
"last": "Connor",
"suffix": ""
},
{
"first": "Abeed",
"middle": [],
"last": "Sarker",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Tutubalina",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Weissenbacher",
"suffix": ""
},
{
"first": "Graciela",
"middle": [],
"last": "Gonzalez-Hernandez",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixth Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arjun Magge, Ari Klein, Ivan Flores, Ilseyar Al- imova, Mohammed Ali Al-garadi, Antonio Miranda- Escalada, Zulfat Miftahutdinov, Eul\u00e0lia Farr\u00e9- Maduell, Salvador Lima L\u00f3pez, Juan M Banda, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Martin Krallinger, Davy Weissenbacher, and Gra- ciela Gonzalez-Hernandez. 2021. Overview of the sixth social media mining for health applications (# smm4h) shared tasks at naacl 2021. In Proceedings of the Sixth Social Media Mining for Health Appli- cations Workshop & Shared Task.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word represen- tations in vector space.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The profner shared task on automatic recognition of occupation mentions in social media: systems, evaluation, guidelines, embeddings and corpora",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Miranda-Escalada",
"suffix": ""
},
{
"first": "Eul\u00e0lia",
"middle": [],
"last": "Farr\u00e9-Maduell",
"suffix": ""
},
{
"first": "Salvador",
"middle": [
"Lima"
],
"last": "L\u00f3pez",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gasc\u00f3-S\u00e1nchez",
"suffix": ""
},
{
"first": "Vicent",
"middle": [],
"last": "Briva-Iglesias",
"suffix": ""
},
{
"first": "Marvin",
"middle": [],
"last": "Ag\u00fcero-Torales",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Krallinger",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixth Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Miranda-Escalada, Eul\u00e0lia Farr\u00e9-Maduell, Sal- vador Lima L\u00f3pez, Luis Gasc\u00f3-S\u00e1nchez, Vicent Briva-Iglesias, Marvin Ag\u00fcero-Torales, and Martin Krallinger. 2021. The profner shared task on auto- matic recognition of occupation mentions in social media: systems, evaluation, guidelines, embeddings and corpora. In Proceedings of the Sixth Social Media Mining for Health Applications Workshop & Shared Task.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Design Challenges and Misconceptions in Named Entity Recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design Challenges and Misconceptions in Named Entity Recognition.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Proc. of the Conference on Computational Natural Language Learning (CoNLL)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Proc. of the Conference on Computational Natu- ral Language Learning (CoNLL).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-theart Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gugger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.03771"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Fun- towicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Can- wen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2019. HuggingFace's Transformers: State-of-the- art Natural Language Processing. arXiv e-prints, page arXiv:1910.03771.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>tweet ID</td><td>word tokens</td><td>NER tags</td><td>classification label</td></tr><tr><td>1242604595463032832</td><td>[El, alcalde, ...]</td><td>[O, B-PROFESION, ...]</td><td>1</td></tr><tr><td colspan=\"2\">1242603450321506304 [\", Trump, decide, ...]</td><td>[O, O, O, ...]</td><td>0</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td>Table 1:</td><td/><td/><td/></tr></table>",
"text": "Example of the format of our input data. NER tags are provided in the BIO encoding scheme.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "List of hyperparameters tuned during training. Search spaces define valid values for the hyperparameters and how they are sampled initially. They are provided as Ray Tune search space functions.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Details of our best RNN architecture.",
"html": null,
"num": null
}
}
}
}