|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:31.415132Z" |
|
}, |
|
"title": "Leveraging knowledge sources for detecting self-reports of particular health issues on social media", |
|
"authors": [ |
|
{ |
|
"first": "Parsa", |
|
"middle": [], |
|
"last": "Bagherzadeh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "CLaC Labs", |
|
"institution": "Concordia University Montreal", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "CLaC Labs", |
|
"institution": "Concordia University Montreal", |
|
"location": { |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper investigates incorporating quality knowledge sources developed by experts for the medical domain as well as syntactic information for classification of tweets into four different health oriented categories. We claim that resources such as the MeSH hierarchy and currently available parse information are effective extensions of moderately sized training datasets for various fine-grained tweet classification tasks of self-reported health issues.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper investigates incorporating quality knowledge sources developed by experts for the medical domain as well as syntactic information for classification of tweets into four different health oriented categories. We claim that resources such as the MeSH hierarchy and currently available parse information are effective extensions of moderately sized training datasets for various fine-grained tweet classification tasks of self-reported health issues.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Social media are a ubiquously accessible way to communicate and interact with others, making their users producers of Big Data at a fast rate. It is estimated that about 500M tweets are sent each day on Twitter which often contain information about opinions, trends, reviews, health, incidents, etc. This offers the possibility to gain insight into individuals' behavior and general state in direct and unmitigated fashion (Rousidis et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 446, |
|
"text": "(Rousidis et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Health applications based on social media are an active research area for outbreak management, disease surveillance (Charles-Smith et al., 2015) , and pharmacovigilance (Golder et al., 2015) . For instance, epidemiologists hope to mine social media to predict and monitor the likelihood and possible severity of outbreaks in a timely fashion. Systems that support this type of research have to make predictions from incomplete data of varying quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 144, |
|
"text": "(Charles-Smith et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 190, |
|
"text": "(Golder et al., 2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Deep learning methods are popular for NLP applications and demonstrated significant improvements in areas such as text classification. Deep models have been widely used for personal health mention detection (Khan et al., 2020) , (Sarabadani, 2019) , (Barry and Uzuner, 2019) , (Aroyehun and Gelbukh, 2019) , (Bagherzadeh et al., 2018) . Deep models, however, do not usually have access to outside resources, apart from word embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 226, |
|
"text": "(Khan et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 229, |
|
"end": 247, |
|
"text": "(Sarabadani, 2019)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 274, |
|
"text": "(Barry and Uzuner, 2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 305, |
|
"text": "(Aroyehun and Gelbukh, 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 334, |
|
"text": "(Bagherzadeh et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While such models can outperform systems that are limited to look-up in gazetteer lists for task specific terms, this can only be when the terms of the test set are foreshadowed sufficiently in the training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sensitivity to lexical triggers is crucial in classification, especially in the medical domain, where vocabularies are ever-growing and new specialized terms are introduced everyday. The most recent example is the term \"CoVID\" which was coined in late 2019.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most language models are trained contextually. The assumption for contextualized language models is that the meaning of a word can be represented by the context in which it appears. However, the context usually is not sufficient to represent the meaning for rare specialized terms, which require large amounts of training data for coverage. In addition, highly specialized terms with very different meanings may occur in the same immediate context (see Example 1), rendering contextualized word embeddings less effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) (a) My son was diagnosed with leukemia (b) My son was diagnosed with hydrocephalus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The context for hydrocephalus and leukemia here is the same, and is the same for all diseases, making such contextualized word embeddings less sensitive to, for instance, the more fine-grained distinctions between birth defects and cancer. Consequently, contextualized language models often fail to make these distinctions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Current language models have over 60M parameters 1 , making fine-tuning as well as testing timeconsuming and requiring large training sets. These issues motivate us to investigate widely available external knowledge sources, such as MeSH (Lipscomb, 2000) , and language features in a deep architecture suitable for personal health mention detection. We show that knowledge sources, combined with light-weight word embeddings and language models such as GLoVE (Pennington et al., 2014) and ELMo (Peters et al., 2018) , are strong contenders for larger models such as RoBERTa .", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 254, |
|
"text": "(Lipscomb, 2000)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 484, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 515, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We experiment on four health-related tweet classification tasks of the ongoing SMM4H Workshop and present ablation studies to assess the contribution of different external knowledge sources. Our results suggest that the external resources tested indeed enhance performance when properly calibrated to work together. Best performance is achieved with a two layer system that adds representations of gazetteer lists and enhanced partof-speech annotations in an encoder followed by a graph convolutional neural network (GCNN) (Kipf and Welling, 2017) representing preprocessed grammatical dependencies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Personal experiences posted on social media can give insight into the state of public health. Examination and identification of smoking behavior (Mysl\u00edn et al., 2013) , non-medical use of opioids (Chan et al., 2015) , and identification of medicationrelated experiences (Jiang et al., 2018) , (Jiang et al., 2019) have recently been studied on social media.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 166, |
|
"text": "(Mysl\u00edn et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 215, |
|
"text": "(Chan et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 290, |
|
"text": "(Jiang et al., 2018)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 313, |
|
"text": "(Jiang et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A variety of models have been proposed for the task addressed in the current paper, namely health experience mention detection for different experiences. The approaches fall into three main categories, namely statistical models with hand-crafted features, pure deep learning models, and deep models with leveraged features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Hand-crafted features (Jiang et al., 2016) proposed a set of textual features such as count of emotion words, of unique words, of first person pronouns, of pronouns, etc., for personal health surveillance. (Jiang et al., 2019) compared different word embeddings such as GLoVE, Word2Vec (Mikolov et al., 2013) , fastText, and wordRank with the features of (Jiang et al., 2016) . Their word embeddings performed close to one another and considerably outperform their feature based model. For detection of vaccination behaviour, (Joshi et al., 2018) proposed to use the count of POS tags, number of special characters (such as # and @), and count of emotion words as input features to an ensemble of SVM, logistic regression and random forest classifiers. (Joshi et al., 2018) also experimented with a pure deep-learning method (employing ULMfit (Howard and Ruder, 2018) with finetuning) and reported a performance close to their feature-based model, demonstrating that a model with handcrafted features is a strong contender for deep models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 42, |
|
"text": "(Jiang et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 206, |
|
"end": 226, |
|
"text": "(Jiang et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 308, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 375, |
|
"text": "(Jiang et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 526, |
|
"end": 546, |
|
"text": "(Joshi et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 753, |
|
"end": 773, |
|
"text": "(Joshi et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 843, |
|
"end": 867, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To identify drug and adverse drug reaction mentions, (Saha et al., 2018) used a SVM classifier with some hand-crafted features such as the count of typed-dependency relation, drug names, and sentiment score and demonstrated success to some extent. (\u00c7\u00f6ltekin and Rama, 2018) also addressed the task using a SVM model with word and character n-grams as input features. Bag of word features (tf-idf) as well as negation, adverse reaction mentions, and drug mention were also used in (Wang et al., 2019) as input for a SVM.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 72, |
|
"text": "(Saha et al., 2018)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 499, |
|
"text": "(Wang et al., 2019)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Models with hand-crafted features have shown competitive performance compared to deep models on some tasks and datasets but despite the availability of many high quality resources from which features can be derived, the power of contextualized word embeddings to fill in not only lexical gaps, but also subtask specific patterns led to the investigation of deep models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Pure deep models A majority of the proposed models for personal health detection are deep learning based. Studies such as (Xherija, 2018) and (Cortes-Tejada et al., 2019) use conventional pre-trained word embeddings, such as Word2Vec (Mikolov et al., 2013) and GLoVe. By the advent of pre-trained language models, many studies benefit from models such as ULMfit (Howard and Ruder, 2018) , ELMo (Peters et al., 2018) , and BERT (Devlin et al., 2019 ) variants (Khan et al., 2020 , (Sarabadani, 2019) , (Miftahutdinov et al., 2019) , (Aroyehun and Gelbukh, 2019), (Babu and Eswari, 2020), (Aduragba et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 137, |
|
"text": "(Xherija, 2018)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 170, |
|
"text": "(Cortes-Tejada et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 256, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 386, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 415, |
|
"text": "(Peters et al., 2018)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 447, |
|
"text": "(Devlin et al., 2019", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 477, |
|
"text": ") variants (Khan et al., 2020", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 498, |
|
"text": "(Sarabadani, 2019)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 529, |
|
"text": "(Miftahutdinov et al., 2019)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 610, |
|
"text": "(Aduragba et al., 2020)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Deep models are dependent on the representativeness of their training data for the test cases. Pretrained (often BERT-based models) are generally the top performers in current shared task competitions. The models have to be developed by highly skilled machine learning experts. The data, on the other hand, have to be collected and annotated by domain experts, if they are to be of use in health care research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Deep models with leveraged features While many approaches to health-oriented classification of tweets use features in statistical models, there have been fewer efforts to leverage them in deep models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For the task of adverse drug reaction mention detection (Wu et al., 2019) proposed to embed POS tags, gazetteer information, and sentiment scores, then concatenate the features to GloVe embeddings as input to a hybrid of CNN and LSTM. POS tags as well as features such as side effects, medical concepts, and first character are concatenated to word embeddings in (Vydiswaran et al., 2019) . (Bagherzadeh et al., 2019) also leveraged features such as adverse mentions, POS tags, and scope of negation and modality, by concatenation to Word2Vec and GLoVE embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 73, |
|
"text": "(Wu et al., 2019)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 388, |
|
"text": "(Vydiswaran et al., 2019)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 417, |
|
"text": "(Bagherzadeh et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The limitation of input feature concatenation is that only a fixed number of annotation types can be used. Adding more gazetteer lists requires reconfiguring the network, since the number of dimensions is bounded by the number of predefined annotations. This is undesirable because addition of any annotation type has to be performed by a machine learning expert and must be followed by re-training the network from scratch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the following we outline a simple architecture, where the end-user (possibly an epidemiologist) can add or remove gazetteer lists and fine-tune the model without making any changes in the network settings such as hidden dimensions (and thus without requiring help from a machine learning expert).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related literature", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to demonstrate the ability of the presented approach to adapt to new domains, we compare performance on four tasks from the Social Media Mining for Health application 2 (SMM4H) shared tasks. All tasks involve detection of self-reported health mentions on Twitter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "SM18-2: Self-reported medication intake is a 3class. Tweets which clearly express personal medication intake are considered Category 1. Tweets where the user may have taken some medication are Category 2. Category 3 tweets mention medication names but do not indicate personal intake (Weissenbacher et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 312, |
|
"text": "(Weissenbacher et al., 2018)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2) (Class 1):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "I took three Ibuprofens and I still got a headache crack head *cough cough*", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(3) (Class 2): since I'm constantly in pain, the only way I can go to sleep is if I take Tylenol PM (4) (Class 3):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Will you take a Xanax and relax.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The performance is evaluated as \u00b5F score of Class 1 and Class 2. 3 SM18-4 Vaccination behavior mention classification is a binary task where the positive class indicates the user has received or intends to receive a flu vaccine. A tweet is classified as negative if it does not contain any mentions of a vaccination or if it merely mentions vaccination (Weissenbacher et al., 2018) . This Vyvanse got me sweating right now and I dont even know why SM20-5 Birth defect mention detection is a 3way classification problem, where Category 1 tweets refer to the user's child and indicate that he/she has a birth defect. Category 2 tweets are unclear whether the tweet speaks of birth defects of the author's child. Category 3 tweets merely mention birth defects but not with respect to the author's child (Klein et al., 2020) . Examples of each class are provided in Examples 9-11.", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 381, |
|
"text": "(Weissenbacher et al., 2018)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 800, |
|
"end": 820, |
|
"text": "(Klein et al., 2020)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(9) (Class 1): I had a stillbirth when I was 7 month pregnant. It was hydrocephalus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(10) (Class 2): Olivia was born with down syndrome.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(11) (Class 3): Down's syndrome day. Please share to raise awareness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The performance is evaluated as the \u00b5F score of Class 1 and Class 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A summary of the statistics of training and test data is provided in Table 1 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 76, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tasks", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experiment with two types of external knowledge sources for the deep learning system: (a) gazetteer lists extracted from MeSH as examples of a high quality resource developed by experts that can be used to partly define the domain of the task and (b) language features (POS, NEs, dependencies) extracted from the text with a parser and named entity recognition pipeline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "External resources", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Disease Mentions of disease are important evidence for medication intake classification, since drugs are usually consumed to treat a disease or its symptom. To identify disease mentions, we compiled a gazetteer from subtree C in MeSH (Lipscomb, 2000) which includes terms for infections, wounds, injuries, pain, etc. Anatomy Body parts are often present in both birth defect and adverse drug reaction mentions. Tweets talk about a child's birth defect often specifically mentioning an affected body part. When talking about an adverse drug reaction, tweets often mention affected organs. To identify these mentions of anatomy, we extracted a gazetteer list from sub-tree A of MeSH.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gazetteer lists", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "ADR We use the adverse drug reaction (ADR) lexicon provided by (Nikfarjam et al., 2015) which is a collection of several lexica including SIDER (Kuhn et al., 2016) , CHV (Zeng et al., 2007) , COSTART, 4 and DIEGO Lab ADR lexicon 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 87, |
|
"text": "(Nikfarjam et al., 2015)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 163, |
|
"text": "(Kuhn et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 189, |
|
"text": "(Zeng et al., 2007)", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gazetteer lists", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Preg For tweets that mention a pregnancy issue, rather than a birth defect, a gazetteer list of pregnancy complication terms was extracted from subtree C13.703 of MeSH. Gazetteer lists have two main advantages. First, they enable the model to extend its vocabulary. Second, they determine the position of a certain annotation type in a sentence, which becomes important when coupled with dependency relations, as illustrated in Figure 1 . The mapping 6 of gazetteer lists to tasks is provided in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 436, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 503, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Gazetteer lists", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Part-of-speech tags are the most widely used linguistic feature and are available from many standard NLP environments. POS tags provide useful information such as types of pronouns and tense for verbs, all important clues for the detection of a personal experience. POS tags have been used in the literature for classifying personal and impersonal sentences (Li et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 375, |
|
"text": "(Li et al., 2010)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS tags", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Since our tasks focus intensely on first person reports, we replace the single tag for pronouns ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS tags", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Gazetteer set SM18-2 Drug, Disease SM18-4 Descendant, FamilyRel, Acquaintance SM19-1 Drug, Disease, ADR, Anatomy, Descendant, FamilyRel, Acquaintance SM20-5 BirthDef, Preg, Anatomy, Descendant, FamilyRel PRP with three tags PRP1, PRP2, PRP3, reserved for first, second, and third person pronouns. Likewise, the reflexive pronoun tag PRP$ is replaced by PRP$1, PRP$2, and PRP$3 for first, second, and third person possessive pronouns. In our experiments we compare the standard Penn Treebank tag set (denoted by POS1) to this extended POS tag set (denoted by POS2). While POS-tag information is partially encoded in word embeddings, our ablation shows that explicit encoding leads to performance increase and that POS2 is part of our best performing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Dependency graphs provide syntactic knowledge as well as shallow semantic information. An example of a dependency graph for the ADR task together with gazetteer annotations is provided in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 196, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Some dependency relations are indicative of personal experience mentions. For instance, drug or birth defect mentions occur more likely as direct objects. Self-reports mostly use first person pronouns in subject position.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We use the Stanford parser (Klein and Manning, 2003) to determine dependency relations. Figure 2: Additive annotation embedding Layer15 Proposed model", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 52, |
|
"text": "(Klein and Manning, 2003)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
|
{ |
|
"text": "We developed a multi-layer system which includes four layers, namely: embeddings, self-attention, GCNN, and classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Layer1: Embeddings We combine traditional word embeddings with POS embeddings and our gazetteer embeddings additively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Tokens are embedded by GLoVE, 7 ELMo, 8 , pretrained RoBERTa 9 , or BioBERT (Lee et al., 2020 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 95, |
|
"text": "(Lee et al., 2020", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 We pretrain POS embeddings using Word2Vec.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Our approach is to apply Word2Vec on POS tags instead of tokens. The embeddings are trained using the Gensim package (Rehurek and Sojka, 2010 ) with a window size of w = 5. The pretraining is performed on training data of all task introduced in Section 3. The resulting embeddings are used to initialize a POS embedding matrix P \u2208 R \u03c6\u00d7d emb , where \u03c6 is the number of distinct POS tags, and d emb is the dimensionality of the word embeddings. The POS embeddings are fine-tuned during the training for the main classification task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 141, |
|
"text": "(Rehurek and Sojka, 2010", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 A gazetteer annotation x is embedded through a vector G x \u2208 R d emb which is a learnable parameter and is updated during the training of the main classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "\u2022 Inspired by BERT (Devlin et al., 2019) segment embeddings, we add POS embeddings and gazetteer embeddings to token embeddings. We call this scheme, additive annotations. Figure 2 shows that for time step 3, the vectors E Hydrocephalus , G BirthDef , and P N N P are added to form h 1 3 , the aggregate for time-step 3 in layer 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 40, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 180, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The additive approach enables the model to encode as many as gazetteer annotations, without introducing new dimensions to the token representations (in contrast to concatenative approaches). After the training, one can easily introduce a new gazetteer annotation y by adding a learnable vector G y to the model parameters, and only fine-tune the model, without making any changes to hidden dimensions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Layer2: Self-attention encoder We use a selfattention encoder (the encoder part of the Transformer) as first layer (Vaswani et al., 2017) . The encoder at layer 2 gets the representations h 1 i and outputs representations h 2 i . The number of heads in the multi-head attention is n heads = 4 and the dimensionality of the feed-forward layer is d F F = 1024.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 137, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
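The core computation of this layer can be sketched as single-head scaled dot-product self-attention (numpy; the residual connections, layer norm, the multi-head split with n_heads = 4, and the d_FF = 1024 feed-forward sublayer of the full Transformer encoder are omitted, and all sizes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """H: (seq_len, d) layer-1 representations h^1_i; returns h^2_i."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    # each token attends to every token, weighted by scaled dot-product scores
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return A @ V

rng = np.random.default_rng(0)
H1 = rng.normal(size=(5, 8))                      # 5 tokens, width 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
H2 = self_attention(H1, Wq, Wk, Wv)
assert H2.shape == H1.shape
```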
|
{ |
|
"text": "Layer3: Graph CNN We use a graph convolutional network (GCNN) (Kipf and Welling, 2017) to encode the dependency graph following (Marcheggiani and Titov, 2017) . In GCCN, each token is represented based on its adjacent tokens in a dependency parse using:", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 158, |
|
"text": "(Marcheggiani and Titov, 2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h 3 i = ReLU ( j\u2208N (i) W L(i,j) h 2 j + b)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
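Equation (1) can be sketched as follows (numpy; the edge list, arc label, and sizes are illustrative toy values). Each arc label gets its own weight matrix, reflecting that the weights are untied across labels:

```python
import numpy as np

def gcnn_layer(H, edges, W, b):
    """h^3_i = ReLU(sum over j in N(i) of W_{L(i,j)} @ h^2_j + b).

    H: (n, d) inputs h^2; edges: (i, j, label) arcs; W: dict label -> (d, d).
    """
    out = np.tile(b, (H.shape[0], 1)).astype(float)
    for i, j, label in edges:
        out[i] += W[label] @ H[j]          # label-specific ("untied") weights
    return np.maximum(out, 0.0)            # ReLU

H2 = np.eye(2)                             # two tokens, d = 2
W = {"nsubj": 2.0 * np.eye(2)}             # one hypothetical arc label
edges = [(0, 1, "nsubj")]                  # token 1 is an nsubj child of token 0
H3 = gcnn_layer(H2, edges, W, b=np.zeros(2))
assert np.allclose(H3[0], [0.0, 2.0])      # 2 * h^2_1
```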
|
{ |
|
"text": "where N (i) is the set of tokens adjacent to token i and L(i, j) is the label of the arc from token j to token i. Note that the network is not tied, i.e. W L(i,j) depends on the arc labels. GCNN receives h 2 i and outputs token-wise representations h 3 i . Layer4: Pooling and classification For the vector representation of the tweet, attention (Bahdanau et al., 2015) is calculated from importance scores: using a latent context vector w att , and then normalizing the scores using softmax:", |
|
"cite_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 369, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e i = w T att h 3 i", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1 i = exp(e i ) j exp(e j )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The normalized scores are then used for a weighted sum H = i \u03b1 i * h 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "i . The final vector H is used as input to a linear layer for the classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
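Equations (2)-(3) and the weighted sum amount to the following attention pooling (numpy sketch with toy values; w_att is a learned parameter in the real model):

```python
import numpy as np

def attention_pool(H3, w_att):
    e = H3 @ w_att                       # e_i = w_att^T h^3_i   (Eq. 2)
    a = np.exp(e - e.max())
    a = a / a.sum()                      # alpha_i = softmax(e)  (Eq. 3)
    return a @ H3                        # H = sum_i alpha_i h^3_i

H3 = np.array([[1.0, 0.0], [0.0, 1.0]])
# With a zero context vector the scores are uniform, so H is the token mean.
H = attention_pool(H3, w_att=np.zeros(2))
assert np.allclose(H, [0.5, 0.5])
```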
|
{ |
|
"text": "The proposed model is implemented using the PyTorch library (Paszke et al., 2017) . Crossentropy is used to calculate the network loss and the model is optimized using the Adam optimizer (Kingma and Ba, 2015). Table 3 details the hyperparameters used for each task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 81, |
|
"text": "(Paszke et al., 2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 217, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency parse", |
|
"sec_num": "4.4" |
|
}, |
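For a single tweet, the cross-entropy loss minimized here reduces to the negative log-softmax probability of the gold class (a numpy illustration of the quantity PyTorch's cross-entropy computes; the logits are toy values):

```python
import numpy as np

def cross_entropy(logits, gold):
    """-log softmax(logits)[gold], computed in a numerically stable way."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[gold]

# Uniform logits over two classes give a loss of log(2).
loss = cross_entropy(np.array([0.0, 0.0]), gold=0)
assert abs(loss - np.log(2)) < 1e-12
```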
|
{ |
|
"text": "We evaluate the proposed model using a set of ablation studies. The SM19-1 and SM20-5 tasks are evaluated on the official test data. For SM18-2 and SM18-4, official test data is not available, therefore we replicate the state-of-the art systems and perform evaluation on a hold-out set from the original training data. Table 4 shows that all tasks benefit moderately from POS features with the extended POS tagset POS2 outperforming the standard Penn Treebank tagset POS1. POS features increase performance for GLoVE and ELMO more than for RoBERTa or BioBERT. Dependency information Dep, similarly, yields consistent small improvements. Note, however, the asymmetrically stronger improvements in precision, especially for RoBERTa and BioBERT models. Combining POS and Dep results in another consistent small improvement, showing that the features effectively interoperate.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 319, |
|
"end": 326, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Numerical results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The gazetteer lists and named-entity features provide considerable improvements for all tasks except for SM18-4 with marginal improvements. Note that SM18-4 is the vaccination behaviour prediction task, specific to flu. This result is to be expected: it requires identifying self reports, but the trigger terms for the flu domain consisted only of flu, making gazetteers ineffective. The tasks with more diverse vocabularies show greater impact of gazetteer lists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Combining grammatical and gazetteer features robustly yields best results. Interestingly, adding knowledge resources to a lighter language model approaches performance of a larger model. For instance, GloVe with all resources outperforms ELMo without resources for all tasks and even approaches RoBERTa or BioBERT without resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also observe that while our system configurations using RoBERTa or BioBERT reported in Table 4 beat the SOTA reported in competition in F1 and precision, our recall only exceeds SOTA for SM18-4 and SM18-4. We interpret this as a strong point of our system: in health-related applications, precision often outweighs recall. For system development, increasing recall is usually easier and this paper limits itself to gazetteer lists and linguistic features for domain adaptation to show their potential and limits. This leaves room for further error-driven domain adaptation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 97, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Numerical results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To examine the effects of knowledge sources, we probe the attention importance scores at Layer4 (Equation 2). The scores demonstrate how the the model attends to different tokens. We probe the scores in two cases, with and without gazetteer lists. Figure 3 and Figure 4 demonstrate visualizations of the attention scores for two samples from SM20-5 and SM19-1 tasks respectively. Higher attention scores are indicated with darker gray color.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 256, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 269, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case-study", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The model of Figure 3a uses no gazetteer lists. The model partially attends to the birth defect mention Trisomy18 and pays no attention to the pregnancy issue StillBirth. The model, however, properly attends to the personal pronouns I and lexical triggers such as baby and birth. On the other hand, when the model is given BirthDef and Pregnancy gazetteers, the model puts more attention on Still-Birth as evidence for a birth defect, leading to a more certain prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 22, |
|
"text": "Figure 3a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case-study", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "A similar pattern is observed in Figure 4 . Supply- . 64 .67 .66 .79 .78 .78 .51 .55 .53 .53 .57 .55 POS2 .66 .67 .66 .79 .81 .80 .53 .55 .54 .53 .59 .56 Dep .68 .65 .67 .83 .75 .79 .56 .53 .54 .58 .58 .58 POS1, Dep .70 .66 .68 .81 .78 .79 .57 .55 .56 .60 .59 .60 POS2, Dep .70 .68 .69 .82 .78 .80 .58 .58 .58 .61 .60 .60 Gaz, Name, Org .69 .72 .71 .79 .77 .78 .55 .56 .55 .60 .58 .59 Gaz, Name, Org, Pos2, Dep .73 .71 .72 .83 .78 .81 .59 .60 .60 .65 .60 .62 ELMo None .71 .69 .70 .80 .78 .79 .54 .53 .54 .64 .60 .62 POS1 .73 .68 .70 .79 .81 .80 .54 .54 .54 .66 .61 .63 POS2 .72 .70 .71 .80 .81 .81 .53 .57 .55 .65 .63 .64 Dep .75 .68 .71 .86 .78 .82 .58 .56 .57 .68 .61 .65 POS1, Dep .74 .71 .72 .85 .80 .83 .57 .59 .58 .70 .62 .66 POS2, Dep .75 .71 .73 .86 .81 .84 .59 .60 .60 .71 .62 .67 Gaz, Name, Org .72 .76 .74 .80 .80 .80 .58 .56 .57 .69 .66 .68 Gaz, Name, Org, Pos2, Dep .77 .73 .75 .85 .83 .84 .61 .61 .61 .71 .65 .69 RoBERTa None .71 .74 .73 .87 .82 .85 .62 .58 .60 .68 .62 .65 POS1 .69 .76 .73 .85 .86 .85 .61 .60 .60 .72 .64 .69 POS2 .71 .75 .74 .87 .87 .87 .62 .60 .61 .74 .65 .70 Dep .76 .73 .74 .89 .86 .87 .64 .59 .61 .74 .60 .67 POS1, Dep .75 .75 .75 .87 .88 .87 .62 .62 .62 .75 .65 .70 POS2, Dep .76 .77 .76 .88 .88 .88 .63 .62 .62 .76 .65 .71 Gaz, Name, Org .73 .76 .75 .89 .83 .86 .65 .60 .63 .76 .65 .71 Gaz, Name, Org, Pos2, Dep .77 .78 .77 . (Bai and Zhou, 2020) ing the model with the Anatomy and Drug gazetteer leads the model to pay more attention to drug mentions and mentions of affected body parts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 211, |
|
"text": "64 .67 .66 .79 .78 .78 .51 .55 .53 .53 .57 .55 POS2 .66 .67 .66 .79 .81 .80 .53 .55 .54 .53 .59 .56 Dep .68 .65 .67 .83 .75 .79 .56 .53 .54 .58 .58 .58 POS1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 269, |
|
"text": "Dep .70 .66 .68 .81 .78 .79 .57 .55 .56 .60 .59 .60 POS2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 326, |
|
"text": "Dep .70 .68 .69 .82 .78 .80 .58 .58 .58 .61 .60 .60 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 332, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 389, |
|
"text": "Org .69 .72 .71 .79 .77 .78 .55 .56 .55 .60 .58 .59 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 395, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 400, |
|
"text": "Org,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 406, |
|
"text": "Pos2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 407, |
|
"end": 680, |
|
"text": "Dep .73 .71 .72 .83 .78 .81 .59 .60 .60 .65 .60 .62 ELMo None .71 .69 .70 .80 .78 .79 .54 .53 .54 .64 .60 .62 POS1 .73 .68 .70 .79 .81 .80 .54 .54 .54 .66 .61 .63 POS2 .72 .70 .71 .80 .81 .81 .53 .57 .55 .65 .63 .64 Dep .75 .68 .71 .86 .78 .82 .58 .56 .57 .68 .61 .65 POS1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 738, |
|
"text": "Dep .74 .71 .72 .85 .80 .83 .57 .59 .58 .70 .62 .66 POS2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 795, |
|
"text": "Dep .75 .71 .73 .86 .81 .84 .59 .60 .60 .71 .62 .67 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 801, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 858, |
|
"text": "Org .72 .76 .74 .80 .80 .80 .58 .56 .57 .69 .66 .68 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 864, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 865, |
|
"end": 869, |
|
"text": "Org,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 870, |
|
"end": 875, |
|
"text": "Pos2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 1152, |
|
"text": "Dep .77 .73 .75 .85 .83 .84 .61 .61 .61 .71 .65 .69 RoBERTa None .71 .74 .73 .87 .82 .85 .62 .58 .60 .68 .62 .65 POS1 .69 .76 .73 .85 .86 .85 .61 .60 .60 .72 .64 .69 POS2 .71 .75 .74 .87 .87 .87 .62 .60 .61 .74 .65 .70 Dep .76 .73 .74 .89 .86 .87 .64 .59 .61 .74 .60 .67 POS1,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1153, |
|
"end": 1210, |
|
"text": "Dep .75 .75 .75 .87 .88 .87 .62 .62 .62 .75 .65 .70 POS2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1267, |
|
"text": "Dep .76 .77 .76 .88 .88 .88 .63 .62 .62 .76 .65 .71 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1268, |
|
"end": 1273, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1274, |
|
"end": 1330, |
|
"text": "Org .73 .76 .75 .89 .83 .86 .65 .60 .63 .76 .65 .71 Gaz,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1331, |
|
"end": 1336, |
|
"text": "Name,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1337, |
|
"end": 1341, |
|
"text": "Org,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1342, |
|
"end": 1347, |
|
"text": "Pos2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1348, |
|
"end": 1363, |
|
"text": "Dep .77 .78 .77", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1366, |
|
"end": 1386, |
|
"text": "(Bai and Zhou, 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 41, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Case-study", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "This paper demonstrates the effectiveness of using gazetteer lists from high-quality sources, standard named entity categories and part-of-speech embeddings with a self-attention encoder and a GCNN encoding grammatical dependencies. The architecture supports precision oriented domain adaptation from widely available, high-quality resources (i.e. MeSH). Adaptation with new gazetteer lists using additive annotation sidesteps the need to reconfigure or retrain the neural networks The experiments confirm that quality external resources can offset the lower parameter space of light-weight word embedding/language models, such as GLoVE and ELMo. At the same time, these resources effectively combine with RoBERTa for best performance. The stronger improvements in precision are especially promising for health applications. The comparative results on different tasks and different domains proves that this extensible architecture is well-suited for actual use in the wild on domains and tasks, for which experts know to supply high-quality terminology resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Its been a year since I found out I 'd be giving birth to a sleeping baby ! # StillBirth ! #Loss # Trisomy18", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Its been a year since I found out I 'd be giving birth to a sleeping baby ! # StillBirth #Loss # ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The smallest model, DistilBERT(Sanh et al., 2019), has 66M parameters, RoBERTaLarge has 340M parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://healthlanguageprocessing.org/ smm4h-2021/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for all tasks we follow the standard measure used in SMM4H competitions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://bioportal.bioontology.org/ontologies/COSTART 5 http://diego.asu.edu/Publications/ADRMine.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that inTable 4, the label Gaz refers to the respective gazetteer set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the anonymous reviewers for their valuable and constructive comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Sentence Contextual Encoder with BERT and BiLSTM for Automatic Classification with imbalanced medication tweets", |
|
"authors": [ |
|
{ |
|
"first": "Jialin", |
|
"middle": [], |
|
"last": "Olanrewaju Tahir Aduragba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gautham", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Senthilnathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Crsitea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olanrewaju Tahir Aduragba, Jialin Yu, Gautham Senthilnathan, and Alexandra Crsitea. 2020. Sen- tence Contextual Encoder with BERT and BiLSTM for Automatic Classification with imbalanced medi- cation tweets. In SMM4H 2020.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Detection of Adverse Drug Reaction in Tweets Using a Combination of Heterogeneous Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Segun Taofeek Aroyehun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gelbukh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Segun Taofeek Aroyehun and Alexander Gelbukh. 2019. Detection of Adverse Drug Reaction in Tweets Using a Combination of Heterogeneous Word Embeddings. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Identification of Medication Tweets Using Domainspecific Pre-trained Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Yandrapati", |
|
"middle": [], |
|
"last": "Prakash Babu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rajagopal", |
|
"middle": [], |
|
"last": "Eswari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yandrapati Prakash Babu and Rajagopal Eswari. 2020. Identification of Medication Tweets Using Domain- specific Pre-trained Language Models. In SMM4H 2020.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "CLaC at SMM4H task 1, 2, and 4", |
|
"authors": [ |
|
{ |
|
"first": "Parsa", |
|
"middle": [], |
|
"last": "Bagherzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadia", |
|
"middle": [], |
|
"last": "Sheikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parsa Bagherzadeh, Nadia Sheikh, and Sabine Bergler. 2018. CLaC at SMM4H task 1, 2, and 4. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Adverse Drug Effect and Personalized Health Mentions, CLaC at SMM4H 2019, Tasks 1 and 4", |
|
"authors": [ |
|
{ |
|
"first": "Parsa", |
|
"middle": [], |
|
"last": "Bagherzadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadia", |
|
"middle": [], |
|
"last": "Sheikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Parsa Bagherzadeh, Nadia Sheikh, and Sabine Bergler. 2019. Adverse Drug Effect and Personalized Health Mentions, CLaC at SMM4H 2019, Tasks 1 and 4. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Rep- resentations, ICLR'15.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automatic Detecting for Health-related Twitter Data with BioBERT", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaobing", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang Bai and Xiaobing Zhou. 2020. Automatic Detect- ing for Health-related Twitter Data with BioBERT. In SMM4H 2020.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Deep Learning for identification of adverse effect mentions in Twitter data", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Barry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozlem", |
|
"middle": [], |
|
"last": "Uzuner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Barry and Ozlem Uzuner. 2019. Deep Learning for identification of adverse effect mentions in Twit- ter data. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The canary in the coal mine tweets: social media reveals public perceptions of non-medical use of opioids", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Urmimala", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "PloS one", |
|
"volume": "", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Chan, Andrea Lopez, and Urmimala Sarkar. 2015. The canary in the coal mine tweets: social me- dia reveals public perceptions of non-medical use of opioids. PloS one, 10(8).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Using social media for actionable disease surveillance and outbreak management: a systematic literature review", |
|
"authors": [ |
|
{ |
|
"first": "Lauren", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Charles-Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tera", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Reynolds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Cameron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Conway", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Eric", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Olsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mika", |
|
"middle": [], |
|
"last": "Pavlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shigematsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Laura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katie", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Streichert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Suda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "PloS one", |
|
"volume": "", |
|
"issue": "10", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lauren E Charles-Smith, Tera L Reynolds, Mark A Cameron, Mike Conway, Eric HY Lau, Jennifer M Olsen, Julie A Pavlin, Mika Shigematsu, Laura C Streichert, Katie J Suda, et al. 2015. Using social media for actionable disease surveillance and out- break management: a systematic literature review. PloS one, 10(10).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "HITSZ-ICRC: A report for SMM4H shared task 2019-automatic classification and extraction of adverse effect mentions in tweets", |
|
"authors": [ |
|
{ |
|
"first": "Shuai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuanhang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaowei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haoming", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMM4H", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuai Chen, Yuanhang Huang, Xiaowei Huang, Haom- ing Qin, Jun Yan, and Buzhou Tang. 2019. HITSZ- ICRC: A report for SMM4H shared task 2019- automatic classification and extraction of adverse ef- fect mentions in tweets. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Drug-use Identification from Tweets with Word and Character Ngrams", |
|
"authors": [ |
|
{ |
|
"first": "\u00c7", |
|
"middle": [], |
|
"last": "Agr\u0131 \u00c7\u00f6ltekin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taraka", |
|
"middle": [], |
|
"last": "Rama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "\u00c7 agr\u0131 \u00c7\u00f6ltekin and Taraka Rama. 2018. Drug-use Iden- tification from Tweets with Word and Character N- grams. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "NLP@ UNED at SMM4H 2019: Neural Networks Applied to Automatic Classifications of Adverse Effects Mentions in Tweets", |
|
"authors": [ |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Cortes-Tejada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Martinez-Romo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lourdes", |
|
"middle": [], |
|
"last": "Araujo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Javier Cortes-Tejada, Juan Martinez-Romo, and Lour- des Araujo. 2019. NLP@ UNED at SMM4H 2019: Neural Networks Applied to Automatic Classifica- tions of Adverse Effects Mentions in Tweets. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina N. Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of NAACL-HLT 2019.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Systematic review on the prevalence, frequency and comparative value of adverse events data in social media", |
|
"authors": [ |
|
{ |
|
"first": "Su", |
|
"middle": [], |
|
"last": "Golder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gill", |
|
"middle": [], |
|
"last": "Norman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoon K", |
|
"middle": [], |
|
"last": "Loke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "British journal of clinical pharmacology", |
|
"volume": "80", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Golder, Gill Norman, and Yoon K Loke. 2015. Sys- tematic review on the prevalence, frequency and comparative value of adverse events data in social media. British journal of clinical pharmacology, 80(4).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Universal Language Model Fine-tuning for Text Classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ACL 2108", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classification. In ACL 2108.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Construction of a personal experience tweet corpus for health surveillance", |
|
"authors": [ |
|
{ |
|
"first": "Keyuan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Calix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matrika", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyuan Jiang, Ricardo Calix, and Matrika Gupta. 2016. Construction of a personal experience tweet corpus for health surveillance. In BioNLP.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Assessment of word embedding techniques for identification of personal experience tweets pertaining to medication uses", |
|
"authors": [ |
|
{ |
|
"first": "Keyuan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shichao", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Calix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gordon R", |
|
"middle": [], |
|
"last": "Bernard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "In International Workshop on Health Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyuan Jiang, Shichao Feng, Ricardo A Calix, and Gordon R Bernard. 2019. Assessment of word em- bedding techniques for identification of personal ex- perience tweets pertaining to medication uses. In International Workshop on Health Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Identifying tweets of personal health experience through word embedding and LSTM neural network", |
|
"authors": [ |
|
{ |
|
"first": "Keyuan", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shichao", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qunhao", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Calix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matrika", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gordon R", |
|
"middle": [], |
|
"last": "Bernard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keyuan Jiang, Shichao Feng, Qunhao Song, Ricardo A Calix, Matrika Gupta, and Gordon R Bernard. 2018. Identifying tweets of personal health experience through word embedding and LSTM neural network. BMC Bioinformatics, 19(8).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Shot or not: Comparison of NLP approaches for vaccination behaviour detection", |
|
"authors": [ |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarvnaz", |
|
"middle": [], |
|
"last": "Karimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Sparks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cecile", |
|
"middle": [], |
|
"last": "Paris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Raina", |
|
"middle": [], |
|
"last": "Macintyre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aditya Joshi, Xiang Dai, Sarvnaz Karimi, Ross Sparks, Cecile Paris, and C Raina MacIntyre. 2018. Shot or not: Comparison of NLP approaches for vaccination behaviour detection. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Andreas Dengel, and Sheraz Ahmed. 2020. Improving Personal Health Mention Detection on Twitter Using Permutation Based Word Representation Learning", |
|
"authors": [ |
|
{ |
|
"first": "Imran", |
|
"middle": [], |
|
"last": "Pervaiz Iqbal Khan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Razzak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "NeurIPS 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pervaiz Iqbal Khan, Imran Razzak, Andreas Dengel, and Sheraz Ahmed. 2020. Improving Personal Health Mention Detection on Twitter Using Permu- tation Based Word Representation Learning. In NeurIPS 2020. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [ |
|
"Lei" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 3rd International Conference on Learning Representations, ICLR'15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of the 3rd International Conference on Learn- ing Representations, ICLR'15.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Semisupervised classification with graph convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kipf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas. N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In ICLR'17.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Overview of the Fifth Social Media Mining for Health applications (SMM4H) shared tasks at COLING 2020", |
|
"authors": [ |
|
{ |
|
"first": "Ari", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Flores", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Magge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anne-Lyse", |
|
"middle": [], |
|
"last": "Minard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Tutubalina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Davy", |
|
"middle": [], |
|
"last": "Weissenbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Gonzalez-Hernandez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ari Z. Klein, Ivan Flores, Arjun Magge, Anne- Lyse Minard, Karen O'Connor, Abeed Sarker, Elena Tutubalina, Davy Weissenbacher, and Gra- ciela Gonzalez-Hernandez. 2020. Overview of the Fifth Social Media Mining for Health applications (SMM4H) shared tasks at COLING 2020. In SMM4H 2020.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Accurate Unlexicalized Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accu- rate Unlexicalized Parsing. In ACL 2003.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The SIDER database of drugs and side effects", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Kuhn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivica", |
|
"middle": [], |
|
"last": "Letunic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [ |
|
"Juhl" |
|
], |
|
"last": "Jensen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peer", |
|
"middle": [], |
|
"last": "Bork", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Nucleic acids research", |
|
"volume": "", |
|
"issue": "D1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Kuhn, Ivica Letunic, Lars Juhl Jensen, and Peer Bork. 2016. The SIDER database of drugs and side effects. Nucleic acids research, 44(D1).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining", |
|
"authors": [ |
|
{ |
|
"first": "Jinhyuk", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wonjin", |
|
"middle": [], |
|
"last": "Yoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sungdong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donghyeon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunkyu", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chan", |
|
"middle": [], |
|
"last": "Ho So", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaewoo", |
|
"middle": [], |
|
"last": "Kang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Bioinformatics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "1234--1240", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Employing personal/impersonal views in supervised and semisupervised sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Shoushan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guodong", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophia Yat Mei", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shoushan Li, Chu-Ren Huang, Guodong Zhou, and Sophia Yat Mei Lee. 2010. Employing per- sonal/impersonal views in supervised and semi- supervised sentiment classification. In ACL 2010.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Bulletin of the Medical Library Association", |
|
"authors": [ |
|
{ |
|
"first": "Carolyn", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Lipscomb", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "88", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolyn E Lipscomb. 2000. Medical subject headings (MeSH). Bulletin of the Medical Library Associa- tion, 88(3).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "RoBERTa: A robustly optimized BERT pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Encoding sentences with graph convolutional networks for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Marcheggiani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for se- mantic role labeling. In EMNLP 2017.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "KFU NLP team at SMM4H 2019 tasks: Want to extract adverse drugs reactions from tweets? BERT to the rescue", |
|
"authors": [ |
|
{ |
|
"first": "Zulfat", |
|
"middle": [], |
|
"last": "Miftahutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilseyar", |
|
"middle": [], |
|
"last": "Alimova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elena", |
|
"middle": [], |
|
"last": "Tutubalina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMM4H", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zulfat Miftahutdinov, Ilseyar Alimova, and Elena Tu- tubalina. 2019. KFU NLP team at SMM4H 2019 tasks: Want to extract adverse drugs reactions from tweets? BERT to the rescue. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Using Twitter to examine smoking behavior and perceptions of emerging tobacco products", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Mysl\u00edn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shu-Hong", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wendy", |
|
"middle": [], |
|
"last": "Chapman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Conway", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Journal of medical Internet research", |
|
"volume": "15", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Mysl\u00edn, Shu-Hong Zhu, Wendy Chapman, and Mike Conway. 2013. Using Twitter to examine smoking behavior and perceptions of emerging to- bacco products. Journal of medical Internet re- search, 15(8).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features", |
|
"authors": [ |
|
{ |
|
"first": "Azadeh", |
|
"middle": [], |
|
"last": "Nikfarjam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Karen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Garciela", |
|
"middle": [], |
|
"last": "Ginn Rachel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gonzalez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Journal of the American Medical Informatics Association", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Azadeh Nikfarjam, Abeed Sarker, Karen O'Connor, R Ginn Rachel, and Garciela Gonzalez. 2015. Phar- macovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. Journal of the American Medical Informatics Association, 22(3).", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Automatic differentiation in PyTorch", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS 2017.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Glove: Global Vectors for Word Representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global Vectors for Word Representation. In EMNLP 2014.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Software framework for topic modelling with large corpora", |
|
"authors": [ |
|
{ |
|
"first": "Radim", |
|
"middle": [], |
|
"last": "Rehurek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sojka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim Rehurek and Petr Sojka. 2010. Software frame- work for topic modelling with large corpora. In In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Paraskevas Koukaras, and Christos Tjortjis. 2020. Social media prediction: a literature review. Multimedia Tools and Applications", |
|
"authors": [ |
|
{ |
|
"first": "Dimitrios", |
|
"middle": [], |
|
"last": "Rousidis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dimitrios Rousidis, Paraskevas Koukaras, and Christos Tjortjis. 2020. Social media prediction: a literature review. Multimedia Tools and Applications, 79(9).", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Leveraging web based evidence gathering for drug information identification from tweets", |
|
"authors": [ |
|
{ |
|
"first": "Rupsa", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abir", |
|
"middle": [], |
|
"last": "Naskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tirthankar", |
|
"middle": [], |
|
"last": "Dasgupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lipika", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rupsa Saha, Abir Naskar, Tirthankar Dasgupta, and Lipika Dey. 2018. Leveraging web based evidence gathering for drug information identification from tweets. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "NeurIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In NeurIPS 2019.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Detection of adverse drug reaction mentions in tweets using ELMo", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Sarabadani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMM4H 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah Sarabadani. 2019. Detection of adverse drug re- action mentions in tweets using ELMo. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "The General Inquirer: A computer approach to content analysis", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Philip", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dexter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marshall S", |
|
"middle": [], |
|
"last": "Dunphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip J Stone, Dexter C Dunphy, and Marshall S Smith. 1966. The General Inquirer: A computer approach to content analysis. MIT press.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Towards text processing pipelines to identify adverse drug events-related tweets: University of Michigan @ SMM4H 2019 Task 1", |
|
"authors": [ |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Vg Vinod Vydiswaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Ganzel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deahan", |
|
"middle": [], |
|
"last": "Romas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neha", |
|
"middle": [], |
|
"last": "Austin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Socheatha", |
|
"middle": [], |
|
"last": "Bhomia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Van", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMM4H", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "VG Vinod Vydiswaran, Grace Ganzel, Bryan Romas, Deahan Yu, Amy Austin, Neha Bhomia, Socheatha Chan, Stephanie Hall, Van Le, Aaron Miller, et al. 2019. Towards text processing pipelines to iden- tify adverse drug events-related tweets: University of Michigan @ SMM4H 2019 Task 1. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "BIGODM System in the Social Media Mining for Health Applications Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Chen-Kai", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong-Jie", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo-Hung", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen-Kai Wang, Hong-Jie Dai, and Bo-Hung Wang. 2019. BIGODM System in the Social Media Min- ing for Health Applications Shared Task 2019. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Overview of the fourth Social Media Mining for Health (SMM4H) shared tasks at ACL 2019", |
|
"authors": [ |
|
{ |
|
"first": "Davy", |
|
"middle": [], |
|
"last": "Weissenbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arjun", |
|
"middle": [], |
|
"last": "Magge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashlynn", |
|
"middle": [], |
|
"last": "Daughton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Karen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gonzalez-Hernandez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "SMM4H", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O'Connor, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2019. Overview of the fourth Social Media Mining for Health (SMM4H) shared tasks at ACL 2019. In SMM4H 2019.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Overview of the third Social Media Mining for Health (SMM4H) Shared Tasks at EMNLP", |
|
"authors": [ |
|
{ |
|
"first": "Davy", |
|
"middle": [], |
|
"last": "Weissenbacher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abeed", |
|
"middle": [], |
|
"last": "Sarker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Gonzalez-Hernandez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Davy Weissenbacher, Abeed Sarker, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2018. Overview of the third Social Media Mining for Health (SMM4H) Shared Tasks at EMNLP 2018. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "DrugBank 5.0: a major update to the DrugBank database", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "David S Wishart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Yannick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "An", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Feunang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elvis", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ana", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tanvir", |
|
"middle": [], |
|
"last": "Grant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Sajed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zinat", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sayeeda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Nucleic acids research", |
|
"volume": "", |
|
"issue": "D1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David S Wishart, Yannick D Feunang, An C Guo, Elvis J Lo, Ana Marcu, Jason R Grant, Tanvir Sajed, Daniel Johnson, Carin Li, Zinat Sayeeda, et al. 2018. DrugBank 5.0: a major update to the DrugBank database for 2018. Nucleic acids research, 46(D1).", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "MSA: Jointly detecting drug name and adverse drug reaction mentioning tweets with multi-head self-attention", |
|
"authors": [ |
|
{ |
|
"first": "Chuhan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fangzhao", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhigang", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junxin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongfeng", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Twelfth ACM International Conference on Web Search and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chuhan Wu, Fangzhao Wu, Zhigang Yuan, Junxin Liu, Yongfeng Huang, and Xing Xie. 2019. MSA: Jointly detecting drug name and adverse drug reaction mentioning tweets with multi-head self-attention. In Twelfth ACM International Conference on Web Search and Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Classification of medicationrelated tweets using stacked bidirectional LSTMs with context-aware attention", |
|
"authors": [ |
|
{ |
|
"first": "Orest", |
|
"middle": [], |
|
"last": "Xherija", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orest Xherija. 2018. Classification of medication- related tweets using stacked bidirectional LSTMs with context-aware attention. In SMM4H 2018.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Term identification methods for consumer health vocabulary development", |
|
"authors": [ |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Tse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guy", |
|
"middle": [], |
|
"last": "Divita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alla", |
|
"middle": [], |
|
"last": "Keselman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Crowell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allen", |
|
"middle": [], |
|
"last": "Browne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Goryachev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Ngo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of medical Internet research", |
|
"volume": "9", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qing Zeng, Tony Tse, Guy Divita, Alla Keselman, Jonathan Crowell, Allen Browne, Sergey Goryachev, and Long Ngo. 2007. Term identification methods for consumer health vocabulary development. Jour- nal of medical Internet research, 9(1).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "(5) (Class 0): scientists found a flu vaccine flaw, now they have to fix it (6) (Class 1): waiting at the pharmacy for my flu shot SM19-1 Adverse drug reaction mention is the task of identifying mentions of side effects as the results of drug consumption (Weissenbacher et al., 2019). (7) (Class 0): I'm so proud of bob for taking xarelto! (8) (Class 1):", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"num": null, |
|
"text": "(12) : I've literally had a headache all day today and have taken four Tylenols throughout the day !Drug To identify drug mentions, we use the DrugBank database(Wishart et al., 2018), which includes commercial drug names as well as their scientific names.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Trisomy18 a) Without Birth Defect and Pregnancy gazetteers b) With Birth Defect and Pregnancy gazetteers Visualization of attention scores for a sample from SM20-5 after taking olanzapine i wake up and feel like i am in a straight jacket because my muscles feel stiff after taking olanzapine i wake up and feel like i am in a straight jacket because my muscles feel stiff a) Without Drug and Anatomy gazetteers b) With Drug and Anatomy gazetteers Visualization of attention scores for a sample from SM19-1", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Task</td><td colspan=\"2\">Train Test</td></tr><tr><td colspan=\"3\">SM18-2 14219 3554</td></tr><tr><td colspan=\"2\">SM18-4 4579</td><td>1144</td></tr><tr><td colspan=\"3\">SM19-1 25678 4575</td></tr><tr><td colspan=\"3\">SM20-5 18382 4603</td></tr></table>", |
|
"text": "Statistics of the data sets" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td/><td/><td>obl</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>case</td><td/><td/></tr><tr><td/><td>nsubj</td><td>dep</td><td>obj</td><td>fixed</td><td/><td>compound</td><td/></tr><tr><td>I</td><td>stopped</td><td>taking</td><td>Crestor</td><td>because</td><td>of</td><td>muscle</td><td>pain</td></tr><tr><td/><td/><td/><td>Drug</td><td/><td/><td>Anatomy</td><td/></tr><tr><td>Drug</td><td>ADR</td><td>Anatomy</td><td/><td/><td/><td>ADR</td><td/></tr><tr><td colspan=\"8\">Figure 1: Dependency parse for I stopped taking Crestor because of muscle Pain, with gazetteer annotations</td></tr><tr><td colspan=\"2\">4.2 Named entities</td><td/><td/><td colspan=\"4\">(13) After a stillbirth in 2014 for #Trisomy18,</td></tr><tr><td colspan=\"4\">Proper names and names of organizations are ex-</td><td colspan=\"4\">yesterday we found out we are expecting a</td></tr><tr><td colspan=\"3\">tracted using the ANNIE module.</td><td/><td colspan=\"2\">healthy baby</td><td/><td/></tr><tr><td colspan=\"4\">Names People often mention the name of their</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">child when talking about a personal experience.</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">(14) My Kristin such a blessing from GOD -Kids</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">with Down Syndrome</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"4\">Organization The presence of an organization</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">mention often indicates that a tweet is talking in</td><td/><td/><td/><td/></tr><tr><td colspan=\"4\">a general sense and not relating a personal experi-</td><td/><td/><td/><td/></tr><tr><td>ence.</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"4\">(15) Ohio Senate Says 'No' to Abortion Based on</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">Down Syndrome Diagnosis</td><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "FamilyRel Family terms such as mom, dad, mommy, daddy, mother, father, grandfather, grandmother, wife, husband, spouse, sister, brother, parent, sister in law, brother in law, cousin, niece, nephewAcquaintances Terms such as neighbor, friend, colleague, etc., fromGI (Stone et al., 1966) (tagged as SocRel)The last two gazeteers enable the system to distinguish examples of self reports (my child) from reports on a family member (my cousin's kid)." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "The set of gazetteer lists used for each task, subsumed under the label Gaz" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td>Embedding</td><td colspan=\"2\">Epoch lr</td></tr><tr><td/><td>GLoVE</td><td>5</td><td>.1e-3</td></tr><tr><td>SM18-2</td><td>ELMo</td><td>6</td><td>.5e-4</td></tr><tr><td/><td colspan=\"2\">RoBERTa / BioBERT 10</td><td>.5e-5</td></tr><tr><td/><td>GLoVE</td><td>4</td><td>.1e-3</td></tr><tr><td>SM18-4</td><td>ELMo</td><td>6</td><td>.1e-3</td></tr><tr><td/><td colspan=\"2\">RoBERTa / BioBERT 6</td><td>.1e-4</td></tr><tr><td/><td>GLoVE</td><td>6</td><td>.1e-3</td></tr><tr><td>SM19-1</td><td>ELMo</td><td>8</td><td>.1e-4</td></tr><tr><td/><td colspan=\"2\">RoBERTa / BioBERT 10</td><td>.1e-5</td></tr><tr><td/><td>GLoVE</td><td>4</td><td>.1e-4</td></tr><tr><td>SM20-5</td><td>ELMo</td><td>6</td><td>.1e-4</td></tr><tr><td/><td colspan=\"2\">RoBERTa / BioBERT 8</td><td>.5e-5</td></tr></table>", |
|
"text": "The set of hyper-parameters used for each task" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td>SM18-2</td><td/><td>SM18-4</td><td/><td/><td>SM19-1</td><td/><td>SM20-5</td></tr><tr><td>Features</td><td>\u00b5P \u00b5R \u00b5F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>\u00b5P \u00b5R \u00b5F</td></tr><tr><td>None</td><td>.63 .64 .63</td><td colspan=\"3\">.78 .76 .77</td><td colspan=\"3\">.50 .51 .50</td><td>.54 .53 .54</td></tr><tr><td>POS1</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>GLoVE</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "Ablation of grammatical features and gazetteer lists" |
|
} |
|
} |
|
} |
|
} |