|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:35:03.450208Z" |
|
}, |
|
"title": "NLP@VCU: Identifying adverse effects in English tweets for unbalanced data", |
|
"authors": [ |
|
{ |
|
"first": "Darshini", |
|
"middle": [], |
|
"last": "Mahendran", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Virginia Commonwealth University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Cora", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Virginia Commonwealth University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]"
|
}, |
|
{ |
|
"first": "Bridget", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "McInnes",
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Virginia Commonwealth University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our participation in the Social Media Mining for Health Applications (SMM4H) 2020 Challenge Track 2 for identifying tweets containing Adverse Effects (AEs). Our system uses Convolutional Neural Networks (CNNs). We explore downsampling, oversampling, and adjusting the class weights to account for the imbalanced nature of the dataset. Our results show that downsampling outperformed oversampling and adjusting the class weights on the test set; however, all three obtained similar results on the development set.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our participation in the Social Media Mining for Health Applications (SMM4H) 2020 Challenge Track 2 for identifying tweets containing Adverse Effects (AEs). Our system uses Convolutional Neural Networks (CNNs). We explore downsampling, oversampling, and adjusting the class weights to account for the imbalanced nature of the dataset. Our results show that downsampling outperformed oversampling and adjusting the class weights on the test set; however, all three obtained similar results on the development set.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This paper describes our participation in Task 2 of the Social Media Mining for Health Applications (SMM4H) 2020 challenge, which aims to automatically identify Adverse Effects (AEs) in English tweets. To address this challenge, we explored a supervised binary classification system that automatically identifies AEs using Convolutional Neural Networks (CNNs). To deal with the imbalanced nature of the dataset, we explored downsampling, oversampling, and adjusting the class weights.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we discuss our AE identification system. Our system can be found here. Feature Representation. We pre-process the data as follows: 1) the HTML symbol & is removed as per (\u00dabeda et al., 2019); 2) double-quotes are removed; 3) hashtags, links, and usernames are replaced with the strings \"hashtag\", \"link\", and \"username\", respectively, as per (Cortes-Tejada et al., 2019); 4) emojis are substituted with a phrase that represents the emoji as per (Vydiswaran et al., 2019); 5) tweets are lowercased. Each word in the tweet is represented as an embedding. We evaluated three embedding types, GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013), and FastText (Godin, 2019), trained over different types of corpora. Our resulting system described here uses GloVe trained on Twitter.",

"cite_spans": [

{

"start": 187,

"end": 207,

"text": "(\u00dabeda et al., 2019)",

"ref_id": "BIBREF5"

},

{

"start": 612,

"end": 637,

"text": "(Pennington et al., 2014)",

"ref_id": "BIBREF4"

},

{

"start": 648,

"end": 670,

"text": "(Mikolov et al., 2013)",

"ref_id": "BIBREF3"

},

{

"start": 685,

"end": 698,

"text": "(Godin, 2019)",

"ref_id": "BIBREF2"

}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Algorithm. We evaluated Convolutional Neural Networks (CNNs) as per (\u00dabeda et al., 2019). One of the beneficial properties of CNNs is that they preserve spatial orientation, in this case, the sequence of the words in the tweet. Our CNN consists of four layers: 1) an embedding layer to encode the words of a sentence as real-valued vectors; 2) a convolution layer to extract local features from each part of the input; 3) a pooling layer to extract the most relevant features; and 4) a feed-forward layer, a fully connected layer that performs the classification. We first feed each tweet into the CNN to learn the AE representation of the tweet. Second, we apply the convolution layer to learn local features from the embedding vectors obtained from each word of the tweet. Next, we apply the max-pooling layer to extract the most important features. We then unstack the volume into a flat vector and feed it into the fully connected feed-forward layer. Finally, the fixed-length vector is fed into a softmax layer to perform the classification. During training, the classification error is back-propagated and the model is re-trained until the error is minimized. The weight matrices and biases are the parameters tuned until an optimized model is obtained.",

"cite_spans": [

{

"start": 68,

"end": 88,

"text": "(\u00dabeda et al., 2019)",

"ref_id": "BIBREF5"

}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "2" |
|
}, |
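{

"text": "The layers described above can be sketched in Keras as follows; this is a minimal illustrative sketch with assumed hyperparameters (vocab_size, max_len, filter and hidden sizes), not the authors' exact implementation:\n\nmodel = Sequential()\nmodel.add(Embedding(vocab_size, 100, input_length=max_len)) # embedding layer\nmodel.add(Conv1D(128, 5, activation='relu')) # convolution layer\nmodel.add(GlobalMaxPooling1D()) # max-pooling layer\nmodel.add(Dense(64, activation='relu')) # fully connected layer\nmodel.add(Dense(2, activation='softmax')) # classification\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods",

"sec_num": "2"

},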
|
{ |
|
"text": "Imbalanced data. Due to the imbalanced nature of the dataset, we evaluated our algorithm with three methods to reduce the imbalance: downsampling, oversampling, and adjusting the Keras class weights:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "159", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Run 1: To downsample, non-AE tweets are removed until a target ratio of AE to non-AE tweets is reached. Through experimentation, we found that the best downsampling ratio is 1 AE tweet for every 4 non-AE tweets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "159", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Run 2: To oversample, the AE tweets are repeated a specific number of times. For example, if the dataset is oversampled 3 times, there are 3 copies of each AE tweet. Through experimentation, we found that oversampling 6 times is most effective.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "159", |
|
"sec_num": null |
|
}, |
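{

"text": "The sampling schemes in Runs 1 and 2 can be sketched as follows; the variable names (ae_tweets, non_ae_tweets) are hypothetical, not the authors' code:\n\nimport random\n\n# Run 1: downsample non-AE tweets to a 1:4 AE to non-AE ratio\ndownsampled = ae_tweets + random.sample(non_ae_tweets, 4 * len(ae_tweets))\n\n# Run 2: repeat each AE tweet 6 times\noversampled = ae_tweets * 6 + non_ae_tweets",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "159",

"sec_num": null

},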
|
{ |
|
"text": "Run 3: We used the class weights option in Keras, which incorporates the ratio of how much an AE tweet should be valued compared to a non-AE tweet (Chollet and others, 2015). For example, if the class weight is 1 for non-AE tweets and 20 for AE tweets, Keras treats each AE tweet as if it were worth twenty non-AE tweets. Through experimentation, we found that a class weight of 1 for non-AE tweets and 10 for AE tweets is most effective.",
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 173, |
|
"text": "(Chollet and others, 2015)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "159", |
|
"sec_num": null |
|
}, |
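{

"text": "The class-weight adjustment in Run 3 corresponds to the class_weight argument of Keras' fit(); a minimal sketch, assuming label 0 for non-AE tweets and 1 for AE tweets:\n\nclass_weight = {0: 1, 1: 10} # value each AE tweet as ten non-AE tweets\nmodel.fit(x_train, y_train, class_weight=class_weight)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "159",

"sec_num": null

},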
|
{ |
|
"text": "The training set contains 20,544 tweets: 1,903 contain AEs and 18,641 do not. The development set contains 5,134 tweets: 474 are positive and 4,660 are negative. Both the training and development sets are highly imbalanced, with approximately a 1:10 ratio of AE to non-AE tweets. The test set contains 4,759 tweets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we report the Precision (P), Recall (R), and F-measure (F) of our system on the SMM4H Task 2 data set for the three runs described above. Table 1 shows the results obtained over the development data (Development Results) and the reported results over the test data (Reported Results). The reported results were obtained using the train-test setup; precision and recall were returned only for the best-performing run. On the development set, downsampling (Run 1) and oversampling (Run 2) obtained higher precision and lower recall, whereas class weights (Run 3) obtained higher recall and lower precision. The F1 scores on the development data were all very similar, with downsampling and class weights achieving identical F1 scores. In contrast, on the test data, downsampling (Run 1) showed higher recall than precision and obtained the highest F1 score of all the runs. ",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 162, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our CNN model achieved a reported F1 score of 0.35 on the test data and 0.50 on the development data. Our experimentation with pre-trained word embeddings showed that GloVe trained on Twitter is optimal for this task. Our results also show that downsampling, oversampling, and Keras class weights achieve similar F1 scores on the development data, though downsampling outperformed oversampling and class weights on the test data. In the future, we would like to experiment further with the pre-processing stage. We also plan to explore character embeddings and Recurrent Neural Networks (RNNs) in more depth, and to investigate additional word representations such as Bidirectional Encoder Representations from Transformers (BERT) for this task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "NLP@UNED at SMM4H 2019: Neural networks applied to automatic classifications of adverse effects mentions in tweets",
|
"authors": [ |
|
{ |
|
"first": "Javier", |
|
"middle": [], |
|
"last": "Cortes-Tejada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Martinez-Romo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lourdes", |
|
"middle": [], |
|
"last": "Araujo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--95", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Javier Cortes-Tejada, Juan Martinez-Romo, and Lourdes Araujo. 2019. NLP@UNED at SMM4H 2019: Neural networks applied to automatic classifications of adverse effects mentions in tweets. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 93-95.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Improving and Interpreting Neural Networks for Word-Level Prediction Tasks in Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Fr\u00e9deric", |
|
"middle": [], |
|
"last": "Godin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fr\u00e9deric Godin. 2019. Improving and Interpreting Neural Networks for Word-Level Prediction Tasks in Natural Language Processing. Ph.D. thesis, Ghent University, Belgium.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "GloVe: Global vectors for word representation",
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Using machine learning and deep learning methods to find mentions of adverse drug reactions in social media", |
|
"authors": [ |
|
{

"first": "Pilar",

"middle": [],

"last": "L\u00f3pez \u00dabeda",

"suffix": ""

},

{

"first": "Manuel",

"middle": [

"Carlos"

],

"last": "D\u00edaz Galiano",

"suffix": ""

},

{

"first": "M",

"middle": [

"Teresa"

],

"last": "Mart\u00edn-Valdivia",

"suffix": ""

},

{

"first": "L",

"middle": [

"Alfonso"

],

"last": "Urena Lopez",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
|
"volume": "", |
|
"issue": "", |
|
"pages": "102--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pilar L\u00f3pez \u00dabeda, Manuel Carlos D\u00edaz Galiano, M Teresa Mart\u00edn-Valdivia, and L Alfonso Urena Lopez. 2019. Using machine learning and deep learning methods to find mentions of adverse drug reactions in social media. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 102-106.",
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Towards text processing pipelines to identify adverse drug events-related tweets: University of Michigan @ SMM4H 2019 Task 1",
|
"authors": [ |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Vg Vinod Vydiswaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Ganzel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deahan", |
|
"middle": [], |
|
"last": "Romas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amy", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neha", |
|
"middle": [], |
|
"last": "Austin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Socheatha", |
|
"middle": [], |
|
"last": "Bhomia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Van", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task",
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "VG Vinod Vydiswaran, Grace Ganzel, Bryan Romas, Deahan Yu, Amy Austin, Neha Bhomia, Socheatha Chan, Stephanie Hall, Van Le, Aaron Miller, et al. 2019. Towards text processing pipelines to identify adverse drug events-related tweets: University of Michigan @ SMM4H 2019 Task 1. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 107-109.",
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "Development and Evaluation Results", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |