{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:24.064949Z"
},
"title": "Utilizing Multimodal Feature Consistency to Detect Adversarial Examples on Clinical Summaries",
"authors": [
{
"first": "Wenjie",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"region": "GA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Youngja",
"middle": [],
"last": "Park",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research Yorktown Heights",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Taesung",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research Yorktown Heights",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Ian",
"middle": [],
"last": "Molloy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research Yorktown Heights",
"location": {
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Pengfei",
"middle": [],
"last": "Tang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"region": "GA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Li",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Emory University Atlanta",
"location": {
"region": "GA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent studies have shown that adversarial examples can be generated by applying small perturbations to the inputs such that the welltrained deep learning models will misclassify. With the increasing number of safety and security-sensitive applications of deep learning models, the robustness of deep learning models has become a crucial topic. The robustness of deep learning models for healthcare applications is especially critical because the unique characteristics and the high financial interests of the medical domain make it more sensitive to adversarial attacks. Among the modalities of medical data, the clinical summaries have higher risks to be attacked because they are generated by third-party companies. As few works studied adversarial threats on clinical summaries, in this work we first apply adversarial attack to clinical summaries of electronic health records (EHR) to show the text-based deep learning systems are vulnerable to adversarial examples. Secondly, benefiting from the multi-modality of the EHR dataset, we propose a novel defense method, MATCH (Multimodal feATure Consistency cHeck), which leverages the consistency between multiple modalities in the data to defend against adversarial examples on a single modality. Our experiments demonstrate the effectiveness of MATCH on a hospital readmission prediction task comparing with baseline methods.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent studies have shown that adversarial examples can be generated by applying small perturbations to the inputs such that the welltrained deep learning models will misclassify. With the increasing number of safety and security-sensitive applications of deep learning models, the robustness of deep learning models has become a crucial topic. The robustness of deep learning models for healthcare applications is especially critical because the unique characteristics and the high financial interests of the medical domain make it more sensitive to adversarial attacks. Among the modalities of medical data, the clinical summaries have higher risks to be attacked because they are generated by third-party companies. As few works studied adversarial threats on clinical summaries, in this work we first apply adversarial attack to clinical summaries of electronic health records (EHR) to show the text-based deep learning systems are vulnerable to adversarial examples. Secondly, benefiting from the multi-modality of the EHR dataset, we propose a novel defense method, MATCH (Multimodal feATure Consistency cHeck), which leverages the consistency between multiple modalities in the data to defend against adversarial examples on a single modality. Our experiments demonstrate the effectiveness of MATCH on a hospital readmission prediction task comparing with baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep learning has been shown to be effective in a variety of real-world applications such as computer vision, natural language processing, and speech recognition (Krizhevsky et al., 2012; He et al., 2016; Kim, 2014) . It also has shown great potentials in clinical informatics such as medical diagnosis and regulatory decisions (Shickel et al., 2017) , including learning representations of patient records, supporting disease phenotyping, and conducting predictions (Wickramasinghe, 2017; Miotto et al., 2016) . However, recent studies show that these models are vulnerable to adversarial examples (Bruna et al., 2013) . In image classification, researchers have demonstrated that imperceptible changes in input can mislead the classifier (Goodfellow et al., 2014) . In the text domain, synonym substitution or character/word level modification on a few words can also cause the model to misclassify (Liang et al., 2017) . These perturbations are mostly imperceptible to human but can easily fool a high-performance deep learning model.",
"cite_spans": [
{
"start": 162,
"end": 187,
"text": "(Krizhevsky et al., 2012;",
"ref_id": "BIBREF14"
},
{
"start": 188,
"end": 204,
"text": "He et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 205,
"end": 215,
"text": "Kim, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 328,
"end": 350,
"text": "(Shickel et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 467,
"end": 489,
"text": "(Wickramasinghe, 2017;",
"ref_id": "BIBREF30"
},
{
"start": 490,
"end": 510,
"text": "Miotto et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 599,
"end": 619,
"text": "(Bruna et al., 2013)",
"ref_id": null
},
{
"start": 740,
"end": 765,
"text": "(Goodfellow et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 901,
"end": 921,
"text": "(Liang et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adversarial examples have received much attention in image and text-domain, yet very few work has been done on Electronic Health Records (EHR). Most existing works on adversarial examples in medical domains have been focused on medical images (Vatian et al., 2019; Ma et al., 2020) . A few works have studied adversarial examples in numerical EHR data (Sun et al., 2018; . Despite these attempts, there is no work on evaluating the adversarial robustness of clinical natural language processing (NLP) systems, as well as the potential defense techniques.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "(Vatian et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 265,
"end": 281,
"text": "Ma et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 352,
"end": 370,
"text": "(Sun et al., 2018;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although there are some existing defense techniques in the text domain, these methods cannot be directly applied to clinical texts due to the special characteristics of clinical notes. On one hand, for ordinary texts, spelling or syntax checks can easily detect adversarial examples generated by introducing misspelled words. However, there are originally plenty of misspelling words or abbreviations in clinical notes, which places challenges to distinguish whether a misspelled word is under attack. One the other hand, data augmentation is another strategy of some adversarial defense techniques in text domain. For example, Synonyms Encoding Method (SEM) ) is a data preprocessing method that inserts a synonym encoder before the input layers to eliminate adversarial perturbations. However, for clinical notes, a large number of words are proper nouns which makes it difficult to generate synonym set thus challenging to apply such defense. Adversarial training (Miyato et al., 2016) has also been applied to increase the generalization ability of textual deep learning models. However, no research has studied the effectiveness of applying adversarial training in the training of text-based clinical deep learning systems.",
"cite_spans": [
{
"start": 967,
"end": 988,
"text": "(Miyato et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We note that most existing defense mechanisms have focused on a single modality of the data. However, EHR data always comes in multiple modalities including diagnoses, medications, physician summaries and medical image, which presents both challenges and opportunities for building more robust defense systems. This is because some modalities are particularly susceptible to adversarial attacks and still lack effective defense mechanisms. For example, the clinical summary is often generated by a third-party dictation system and has a higher risk to be attacked. We believe that the correlations between different modalities for the same entity can be exploited to defend against such attacks, as it is not realistic for an adversary to attack all modalities. In this work, we propose a novel defense method, Multimodal feATure Consistency cHeck (MATCH), against adversarial attacks by utilizing the multimodal properties in the data. We assume that one modality has been compromised, and the MATCH system detects whether an input is adversarial by measuring the consistency between the compromised modality and another uncompromised modality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To validate our idea, we conduct a case study on predicting the 30-day readmission risk using an EHR dataset. We craft adversarial examples on clinical summary and use the sequential numerical records as another un-attacked modality to detect the adversarial examples. Figure 1 depicts the highlevel flow of our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We apply adversarial attack methods to the clinical summaries of electronic health records (EHR) dataset to show the vulnerability of the state-of-the-art clinical deep learning systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce a novel adversarial example detection method, MATCH, which automatically validates the consistency between multiple modalities in data. This is the first attempt to leverage multi-modality in adversarial research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We conduct experiments to demonstrate the effectiveness of the MATCH detection method. The results validate that they outperform existing state-of-the-art defense methods in the medical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been many adversarial works on single modality adversarial tasks. Qiu et al. (2019) provided a comprehensive summary of the latest progress on adversarial attack and defense technology, categorized by applications including computer vision, natural language processing, cyberspace security, and physical world. Esmaeilpour et al. (2019) reviewed the existing adversarial attacks in audio classification. Since our case study focuses on attack and defense of text modality, we mainly review the text-based attacks and defenses in this section.",
"cite_spans": [
{
"start": 77,
"end": 94,
"text": "Qiu et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 322,
"end": 347,
"text": "Esmaeilpour et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "2.1 Attack Methods for Text Data Kuleshov et al. (2018) proposed a Greedy Search Algorithm (GSA), which iteratively changes one word in a sentence and substitute the word with one of the synonymous that improves the objective function the most. Alzantot et al. (2018) introduced a Genetic Algorithm (GA) which is a populationbased synonym replacement algorithm including processing, sampling and crossover. Gong et al. (2018) proposed to search for adversarial examples in the embedding space by applying gradientbased methods on text embedding (Text-FGM) and then reconstructed the adversarial texts by the nearest neighbor search. Gao et al. (2018) presented the DeepWordBug algorithm to generate small perturbations in the character-level. This algorithm does not require the gradient. Ren et al. 2019proposed a new synonym substitution method, Probability Weighted Word Saliency (PWWS), which considered the word saliency as well as the classification probability. Jin et al. 2019proposed TextFooler, an adversarial approach by identifying the important words and then prioritize to replace them with the most semantically similar and grammatically correct words. This is the first attempt to attack the emerging BERT model on text classification. We compare these algorithms from the following aspects: Document level vs. Word level. Text-FGM and GA are document level attacks, which apply an attack on the whole text. DeepWordBug, GSA, PSWW, and TextFooler are word level attacks that perturb individual words. DeepWordBug, PSWW and TextFooler use heuristics to measure the importance of each word and select words to perturb.",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "Kuleshov et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 245,
"end": 267,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 407,
"end": 425,
"text": "Gong et al. (2018)",
"ref_id": "BIBREF7"
},
{
"start": 633,
"end": 650,
"text": "Gao et al. (2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Continuous vs. Discrete. Text-FGM is a continuous attack, because the gradient-based perturbation is applied on the embedding of the words. All other attacks are discrete attacks, which are applied directly on words. Semantic vs. Syntactic. GSA, PSWW, Text-FGM and TextFooler can be categorized as a semantic attack since their strategies are to replace words or text with synonyms, while DeepWordBug is a syntactic attack because it is based on character-level modification. GA can generate both semantically and syntactically similar adversarial examples. Back-box vs. White-box. GSA, GA and Text-FGM are white-box attacks, because attackers need to access the model structure and model pa-rameters to calculate the gradient. DeepWordBug, TextFooler and PSWW are black-box attacks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we evaluate our detection method against Text-FGM and DeepWordBug, which represent all the categories mentioned above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Text-FGM. In Text-FGM, any gradient based attacks, such as DeepFool (Moosavi-Dezfooli et al., 2016) , Fast Gradient Method (FGM) (Goodfellow et al., 2014) (both FGSM and FGVM) can be applied. Applying FGVM on text is defined as follows. Given a classifier f and a word sequence",
"cite_spans": [
{
"start": 68,
"end": 99,
"text": "(Moosavi-Dezfooli et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x = {x 1 , x 2 , ...x n }, emb(x) = emb(x) + ( \u2207L ||\u2207L|| 2 )",
"eq_num": "(1)"
}
],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where L is the loss function and emb denotes the embedding vector. Then, the adversarial example is chosen as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "x adv = N N S(emb(x) ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where N N S represents the nearest neighbor search algorithm which returns the closest word sequence given a perturbed embedding vector. In the following work, in order to minimize the number of words that need to be perturbed, we iteratively perform perturbation on one word at a time based on the importance score of the words, instead of applying perturbation on the entire sequence. In this way, we can maximize the overall semantic similarity between clean and adversarial sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
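To make the procedure above concrete, here is a minimal PyTorch-style sketch of the per-word FGM perturbation (Eq. 1) and the nearest-neighbor reconstruction. It is illustrative rather than the authors' implementation: the `epsilon` step size, the precomputed word-importance scores, and all function names are assumptions.

```python
# Illustrative sketch only (not the paper's code): per-word Text-FGM.
# Assumes `emb_matrix` is the model's (vocab_size x dim) embedding matrix
# and `grads[i]` is the loss gradient w.r.t. the i-th word's embedding.
import torch

def fgm_perturb(emb_x, grad, epsilon=1.0):
    # Eq. (1): move the embedding along the L2-normalized gradient.
    return emb_x + epsilon * grad / (grad.norm(p=2) + 1e-12)

def nearest_neighbor(emb_matrix, vec):
    # NNS: vocab id whose embedding is closest to the perturbed vector.
    return int(torch.cdist(vec.unsqueeze(0), emb_matrix).argmin())

def attack_one_word(token_ids, emb_matrix, grads, importance, epsilon=1.0):
    # Perturb only the most important word, matching the iterative
    # one-word-at-a-time scheme described above.
    i = int(importance.argmax())
    adv_vec = fgm_perturb(emb_matrix[token_ids[i]], grads[i], epsilon)
    adv_ids = list(token_ids)
    adv_ids[i] = nearest_neighbor(emb_matrix, adv_vec)
    return adv_ids
```

Repeating `attack_one_word` over successively less important words until the label flips yields the iterative variant used in this paper.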
{
"text": "DeepWordBug. DeepWordBug first computes the word importance to the target sequence classifier. At each step, it selects the most important word and constructs an adversarial word applying a character level swap, substitution or deletion. It iterates until the label is flipped or the number of words changed is larger than a threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
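A minimal sketch of this loop, assuming a black-box `classify` function and precomputed word-importance scores; the edit operations and stopping rule follow the paragraph above, while everything else (names, random edit choice) is illustrative.

```python
# Illustrative sketch of DeepWordBug-style character-level edits.
import random

def char_perturb(word):
    # Apply one random character-level swap, substitution, or deletion.
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    op = random.choice(["swap", "sub", "del"])
    if op == "swap":   # transpose two adjacent characters
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "sub":    # substitute one random letter
        return word[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]
    return word[:i] + word[i + 1:]  # delete one character

def deepwordbug(words, importance, classify, true_label, max_changes):
    # Greedily perturb the most important words until the label flips
    # or the change budget is exhausted.
    order = sorted(range(len(words)), key=lambda i: -importance[i])
    adv = list(words)
    for n, i in enumerate(order):
        if n >= max_changes:
            break
        adv[i] = char_perturb(adv[i])
        if classify(" ".join(adv)) != true_label:
            break
    return adv
```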
{
"text": "Few works have been done on defending against adversarial examples in the text domain. Existing defense algorithms can be divided into detection and adversarial training. Adversarial Training. Adversarial training has been widely used in the image domain and also been adapted to text domain. Overfitting is the major reason why the adversarial training is sometimes not useful and effective specific to attacks that are used to generate adversarial examples in the training stage. Miyato et al. (2016) applied the adversarial training to text domain and achieved the state-of-the-art-performance. proposed Synonyms Encoding Method (SEM), which tried to find a mapping between word and their synonymous neighbors before the input layer. This can be considered as an adversarial training method via data augmentation. Then this mapping works as an encoder applied on classifier. The classifier is forced to be smooth in this way. However, SEM can only work for synonym substitution attacks.",
"cite_spans": [
{
"start": 482,
"end": 502,
"text": "Miyato et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defense Methods for Text Data",
"sec_num": "2.2"
},
{
"text": "Efforts on building deep learning models for readmission prediction have attracted a growing interest. MIMIC-III (The Multiparameter Intelligent Monitoring in Intensive Care) (Johnson et al., 2016), a publicly available clinical dataset comprising EHR information related to patients admitted to critical care units, has become a common choice for such studies. We demonstrate our framework using a case study on the MIMIC data and adopt the stateof-the-art classification models which are briefly reviewed here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Readmission Prediction",
"sec_num": "2.3"
},
{
"text": "For numerical records, (Xue et al., 2019 ) studied the temporal trends of physiological measurements and medications, and used them to improve the performance of ICU readmission risk prediction models. They converted the time series of each variable into trend graphs. Then, they applied frequent subgraph mining to extract important temporal trends. They trained a logistical regression model on grouped temporal trends. (Zebin and Chaussalet, 2019) proposed a heterogeneous bidirectional Long Short Term Memory plus Convolutional neural network (BiLSTM+CNN) model. The combination of them can automate the feature extraction process, by considering both time-series correlation and feature correlation. They outperformed all the benchmark classifiers on most performance measures. At the same time, anothers also proposed a LSTM-CNN based model and achieved comparable performance (Lin et al., 2019) . In this work, we adopt the architecture in (Zebin and Chaussalet, 2019) to conduct readmission prediction on sequential numerical records.",
"cite_spans": [
{
"start": 23,
"end": 40,
"text": "(Xue et al., 2019",
"ref_id": "BIBREF31"
},
{
"start": 422,
"end": 450,
"text": "(Zebin and Chaussalet, 2019)",
"ref_id": "BIBREF32"
},
{
"start": 883,
"end": 901,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 947,
"end": 975,
"text": "(Zebin and Chaussalet, 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Readmission Prediction",
"sec_num": "2.3"
},
{
"text": "For text data, Clinical BERT is recently introduced (Huang et al., 2019; Alsentzer et al., 2019) to model clinical notes by applying the BERT model (?). They outperformed baselines which use both the discharge summaries and the first few days of notes in ICU. In this work, we adopt Clinical BERT to predict readmission on text data.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Huang et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 73,
"end": 96,
"text": "Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Readmission Prediction",
"sec_num": "2.3"
},
{
"text": "In this section, we will explain our high-level idea and intuitions behind MATCH. System Overview. The main idea of MATCH is to reject adversarial examples if the features from one modality are far away from another un-attacked modality's features. In MATCH, we assume that there is duplicate information in multiple modalities (e.g., 'gray cat' in an image caption and a gray cat in image) and manipulating information can be harder in one modality than another modality. Thus, it is difficult for an attacker to make coherent perturbations across all modalities. In other words, using the gradient to find the steepest change in the decision surface is a common attack strategy, but such a gradient can be drastically different from modality to modality. Moreover, for a certain modality, even if the adversarial and clean examples are close in the input space, their differences would be amplified in the feature space. Therefore, if another un-attacked modality is introduced, the difference between the two modalities can be a criteria to distinguish adversarial and clean examples. Figure 2 shows our detection pipeline using text and numerical features. Note that, while we use text and numerical modalities for the experiments, our framework works for any modalities.",
"cite_spans": [],
"ref_spans": [
{
"start": 1088,
"end": 1094,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We first pre-train two models on two modalities separately. These two models are trained only with clean data, and we use the outputs of their last fully-connected layer before logits layer as the extracted features. Note that the extracted features from two modalities are in different feature spaces, which requires a \"Projection\" step to bring the two feature sets into the same feature space. We train a projection model, a fully-connected layer network, for each modality on the clean examples. The objective function of the projection model is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
{
"text": "min \u03b8 1 ,\u03b8 2 M SE(p \u03b8 1 (F 1 (m 1 )) \u2212 p \u03b8 2 (F 2 (m 2 ))) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
{
"text": "where m 1 and m 2 represent different modalities. F i and p \u03b8 i are the feature extractor and the projector of m i respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
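The projection step can be sketched as follows, assuming frozen feature extractors whose outputs have already been computed for clean multimodal pairs; the layer sizes, optimizer, and training loop are illustrative choices, not the paper's exact configuration.

```python
# Illustrative sketch of the projection step (Eq. 2).
import torch
import torch.nn as nn

class Projector(nn.Module):
    # One fully-connected layer mapping a modality's features into the
    # shared space, matching the "fully-connected layer network" above.
    def __init__(self, in_dim, shared_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, shared_dim)

    def forward(self, feats):
        return self.fc(feats)

def train_projectors(p1, p2, clean_pairs, epochs=10, lr=1e-3):
    # Minimize MSE(p1(F1(m1)) - p2(F2(m2))) over clean multimodal pairs,
    # where `clean_pairs` yields pre-extracted (f1, f2) feature tensors.
    opt = torch.optim.Adam(list(p1.parameters()) + list(p2.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for f1, f2 in clean_pairs:
            loss = loss_fn(p1(f1), p2(f2))
            opt.zero_grad()
            loss.backward()
            opt.step()
```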
{
"text": "Then, a consistency check model is trained only on clean data by minimizing the consistency level between multi-modal features. The consistency level is defined as the L 2 norm of the difference between the projected features from the two modalities. Once all the models are trained, given an input example with two modalities, the system detects it as an adversarial example if the consistency level between two modalities is greater than a threshold \u03b4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
{
"text": "||p \u03b8 1 (F 1 (m 1 )) \u2212 p \u03b8 2 (F 2 (m 2 ))|| 2 > \u03b4 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
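A minimal sketch of the resulting consistency check (Eq. 3), together with the threshold calibration described next, where δ is chosen so that a target fraction of clean examples passes. The helper names and the 95% pass rate are assumptions.

```python
# Illustrative sketch of the MATCH consistency check (Eq. 3).
import torch

def is_adversarial(f1_feats, f2_feats, p1, p2, delta):
    # Flag the input as adversarial when the L2 consistency level
    # between the projected features exceeds the threshold delta.
    gap = torch.norm(p1(f1_feats) - p2(f2_feats), p=2)
    return gap.item() > delta

def calibrate_delta(clean_pairs, p1, p2, pass_rate=0.95):
    # Pick delta so that `pass_rate` of clean examples pass the check
    # (the calibration criterion stated in the next paragraph).
    gaps = sorted(torch.norm(p1(f1) - p2(f2), p=2).item()
                  for f1, f2 in clean_pairs)
    return gaps[max(0, int(pass_rate * len(gaps)) - 1)]
```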
{
"text": "\u03b4 is decided based on what percentage of clean examples are allowed to pass MATCH. Predictive Model and Feature Extractor. For clinical notes, we use pre-trained Clinical BERT as our feature extractor. Clinical BERT is pretrained using thr same tasks as (Devlin et al., 2019) and fine-tuned on readmission prediction. Clinical BERT also provides a readmission classifier, which is a single layer fully-connected layer. We use this classification representation as the extracted feature. For sequential numerical records, we adopt the architecture in (Zebin and Chaussalet, 2019) . However, as our data preprocessing steps and selected features are different, we modify the architecture to optimize the performance. Our architecture (Figure 3) employs a stacked-bidirectional-LSTM, followed by a convolutional layer and a fully connected layer. The number of stacks in stackedbidirectional-LSTM and the number of convolutional layers, as well as the convolution kernel size are tuned during experiments, which are different from the architecture in (Zebin and Chaussalet, 2019) . The output of the final layer is used as the extracted features. ",
"cite_spans": [
{
"start": 254,
"end": 275,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 550,
"end": 578,
"text": "(Zebin and Chaussalet, 2019)",
"ref_id": "BIBREF32"
},
{
"start": 1048,
"end": 1076,
"text": "(Zebin and Chaussalet, 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 732,
"end": 742,
"text": "(Figure 3)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Multi-modality Model Consistency Check",
"sec_num": "3.1"
},
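A minimal PyTorch sketch of the numerical branch just described: a stacked bidirectional LSTM, a convolutional layer, then a fully-connected layer whose output serves as the extracted feature. The hidden sizes, stack depth, kernel size, and pooling are illustrative, since the paper tunes these during experiments.

```python
# Illustrative sketch of the stacked BiLSTM+CNN feature extractor.
import torch
import torch.nn as nn

class NumericExtractor(nn.Module):
    def __init__(self, n_features=90, hidden=64, feat_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, feat_dim, kernel_size=3, padding=1)
        self.fc = nn.Linear(feat_dim, feat_dim)  # output = extracted feature
        self.head = nn.Linear(feat_dim, 1)       # readmission logit

    def forward(self, x):                         # x: (batch, 120, 90)
        h, _ = self.lstm(x)                       # (batch, 120, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))  # (batch, feat_dim, 120)
        pooled = c.mean(dim=2)                    # temporal average pooling
        feats = torch.relu(self.fc(pooled))       # consumed by MATCH
        return self.head(feats), feats
```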
{
"text": "In this section, we first present the attack performance of two text attack algorithms in order to demonstrate the vulnerability of state-of-the-art clinical deep learning systems. Secondly, we evaluate the effectiveness of the MATCH detection method for the readmission classification task using the MIMIC-III data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Clinical Summary. For the clinical summary, which is the target modality the attacker, we directly use the processed data from (Huang et al., 2019) . The data contains 34,560 patients with 2,963 positive readmission labels and 48,150 negative labels. In MIMIC-III (Johnson et al., 2016) , there are several categories in the clinical notes including ECG summaries, physician notes and discharge summaries. We select the discharge summary as our text modality, as it is most relevant to readmission prediction.",
"cite_spans": [
{
"start": 127,
"end": 147,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 254,
"end": 286,
"text": "MIMIC-III (Johnson et al., 2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.1"
},
{
"text": "Numerical Data. For the other modality which is used to conduct the consistency check, we use the patents' numeric data in their medical records. We use the patient ID from the discharge summary to extract the multivariate time series numerical records consisting of 90 continuous features including vital signs such as heart rate and blood pressure as well as other lab measurements. The features are selected based on the frequency of their appearance in all the patients' records.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.1"
},
{
"text": "Then, we apply a standardization for each feature x across all patients and time steps using the following formula: x = x\u2212x std(x) . We pad all the sequences to the same length (120 hours before discharge), because this time window is crucial to predict the readmission rate. We ignore all the pre-vious time steps if a patient stayed more than 120 hours and repeat the last time step if a patient's sequence is shorter than 120 hours. We represent the numerical data as a 3-dimensional tensor: patients \u00d7 time step (120) \u00d7 features (90).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.1"
},
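A minimal NumPy sketch of this preprocessing under the conventions just stated: per-feature standardization, truncation to the last 120 hourly steps, and repetition of the final step for shorter stays. The helper names and the assumption of hourly steps are illustrative.

```python
# Illustrative sketch of the numerical preprocessing.
import numpy as np

def standardize(x):
    # x' = (x - mean(x)) / std(x), per feature, over all patients and
    # time steps; x has shape (n_rows, 90).
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def fix_length(seq, target=120):
    # seq: (t, 90) array for one patient; return a (120, 90) array.
    if len(seq) >= target:
        return seq[-target:]                      # keep last 120 hours
    pad = np.repeat(seq[-1:], target - len(seq), axis=0)
    return np.concatenate([seq, pad], axis=0)     # repeat last time step

# Stacking the per-patient arrays yields the (patients, 120, 90) tensor.
```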
{
"text": "For the clinical summary data, we use the pretrained Clinical BERT, whose AUC is 0.768. For the numerical data, the performance of our stacked bi-directional LSTM+CNN model produces AUC 0.65. Although the performance of the numerical data is lower than that of Clinical BERT, our experiments indicate that it does not affect MATCH's overall performance. The reason is that we only need this prediction model to learn the feature representation. As long as the two models have a comparable performance with each other, the extracted features from the two modalities have a similar representative ability. Clinical BERT is also used as the target classifier under attacked. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictive Model Performance",
"sec_num": "4.2"
},
{
"text": "In this section, we present the attack performance of two text attack algorithms in order to demonstrate the vulnerability of state-of-the-art clinical deep learning systems. We select two attack algorithms that can present all attack categories we mentioned in the related work: Text-FGM,a whitebox, semantic attack and DeepWordBug a blackbox, syntactic attack. Besides, these two attack algorithms will also be used to evaluate the performance of our proposed MATCH, in order to show that MATCH can defense against various kinds of adversarial attacks. We generate adversarial examples with different attack power levels: 4%, 8%, 16%, which define the maximum percentage of word changes in a text. Then we show the attack success rate under different attack powers, as well as the generated adversarial examples of two attack algorithms. As shown in Figure 4 , both Text-FGM and DeepWord-Bug can produces high attack success rate on the Clinical Bert model. With higher percentage of word changes, the attack success rate also increased for b0th Text-FGM and DeepWordBug. This is intuitive because as more perturbations being introduced to the input space, the model is more likely to give a wrong prediction. For Text-FGM, it achieves almost 80% attack success rate with only 8% of word change, which indicated that the Clinical Bert model are easily fooled and give a wrong prediction. This result indicates the vulnerability of the state-of-the-art text-based medical deep learning systems. Figure 5 shows several examples of our generated adversarial examples from both attack methods compared to the clean examples. The red words represent the changed words in Text-FGM, and green words denote the changed words in Deep-WordBug. It is obvious that even the generated adversarial texts are indistinguishable to human knowledge, especially those that generated by Text-FGM, but well-trained deep learning models will misclassify.",
"cite_spans": [],
"ref_spans": [
{
"start": 852,
"end": 860,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 1496,
"end": 1504,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attack Results",
"sec_num": "4.3"
},
{
"text": "Besides the attack success rate and the generated adversarial examples, we also present the distribution of the number of misspelled words in the clean and adversarial examples. As shown in Figure 6 , the number of misspelled word distributions of the clean and the Text-FGM adversarial examples are difficult to separate, while the adversarial examples generated by DeepWordBug have a large distribution shift compared to that of the clean examples. Further, as the attack power grows, the distribution shift is more distinguishable. This explains why the spelling check service is effective to DeepWordBug but not useful for the synonym substitution attack.",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Attack Results",
"sec_num": "4.3"
},
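The misspelled-word statistic above can be reproduced with a short sketch using the pyspellchecker package (the same tool used as the baseline detector in Section 4.4); the tokenization is an illustrative simplification.

```python
# Illustrative sketch: count misspelled words per document.
from spellchecker import SpellChecker

spell = SpellChecker()

def count_misspelled(text):
    # Number of tokens pyspellchecker does not recognize.
    words = [w.strip(".,;:()") for w in text.lower().split()]
    return len(spell.unknown([w for w in words if w]))

# Comparing this count's distribution on clean vs. adversarial texts
# reproduces the shift visible for DeepWordBug but not for Text-FGM.
```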
{
"text": "In this section, we use Text-FGM and DeepWord-Bug, which represent the two types of attacks, semantic vs. syntactic, to evaluate the performance of MATCH Comparison with Baseline Detection Methods. We use mis-spelling check (pyspellchecker form python) as a baseline to compare with MATCH, which is adopted in (Gao et al., 2018) . As shown in Figure 7 , we take the attack power (i.e., the percentage of word changes) of 4%, 8% and 16% and use the ROC curve to compare the detection performances between MATCH and the mis-spelling check. ROC curve can represent the correlations between True Positive Rate (TPR) and False Positive Rate (FPR). Here, we want to have higher TPR (adversarial examples can be detected) while achieve lower FPR (clean examples can pass the detector). Given the various detection thresholds \u03b4 which allow certain percentage of clean examples to pass detection, these ROC curves illustrate the discriminating ability of MATCH on detecting adversarial examples. Similar to MATCH, we take the number of misspelled words as a threshold and show the discriminating ability given different thresholds. We can note that MATCH significantly outperforms the baseline for both attacks. As misspelling check can effectively detect adversarial texts with large misspelling distribution shifts, we take the mis-spelling check as a pre-filter to filter out adversarial examples that are easy to detect. Then, we apply MATCH as a secondary detector. We try different combinations of mis-spelling word threshold and feature consistency threshold. The blue lines in the charts show the lower boundary of the ROC curves. For DeepWordBug, MATCH can achieve close to 100% TPR and 0% FPR. In addition, both MATCH and baseline method works better for DeepWordBug because the attack is syntactic, and the examples are easily separable based on the misspelling distribution shifts as observed from Figure 6 .",
"cite_spans": [
{
"start": 310,
"end": 328,
"text": "(Gao et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 7",
"ref_id": "FIGREF8"
},
{
"start": 1901,
"end": 1909,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Defense Result",
"sec_num": "4.4"
},
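Putting the two stages together, here is a minimal sketch of the cascaded detector described above; `count_misspelled` and `is_adversarial` refer to the illustrative helpers sketched earlier, and both thresholds are assumed to be tuned jointly, as in the ROC sweep that traces the blue boundary.

```python
# Illustrative sketch of the two-stage detector: misspelling pre-filter,
# then the MATCH consistency check as a secondary detector.
def cascade_detect(text, f1_feats, f2_feats, p1, p2,
                   spell_threshold, delta):
    if count_misspelled(text) > spell_threshold:   # stage 1: spelling
        return True
    return is_adversarial(f1_feats, f2_feats, p1, p2, delta)  # stage 2: MATCH
```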
{
"text": "Comparison with Adversarial Training. Besides misspelling-check, we also use Adversarial Training (AT) to compare with MATCH on Text-FGM. As mentioned in the related work, AT is widly applied in image domain to improve the robustness of DNNs. As our prediction is a binary classification, and MATCH is a detector, in order to compare with Adversarial Training, we flip the prediction label of examples which are detected as adversarial examples and compare the accuracy with AT. The results in Table 1 show that the accuracy of MATCH is much higher than AT and No Defense.",
"cite_spans": [],
"ref_spans": [
{
"start": 494,
"end": 501,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Defense Result",
"sec_num": "4.4"
},
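A minimal sketch of this evaluation protocol: because the task is binary, the prediction of any example MATCH flags is flipped before computing accuracy. The function is illustrative, not the paper's evaluation code.

```python
# Illustrative sketch: accuracy when MATCH detections flip the prediction.
def accuracy_with_match(preds, labels, detected):
    # preds, labels: 0/1 sequences; detected[i] is True if MATCH fired.
    correct = 0
    for p, y, d in zip(preds, labels, detected):
        final = 1 - p if d else p                 # flip detected examples
        correct += int(final == y)
    return correct / len(labels)
```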
{
"text": "Impact of attack power. To better illustrate the impact of attack power, we plot the results of varying attack powers in Figure 8 . To clarify, for Deep-WordBug we do not include mis-spelling check as a pre-filter, only showing the performance of MATCH. Under DeepWordBug with attack power of 16%, MATCH can detect more than 60% of the adversarial examples, while misclassifying 30% of the clean examples as adversarial. Under Text-FGM with attack power of 16%, MATCH can detect more than 60% adversarial examples but only 20% of clean examples are mistaken as adversarial. The ROC curve shows that with a higher attack ",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Defense Result",
"sec_num": "4.4"
},
{
"text": "In this work, we proposed MATCH, a novel defense method by taking advantage of another modal's properties to detect adversarial examples on clinical notes. We evaluated our approaches with two different attack strategies: Text-FGM and Deep-WordBug. We conducted experiments on the 30day readmission prediction task by detecting adversarial examples in text modalities and use numerical modality to do the multi-modal consistency check. Our experiments showed the effectiveness of MATCH compared to the baseline methods. Although we only evaluated MATCH on clinical deep learning system and only attack on the clinial text modality, we believe MATCH would be a general framework that could work on any multi-modality dataset. In the future, it would be interesting to extending and evaluating the frame-work for different modalities such as image and audio. Besides, a more complex architecture may be applied to project extracted features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly available clinical bert embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Willie",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.03323"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John R Murphy, Willie Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical bert embeddings. arXiv preprint arXiv:1904.03323.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating natural language adversarial examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2890--2896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2890-2896.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Longitudinal adversarial attack on electronic health records data",
"authors": [
{
"first": "Cao",
"middle": [],
"last": "Sungtae An",
"suffix": ""
},
{
"first": "Walter",
"middle": [
"F"
],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jimeng",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungtae An, Cao Xiao, Walter F. Stewart, and Jimeng Sun. 2019. Longitudinal adversarial attack on elec- tronic health records data. In The World Wide Web Conference.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A robust approach for securing audio classification against adversarial attacks",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Esmaeilpour",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Cardinal",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [
"Lameiras"
],
"last": "Koerich",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Transactions on Information Forensics and Security",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Esmaeilpour, Patrick Cardinal, and Alessandro Lameiras Koerich. 2019. A robust ap- proach for securing audio classification against ad- versarial attacks. IEEE Transactions on Information Forensics and Security.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Black-box generation of adversarial text sequences to evade deep learning classifiers",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Lanchantin",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Lou"
],
"last": "Soffa",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Security and Privacy Workshops",
"volume": "",
"issue": "",
"pages": "50--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yan- jun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adversarial texts with gradient methods",
"authors": [
{
"first": "Zhitao",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Wenlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Wei-Shinn",
"middle": [],
"last": "Ku",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.07175"
]
},
"num": null,
"urls": [],
"raw_text": "Zhitao Gong, Wenlu Wang, Bo Li, Dawn Song, and Wei-Shinn Ku. 2018. Adversarial texts with gradient methods. arXiv preprint arXiv:1801.07175.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6572"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Clinicalbert: Modeling clinical notes and predicting hospital readmission",
"authors": [
{
"first": "Kexin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jaan",
"middle": [],
"last": "Altosaar",
"suffix": ""
},
{
"first": "Rajesh",
"middle": [],
"last": "Ranganath",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.05342"
]
},
"num": null,
"urls": [],
"raw_text": "Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Is bert really robust? natural language attack on text classification and entailment",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhijing",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Joey",
"middle": [
"Tianyi"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11932"
]
},
"num": null,
"urls": [],
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classification and entailment. arXiv preprint arXiv:1907.11932.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mimiciii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "H Lehman",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific data",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic- iii, a freely accessible critical care database. Scien- tific data, 3:160035.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in neural information processing systems, pages 1097-1105.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adversarial examples for natural language classification problems",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Kuleshov",
"suffix": ""
},
{
"first": "Shantanu",
"middle": [],
"last": "Thakoor",
"suffix": ""
},
{
"first": "Tingfung",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Ermon",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial exam- ples for natural language classification problems.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Textbugger: Generating adversarial text against real-world applications",
"authors": [
{
"first": "Jinfeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shouling",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Tianyu",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.05271"
]
},
"num": null,
"urls": [],
"raw_text": "Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. Textbugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep text classification can be fooled",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hongcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Miaoqiang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Pan",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Xirong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenchang",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text clas- sification can be fooled. CoRR, abs/1704.08006.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Analysis and prediction of unplanned intensive care unit readmission using recurrent neural networks with long short-term memory",
"authors": [
{
"first": "Yu-Wei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yuqian",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Faraz",
"middle": [],
"last": "Faghri",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Roy H",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Campbell",
"suffix": ""
}
],
"year": 2019,
"venue": "PloS one",
"volume": "",
"issue": "7",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu-Wei Lin, Yuqian Zhou, Faraz Faghri, Michael J Shaw, and Roy H Campbell. 2019. Analysis and pre- diction of unplanned intensive care unit readmission using recurrent neural networks with long short-term memory. PloS one, 14(7).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Understanding adversarial attacks on deep learning based medical image analysis systems. Pattern Recognition",
"authors": [
{
"first": "Xingjun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yisen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yitian",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bailey",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, and Feng Lu. 2020. Under- standing adversarial attacks on deep learning based medical image analysis systems. Pattern Recogni- tion, page 107332.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deep patient: an unsupervised representation to predict the future of patients from the electronic health records",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Miotto",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Brian",
"suffix": ""
},
{
"first": "Joel",
"middle": [
"T"
],
"last": "Kidd",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dudley",
"suffix": ""
}
],
"year": 2016,
"venue": "Scientific reports",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riccardo Miotto, Li Li, Brian A Kidd, and Joel T Dud- ley. 2016. Deep patient: an unsupervised represen- tation to predict the future of patients from the elec- tronic health records. Scientific reports, 6:26094.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adversarial training methods for semisupervised text classification",
"authors": [
{
"first": "Takeru",
"middle": [],
"last": "Miyato",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeru Miyato, Andrew M Dai, and Ian Goodfel- low. 2016. Adversarial training methods for semi- supervised text classification. stat, 1050:7.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deepfool: a simple and accurate method to fool deep neural networks",
"authors": [
{
"first": "Alhussein",
"middle": [],
"last": "Seyed-Mohsen Moosavi-Dezfooli",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Fawzi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Frossard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "2574--2582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 2574-2582.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Review of artificial intelligence adversarial attack and defense technologies",
"authors": [
{
"first": "Shilin",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Qihe",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chunjiang",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "Applied Sciences",
"volume": "9",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shilin Qiu, Qihe Liu, Shijie Zhou, and Chunjiang Wu. 2019. Review of artificial intelligence adversarial attack and defense technologies. Applied Sciences, 9(5):909.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Generating natural language adversarial examples through probability weighted word saliency",
"authors": [
{
"first": "Shuhuai",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Yihe",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1085--1097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep ehr: a survey of recent advances in deep learning techniques for electronic health record (ehr) analysis",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Shickel",
"suffix": ""
},
{
"first": "Patrick",
"middle": [
"James"
],
"last": "Tighe",
"suffix": ""
},
{
"first": "Azra",
"middle": [],
"last": "Bihorac",
"suffix": ""
},
{
"first": "Parisa",
"middle": [],
"last": "Rashidi",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE journal of biomedical and health informatics",
"volume": "22",
"issue": "5",
"pages": "1589--1604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Shickel, Patrick James Tighe, Azra Bihorac, and Parisa Rashidi. 2017. Deep ehr: a survey of re- cent advances in deep learning techniques for elec- tronic health record (ehr) analysis. IEEE journal of biomedical and health informatics, 22(5):1589- 1604.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Identify susceptible locations in medical records via adversarial attacks on deep predictive models",
"authors": [
{
"first": "Mengying",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Fengyi",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Jinfeng",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiayu",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {
"DOI": [
"10.1145/3219819.3219909"
]
},
"num": null,
"urls": [],
"raw_text": "Mengying Sun, Fengyi Tang, Jinfeng Yi, Fei Wang, and Jiayu Zhou. 2018. Identify susceptible locations in medical records via adversarial attacks on deep pre- dictive models. pages 793-801.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Impact of adversarial examples on the efficiency of interpretation and use of information from high-tech medical images",
"authors": [
{
"first": "Aleksandra",
"middle": [],
"last": "Vatian",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gusarova",
"suffix": ""
},
{
"first": "Natalia",
"middle": [
"V"
],
"last": "Dobrenko",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Dudorov",
"suffix": ""
},
{
"first": "Niyaz",
"middle": [],
"last": "Nigmatullin",
"suffix": ""
},
{
"first": "Anatoly",
"middle": [
"A"
],
"last": "Shalyto",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Lobantsev",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aleksandra Vatian, Natalia Gusarova, Natalia V. Do- brenko, Sergey Dudorov, Niyaz Nigmatullin, Ana- toly A. Shalyto, and Artem Lobantsev. 2019. Impact of adversarial examples on the efficiency of interpre- tation and use of information from high-tech medi- cal images. FRUCT.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Radar: Recurrent autoencoder based detector for adversarial examples on temporal ehr",
"authors": [
{
"first": "Wenjie",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Xiaoqian",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenjie Wang, Pengfei Tang, Li Xiong, and Xiaoqian Jiang. 2020. Radar: Recurrent autoencoder based detector for adversarial examples on temporal ehr.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Natural language adversarial attacks and defenses in word level",
"authors": [
{
"first": "Xiaosen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.06723"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaosen Wang, Hao Jin, and Kun He. 2019. Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Deepr: a convolutional net for medical records",
"authors": [
{
"first": "Nilmini",
"middle": [],
"last": "Wickramasinghe",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nilmini Wickramasinghe. 2017. Deepr: a convolu- tional net for medical records.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Predicting icu readmission using grouped physiological and medication trends",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Klabjan",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial intelligence in medicine",
"volume": "95",
"issue": "",
"pages": "27--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Xue, Diego Klabjan, and Yuan Luo. 2019. Pre- dicting icu readmission using grouped physiologi- cal and medication trends. Artificial intelligence in medicine, 95:27-37.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Design and implementation of a deep recurrent model for prediction of readmission in urgent care using electronic health records",
"authors": [
{
"first": "Tahmina",
"middle": [],
"last": "Zebin",
"suffix": ""
},
{
"first": "Thierry",
"middle": [
"J"
],
"last": "Chaussalet",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahmina Zebin and Thierry J Chaussalet. 2019. De- sign and implementation of a deep recurrent model for prediction of readmission in urgent care using electronic health records. In 2019 IEEE Confer- ence on Computational Intelligence in Bioinformat- ics and Computational Biology (CIBCB), pages 1-5. IEEE.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning to discriminate perturbations for blocking adversarial attacks in text classification",
"authors": [
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jyun-Yu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03084"
]
},
"num": null,
"urls": [],
"raw_text": "Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. arXiv preprint arXiv:1909.03084.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Illustration of MATCH: an adversarial attack on the text modal and how MATCH detection finds the inconsistency using the numerical features as another modality."
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Detection. Most detection methods use spelling check.Gao et al. (2018) used Python's Autocorrect 0.3.0 to detect character-level adversarial examples. took advantage of a context-aware spelling check service to do the similar work. However, these detections are not effective for word level attacks. proposed a framework learning to discriminate perturbations (DISP), which learns to discriminate the perturbations and restore the original embeddings."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Figure 2: Detection Pipeline"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Stacked Bidirectional LSTM+CNN architecture"
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Attack Success Rate Comparison between Text-FGM and DeepWordBug Figure 5: Example of generated adversarial texts with Text-FGM and DeepWordBug"
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Distribution of misspelled words in adversarial /clean text under different attack power"
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Comparison of the adversarial detection performance between MATCH and misspelling check-based defense."
},
"FIGREF9": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "MATCH can more easily distinguish adversarial examples from clean examples."
},
"TABREF0": {
"content": "<table><tr><td colspan=\"4\">Attack Levels Clean No Defense MATCH AT</td></tr><tr><td>16%</td><td>0.672 0.407</td><td>0.525</td><td>0.435</td></tr><tr><td>8%</td><td>0.672 0.450</td><td>0.523</td><td>0.464</td></tr><tr><td>4%</td><td>0.672 0.483</td><td>0.522</td><td>0.471</td></tr></table>",
"html": null,
"num": null,
"text": "Comparison of the Adversarial Detection Accuracy",
"type_str": "table"
}
}
}
}