{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:07.682795Z"
},
"title": "Generalization to Mitigate Synonym Substitution Attacks",
"authors": [
{
"first": "Basemah",
"middle": [],
"last": "Alshemali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado at Colorado Springs Colorado Springs",
"location": {
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado at Colorado Springs Colorado Springs",
"location": {
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Studies have shown that deep neural networks are vulnerable to adversarial examples-perturbed inputs that cause DNN-based models to produce incorrect results. One robust adversarial attack in the NLP domain is the synonym substitution. In attacks of this variety, the adversary substitutes words with synonyms. Since synonym substitution perturbations aim to satisfy all lexical, grammatical, and semantic constraints, they are difficult to detect with automatic syntax check as well as by humans. In this work, we propose the first defensive method to mitigate synonym substitution perturbations that can improve the robustness of DNNs with both clean and adversarial data. We improve the generalization of DNN-based classifiers by replacing the embeddings of the important words in the input samples with the average of their synonyms' embeddings. By doing so, we reduce model sensitivity to particular words in the input samples. Our algorithm is generic enough to be applied in any NLP domain and to any model trained on any natural language.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Studies have shown that deep neural networks are vulnerable to adversarial examples-perturbed inputs that cause DNN-based models to produce incorrect results. One robust adversarial attack in the NLP domain is the synonym substitution. In attacks of this variety, the adversary substitutes words with synonyms. Since synonym substitution perturbations aim to satisfy all lexical, grammatical, and semantic constraints, they are difficult to detect with automatic syntax check as well as by humans. In this work, we propose the first defensive method to mitigate synonym substitution perturbations that can improve the robustness of DNNs with both clean and adversarial data. We improve the generalization of DNN-based classifiers by replacing the embeddings of the important words in the input samples with the average of their synonyms' embeddings. By doing so, we reduce model sensitivity to particular words in the input samples. Our algorithm is generic enough to be applied in any NLP domain and to any model trained on any natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep Neural Networks (DNNs) have achieved remarkable success in various machine learning tasks, including computer vision (Krizhevsky et al., 2012; , speech recognition Chen et al., 2019) , and natural language processing (NLP) (Kim, 2014; Pirinen, 2019; Kambhatla et al., 2018) . However, studies have found that DNNs are vulnerable to adversarial examplesartificially modified input samples that lead DNNs to produce incorrect results, while not being detectable by humans (Szegedy et al., 2014) . These vulnerabilities have been exposed in the domains of computer vision (Goodfellow et al., 2015; Papernot et al., 2016; Carlini and Wagner, 2017) , speech (Alzantot et al., 2017; Carlini and Wagner, 2018) , and NLP (Ebrahimi et al., 2018; Jin et al., 2020) .",
"cite_spans": [
{
"start": 122,
"end": 147,
"text": "(Krizhevsky et al., 2012;",
"ref_id": "BIBREF23"
},
{
"start": 169,
"end": 187,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 228,
"end": 239,
"text": "(Kim, 2014;",
"ref_id": "BIBREF22"
},
{
"start": 240,
"end": 254,
"text": "Pirinen, 2019;",
"ref_id": "BIBREF30"
},
{
"start": 255,
"end": 278,
"text": "Kambhatla et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 475,
"end": 497,
"text": "(Szegedy et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 574,
"end": 599,
"text": "(Goodfellow et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 600,
"end": 622,
"text": "Papernot et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 623,
"end": 648,
"text": "Carlini and Wagner, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 658,
"end": 681,
"text": "(Alzantot et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 682,
"end": 707,
"text": "Carlini and Wagner, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 718,
"end": 741,
"text": "(Ebrahimi et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 742,
"end": 759,
"text": "Jin et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on the adversary's level of perturbation, three categories of adversarial attacks in NLP systems have been proposed: Character-level, tokenlevel, and sentence-level adversarial attacks (Alshemali and Kalita, 2020; Zhang et al., 2020) . One robust existing token-level adversarial attack in NLP is black-box synonym substitution (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) . In attacks of this variety, the adversary substitutes tokens with synonyms. Since synonym substitution perturbations aim to satisfy all lexical, grammatical, and semantic constraints, they are difficult to detect with automatic syntax check as well as by humans.",
"cite_spans": [
{
"start": 191,
"end": 219,
"text": "(Alshemali and Kalita, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 220,
"end": 239,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF36"
},
{
"start": 334,
"end": 357,
"text": "(Alzantot et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 358,
"end": 375,
"text": "Ren et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 376,
"end": 393,
"text": "Jin et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we propose a defensive method to mitigate synonym substitution perturbations. We propose to improve the generalization of DNNbased models by replacing the embeddings of the important tokens in the input samples with the average of their synonyms' embeddings. By doing so, we reduce model sensitivity to particular tokens in the input samples. Experimenting on two popular datasets, for two types of text classification tasks, demonstrates that the proposed defense is not only capable of defending against these adversarial attacks, but is also capable of improving the performance of DNN-based models when tested on benign data. To our knowledge, our defense is the first proposed method that can effectively (1) Improve the robustness of DNN-based models against synonym substitution adversarial attacks and (2) Improve the generalization of DNN-based models with both clean and adversarial data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Alzantot et al. (2018) developed a black-box synonym substitution attack to generate adversarial samples for sentiment analysis. They first computed the nearest neighbors of a token based on the Euclidean distance in the embedding space. Then, they picked the token that maximizes the target label prediction when replacing the original token. Their adversarial examples successfully fooled their LSTM model's output with a 100% success rate, using the IMDB dataset (Maas et al., 2011) .",
"cite_spans": [
{
"start": 466,
"end": 485,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Ren et al. 2019proposed a black-box synonym substitution attack for text classification tasks. They employed word saliency to select the token to be replaced. For each token, they selected the synonym that causes the most significant change in the classification probability after replacement. They experimented with three datasets: IMDB, AG's News (Zhang et al., 2015) , and Yahoo! Answers 1 using the word-level CNN of Kim (2014) , the character-level CNN of Zhang et al. (2015) , a Bi-directional LSTM, and an LSTM. Their results showed that, under their attack, the classification accuracies on the three datasets IMDB, AG's News, and Yahoo! Answers were reduced by an average of 81.05%, 33.62%, and 38.65% respectively. adopted the Metropolis-Hastings (M-H) sampling approach (Metropolis et al., 1953; Hastings, 1970) to generate blackbox synonym substitution perturbations against text classification and textual entailment tasks. They used the M-H approach to replace targeted words with synonyms, followed by a language model to enforce the fluency of the sentence after replacing the words. Their attack successfully changed the output of their Bi-LSTM model and the Bi-DAF model (Seo et al., 2017) with 98.7% and 86.6% success rates, respectively, using the IMDB dataset, and the SNLI dataset (Bowman et al., 2015) . Jin et al. (2020) also proposed a black-box synonym substitution attack to evaluate text classification systems. They first identified important tokens for the target model, then gathered the top tokens whose cosine similarity with the selected tokens are greater than a threshold. They kept the candidates that altered the prediction of the target model. Using their attack, they evaluated the word-level CNN and a word-level LSTM, using the AG's News and IMDB datasets. Their results suggested that their attack reduced the accuracy of all target models by at least 64.2%.",
"cite_spans": [
{
"start": 349,
"end": 369,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF37"
},
{
"start": 421,
"end": 431,
"text": "Kim (2014)",
"ref_id": "BIBREF22"
},
{
"start": 461,
"end": 480,
"text": "Zhang et al. (2015)",
"ref_id": "BIBREF37"
},
{
"start": 781,
"end": 806,
"text": "(Metropolis et al., 1953;",
"ref_id": "BIBREF26"
},
{
"start": 807,
"end": 822,
"text": "Hastings, 1970)",
"ref_id": "BIBREF16"
},
{
"start": 1189,
"end": 1207,
"text": "(Seo et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 1303,
"end": 1324,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 1327,
"end": 1344,
"text": "Jin et al. (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "1 https://webscope.sandbox.yahoo.com/catalog.php?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "This paper proposes improving the generalization of DNN-based models by reducing a model's sensitivity to particular tokens in the input samples. This effectively mitigates black-box synonym substitution perturbations. We propose a method that combines word importance ranking, synonym extraction, word embedding averaging, and majority voting techniques to mitigate adversarial perturbations. Figure 1 illustrates the overall schema of the proposed approach. The proposed approach for mitigating adversarial text consists of four main steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 402,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Step 1: Determine the N important tokens in the input sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Step 2: Build a synonym set for each important token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Step 3: Replace the embedding of each important token by the average of its synonyms' embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Step 4: Perform a majority voting for the N replacements based on their predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Given a sequence of tokens, only some key tokens act as influential signals for the model's prediction. Therefore, we use a selection mechanism to choose the tokens that most significantly influence the final prediction results. We use the Replace-1 scoring function R1S() of Gao et al. (2018) to score the importance of tokens in an input sequence according to the observed results from the targeted model. By assuming the input sequence x = x 1 x 2 ...x n , where x i is the token at the i th position, we measure the effect of the x i token on the output of the targeted model (F ). The scoring function R1S() measures the effect of x i on the model by replacing x i with x i . More formally:",
"cite_spans": [
{
"start": 276,
"end": 293,
"text": "Gao et al. (2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R1S(x i ) = F (x 1 , x 2 , ..., x i\u22121 , x i , ..., x n )\u2212 F (x 1 , x 2 , ..., x i\u22121 , x i , ..., x n ),",
"eq_num": "(1)"
}
],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "where x i is chosen to be out-of-vocabulary (OOV) and it is obtained by inserting, deleting, or substituting a letter in x i for a random letter. R1S() measures the importance of a token by calculating the effect of replacing it with an OOV token, while observing the model's prediction. The token's importance is thus calculated as the prediction change Figure 1 : Schema of the proposed defensive method. The proposed defense involves the following steps:",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 363,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "Step 1: Extract the important tokens in the input sample (here, we extract the three most important tokens).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "Step 2: Build a synonym set for each important token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "Step 3: Replace the embedding of each important token by the average of its synonyms' embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "Step 4: Perform a majority voting for the replacements based on their predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
{
"text": "before and after replacing it with an OOV. By calculating the effect of replacing x i with OOV, the importance of all tokens in the input sample can be measured and ranked. This step is employed to report the N most important tokens in an input sample. In our experiments, setting N to be 5 produces the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring Function",
"sec_num": "3.1"
},
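A minimal Python sketch of the Replace-1 scoring step described above (an illustration, not the authors' released code). It assumes a `model_predict` callable that returns the model's confidence in the originally predicted class for a list of tokens, and it creates the OOV variant by substituting one letter with a random letter, one of the options mentioned in the text.

```python
import random
import string

def make_oov(token):
    # Substitute one letter with a random letter so the token falls
    # out of the model's vocabulary.
    if not token:
        return token
    i = random.randrange(len(token))
    return token[:i] + random.choice(string.ascii_lowercase) + token[i + 1:]

def replace1_scores(tokens, model_predict):
    # R1S(x_i): prediction change when x_i is replaced by an OOV variant.
    base = model_predict(tokens)
    scores = []
    for i, tok in enumerate(tokens):
        perturbed = tokens[:i] + [make_oov(tok)] + tokens[i + 1:]
        scores.append(base - model_predict(perturbed))
    return scores

def top_n_important(tokens, model_predict, n=5):
    # Report the N tokens whose OOV replacement changes the prediction
    # the most (N = 5 in the paper's experiments).
    scores = replace1_scores(tokens, model_predict)
    return sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:n]
```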
{
"text": "For a given token with a high importance score obtained in Step 1, we build a synonym set (Synset) for the selected token. Synonyms can be found in WordNet 2 (Miller, 1995) , a large lexical resource for the English language. For each token, we use WordNet to build a synonym set that contains all possible synonyms of the token. More formally,",
"cite_spans": [
{
"start": 158,
"end": 172,
"text": "(Miller, 1995)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym Extraction",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Synset(token) = {syn 1 , syn 2 , ..., syn m },",
"eq_num": "(2)"
}
],
"section": "Synonym Extraction",
"sec_num": "3.2"
},
{
"text": "where m is the quantity of the token's synonyms that exist in the lexical resource (WordNet). If a token does not have any synonyms in the lexical resource, the processing moves to the next important token. In this step, we use WordNet as a lexical resource, but the proposed defense can use any other lexical resource (e.g. Wiktionary 3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonym Extraction",
"sec_num": "3.2"
},
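A minimal sketch of Step 2 using NLTK's WordNet interface (one possible lexical resource, as the text notes). It assumes nltk is installed and the WordNet corpus has been downloaded via nltk.download("wordnet"); skipping multi-word lemmas and the token itself are illustrative choices, not prescribed by the method.

```python
from nltk.corpus import wordnet as wn

def synonym_set(token):
    # Collect every lemma of every WordNet synset containing the token.
    synonyms = {lemma.name() for synset in wn.synsets(token)
                for lemma in synset.lemmas()}
    synonyms.discard(token)                        # drop the token itself
    return {s for s in synonyms if "_" not in s}   # keep single-word synonyms

# Example: synonym_set("movie") -> {"film", "picture", "flick", ...};
# an empty set means processing moves on to the next important token.
```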
{
"text": "In the previous steps, we determine the N important tokens in an input sample (Step 1), and then extract a synonym set for each one of the important tokens (Step 2). In the third step, for each important token, we replace its embedding by the average of its synonyms' embeddings. More formally,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Averaging",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(token) = 1 m m i=1 E(syn i ),",
"eq_num": "(3)"
}
],
"section": "Embedding Averaging",
"sec_num": "3.3"
},
{
"text": "where E() represents the word embeddings resource, and m is the count of synonyms in the synonym set of the token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding Averaging",
"sec_num": "3.3"
},
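A minimal sketch of Equation (3), assuming `embeddings` is a dictionary mapping words to NumPy vectors (for example, the GloVe vectors used in Section 4.4). Synonyms missing from the embedding resource are simply skipped, which is an assumption rather than a detail stated in the paper.

```python
import numpy as np

def averaged_embedding(token, synset, embeddings):
    # E(token) = (1/m) * sum of E(syn_i) over the m covered synonyms.
    vectors = [embeddings[syn] for syn in synset if syn in embeddings]
    if not vectors:
        # No synonym is covered by the embedding resource; the caller can
        # fall back to the token's original vector or skip the replacement.
        return None
    return np.mean(vectors, axis=0)
```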
{
"text": "In the previous step, for each important token, we replace the embedding of the token by the average of its synonyms' embeddings. In this step, the model makes a prediction after each replacement, and assigns each replacement a vote based on its prediction. The model's final prediction will be the prediction with the majority of the votes. An example of this step is illustrated in Figure 2 . In this figure, the model made three predictions and the final classification is positive, based on the votes. The proposed approach with all steps is shown in Algorithm 1. Step 4: The model makes a prediction after each replacement, and assigns each replacement a vote based on its prediction. The model's final prediction is the prediction with the majority of the votes.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Majority Voting",
"sec_num": "3.4"
},
{
"text": "Algorithm 1: The overall procedure of the proposed defensive method. input : Input sample X, classifier F (), Replace-1 scoring function to extract important tokens in an input sample R1S(), lexical resource to extract synonyms Synset(), word embeddings resource to represent tokens E(), prediction set P , majority voting method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Majority Voting",
"sec_num": "3.4"
},
{
"text": "V (). output :F (X) R1S(X) = {token 1 , token 2 , ..., token n } for c \u2190 1 to n do Synset(token c ) = {syn 1 , syn 2 , ..., syn m } E(token c ) = 1 m m i=1 E(syn i ) S = X S \u2190 E(token c ) P \u2190 F (S) end F (X) = V (P ) Return F (X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Majority Voting",
"sec_num": "3.4"
},
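A minimal end-to-end sketch of Algorithm 1 (an illustration under stated assumptions, not the authors' implementation). `classify` is assumed to map a list of word vectors to a label, and `important_indices`, `synonym_set`, and `averaged_embedding` stand for Steps 1-3 (e.g., the sketches given earlier); the final label is decided by majority vote over the N single-token replacements.

```python
from collections import Counter

import numpy as np

def defended_predict(tokens, embeddings, classify,
                     important_indices, synonym_set, averaged_embedding, n=5):
    dim = len(next(iter(embeddings.values())))
    # Look up the original vectors (zero vectors for out-of-vocabulary tokens).
    base = [embeddings.get(t, np.zeros(dim)) for t in tokens]
    votes = []
    for i in important_indices(tokens, n=n):                 # Step 1
        synonyms = synonym_set(tokens[i])                    # Step 2
        averaged = averaged_embedding(tokens[i], synonyms, embeddings)  # Step 3
        if averaged is None:
            continue                                         # no usable synonyms
        replaced = list(base)
        replaced[i] = averaged
        votes.append(classify(replaced))                     # one prediction per replacement
    if not votes:
        return classify(base)                                # nothing replaced: plain prediction
    return Counter(votes).most_common(1)[0][0]               # Step 4: majority vote
```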
{
"text": "In this paper, we proposed a simple and structurefree defensive strategy which can be successful in hardening DNNs against synonym substitution based adversarial attacks. As shown in Section 5, the proposed defense yielded great performance. The advantage of our approach is that it can use any embeddings and lexical resources. It does not require any additional data to train, or modify the architecture of the models. Our implementation is generic enough to be applied in any domain and to models trained on any natural language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Majority Voting",
"sec_num": "3.4"
},
{
"text": "We implemented the proposed defensive method using Python, Numpy, Tensorflow, Scikit-learn, and Pandas libraries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "To study the efficiency of our defense, we used the Internet Movie Database (Maas et al., 2011) . IMDB is a sentiment classification dataset which involves binary labels annotating the sentiment of sentences in movie reviews. IMDB consists of 25,000 training samples and 25,000 test samples, labeled as positive or negative. The average length of samples in IMDB is 262 words.",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Maas et al., 2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "4.1"
},
{
"text": "To evaluate our proposed approach, several experiments on the word-level CNN model of Kim (2014) and the Bi-directional LSTM model of Ren et al. (2019) were conducted. We replicated Kim's CNN architecture, which contains three convolutional layers, a max-pooling layer, and a fully-connected layer. The Bi-directional LSTM model involves a Bi-directional LSTM layer and a fully connected layer.",
"cite_spans": [
{
"start": 86,
"end": 96,
"text": "Kim (2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Targeted Classification Models",
"sec_num": "4.2"
},
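A minimal Keras sketch of the two target architectures (a word-level CNN in the spirit of Kim (2014) and a Bi-LSTM binary classifier). The hyperparameters here (vocabulary size, sequence length, filter counts and window sizes) are illustrative assumptions, not the authors' exact settings.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMB_DIM = 20000, 400, 300   # illustrative values

def word_level_cnn():
    inputs = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)
    # Three convolutional branches with different window sizes, each max-pooled.
    branches = [layers.GlobalMaxPooling1D()(layers.Conv1D(100, k, activation="relu")(x))
                for k in (3, 4, 5)]
    x = layers.Dropout(0.5)(layers.Concatenate()(branches))
    outputs = layers.Dense(1, activation="sigmoid")(x)       # binary sentiment
    return models.Model(inputs, outputs)

def bi_lstm():
    inputs = layers.Input(shape=(MAX_LEN,))
    x = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)
    x = layers.Bidirectional(layers.LSTM(128))(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)
```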
{
"text": "We evaluated our defensive method with two blackbox synonym substitution attacks: The attack of Alzantot et al. (2018) ",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Attacks",
"sec_num": "4.3"
},
{
"text": "We used the Global Vectors for Word Representation (GloVe) embedding space (Pennington et al., 2014) to generate word vectors of 300 dimensions.",
"cite_spans": [
{
"start": 75,
"end": 100,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": "4.4"
},
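A minimal sketch of loading pre-trained 300-dimensional GloVe vectors into a word-to-vector dictionary, as assumed by the embedding-averaging sketches above. The file name refers to the standard glove.6B.300d.txt distribution and is an assumption, not a detail given in the paper.

```python
import numpy as np

def load_glove(path="glove.6B.300d.txt"):
    # Each line is "<word> v1 v2 ... v300"; build a {word: vector} dictionary.
    vectors = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors
```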
{
"text": "Classification accuracy is used as the metric to evaluate the performance of the proposed defensive model. Higher accuracy denotes a more effective approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Evaluation",
"sec_num": "4.5"
},
{
"text": "The CNN and Bi-LSTM models were trained on the IMDB training set, and achieved training accuracy scores similar to the original implementations. Table 1 : The accuracy of the classification models on the original benign data with and without our defensive method. No adversarial perturbations were used. \"w/o defense\" denotes using the model with no defense. \"w/defense\" denotes using the model with our defense. Percent Increase is the percent increase of the classification accuracy after using the defense. Table 2 : The accuracy of the classifiers under adversarial attacks, with and without the defense applied. The accuracies of the models with the original data were 76.50% and 73.44% for the CNN and the Bi-LSTM, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 1",
"ref_id": null
},
{
"start": 510,
"end": 517,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Following the practices of previous studies that have explored adversarial examples (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020) , and because the process of generating adversarial examples to evaluate the defense is time and resource-consuming, we randomly sampled 1280 examples from the IMDB testing set to evaluate the efficiency of the proposed defensive method. As shown in Section 3, for each sample, our defensive method first extracts the five important tokens.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "(Alzantot et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 108,
"end": 125,
"text": "Ren et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 126,
"end": 143,
"text": "Jin et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "It then extracts their synonyms from the lexical resource. Overall, there were 2.15 synonyms per important token on average, as the majority of important tokens had 2 or 3 synonyms. We first present how the defensive method behaves on benign data with no adversarial attacks. In Table 1 , we report the accuracy of the targeted models on the original test samples, with and without the defense applied. Table 1 shows that the defense is capable of improving the performance of the models even when they are not under attack. The classification accuracy of the CNN increases by 3.50%, and that for the Bi-LSTM is also increased by 5.46%. This indicates that the defense is beneficial not only in adversarial situations, but also in secure situations with no adversarial attacks.",
"cite_spans": [],
"ref_spans": [
{
"start": 279,
"end": 286,
"text": "Table 1",
"ref_id": null
},
{
"start": 403,
"end": 410,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "To evaluate the efficiency of our defense in adversarial situations, we used the adversarial attacks of Alzantot et al. (2018) and Ren et al. (2019) to perturb the 1280 benign samples and convert them to adversarial examples. A more effective defensive method should cause a smaller drop in model clas-sification accuracy when said model is under attack. Table 2 shows the efficacy of various adversarial attacks and the defensive method.",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 131,
"end": 148,
"text": "Ren et al. (2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effectiveness of the Defense",
"sec_num": "5.1"
},
{
"text": "Under the adversarial attacks of Alzantot et al. and Ren et al. , the classification accuracy of the models dropped significantly. For the CNN, the accuracy degraded more than 41.50% and 51.90%, under the Alzantot et al. and Ren et al. attacks, respectively . Similarly, the accuracy of the Bi-LSTM model reduced more than 49.94% and 68.37%, under the same attacks. Our results suggest that (1) DNN-based models with higher original accuracy (with clean data) are more difficult to be attacked. For instance, as shown in Tables 1 and 2, the underattack accuracy is higher for the CNN model compared with the Bi-LSTM model under all attacks. This agrees with the observation from previous research that, in general, models with higher original accuracy have higher under-attack accuracy (Jin et al., 2020) . 2The Bi-LSTM model is more vulnerable to the two attacks than the CNN model by a 12.45% accuracy difference on average. This supports the conclusion from previous research that, in the NLP domain, deep CNNs tend to be more robust than RNN models (Ren et al., 2019; Alshemali and Kalita, 2019). (3) While Alzantot et al. randomly selected the tokens to be replaced, Ren et al. employed the word saliency technique to determine the tokens to be replaced. This makes the attack of Ren et al. more effective than the attack of Alzantot et al. on both models by an average margin of 10.40% for the CNN and 18.43% for the Bi-LSTM.",
"cite_spans": [
{
"start": 33,
"end": 63,
"text": "Alzantot et al. and Ren et al.",
"ref_id": null
},
{
"start": 205,
"end": 257,
"text": "Alzantot et al. and Ren et al. attacks, respectively",
"ref_id": null
},
{
"start": 786,
"end": 804,
"text": "(Jin et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of the Defense",
"sec_num": "5.1"
},
{
"text": "After employing our defensive method, the ro- Table 3 : The accuracy of the nonneural classification models under adversarial attacks, with and without the defense applied. Percent Increase is the percent increase of the classification accuracy with the defense applied.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effectiveness of the Defense",
"sec_num": "5.1"
},
{
"text": "bustness of the models significantly improved under all attacks. The effectiveness of the proposed defense is evaluated under the two attacks and the results are presented in Table 2 . Our results show that the proposed defense effectively mitigated most of the adversarial examples generated by the two attacks. Under the Alzantot et al. attack, the defense increased the accuracies of the models by 39.20% and 49.20% for the CNN and Bi-LSTM, respectively. Under the Ren et al. attack, the accuracies of the models were improved by an average of 43.40% and 62.13% for the CNN and Bi-LSTM, respectively. Our results highlight that (1) Under the same attack, the proposed defense performs better with the Bi-LSTM model than with the CNN by an average difference of 14.36%; and (2) Under the same model, the proposed defense performs better in mitigating Ren et al.'s adversarial examples than in mitigating the adversarial examples generated by the attack of Alzantot et al., with an average difference of 8.56%. This is likely because Ren et al. used WordNet to obtain their synonyms, while Alzantot et al. considered the nearest neighbors of a token's embedding vector as its synonyms.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effectiveness of the Defense",
"sec_num": "5.1"
},
{
"text": "In this section, we evaluated the defense using two nonneural machine learning classification algorithms, that were selected due to their high performance on a variety of text classification tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Nonneural Models",
"sec_num": "5.2"
},
{
"text": "(1) Support Vector Machine (SVM) (Cortes and Vapnik, 1995) ; and (2) Extreme Gradient Boosting (XGBoost) (Chen and Guestrin, 2016) . We examined the performance of our defense with the SVM and XGBoost models, trained on the IMDB dataset, and using the GloVe embedding space.",
"cite_spans": [
{
"start": 33,
"end": 58,
"text": "(Cortes and Vapnik, 1995)",
"ref_id": "BIBREF9"
},
{
"start": 105,
"end": 130,
"text": "(Chen and Guestrin, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nonneural Models",
"sec_num": "5.2"
},
{
"text": "To evaluate the defense with the SVM and XG-Boost models, we used the adversarial attacks of Alzantot et al. (2018) and Ren et al. (2019) to perturb the same 1280 benign samples of IMDB re-views (used in Section 5.1) and convert them to adversarial examples. Table 3 shows how the defense behaves with nonneural models on benign and adversarial data. Table 3 shows that the SVM model has more than 28.00% and 33.00% accuracy degradation under the Alzantot et al. and Ren et al. attacks, respectively . Similarly, the accuracy of the XGBoost model was reduced by 35.54% and 44.21%, under the same attacks, respectively.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "Alzantot et al. (2018)",
"ref_id": "BIBREF3"
},
{
"start": 120,
"end": 137,
"text": "Ren et al. (2019)",
"ref_id": "BIBREF31"
},
{
"start": 447,
"end": 499,
"text": "Alzantot et al. and Ren et al. attacks, respectively",
"ref_id": null
}
],
"ref_spans": [
{
"start": 259,
"end": 266,
"text": "Table 3",
"ref_id": null
},
{
"start": 351,
"end": 358,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Nonneural Models",
"sec_num": "5.2"
},
{
"text": "By utilizing our defense, the robustness of the nonneural models improved under all attacks. Our results illustrate that the proposed defense is effectively able to mitigate most of the adversarial examples generated by the two attacks. Under the Alzantot et al. attack, the defense increased the accuracies of the models by 16.10% and 20.39% for SVM and XGBoost, respectively. Under the Ren et al. attack, the accuracies of the models were improved by 19.15% and 25.47% for SVM and XGBoost, respectively. Table 3 also shows that the defense improved the performance of the models with benign data. The classification accuracy of the SVM model increases by 4.06%, and that for the XGBoost is also increased by 4.77%.",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 513,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Nonneural Models",
"sec_num": "5.2"
},
{
"text": "In Sections 5.1 and 5.2, we evaluated the effectiveness of the discussed defense on the sentiment analysis task. Here, we evaluated it on the news categorization task, using the Bidirectional Encoder Representations from Transformers (BERT) embedding space and the BERT model (Devlin et al., 2019) . This model was trained on the AG's News categorization dataset (Zhang et al., 2015) . We used the 12-layer BERT model, also called the base-uncased version 4 .",
"cite_spans": [
{
"start": 276,
"end": 297,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 363,
"end": 383,
"text": "(Zhang et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "News Categorization Task",
"sec_num": "5.3"
},
{
"text": "AG's News is a news categorization dataset which contains news articles categorized into four classes: World, Sports, Business and Sci/Tech. Table 4 : The classification accuracy of the BERT model under adversarial attacks, with and without the defense applied. Percent Increase is the percent increase of the classification accuracy with the defense applied.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "News Categorization Task",
"sec_num": "5.3"
},
{
"text": "The total number of training samples is 120,000 and testing 7,600. The average number of words per sample is 278.6. We randomly selected 1280 samples from the AG's News testing set to evaluate the effectiveness of the proposed defensive method. We used the adversarial attacks of Alzantot et al. and Ren et al. to perturb the 1280 benign samples and convert them to adversarial examples. Table 4 shows the efficacy of the defensive method with various adversarial attacks. Even for the powerful BERT, which has achieved great performance in various NLP tasks, adversarial attacks can still reduce its classification accuracy by about 30.56% with the attack of Alzantot et al. and by 35.56% with the attack of Ren et al.. These accuracy drops are unprecedented, however, employing our defense boosted the robustness of the BERT model under all attacks. Table 4 shows that, under the Alzantot et al. attack, the defense improved the accuracy of the model by 23.60%. Similarly, under the Ren et al. attack, the accuracy of the model was increased by 29.61%.",
"cite_spans": [
{
"start": 280,
"end": 310,
"text": "Alzantot et al. and Ren et al.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 4",
"ref_id": null
},
{
"start": 852,
"end": 859,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "News Categorization Task",
"sec_num": "5.3"
},
{
"text": "While the defended classifiers had higher accuracy scores than the undefended classifiers across all tasks, adversarial attacks, and datasets, it is important to determine whether the difference in performance of the defended models is statistically significant. Many researchers recommend McNemar's test (McNemar, 1947) for comparing the performance of two classifiers (Salzberg, 1997; Dietterich, 1998; Japkowicz and Shah, 2011; Costa et al., 2018) as it has a lower probability of Type I error. McNemar's is a non-parametric pairwise test designed for comparing two populations, or in this case, the predictions from two different classifiers on the same test dataset. In this paper, McNemar's test was applied to compare the performance of the defended models with their undefended counterparts (studied in Sections 5.1, 5.2, and 5.3). Here, we wish to compare the performance of the defended CNN with the undefended CNN, the de-fended SVM with the undefended SVM, etc.",
"cite_spans": [
{
"start": 290,
"end": 320,
"text": "McNemar's test (McNemar, 1947)",
"ref_id": null
},
{
"start": 370,
"end": 386,
"text": "(Salzberg, 1997;",
"ref_id": "BIBREF32"
},
{
"start": 387,
"end": 404,
"text": "Dietterich, 1998;",
"ref_id": "BIBREF12"
},
{
"start": 405,
"end": 430,
"text": "Japkowicz and Shah, 2011;",
"ref_id": "BIBREF19"
},
{
"start": 431,
"end": 450,
"text": "Costa et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Analysis",
"sec_num": "5.4"
},
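A minimal sketch of this comparison using the McNemar implementation in statsmodels (an assumption about tooling; the paper does not name the library it used). The 2x2 table counts test samples on which each classifier was correct or incorrect, and the test asks whether the disagreements are symmetric.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_defended(y_true, undefended_pred, defended_pred, alpha=0.05):
    y_true = np.asarray(y_true)
    a_correct = np.asarray(undefended_pred) == y_true
    b_correct = np.asarray(defended_pred) == y_true
    # Rows: undefended correct/incorrect; columns: defended correct/incorrect.
    table = [[np.sum(a_correct & b_correct), np.sum(a_correct & ~b_correct)],
             [np.sum(~a_correct & b_correct), np.sum(~a_correct & ~b_correct)]]
    result = mcnemar(table, exact=False, correction=True)
    return result.pvalue, result.pvalue < alpha
```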
{
"text": "We performed McNemar's test to determine if there was a significant difference between the accuracy of the defended models and that of the undefended ones. We tested the null hypothesis, which states that there is no significant difference in the accuracy of the models studied, and the alternative hypothesis, which states that there is a difference in the accuracy of the models studied. Several comparisons were performed, and the significance threshold for each individual pairwise test was adjusted to 0.05. In all cases, the difference between the defended models and the undefended models (the p-value) was significant (< 0.05). Thus, we reject the null hypothesis which assumed there was no difference between the classifiers, in favor of the alternative. The results show that there was a statistically significant difference in the accuracy of all models, which indicates that the defended models had significantly better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Analysis",
"sec_num": "5.4"
},
{
"text": "In this paper, we proposed a structure-free defensive method that is capable of improving the performance of DNN-based models with both clean and adversarial data. Our findings show that replacing the embeddings of the important words in the input samples with the average of their synonyms' embeddings can significantly improve the generalization of DNN-based models. Our results indicate that the proposed defense is not only capable of defending against adversarial attacks, but is also capable of improving the performance of DNNbased models when tested on benign data. On average, the proposed defense improved the classification accuracy of the CNN and Bi-LSTM models by 41.30% and 55.66%, respectively, when tested under adversarial attacks. Extended investigation shows that our defensive method can improve the robustness of nonneural models, achieving an average of 17.62% and 22.93% classification accuracy increase on the SVM and XGBoost models, respectively. The proposed defensive method has also shown an average of 26.60% classification accuracy improvement when tested with the infamous BERT model. In further work, we plan to generalize our approach to achieve robustness against other types of adversarial attacks in NLP. We also hope to evaluate the defense with a variety of NLP systems, such as textual entailment systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://wordnet.princeton.edu/ 3 https://www.wiktionary.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Toward mitigating adversarial texts",
"authors": [
{
"first": "Basemah",
"middle": [],
"last": "Alshemali",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Computer Applications",
"volume": "178",
"issue": "50",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basemah Alshemali and Jugal Kalita. 2019. Toward mitigating adversarial texts. International Journal of Computer Applications, 178(50):1-7.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving the reliability of deep neural networks in NLP: A review. Knowledge-Based Systems",
"authors": [
{
"first": "Basemah",
"middle": [],
"last": "Alshemali",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "191",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Basemah Alshemali and Jugal Kalita. 2020. Improving the reliability of deep neural networks in NLP: A review. Knowledge-Based Systems, 191(105210):1- 19.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Did you hear that? adversarial examples against automatic speech recognition",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Bharathan",
"middle": [],
"last": "Balaji",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Bharathan Balaji, and Mani Srivas- tava. 2017. Did you hear that? adversarial examples against automatic speech recognition. In Proceed- ings of the 31st Conference on Neural Information Processing Systems.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Generating natural language adversarial examples",
"authors": [
{
"first": "Moustafa",
"middle": [],
"last": "Alzantot",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgohary",
"suffix": ""
},
{
"first": "Bo-Jhang",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Mani",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2890--2896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, pages 2890-2896.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large an- notated corpus for learning natural language infer- ence. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards evaluating the robustness of neural networks",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Carlini",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Symposium on Security and Privacy (SP)",
"volume": "",
"issue": "",
"pages": "39--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Audio adversarial examples: Targeted attacks on speech-totext",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Carlini",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE Security and Privacy Workshops (SPW)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Carlini and David Wagner. 2018. Audio ad- versarial examples: Targeted attacks on speech-to- text. In 2018 IEEE Security and Privacy Workshops (SPW), pages 1-7. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Xgboost: A scalable tree boosting system",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In International Con- ference on knowledge discovery and data mining, pages 785-794. ACM.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Exploiting future word contexts in neural network language models for speech recognition",
"authors": [
{
"first": "Xie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xunying",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Ragni",
"suffix": ""
},
{
"first": "H",
"middle": [
"M"
],
"last": "Jeremy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gales",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing",
"volume": "27",
"issue": "9",
"pages": "1444--1454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xie Chen, Xunying Liu, Yu Wang, Anton Ragni, Jeremy HM Wong, and Mark JF Gales. 2019. Ex- ploiting future word contexts in neural network lan- guage models for speech recognition. IEEE/ACM Transactions on Audio, Speech, and Language Pro- cessing, 27(9):1444-1454.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Supportvector networks",
"authors": [
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Machine Learning",
"volume": "20",
"issue": "",
"pages": "273--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine Learning, 20(3):273- 297.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adaptive learning models evaluation in Twitter's timelines",
"authors": [
{
"first": "Joana",
"middle": [],
"last": "Costa",
"suffix": ""
},
{
"first": "Catarina",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Antunes",
"suffix": ""
},
{
"first": "Bernardete",
"middle": [],
"last": "Ribeiro",
"suffix": ""
}
],
"year": 2018,
"venue": "International Joint Conference on Neural Networks",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joana Costa, Catarina Silva, Mario Antunes, and Bernardete Ribeiro. 2018. Adaptive learning models evaluation in Twitter's timelines. In International Joint Conference on Neural Networks, pages 1-8. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In The Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Approximate statistical tests for comparing supervised classification learning algorithms",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 1998,
"venue": "Neural Computation",
"volume": "10",
"issue": "7",
"pages": "1895--1923",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G Dietterich. 1998. Approximate statistical tests for comparing supervised classification learn- ing algorithms. Neural Computation, 10(7):1895- 1923.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Hotflip: White-box adversarial examples for NLP",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "The Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial exam- ples for NLP. In The Annual Meeting of the Associ- ation for Computational Linguistics, pages 31-36.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Black-box generation of adversarial text sequences to evade deep learning classifiers",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Lanchantin",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Lou"
],
"last": "Soffa",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Security and Privacy Workshops",
"volume": "",
"issue": "",
"pages": "50--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yan- jun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In IEEE Security and Privacy Workshops, pages 50- 56.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Monte carlo sampling methods using markov chains and their applications",
"authors": [
{
"first": "W Keith",
"middle": [],
"last": "Hastings",
"suffix": ""
}
],
"year": 1970,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W Keith Hastings. 1970. Monte carlo sampling meth- ods using markov chains and their applications. Ox- ford University Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bag of tricks for image classification with convolutional neural networks",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Tong He",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongyue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junyuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. 2019. Bag of tricks for image classification with convolutional neural net- works. In The IEEE Conference on Computer Vision and Pattern Recognition.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "George",
"middle": [
"E"
],
"last": "Dahl",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Signal Processing Magazine",
"volume": "29",
"issue": "6",
"pages": "82--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process- ing Magazine, 29(6):82-97.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Evaluating Learning Algorithms: A Classification Perspective",
"authors": [
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
},
{
"first": "Mohak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathalie Japkowicz and Mohak Shah. 2011. Evaluat- ing Learning Algorithms: A Classification Perspec- tive. Cambridge University Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Is BERT really robust? a strong baseline for natural language attack on text classification and entailment",
"authors": [
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhijing",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Joey",
"middle": [
"Tianyi"
],
"last": "Zhou",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2020,
"venue": "the Association for the Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? a strong baseline for natural language attack on text classi- fication and entailment. In the Association for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Decipherment of substitution ciphers with neural language models",
"authors": [
{
"first": "Nishant",
"middle": [],
"last": "Kambhatla",
"suffix": ""
},
{
"first": "Anahita",
"middle": [],
"last": "Mansouri Bigvand",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
}
],
"year": 2018,
"venue": "the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "869--874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nishant Kambhatla, Anahita Mansouri Bigvand, and Anoop Sarkar. 2018. Decipherment of substitution ciphers with neural language models. In the Con- ference on Empirical Methods in Natural Language Processing, pages 869-874.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 1746-1751.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "L",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Maas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Daly",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "The Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In The Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies, pages 142-150.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Equation of state calculations by fast computing machines",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Metropolis",
"suffix": ""
},
{
"first": "Arianna",
"middle": [
"W"
],
"last": "Rosenbluth",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Marshall",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rosenbluth",
"suffix": ""
}
],
"year": 1953,
"venue": "The Journal of Chemical Physics",
"volume": "21",
"issue": "6",
"pages": "1087--1092",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Metropolis, Arianna W Rosenbluth, Mar- shall N Rosenbluth, Augusta H Teller, and Edward Teller. 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087-1092.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "WordNet: A lexical database for English",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The limitations of deep learning in adversarial settings",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Somesh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Fredrikson",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Berkay Celik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Swami",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE European Symposium, Security and Privacy",
"volume": "",
"issue": "",
"pages": "372--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversar- ial settings. In IEEE European Symposium, Security and Privacy, pages 372-387.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1532-1543.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Neural and rule-based Finnish NLP models-expectations, experiments and experiences",
"authors": [
{
"first": "",
"middle": [],
"last": "Tommi A Pirinen",
"suffix": ""
}
],
"year": 2019,
"venue": "the International Workshop on Computational Linguistics for Uralic Languages",
"volume": "",
"issue": "",
"pages": "104--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi A Pirinen. 2019. Neural and rule-based Finnish NLP models-expectations, experiments and experi- ences. In the International Workshop on Computa- tional Linguistics for Uralic Languages, pages 104- 114.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generating natural language adversarial examples through probability weighted word saliency",
"authors": [
{
"first": "Yihe",
"middle": [],
"last": "Shuhuai Ren",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Che",
"suffix": ""
}
],
"year": 2019,
"venue": "The Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1085--1097",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In The Annual Meeting of the Association for Com- putational Linguistics, pages 1085 -1097.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "On comparing classifiers: Pitfalls to avoid and a recommended approach",
"authors": [
{
"first": "L",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salzberg",
"suffix": ""
}
],
"year": 1997,
"venue": "Data Mining and Knowledge Discovery",
"volume": "1",
"issue": "3",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven L Salzberg. 1997. On comparing classifiers: Pit- falls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1(3):317-328.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bidirectional attention flow for machine comprehension",
"authors": [
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Generating fluent adversarial examples for natural languages",
"authors": [
{
"first": "Huangzhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5564--5569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019. Generating fluent adversarial examples for natural languages. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5564-5569.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Adversarial attacks on deep-learning models in natural language processing: A survey",
"authors": [
{
"first": "Wei",
"middle": [
"Emma"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Ahoud",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Alhazmi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on Intelligent Systems and Technology (TIST)",
"volume": "11",
"issue": "3",
"pages": "1--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, and Chenliang Li. 2020. Adversarial attacks on deep-learning models in natural language process- ing: A survey. ACM Transactions on Intelligent Sys- tems and Technology (TIST), 11(3):1-41.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in Neural Information Pro- cessing Systems, pages 649-657.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "the 2 nd most important token replacement Input sample with the 3 rd most important token replacement Figure 2:"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "and the attack of Ren et al. (2019), explained in Section 2."
}
}
}
}