|
{ |
|
"paper_id": "2022", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:30:54.482952Z" |
|
}, |
|
"title": "Towards Stronger Adversarial Baselines Through Human-AI Collaboration", |
|
"authors": [ |
|
{ |
|
"first": "Wencong", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Oregon Eugene", |
|
"location": { |
|
"region": "OR" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Lowd", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Oregon Eugene", |
|
"location": { |
|
"region": "OR" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Natural language processing (NLP) systems are often used for adversarial tasks such as detecting spam, abuse, hate speech, and fake news. Properly evaluating such systems requires dynamic evaluation that searches for weaknesses in the model, rather than a static test set. Prior work has evaluated such models on both manually and automatically generated examples, but both approaches have limitations: manually constructed examples are time-consuming to create and are limited by the imagination and intuition of the creators, while automatically constructed examples are often ungrammatical or labeled inconsistently. We propose to combine human and AI expertise in generating adversarial examples, benefiting from humans' expertise in language and automated attacks' ability to probe the target system more quickly and thoroughly. We present a system that facilitates attack construction, combining human judgment with automated attacks to create better attacks more efficiently. Preliminary results from our own experimentation suggest that human-AI hybrid attacks are more effective than either human-only or AI-only attacks. A complete user study to validate these hypotheses is still pending.", |
|
"pdf_parse": { |
|
"paper_id": "2022", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Natural language processing (NLP) systems are often used for adversarial tasks such as detecting spam, abuse, hate speech, and fake news. Properly evaluating such systems requires dynamic evaluation that searches for weaknesses in the model, rather than a static test set. Prior work has evaluated such models on both manually and automatically generated examples, but both approaches have limitations: manually constructed examples are time-consuming to create and are limited by the imagination and intuition of the creators, while automatically constructed examples are often ungrammatical or labeled inconsistently. We propose to combine human and AI expertise in generating adversarial examples, benefiting from humans' expertise in language and automated attacks' ability to probe the target system more quickly and thoroughly. We present a system that facilitates attack construction, combining human judgment with automated attacks to create better attacks more efficiently. Preliminary results from our own experimentation suggest that human-AI hybrid attacks are more effective than either human-only or AI-only attacks. A complete user study to validate these hypotheses is still pending.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Humans have used language to deceive each other for millennia. With the advent of NLP systems, humans now work to deceive models and algorithms, from evading email spam filters in the early 2000s to defeating classifiers for social network spam, abusive language, misinformation, and more. More recently, humans have developed automated adversarial attacks that minimally modify text while changing the output of a classifier or other NLP systems (Ebrahimi et al., 2018) . These automated attacks have the potential to be much more efficient than humans, helping attackers to find weaknesses in models and helping defenders find and patch", |
|
"cite_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 470, |
|
"text": "(Ebrahimi et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Original \u2192 Perturbed Text Label PSO city by the sea swings from one approach to the other , but in the end , it stays in formula -which is a [waste \u2192 moor] of de niro , mcdormand and the other good actors in the cast .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Neg. those same weaknesses (Xie et al., 2021; Zhou et al., 2019) . The number of automated attacks continues to grow but their effectiveness remains low - Wang et al. (2021a) found that 90% of automated adversarial attacks changed the semantics of the original input or confused human annotators. We have observed similar behavior, as shown in Table 1 . These examples are generated by word-level attack algorithms PSO (Zang et al., 2020) , BAE (Garg and Ramakrishnan, 2020) , and PWWS (Ren et al., 2019) , as implemented in the TextAttack framework (Morris et al., 2020) , on the sentiment dataset SST-2 (Socher et al., 2013) against BERT model (Devlin et al., 2019) . Although all perturbations change the predicted label, PSO chooses a synonym that is inappropriate in the context, BAE selects a complete antonym, and PWWS picks some rare substitutes that are nonsensical and possibly offensive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 45, |
|
"text": "(Xie et al., 2021;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 46, |
|
"end": 64, |
|
"text": "Zhou et al., 2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 155, |
|
"end": 174, |
|
"text": "Wang et al. (2021a)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 438, |
|
"text": "(Zang et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 474, |
|
"text": "(Garg and Ramakrishnan, 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 504, |
|
"text": "(Ren et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 571, |
|
"text": "(Morris et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 626, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 667, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 351, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Doubtless, humans can be more effective than these attacks, given their effectiveness against realworld spam and abuse filters. We believe that the next step for adversarial attacks and robust NLP is human-AI collaboration, in which humans work with automated adversarial algorithms to pro-duce effective attacks efficiently. Furthermore, realworld attackers are already doing this. Spammers already use many different technologies to accomplish their tasks, including text spinners to rewrite text, HTML tricks to conceal suspicious text, botnets to scale up and avoid IP bans, and more. A typical spammer does not craft every message individually, but uses semi-automated techniques to generate many different messages 1 . In response, a growing amount of NLP research is now using human expertise through human-in-the-loop (HITL) methods to create new benchmarking datasets for evaluating and improving the robustness of NLP systems to adversarial inputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Thus far, human expertise in adversarial NLP tasks has been limited. There is a growing body of work in which humans are asked to craft inputs where a given model will perform poorly, but they receive little support in doing so -sometimes word saliences (Mozes et al., 2021) , sometimes model predictions (Kiela et al., 2021) , and sometimes even less. Overall, the effort between humans and machines is still largely separate; that is, humans generate adversarial examples alone based on model interpretations, without directly interacting with any attack algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 274, |
|
"text": "(Mozes et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 325, |
|
"text": "(Kiela et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we study the potential of direct human-AI interaction for generating higher-quality adversarial examples for NLP tasks. We work with the state-of-the-art word-level attacks on benchmark datasets for sentiment analysis and abuse detection. We choose word-level attacks as they can be more subtle than character-level attacks, which have obvious misspellings. We design an interactive user interface that enables four types of attacks, including two human-AI collaboration methods. Instead of a pure black-box environment, our interface explains the algorithm's search space and allows humans to modify and improve the perturbations while giving humans immediate feedback from the target NLP model. Along with generated attacks, we collect data for user experience and user preference with regard to different attack approaches. We then further study the collected data and analyze the impact of proposed human-AI collaboration methods and the degree of improvement on the adversarial examples. At present, we have pilot data from using the system ourselves; a full user study is pending IRB approval. 1 For an example of a spammer script that does this, see https://alexking.org/blog/2013/12/ 22/spam-comment-generator-script.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1115, |
|
"end": 1116, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We summarize our contributions as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We propose a novel human-AI collaboration strategy to enable direct human and AI interaction for generating word-level adversarial examples for NLP tasks effectively and efficiently.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We design a framework with friendly user interface to realize four types of attack methods on benchmark datasets against state-of-theart NLP models. In addition to helping generate adversarial examples, the framework also collects self-and peer-evaluation of example quality and user feedback about the interface.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 We share initial results based on our own use of the system, while IRB approval for a full study is pending.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows: Section 2 discusses work related to our research. Section 3 introduces our framework, the human-AI collaboration methods and the evaluation metrics. Section 4 gives preliminary results and some brief analysis for our findings. Section 5 explains the stages of experiments for generating and collecting quality data. Finally, we conclude and discuss future work in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attack", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We review prior work on automated adversarial attacks for NLP, and HITL in adversarial learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Automated adversarial attacks for NLP: With the growth of research that studies adversarial learning in NLP, a variety of attack methods have been developed on multiple levels. From characterlevel modifications such as HotFlip (Ebrahimi et al., 2018) , DeepWordBug (Gao et al., 2018) , and VIPER (Eger et al., 2019) , to word-level perturbations such as BAE (Garg and Ramakrishnan, 2020) , PSO (Zang et al., 2020) , PWWS (Ren et al., 2019) , and TextFooler . Many of them have been aggregated and organized by toolchains like TextAttack (Morris et al., 2020) and OpenAttack (Zeng et al., 2021) for easy access to researchers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 250, |
|
"text": "(Ebrahimi et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 283, |
|
"text": "(Gao et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 315, |
|
"text": "(Eger et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 387, |
|
"text": "(Garg and Ramakrishnan, 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 413, |
|
"text": "(Zang et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 439, |
|
"text": "PWWS (Ren et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 558, |
|
"text": "(Morris et al., 2020)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 593, |
|
"text": "(Zeng et al., 2021)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For character-level attacks, although they show their effectiveness in many ways, they mainly fall in the following two categories: Some of the character-level modifications can be seen as typos if an algorithm simply influences the embedding space by replacing/inserting/deleting one or a few characters in a word, such as DeepWordBug (Gao et al., 2018) , then they may be easily detected by a grammar checker tool, like Grammarly 2 ; the others can introduce some unique encoding/decoding methods and transform letters to another form, such as VIPER (Eger et al., 2019 ) that adds accent signs on top of each letter, and these modification may be easily identified by human. Overall, character-level perturbations tend to be more obvious.", |
|
"cite_spans": [ |
|
{ |
|
"start": 336, |
|
"end": 354, |
|
"text": "(Gao et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 570, |
|
"text": "(Eger et al., 2019", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "On the other hand, the study of word-level attacks is more popular, as a substitute for a word may significantly impact the semantics of the text. Many attack methodologies have been investigated for searching for the optimal synonym substitutions, including BERT-based contextual prediction (Garg and Ramakrishnan, 2020; Li et al., 2020) , gradient-based word swap (Ebrahimi et al., 2018; Wallace et al., 2019) , particle swarm optimization (Zang et al., 2020) , and greedy word search with saliency scores (Ren et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 321, |
|
"text": "(Garg and Ramakrishnan, 2020;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 338, |
|
"text": "Li et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 389, |
|
"text": "(Ebrahimi et al., 2018;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 411, |
|
"text": "Wallace et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 461, |
|
"text": "(Zang et al., 2020)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 508, |
|
"end": 526, |
|
"text": "(Ren et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We summarize three attacks that are included in our framework. BAE: BERT-based Adversarial Examples (BAE), a black-box contextual perturbation algorithm based on a BERT masked language model (MLM). BAE masks some part of the text, then replaces and inserts tokens into the text, using the BERT-MLM to generate adversarial examples. PWWS: Probability Weighted Word Saliency (PWWS), a black-box greedy algorithm that ranks the importance of words based on the saliency score and calculates the classification probability that are used to determine the synonym substitution. TextFooler: TextFooler, a black-box greedy algorithm identifies the important words and replaces them with the words that are most semantically similar and grammatically correct with a higher priority until the prediction is altered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
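
As a minimal sketch of the greedy recipe these three attacks share (this is an illustration, not code from our framework; predict_proba and synonyms are hypothetical stand-ins for the target classifier and the substitution source, e.g., BERT-MLM predictions for BAE, WordNet synonyms for PWWS, or counter-fitted embeddings for TextFooler):

```python
# Sketch of the greedy word-substitution recipe shared by BAE, PWWS, and
# TextFooler. `predict_proba` and `synonyms` are hypothetical placeholders.
from typing import Callable, Dict, List


def greedy_word_attack(
    words: List[str],
    true_label: int,
    predict_proba: Callable[[List[str]], List[float]],
    synonyms: Dict[str, List[str]],
    max_changes: int = 5,
) -> List[str]:
    base = predict_proba(words)[true_label]

    # 1. Word importance: drop in true-class probability when a word is deleted.
    def importance(i: int) -> float:
        reduced = words[:i] + words[i + 1:]
        return base - predict_proba(reduced)[true_label]

    order = sorted(range(len(words)), key=importance, reverse=True)

    perturbed = list(words)
    for i in order[:max_changes]:
        # 2. Try each candidate substitute; keep the one that lowers the
        #    true-class probability the most.
        best_word, best_prob = perturbed[i], predict_proba(perturbed)[true_label]
        for cand in synonyms.get(perturbed[i], []):
            trial = perturbed[:i] + [cand] + perturbed[i + 1:]
            prob = predict_proba(trial)[true_label]
            if prob < best_prob:
                best_word, best_prob = cand, prob
        perturbed[i] = best_word

        # 3. Stop as soon as the predicted label flips.
        probs = predict_proba(perturbed)
        if max(range(len(probs)), key=probs.__getitem__) != true_label:
            break
    return perturbed
```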
|
{ |
|
"text": "These automated word-level attacks mostly rely on the knowledge of existing target models and algorithms' intensive search to locate the best synonym substitutions. However, recent work (Xie et al., 2021 (Xie et al., , 2022 shows that the quality of generated adversarial examples is actually far from satisfactory, with respect to the low attack success rate across domains, incorrect grammar, and distorted meaning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 203, |
|
"text": "(Xie et al., 2021", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 223, |
|
"text": "(Xie et al., , 2022", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "HITL in adversarial learning: As the capacity of automated algorithms may be limited, many researchers propose incorporating crowd-sourcing into generating and annotating adversarial exam-2 Grammarly, https://www.grammarly.com/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "ples. The Dynabench framework asks humans to manually construct examples where an NLP system would perform poorly (Kiela et al., 2021) . A HITL QA system that asks humans to write adversarial questions that break a QA system while remaining answerable by humans (Wallace and Boyd-Graber, 2018) . The Adversarial NLI project asks humans to annotate mislabeled data and uses humans as adversaries to create a benchmark natural language inference (NLI) dataset for a more robust NLP model (Nie et al., 2020) . The most related work compares the performance of human-and machinegenerated word-level adversarial examples for NLP classification tasks (Mozes et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 134, |
|
"text": "(Kiela et al., 2021)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 293, |
|
"text": "(Wallace and Boyd-Graber, 2018)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 504, |
|
"text": "(Nie et al., 2020)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 665, |
|
"text": "(Mozes et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "However, existing work falls short of direct collaboration between humans and AI. The advantages of human crowd-sourcing and that of automated algorithms are still quite distinct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our framework, we study the potential of direct human-AI collaboration for generating higherquality adversarial examples. At the time of submission, we have completed the design of the framework, confirmed the details for human-AI collaboration, and implemented the interactive user interface.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Framework", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our task is divided into two parts: generating adversarial examples and evaluating adversarial examples. Figure 1 depicts the workflow. First we feed the input samples to the attack phase where four attack methods are implemented. Human participants then use these attack methods to generate adversarial examples aiming to fool the target model's predictions. Participants are asked to selfevaluate the quality of generated adversarial examples based on grammatical properties, the difficulty of generating those examples, and their experiences with the system in terms of the helpfulness of different HITL strategies. Peer-evaluation is also included for evaluating the grammatical properties, and identifying the source of any given text.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 113, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Components & Workflow", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We implement three word-level attacks -BAE, PWWS, and TextFooler from the TextAttack library on sentiment dataset SST-2 and abuse comment dataset Hatebase (Davidson et al., 2017) against the RoBERTa target models (Liu et al., 2019) that are trained on these datasets separately. We use RoBERTa as the target model because it outper- forms BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) on various datasets across domains for classification in recent work (Xie et al., 2022) . We summarize the characters of these attacks in Table 2 . Please refer to Section 2 for a detailed description of them. All attacks share the same Greedy-WIR search method implemented in Tex-tAttack. We make certain modifications to the scripts in the TextAttack library to generate desired intermediate attack results, which are used as interpretable information for HITL adversarial attacks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 178, |
|
"text": "(Davidson et al., 2017)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 231, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 365, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 395, |
|
"text": "(Yang et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 483, |
|
"text": "(Xie et al., 2022)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 534, |
|
"end": 541, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Components & Workflow", |
|
"sec_num": "3.1" |
|
}, |
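
The three attacks are exposed as recipes in the TextAttack library. As a rough sketch of how such an attack can be run against a RoBERTa sentiment classifier (the checkpoint below is a public TextAttack model used purely for illustration, not one of the target models trained for this study, and the modified scripts that expose intermediate search results are not shown), one might write:

```python
# Sketch: running a standard TextAttack recipe (TextAttack >= 0.3 assumed)
# against a public RoBERTa SST-2 checkpoint, for illustration only.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from textattack.models.wrappers import HuggingFaceModelWrapper
from textattack.attack_recipes import PWWSRen2019

model_name = "textattack/roberta-base-SST-2"  # illustrative public checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = PWWSRen2019.build(wrapper)  # or BAEGarg2019 / TextFoolerJin2019
result = attack.attack("a gorgeous, witty, seductive movie.", 1)  # 1 = positive
print(result)  # shows the original vs. perturbed text and predictions
```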
|
{ |
|
"text": "For attack generation, we design an interactive user interface introducing four attack methods:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Auto: Black-box. Participants simply read and evaluate adversarial examples generated by one of the automated attack algorithms. Participants are not provided with any insight on how an automated attack algorithm modifies a sample, but the perturbed example itself. This method is considered as the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Manual: Black-box. Participants rely on their judgment solely to attack a given sample. The only information they receive is the immediate target model prediction. Once an adversarial example is entered, the target model returns the prediction result to show whether or not the crafted example has successfully flipped the predictive label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Select: Gray-box. Participants are given intermediate perturbation results from the automated algorithm -specifically, keywords and potential substitution candidates for each keyword. Participants can select the best word substitute using dropdown lists, or enter an alternative word in a text input box. See Figure 5 for the interface. Basically, the Select method relaxes the constraints from the automated algorithm, and allows humans to modify up to five keywords. The immediate predictive label and probability of the selected word combination from the target model is also provided to show whether the chosen words have successfully changed the prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 319, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
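
A minimal sketch of the feedback loop behind the Select method (names such as apply_selection and predict are hypothetical; the real interface is a web UI backed by the target model rather than this script):

```python
# Sketch of the Select feedback loop: apply the participant's chosen
# substitutes for the highlighted keywords and report the new prediction.
# `predict` stands in for the target classifier; all names are hypothetical.
from typing import Callable, Dict, List, Tuple


def apply_selection(
    words: List[str],
    choices: Dict[int, str],                       # keyword position -> chosen substitute
    predict: Callable[[str], Tuple[int, float]],   # text -> (label, probability)
) -> Tuple[str, int, float]:
    perturbed = [choices.get(i, w) for i, w in enumerate(words)]
    text = " ".join(perturbed)
    label, prob = predict(text)
    return text, label, prob


if __name__ == "__main__":
    def dummy_predict(text: str) -> Tuple[int, float]:
        return 0, 0.73  # stand-in for the target model's response

    text, label, prob = apply_selection(
        ["a", "waste", "of", "talent"], {1: "moor"}, dummy_predict
    )
    print(text, label, prob)
```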
|
{ |
|
"text": "\u2022 Saliency: Gray-box. Participants are shown a dynamic saliency map as they craft their adversarial examples. A saliency map shows what words the target model identifies as most important that are most likely to affect the prediction, and then marks those words with colors with different intensities. Unlike (Mozes et al., 2021) , where the interface displays word saliencies calculated by replacing the word with an out-of-vocabulary token, we implement the built-in method in each automated attack to calculate the saliency score. For example, BAE and TextFooler simply delete the word and calculate the word saliencies, while PWWS replaces each word with an unknown token and calculates the weighted saliency. The corresponding mathematical expressions are provided in A.2 of the Appendix. Overall, the Saliency method grants even more flexibility by allowing humans to change more words if necessary in order to preserve correct grammar and semantics. Participants can adjust their perturbation based on the dynamic saliency map and the target model's immediate prediction, see Figure 6 for the interface.", |
|
"cite_spans": [ |
|
{ |
|
"start": 309, |
|
"end": 329, |
|
"text": "(Mozes et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1083, |
|
"end": 1091, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
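
As a rough illustration of the deletion-based saliency used by BAE and TextFooler (a sketch assuming a public HuggingFace checkpoint rather than our trained target models; the interface itself reuses each attack's built-in scoring rather than this standalone script), each word can be scored by how much the true-class probability drops when the word is removed:

```python
# Sketch: deletion-based word saliency, in the style of BAE and TextFooler.
# Assumes a public HuggingFace checkpoint; not the framework's own code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "textattack/roberta-base-SST-2"  # illustrative checkpoint
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)


def prob(text: str, label: int) -> float:
    """Probability the model assigns to `label` for `text`."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, label].item()


def deletion_saliency(words, label):
    """Drop in true-class probability when each word is deleted in turn."""
    base = prob(" ".join(words), label)
    return [
        base - prob(" ".join(words[:i] + words[i + 1:]), label)
        for i in range(len(words))
    ]


words = "a waste of de niro and mcdormand".split()
for w, s in zip(words, deletion_saliency(words, label=0)):  # 0 = negative
    print(f"{w:>10s}  {s:+.3f}")  # higher = more important to the prediction
```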
|
{ |
|
"text": "For each method, participants are given a small number of original samples selected from one of the datasets, perform adversarial attacks on those samples with or without the assistance of the automated algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating Adversarial Examples", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To evaluate generated adversarial examples, we consider the following properties:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Grammar: measures whether or not the text contains any syntax errors, and retains the original or similar semantics. This is crucial for identifying if an adversarial attack is successful, as if the perturbation is fundamentally wrong by making the sentence unreadable or flipping the emotion of the message completely, we consider it as a failed attack.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Plausibility: measures whether or not the text is naturally crafted by native speakers. A piece of text is highly plausible if it is natural, logically correct, appropriately worded, and preserving meaningful messages (Wang et al., 2021b) . These properties appear as naturalness, correctness, appropriateness and meaningfulness in our user interface.", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 240, |
|
"text": "(Wang et al., 2021b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Effort: reflects the difficulty level for participants to successfully perform adversarial attacks using different attack methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u2022 Helpfulness: collects the degree of helpfulness of the information provided to participants to assist with generating adversarial examples in different attack methods (i.e., intermediate search results, lists of candidates, saliency maps, and more).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "All properties are evaluated on a scale from 1 to 5 where 5 indicates the best quality, the most difficult, or the most helpful, depending on the specific property; see Figure 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 177, |
|
"text": "Figure 7", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Participants are required to self-evaluate their own constructed examples using each of the attack methods. Since self-evaluation can be very subjective, to ensure the fairness and to yield a more balanced and less biased analysis and outcome, we also plan to include anonymous peer-evaluation using Amazon Mechanical Turk (AMT) 3 with a group of AMT workers who are excluded from previous attack tasks. Each AMT worker reads a random subset of the adversarial examples, identifies what source an example may come from, and evaluates the grammatical quality (i.e. grammar and plausibility) of that example on the same scales.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Adversarial Examples", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our hypotheses are that with minimal human collaboration, compared to automated attacks alone, the attacks would yield more promising results that are meaningful while holding correct grammar and semantics. In our preliminary work, we already see promise for this direction. Table 3 shows an example where PWWS on its own failed to come up with a good attack example, but succeeded in identifying the key text to modify. A human was then able to propose alternative text, which tricked the classifier while maintaining the correct semantics. As a pilot experiment, to test the viability of the framework before recruiting participants, the authors used the framework on themselves to collect 532 unique adversarial examples generated from the SST-2 dataset. By studying these examples, we have seen the following patterns (which we hypothesize will extend to the full experiments):", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 282, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Success Rate: Figure 2 shows the attack success rate across all attack methods. Though an automated attack may have a higher attack success rate due to the advantage of intensive search and the NLP model-oriented design, humans can achieve comparable attack success rate if provided with better human-AI interaction. Additionally, manually crafted attacks without any assist cannot compete with the those generated through other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 22, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Grammar and Plausibility: Figure 3 presents the average scores for grammar and plausibility, where the error bars denote the standard errors of the scores. The scores are aggregated and averaged per the attack method from the self-evaluation results over the 532 adversarial examples. It is obvious that human-generated adversarial examples on average have higher scores considering the grammatical properties and plausibility. Manual attack and HITL methods seem to produce higherquality adversarial examples with the assistance of automated algorithms, as compared to automated The error bars denote the standard errors of the scores. The results illustrate that humans are able to perturb an NLP model with more effort but fewer queries, and the gray-box setting, which includes additional information for the participants, is easier to attack than the black-box settings. The extra information provides some insight and explanation about how an automate algorithm understands the NLP model and how an NLP model decides the predictions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 34, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We plan to hire approximately 54 adult native English speakers, of whom we expect a subset to be experts in NLP or linguistics, from our local university to generate adversarial examples, and additional adult native English speaker AMT workers for peer-evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Unlike the recent work of Mozes et al. (2021) , which relies entirely on online crowd-sourcing on AMT, we carry on in-person experiments for attack generation, where we provide a few examples and detailed instructions to the participants to show how our interface operates, and what the standards/baselines are for evaluating the adversarial examples. We expect to obtain higher-quality data by bringing participants into a more controlled environment where it's easier to provide instruction, answer questions, and receive feedback.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 45, |
|
"text": "Mozes et al. (2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To motivate participants through the process, we have designed an incentive payment plan. Details are included in A.3 of the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Stage 1: adversarial example generation and self-evaluation. In each task, each participant is asked to work with approximately 15 examples from a source dataset, generating adversarial examples based on the source examples. We show the same examples to three different participants, who work independently to find their own adversarial examples. This gives us a chance to observe how varied the solutions are; if solutions vary substantially, then a larger group of people may have a better chance to find a good attack.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To increase the quality of the adversarial examples, we plan to have each participant complete the Auto and Manual methods before moving on to our proposed HITL methods. This also serves the purpose of training participants in these tasks, similar to tasks 1-3 by Mozes et al. (2021) . By doing so, participants have the chance to get familiar with our user interface, and get a better understanding of the capacity of an automated attack algorithm versus a human, in terms of influencing the target model's predictions. They then closely interact with the automated algorithms and the target model, where they obtain extra interpretable information from both parties that could assist them with more effective perturbations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 283, |
|
"text": "Mozes et al. (2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To increase the independence of the factors that may potentially impact the experiment results statistically, such as the order of samples and attack tasks being presented to an participant, we mix up the order of samples in each attack method, and we switch the order of attack methods before giving them to the participants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Each participant at our local university is expected to submit about 45 adversarial examples if they successfully complete all four tasks (the examples are not necessarily all successful attacks). We also collect all the attempts they make between two submissions and consider the total number of attempts as the number of queries. We are hoping to gather at least 2000 unique and quality adversarial examples among participants from all tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Stage 2: peer-evaluation After collecting and organising generated adversarial examples, we will recruit an independent group of AMT workers to annotate the data. Similar to (Mozes et al., 2021) , we plan to select AMT workers based on their historical performance. That is, AMT workers who have successfully completed more than 1000 human intelligence tasks, and have an approval rate that is higher than 98% would be selected for peerevaluation. We present AMT workers with a few adversarial examples (approximately 50 examples) generated by humans and/or automated algorithms, randomly and anonymously. Each example is evaluated by three AMT workers to reduce variance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 194, |
|
"text": "(Mozes et al., 2021)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We aim to recruit 30 qualified AMT workers and hope to gather 1500 unique peer-evaluation results from them for about 500 examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Planned Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Humans have excellent intuition about language, but weak intuition about deep networks; automated attacks are often the opposite. Given the weak performance of manual attacks and automated attacks against NLP systems, some type of human-AI collaboration is essential to truly evaluate their robustness, and to be prepared for the inevitable attacks from real-world adversaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the future, we will carry out the experiments as designed, and further include the IMDB movie review dataset curated by (Maas et al., 2011) . As the texts in the IMDB dataset are often longer, this dataset may provide participants greater flexibility in modifying the examples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 142, |
|
"text": "(Maas et al., 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We believe that further study into collaboration methods will lead to a better understanding of adversarial attacks and more robust NLP models. We hope to provide a new benchmark for HITL adversarial learning while we continue exploring other effective human-AI collaboration methods. We hope that our framework will help researchers and practitioners better evaluate the robustness of NLP models to the best attacks that humans and algorithms can construct, and then improve their models by training on these adversarial examples. we expect them to finish the task in 60 minutes, and we pay $28/person. The Select and Saliency may also require some effort and attempts so that we expect them to complete the tasks in 90 minutes, and we pay $40/person for each task. By doing so, we hope to keep participants interested and motivated throughout the whole process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also plan to reward ten participants $10 who give constructive feedback for our user interface or experiment design through a drawing system. Additionally, we will double the pay for the top three participants who provide the most quality adversarial examples, where the quality is evaluated anonymously on AMT during the peer-evaluation phase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For peer-evaluation performed on AMT, We will match the market prices and pay $0.2\u223c0.25/example to the AMT workers. Peerevaluation is fairly straightforward, and we estimate that it takes no more than 90 minutes for each AMT worker to complete the task. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Amazon Mechanical Turk, see https://www.mturk. com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by a grant from the Defense Advanced Research Projects Agency (DARPA), agreement number HR00112090135. This work benefited from access to the University of Oregon high-performance computer, Talapas.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A.1 User Interface See Figures 5, 6, and 7 on the next few pages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Appendix", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We now describe the word salience methods used by BAE, TextFooler, and PWWS. These approaches are first described by Ren et al., 2019) ; we summarize their methods below.Considering a sentence X consisting of n words X = {w 1 , w 2 , . . . , w n }, and its true label y, BAE and TextFooler simply delete a word w i and measure the word importance I w i , \u2200w i \u2208 X for contributing to the model predictive score P (X). Denote the sentence without w i as X \\w i , whereThe importance score I w i is calculated as the difference between the predictive scores before and after deleting word w i , i.e.if P (X) = y and P (X \\w i ) =\u0177, where y \u0338 =\u0177.PWWS first replaces a word w i with a candidate word w * i to form a new sentence. . , w n }, where w * i is the best candidate that changes the predictive probability the most, calculated by. . , w n }, and w \u2032 i is a candidate token among all substitute candidates C for word w i . Therefore, the most significant predictive probability change is obtained byPWWS then calculates the standard saliency by replacing w i with an unknown token via S(X, w i ) = P (y|X) \u2212 P (y|X) whereX = {w 1 , . . . , unknown, . . . , w n }. A saliency vector S(X) is obtained by calculating the saliency for every word in the sentence. PWWS finally combines the predictive probability and the saliency vector through a dot product to get a probability weighted saliency score (Ren et al., 2019) . That iswhere \u03d5 is a softmax function. H(X, X * , w i ) eventually determines the word importance for PWWS.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 134, |
|
"text": "Ren et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1403, |
|
"end": 1421, |
|
"text": "(Ren et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.2 Word Saliency for BAE, TextFooler, and PWWS", |
|
"sec_num": null |
|
}, |
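
The following short sketch (with made-up probability values, purely for illustration) shows the final PWWS combination step, in which the saliency vector is passed through a softmax and multiplied elementwise by the per-word probability change of the best substitution:

```python
# Sketch of PWWS's probability-weighted saliency: H_i = softmax(S(X))_i * dP*_i.
# The numbers are made-up probability drops, for illustration only.
import numpy as np

S = np.array([0.02, 0.31, 0.05, 0.44])        # saliency S(X, w_i) per word
dP_star = np.array([0.01, 0.28, 0.03, 0.52])  # best-substitute drop dP*_i

phi = np.exp(S) / np.exp(S).sum()   # softmax over the saliency vector
H = phi * dP_star                   # probability-weighted word importance

print("attack order:", np.argsort(-H))  # PWWS perturbs words in this order
```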
|
{ |
|
"text": "Each participant at the university is expected to complete the adversarial example generation tasks using all four attack methods for consistency. Therefore, we create an incentive payment plan to motivate participants to work through the four tasks: Auto, Manual, Select, and Saliency. The Auto setting is fairly simple, which we expect participants to finish the task in less than 30 minutes, and we pay $12/person. The Manual setting is slightly more time-consuming and more difficult,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.3 Incentive Payment Plan", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automated hate speech detection and the problem of offensive language", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Davidson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Warmsley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Macy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ingmar", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "ICWSM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "512--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Davidson, Dana Warmsley, Michael W. Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM, pages 512-515.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "HotFlip: White-box adversarial examples for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Javid", |
|
"middle": [], |
|
"last": "Ebrahimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anyi", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Lowd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dejing", |
|
"middle": [], |
|
"last": "Dou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-2006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial exam- ples for text classification. In Proceedings of the 56th", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Annual Meeting of the Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "31--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Text processing like humans do: Visually attacking and shielding NLP systems", |
|
"authors": [ |
|
{ |
|
"first": "Steffen", |
|
"middle": [], |
|
"last": "Eger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00f6zde", |
|
"middle": [], |
|
"last": "G\u00fcl\u015fahin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "R\u00fcckl\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji-Ung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Schulz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohsen", |
|
"middle": [], |
|
"last": "Mesgar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krishnkant", |
|
"middle": [], |
|
"last": "Swarnkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edwin", |
|
"middle": [], |
|
"last": "Simpson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1634--1647", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steffen Eger, G\u00f6zde G\u00fcl\u015eahin, Andreas R\u00fcckl\u00e9, Ji-Ung Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1634-1647, Min- neapolis, Minnesota. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Black-box generation of adversarial text sequences to evade deep learning classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Lanchantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Lou" |
|
], |
|
"last": "Soffa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Security and Privacy Workshops", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--56", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/SPW.2018.00016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "BAE: BERT-based adversarial examples for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Siddhant", |
|
"middle": [], |
|
"last": "Garg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Goutham", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6174--6181", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.498" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text clas- sification. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 6174-6181, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Is BERT really robust? A strong baseline for natural language attack on text classification and entailment", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhijing", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joey", |
|
"middle": [ |
|
"Tianyi" |
|
], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Szolovits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 34th AAAI Conference on Artificial Intelligence, 32nd Innovative Applications of Artificial Intelligence Conference, and 10th AAAI Symposium on Educational Advances in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8018--8025", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classifica- tion and entailment. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, 32nd Innova- tive Applications of Artificial Intelligence Conference, and 10th AAAI Symposium on Educational Advances in Artificial Intelligence, New York, NY, USA, Febru- ary 7-12, 2020, pages 8018-8025. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Dynabench: Rethinking benchmarking in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Divyansh", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atticus", |
|
"middle": [], |
|
"last": "Geiger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengxuan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertie", |
|
"middle": [], |
|
"last": "Vidgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grusha", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pratik", |
|
"middle": [], |
|
"last": "Ringshia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyi", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tristan", |
|
"middle": [], |
|
"last": "Thrush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeerak", |
|
"middle": [], |
|
"last": "Waseem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4110--4124", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.naacl-main.324" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vid- gen, Grusha Prasad, Amanpreet Singh, Pratik Ring- shia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 4110-4124.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "BERT-ATTACK: Adversarial attack against BERT using BERT", |
|
"authors": [ |
|
{ |
|
"first": "Linyang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruotian", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qipeng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyang", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6193--6202", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.500" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-ATTACK: Adversar- ial attack against BERT using BERT. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6193-6202, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning word vectors for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Maas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eli", |
|
"middle": [], |
|
"last": "Lifland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jin", |
|
"middle": [ |
|
"Yong" |
|
], |
|
"last": "Yoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jake", |
|
"middle": [], |
|
"last": "Grigsby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--126", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.16" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A frame- work for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing: System Demonstrations, pages 119-126, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Contrasting human-and machine-generated word-level adversarial examples for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Maximilian", |
|
"middle": [], |
|
"last": "Mozes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Bartolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pontus", |
|
"middle": [], |
|
"last": "Stenetorp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bennett", |
|
"middle": [], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lewis", |
|
"middle": [], |
|
"last": "Griffin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8258--8270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maximilian Mozes, Max Bartolo, Pontus Stenetorp, Bennett Kleinberg, and Lewis Griffin. 2021. Con- trasting human-and machine-generated word-level adversarial examples for text classification. In Pro- ceedings of the 2021 Conference on Empirical Meth- ods in Natural Language Processing, pages 8258- 8270, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Adversarial nli: A new benchmark for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Dinan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douwe", |
|
"middle": [], |
|
"last": "Kiela", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4885--4901", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.441" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial nli: A new benchmark for natural language under- standing. In ACL, pages 4885-4901.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Generating natural language adversarial examples through probability weighted word saliency", |
|
"authors": [ |
|
{ |
|
"first": "Yihe", |
|
"middle": [], |
|
"last": "Shuhuai Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1085--1097", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1103" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial exam- ples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1085- 1097.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Trick me if you can: Adversarial writing of trivia challenge questions", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL 2018, Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "127--133", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-3018" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace and Jordan Boyd-Graber. 2018. Trick me if you can: Adversarial writing of trivia challenge questions. In Proceedings of ACL 2018, Student Re- search Workshop, pages 127-133, Melbourne, Aus- tralia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Universal adversarial triggers for attacking and analyzing NLP", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shi", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Kandpal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2153--2162", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1221" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard- ner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing, Hong Kong, China, November 3-7, 2019, pages 2153-2162. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Adversarial GLUE: A multitask benchmark for robustness evaluation of language models", |
|
"authors": [ |
|
{ |
|
"first": "Boxin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chejian", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuohang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan Awadallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadal- lah, and Bo Li. 2021a. Adversarial GLUE: A multi- task benchmark for robustness evaluation of language models. In Thirty-fifth Conference on Neural In- formation Processing Systems Datasets and Bench- marks Track (Round 2).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qin", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Gui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yicheng", |
|
"middle": [], |
|
"last": "Zou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yongxin", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zexiong", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qinzhuo", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengyan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruotian", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zichu", |
|
"middle": [], |
|
"last": "Fei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruijian", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingwu", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiheng", |
|
"middle": [], |
|
"last": "Yan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiding", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qiyuan", |
|
"middle": [], |
|
"last": "Bian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhihua", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shan", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bolin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyu", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinlan", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlong", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoqing", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaqian", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongyu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xipeng", |
|
"middle": [], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuanjing", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "347--355", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-demo.41" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Wang, Qin Liu, Tao Gui, Qi Zhang, Yicheng Zou, Xin Zhou, Jiacheng Ye, Yongxin Zhang, Rui Zheng, Zexiong Pang, Qinzhuo Wu, Zhengyan Li, Chong Zhang, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Shan Qin, Bolin Zhu, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, and Xuanjing Huang. 2021b. TextFlint: Unified multilingual robustness evaluation toolkit for natural language processing. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 347-355, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "What models know about their attackers: Deriving attacker information from latent representations", |
|
"authors": [ |
|
{ |
|
"first": "Zhouhang", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Brophy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Noack", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wencong", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalyani", |
|
"middle": [], |
|
"last": "Asthana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carter", |
|
"middle": [], |
|
"last": "Perkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabrina", |
|
"middle": [], |
|
"last": "Reis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zayd", |
|
"middle": [], |
|
"last": "Hammoudeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Lowd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhouhang Xie, Jonathan Brophy, Adam Noack, Wen- cong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Zayd Hammoudeh, Daniel Lowd, and Sameer Singh. 2021. What models know about their attack- ers: Deriving attacker information from latent repre- sentations. In Proceedings of the 4th BlackboxNLP Workshop on Analyzing and Interpreting Neural Net- works for NLP, pages 69-78, Punta Cana, Dominican Republic. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Sameer Singh, and Daniel Lowd. 2022. Identifying adversarial attacks on text classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Zhouhang", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Brophy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Noack", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wencong", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalyani", |
|
"middle": [], |
|
"last": "Asthana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carter", |
|
"middle": [], |
|
"last": "Perkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabrina", |
|
"middle": [], |
|
"last": "Reis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Lowd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2022, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhouhang Xie, Jonathan Brophy, Adam Noack, Wen- cong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, and Daniel Lowd. 2022. Identi- fying adversarial attacks on text classifiers.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "XLNet: Generalized autoregressive pretraining for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zihang", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiming", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russ", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems, volume 32. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Word-level textual adversarial attacking as combinatorial optimization", |
|
"authors": [ |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Zang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fanchao", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenghao", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meng", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6066--6080", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.540" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combi- natorial optimization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Openattack: An open-source textual adversarial attack toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Guoyang", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fanchao", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qianrui", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tingji", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bairu", |
|
"middle": [], |
|
"last": "Hou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Zang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--371", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-demo.43" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Tingji Zhang, Bairu Hou, Yuan Zang, Zhiyuan Liu, and Maosong Sun. 2021. Openattack: An open-source textual adversarial attack toolkit. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 363-371.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Learning to discriminate perturbations for blocking adversarial attacks in text classification", |
|
"authors": [ |
|
{ |
|
"first": "Yichao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyun-Yu", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4904--4913", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1496" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4904- 4913, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Queries & Effort attacks, these methods loosen the constraints on various degrees and grant humans more freedom to make more modifications if needed. Therefore humans have more flexibility crafting grammatically correct and plausible adversarial examples. Queries and Human Effort: The top of Figure 4 displays the number of queries it takes for an automated algorithm or a human to choose their word substitutions. The bottom of the figure gives the average effort scores for each attack method.", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"text": "The interface for the Select task", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"text": "The interface for the Saliency task", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"text": "The interface for self-evaluation 21", |
|
"uris": null, |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Workflow. Human figures in attack phase indicate that there is direct human-AI interaction. Human figures in evaluation phase indicate that humans are involved in both self-evaluation and peer-evaluation.", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table><tr><td>Input</td><td>Attack</td></tr><tr><td>Original text</td><td>Auto</td></tr><tr><td>Label</td><td/></tr><tr><td>Confidence</td><td colspan=\"2\">Manual</td></tr><tr><td>Output</td><td>Select</td></tr><tr><td>Adv. example</td><td/></tr><tr><td>Label</td><td colspan=\"2\">Saliency</td></tr><tr><td>Confidence</td><td/></tr><tr><td>Evaluation</td><td/></tr><tr><td>Peer-Evaluation</td><td colspan=\"2\">Self-Evaluation</td></tr><tr><td colspan=\"2\">Figure 1: System & Attack Transformation BAE BERT Masked Token Pre-diction PWWS WordNet-based synonym swap TF Counter-fitted word embed-ding swap</td><td>Operation Replace & In-sert Replace Replace</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Original vs. automated attack vs. HITL attack", |
|
"type_str": "table", |
|
"num": null, |
|
"content": "<table/>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |