{
"paper_id": "K18-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:10:56.489591Z"
},
"title": "Adversarial Over-Sensitivity and Over-Stability Strategies for Dialogue Models",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present two categories of model-agnostic adversarial strategies that reveal the weaknesses of several generative, task-oriented dialogue models: Should-Not-Change strategies that evaluate over-sensitivity to small and semantics-preserving edits, as well as Should-Change strategies that test if a model is overstable against subtle yet semantics-changing modifications. We next perform adversarial training with each strategy, employing a maxmargin approach for negative generative examples. This not only makes the target dialogue model more robust to the adversarial inputs, but also helps it perform significantly better on the original inputs. Moreover, training on all strategies combined achieves further improvements, achieving a new state-ofthe-art performance on the original task (also verified via human evaluation). In addition to adversarial training, we also address the robustness task at the model-level, by feeding it subword units as both inputs and outputs, and show that the resulting model is equally competitive, requires only 1/4 of the original vocabulary size, and is robust to one of the adversarial strategies (to which the original model is vulnerable) even without adversarial training.",
"pdf_parse": {
"paper_id": "K18-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "We present two categories of model-agnostic adversarial strategies that reveal the weaknesses of several generative, task-oriented dialogue models: Should-Not-Change strategies that evaluate over-sensitivity to small and semantics-preserving edits, as well as Should-Change strategies that test if a model is overstable against subtle yet semantics-changing modifications. We next perform adversarial training with each strategy, employing a maxmargin approach for negative generative examples. This not only makes the target dialogue model more robust to the adversarial inputs, but also helps it perform significantly better on the original inputs. Moreover, training on all strategies combined achieves further improvements, achieving a new state-ofthe-art performance on the original task (also verified via human evaluation). In addition to adversarial training, we also address the robustness task at the model-level, by feeding it subword units as both inputs and outputs, and show that the resulting model is equally competitive, requires only 1/4 of the original vocabulary size, and is robust to one of the adversarial strategies (to which the original model is vulnerable) even without adversarial training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Adversarial evaluation aims at filling in the gap between potential train/test distribution mismatch and revealing how models will perform under realworld inputs containing natural or malicious noise. Recently, there has been substantial work on adversarial attacks in computer vision and NLP. Unlike vision, where one can simply add in imperceptible perturbations without changing an image's meaning, carrying out such subtle changes in text is harder since text is discrete in nature (Jia and We publicly release all our code and data at https: //github.com/WolfNiu/AdversarialDialogue . Thus, some previous works have either avoided modifying original source inputs and only resorted to inserting distractive sentences (Jia and Liang, 2017) , or have restricted themselves to introducing spelling errors (Belinkov and Bisk, 2018) and adding non-functioning tokens (Shalyminov et al., 2017) . Furthermore, there has been limited adversarial work on generative NLP tasks, e.g., dialogue generation (Henderson et al., 2017) , which is especially important because it is a crucial component of real-world virtual assistants such as Alexa, Siri, and Google Home. It is also a challenging and worthwhile task to keep the output quality of a dialogue system stable, because a conversation usually involves multiple turns, and a small mistake in an early turn could cascade into bigger misunderstanding later on.",
"cite_spans": [
{
"start": 486,
"end": 494,
"text": "(Jia and",
"ref_id": null
},
{
"start": 722,
"end": 743,
"text": "(Jia and Liang, 2017)",
"ref_id": "BIBREF15"
},
{
"start": 807,
"end": 832,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 867,
"end": 892,
"text": "(Shalyminov et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 999,
"end": 1023,
"text": "(Henderson et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Motivated by this, we present a comprehensive adversarial study on dialogue models -we not only simulate imperfect inputs in the real world, but also launch intentionally malicious attacks on the model in order to assess them on both oversensitivity and over-stability. Unlike most previous works that exclusively focus on Should-Not-Change adversarial strategies (i.e., non-semanticschanging perturbations to the source sequence that should not change the response), we demonstrate that it is equally valuable to consider Should-Change strategies (i.e., semantics-changing, intentional perturbations to the source sequence that should change the response).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We investigate three state-of-the-art models on two task-oriented dialogue datasets. Concretely, we propose and evaluate five naturally motivated and increasingly complex Should-Not-Change and five Should-Change adversarial strategies on the VHRED (Variational Hierarchical Encoder-Decoder) model (Serban et al., 2017b) and the RL (Reinforcement Learning) model (Li et al., 2016) with the Ubuntu Dialogue Cor-pus (Lowe et al., 2015) , and Dynamic Knowledge Graph Network with the Collaborative Communicating Agents (CoCoA) dataset (He et al., 2017) .",
"cite_spans": [
{
"start": 297,
"end": 319,
"text": "(Serban et al., 2017b)",
"ref_id": "BIBREF36"
},
{
"start": 362,
"end": 379,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 413,
"end": 432,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 531,
"end": 548,
"text": "(He et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the Should-Not-Change side for the Ubuntu task, we introduce adversarial strategies of increasing linguistic-unit complexity -from shallow word-level errors, to phrase-level paraphrastic changes, and finally to syntactic perturbations. We first propose two rule-based perturbations to the source dialogue context, namely Random Swap (randomly transposing neighboring tokens) and Stopword Dropout (randomly removing stopwords). Next, we propose two data-level strategies that leverage existing parallel datasets in order to simulate more realistic, diverse noises: namely, Data-Level Paraphrasing (replacing words with their paraphrases) and Grammar Errors (e.g., changing a verb to the wrong tense). Finally, we employ Generative-Level Paraphrasing, where we adopt a neural model to automatically generate paraphrases of the source inputs. 1 On the Should-Change side for the Ubuntu task, we propose the Add Negation strategy, which negates the root verb of the source input, and the Antonym strategy, which changes verbs, adjectives, or adverbs to their antonyms. As will be shown in Section 6, the above strategies are effective on the Ubuntu task, but not on the collaborative-style, database-dependent Co-CoA task. Thus for the latter, we investigate additional Should-Change strategies including Random Inputs (changing each word in the utterance to random ones), Random Inputs with Entities (like Random Inputs but leaving mentioned entities untouched), and Normal Inputs with Confusing Entities (replacing entities in an agent's utterance with distractive ones) to analyze where the model's robustness stems from.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To evaluate these strategies, we first show that (1) both VHRED and the RL model are vulnerable to most Should-Not-Change and all Should-Change strategies, and (2) DynoNet's robustness to Should-Change inputs shows that it does not pay any attention to natural language inputs other than the entities contained in them. Next, observing how our adversarial strategies 'successfully' fool the target models, we try to expose these models to such perturbation patterns early on during training itself, where we feed adversarial input context and ground-truth target pairs as training data. Importantly, we realize this adversarial training via a maximum-likelihood loss for Should-Not-Change strategies, and via a maxmargin loss for Should-Change strategies. We show that this adversarial training can not only make both VHRED and RL more robust to the adversarial data, but also improve their performances when evaluated on the original test set (verified via human evaluation). In addition, when we train VHRED on all of the perturbed data from each adversarial strategy together, the performance on the original task improves even further, achieving the state-of-the-art result by a significant margin (also verified via human evaluation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we attempt to resolve the robustness issue directly at the model-level (instead of adversarial-level) by feeding subword units derived from the Byte Pair Encoding (BPE) algorithm (Sennrich et al., 2016) to the VHRED model. We show that the resulting model not only reduces the vocabulary size by around 75% (thus trains much faster) and obtains results comparable to the original VHRED, but is also naturally (i.e., without requiring adversarial training) robust to the Grammar Errors adversarial strategy.",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For a comprehensive study on dialogue model robustness, we investigate both semi-task-based troubleshooting dialogue (the Ubuntu task) and the new important paradigm of collaborative twobot dialogue (the CoCoA task). The former focuses more on natural conversations, while the latter focuses more on the knowledge base. Consequently, the model trained on the latter tends to ignore the natural language context (as will be shown in Section 6.2) and hence requires a different set of adversarial strategies that can directly reveal this weakness (e.g., Random Inputs with Entities). Overall, adversarial strategies on Ubuntu and CoCoA reveal very different types of weaknesses of a dialogue model. We implement two models on the Ubuntu task and one on the Co-CoA task, each achieving state-of-the-art result on its respective task. Note that although we employ these two strong models as our testbeds for the proposed adversarial strategies, these adversarial strategies are not specific to the two models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Models",
"sec_num": "2"
},
{
"text": "Dataset and Task: The Ubuntu Dialogue Corpus (Lowe et al., 2015) contains 1 million 2-person, multi-turn dialogues extracted from Ubuntu chat logs, used to provide and receive technical support. We focus on the task of generating fluent, relevant, and goal-oriented responses. Evaluation Method: The model is evaluated on F1's for both activities (technical verbs, e.g., \"download\", \"install\") and entities (technical nouns, e.g., \"root\", \"web\"). These metrics are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations using the automatic procedure described in Serban et al. (2017a), who found that F1 is \"particularly suited for the goal-oriented Ubuntu Dialogue Corpus\" based on manual inspection of the extracted activities and entities. We also conducted human studies on the dialogue quality of generated responses (see Section 5 for setup and Section 6.1 for results). Models: We reproduce the state-of-the-art Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) model , and a Deep Reinforcement Learning based generative model (Li et al., 2016) . For the VHRED model, we apply additive attention mechanism (Bahdanau et al., 2015) to the source sequence while keeping the remaining architecture unchanged. For the RL-based model, we adopt the mixed objective function (Paulus et al., 2018) and employ a novel reward: during training, for each source sequence S, we sample a response G on the decoder side, feed the encoder with a random source sequence S R drawn from the train set, and use \u2212 log P (G|S R ) as the reward. Intuitively, if S R stands a high chance of generating G (which corresponds to a large negative reward), it is very likely that G is dull and generic.",
"cite_spans": [
{
"start": 45,
"end": 64,
"text": "(Lowe et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1104,
"end": 1121,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF16"
},
{
"start": 1183,
"end": 1206,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 1344,
"end": 1365,
"text": "(Paulus et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ubuntu Dialogue",
"sec_num": "2.1"
},
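The dull-response reward above follows directly from its definition. Below is a minimal sketch, assuming a hypothetical `log_prob(response, context)` helper that returns log P(response | context) under the trained decoder (the real implementation would score the sampled response inside the TensorFlow training loop).

```python
import random

def dullness_reward(sampled_response, train_contexts, log_prob):
    """Reward sketch from the description above: r = -log P(G | S_R), where S_R is a
    random source sequence drawn from the train set. If a random context already
    explains the sampled response G well (log-probability close to 0), the reward is
    a large negative number, signalling that G is likely dull and generic.
    `log_prob(response, context)` is a hypothetical scoring helper, not part of the
    released code."""
    random_context = random.choice(train_contexts)
    return -log_prob(sampled_response, random_context)
```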
{
"text": "Dataset and Task: The collaborative CoCoA 2 dialogue task involves two agents that are asymmetrically primed with a private Knowledge Base (KB), and engage in a natural language conversation to find out the unique entry shared by the two KBs. For a bot-bot chat of the CoCoA task, a bot is allowed one of the two actions each turn: performing an UTTERANCE action, where it generates an utterance, or making a SELECT action, where it chooses an entry from the KB. Note that each bot's SELECT action is visible to the other bot, and each is allowed to make multiple SELECT actions if the previous guess is wrong. Evaluation Method: One of the major metrics is Completion Rate, the percentage of two bots successfully finishing the task. Models: We focus on DynoNet, the bestperforming model for the CoCoA task (He et al., 2017) . It consists of a dynamic knowledge graph, a graph embedding over the entity nodes, and a Seq2seq-based utterance generator.",
"cite_spans": [
{
"start": 808,
"end": 825,
"text": "(He et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collaborative Communicating Agents",
"sec_num": "2.2"
},
{
"text": "3 Adversarial Strategies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collaborative Communicating Agents",
"sec_num": "2.2"
},
{
"text": "For Ubuntu, we introduce adversarial strategies of increasing linguistic-unit complexity -from shallow word-level errors such as Random Swap and Stopword Dropout, to phrase-level paraphrastic changes, and finally to syntactic Grammar Errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on Ubuntu",
"sec_num": "3.1"
},
{
"text": "(1) Random Swap: Swapping adjacent words occurs often in the real world, e.g., transposition of words is one of the most frequent errors in manuscripts (Headlam, 1902; Marqu\u00e9s-Aguado, 2014) ; it is also frequently seen in blog posts. 3 Thus, being robust to swapping adjacent words is useful for chatbots that take typed/written text as inputs (e.g., virtual customer support on a airline/bank website). Even for speech-based conversations, non-native speakers can accidentally swap words due to habits formed in their native language (e.g., SVO in English vs. SOV in Hindi, Japanese, and Korean). Inspired by this, we also generate globally contiguous but locally \"time-reversed\" text, where positions of neighboring words are swapped (e.g., \"I don't want you to go\" to \"I don't want to you go\").",
"cite_spans": [
{
"start": 152,
"end": 167,
"text": "(Headlam, 1902;",
"ref_id": "BIBREF11"
},
{
"start": 168,
"end": 189,
"text": "Marqu\u00e9s-Aguado, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 234,
"end": 235,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Not-Change Strategies",
"sec_num": null
},
{
"text": "(2) Stopword Dropout: Stopwords are the most frequent words in a language. The most commonly-used 25 words in the Oxford English corpus make up one-third of all printed material in English, and these words consequently carry less information than other words do in a sentence. 4 Inspired by this observation, we propose randomly dropping stopwords from the inputs (e.g., \"Ben ate the carrot\" to \"Ben ate carrot\").",
"cite_spans": [
{
"start": 277,
"end": 278,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Not-Change Strategies",
"sec_num": null
},
{
"text": "(3) Data-level Paraphrasing: We repurpose PPDB 2.0 (Pavlick et al., 2015) and replace words and phrases in the original inputs with their paraphrases (e.g., \"She bought a bike\" to \"She purchased a bicycle\"). (4) Generative-level Paraphrasing: Although Data-level Paraphrasing provides us with semantic-preserving inputs most of the time, it still suffers from the fact that the validity of a paraphrase depends on the context, especially for words with multiple meanings. In addition, simply replacing word-by-word does not lead to new compositional sentence-level paraphrases, e.g., \"How old are you\" to \"What's your age\". We thus also experiment with generative-level paraphrasing, where we employ the Pointer-Generator Networks (See et al., 2017) , and train it on the recently published paraphrase dataset ParaNMT-5M (Wieting and Gimpel, 2017) which contains 5 millions paraphrase pairs. (5) Grammar Errors: We repurpose the AESW dataset (Daudaravicius, 2015 ), text extracted from 9, 919 published journal articles with data before/after language editing. This dataset was used for training models that identify and correct grammar errors. Based on the corrections in the edits, we build a look-up table to replace each correct word/phrase with a wrong one (e.g., \"He doesn't like cakes\" to \"He don't like cake\").",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(Pavlick et al., 2015)",
"ref_id": "BIBREF30"
},
{
"start": 731,
"end": 749,
"text": "(See et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 942,
"end": 962,
"text": "(Daudaravicius, 2015",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Not-Change Strategies",
"sec_num": null
},
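Several of these Should-Not-Change perturbations are simple token-level operations. Below is a rough sketch of Random Swap, Stopword Dropout, and the look-up-table substitution shared by Data-level Paraphrasing and Grammar Errors; the swap/dropout rates, the stopword subset, and the toy tables are illustrative assumptions, not the paper's exact settings.

```python
import random

# assumed stopword subset for illustration (the paper does not list its exact stopword set)
STOPWORDS = {"the", "a", "an", "to", "of", "is", "are", "i", "you", "it"}

def random_swap(tokens, p=0.2):
    """Transpose some pairs of neighboring tokens, e.g. 'want you to go' -> 'want to you go'."""
    tokens = list(tokens)
    i = 0
    while i < len(tokens) - 1:
        if random.random() < p:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 2  # skip ahead so a token is moved at most once
        else:
            i += 1
    return tokens

def stopword_dropout(tokens, p=0.5):
    """Randomly drop stopwords, e.g. 'Ben ate the carrot' -> 'Ben ate carrot'."""
    return [t for t in tokens if t.lower() not in STOPWORDS or random.random() >= p]

def table_substitution(tokens, table, vocab):
    """Replace tokens via a look-up table (built from PPDB for paraphrases or from AESW
    edits for grammar errors), keeping a replacement only if it stays inside the model's
    original vocabulary, as the paper requires."""
    return [table[t] if t in table and table[t] in vocab else t for t in tokens]

# toy usage
paraphrase_table = {"bought": "purchased", "bike": "bicycle"}
vocab = {"she", "a", "purchased", "bicycle"}
print(random_swap("i don't want you to go".split()))
print(stopword_dropout("ben ate the carrot".split()))
print(table_substitution("she bought a bike".split(), paraphrase_table, vocab))
```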
{
"text": "(1) Add Negation: Suppose we add negation to the source sequence of some task-oriented model -from \"I want some coffee\" to \"I don't want some coffee\". A proper response to the first utterance could be \"Sure, I will bring you some coffee\", but for the second one, the model should do anything but bring some coffee. We thus assume that if we add negation to the root verb of each source sequence and the response is unchanged, the model must be ignoring important linguistic cues like negation. Hence this qualifies as a Should-Change strategy, i.e., if the model is robust, it should change the response.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Change Strategies",
"sec_num": null
},
{
"text": "(2) Antonym: We change words in utterances to their antonyms to apply more subtle meaning changes (e.g., \"You need to install Ubuntu\" to \"You need to uninstall Ubuntu\"). 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Change Strategies",
"sec_num": null
},
{
"text": "5 Note that Should-Change strategies may lead to contexts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Should-Change Strategies",
"sec_num": null
},
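The Antonym strategy needs an antonym lookup for verbs, adjectives, and adverbs. The paper does not specify which resource it uses, so the WordNet-based sketch below is our own assumption (it requires NLTK with the WordNet data downloaded).

```python
from nltk.corpus import wordnet as wn  # assumes `nltk.download("wordnet")` has been run

def first_antonym(word, pos=wn.VERB):
    """Return an antonym of `word` for the given WordNet POS (VERB/ADJ/ADV), or None."""
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name().replace("_", " ")
    return None

def antonym_perturb(tokens, pos=wn.VERB):
    """Replace the first token that has an antonym (cf. Section 5: the first verb,
    adjective, or adverb with an antonym is modified)."""
    for i, tok in enumerate(tokens):
        ant = first_antonym(tok, pos)
        if ant is not None:
            return tokens[:i] + [ant] + tokens[i + 1:]
    return tokens

print(antonym_perturb("you need to increase the volume".split()))  # may yield '... decrease ...'
```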
{
"text": "We applied all the above successful strategies used for the Ubuntu task to the UTTERANCE actions in a bot-bot-chat setting for the CoCoA task, but found that none of them was effective on DynoNet. This is surprising considering that the model's language generation module is a traditional Seq2seq model. This observation motivated us to perform the following analysis. The high performance of bot-bot chat may have stemmed from two sources: information revealed in an utterance, or entries directly disclosed by a SELECT action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on CoCoA",
"sec_num": "3.2"
},
{
"text": "To investigate which part the model relies on more, we experiment with different Should-Change strategies which introduce obvious perturbations that have minimal word or semantic meaning overlap with the original source inputs:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on CoCoA",
"sec_num": "3.2"
},
{
"text": "(1) Random Inputs: Turn both bots' utterances into random inputs. This aims at investigating how much the model depends on the SELECT action.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on CoCoA",
"sec_num": "3.2"
},
{
"text": "(2) Random Inputs with Kept Entities: Replace each bot's utterance with random inputs, but keep the contained entities untouched. This further investigates how much entities alone contribute to the final performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on CoCoA",
"sec_num": "3.2"
},
{
"text": "(3) Confusing Entity: Replace entities mentioned in bot A's utterances with entities that are present in bot B's KB but not in their shared entry (and vice versa). This aims at coaxing bot B into believing that the mentioned entities come from their shared entry. By intentionally making the utterances misleading, we expect DynoNet's performance to be lower -hence this qualifies as a Should-Change strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Strategies on CoCoA",
"sec_num": "3.2"
},
{
"text": "To make a model robust to an adversarial strategy, a natural approach is exposing it to the same pattern of perturbation during training (i.e., adversarial training). This is achieved by feeding adversarial inputs as training data. For each strategy, we report results under three train/test combinations: (1) trained with normal inputs, tested on adversarial inputs (N-train + A-test), which evaluates whether the adversarial strategy is effective at that do not correspond to any legitimate task completion action, but the purpose of such a strategy is to make sure that the model at least should not respond the same way as it responded to the original context, i.e., even for the no-action state, the model should respond with something different like \"Sorry, I cannot help with that.\" Our semantic similarity results in Table 4 capture this intuition directly. fooling the model and exposing its robustness issues; (2) trained with adversarial inputs, tested on adversarial inputs (A-train + A-test), which next evaluates whether adversarial training made the model more robust to that adversarial attack; and (3) trained with adversarial inputs, tested on normal inputs (A-train + N-test), which finally evaluates whether the adversarial training also makes the model perform equally or better on the original normal inputs. Note that (3) is important, because one should not make the model more robust to a strategy at the cost of lower performance on the original data; also when (3) improves the performance on the original inputs, it means adversarial training successfully teaches the model to recognize and be robust to a certain type of noise, so that the model performs better when encountering similar patterns during inference. Also note that we use perturbed train set for adversarial training, and perturbed test set for adversarial testing. There is thus no overlap between the two sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 825,
"end": 865,
"text": "Table 4 capture this intuition directly.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "4"
},
{
"text": "Should-Not-Change Strategies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for",
"sec_num": "4.1"
},
{
"text": "For each Should-Not-Change strategy, we take an already trained model from a certain checkpoint, 6 and train it on the adversarial inputs with maximum likelihood loss for K epochs (Shalyminov et al., 2017; Belinkov and Bisk, 2018; Jia and Liang, 2017; Iyyer et al., 2018) . By feeding \"adversarial source sequence + ground-truth response pairs\" as regular positive data, we teach the model that these pairs are also valid examples despite the added perturbations.",
"cite_spans": [
{
"start": 180,
"end": 205,
"text": "(Shalyminov et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 206,
"end": 230,
"text": "Belinkov and Bisk, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 231,
"end": 251,
"text": "Jia and Liang, 2017;",
"ref_id": "BIBREF15"
},
{
"start": 252,
"end": 271,
"text": "Iyyer et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for",
"sec_num": "4.1"
},
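The Should-Not-Change adversarial training step is ordinary maximum-likelihood fine-tuning on perturbed pairs. Here is a schematic sketch, assuming a hypothetical `model` object exposing `train_step(context, response)`; the real implementation would be the TensorFlow training loop of VHRED/Reranking-RL resumed from a checkpoint.

```python
def adversarial_finetune(model, train_pairs, perturb, num_epochs):
    """Continue training an already trained model on (perturbed context, ground-truth
    response) pairs, treating them as regular positive examples under the maximum
    likelihood loss. `perturb` is one of the Should-Not-Change strategies
    (e.g. stopword_dropout); `model.train_step` is a hypothetical helper."""
    for _ in range(num_epochs):
        for context, response in train_pairs:
            model.train_step(perturb(context), response)
    return model
```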
{
"text": "For Should-Change strategies, we want the F1's to be lower with adversarial inputs after adversarial training, since this shows that the model becomes sensitive to subtle yet semantic-changing perturbations. This cannot be achieved by naively training on the perturbed inputs with maximum likelihood loss, because the \"perturbed source sequence + ground-truth response pairs\" for Should-Change strategies are negative examples which we need to train the model to avoid from generating. Inspired by Mao et al. (2016) and Yu et al. (2017) , we instead use a linear combination of maximum likeli- 6 We do not train from scratch because each model (for each strategy) takes several days to converge.",
"cite_spans": [
{
"start": 498,
"end": 515,
"text": "Mao et al. (2016)",
"ref_id": "BIBREF21"
},
{
"start": 520,
"end": 536,
"text": "Yu et al. (2017)",
"ref_id": "BIBREF42"
},
{
"start": 594,
"end": 595,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training for Should-Change Strategies",
"sec_num": "4.2"
},
{
"text": "L = L ML +\u03b1L MM L ML = i logP (t i |s i ) L MM = i max (0, M +logP (t i |a i )\u2212logP (t i |s i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "where L ML is the maximum likelihood loss, L MM is the max-margin loss, \u03b1 is the weight of the maxmargin loss (set to 1.0 following Yu et al. 2017), M is the margin (tuned be to 0.1), and t i , s i and a i are the target sequence, normal input, and adversarial input, respectively. 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
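To make the combined objective concrete, here is a framework-agnostic sketch that computes it from per-example sequence log-probabilities; it treats the maximum likelihood term as the usual negative log-likelihood to be minimized (our convention), with alpha = 1.0 and M = 0.1 as stated above.

```python
def should_change_loss(logp_t_given_s, logp_t_given_a, alpha=1.0, margin=0.1):
    """logp_t_given_s[i] = log P(t_i | s_i) for the normal input, and
    logp_t_given_a[i] = log P(t_i | a_i) for the adversarial input.
    The hinge term penalizes the model whenever the adversarial context explains the
    target almost as well as (within `margin` of) the normal one."""
    l_ml = -sum(logp_t_given_s)  # standard NLL form of the maximum likelihood loss (our convention)
    l_mm = sum(max(0.0, margin + lp_a - lp_s)
               for lp_s, lp_a in zip(logp_t_given_s, logp_t_given_a))
    return l_ml + alpha * l_mm

# toy batch of two examples
print(should_change_loss([-10.0, -12.0], [-10.05, -20.0]))  # second example already satisfies the margin
```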
{
"text": "In addition to datasets, tasks, models and evaluation methods introduced in Section 2, we present training details in this section (see Appendix for a comprehensive version).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "We implemented VHRED and Reranking-RL in TensorFlow (Abadi et al., 2016) and employed greedy search for inference. As shown in Table 1 , for both models we obtained Activity and Entity F1's higher than the VHRED results reported in Serban et al. (2017a). Hence, each of these two implementations serves as a solid baseline for adversarial testing and training. we also add a heuristic where an inflected verb is replaced with its respective infinitive form, and a plural noun with its singular form. Note that for all strategies we only keep an adversarial token if it is within the original vocabulary set. Should-Change Strategies on Ubuntu: For Add Negation, we negate the first verb in each utterance. For Antonym, we modify the first verb, adjective or adverb that has an antonym. Human Evaluation: We also conducted human studies on MTurk to evaluate adversarial training (pairwise comparison for dialogue quality) and generative paraphrasing (five-point Likert scale). The utterances were randomly shuffled to anonymize model identity, and we used MTurk with US-located human evaluators with approval rate > 98%, and at least 10, 000 approved HITs. Results are presented in Section 6.1. Note that the human studies and automatic evaluation are complementary to each other: while MTurk annotators are good at judging how natural and coherent a response is, they are usually not experts in the Ubuntu operating system's technical details. On the other hand, automatic evaluation focuses more on the technical side (i.e., whether key activities or entities are present in the response). Model on CoCoA: We adopted the publicly available code from He et al. 2017, 9 and used their already trained DynoNet model.",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Abadi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Models on Ubuntu:",
"sec_num": null
},
{
"text": "Result Interpretation For Table 2 and 3 with Should-Not-Change strategies, lower is better in the first column (since a successful adversarial testing strategy will be effective at fooling the model), while higher is better in the second column (since successful adversarial training should bring the performance back up). However, for 9 https://worksheets.codalab.org/worksheets/ 0xc757f29f5c794e5eb7bfa8ca9c945573/ Should-Change strategies, the reverse holds. 10 Lastly, in the third column, higher is better since we want the adversarially trained model to perform better on the original source inputs.",
"cite_spans": [
{
"start": 462,
"end": 464,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 26,
"end": 39,
"text": "Table 2 and 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "Results on Should-Not-Change Strategies Table 2 and 3 present the adversarial results on F1 scores of all our strategies for VHRED and Reranking-RL, respectively. Table 2 shows that VHRED is robust to none of the Should-Not-Change strategies other than Random Swap, while Table 3 shows that Reranking-RL is robust to none of the Should-Not-Change strategies other than Stopword Dropout. For each effective strategy, at least one of the F1's decreases statistically significantly 11 as compared to the same model fed with normal inputs. Next, all adversarial trainings on Should-Not-Change strategies not only make the model more robust to adversarial inputs (each Atrain + A-test F1 is stat. significantly higher than that of N-train + A-test) , but also make them perform better on normal inputs (each A-train + Ntest F1 is stat. significantly higher than that of Ntrain + N-test, except for Grammar Errors's Activity F1). Motivated by the success in adversarial training on each strategy alone, we also experimented with training on all Should-Not-Change strategies combined, and obtained F1's stat. significantly higher than any single strategy (the All Should-Not-Change row in Table 4 : Textual similarity of adversarial strategies on the VHRED and Reranking-RL models. \"Cont.\" stands for \"Context\", and \"Resp.\" stands for \"Response\".",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 272,
"end": 279,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 1182,
"end": 1189,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "esting strategy to note is Random Swap: although it itself is not effective as an adversarial strategy for VHRED, training on it does make the model perform better on normal inputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "Results on Should-Change Strategies Table 2 and 3 show that Add Negation and Antonym are both successful Should-Change strategies, because no change in N-train + A-test F1 is stat. significant compared to that of N-train + Ntest, which shows that both models are ignoring the semantic-changing perturbations to the inputs. From the last two rows of A-train + A-test column in each table, we also see that adversarial training successfully brings down both F1's (stat. significantly) for each model, showing that the model becomes more sensitive to the context change.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "Semantic Similarity In addition to F1, we also follow Serban et al. (2017a) and employ cosine similarity between average embeddings of normal and adversarial inputs/responses (proposed by Liu et al. (2016) ) to evaluate how much the inputs/responses change in semantic meaning (Table 4). This metric is useful in three ways. Firstly, by comparing the two columns of context similarity, we can get a general idea of how much change is perceived by each model. For example, we can see that Stopword Dropout leads to more evident changes from VHRED's perspective than from Reranking-RL's. This also agrees with the F1 results in Table 6 : Human evaluation scores on paraphrases generated by Pointer-Generator Networks and groundtruth pairs from ParaNMT-5M.",
"cite_spans": [
{
"start": 188,
"end": 205,
"text": "Liu et al. (2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
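The embedding-average metric behind Table 4 can be reproduced in a few lines. A minimal sketch follows, assuming a `word_vectors` dict mapping tokens to pretrained numpy vectors (e.g. word2vec); the embedding source and dimensionality are assumptions, not details taken from the paper.

```python
import numpy as np

def avg_embedding(tokens, word_vectors, dim=300):
    """Average the embeddings of the tokens that are covered by `word_vectors`."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_average_similarity(sent_a, sent_b, word_vectors):
    """Cosine similarity between the mean word embeddings of two sentences,
    following the embedding-average metric of Liu et al. (2016)."""
    a = avg_embedding(sent_a.split(), word_vectors)
    b = avg_embedding(sent_b.split(), word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```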
{
"text": "that Reranking-RL is much more robust to this strategy than VHRED is. The high context similarity of Should-Change strategies shows that although we have added \"not\" or replaced antonyms in every utterance of the source inputs, from the model's point of view the context has not changed much in meaning. Secondly, for each Should-Not-Change strategy, the cosine similarity of context is much higher than that of response, indicating that responses change more significantly in meaning than their corresponding contexts. Lastly, The high semantic similarity for Generative Paraphrasing also partly shows that the Pointer-Generator model in general produces faithful paraphrases. Human Evaluation As introduced in Section 5, we performed two human studies on adversarial training and Generative Paraphrasing. For the first study, Table 5 indicates that models trained on each adversarial strategy (as well as on all Should-Not-Change strategies combined) indeed on average produced better responses, and mostly agrees with the adversarial training results in Table 2. 12 Context Response N: ... you could save your ubuntu files and reinstall Windows , then install ubuntu as a dual boot option eou eot aight buddy , so how do i get that **unknown** space back eou Random Swap: ... you could your save ubuntu and files Windows reinstall , then install ubuntu as dual a option boot eou eot aight buddy , so do how i that get space **unknown** back eou NN: you can use the Live CD , you can install Ubuntu on the same partition as the Windows partition eou NA: I am using ubuntu . eou AA: you can use Windows XP on the Windows partition , and then install Ubuntu on the same drive eou Table 7 : VHRED output example before and after adversarial training on the Random Swap strategy.",
"cite_spans": [
{
"start": 1066,
"end": 1068,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 828,
"end": 835,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1057,
"end": 1065,
"text": "Table 2.",
"ref_id": "TABREF3"
},
{
"start": 1680,
"end": 1687,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "For the second study, Table 6 shows that on average the generated paraphrase has roughly the same semantic meaning with the original utterance, but may sometimes miss some information. Its quality is also close to that of the ground-truth in ParaNMT-5M dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Results on Ubuntu",
"sec_num": "6.1"
},
{
"text": "We present a selected example of generated responses before and after adversarial training on the Random Swap strategy with the VHRED model in Table 7 (more examples in Appendix on all strategies with both models). First of all, we can see that it is hard to differentiate between the original and the perturbed context (N-context and A-context) if one does not look very closely. For this reason, the model gets fooled by the adversarial strategy, i.e., after adversarial perturbation, the N-train + A-test response (NA-Response) is worse than that of N-train + N-test (NN-Response). However, after our adversarial training phase, A-train + A-test (AA-Response) becomes better again. Table 8 shows the results of Should-Change strategies on DynoNet with the CoCoA task. The Random Inputs strategy shows that even without communication, the two bots are able to locate their shared entry 82% of the time by revealing their own KB through SELECT action. When we keep the mentioned entities untouched but randomize all other tokens, DynoNet actually achieves stateof-the-art Completion Rate, indicating that the two agents are paying zero attention to each other's utterances other than the entities contained in them. This is also why we did not apply Add Negation and Antonym to DynoNet -if Random Inputs does not work, these two strategies will also make no difference to the performance (in other words Random Inputs subsumes the other two Shouldgies, though the latter does agree with F1 trends. Overall, we provide both human and F1 evaluations because they are complementary at judging naturalness/coherence vs. key Ubuntu technical activities/entities. Change strategies). We can also see that even with the Normal Inputs with Confusing Entities strategy, DynoNet is still able to finish the task 77% of the time, and with only slightly more turns. This again shows that the model mainly relies on the SELECT action to guess the shared entry.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 150,
"text": "Table 7",
"ref_id": null
},
{
"start": 685,
"end": 692,
"text": "Table 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Output Examples of Generated Responses",
"sec_num": null
},
{
"text": "7 Byte-Pair-Encoding VHRED",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Results on CoCoA",
"sec_num": "6.2"
},
{
"text": "Although we have shown that adversarial training on most strategies makes the dialogue model more robust, generating such perturbed data is not always straightforward for diverse, complex strategies. For example, our data-level and generativelevel strategies all leverage datasets that are not always available to a language. We are thus motivated to also address the robustness task on the model-level, and explore an extension to the VHRED model that makes it robust to Grammar Errors even without adversarial training. Model Description: We performed Byte Pair Encoding (BPE) (Sennrich et al., 2016) on the Ubuntu dataset. 13 This algorithm encodes rare/unknown words as sequences of subword units, which helps segmenting words with the same lemma but different inflections (e.g., \"showing\" to \"show + ing\", and \"cakes\" to \"cake + s\"), making the model more likely to be robust to grammar errors such as verb tense or plural/singular noun confusion. We experimented BPE with 5K merging operations, and obtained a vocabulary size of 5121. Results: As shown in Table 9 , BPE-VHRED achieved F1's (5.99, 3.66), which is stat. equal to (5.94, 3.52) obtained without BPE. To our best knowledge, we are the first to apply BPE to a gen-VHRED BPE-VHRED Normal Input 5.94, 3.52 5.99, 3.66 Grammar Errors 5.60, 3.09 5.86, 3.54 Table 9 : Activity, Entity F1 results of VHRED model vs. BPE-VHRED model tested on normal inputs.",
"cite_spans": [
{
"start": 579,
"end": 602,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 1062,
"end": 1069,
"text": "Table 9",
"ref_id": null
},
{
"start": 1319,
"end": 1326,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Results on CoCoA",
"sec_num": "6.2"
},
{
"text": "erative dialogue task. Moreover, BPE-VHRED achieved (5.86, 3.54) on Grammar Errors based adversarial test set, which is stat. equal to the F1's when tested with normal data, indicating that BPE-VHRED is more robust to this adversarial strategy than VHRED is, since the latter had (5.60, 3.09) when tested with perturbed data, where both F1's are stat. signif. lower than when fed with normal inputs. Moreover, BPE-VHRED reduces the vocabulary size by 15K, corresponding to 4.5M fewer parameters. This makes BPE-VHRED train much faster. Note that BPE only makes the model robust to one type of noise (i.e. Grammar Errors), and hence adversarial training on other strategies is still necessary (but we hope that this encourages future work to build other advanced models that are naturally robust to diverse adversaries).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Results on CoCoA",
"sec_num": "6.2"
},
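To illustrate why subword units absorb inflectional noise, here is a toy version of the BPE merge-learning loop from Sennrich et al. (2016). The actual experiments used the authors' subword-nmt implementation with 5K merge operations, so this sketch is only illustrative and not the released setup.

```python
import re
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """word_freqs: dict mapping space-separated symbol sequences (ending in '</w>')
    to their corpus frequencies. Returns the list of learned merge operations:
    frequent subwords such as 'show' or 'cake' get merged before rare full forms,
    so inflected variants share most of their subword units with the base lemma."""
    vocab = dict(word_freqs)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
        vocab = {pattern.sub("".join(best), w): f for w, f in vocab.items()}
    return merges

# toy corpus: 'showing'/'show' and 'cakes'/'cake' end up sharing subword units
print(learn_bpe({"s h o w i n g </w>": 5, "s h o w </w>": 6,
                 "c a k e s </w>": 4, "c a k e </w>": 5}, 8))
```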
{
"text": "Model-Dependent vs. Model-Agnostic Strategies: Many adversarial strategies have been applied to both Computer Vision (Biggio et al., 2012; Szegedy et al., 2013; Goodfellow et al., 2015; Mei and Zhu, 2015; Papernot et al., 2016; Narodytska and Kasiviswanathan, 2017; Carlini and Wagner, 2017; Papernot et al., 2017; Mironenco et al.; Wong, 2017; Gao et al., 2018) and NLP (Jia and Liang, 2017; Zhao et al., 2018; Belinkov and Bisk, 2018; Shalyminov et al., 2017; Mironenco et al.; Iyyer et al., 2018) . Previous works have distinguished between modelaware strategies, where the adversarial algorithms have access to the model parameters, and modelagnostic strategies, where the adversary does not have such information (Papernot et al., 2017; Narodytska and Kasiviswanathan, 2017) . We however, observed that within the model-agnostic category, there are two subcategories. One is half-model-agnostic, where although the adversary has no access to the model parameters, it is allowed to probe the target model and observe its output as a way to craft adversarial inputs (Biggio et al., 2012; Szegedy et al., 2013; Goodfellow et al., 2015; Mei and Zhu, 2015; Papernot et al., 2017; Mironenco et al.) . On the other hand, a pure-model-agnostic adversary, such as works by Jia and Liang (2017) and Belinkov and Bisk (2018) , does not have any access to the model outputs when creating adversarial inputs, and is thus more generalizable across models/tasks. We adopt the pure-model-agnostic approach, only drawing inspiration from real-world noise, and testing them on the target model. Adversarial in NLP: Text-based adversarial works have targeted both classification models (Weston et al., 2016; Jia and Liang, 2017; Wong, 2017; Samanta and Mehta, 2017; Shalyminov et al., 2017; Gao et al., 2018; Iyyer et al., 2018) and generative models (Hosseini et al., 2017; Henderson et al., 2017; Mironenco et al.; Zhao et al., 2018; Belinkov and Bisk, 2018) . To our best knowledge, our work is the first to target generative goal-oriented dialogue systems with several new adversarial strategies in both Should-Not-Change and Should-Change categories, and then to fix the broken models through adversarial training (esp. using max-margin loss for Should-Change), and also achieving model robustness without using any adversarial data.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Biggio et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 139,
"end": 160,
"text": "Szegedy et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 161,
"end": 185,
"text": "Goodfellow et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 186,
"end": 204,
"text": "Mei and Zhu, 2015;",
"ref_id": "BIBREF23"
},
{
"start": 205,
"end": 227,
"text": "Papernot et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 228,
"end": 265,
"text": "Narodytska and Kasiviswanathan, 2017;",
"ref_id": "BIBREF25"
},
{
"start": 266,
"end": 291,
"text": "Carlini and Wagner, 2017;",
"ref_id": "BIBREF4"
},
{
"start": 292,
"end": 314,
"text": "Papernot et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 315,
"end": 332,
"text": "Mironenco et al.;",
"ref_id": "BIBREF24"
},
{
"start": 333,
"end": 344,
"text": "Wong, 2017;",
"ref_id": "BIBREF41"
},
{
"start": 345,
"end": 362,
"text": "Gao et al., 2018)",
"ref_id": "BIBREF8"
},
{
"start": 371,
"end": 392,
"text": "(Jia and Liang, 2017;",
"ref_id": "BIBREF15"
},
{
"start": 393,
"end": 411,
"text": "Zhao et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 412,
"end": 436,
"text": "Belinkov and Bisk, 2018;",
"ref_id": "BIBREF2"
},
{
"start": 437,
"end": 461,
"text": "Shalyminov et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 462,
"end": 479,
"text": "Mironenco et al.;",
"ref_id": "BIBREF24"
},
{
"start": 480,
"end": 499,
"text": "Iyyer et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 718,
"end": 741,
"text": "(Papernot et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 742,
"end": 779,
"text": "Narodytska and Kasiviswanathan, 2017)",
"ref_id": "BIBREF25"
},
{
"start": 1069,
"end": 1090,
"text": "(Biggio et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 1091,
"end": 1112,
"text": "Szegedy et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 1113,
"end": 1137,
"text": "Goodfellow et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 1138,
"end": 1156,
"text": "Mei and Zhu, 2015;",
"ref_id": "BIBREF23"
},
{
"start": 1157,
"end": 1179,
"text": "Papernot et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 1180,
"end": 1197,
"text": "Mironenco et al.)",
"ref_id": "BIBREF24"
},
{
"start": 1269,
"end": 1289,
"text": "Jia and Liang (2017)",
"ref_id": "BIBREF15"
},
{
"start": 1294,
"end": 1318,
"text": "Belinkov and Bisk (2018)",
"ref_id": "BIBREF2"
},
{
"start": 1672,
"end": 1693,
"text": "(Weston et al., 2016;",
"ref_id": "BIBREF39"
},
{
"start": 1694,
"end": 1714,
"text": "Jia and Liang, 2017;",
"ref_id": "BIBREF15"
},
{
"start": 1715,
"end": 1726,
"text": "Wong, 2017;",
"ref_id": "BIBREF41"
},
{
"start": 1727,
"end": 1751,
"text": "Samanta and Mehta, 2017;",
"ref_id": "BIBREF31"
},
{
"start": 1752,
"end": 1776,
"text": "Shalyminov et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 1777,
"end": 1794,
"text": "Gao et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 1795,
"end": 1814,
"text": "Iyyer et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 1837,
"end": 1860,
"text": "(Hosseini et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 1861,
"end": 1884,
"text": "Henderson et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 1885,
"end": 1902,
"text": "Mironenco et al.;",
"ref_id": "BIBREF24"
},
{
"start": 1903,
"end": 1921,
"text": "Zhao et al., 2018;",
"ref_id": "BIBREF43"
},
{
"start": 1922,
"end": 1946,
"text": "Belinkov and Bisk, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "8"
},
{
"text": "We first revealed both the over-sensibility and over-stability of state-of-the-art models on Ubuntu and CoCoA dialogue tasks, via Should-Not-Change and Should-Change adversarial strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "We then showed that training on adversarial inputs not only made the models more robust to the perturbations, but also helped them achieve new stateof-the-art performance on the original data (with further improvements when we combined strategies). Lastly, we also proposed a BPE-enhanced VHRED model that not only trains faster with comparable performance, but is also robust to Grammar Errors even without adversarial training, motivating that if no strong adversary-generation tools (e.g., paraphraser) are available (esp. in lowresource domains/languages), we should try alternative model-robustness architectural changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "A real example of Generative-Paraphrasing: context \"You can find xorg . conf in /etc/X11 . It 's not needed unless it is . ;-) You may need to create one yourself .\" is paraphrased as \"You may find xorg . conf in /etc/X11 . It 's not necessary until it is . You may be required to create one .\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://stanfordnlp.github.io/cocoa/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "E.g., \"he would give to it me\" in https://talk. drugabuse.com/threads/his-behavior-this-week.4347/ 4 One could also use closed-class words (prepositions, determiners, coordinators, and pronouns), but we opt for stopwords because a majority of stopwords are indeed closedclass words, and secondly, closed-class words usually require a very accurate POS-tagger, which is not available for lowresource or noisy domains and languages (e.g., Ubuntu).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Please refer to supp. about greedy sampling based maxmargin setup and CoCoA discussion for adversarial training.8 https://github.com/becxer/pointer-generator",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Higher is better in the first column, because this shows that the model is not paying attention to important semantic changes in the source inputs (and is maintaining its original performance); while lower is better in the second column, since we want the model to be more sensitive to such changes after adversarial training.11 We obtained stat. significance via the bootstrap test(Noreen, 1989;Efron and Tibshirani, 1994) with 100K samples, and consider p < 0.05 as stat. significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that human evaluation does not show improvements with the Data-Level-Paraphrasing and Add-Negation strate-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We employed code released by the authors on https: //github.com/rsennrich/subword-nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their helpful comments and discussions. This work was supported by DARPA (YFA17-D17AP00022), Facebook ParlAI Research Award, Google Faculty Research Award, Bloomberg Data Science Research Grant, and Nvidia GPU awards. The views contained in this article are those of the authors and not of the funding agency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tensorflow: A system for large-scale machine learning",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
}
],
"year": 2016,
"venue": "OSDI",
"volume": "16",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265- 283.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In Proceedings of ICLR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Poisoning attacks against support vector machines",
"authors": [
{
"first": "Battista",
"middle": [],
"last": "Biggio",
"suffix": ""
},
{
"first": "Blaine",
"middle": [],
"last": "Nelson",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Laskov",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector ma- chines. In Proceedings of ICML.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adversarial examples are not easily detected: Bypassing ten detection methods",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Carlini",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security",
"volume": "",
"issue": "",
"pages": "3--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Carlini and David Wagner. 2017. Adver- sarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Secu- rity, pages 3-14. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated evaluation of scientific writing data set (version 1.2)",
"authors": [
{
"first": "Vidas",
"middle": [],
"last": "Daudaravicius",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidas Daudaravicius. 2015. Automated evaluation of scientific writing data set (version 1.2)[data file].",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert J. Tibshirani. 1994. An in- troduction to the bootstrap. CRC press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Black-box generation of adversarial text sequences to evade deep learning classifiers",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Lanchantin",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Lou"
],
"last": "Soffa",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2018,
"venue": "Deep Learning and Security Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yan- jun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In Deep Learning and Security Workshop.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explaining and harnessing adversarial examples",
"authors": [
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adver- sarial examples. In Proceedings of ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Anusha",
"middle": [],
"last": "Balakrishnan",
"suffix": ""
},
{
"first": "Mihail",
"middle": [],
"last": "Eric",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative di- alogue agents with dynamic knowledge graph em- beddings. In Proceedings of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Transposition of words in mss",
"authors": [
{
"first": "W",
"middle": [],
"last": "Headlam",
"suffix": ""
}
],
"year": 1902,
"venue": "The Classical Review",
"volume": "16",
"issue": "5",
"pages": "243--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Headlam. 1902. Transposition of words in mss. The Classical Review, 16(5):243-256.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ethical challenges in data-driven dialogue systems",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Koustuv",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Angelard-Gontier",
"suffix": ""
},
{
"first": "Nan",
"middle": [
"Rosemary"
],
"last": "Ke",
"suffix": ""
},
{
"first": "Genevieve",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.09050"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Henderson, Koustuv Sinha, Nicolas Angelard- Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2017. Ethical challenges in data-driven dialogue systems. arXiv preprint arXiv:1711.09050.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deceiving google's perspective api built for detecting toxic comments",
"authors": [
{
"first": "Hossein",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Sreeram",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Baosen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Radha",
"middle": [],
"last": "Poovendran",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08138"
]
},
"num": null,
"urls": [],
"raw_text": "Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving google's perspective api built for detecting toxic comments. arXiv preprint arXiv:1702.08138.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of NAACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Deep reinforcement learning for dialogue generation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. 2016. Deep rein- forcement learning for dialogue generation. In Pro- ceedings of EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Deep text classification can be fooled",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Hongcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Miaoqiang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Pan",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Xirong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenchang",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.08006"
]
},
"num": null,
"urls": [],
"raw_text": "Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation",
"authors": [
{
"first": "Chia-Wei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Noseworthy",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. In Proceed- ings of EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Delving into transferable adversarial examples and black-box attacks",
"authors": [
{
"first": "Yanpei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xinyun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. Delving into transferable adversarial exam- ples and black-box attacks. In Proceedings of ICLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.08909"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. arXiv preprint arXiv:1506.08909.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Generation and comprehension of unambiguous object descriptions",
"authors": [
{
"first": "Junhua",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Toshev",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Camburu",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"L"
],
"last": "Yuille",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous ob- ject descriptions. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 11-20.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Errors, corrections and other textual problems in three copies of a middle english antidotary",
"authors": [
{
"first": "Teresa",
"middle": [],
"last": "Marqu\u00e9s-Aguado",
"suffix": ""
}
],
"year": 2014,
"venue": "Nordic Journal of English Studies",
"volume": "13",
"issue": "1",
"pages": "53--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teresa Marqu\u00e9s-Aguado. 2014. Errors, corrections and other textual problems in three copies of a middle english antidotary. Nordic Journal of English Stud- ies, 13(1):53-77.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Using machine teaching to identify optimal training-set attacks on machine learners",
"authors": [
{
"first": "Shike",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "2871--2877",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shike Mei and Xiaojin Zhu. 2015. Using machine teaching to identify optimal training-set attacks on machine learners. In Proceedings of AAAI, pages 2871-2877.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Examining cooperation in visual dialog models",
"authors": [
{
"first": "Mircea",
"middle": [],
"last": "Mironenco",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Kianfar",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Evangelos",
"middle": [],
"last": "Kanoulas",
"suffix": ""
},
{
"first": "Efstratios",
"middle": [],
"last": "Gavves",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mircea Mironenco, Dana Kianfar, Ke Tran, Evangelos Kanoulas, and Efstratios Gavves. Examining coop- eration in visual dialog models. In Proceedings of NIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Simple black-box adversarial perturbations for deep networks",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Narodytska",
"suffix": ""
},
{
"first": "Shiva",
"middle": [],
"last": "Prasad Kasiviswanathan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Narodytska and Shiva Prasad Kasiviswanathan. 2017. Simple black-box adversarial perturbations for deep networks. In Proceedings of CVPR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Computer-intensive methods for testing hypotheses",
"authors": [
{
"first": "Eric",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W. Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Practical black-box attacks against machine learning",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Somesh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Berkay Celik",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Swami",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security",
"volume": "",
"issue": "",
"pages": "506--519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, Ian Goodfel- low, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communica- tions Security, pages 506-519. ACM.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The limitations of deep learning in adversarial settings",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Papernot",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Mcdaniel",
"suffix": ""
},
{
"first": "Somesh",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Fredrikson",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Berkay Celik",
"suffix": ""
},
{
"first": "Ananthram",
"middle": [],
"last": "Swami",
"suffix": ""
}
],
"year": 2016,
"venue": "Security and Privacy (Eu-roS&P)",
"volume": "",
"issue": "",
"pages": "372--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Security and Privacy (Eu- roS&P), 2016 IEEE European Symposium on, pages 372-387. IEEE.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In Proceedings of ICLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 2: Short Papers), volume 2, pages 425-430.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Towards crafting text adversarial samples",
"authors": [
{
"first": "Suranjana",
"middle": [],
"last": "Samanta",
"suffix": ""
},
{
"first": "Sameep",
"middle": [],
"last": "Mehta",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.02812"
]
},
"num": null,
"urls": [],
"raw_text": "Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Multiresolution recurrent neural networks: An application to dialogue response generation",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Gerald",
"middle": [],
"last": "Tesauro",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Talamadupula",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "3288--3294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kar- tik Talamadupula, Bowen Zhou, Yoshua Bengio, and Aaron C. Courville. 2017a. Multiresolution re- current neural networks: An application to dialogue response generation. In AAAI, pages 3288-3294.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "3776--3784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI, pages 3776-3784.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A hierarchical latent variable encoder-decoder model for generating dialogues",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "3295--3301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017b. A hierarchical latent variable encoder-decoder model for generating di- alogues. In AAAI, pages 3295-3301.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Challenging neural dialogue models with natural data: Memory networks fail on incremental phenomena",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Shalyminov",
"suffix": ""
},
{
"first": "Arash",
"middle": [],
"last": "Eshghi",
"suffix": ""
},
{
"first": "Oliver",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of SemDial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Shalyminov, Arash Eshghi, and Oliver Lemon. 2017. Challenging neural dialogue models with nat- ural data: Memory networks fail on incremental phenomena. In Proceedings of SemDial.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6199"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Towards ai-complete question answering: A set of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tomas Mikolov. 2016. Towards ai-complete question answering: A set of prerequisite toy tasks. In Proceedings of ICLR.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.05732"
]
},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2017. Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. arXiv preprint arXiv:1711.05732.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Dancin seq2seq: Fooling text classifiers with adversarial text example generation",
"authors": [
{
"first": "Catherine",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.05419"
]
},
"num": null,
"urls": [],
"raw_text": "Catherine Wong. 2017. Dancin seq2seq: Fooling text classifiers with adversarial text example generation. arXiv preprint arXiv:1712.05419.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "A joint speakerlistener-reinforcer model for referring expressions",
"authors": [
{
"first": "Licheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Tamara",
"middle": [
"L"
],
"last": "Berg",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Vision and Pattern Recognition (CVPR)",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. 2017. A joint speakerlistener-reinforcer model for referring expressions. In Computer Vision and Pattern Recognition (CVPR), volume 2.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Generating natural adversarial examples",
"authors": [
{
"first": "Zhengli",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In Pro- ceedings of ICLR.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "F1 results of previous works as compared to our models. LSTM, HRED and VHRED are results reported in Serban et al. (2017a). VHRED (w/ attn.) and Reranking-RL are our results. Top results are bolded.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Activity and Entity F1 results of adversarial strategies on the VHRED model. Numbers marked with * are stat. significantly higher/lower than their counterparts obtained with Normal Input (upper-right corner of table).",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"content": "<table><tr><td>), except that</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Activity and Entity F1 results of adversarial strategies on the Reranking-RL model. Numbers marked with * are stat. significantly higher/lower than their counterparts obtained with Normal Input (upper-right corner).",
"content": "<table><tr><td>Strategy Name</td><td colspan=\"4\">VHRED Cont. Resp. Cont. Resp. Reranking-RL</td></tr><tr><td>Random Swap</td><td>1.00</td><td>0.71</td><td>1.00</td><td>0.86</td></tr><tr><td>Stopword Dropout</td><td>0.61</td><td>0.50</td><td>0.76</td><td>0.68</td></tr><tr><td>Data-Level Para.</td><td>0.96</td><td>0.58</td><td>0.96</td><td>0.74</td></tr><tr><td>Gen.-Level Para.</td><td>0.70</td><td>0.40</td><td>0.76</td><td>0.55</td></tr><tr><td>Grammar Err.</td><td>0.96</td><td>0.58</td><td>0.97</td><td>0.74</td></tr><tr><td>Add Negation</td><td>0.96</td><td>0.69</td><td>0.97</td><td>0.81</td></tr><tr><td>Antonym</td><td>0.98</td><td>0.66</td><td>0.98</td><td>0.74</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "",
"content": "<table><tr><td>and 3, which indicate</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "",
"content": "<table><tr><td colspan=\"3\">: Human evaluation results on comparison be-</td></tr><tr><td colspan=\"3\">tween VHRED baseline trained on normal inputs vs.</td></tr><tr><td colspan=\"3\">VHRED trained on each Should-Not-Change strategy</td></tr><tr><td colspan=\"3\">(incl. one with all Should-Not-Change strategies com-</td></tr><tr><td colspan=\"3\">bined) and each Should-Change strategy for Ubuntu.</td></tr><tr><td/><td colspan=\"2\">Pointer-Generator ParaNMT-5M</td></tr><tr><td>Avg. Score</td><td>3.26</td><td>3.54</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF9": {
"text": "Adversarial Results on DynoNet.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}