{
"paper_id": "Q19-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:09:22.420576Z"
},
"title": "Trick Me If You Can: Human-in-the-Loop Generation of Adversarial Examples for Question Answering",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Adversarial evaluation stress-tests a model's understanding of natural language. Because past approaches expose superficial patterns, the resulting adversarial examples are limited in complexity and diversity. We propose humanin-the-loop adversarial generation, where human authors are guided to break models. We aid the authors with interpretations of model predictions through an interactive user interface. We apply this generation framework to a question answering task called Quizbowl, where trivia enthusiasts craft adversarial questions. The resulting questions are validated via live human-computer matches: Although the questions appear ordinary to humans, they systematically stump neural and information retrieval models. The adversarial questions cover diverse phenomena from multi-hop reasoning to entity type distractors, exposing open challenges in robust question answering.",
"pdf_parse": {
"paper_id": "Q19-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Adversarial evaluation stress-tests a model's understanding of natural language. Because past approaches expose superficial patterns, the resulting adversarial examples are limited in complexity and diversity. We propose humanin-the-loop adversarial generation, where human authors are guided to break models. We aid the authors with interpretations of model predictions through an interactive user interface. We apply this generation framework to a question answering task called Quizbowl, where trivia enthusiasts craft adversarial questions. The resulting questions are validated via live human-computer matches: Although the questions appear ordinary to humans, they systematically stump neural and information retrieval models. The adversarial questions cover diverse phenomena from multi-hop reasoning to entity type distractors, exposing open challenges in robust question answering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Proponents of machine learning claim human parity on tasks like reading comprehension (Yu et al., 2018) and commonsense inference (Devlin et al., 2018) . Despite these successes, many evaluations neglect that computers solve natural language processing (NLP) tasks in a fundamentally different way than humans.",
"cite_spans": [
{
"start": 86,
"end": 103,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 130,
"end": 151,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Models can succeed without developing ''true'' language understanding, instead learning superficial patterns from crawled (Chen et al., 2016) or manually annotated data sets (Gururangan et al., 2018; Kaushik and Lipton, 2018) . Thus, recent work stress-tests models via adversarial evaluation: elucidating a system's capabilities by exploiting its weaknesses (Jia and Liang, 2017; Belinkov and Glass, 2019) . Unfortunately, whereas adversarial evaluation reveals simplistic model failures (Ribeiro et al., 2018; Mudrakarta et al., 2018) , exploring more complex failure patterns requires human involvement ( Figure 1 ): Automatically modifying natural language examples without invalidating them is difficult. Hence, the diversity of adversarial examples is often severely restricted.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 174,
"end": 199,
"text": "(Gururangan et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 200,
"end": 225,
"text": "Kaushik and Lipton, 2018)",
"ref_id": "BIBREF25"
},
{
"start": 359,
"end": 380,
"text": "(Jia and Liang, 2017;",
"ref_id": "BIBREF23"
},
{
"start": 381,
"end": 406,
"text": "Belinkov and Glass, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 480,
"end": 511,
"text": "failures (Ribeiro et al., 2018;",
"ref_id": null
},
{
"start": 512,
"end": 536,
"text": "Mudrakarta et al., 2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 608,
"end": 616,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Instead, our human-computer hybrid approach uses human creativity to generate adversarial examples. A user interface presents model interpretations and helps users craft model-breaking examples (Section 3). We apply this to a question answering (QA) task called Quizbowl, where trivia enthusiasts-who write questions for academic competitions-create diverse examples that stump existing QA models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The adversarially authored test set is nonetheless as easy as regular questions for humans (Section 4), but the relative accuracy of strong QA models drops as much as 40% (Section 5). We also host live human vs. computer matches-where models typically defeat top human teams-but observe spectacular model failures on adversarial questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Analyzing the adversarial edits uncovers phenomena that humans can solve but computers cannot (Section 6), validating that our framework uncovers creative, targeted adversarial edits (Section 7). Our resulting adversarial data set presents a fun, challenging, and diverse resource for future QA research: A system that masters it will demonstrate more robust language understanding. Figure 1 : Adversarial evaluation in NLP typically focuses on a specific phenomenon (e.g., word replacements) and then generates the corresponding examples (top). Consequently, adversarial examples are limited to the diversity of what the underlying generative model or perturbation rule can produce-and also require downstream human evaluation to ensure validity. Our setup (bottom) instead has human-authored examples, using human-computer collaboration to craft adversarial examples with greater diversity.",
"cite_spans": [],
"ref_spans": [
{
"start": 383,
"end": 391,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Adversarial examples (Szegedy et al., 2013) often reveal model failures better than traditional test sets. However, automatic adversarial generation is tricky for NLP (e.g., by replacing words) without changing an example's meaning or invalidating it.",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "(Szegedy et al., 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation for NLP",
"sec_num": "2"
},
{
"text": "Recent work sidesteps this by focusing on simple transformations that preserve meaning. For instance, Ribeiro et al. 2018generate adversarial perturbations such as replacing What has \u2192 What's. Other minor perturbations such as typos (Belinkov and Bisk, 2018) , adding distractor sentences (Jia and Liang, 2017; Mudrakarta et al., 2018) , or character replacements (Ebrahimi et al., 2018) preserve meaning while degrading model performance.",
"cite_spans": [
{
"start": 233,
"end": 258,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 289,
"end": 310,
"text": "(Jia and Liang, 2017;",
"ref_id": "BIBREF23"
},
{
"start": 311,
"end": 335,
"text": "Mudrakarta et al., 2018)",
"ref_id": "BIBREF30"
},
{
"start": 364,
"end": 387,
"text": "(Ebrahimi et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation for NLP",
"sec_num": "2"
},
{
"text": "Generative models can discover more adversarial perturbations but require post hoc human verification of the examples. For example, neural paraphrase or language models can generate syntax modifications (Iyyer et al., 2018) , plausible captions (Zellers et al., 2018) , or NLI premises . These methods improve examplelevel diversity but mainly target a specific phenomenon, (e.g., rewriting question syntax).",
"cite_spans": [
{
"start": 203,
"end": 223,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 245,
"end": 267,
"text": "(Zellers et al., 2018)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation for NLP",
"sec_num": "2"
},
{
"text": "Furthermore, existing adversarial perturbations are restricted to sentences-not the paragraph inputs of Quizbowl and other tasks-due to challenges in long-text generation. For instance, syntax paraphrase networks (Iyyer et al., 2018) applied to Quizbowl only yield valid paraphrases 3% of the time (Appendix A).",
"cite_spans": [
{
"start": 213,
"end": 233,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Evaluation for NLP",
"sec_num": "2"
},
{
"text": "Instead, we task human authors with adversarial writing of questions: generating examples that break a specific QA system but are still answerable by humans. We expose model predictions and interpretations to question authors, who find question edits that confuse the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Putting a Human in the Loop",
"sec_num": "2.1"
},
{
"text": "The user interface makes the adversarial writing process interactive and model-driven, in contrast to adversarial examples written independently of a model (Ettinger et al., 2017) . The result is an adversarially authored data set that explicitly exposes a model's limitations by design.",
"cite_spans": [
{
"start": 156,
"end": 179,
"text": "(Ettinger et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Putting a Human in the Loop",
"sec_num": "2.1"
},
{
"text": "Human-in-the-loop generation can replace or aid model-based adversarial generation approaches. Creating interfaces and interpretations is often easier than designing and training generative models for specific domains. In domains where adversarial generation is feasible, human creativity can reveal which tactics automatic approaches can later emulate. Model-based and human-in-theloop generation approaches can also be combined by training models to mimic human adversarial edit history, using the relative merits of both approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Putting a Human in the Loop",
"sec_num": "2.1"
},
{
"text": "3 Our QA Testbed: Quizbowl The ''gold standard'' of academic competitions between universities and high schools is Quizbowl. Unlike QA formats such as Jeopardy! (Ferrucci et al., 2010) , Quizbowl questions are designed to be interrupted: Questions are read to two competing teams and whoever knows the answer first interrupts the question and ''buzzes in.'' This style of play requires questions to be structured ''pyramidally'' (Jose, 2017): Questions start with difficult clues and get progressively easier. These questions are carefully crafted to allow the most knowledgeable player to answer first. A question on Paris that begins ''this capital of France'' would test reaction speed, not knowledge; thus, skilled authors arrange the clues so players will recognize them with increasing probability (Figure 2 ). The answers to Quizbowl questions are typically well-known entities. In the QA community (Hirschman and Gaizauskas, 2001) , this is called ''factoid'' QA: The entities come from a relatively closed set of possible answers.",
"cite_spans": [
{
"start": 161,
"end": 184,
"text": "(Ferrucci et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 906,
"end": 938,
"text": "(Hirschman and Gaizauskas, 2001)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 804,
"end": 813,
"text": "(Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Putting a Human in the Loop",
"sec_num": "2.1"
},
{
"text": "Like most QA data sets, Quizbowl questions are written for humans. Unfortunately, the heuristics that question authors use to select clues do not always apply to computers. For example, humans are unlikely to memorize every song in every opera by a particular composer. This, however, is trivial for a computer. In particular, a simple QA system easily solves the example in Figure 2 from seeing the reference to ''Un Bel Di''. Other questions contain uniquely identifying ''trigger words'' (Harris, 2006) . For example, ''martensite'' only appears in questions on steel. For these examples, a QA system needs to understand no additional information other than an if-then rule.",
"cite_spans": [
{
"start": 491,
"end": 505,
"text": "(Harris, 2006)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 375,
"end": 383,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Known Exploits of Quizbowl Questions",
"sec_num": "3.1"
},
{
"text": "One might wonder whether this means that factoid QA is thus an uninteresting, nearly solved research problem. However, some Quizbowl questions are fiendishly difficult for computers. Many questions have intricate coreference patterns (Guha et al., 2015) , require reasoning across multiple types of knowledge, or involve complex wordplay. If we can isolate and generate questions with these difficult phenemona, ''simplistic'' factoid QA quickly becomes non-trivial.",
"cite_spans": [
{
"start": 234,
"end": 253,
"text": "(Guha et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Known Exploits of Quizbowl Questions",
"sec_num": "3.1"
},
{
"text": "We conduct two rounds of adversarial writing. In the first, authors attack a traditional information retrieval (IR) system. The IR model is the baseline from a NIPS 2017 shared task on Quizbowl (Boyd-Graber et al., 2018) based on ElasticSearch (Gormley and Tong, 2015) .",
"cite_spans": [
{
"start": 244,
"end": 268,
"text": "(Gormley and Tong, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Data Sets",
"sec_num": "3.2"
},
{
"text": "In the second round, authors attack either the IR model or a neural QA model. The neural model is a bidirectional recurrent neural network (RNN) using the gated recurrent unit architecture (Cho et al., 2014) . The model treats Quizbowl as classification and predicts the answer entity from a sequence of words represented as 300dimensional GloVe embeddings (Pennington et al., 2014) . Both models in this round are trained using an expanded data set of approximately 110,000 Quizbowl questions. We expanded the second round data set to incorporate more diverse answers (25,000 entities vs. 11,000 in round one).",
"cite_spans": [
{
"start": 189,
"end": 207,
"text": "(Cho et al., 2014)",
"ref_id": null
},
{
"start": 357,
"end": 382,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Data Sets",
"sec_num": "3.2"
},
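As a rough illustration of the neural setup described above, here is a minimal sketch (not the authors' released code; the hidden size, pooling, and names are illustrative assumptions) of a GRU-based classifier that maps a question prefix to scores over answer entities:

```python
# Minimal sketch of a GRU answer classifier over 300-dimensional word embeddings.
# Hyperparameters and pooling are illustrative assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class RnnGuesser(nn.Module):
    def __init__(self, vocab_size, num_answers, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # initialized from GloVe in practice
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_answers)

    def forward(self, token_ids):
        vectors = self.embed(token_ids)        # (batch, seq_len, emb_dim)
        states, _ = self.gru(vectors)          # (batch, seq_len, 2 * hidden_dim)
        pooled = states.mean(dim=1)            # average over the words seen so far
        return self.out(pooled)                # one score per candidate answer entity

# Example: score a 30-token question prefix against 25,000 candidate answers.
model = RnnGuesser(vocab_size=100_000, num_answers=25_000)
scores = model(torch.randint(0, 100_000, (1, 30)))
```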
{
"text": "To help write adversarial questions, we expose what the model is thinking to the authors. We interpret models using saliency heat maps: Each word of the question is highlighted based on its importance to the model's prediction (Ribeiro et al., 2016) .",
"cite_spans": [
{
"start": 227,
"end": 249,
"text": "(Ribeiro et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
{
"text": "For the neural model, word importance is the decrease in prediction probability when a word is removed (Li et al., 2016; Wallace et al., 2018) . We focus on gradient-based approximations (Simonyan et al., 2014; Montavon et al., 2018) for their computational efficiency.",
"cite_spans": [
{
"start": 103,
"end": 120,
"text": "(Li et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 121,
"end": 142,
"text": "Wallace et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 187,
"end": 210,
"text": "(Simonyan et al., 2014;",
"ref_id": "BIBREF38"
},
{
"start": 211,
"end": 233,
"text": "Montavon et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
{
"text": "To interpret a model prediction on an input sequence of n words w = w 1 , w 2 , . . . w n , we approximate the classifier f with a linear function of w i derived from the first-order Taylor expansion. The importance of w i , with embedding v i , is the derivative of f with respect to the one-hot vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2202f \u2202w i = \u2202f \u2202v i \u2202v i \u2202w i = \u2202f \u2202v i \u2022 v i .",
"eq_num": "(1)"
}
],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
{
"text": "This simulates how model predictions change when a particular word's embedding is set to the zero vector-it approximates word removal (Ebrahimi et al., 2018; Wallace et al., 2018) .",
"cite_spans": [
{
"start": 134,
"end": 157,
"text": "(Ebrahimi et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 158,
"end": 179,
"text": "Wallace et al., 2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
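A minimal sketch of this gradient-based importance score, assuming a PyTorch classifier shaped like the RnnGuesser sketch above (the function name and model internals are our assumptions, not the paper's code):

```python
# Minimal sketch of Equation 1: importance of word i = (df/dv_i) . v_i,
# a first-order estimate of how the score changes if word i's embedding is zeroed.
import torch

def word_importance(model, token_ids, target_class):
    vectors = model.embed(token_ids)           # (1, seq_len, emb_dim)
    vectors.retain_grad()                      # keep the gradient for this non-leaf tensor
    states, _ = model.gru(vectors)
    scores = model.out(states.mean(dim=1))
    scores[0, target_class].backward()
    # Dot product of the gradient with the embedding, one scalar per word.
    return (vectors.grad * vectors).sum(dim=-1).squeeze(0)

# Example (with the RnnGuesser sketch): importances for the predicted answer.
# tokens = torch.randint(0, 100_000, (1, 30))
# pred = model(tokens).argmax(dim=-1).item()
# saliency = word_importance(model, tokens, pred)
```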
{
"text": "For the IR model, we use the ElasticSearch Highlight API (Gormley and Tong, 2015) , which provides word importance scores based on query matches from the inverted index. ",
"cite_spans": [
{
"start": 57,
"end": 81,
"text": "(Gormley and Tong, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quizbowl Models",
"sec_num": "3.3"
},
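For reference, a minimal sketch of retrieving highlighted matches from ElasticSearch (the index name, the text and page fields, and the client version are assumptions; older clients take the same request dictionary via a body= argument):

```python
# Minimal sketch: query an ElasticSearch index of training questions and ask the
# Highlight API to mark which query terms matched. Index and field names are assumed.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
response = es.search(
    index="qb_questions",
    query={"match": {"text": "this composer wrote Variations on a Theme by Haydn"}},
    highlight={"fields": {"text": {}}},
    size=5,
)
for hit in response["hits"]["hits"]:
    # Each hit carries highlighted fragments with matched terms wrapped in <em> tags.
    print(hit["_score"], hit["highlight"]["text"])
```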
{
"text": "The authors interact with either the IR or RNN model through a user interface 1 (Figure 3 ). An author writes their question in the upper right and the model's top five predictions (Machine Guesses) appear in the upper left. If the top prediction is the right answer, the interface indicates where in the question the model is first correct. The goal is to cause the model to be incorrect or to delay the correct answer position as much as possible. 2 The words of the current question are highlighted using the applicable interpretation method in the lower right (Evidence). We do not enforce time restrictions or require questions to be adversarial: If the author fails to break the system, they are free to ''give up'' and submit any question.",
"cite_spans": [
{
"start": 450,
"end": 451,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 80,
"end": 89,
"text": "(Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Adversarial Writing Interface",
"sec_num": "3.4"
},
{
"text": "The interface continually updates as the author writes. We track the question edit history to identify recurring model failures (Section 6) and understand how interpretations guide the authors (Section 7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Writing Interface",
"sec_num": "3.4"
},
{
"text": "We focus on members of the Quizbowl community: They have deep trivia knowledge and craft questions for Quizbowl tournaments (Jennings, 2006) . We award prizes for questions read at live human-computer matches (Section 5.3).",
"cite_spans": [
{
"start": 124,
"end": 140,
"text": "(Jennings, 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Authors",
"sec_num": "3.5"
},
{
"text": "The question authors are familiar with the standard format of Quizbowl questions (Lujan and Teitler, 2003) . The questions follow a common paragraph structure, are well edited for grammar, and finish with a simple ''give-away'' clue. These constraints benefit the adversarial writing process as it is very clear what constitutes a difficult but valid question. Thus, our examples go beyond surface level ''breaks'' such as character noise (Belinkov and Bisk, 2018) or syntax changes (Iyyer et al., 2018) . Rather, questions are difficult because of their semantic content (examples in Section 6).",
"cite_spans": [
{
"start": 81,
"end": 106,
"text": "(Lujan and Teitler, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 439,
"end": 464,
"text": "(Belinkov and Bisk, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 483,
"end": 503,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Question Authors",
"sec_num": "3.5"
},
{
"text": "To see how an author might write a question with the interface, we walk through an example of writing a question's first sentence. The author first selects the answer to their question from the training set-Johannes Brahms-and begins:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "Karl Ferdinand Pohl showed this composer some pieces on which this composer's Variations on a Theme by Haydn were based.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "The QA system buzzes (i.e., it has enough information to interrupt and answer correctly) after ''composer''. The author sees that the name ''Karl Ferdinand Pohl'' appears in Brahms' Wikipedia page and avoids that specific phrase, describing Pohl's position instead of naming him directly:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "This composer was given a theme called ''Chorale St. Antoni'' by the archivist of the Vienna Musikverein, which could have been written by Ignaz Pleyel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "This rewrite adds in some additional information (there is a scholarly disagreement over who wrote the theme and its name), and the QA system now incorrectly thinks the answer is Fr\u00e9d\u00e9ric Chopin. The user can continue to build on the theme, writing While summering in Tutzing, this composer turned that theme into ''Variations on a Theme by Haydn''.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "Again, the author sees that the system buzzes ''Variations on a Theme'' with the correct answer. However, the author can rewrite the title in its original German, ''Variationen\u00fcber ein Thema von Haydn'' to fool the system. The author continues to create entire questions the model cannot solve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How an Author Writes a Question",
"sec_num": "3.6"
},
{
"text": "Our adversarial data set consists of 1,213 questions with 6,541 sentences across diverse topics (Table 1) . 3 There are 807 questions written against the IR system and 406 against the neural model by 115 unique authors. We plan to hold twice-yearly competitions to continue data collection.",
"cite_spans": [
{
"start": 108,
"end": 109,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 96,
"end": 105,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "A New Adversarially Authored Data Set",
"sec_num": "4"
},
{
"text": "We validate that the adversarially authored questions are not of poor quality or too difficult for humans. We first automatically filter out questions based on length, the presence of vulgar statements, or repeated submissions (including re-submissions from the Quizbowl training or evaluation data). We next host a human-only Quizbowl event using intermediate and expert players (former and current collegiate Quizbowl players). We select 60 adversarially authored questions and 60 standard high school national championship questions, both with the same number of questions per category (list of categories in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 612,
"end": 619,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Validating Questions with Quizbowlers",
"sec_num": "4.1"
},
{
"text": "To answer a Quizbowl question, a player interrupts the question-the earlier the better. To capture this dynamic, we record both the average answer position (as a percentage of the question, lower is better) and answer accuracy. We shuffle the regular and adversarially authored questions, read them to players, and record these two metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validating Questions with Quizbowlers",
"sec_num": "4.1"
},
{
"text": "The adversarially authored questions are on average easier for humans than the regular test questions. For the adversarially authored set, humans buzz in with 41.6% of the question remaining and an accuracy of 89.7%. On the standard questions, humans buzz in with 28.3% of the question remaining and an accuracy of 84.2%. The difference in accuracy between the two types of questions is not significantly different (p = 0.16 using Fisher's exact test), but the buzzing position is earlier for adversarially authored questions (p = 0.0047 for a two-sided t-test). We expect the questions that were not played to be of comparable difficulty because they went through the same submission process and post-processing. We further explore the human-perceived difficulty of the adversarially-authored questions in Section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validating Questions with Quizbowlers",
"sec_num": "4.1"
},
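For concreteness, a minimal sketch of the two significance tests mentioned above (the counts and buzz positions below are placeholders to make the example runnable, not the study's data):

```python
# Minimal sketch of the reported tests; all numbers are placeholders for illustration.
from scipy.stats import fisher_exact, ttest_ind

# Accuracy: 2x2 table of (correct, wrong) answers on adversarial vs. regular questions.
adv_correct, adv_wrong = 52, 6
reg_correct, reg_wrong = 48, 9
_, p_accuracy = fisher_exact([[adv_correct, adv_wrong], [reg_correct, reg_wrong]])

# Buzzing position: per-question fraction of the question remaining at the buzz.
adv_remaining = [0.45, 0.38, 0.50, 0.41, 0.36]
reg_remaining = [0.30, 0.25, 0.33, 0.27, 0.26]
_, p_buzz = ttest_ind(adv_remaining, reg_remaining)  # two-sided by default

print(f"accuracy p={p_accuracy:.3f}, buzz-position p={p_buzz:.4f}")
```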
{
"text": "This section evaluates QA systems on the adversarially authored questions. We test three models: the IR and RNN models shown in the interface, as well as a Deep Averaging Network (Iyyer et al., 2015, DAN) to evaluate the transferability of the adversarial questions. We break our study into two rounds. The first round consists of adversarially authored questions written against the IR system Figure 4 : The first round of adversarial writing attacks the IR model. Like regular test questions, adversariallyauthored questions begin with difficult clues that trick the model. However, the adversarial questions are significantly harder during the crucial middle third of the question.",
"cite_spans": [
{
"start": 179,
"end": 204,
"text": "(Iyyer et al., 2015, DAN)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 394,
"end": 402,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computer Experiments",
"sec_num": "5"
},
{
"text": "(Section 5.1); the second-round questions target both the IR and RNN (Section 5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computer Experiments",
"sec_num": "5"
},
{
"text": "Finally, we also hold live competitions that pit the state-of-the-art Studio Ousia model (Yamada et al., 2018) against human teams (Section 5.3).",
"cite_spans": [
{
"start": 89,
"end": 110,
"text": "(Yamada et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computer Experiments",
"sec_num": "5"
},
{
"text": "The first round of adversarially authored questions target the IR model and are significantly harder for the IR, RNN, and DAN models (Figure 4 ). For example, the DAN's accuracy drops from 54.1% to 32.4% on the full question (60% of original performance). For both adversarially authored and original test questions, the early clues are difficult to answer (near zero accuracy for the first 10-25% of the question). However, during the middle third of the questions, where buzzes in Quizbowl most frequently occur, the accuracy on original test questions rises significantly more quickly than the adversarially authored ones. For both type of questions, the accuracy rises towards the end as the clues become ''give-aways''.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 142,
"text": "(Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "First-Round Attacks: IR Adversarial",
"sec_num": "5.1"
},
{
"text": "In the second round, the authors also attack an RNN model. All models tested in the second round are trained on a larger data set (Section 3.2). A similar trend holds for IR adversarial questions in the second round ( Figure 5 ): A question that tricks the IR system also fools the two neural models (i.e., adversarial examples transfer). For example, the DAN model was never targeted but had substantial accuracy decreases in both rounds. This does not hold for questions written adversarially against the RNN model, however. On these questions, the neural models struggle but the IR model is largely unaffected (Figure 5, right) .",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 5",
"ref_id": null
},
{
"start": 613,
"end": 630,
"text": "(Figure 5, right)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Second-Round Attacks: RNN Adversarial",
"sec_num": "5.2"
},
{
"text": "In the offline setting (i.e., no pressure to ''buzz in'' before an opponent), models demonstrably struggle on the adversarial questions. But, what happens in standard Quizbowl-live, head-tohead games?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Humans vs. Computer, Live!",
"sec_num": "5.3"
},
{
"text": "We run two live humans vs. computer matches. The first match uses IR adversarial questions in a 40-question, tossup-only Quizbowl format. We pit a human team of national-level Quizbowl players against the Studio Ousia model (Yamada et al., 2018) , the current state-of-the-art Quizbowl system. The model combines neural, IR, and knowledge graph components (details in Appendix B), and won the 2017 NIPS shared task, defeating a team of expert humans 475 to 200 on regular Quizbowl test questions. Although the team at our live event was comparable to the NIPS 2017 team, the tables were turned: The human team won handedly 300 to 30.",
"cite_spans": [
{
"start": 224,
"end": 245,
"text": "(Yamada et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Humans vs. Computer, Live!",
"sec_num": "5.3"
},
{
"text": "Our second live event is significantly larger: Seven human teams play against models on over 400 questions written adversarially against the RNN model. The human teams range in ability from high school Quizbowl players to national-level teams (Jeopardy! champions, Academic Competition Federation national champions, top scorers in the World Quizzing Championships). The models are based on either IR or neural methods. Despite a few close games between the weaker human teams and the models, humans prevailed in every match. 4 Figure 5 : The second round of adversarial writing attacks the IR and RNN models. The questions targeted against the IR system degrade the performance of all models. However, the reverse does not hold: The IR model is robust to the questions written to fool the RNN. Figure 6 : Humans find adversarially authored question about as difficult as normal questions: rusty weekend warriors (Intermediate), active players (Expert), or the best trivia players in the world (National). Figure 7 : The accuracy of the state-of-the-art Studio Ousia model degrades on the adversarially authored questions despite never being directly targeted. This verifies that our findings generalize beyond the RNN and IR models.",
"cite_spans": [],
"ref_spans": [
{
"start": 528,
"end": 536,
"text": "Figure 5",
"ref_id": null
},
{
"start": 795,
"end": 803,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1006,
"end": 1014,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Humans vs. Computer, Live!",
"sec_num": "5.3"
},
{
"text": "Figures 6 and 7 summarize the live match results for the humans and Ousia model, respectively. Humans and models have considerably different trends in answer accuracy. Human accuracy on both regular and adversarial questions rises quickly in the last half of the question (curves in Figure 6 ). In essence, the ''give-away'' clues at the end of questions are easy for humans to answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 283,
"end": 291,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Humans vs. Computer, Live!",
"sec_num": "5.3"
},
{
"text": "On the other hand, models on regular test questions do well in the first half, i.e., the ''difficult'' clues for humans are easier for models (Regular Test in Figure 7) . However, models, like humans, struggle on adversarial questions in the first half.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 168,
"text": "Figure 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Humans vs. Computer, Live!",
"sec_num": "5.3"
},
{
"text": "This section analyzes the adversarially authored questions to identify the source of their difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What Makes Adversarially Authored",
"sec_num": "6"
},
{
"text": "One possible source of difficulty is data scarcity: The answers to adversarial questions rarely appear in the training set. However, this is not the case; The mean number of training examples per answer (e.g., George Washington) is 14.9 for the adversarial questions versus 16.9 for the regular test data. Another explanation for question difficulty is limited ''overlap'' with the training datanamely, models cannot match n-grams from the training clues. We measure the proportion of test n-grams that also appear in training questions with the same answer ( equal for unigrams but surprisingly higher for adversarial questions' bigrams. The adversarial questions are also shorter and have fewer named entities (NEs). However, the proportion of NEs is roughly equivalent. One difference between the questions written against the IR system and the ones written against the RNN model is the drop in NEs. The decrease in NEs is higher for IR adversarial questions, which may explain their generalization: The RNN is more sensitive to changes in phrasing, whereas the IR system is more sensitive to specific words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Differences in Questions",
"sec_num": "6.1"
},
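A minimal sketch of the n-gram overlap measure described above (our own helper, with naive whitespace tokenization as an assumption):

```python
# Minimal sketch: fraction of a test question's n-grams that also occur in
# training questions sharing the same answer. Tokenization here is deliberately naive.
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(test_question, training_questions_same_answer, n=2):
    train_grams = set()
    for question in training_questions_same_answer:
        train_grams |= ngrams(question.lower().split(), n)
    test_grams = ngrams(test_question.lower().split(), n)
    return len(test_grams & train_grams) / max(len(test_grams), 1)

# Example: bigram overlap between one adversarial question and the training
# questions that share its answer (hypothetical variables).
# overlap = ngram_overlap(adv_question, train_questions_for_answer, n=2)
```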
{
"text": "We next qualitatively analyze adversarially authored questions. We manually inspect the author edit logs, classifying questions into six different phenomena in two broad categories (Table 3) from a random sample of 100 questions, doublecounting questions into multiple phenomena when applicable.",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 190,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Categorizing Adversarial Phenomena",
"sec_num": "6.2"
},
{
"text": "The first question category requires reasoning about known clues (Table 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 74,
"text": "(Table 4)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Adversarial Category 1: Reasoning",
"sec_num": "6.2.1"
},
{
"text": "Composing Seen Clues: These questions provide entities with a first-order relationship to the correct answer. The system must triangulate the correct answer by ''filling in the blank''. For example, the first question of Table 4 names the place of death of Tecumseh. The training data contains a question about his death reading ''though stiff fighting came from their Native American allies under Tecumseh, who died at this battle'' (The Battle of the Thames). The system must connect these two clues to answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Adversarial Category 1: Reasoning",
"sec_num": "6.2.1"
},
{
"text": "Logic & Calculations: These questions require mathematical or logical operators. For example, the training data contains a clue about the Battle of Thermopylae: ''King Leonidas and 300 Spartans died at the hands of the Persians.'' The second question in Table 4 requires adding 150 to the number of Spartans.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Adversarial Category 1: Reasoning",
"sec_num": "6.2.1"
},
{
"text": "Step Reasoning: This question type requires multiple reasoning steps between entities. For example, the last question of Table 4 requires a reasoning step from the ''I Have A Dream'' speech to the Lincoln Memorial and then another reasoning step to reach Abraham Lincoln.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Multi-",
"sec_num": null
},
{
"text": "The second category consists of circumlocutory clues (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 62,
"text": "(Table 5)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Adversarial Category 2:",
"sec_num": "6.2.2"
},
{
"text": "Paraphrases: A common adversarial modification is to paraphrase clues to remove exact n-gram matches from the training data. This renders our IR system useless but also hurts the neural models. Many of the adversarial paraphrases go beyond syntax-only changes (e.g., the first row of Table 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 291,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Adversarial Category 2:",
"sec_num": "6.2.2"
},
{
"text": "Entity Type Distractors: Whether explicit or implicit in a model, one key component for QA is determining the answer type of the question. Authors take advantage of this by providing clues that cause the model to select the wrong answer type. For example, in the second question of Table 5 , the ''lead-in'' clue implies the answer may be an actor. The RNN model answers Don Cheadle in response despite previously seeing the Bill Clinton ''playing a saxophone'' clue in the training data. (Hardwick, 1967; Watson, 1996) to a question about Lillian Hellman's The Little Foxes: ''Ritchie Watson commended this play's historical accuracy for getting the price for a dozen eggs right-ten cents-to defend against Elizabeth Hardwick's contention that it was a sentimental history.'' Novel clues create an incentive for models to use information beyond past questions and Wikipedia. Novel clues have different effects on IR and neural models: Whereas IR models largely ignore them, novel clues can lead neural models astray. For example, on a question about Tiananmen Square, the RNN model buzzes on the clue ''World Economic Herald''. However, adding a novel clue about ''the history of shav- Figure 8 : The interpretation successfully aids an attack against the IR system. The author removes the phrase containing the words ''ellipse'' and ''parabola'', which are highlighted in the interface (shown in bold). In its place, they add a phrase which the model associates with the answer sphere. ing'' renders the brittle RNN unable to buzz on the ''World Economic Herald'' clue that it was able to recognize before. 5 This helps to explain why adversarially authored questions written against the RNN do not stump IR models. The Question Length and the position where the model is first correct (Buzzing Position, lower is better) are shown as a question is written. In (1), the author makes a mistake by removing a sentence that makes the question easier for the IR model. In (2), the author uses the interpretation, replacing the highlighted word (shown in bold) ''molecules'' with ''species'' to trick the RNN model.",
"cite_spans": [
{
"start": 489,
"end": 505,
"text": "(Hardwick, 1967;",
"ref_id": "BIBREF16"
},
{
"start": 506,
"end": 519,
"text": "Watson, 1996)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1187,
"end": 1195,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adversarial Category 2:",
"sec_num": "6.2.2"
},
{
"text": "This section explores how model interpretations help to guide adversarial authors. We analyze the question edit log, which reflects how authors modify questions given a model interpretation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How Do Interpretations Help?",
"sec_num": "7"
},
{
"text": "A direct edit of the highlighted words often creates an adversarial example (e.g., Figure 8 ). Figure 9 shows a more intricate example. The left plot shows the Question Length, as well as the position where the model is first correct (Buzzing Position, lower is better). We show two adversarial edits. In the first (1), the author removes the first sentence of the question, which makes the question easier for the model (buzzing position decreases). The author counteracts this in the second edit (2), where they use the interpretation to craft a targeted modification that breaks the IR model. However, models are not always this brittle. In Figure C .1, the interpretation fails to aid an adversarial attack against the RNN model. At each step, the author uses the highlighted words as a guide to edit targeted portions of the question yet fails to trick the model. The author gives up and submits their relatively non-adversarial question.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 8",
"ref_id": null
},
{
"start": 95,
"end": 103,
"text": "Figure 9",
"ref_id": null
},
{
"start": 644,
"end": 652,
"text": "Figure C",
"ref_id": null
}
],
"eq_spans": [],
"section": "How Do Interpretations Help?",
"sec_num": "7"
},
{
"text": "We also interview the adversarial authors who attended our live events. Multiple authors agree that identifying oft-repeated ''stock'' clues was the interface's most useful feature. As one author explained, ''There were clues which I did not think were stock clues but were later revealed to be.'' In particular, the author's question about the Congress of Vienna used a clue about ''Krak\u00f3w becoming a free city,'' which the model immediately recognized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews With Adversarial Authors",
"sec_num": "7.1"
},
{
"text": "Another interviewee was Jordan Brownstein, 6 a national Quizbowl champion and one of the best active players, who felt that computer opponents were better at questions that contained direct references to battles or poetry. He also explained how the different writing styles used by each Quizbowl author increases the difficulty of questions for computers. The interface's evidence panel allows authors to read existing clues that encourage these unique stylistic choices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interviews With Adversarial Authors",
"sec_num": "7.1"
},
{
"text": "New data sets often allow for a finer-grained analysis of a linguistic phenomenon, task, or genre. The LAMBADA data set (Paperno et al., 2016) tests a model's understanding of the broad contexts present in book passages, whereas the Natural Questions corpus (Kwiatkowski et al., 2019) combs Wikipedia for answers to questions that users trust search engines to answer (Oeldorf-Hirsch et al., 2014) . Other work focuses on natural language inference, where challenge examples highlight model failures (Glockner et al., 2018; Naik et al., 2018; . Our work is unique in that we use human adversaries to expose model weaknesses, which provides a diverse set of phenomena (from paraphrases to multi-hop reasoning) that models cannot solve.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Paperno et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 258,
"end": 284,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 368,
"end": 397,
"text": "(Oeldorf-Hirsch et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 500,
"end": 523,
"text": "(Glockner et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 524,
"end": 542,
"text": "Naik et al., 2018;",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Other work puts an adversary in the data annotation or postprocessing loop. For instance, Dua et al. (2019) and Zhang et al. (2018) filter out easy questions using a baseline QA model, and Zellers et al. (2018) use stylistic classifiers to filter language inference examples. Rather than filtering out easy questions, we use human adversaries to generate hard ones. Similar to our work, Ettinger et al. (2017) use human adversaries. We extend their setting by providing humans with model interpretations to facilitate adversarial writing. Moreover, we have a readymade audience of question writers to generate adversarial questions.",
"cite_spans": [
{
"start": 90,
"end": 107,
"text": "Dua et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 112,
"end": 131,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF48"
},
{
"start": 189,
"end": 210,
"text": "Zellers et al. (2018)",
"ref_id": "BIBREF47"
},
{
"start": 387,
"end": 409,
"text": "Ettinger et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "The collaborative adversarial writing process reflects the complementary abilities of humans and computers. For instance, ''centaur'' chess teams of both a human and a computer are often stronger than a human or computer alone (Case, 2018) . In Starcraft, humans devise high-level ''macro'' strategies, whereas computers are superior at executing fast and precise ''micro'' actions (Vinyals et al., 2017) . In NLP, computers aid simultaneous human interpreters (He et al., 2016) at remembering forgotten information or translating unfamiliar words.",
"cite_spans": [
{
"start": 227,
"end": 239,
"text": "(Case, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 382,
"end": 404,
"text": "(Vinyals et al., 2017)",
"ref_id": null
},
{
"start": 461,
"end": 478,
"text": "(He et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Finally, recent approaches to adversarial evaluation of NLP models (Section 2) typically target one phenomenon (e.g., syntactic modifications) and complement our human-in-the-loop approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "One of the challenges of machine learning is knowing why systems fail. This work brings together two threads that attempt to answer this question: visualizations and adversarial examples. Visualizations underscore the capabilities of existing models, whereas adversarial examplescrafted with the ingenuity of human expertsshow that these models are still far from matching human prowess.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Our experiments with both neural and IR methodologies show that QA models still struggle with synthesizing clues, handling distracting information, and adapting to unfamiliar data. Our adversarially authored data set is only the first of many iterations (Ruef et al., 2016) . As models improve, future adversarially authored data sets can elucidate the limitations of next-generation QA systems.",
"cite_spans": [
{
"start": 254,
"end": 273,
"text": "(Ruef et al., 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "Whereas we focus on QA, our procedure is applicable to other NLP settings where there is (1) a pool of talented authors who (2) write text with specific goals. Future research can look to craft adversarially authored data sets for other NLP tasks that meet these criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://github.com/Eric-Wallace/trickmeinterface/2 The authors want normal Quizbowl questions that humans can easily answer by the very end. For popular answers, (e.g., Australia or Suez Canal), writing novel final give-away clues is difficult. We thus expect models to often answer correctly by the very end of the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Videos available at http://trickme.qanta.org.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ''history of shaving'' is a tongue-in-cheek name for a poster displaying the hirsute leaders of Communist thought. It goes from the bearded Marx and Engels, to the mustachioed Lenin and Stalin, and finally the clean-shaven Mao.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.qbwiki.com/wiki/Jordan Brownstein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank all of the Quiz Bowl players, writers, and judges who helped make this work possible, especially Ophir Lifshitz and Daniel Jensen. We also thank the anonymous reviewers and members of the UMD ''Feet Thinking'' group for helpful comments. Finally, we would also like to thank Sameer Singh, Matt Gardner, Pranav Goel, Sudha Rao, Pouya Pezeshkpour, Zhengli Zhao, and Saif Mohammad for their useful feedback. This work was supported by NSF grant IIS-1822494. Shi Feng is partially supported by subcontract to Raytheon BBN Technologies by DARPA award HR0011-15-C-0113, and Pedro Rodriguez is partially supported by NSF grant IIS-1409287 (UMD). Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We apply the Syntactically Controlled Paraphrase Network SCPN; (Iyyer et al., 2018) to Quizbowl questions. The model operates on the sentence level and cannot paraphrase paragraphs. We thus feed in each sentence independently, ignoring possible breaks in coreference. The model does not correctly paraphrase most of the complex sentences present in Quizbowl questions. The paraphrases were rife with issues: ungrammatical, repetitive, or missing information. To simplify the setting, we focus on paraphrasing the shortest sentence from each question (often the final clue). The model still fails in this case. We analyze a random sample of 200 paraphrases: Only six maintained all of the original information. Table A .1 shows common failure cases. One recurring issue is an inability to maintain the correct NEs after paraphrasing. In Quizbowl, maintaining entity information is vital for ensuring question validity. We were surprised by this failure because SCPN incorporates a copy mechanism. Sentence Success/Failure Phenomena its types include ''frictional '', ''cyclical'', and ''structural'' Missing Information \u2717 its types include ''frictional'', and structural german author of the sorrows of young werther and a two-part faust Lost Named Entity \u2717 german author of the sorrows of mr. werther name this elegy on the death of john keats composed by percy shelley Incorrect Clue \u2717 name was this elegy on the death of percy shelley identify this play about willy loman written by arthur miller Unsuited Syntax Template \u2717 so you can identify this work of mr. miller he employed marco polo and his father as ambassadors Verb Synonym he hired marco polo and his father as ambassadors Table A .1: Failure and success cases for SCPN. The model fails to create a valid paraphrase of the sentence for 97% of questions.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 1062,
"end": 1098,
"text": "'', ''cyclical'', and ''structural''",
"ref_id": null
}
],
"ref_spans": [
{
"start": 710,
"end": 717,
"text": "Table A",
"ref_id": null
},
{
"start": 1686,
"end": 1693,
"text": "Table A",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Failure of Syntactically Controlled Paraphrase Networks",
"sec_num": null
},
{
"text": "The Studio Ousia system works by aggregating scores from both a neural text classification model and an IR system. Additionally, it scores answers based on their match with the correct entity type (religious leader, government agency, etc.) predicted by a neural entity type classifier. The Studio Ousia system also uses data beyond Quizbowl questions and the text of Wikipedia pages, integrating entities from a knowledge graph and customized word vectors (Yamada et al., 2018) . ",
"cite_spans": [
{
"start": 457,
"end": 478,
"text": "(Yamada et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B Studio Ousia Quizbowl Model",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Syn- thetic and natural noise both break neural machine translation. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Analysis methods in neural language processing: A survey",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "49--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and James Glass. 2019. Anal- ysis methods in neural language processing: A survey. In Transactions of the Association for Computational Linguistics, 7:49-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Human-Computer Question Answering: The Case for Quizbowl",
"authors": [
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Rodriguez",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan Boyd-Graber, Shi Feng, and Pedro Rodriguez. 2018. Human-Computer Question Answering: The Case for Quizbowl. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "How To Become A",
"authors": [
{
"first": "Nicky",
"middle": [],
"last": "Case",
"suffix": ""
}
],
"year": 2018,
"venue": "Centaur. Journal of Design and Science. jods.mitpress",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicky Case. 2018. How To Become A Centaur. Journal of Design and Science. jods.mitpress. mit.edu/pub/issue3-case.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A thorough examination of the CNN/Daily Mail reading comprehension task",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine trans- lation. In Proceedings of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pretraining of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre- training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs",
"authors": [
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
}
],
"year": 2019,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehen- sion benchmark requiring discrete reasoning over paragraphs. In Conference of the North American Chapter of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "HotFlip: White-box adversarial examples for text classification",
"authors": [
{
"first": "Javid",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Anyi",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box ad- versarial examples for text classification. In Proceedings of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Towards linguistically generalizable NLP systems: A workshop and shared task",
"authors": [
{
"first": "Allyson",
"middle": [],
"last": "Ettinger",
"suffix": ""
},
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allyson Ettinger, Sudha Rao, Hal Daum\u00e9 III, and Emily M. Bender. 2017. Towards linguis- tically generalizable NLP systems: A workshop and shared task. In Proceedings of the First Workshop on Building Linguistically General- izable NLP Systems.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building Watson: An Overview of the DeepQA Project",
"authors": [
{
"first": "David",
"middle": [],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gondek",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"A"
],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
},
{
"first": "J",
"middle": [
"William"
],
"last": "Murdock",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nyberg",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Prager",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Schlaefer",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2010,
"venue": "AI Magazine",
"volume": "31",
"issue": "3",
"pages": "59--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Ferrucci, Eric Brown, Jennifer Chu- Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3):59-79.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Breaking NLI systems with sentences that require simple lexical inferences",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Glockner",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Elasticsearch: The Definitive Guide",
"authors": [
{
"first": "Clinton",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Tong",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide, O'Reilly Media, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Removing the training wheels: A coreference dataset that entertains humans and challenges computers",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Danny",
"middle": [],
"last": "Bouman",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2015,
"venue": "North American Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Guha, Mohit Iyyer, Danny Bouman, and Jordan Boyd-Graber. 2015. Removing the training wheels: A coreference dataset that en- tertains humans and challenges computers. In North American Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Annotation artifacts in natural language inference data",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Conference of the North American Chapter of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Little Foxes revived. The New York Review of Books",
"authors": [
{
"first": "Elizabeth",
"middle": [],
"last": "Hardwick",
"suffix": ""
}
],
"year": 1967,
"venue": "",
"volume": "21",
"issue": "",
"pages": "4--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elizabeth Hardwick. 1967. The Little Foxes re- vived. The New York Review of Books, 21:4-5.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Prisoner of Trebekistan: A Decade in Jeopardy!",
"authors": [],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bob Harris. 2006. Prisoner of Trebekistan: A Decade in Jeopardy!. Crown Publisher.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Interpretese vs. translationese: The uniqueness of human strategies in simultaneous interpretation",
"authors": [
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He He, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2016. Interpretese vs. translationese: The unique- ness of human strategies in simultaneous interpretation. In Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Natural language question answering: The view from here",
"authors": [
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2001,
"venue": "Natural Language Engineering",
"volume": "7",
"issue": "4",
"pages": "275--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynette Hirschman and Rob Gaizauskas. 2001. Natural language question answering: The view from here. Natural Language Engineering, 7(4):275-300.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd- Graber, and Hal Daum\u00e9 III. 2015. Deep un- ordered composition rivals syntactic methods for text classification. In Proceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled para- phrase networks. In Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Brainiac: Adventures in the Curious, Competitive, Compulsive World of Trivia Buffs",
"authors": [
{
"first": "Ken",
"middle": [
"Jennings"
],
"last": "",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Jennings. 2006. Brainiac: Adventures in the Curious, Competitive, Compulsive World of Trivia Buffs, Villard.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adversarial examples for evaluating reading comprehension systems",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The craft of writing pyramidal quiz questions: Why writing quiz bowl questions is an intellectual task",
"authors": [
{
"first": "Ike",
"middle": [
"Jose"
],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ike Jose. 2017. The craft of writing pyramidal quiz questions: Why writing quiz bowl ques- tions is an intellectual task. https://blog. lareviewofbooks.org/essays/craft- writing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "How much reading does reading comprehension require? A critical investigation of popular benchmarks",
"authors": [
{
"first": "Divyansh",
"middle": [],
"last": "Kaushik",
"suffix": ""
},
{
"first": "Zachary",
"middle": [
"C"
],
"last": "Lipton",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehen- sion require? A critical investigation of popular benchmarks. In Proceedings of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Natural Questions: A benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Rhinehart",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2019,
"venue": "In Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A benchmark for question answering research. In Transactions of the Association for Computational Linguistics, vol 7, 2019.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Understanding neural networks through representation erasure",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Monroe",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.08220"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv: 1612.08220.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Writing good quizbowl questions: A quick primer",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Lujan",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Teitler",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Lujan and Seth Teitler. 2003. Writing good quizbowl questions: A quick primer. https:// www.ocf.berkeley.edu/%7Equizbowl/ qb-writing.html",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Methods for interpreting and understanding deep neural networks. Digital Signal Processing",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Montavon",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Samek",
"suffix": ""
},
{
"first": "Klaus-Robert",
"middle": [],
"last": "M\u00fcjller",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "73",
"issue": "",
"pages": "1--5",
"other_ids": {
"DOI": [
"10.1016/j.dsp.2017.10.011"
]
},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Montavon, Wojciech Samek, and Klaus- Robert M\u00fcjller. 2018. Methods for interpreting and understanding deep neural networks. Digi- tal Signal Processing, 73:1-5. https://doi. org/10.1016/j.dsp.2017.10.011",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Did the model understand the question?",
"authors": [
{
"first": "Pramod Kaushik",
"middle": [],
"last": "Mudrakarta",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Taly",
"suffix": ""
},
{
"first": "Mukund",
"middle": [],
"last": "Sundararajan",
"suffix": ""
},
{
"first": "Kedar",
"middle": [],
"last": "Dhamdhere",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? In Proceedings of the Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Stress test evaluation for natural language inference",
"authors": [
{
"first": "Aakanksha",
"middle": [],
"last": "Naik",
"suffix": ""
},
{
"first": "Abhilasha",
"middle": [],
"last": "Ravichander",
"suffix": ""
},
{
"first": "Norman",
"middle": [],
"last": "Sadeh",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of International Con- ference on Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "To search or to ask: The routing of information needs between traditional search engines and social networks",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Oeldorf-Hirsch",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Hecht",
"suffix": ""
},
{
"first": "Meredith",
"middle": [
"Ringel"
],
"last": "Morris",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Teevan",
"suffix": ""
},
{
"first": "Darren",
"middle": [],
"last": "Gergle",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Computer Supported Cooperative Work and Social Computing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Oeldorf-Hirsch, Brent Hecht, Meredith Ringel Morris, Jaime Teevan, and Darren Gergle. 2014. To search or to ask: The routing of information needs between traditional search engines and social networks. In Conference on Computer Supported Cooperative Work and Social Computing.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The LAMBADA dataset: Word prediction requiring a broad discourse context",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Quan",
"middle": [
"Ngoc"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Paperno, Germ\u00e1n Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern\u00e1ndez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Pro- ceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Why should I trust you?: Explaining the predictions of any classifier",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Semantically equivalent adversarial rules for debugging NLP models",
"authors": [
{
"first": "Marco Tulio",
"middle": [],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adver- sarial rules for debugging NLP models. In Pro- ceedings of the Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Build it, break it, fix it: Contesting secure development",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Ruef",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Hicks",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Parker",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"L"
],
"last": "Mazurek",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Mardziel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Ruef, Michael Hicks, James Parker, Dave Levin, Michelle L. Mazurek, and Piotr Mardziel. 2016. Build it, break it, fix it: Contesting secure development. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Deep inside convolutional networks: Visualising image classification models and saliency maps",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Vedaldi",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Intriguing properties of neural networks",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Bruna",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"J"
],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Starcraft II: A new challenge for reinforcement learning",
"authors": [
{
"first": "Michelle",
"middle": [],
"last": "Yeo",
"suffix": ""
},
{
"first": "Alireza",
"middle": [],
"last": "Makhzani",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Agapiou",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Schrittwieser",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gaffney",
"suffix": ""
},
{
"first": "Stig",
"middle": [],
"last": "Petersen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Schaul",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hado Van Hasselt",
"suffix": ""
},
{
"first": "Timothy",
"middle": [
"P"
],
"last": "Silver",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Lillicrap",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Calderone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Keet",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Brunasso",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Ekermo",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Repp",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tsing",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.04782"
]
},
"num": null,
"urls": [],
"raw_text": "Michelle Yeo, Alireza Makhzani, Heinrich K\u00fcttler, John Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy P. Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, and Rodney Tsing. 2017. Starcraft II: A new chal- lenge for reinforcement learning. arXiv preprint arXiv:1708.04782.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Interpreting neural networks with nearest neighbors",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Wallace",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP 2018 Workshop on Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Wallace, Shi Feng, and Jordan Boyd-Graber. 2018. Interpreting neural networks with nearest neighbors. In EMNLP 2018 Workshop on Ana- lyzing and Interpreting Neural Networks for NLP.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amapreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. Glue: A multi-task benchmark and analy- sis platform for natural language understanding. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Lillian hellman's ''The Little Foxes'' and the new south creed: An ironic view of southern history",
"authors": [
{
"first": "Ritchie",
"middle": [
"D."
],
"last": "Watson",
"suffix": ""
}
],
"year": 1996,
"venue": "The Southern Literary Journal",
"volume": "28",
"issue": "2",
"pages": "59--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritchie D. Watson. 1996. Lillian hellman's ''The Little Foxes'' and the new south creed: An ironic view of southern history. The Southern Literary Journal, 28(2):59-68.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Studio Ousia's quiz bowl question answering system",
"authors": [
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Ryuji",
"middle": [],
"last": "Tamaki",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yoshiyasu",
"middle": [],
"last": "Takefuji",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.08652"
]
},
"num": null,
"urls": [],
"raw_text": "Ikuya Yamada, Ryuji Tamaki, Hiroyuki Shindo, and Yoshiyasu Takefuji. 2018. Studio Ousia's quiz bowl question answering system. arXiv preprint arXiv:1803.08652.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "QANet: Combining local convolution with global self-attention for reading comprehension",
"authors": [
{
"first": "Adams",
"middle": [
"Wei"
],
"last": "Yu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Dohan",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In Proceedings of the International Conference on Learning Representations.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Record: Bridging the gap between human and machine commonsense reading comprehension",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.12885"
]
},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. Record: Bridging the gap between human and machine commonsense reading compre- hension. arXiv preprint arXiv:1810.12885.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Generating natural adversarial examples",
"authors": [
{
"first": "Zhengli",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Dheeru",
"middle": [],
"last": "Dua",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In Proceedings of the International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example Quizbowl question. The question becomes progressively easier (for humans) to answer later on; thus, more knowledgeable players can answer after hearing fewer clues. Our adversarial writing process ensures that the clues also challenge computers.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "The author writes a question (top right), the QA system provides guesses (left), and explains why it makes those guesses (bottom right). The author can then adapt their question to ''trick'' the model.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 9: The Question Length and the position where the model is first correct (Buzzing Position, lower is better) are shown as a question is written. In (1), the author makes a mistake by removing a sentence that makes the question easier for the IR model. In (2), the author uses the interpretation, replacing the highlighted word (shown in bold) ''molecules'' with ''species'' to trick the RNN model.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Data available at http://trickme.qanta.org.",
"content": "<table><tr><td>Science</td><td>17%</td></tr><tr><td>History</td><td>22%</td></tr><tr><td>Literature</td><td>18%</td></tr><tr><td>Fine Arts</td><td>15%</td></tr><tr><td>Religion, Mythology, Philosophy, and Social Science</td><td>13%</td></tr><tr><td>Current Events, Geography, and General Knowledge</td><td>15%</td></tr><tr><td>Total Questions</td><td>1,213</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF1": {
"text": "The topical diversity of the questions in the adversarially authored data set based on a random sample of 100 questions.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"text": ". The overlap is roughly",
"content": "<table><tr><td/><td colspan=\"2\">Adversarial Regular</td></tr><tr><td>Unigram overlap</td><td>0.40</td><td>0.37</td></tr><tr><td>Bigram overlap</td><td>0.08</td><td>0.05</td></tr><tr><td>Longest n-gram overlap</td><td>6.73</td><td>6.87</td></tr><tr><td>Average NE overlap</td><td>0.38</td><td>0.46</td></tr><tr><td>IR Adversarial</td><td>0.35</td><td/></tr><tr><td>RNN Adversarial</td><td>0.44</td><td/></tr><tr><td>Total Words</td><td>107.1</td><td>133.5</td></tr><tr><td>Total NE</td><td>9.1</td><td>12.5</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "The adversarially authored questions have similar n-gram overlap to the regular test questions. However, the overlap of the named entities (NE) decreases for IR Adversarial questions.",
"content": "<table><tr><td>Composing Seen Clues</td><td>15%</td></tr><tr><td>Logic &amp; Calculations</td><td>5%</td></tr><tr><td>Multi-Step Reasoning</td><td>25%</td></tr><tr><td>Paraphrases</td><td>38%</td></tr><tr><td>Entity Type Distractors</td><td>7%</td></tr><tr><td>Novel Clues</td><td>26%</td></tr><tr><td>Total Questions</td><td>1,213</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF4": {
"text": "",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "The first category of adversarially authored questions consists of examples that require reasoning. Answer displays the correct answer (all models were incorrect). For these examples, connecting the training and adversarially authored clues is simple for humans but difficult for models.",
"content": "<table><tr><td>Set</td><td>Question</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "The second category of adversarial questions consists of clues that are present in the training data but are written in a distracting manner. Training shows relevant snippets from the training data.",
"content": "<table><tr><td>Prediction displays the RNN model's answer prediction (always correct on Training, always incorrect</td></tr><tr><td>on Adversarial).</td></tr><tr><td>because our models have not seen these clues.</td></tr><tr><td>These questions are easy to create: Users can add</td></tr><tr><td>Novel Clues that-because they are not uniquely</td></tr><tr><td>associated with an answer-confuse the models.</td></tr><tr><td>While not as linguistically interesting, novel clues</td></tr><tr><td>are not captured by Wikipedia or Quizbowl data,</td></tr><tr><td>thus improving the data set's diversity. For exam-</td></tr><tr><td>ple, adding clues about literary criticism</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}