{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:54.187493Z"
},
"title": "Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University",
"location": {
"settlement": "Ithaca",
"region": "NY"
}
},
"email": ""
},
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University",
"location": {
"settlement": "Ithaca",
"region": "NY"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C 3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chineseas-a-second-language examinations. We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domainspecific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especiallyon problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C 3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C 3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text. C 3 is available at https://dataset.org/c3/.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C 3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chineseas-a-second-language examinations. We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domainspecific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especiallyon problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C 3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C 3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text. C 3 is available at https://dataset.org/c3/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "''Language is, at best, a means of directing others to construct similar-thoughts from their own prior knowledge.'' Adams and Bruce (1982) Machine reading comprehension (MRC) tasks have attracted substantial attention from both academia and industry. These tasks require a machine reader to answer questions relevant to a given document provided as input (Poon et al., 2010; . In this paper, we focus on free-form multiple-choice MRC tasks-given a document, select the correct answer option from all options associated with a freeform question, which is not limited to a single question type such as cloze-style questions formed by removing a span or a sentence in a text (Hill et al., 2016; Bajgar et al., 2016; Mostafazadeh et al., 2016; Xie et al., 2018; Zheng et al., 2019) or close-ended questions that can be answered with a minimal answer (e.g., yes or no; Clark et al., 2019) .",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "Adams and Bruce (1982)",
"ref_id": "BIBREF6"
},
{
"start": 355,
"end": 374,
"text": "(Poon et al., 2010;",
"ref_id": "BIBREF49"
},
{
"start": 672,
"end": 691,
"text": "(Hill et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 692,
"end": 712,
"text": "Bajgar et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 713,
"end": 739,
"text": "Mostafazadeh et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 740,
"end": 757,
"text": "Xie et al., 2018;",
"ref_id": "BIBREF66"
},
{
"start": 758,
"end": 777,
"text": "Zheng et al., 2019)",
"ref_id": "BIBREF71"
},
{
"start": 864,
"end": 883,
"text": "Clark et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Researchers have developed a variety of freeform multiple-choice MRC datasets that contain a significant percentage of questions focusing on the implicitly expressed facts, events, opinions, or emotions in the given text Lai et al., 2017; Ostermann et al., 2018; Khashabi et al., 2018; Sun et al., 2019a) . Generally, we require the integration of our own prior knowledge and the information presented in the given text to answer these questions, posing new challenges for MRC systems. However, until recently, progress in the development of techniques for addressing this kind of MRC task for Chinese has lagged behind their English counterparts. A primary reason is that most previous work focuses on constructing MRC datasets for Chinese in which most answers are either spans (Cui et al., 2016; Cui et al., 2018a; Shao et al., 2018) or abstractive texts (He et al., 2017 ) merely based on the information explicitly expressed in the provided text.",
"cite_spans": [
{
"start": 221,
"end": 238,
"text": "Lai et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 239,
"end": 262,
"text": "Ostermann et al., 2018;",
"ref_id": "BIBREF48"
},
{
"start": 263,
"end": 285,
"text": "Khashabi et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 286,
"end": 304,
"text": "Sun et al., 2019a)",
"ref_id": "BIBREF56"
},
{
"start": 780,
"end": 798,
"text": "(Cui et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 799,
"end": 817,
"text": "Cui et al., 2018a;",
"ref_id": "BIBREF12"
},
{
"start": 818,
"end": 836,
"text": "Shao et al., 2018)",
"ref_id": "BIBREF55"
},
{
"start": 858,
"end": 874,
"text": "(He et al., 2017",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With a goal of developing similarly challenging, but free-form multiple-choice datasets, and promoting the development of MRC techniques for Chinese, we introduce the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C 3 ) that not only contains multiple types of questions but also requires both the information in the given document and prior knowledge to answer questions. In particular, for assessing model generalizability across different domains, C 3 includes a dialogue-based task C 3 D in which the given document is a dialogue, and a mixed-genre task C 3 M in which the given document is a mixed-genre text that is relatively formally written. All problems are collected from real-world Chinese-as-a-secondlanguage examinations carefully designed by experts to test the reading comprehension abilities of language learners of Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We perform an in-depth analysis of what kinds of prior knowledge are needed for answering questions correctly in C 3 and two representative freeform multiple-choice MRC datasets for English (Lai et al., 2017; Sun et al., 2019a) , and to what extent. We find that solving these general-domain problems requires linguistic knowledge, domainspecific knowledge, and general world knowledge, the latter of which can be further broken down into eight types such as arithmetic, connotation, cause-effect, and implication. These freeform MRC datasets exhibit similar characteristics in that (i) they contain a high percentage (e.g., 86.8% in C 3 ) of questions requiring knowledge gained from the accompanying document as well as at least one type of prior knowledge and (ii) regardless of language, dialogue-based MRC tasks tend to require more general world knowledge and less linguistic knowledge compared with tasks accompanied with relatively formally written texts. Specifically, compared with existing MRC datasets for Chinese (He et al., 2017; Cui et al. 2018b) , C 3 requires more general world knowledge (57.3% of questions) to arrive at the correct answer options.",
"cite_spans": [
{
"start": 190,
"end": 208,
"text": "(Lai et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 209,
"end": 227,
"text": "Sun et al., 2019a)",
"ref_id": "BIBREF56"
},
{
"start": 1026,
"end": 1043,
"text": "(He et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 1044,
"end": 1061,
"text": "Cui et al. 2018b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We implement rule-based and popular neural approaches to the MRC task and find that there is still a significant performance gap between the best-performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We find that the existence of wrong answer options that highly superficially match the given text plays a critical role in increasing the difficulty level of questions and the demand for prior knowledge. Furthermore, additionally introducing 94k training instances based on translated free-form multiple-choice datasets for English can only lead to a 4.6% improvement in accuracy, still far from closing the gap to human performance. Our hope is that C 3 can serve as a platform for researchers interested in studying how to leverage different types of prior knowledge for in-depth text comprehension and facilitate future work on crosslingual and multilingual machine reading comprehension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, MRC tasks have been designed to be text-dependent Hermann et al., 2015) : They focus on evaluating comprehension of machine readers based on a given text, typically by requiring a model to answer questions relevant to the text. This is distinguished from many question answering (QA) tasks (Fader et al., 2014; Clark et al., 2016) , in which no ground truth document supporting answers is provided with each question, making them relatively less suitable for isolating improvements to MRC. We will first discuss standard MRC datasets for English, followed by MRC/QA datasets for Chinese.",
"cite_spans": [
{
"start": 65,
"end": 86,
"text": "Hermann et al., 2015)",
"ref_id": "BIBREF28"
},
{
"start": 305,
"end": 325,
"text": "(Fader et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 326,
"end": 345,
"text": "Clark et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "English. Much of the early MRC work focuses on designing questions whose answers are spans from the given documents (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016; Rajpurkar et al., 2016; Trischler et al., 2017; Joshi et al., 2017) . As a question and its answer are usually in the same sentence, stateof-the-art methods have outperformed human performance on many such tasks. To increase task difficulty, researchers have explored a number of options including adding unanswerable (Trischler et al., 2017; Rajpurkar et al., 2018) or conversational (Choi et al., 2018; Reddy et al., 2019) questions that might require reasoning (Zhang et al., 2018a) , and designing abstractive answers (Nguyen et al., 2016; Ko\u010disk\u1ef3 et al., 2018; Dalvi et al., 2018) or (question, answer) pairs that involve cross-sentence or crossdocument content (Welbl et al., 2018; Yang et al., 2018) . In general, most questions concern the facts that are explicitly expressed in the text, making these tasks possible to measure the level of fundamental reading skills of machine readers.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Hermann et al., 2015;",
"ref_id": "BIBREF28"
},
{
"start": 139,
"end": 157,
"text": "Hill et al., 2016;",
"ref_id": "BIBREF29"
},
{
"start": 158,
"end": 178,
"text": "Bajgar et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 179,
"end": 202,
"text": "Rajpurkar et al., 2016;",
"ref_id": "BIBREF52"
},
{
"start": 203,
"end": 226,
"text": "Trischler et al., 2017;",
"ref_id": "BIBREF58"
},
{
"start": 227,
"end": 246,
"text": "Joshi et al., 2017)",
"ref_id": "BIBREF34"
},
{
"start": 497,
"end": 521,
"text": "(Trischler et al., 2017;",
"ref_id": "BIBREF58"
},
{
"start": 522,
"end": 545,
"text": "Rajpurkar et al., 2018)",
"ref_id": "BIBREF51"
},
{
"start": 564,
"end": 583,
"text": "(Choi et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 584,
"end": 603,
"text": "Reddy et al., 2019)",
"ref_id": "BIBREF53"
},
{
"start": 643,
"end": 664,
"text": "(Zhang et al., 2018a)",
"ref_id": "BIBREF68"
},
{
"start": 701,
"end": 722,
"text": "(Nguyen et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 723,
"end": 744,
"text": "Ko\u010disk\u1ef3 et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 745,
"end": 764,
"text": "Dalvi et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 846,
"end": 866,
"text": "(Welbl et al., 2018;",
"ref_id": "BIBREF62"
},
{
"start": 867,
"end": 885,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another research line has studied MRC tasks, usually in a free-form multiple-choice form, containing a significant percentage of questions that focus on the understanding of the implicitly expressed facts, events, opinions, or emotions in the given text Mostafazadeh et al., 2016; Khashabi et al., 2018; Lai et al., 2017; Sun et al., 2019a) . Therefore, these benchmarks may allow a relatively comprehensive evaluation of different reading skills and require a machine reader to integrate prior knowledge with information presented in a text. In particular, real-world language exams are ideal sources for constructing this kind of MRC dataset as they are designed with a similar goal of measuring different reading comprehension abilities of human language learners primarily based on a given text. Representative datasets in this category include RACE (Lai et al., 2017) and DREAM (Sun et al., 2019a) , both collected from English-asa-foreign-language exams designed for Chinese learners of English. C 3 M and C 3 D can be regarded as a Chinese counterpart of RACE and DREAM, respectively, and we will discuss their similarities in detail in Section 3.3.",
"cite_spans": [
{
"start": 254,
"end": 280,
"text": "Mostafazadeh et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 281,
"end": 303,
"text": "Khashabi et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 304,
"end": 321,
"text": "Lai et al., 2017;",
"ref_id": "BIBREF37"
},
{
"start": 322,
"end": 340,
"text": "Sun et al., 2019a)",
"ref_id": "BIBREF56"
},
{
"start": 854,
"end": 872,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 883,
"end": 902,
"text": "(Sun et al., 2019a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Chinese. Extractive MRC datasets for Chinese (Cui et al., 2016; Cui et al., 2018b; Cui et al., 2018a; Shao et al., 2018) have also been constructed-using web documents, news reports, books, and Wikipedia articles as source documents-and for which all answers are spans or sentences from the given documents. Zheng et al. (2019) propose a cloze-style multiple-choice MRC dataset by replacing idioms in a document with blank symbols, and the task is to predict the correct idiom from candidate idioms that are similar in meanings. The abstractive dataset DuReader (He et al., 2017) contains questions collected from query logs, free-form answers, and a small set of relevant texts retrieved from web pages per question. In contrast, C 3 is the first free-form multiple-choice Chinese MRC dataset that contains different types of questions and requires rich prior knowledge especially general world knowledge for a better understanding of the given text. Furthermore, 48.4% of problems require dialogue understanding, which has not been studied yet in existing Chinese MRC tasks.",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Cui et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 64,
"end": 82,
"text": "Cui et al., 2018b;",
"ref_id": "BIBREF14"
},
{
"start": 83,
"end": 101,
"text": "Cui et al., 2018a;",
"ref_id": "BIBREF12"
},
{
"start": 102,
"end": 120,
"text": "Shao et al., 2018)",
"ref_id": "BIBREF55"
},
{
"start": 308,
"end": 327,
"text": "Zheng et al. (2019)",
"ref_id": "BIBREF71"
},
{
"start": 562,
"end": 579,
"text": "(He et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Similarly, questions in many existing multiplechoice QA datasets for Chinese (Cheng et al., 2016; Guo et al., 2017a,b; Zhang and Zhao, 2018; Zhang et al., 2018b; Hao et al., 2019; are also free-form and collected from exams. However, most of the pre-existing QA tasks for Chinese are designed to test the acquisition and exploitation of domain-specific (e.g., history, medical, and geography) knowledge rather than general reading comprehension, and the performance of QA systems is partially dependent on the performance of information retrieval or the relevance of external resource (e.g., corpora or knowledge bases). We compare C 3 with relevant MRC/QA datasets for Chinese and English in Table 1 .",
"cite_spans": [
{
"start": 77,
"end": 97,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 98,
"end": 118,
"text": "Guo et al., 2017a,b;",
"ref_id": null
},
{
"start": 119,
"end": 140,
"text": "Zhang and Zhao, 2018;",
"ref_id": "BIBREF70"
},
{
"start": 141,
"end": 161,
"text": "Zhang et al., 2018b;",
"ref_id": "BIBREF69"
},
{
"start": 162,
"end": 179,
"text": "Hao et al., 2019;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 693,
"end": 700,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the construction of C 3 (Section 3.1). We also analyze the data (Section 3.2) and the types of prior knowledge needed for the MRC tasks (Section 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We collect the general-domain problems from Hanyu Shuiping Kaoshi (HSK) and Minzu Hanyu Kaoshi (MHK), which are designed for evaluating the Chinese listening and reading comprehension ability of second-language learners such as international students, overseas Chinese, and ethnic minorities. We include problems from both real and practice exams; all are freely accessible online for public usage. Each problem consists of a document and a series of questions. Each question is associated with several answer options, and EXACTLY ONE of them is correct. The goal is to select the correct option. According to the document type, we divide these problems into two subtasks: C 3 -Dialogue (C 3 D ), in which a dialogue serves as the document, and C 3 -Mixed (C 3 M ), in which the given non-dialogue document is of mixed genre, such as a story, a news report, a monologue, or an advertisement. We show a sample problem for each type in Tables 2 and 3, respectively. We remove duplicate problems and randomly split the data (13,369 documents and 19,577 questions in total) at the problem level, with 60% training, 20% development, and 20% test. (Cheng et al., 2016) N/A free-form multiple-choice 0.6K ARC (Clark et al., 2016) MCQA (Guo et al., 2017a) N/A free-form multiple-choice 14.4K ARC (Clark et al., 2016) MedQA (Zhang et al., 2018b) N/A free-form multiple-choice 235.2K ARC (Clark et al., 2016) GeoSQA N/A free-form multiple-choice 4.1K DD (Lally et al., 2017) Machine Reading Comprehension ",
"cite_spans": [
{
"start": 1142,
"end": 1162,
"text": "(Cheng et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1202,
"end": 1222,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 1228,
"end": 1247,
"text": "(Guo et al., 2017a)",
"ref_id": "BIBREF23"
},
{
"start": 1288,
"end": 1308,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 1315,
"end": 1336,
"text": "(Zhang et al., 2018b)",
"ref_id": "BIBREF69"
},
{
"start": 1378,
"end": 1398,
"text": "(Clark et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 1444,
"end": 1464,
"text": "(Lally et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 934,
"end": 963,
"text": "Tables 2 and 3, respectively.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Collection Methodology and Task Definitions",
"sec_num": "3.1"
},
{
"text": "We summarize the overall statistics of C 3 in Table 4 . We observe notable differences exist between C 3 M and C 3 D . For example, C 3 M , in which most documents are formally written texts, has a larger vocabulary size compared to that of C 3 D with documents in spoken language. Similar observations have been made by Sun et al. (2019a) that the vocabulary size is relatively small in English dialogue-based machine reading comprehension tasks. In addition, the average document length (180.2) in C 3 M is longer than that in C 3 D (76.3). In general, C 3 may not be suitable for evaluating the comprehension ability of machine readers on lengthy texts as the average length of document C 3 is relatively short compared to that in datasets such as DuReader (He et al., 2017) (396.0) and RACE (Lai et al., 2017 ) (321.9).",
"cite_spans": [
{
"start": 321,
"end": 339,
"text": "Sun et al. (2019a)",
"ref_id": "BIBREF56"
},
{
"start": 760,
"end": 777,
"text": "(He et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 795,
"end": 812,
"text": "(Lai et al., 2017",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "3.2"
},
{
"text": "Previous studies on Chinese machine reading comprehension focus mainly on the linguistic knowledge required (He et al., 2017; Cui et al., 2018a) . We aim instead for a more comprehensive analysis of the types of prior knowledge for answering questions. We carefully analyze a subset of questions randomly sampled from the development and test sets of C 3 and arrive at the following three kinds of prior knowledge required for answering questions. A question is labeled as matching if it exactly matches or nearly matches (without considering determiners, aspect particles, or conjunctive adverbs; Xia, 2000) a span in the given document; answering questions in this category seldom requires any prior knowledge.",
"cite_spans": [
{
"start": 108,
"end": 125,
"text": "(He et al., 2017;",
"ref_id": "BIBREF27"
},
{
"start": 126,
"end": 144,
"text": "Cui et al., 2018a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "LINGUISTIC: To answer a given question (e.g., Q 1-2 in Table 2 and Q3 in Table 3 ), we require lexical/syntactic knowledge including but not limited to: idioms, proverbs, negation, antonymy, synonymy, the possible meanings of the word, and syntactic transformations (Nassaji, 2006) . DOMAIN-SPECIFIC: This kind of world knowledge consists of, but is not limited to, facts about domain-specific concepts, their definitions and properties, and relations among these concepts (Grishman et al., 1983; Hansen, 1994) . GENERAL WORLD: It refers to the general knowledge about how the world works, sometimes called commonsense knowledge. We focus on the sort of world knowledge that an encyclopedia would assume readers know without being told (Lenat et al., 1985; Schubert, 2002) instead of the factual knowledge such as properties of famous entities. We further break down general world knowledge into eight subtypes, some of which (marked with \u2020) are similar to the categories summarized by LoBue and Yates (2011) for textual entailment recognition.",
"cite_spans": [
{
"start": 266,
"end": 281,
"text": "(Nassaji, 2006)",
"ref_id": "BIBREF45"
},
{
"start": 473,
"end": 496,
"text": "(Grishman et al., 1983;",
"ref_id": "BIBREF22"
},
{
"start": 497,
"end": 510,
"text": "Hansen, 1994)",
"ref_id": "BIBREF25"
},
{
"start": 736,
"end": 756,
"text": "(Lenat et al., 1985;",
"ref_id": "BIBREF39"
},
{
"start": 757,
"end": 772,
"text": "Schubert, 2002)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 55,
"end": 80,
"text": "Table 2 and Q3 in Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "\u2022 Arithmetic \u2020 : This includes numerical computation and analysis (e.g., comparison and unit conversion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "\u2022 Connotation: Answering questions requires knowledge about implicit and implied sentiment towards something or somebody, emotions, and tone (Edmonds and Hirst, 2002; In 1928, recommended by Hsu Chih-Mo, Hu Shih, who was the president of the previous National University of China, employed Shen Ts'ung-wen as a lecturer of the university in charge of teaching the optional course of modern literature. At that time, Shen already made himself conspicuous in the literary world and was a little famous in society. For this sake, even before the beginning of class, the classroom was crowded with students. Upon the arrival of class, Shen went into the classroom. Seeing a dense crowd of students sitting beneath the platform, Shen was suddenly startled and his mind went blank. He was even unable to utter the first sentence he had rehearsed repeatedly.",
"cite_spans": [
{
"start": 141,
"end": 166,
"text": "(Edmonds and Hirst, 2002;",
"ref_id": "BIBREF18"
},
{
"start": 167,
"end": 167,
"text": "",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "He stood there motionlessly, extremely embarrassed. He wrung his hands without knowing where to put them. Before class, he believed that he had a ready plan to meet the situation so he did not bring his teaching plan and textbook. For up to 10 minutes, the classroom was in perfect silence. All the students were curiously waiting for the new teacher to open his mouth. Breathing deeply, he gradually calmed down. Thereupon, the materials he had previously prepared gathered in his mind for the second time. Then he began his lecture. Nevertheless, since he was still nervous, it took him less than 15 minutes to finish the teaching contents he had planned to complete in an hour.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "What should he do next? He was again caught in embarrassment. He had no choice but to pick up a piece of chalk before writing several words on the blackboard: This is the first time I have given a lecture. In the presence of a crowd of people, I feel terrified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "Immediately, a peal of friendly laughter filled the classroom. Presently, a round of encouraging applause was given to him. Hearing this episode, Hu heaped praise upon Shen, thinking that he was very successful. Because of this experience, Shen always reminded himself of not being nervous in his class for years afterwards. Gradually, he began to give his lecture at leisure in class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Categories of Prior Knowledge",
"sec_num": "3.3"
},
{
"text": "Q1 In paragraph 2, ''a dense crowd'' refers to A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q1",
"sec_num": null
},
{
"text": "A. the light in the classroom was dim.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q1",
"sec_num": null
},
{
"text": "B. the number of students attending his lecture was large. \u22c6 C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "C. the room was noisy. D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "D. the students were active in voicing their opinions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "Q2 Shen did not bring the textbook because he felt that A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q2",
"sec_num": null
},
{
"text": "A. the teaching contents were not many.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q2",
"sec_num": null
},
{
"text": "B. his preparation was sufficient. \u22c6 C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "C. his mental pressure could be reduced in this way. D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "D. the textbook was likely to restrict his ability to give a lecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "Q3 Seeing the sentence written by Shen, the students A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q3",
"sec_num": null
},
{
"text": "A. hurriedly consoled him.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q3",
"sec_num": null
},
{
"text": "B. blamed him in mind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.",
"sec_num": null
},
{
"text": "C. were greatly encouraged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C.",
"sec_num": null
},
{
"text": "D. expressed their understanding and encouraged him. \u22c6 Q4 Q4 The passage above is mainly about A.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "A. the development of the Chinese educational system. B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "B. how to make self-adjustment if one is nervous. C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "C. the situation where Shen gave his lecture for the first time. \u22c6 D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "D. how Shen turned into a teacher from a writer. \u2022 Cause-effect \u2020 : The occurrence of event A causes the occurrence of event B. We usually need this kind of knowledge to solve ''why'' questions when a causal explanation is not explicitly expressed in the given document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "\u2022 Implication: This category refers to the main points, suggestions, opinions, facts, or event predictions that are not expressed explic-itly in the text, which cannot be reached by paraphrasing sentences using linguistic knowledge. For example, Q4 in Table 2 and Q2 in Table 3 belong to this category.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 270,
"end": 277,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "\u2022 Part-whole: We require knowledge that object A is a part of object B. Relations such as member-of, stuff-of, and component-of between two objects also fall into this category (Winston et al., 1987; Miller, 1998) . For example, we require implication mentioned above as well as part-whole knowledge (i.e., ''teacher'' is a kind of job) to summarize the main topic of the following \u2022 Scenario: We require knowledge about observable behaviors or activities of humans and their corresponding temporal/locational information. We also need knowledge about personal information (e.g., profession, education level, personality, and mental or physical status) of the involved participant and relations between the involved participants, implicitly indicated by the behaviors or activities described in texts. For example, we put Q3 in Table 2 in this category as ''friendly laughter'' may express ''understanding''. Q1 in Table 3 about the relation between the two speakers also belongs to this category.",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "(Winston et al., 1987;",
"ref_id": "BIBREF63"
},
{
"start": 200,
"end": 213,
"text": "Miller, 1998)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 828,
"end": 835,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 915,
"end": 922,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "\u2022 Precondition \u2020 : If event A had not happened, event B would not have happened (Ikuta et al., 2014; O'Gorman et al., 2016) . Event A is usually mentioned in either the question or the correct answer option(s). For example, ''I went to a supermarket'' is a necessary precondition for ''I was shopping at a supermarket when my friend visited me''.",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Ikuta et al., 2014;",
"ref_id": "BIBREF32"
},
{
"start": 101,
"end": 123,
"text": "O'Gorman et al., 2016)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "\u2022 Other: Knowledge that belongs to none of the above subcategories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
{
"text": "Two annotators (authors of this paper) annotate the type(s) of required knowledge for each question over 600 instances. To explore the differences and similarities in the required knowledge types between C 3 and existing free-form MRC datasets, following the same annotation schema, we also annotate instances from the largest Chinese freeform abstractive MRC dataset DuReader (He et al., 2017) and free-form multiple-choice English MRC datasets RACE (Lai et al., 2017) and DREAM (Sun et al., 2019a) , which can be regarded as the English counterpart of C 3 M and C 3 D , respectively. We also divide questions into one of three types-single, multiple, or independentbased on the minimum number of sentences in the document that explicitly or implicitly support the correct answer option. We regard a question as independent if it is context-independent, which usually requires prior vocabulary or domain-specific knowledge. The Cohen's kappa coefficient is 0.62.",
"cite_spans": [
{
"start": 377,
"end": 394,
"text": "(He et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 451,
"end": 469,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 480,
"end": 499,
"text": "(Sun et al., 2019a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D.",
"sec_num": null
},
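As a reference for the agreement figure above, here is a minimal sketch (ours, not from the paper) of Cohen's kappa computed from the two annotators' labels for the same instances; the function name and input format are assumptions.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Observed agreement between two annotators, corrected for the
    agreement expected by chance from each annotator's label marginals."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)  # chance
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1.0 - p_e)

# e.g., cohens_kappa(["single", "multiple"], ["single", "independent"])
```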
{
"text": "M vs. C 3 D As shown in Table 5 , compared with the dialogue-based task (C 3 D ), C 3 M with nondialogue texts as documents requires more linguistic knowledge (49.0% vs. 30.7%) yet less general world knowledge (50.7% vs. 64.0%). As many as 24.3% questions in C 3 D need scenario knowledge, perhaps due to the fact that speakers in a dialogue (especially face-to-face) may not explicitly mention information that they assume others already know such as personal information, the relationship between the speakers, and temporal and location information. Interestingly, we observe a similar phenomenon when we compare the English datasets DREAM (dialogue-based) and RACE. Therefore, it is likely that dialogue-based freeform tasks can serve as ideal platforms for studying how to improve language understanding with general world knowledge regardless of language. C 3 vs. its English counterparts We are also interested in whether answering a specific type of question may require similar types of prior knowledge across languages. For example, C 3 D and its English counterpart DREAM (Sun et al., 2019a) have similar problem formats, document",
"cite_spans": [
{
"start": 1082,
"end": 1101,
"text": "(Sun et al., 2019a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "C 3",
"sec_num": null
},
{
"text": "Min./Avg./Max. # of options per question 2 / 3.7 / 4 3 / 3.8 / 4 2 / 3.8 / 4 # of correct options per question 1 1 1 Min./Avg./Max. # of questions per document 1 / 1.9 / 6 1 / 1.2 / 6 1 / 1.5 / 6 Avg./Max. option length (in characters) 6.5 / 45 4.4 / 31 5.5 / 45 Avg./Max. question length (in characters) 13.5 / 57 10.9 / 34 12. Table 5 : Distribution (%) of types of required prior knowledge based on a subset of test and development sets of C 3 , Chinese freeform abstractive dataset DuReader (He et al., 2017) , and English free-form multiple-choice datasets RACE (Lai et al., 2017) and DREAM (Sun et al., 2019a) . Answering a question may require more than one type of prior knowledge.",
"cite_spans": [
{
"start": 495,
"end": 512,
"text": "(He et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 567,
"end": 585,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 596,
"end": 615,
"text": "(Sun et al., 2019a)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [
{
"start": 329,
"end": 336,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "C 3",
"sec_num": null
},
{
"text": "= C 3 M \u222a C 3 D . C 3 M C 3 D C 3 RACE DREAM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C 3",
"sec_num": null
},
{
"text": "types, and data collection methodologies (from Chinese-as-a-second-language and English-as-aforeign-language exams, respectively). We notice that the knowledge type distributions of the two datasets are indeed very similar. Therefore, C 3 may facilitate future cross-lingual MRC studies. C 3 vs. DuReader The 150 annotated instances of DuReader also exhibit properties similar to those identified in studies of abstractive MRC for English (Nguyen et al., 2016; Ko\u010disk\u1ef3 et al., 2018; Reddy et al., 2019) . Namely, turkers asked to write answers in their own words tend instead to write an extractive summary by copying short textual snippets or whole sentences in the given documents; this may explain why models designed for extractive MRC tasks achieve reasonable performance on abstractive tasks. We notice that questions in DuReader seldom require general world knowledge, which is possibly because users seldom ask questions about facts obvious to most people. On the other hand, as many as 16.7% of (question, answer) pairs in DuReader cannot be supported by the given text (vs. 1.3% in C 3 ); in most cases, they require prior knowledge about a particular domain (e.g., ''On which website can I watch The Glory of Tang Dynasty?'' and ''How to start a clothing store?''). In comparison, a larger fraction of C 3 requires linguistic knowledge or general world knowledge.",
"cite_spans": [
{
"start": 439,
"end": 460,
"text": "(Nguyen et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 461,
"end": 482,
"text": "Ko\u010disk\u1ef3 et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 483,
"end": 502,
"text": "Reddy et al., 2019)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C 3",
"sec_num": null
},
{
"text": "We implement a classical rule-based method and recent state-of-the-art neural models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches",
"sec_num": "4"
},
{
"text": "We implement Distance-based Sliding Window , a rule-based method that chooses the answer option by taking into account (1) lexical similarity between a statement (i.e., a question and an answer option) and the given document with a fixed window size and (2) the minimum number of tokens between occurrences of the question and occurrences of an answer option in the document. This method assumes that a statement is more likely to be correct if there is a shorter distance between tokens within a statement, and more informative tokens in the statement appear in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distance-Based Sliding Window",
"sec_num": "4.1"
},
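A minimal sketch of such a distance-based sliding-window scorer (ours, not the paper's code), treating each character as a token as in Section 5.1; the window size, the log inverse-count weighting, and the way the two scores are combined are illustrative assumptions.

```python
import math
from collections import Counter

def sliding_window_score(document, statement, window_size=15):
    """Max over all windows of the summed inverse-count weights of
    statement tokens appearing in the window; rarer tokens weigh more."""
    counts = Counter(document)
    weight = {t: math.log(1.0 + 1.0 / counts[t]) for t in counts}
    targets = set(statement)
    best = 0.0
    for start in range(max(1, len(document) - window_size + 1)):
        window = document[start:start + window_size]
        best = max(best, sum(weight[t] for t in window if t in targets))
    return best

def min_distance_score(document, question, option):
    """Reward a small token distance between question-token and
    option-token occurrences in the document."""
    q_pos = [i for i, t in enumerate(document) if t in set(question)]
    o_pos = [i for i, t in enumerate(document) if t in set(option)]
    if not q_pos or not o_pos:
        return 0.0
    d = min(abs(i - j) for i in q_pos for j in o_pos)
    return 1.0 - d / max(1, len(document) - 1)

def choose_option(document, question, options):
    doc = list(document)  # character tokens, as in Section 5.1
    scores = [sliding_window_score(doc, list(question) + list(opt))
              + min_distance_score(doc, list(question), list(opt))
              for opt in options]
    return max(range(len(options)), key=lambda i: scores[i])
```

The inverse-count weighting makes rare document tokens count more, so matching an informative content word contributes more to an option's score than matching a frequent function word.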
{
"text": "We utilize Co-Matching , a Bi-LSTM-based model for multiple-choice MRC tasks for English. It explicitly treats a question and one of its associated answer options as two sequences and jointly models whether or not the given document matches them. We modify the pre-processing step and adapt this model to MRC tasks for Chinese (Section 5.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Matching",
"sec_num": "4.2"
},
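A simplified PyTorch sketch of the co-matching idea: the passage is matched against the question and against an answer option via attention, and the concatenated matching states are aggregated into one score per option. The original model additionally uses a gated (linear + ReLU) matching function and hierarchical aggregation, so the plain attention and layer sizes here are assumptions.

```python
import torch
import torch.nn as nn

class CoMatching(nn.Module):
    """Simplified co-matching: match the passage against the question and
    an answer option, then aggregate into a single logit for that option."""
    def __init__(self, vocab_size, emb_dim=300, hidden=150):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.enc = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.agg = nn.LSTM(8 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def match(self, hp, hx):
        # Attend from every passage position to the sequence hx, then
        # combine the attended summary with the passage states.
        att = torch.softmax(hp @ hx.transpose(1, 2), dim=-1) @ hx
        return torch.cat([att - hp, att * hp], dim=-1)

    def forward(self, passage, question, option):  # (B, L) token-id tensors
        hp, _ = self.enc(self.emb(passage))
        hq, _ = self.enc(self.emb(question))
        ho, _ = self.enc(self.emb(option))
        m = torch.cat([self.match(hp, hq), self.match(hp, ho)], dim=-1)
        u, _ = self.agg(m)
        return self.out(u.max(dim=1).values)  # one logit per (passage, option)
```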
{
"text": "We also apply the framework of fine-tuning a pre-trained language model on machine reading comprehension tasks (Radford et al., 2018) . We consider the following four pre-trained language models for Chinese: Chinese BERT-Base (denoted as BERT) , Chinese ERNIE-Base (denoted as ERNIE) , and Chinese BERT-Base with whole word masking during pre-training (denoted as BERT-wwm) and its enhanced version pre-trained over larger corpora (denoted as BERT-wwm-ext). These models have the same number of layers, hidden units, and attention heads. Given document d, question q, and answer option o i , we construct the input sequence by concatenating [CLS] , tokens in d, [SEP] , tokens in q, [SEP] , tokens in o i , and [SEP] , where [CLS] and [SEP] are the classifier token and sentence separator in a pre-trained language model, respectively. We add an embedding vector t 1 to each token before the first [SEP] (inclusive) and an embedding vector t 2 to every other token, where t 1 and t 2 are learned during language model pre-training for discriminating sequences. We denote the final hidden state for the first token in the input sequence as S i \u2208 R 1\u00d7H , where H is the hidden size. We introduce a classification layer W \u2208 R 1\u00d7H and obtain the unnormalized log probability P i \u2208 R of o i being correct by P i = S i W T . We obtain the final prediction for q by applying a softmax layer over the unnormalized log probabilities of all options associated with q.",
"cite_spans": [
{
"start": 111,
"end": 133,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF50"
},
{
"start": 641,
"end": 646,
"text": "[CLS]",
"ref_id": null
},
{
"start": 662,
"end": 667,
"text": "[SEP]",
"ref_id": null
},
{
"start": 683,
"end": 688,
"text": "[SEP]",
"ref_id": null
},
{
"start": 711,
"end": 716,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning Pre-Trained Language Models",
"sec_num": "4.3"
},
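The input construction and option scoring described above can be sketched with the HuggingFace transformers API as follows; bert-base-chinese stands in for the four Chinese checkpoints compared in the paper, the helper names are ours, and truncation (Section 5.1) is omitted for brevity.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
classifier = torch.nn.Linear(bert.config.hidden_size, 1)  # the layer W

def option_logit(d, q, o):
    # [CLS] d [SEP] q [SEP] o [SEP]; assumes the result fits in 512 tokens
    # (see Section 5.1 for the truncation rule).
    ids = ([tokenizer.cls_token_id]
           + tokenizer.encode(d, add_special_tokens=False) + [tokenizer.sep_token_id]
           + tokenizer.encode(q, add_special_tokens=False) + [tokenizer.sep_token_id]
           + tokenizer.encode(o, add_special_tokens=False) + [tokenizer.sep_token_id])
    # t1 up to and including the first [SEP], t2 for every other token.
    first_sep = ids.index(tokenizer.sep_token_id)
    type_ids = [0] * (first_sep + 1) + [1] * (len(ids) - first_sep - 1)
    out = bert(input_ids=torch.tensor([ids]), token_type_ids=torch.tensor([type_ids]))
    s_i = out.last_hidden_state[:, 0]   # final hidden state of [CLS], i.e., S_i
    return classifier(s_i).squeeze(-1)  # unnormalized log probability P_i

def predict(d, q, options):
    with torch.no_grad():
        logits = torch.cat([option_logit(d, q, o) for o in options])
        return torch.softmax(logits, dim=0).argmax().item()
```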
{
"text": "We use C 3 M and C 3 D together to train a neural model and perform testing on them separately, following the default setting on RACE that also contains two subsets (Lai et al., 2017) . We run every experiment five times with different random seeds and report the best development set performance and its corresponding test set performance.",
"cite_spans": [
{
"start": 165,
"end": 183,
"text": "(Lai et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Distance-Based Sliding Window. We simply treat each character as a token. We do not use Chinese word segmentation as it results in drops in performance based on our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Co-Matching. We replace the English tokenizer with a Chinese word segmenter in HanLP. 1 We use the 300-dimensional Chinese word embeddings released by Li et al. (2018) .",
"cite_spans": [
{
"start": 86,
"end": 87,
"text": "1",
"ref_id": null
},
{
"start": 151,
"end": 167,
"text": "Li et al. (2018)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Fine-Tuning Pre-Trained Language Models. We set the learning rate, batch size, and maximal sequence length to 2 \u00d7 10 \u22125 , 24, and 512, respectively. We truncate the longest sequence among d, q, and o i (Section 4.3) when an input sequence exceeds the length limit 512. For all experiments, we fine-tune a model on C 3 for eight epochs. We keep the default values for the other hyperparameters .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
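A minimal sketch of the truncation rule above; the text does not say whether tokens are removed from the head or the tail of the longest sequence, so trimming from the tail is an assumption.

```python
def truncate_longest(d_ids, q_ids, o_ids, max_len=512, n_special=4):
    """Trim one token at a time from the longest of (d, q, o_i) until the
    concatenation, plus [CLS] and three [SEP]s, fits within max_len."""
    seqs = [list(d_ids), list(q_ids), list(o_ids)]
    while sum(len(s) for s in seqs) + n_special > max_len:
        max(seqs, key=len).pop()  # drop the trailing token of the longest
    return seqs
```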
{
"text": "As shown in Table 6 , methods based on pretrained language models (BERT, ERNIE, BERTwwm, and BERT-wwm-ext) outperform the Distance-based Sliding Window approach and Bi-LSTM-based Co-Matching by a large margin. BERT-wwm-ext performs better on C 3 compared where S(x, y) is a normalized similarity score between 0 and 1 that measures the edit distance to change x into a substring of y using single-character edits (insertions, deletions or substitutions). Particularly, if x is a substring of y, S(x, y) = 1; if x shares no character with y, S(x, y) = 0. By definition, S(w i , d) in Equation (1) measures the lexical similarity between distractor w i and d; S(c, d) measures the similarity between the correct answer option c and d.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Results",
"sec_num": "5.2"
},
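The similarity S(x, y) and the plausibility gamma_i can be sketched as follows: the edit distance from x to the closest substring of y is an "infix" alignment in which skipping a prefix and a suffix of y is free. The dynamic program below is our reading of the definition, not the paper's code.

```python
def substring_edit_distance(x, y):
    """Minimum single-character edits (insert/delete/substitute) turning x
    into some substring of y; skipping a prefix/suffix of y costs nothing."""
    prev = [0] * (len(y) + 1)          # empty x matches at any position
    for i, cx in enumerate(x, 1):
        cur = [i]                      # deleting all of x[:i]
        for j, cy in enumerate(y, 1):
            cur.append(min(prev[j] + 1,                 # delete cx
                           cur[j - 1] + 1,              # insert cy
                           prev[j - 1] + (cx != cy)))   # substitute or match
        prev = cur
    return min(prev)                   # best end position within y

def S(x, y):
    """Normalized to [0, 1]: 1 iff x is a substring of y, 0 when x and y
    share no character."""
    return 1.0 - substring_edit_distance(x, y) / max(1, len(x))

def plausibility(distractor, correct, document):
    # gamma_i = S(w_i, d) - S(c, d), in [-1, 1]  (Equation (1))
    return S(distractor, document) - S(correct, document)
```

With this definition, gamma_i > 0 exactly when the distractor matches the document more closely than the correct answer option does, which is how questions are grouped in Figure 1(a).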
{
"text": "To quantitatively investigate the impact of the existence of plausible distractors on model performance, we group questions from the development set of C 3 by the largest distractor plausibility (i.e., max i \u03b3 i ), in the range of [\u22121, 1], for each question and compare the performance of Co-Matching, BERT, and BERT-wwm-ext in different groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Results",
"sec_num": "5.2"
},
{
"text": "As shown in Figure 1(a) , the largest distractor plausibility may serve as an indicator of the difficulty level of questions presented to the investigated models. When the largest distractor plausibility is smaller than \u22120.8, all three models exhibit strong performance (\u2265 90%). As the largest distractor plausibility increases, the performance of all models consistently drops. All models perform worse than average on questions having at least one high-plausible distractor (e.g., distractor plausibility > 0). Compared with BERT, the gain of the best-performing model (i.e., BERT-wwmext) mainly comes from its superior performance on these ''difficult'' questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 23,
"text": "Figure 1(a)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Baseline Results",
"sec_num": "5.2"
},
{
"text": "Further, we find that distractor plausibility is strongly correlated with the need for prior knowledge when answering questions in C 3 based Figure 3 : Performance of BERT-wwm-ext trained on 1/8, 2/8, . . . , 8/8 of C 3 training data, and C 3 training data plus 1/8, 2/8, . . . , 8/8 of machine translated (MT) RACE and DREAM training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 149,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Results",
"sec_num": "5.2"
},
{
"text": "on the annotated instances, as shown in Figure 1(b) . For further analysis, we group annotated instances by different max i S(w i , d) and S(c, d) (in Equation (1)) and separately compare their need for linguistic knowledge and general world knowledge. As shown in Figure 2 , general world knowledge is crucial for question answering when the correct answer option is not mentioned explicitly in the document (i.e., S(c, d) is relatively small). In contrast, we tend to require linguistic knowledge when both the correct answer option and the most confusing distractor (i.e., the one with the largest distractor plausibility) are very similar to the given document.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 51,
"text": "Figure 1(b)",
"ref_id": "FIGREF0"
},
{
"start": 265,
"end": 273,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Baseline Results",
"sec_num": "5.2"
},
{
"text": "To extrapolate to what extent we can improve the performance of current models with more training data, we plot the development set performance of BERT-wwm-ext trained on different portions of the training data of C 3 . As shown in Figure 3 , the accuracy grows roughly linearly with the logarithm of the size of training data, and we observe a substantial gap between human performance and the expected BERT-wwm-ext performance, even assuming that 10 5 training instances are available, leaving much room for improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 240,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions on Data Augmentation",
"sec_num": "5.4"
},
{
"text": "Furthermore, as the knowledge type distributions of C 3 and its English counterparts RACE and DREAM are highly similar (Section 3.3), we translate RACE and DREAM from English to Chinese with Google Translate and plot the performance of BERT-wwm-ext trained on C 3 plus different numbers of translated instances. The learning curve is also roughly linear with the logarithm of the number of training instances from translated RACE and DREAM, but with a lower growth rate. Even augmenting the training data with all 94k translated instances only leads to a 4.6% improvement (from 67.8% to 72.4%) in accuracy on the development set of C 3 . From another perspective, BERT-wwm-ext trained on all translated instances without using any data in C 3 only achieves an accuracy of 67.1% on the development set of C 3 , slightly worse than 67.8% achieved when only the training data in C 3 is used, whose size is roughly 1/8 of that of the translated instances. These observations suggest a need to better leverage large-scale English resources from similar MRC tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions on Data Augmentation",
"sec_num": "5.4"
},
{
"text": "Besides augmenting the training data with translated instances, we also attempt to fine-tune a pre-trained multilingual BERT-Base released by on the training data of C 3 and all original training instances in English from RACE and DREAM. However, the accuracy on the development set of C 3 is 63.4%, which is even lower than the performance (65.7% in Table 6 ) of fine-tuning Chinese BERT-Base only on C 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 358,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions on Data Augmentation",
"sec_num": "5.4"
},
{
"text": "We present the first free-form multiple-choice Chinese machine reading comprehension dataset (C 3 ), collected from real-world language exams, requiring linguistic, domain-specific, or general world knowledge to answer questions based on the given written or orally oriented texts. We study the prior knowledge needed in this challenging machine reading comprehension dataset and carefully investigate the impacts of distractor plausibility and data augmentation (based on similar resources for English) on the performance of state-of-the-art neural models. Experimental results demonstrate the there is still a significant performance gap between the best-performing model (68.5%) and human readers (96.0%) and a need for better ways for exploiting rich resources in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/hankcs/HanLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the editors and anonymous reviewers for their helpful and insightful feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "BERT BERT-wwm-ext Human C 3 Table 7 : Performance comparison in accuracy (%) by categories based on a subset of development sets of C 3 ( * : \u2264 10 annotated instances fall into that category).with other three pre-trained language models, though there still exists a large gap (27.5%) between this method and human performance (96.0%). We also report the performance of Co-Matching, BERT, BERT-wwm-ext, and human on different question categories based on the annotated development sets (Table 7) , which consist of 150 questions in C 3 M and 150 questions in C 3 D . These models generally perform worse on questions that require prior knowledge or reasoning over multiple sentences than questions that can be answered by surface matching or only need the information from a single sentence (Section 3.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 7",
"ref_id": null
},
{
"start": 485,
"end": 494,
"text": "(Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Co-Matching",
"sec_num": null
},
{
"text": "We look into incorrect predictions of Co-Matching, BERT, and BERT-wwm-ext on the development set. We observe that the existence of plausible distractors may play a critical role in raising the difficulty level of questions for models. We regard a distractor (i.e., wrong answer option) as plausible if it, compared with the correct answer option, is more superficially similar to the given document. Two typical cases include (1) the information in the distractor is accurate based on the document but does not (fully) answer the question, and (2) the distractor distorts, oversimplifies, exaggerates, or misinterprets the information in the document.Given document d, the correct answer option c, and wrong answer options {w 1 , w 2 , . . . , w i , . . . , w n } associated with a certain question, we measure the distractor plausibility of distractor w i by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions on Distractor Plausibility",
"sec_num": "5.3"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Distance-Based Sliding Window",
"authors": [
{
"first": "",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Distance-Based Sliding Window (Richardson et al., 2013) 47.9 45.8 39.6 40.4 43.8 43.1 Co-Matching (Wang et al., 2018) 47.0 48.2 55.5 51.4 51.0 49.8",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Performance of baseline in accuracy (%) on the C 3 dataset",
"authors": [],
"year": null,
"venue": "",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Table 6: Performance of baseline in accuracy (%) on the C 3 dataset ( * : based on the annotated subset of test and development sets of C 3 ).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Background knowledge and reading comprehension",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Adams",
"suffix": ""
},
{
"first": "Bertram",
"middle": [],
"last": "Bruce",
"suffix": ""
}
],
"year": 1982,
"venue": "Reader Meets Author: Bridging the Gap",
"volume": "13",
"issue": "",
"pages": "2--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Adams and Bertram Bruce. 1982. Back- ground knowledge and reading comprehension. Reader Meets Author: Bridging the Gap, 13:2-25.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Embracing data abundance: Booktest dataset for reading comprehension",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bajgar",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Kadlec",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. 2016. Embracing data abundance: Booktest data- set for reading comprehension. arXiv preprint, cs.CL/1610.00956v1.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Takingupthegaokao challenge: An information retrieval approach",
"authors": [
{
"first": "Gong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Weixi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ziwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianghui",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IJCAI",
"volume": "",
"issue": "",
"pages": "2479--2485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gong Cheng, Weixi Zhu, Ziwei Wang, Jianghui Chen, andYuzhongQu. 2016. Takingupthegaokao challenge: An information retrieval approach. In Proceedings of the IJCAI, pages 2479-2485. New York, NY.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "QuAC: Question answering in context",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "2174--2184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. QuAC: Question answering in context. In Proceedings of the EMNLP, pages 2174-2184. Brussels, Belgium.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining retrieval, statistics, and inference to answer elementary science questions",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2580--2586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no ques- tions. In Proceedings of the NAACL-HLT, pages 2924-2936. Minneapolis, MN, Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter D. Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In Proceedings of the AAAI, pages 2580-2586. Phoenix, AZ.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pre-training with whole word masking for Chinese BERT",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. arXiv preprint, cs.CL/1906. 08101v1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Dataset for the first evaluation on chinese machine reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the LREC",
"volume": "",
"issue": "",
"pages": "2721--2725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Ting Liu, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2018a. Dataset for the first evaluation on chinese machine reading comprehension. In Proceedings of the LREC, pages 2721-2725. Miyazaki, Japan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Consensus attention-based neural networks for Chinese reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the COLING",
"volume": "",
"issue": "",
"pages": "1777--1786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. 2016. Consensus attention-based neural networks for Chinese reading comprehension. In Proceedings of the COLING, pages 1777-1786. Osaka, Japan.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A span-extraction dataset for Chinese machine reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "5882--5888",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Ting Liu, Li Xiao, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, and Guoping Hu. 2018b. A span-extraction dataset for Chinese machine reading comprehension. In Proceedings of the EMNLP, pages 5882-5888.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Tracking state changes in procedural text: A challenge dataset and models for process paragraph comprehension",
"authors": [
{
"first": "Bhavana",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "Lifu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Niket",
"middle": [],
"last": "Tandon",
"suffix": ""
},
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1595--1604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen tau Yih, and Peter Clark. 2018. Tracking state changes in procedural text: A challenge dataset and models for process paragraph compre- hension. In Proceedings of the NAACL-HLT, pages 1595-1604. New Orleans, LA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proeedings of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proeedings of the NAACL- HLT, pages 4171-4186. Minneapolis, MN.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Nearsynonymy and lexical choice",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Edmonds",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "2",
"pages": "105--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Edmonds and Graeme Hirst. 2002. Near- synonymy and lexical choice. Computational Linguistics, 28(2):105-144.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Open question answering over curated and extracted knowledge bases",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the SIGKDD",
"volume": "",
"issue": "",
"pages": "1156--1165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Pro- ceedings of the SIGKDD, pages 1156-1165. New York City, NY.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Connotation lexicon: A dash of sentiment beneath the surface meaning",
"authors": [
{
"first": "Song",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jun Seok",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Polina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "1774--1784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Song Feng, Jun Seok Kang, Polina Kuznetsova, and Yejin Choi. 2013. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the ACL, pages1774-1784.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Isolating domain dependencies in natural language interfaces",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the ANLP",
"volume": "",
"issue": "",
"pages": "46--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Grishman, Lynette Hirschman, and Carol Friedman. 1983. Isolating domain dependencies in natural language interfaces. In Proceedings of the ANLP, pages 46-53. Santa Monica, CA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "IJCNLP-2017 Task 5: Multi-choice question answering in examinations",
"authors": [
{
"first": "Shangmin",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Cao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhuoyu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "34--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangmin Guo, Kang Liu, Shizhu He, Cao Liu, Jun Zhao, and Zhuoyu Wei. 2017a. IJCNLP- 2017 Task 5: Multi-choice question answering in examinations. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 34-40. Taipei, Taiwan.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Which is the effective way for Gaokao: Information retrieval or neural networks?",
"authors": [
{
"first": "Shangmin",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xiangrong",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Shizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the EACL",
"volume": "",
"issue": "",
"pages": "111--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangmin Guo, Xiangrong Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2017b. Which is the effective way for Gaokao: Information retrieval or neural networks? In Proceedings of the EACL, pages 111-120. Valencia, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reasoning with a domain model",
"authors": [
{
"first": "",
"middle": [],
"last": "Steffen Leo Hansen",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the NODALIDA",
"volume": "",
"issue": "",
"pages": "111--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steffen Leo Hansen. 1994. Reasoning with a do- main model. In Proceedings of the NODALIDA, pages 111-121. Stockholm, Sweden.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exploiting sentence embedding for medical question answering",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Xien",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Lv",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI",
"volume": "",
"issue": "",
"pages": "938--945",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Hao, Xien Liu, Ji Wu, and Ping Lv. 2019. Exploiting sentence embedding for medical question answering. In Proceedings of the AAAI, pages 938-945. Honolulu, HI.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "DuReader: a Chinese machine reading comprehension dataset from real-world applications",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qiaoqiao",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tian",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the MRQA",
"volume": "",
"issue": "",
"pages": "37--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao, Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu, Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng Wang. 2017. DuReader: a Chinese machine reading comprehension dataset from real-world applications. In Proceedings of the MRQA, pages 37-46. Melbourne, Australia.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Teaching machines to read andcomprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the NIPS",
"volume": "",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read andcomprehend. In Proceedings of the NIPS, pages 1693-1701. Montreal, Canada.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The goldilocks principle: Reading children's books with explicit memory representations",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the ICLR. San Juan, Puerto Rico.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "GeoSQA: A benchmark for scenario-based question answering in the geography domain at high school level",
"authors": [
{
"first": "Zixian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yulin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuang",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Gong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "5865--5870",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zixian Huang, Yulin Shen, Xiao Li, Yuang Wei, Gong Cheng, Lin Zhou, Xinyu Dai, and Yuzhong Qu. 2019. GeoSQA: A benchmark for scenario-based question answering in the geogra- phy domain at high school level. In Proceedings of the EMNLP-IJCNLP, pages 5865-5870.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Challenges of adding causation to richer event descriptions",
"authors": [
{
"first": "Rei",
"middle": [],
"last": "Ikuta",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Styler",
"suffix": ""
},
{
"first": "Mariah",
"middle": [],
"last": "Hamang",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "O'Gorman",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation",
"volume": "",
"issue": "",
"pages": "12--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rei Ikuta, Will Styler, Mariah Hamang, Tim O'Gorman, and Martha Palmer. 2014. Challenges of adding causation to richer event descriptions. In Proceedings of the Second Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 12-20.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for read- ing comprehension. arXiv preprint, cs.CL/1705. 03551v2.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Looking beyond the surface: A challenge set for reading comprehension over multiple sentences",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "252--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sen- tences. In Proceedings of the NAACL-HLT, pages 252-262. New Orleans, LA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The NarrativeQA reading comprehension challenge",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u1ef3",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Karl",
"middle": [
"Moritz"
],
"last": "Hermann",
"suffix": ""
},
{
"first": "G\u00e1abor",
"middle": [],
"last": "Melis",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "317--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u1ef3, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G\u00e1abor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association of Computational Linguistics, 6:317-328.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "RACE: Largescale reading comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large- scale reading comprehension dataset from examinations. In Proceedings of the EMNLP, pages 785-794. Copenhagen, Denmark.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "WatsonPaths: Scenario-based question answering and inference over unstructured information",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Lally",
"suffix": ""
},
{
"first": "Sugato",
"middle": [],
"last": "Bagchi",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"A"
],
"last": "Barborak",
"suffix": ""
},
{
"first": "David",
"middle": [
"W"
],
"last": "Buchanan",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "Ferrucci",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Glass",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Kalyanpur",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"T"
],
"last": "Mueller",
"suffix": ""
},
{
"first": "J",
"middle": [
"William"
],
"last": "Murdock",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "John",
"middle": [
"M"
],
"last": "Prager",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "38",
"issue": "",
"pages": "59--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Lally, Sugato Bagchi, Michael A. Barborak, David W. Buchanan, Jennifer Chu-Carroll, David A. Ferrucci, Michael R. Glass, Aditya Kalyanpur, Erik T. Mueller, J. William Murdock, Siddharth Patwardhan, and John M. Prager. 2017. WatsonPaths: Scenario-based question answering and inference over unstructured information. AI Magazine, 38(2):59-76.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks",
"authors": [
{
"first": "Douglas",
"middle": [
"B"
],
"last": "Lenat",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Shepherd",
"suffix": ""
}
],
"year": 1985,
"venue": "AI Magazine",
"volume": "6",
"issue": "4",
"pages": "65--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas B. Lenat, Mayank Prakash, and Mary Shepherd. 1985. CYC: Using common sense knowledge to overcome brittleness and knowl- edge acquisition bottlenecks. AI Magazine, 6(4):65-65.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Dataset and neural recurrent sequence labeling model for open-domain factoid question answering",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhengyan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xuguang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. arXiv preprint, cs.CL/1607.06275v2.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Analogical reasoning on chinese morphological and semantic relations",
"authors": [
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Renfen",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Wensi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaoyong",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "138--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on chinese morphological and semantic relations. In Proceedings of the ACL, pages 138-143. Melbourne, Australia.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Types of common-sense knowledge needed for recognizing textual entailment",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Lobue",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "329--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for rec- ognizing textual entailment. In Proceedings of the ACL, pages 329-334. Portland, OR.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "WordNet: An Electronic Lexical database",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Miller. 1998. WordNet: An Electronic Lexical database. MIT Press.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A corpus and evaluation framework for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL-HLT",
"volume": "",
"issue": "",
"pages": "839--849",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. In Proceedings of the NAACL-HLT, pages 839-849. San Diego, CA.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The relationship between depth of vocabulary knowledge and l2 learners lexical inferencing strategy use and success",
"authors": [
{
"first": "Hossein",
"middle": [],
"last": "Nassaji",
"suffix": ""
}
],
"year": 2006,
"venue": "The Modern Language Journal",
"volume": "90",
"issue": "3",
"pages": "387--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossein Nassaji. 2006. The relationship between depth of vocabulary knowledge and l2 learners lexical inferencing strategy use and success. The Modern Language Journal, 90(3):387-401.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "MS MARCO: A human generated machine reading comprehension dataset",
"authors": [
{
"first": "Tri",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Mir",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Xia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Tiwary",
"suffix": ""
},
{
"first": "Rangan",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint, cs.CL/1611.09268v3.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Richer event description: Integrating event coreference with temporal, causal and bridging annotation",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "O'Gorman",
"suffix": ""
},
{
"first": "Kristin",
"middle": [],
"last": "Wright-Bettner",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the CNS",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the CNS, pages 47-56. Austin, TX.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "SemEval-2018 Task 11: Machine comprehension using commonsense knowledge",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Ostermann",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Modi",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Thater",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the SemEval",
"volume": "",
"issue": "",
"pages": "747--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. SemEval-2018 Task 11: Machine comprehension using commonsense knowledge. In Proceedings of the SemEval, pages 747-757. New Orleans, LA.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Machine reading at the University of Washington",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Janara",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Chloe",
"middle": [],
"last": "Kiddon",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mausam",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NAACL-HLT FAM-LbR",
"volume": "",
"issue": "",
"pages": "87--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon, Janara Christensen, Pedro Domingos, Oren Etzioni, Raphael Hoffmann, Chloe Kiddon, Thomas Lin, Xiao Ling, Mausam, Alan Ritter, Stefan Schoenmackers, Stephen Soderland, Dan Weld, Fei Wu, and Congle Zhang. 2010. Machine reading at the University of Washington. In Proceedings of the NAACL-HLT FAM-LbR, pages 87-95.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving lang- uage understanding by generative pre-training. https://openai.com/blog/language-unsupervised/.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Know what you don't know: Unanswerable questions for squad",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswer- able questions for squad. In Proceedings of the ACL, pages 784-789. Melbourne, Australia.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehen- sion of text. In Proceedings of the EMNLP, pages 2383-2392. Austin, TX.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "CoQA: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational ques- tion answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Renshaw",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "94--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine com- prehension of text. In Proceedings of the EMNLP, pages 193-203. Seattle, WA, Lenhart Schubert. 2002. Can we derive general world knowledge from texts? In Proceedings of the HLT, pages 94-97. San Diego, CA.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "DRCD: A Chinese machine reading comprehension dataset",
"authors": [
{
"first": "Chih Chieh",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Trois",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuting",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Yiying",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Tsai",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. DRCD: A Chinese machine reading comprehension dataset. arXiv preprint, cs.CL/1806.00920v3.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "DREAM: A challenge data set and models for dialoguebased reading comprehension",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "217--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019a. DREAM: A challenge data set and models for dialogue- based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217-231.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "ERNIE: Enhanced representation through knowledge integration",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shuohuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xuyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Danxiang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: Enhanced representation through knowledge integration. arXiv preprint, cs.CL/1904.09223v1.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "NewsQA: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the RepL4NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Pro- ceedings of the RepL4NLP, pages 191-200.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "We usually dont like going to the dentist: Using common sense to detect irony on twitter",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "4",
"pages": "793--832",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2018. We usually dont like going to the dentist: Using common sense to detect irony on twitter. Computational Linguistics, 44(4): 793-832.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A co-matching model for multichoice reading comprehension",
"authors": [
{
"first": "Shuohang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shiyu",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuohang Wang, Mo Yu, Shiyu Chang, and Jing Jiang. 2018. A co-matching model for multi- choice reading comprehension. In Proceedings of the ACL, pages 1-6. Melbourne, Australia.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Constructing datasets for multihop reading comprehension across documents",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "287--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi- hop reading comprehension across documents. Transactions of the Association of Computa- tional Linguistics, 6:287-302.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "A taxonomy of part-whole relations",
"authors": [
{
"first": "Morton",
"middle": [
"E"
],
"last": "Winston",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Chaffin",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Herrmann",
"suffix": ""
}
],
"year": 1987,
"venue": "Cognitive Science",
"volume": "11",
"issue": "4",
"pages": "417--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morton E. Winston, Roger Chaffin, and Douglas Herrmann. 1987. A taxonomy of part-whole relations. Cognitive Science, 11(4):417-444.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "The part-of-speech tagging guidelines for the Penn Chinese Treebank (3.0)",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia. 2000. The part-of-speech tagging guidelines for the Penn Chinese Treebank (3.0).",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Large-scale cloze test dataset created by teachers",
"authors": [
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "234--2356",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qizhe Xie, Guokun Lai, Zihang Dai, and Eduard Hovy. 2018. Large-scale cloze test dataset created by teachers. In Proceedings of the EMNLP, pages 234-2356. Brussels, Belgium.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the EMNLP, pages 2369-2380. Brussels, Belgium.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "ReCoRD: Bridging the gap between human and machine commonsense reading comprehension",
"authors": [
{
"first": "Sheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018a. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint, cs.CL/ 1810.12885v1.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Medical exam question answering with large-scale reading comprehension",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhiyang",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xien",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the AAAI",
"volume": "",
"issue": "",
"pages": "5706--5713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Zhang, Ji Wu, Zhiyang He, Xien Liu, and Ying Su. 2018b. Medical exam question answering with large-scale reading comprehension. In Proceedings of the AAAI, pages 5706-5713. New Orleans, LA.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "One-shot learning for question-answering in Gaokao history challenge",
"authors": [
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the COLING",
"volume": "",
"issue": "",
"pages": "449--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuosheng Zhang and Hai Zhao. 2018. One-shot learning for question-answering in Gaokao his- tory challenge. In Proceedings of the COLING, pages 449-461. Santa Fe, NM.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "ChID: A large-scale Chinese idiom dataset for cloze test",
"authors": [
{
"first": "Chujie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACL",
"volume": "",
"issue": "",
"pages": "778--787",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chujie Zheng, Minlie Huang, and Aixin Sun. 2019. ChID: A large-scale Chinese idiom dataset for cloze test. In Proceedings of the ACL, pages 778-787. Florence, Italy.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Analysis of distractor plausibility.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "The need for two major types of prior knowledge when answering questions of different max i S(w i , d) and S(c, d).",
"type_str": "figure",
"num": null
},
"TABREF2": {
"text": "Comparison of C 3 and representative Chinese question answering and machine reading comprehension tasks. We list only one English counterpart for each Chinese dataset.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"text": "",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"text": "There are so many people at the railway station. I have waited in line all day long. However, when my turn comes, they say that there is no ticket left unless the Spring Festival is over. F: It doesn't matter. It is all the same for you to come back after the Spring Festival is over. M: But according to our company's regulation, I must go to the office on the 6th day of the first lunar month. I'm afraid I have no time to go back after the Spring Festival, so could you and my dad come to Shanghai for the coming Spring Festival? F: I am too old to endure the travel. M: It is not difficult at all. After I help you buy the tickets, you can come here directly.",
"html": null,
"num": null,
"content": "<table><tr><td>Q1 What is the relationship between the speakers?</td></tr><tr><td>A. father and daughter.</td></tr><tr><td>B. mother and son. \u22c6</td></tr><tr><td>C. classmates.</td></tr><tr><td>D. colleagues.</td></tr><tr><td>Q2 What difficulty has the male met?</td></tr><tr><td>A. his company does not have a vacation.</td></tr><tr><td>B. things are expensive during the Spring Festival.</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: English translation of a sample problem</td></tr><tr><td>from C 3 -Dialogue (C 3 D ) (\u22c6: the correct option).</td></tr><tr><td>dialogue as ''profession'': ''F: Many of my</td></tr><tr><td>classmates become teachers after gradu-</td></tr><tr><td>ation. M: The best thing about being a</td></tr><tr><td>teacher is feeling happy every day as you</td></tr><tr><td>are surrounded by students!''.</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "The overall statistics of C 3 . C 3",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}