{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:13:33.670613Z"
},
"title": "Unsupervised Multiple Choices Question Answering: Start Learning from Basic Knowledge",
"authors": [
{
"first": "Chi-Liang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we study the possibility of unsupervised Multiple Choices Question Answering (MCQA). From very basic knowledge, the MCQA model knows that some choices have higher probabilities of being correct than others. The information, though very noisy, guides the training of an MCQA model. The proposed method is shown to outperform the baseline approaches on RACE and is even comparable with some supervised learning approaches on MC500.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we study the possibility of unsupervised Multiple Choices Question Answering (MCQA). From very basic knowledge, the MCQA model knows that some choices have higher probabilities of being correct than others. The information, though very noisy, guides the training of an MCQA model. The proposed method is shown to outperform the baseline approaches on RACE and is even comparable with some supervised learning approaches on MC500.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question Answering (QA) has been widely used for testing Reading Comprehension. Recently, numerous question answering datasets (Weston et al., 2015; Rajpurkar et al., 2016 Rajpurkar et al., , 2018 Yang et al., 2018; Trischler et al., 2017; Joshi et al., 2017; Kwiatkowski et al., 2019; Reddy et al., 2019; Richardson, 2013; Lai et al., 2017a; Khashabi et al., 2018) have been proposed. These datasets can be divided into two major categories: Extractive Question Answering (EQA) and Multiple Choices Question Answering (MCQA). In EQA, the answer has to be a span of the given reading passage, such as SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2017) ; while in MCQA, the answer is one of the given choices, such as MCTest (Richardson, 2013) and RACE (Lai et al., 2017a) .",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Weston et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 149,
"end": 171,
"text": "Rajpurkar et al., 2016",
"ref_id": "BIBREF12"
},
{
"start": 172,
"end": 196,
"text": "Rajpurkar et al., , 2018",
"ref_id": "BIBREF11"
},
{
"start": 197,
"end": 215,
"text": "Yang et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 216,
"end": 239,
"text": "Trischler et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 240,
"end": 259,
"text": "Joshi et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 260,
"end": 285,
"text": "Kwiatkowski et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 286,
"end": 305,
"text": "Reddy et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 306,
"end": 323,
"text": "Richardson, 2013;",
"ref_id": "BIBREF15"
},
{
"start": 324,
"end": 342,
"text": "Lai et al., 2017a;",
"ref_id": "BIBREF7"
},
{
"start": 343,
"end": 365,
"text": "Khashabi et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 607,
"end": 631,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 643,
"end": 667,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 740,
"end": 758,
"text": "(Richardson, 2013)",
"ref_id": "BIBREF15"
},
{
"start": 768,
"end": 787,
"text": "(Lai et al., 2017a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, large pretrained language models such as BERT have exceeded human performance in some EQA benchmark corpora, for example, SQuAD (Rajpurkar et al., 2016) . Compared to EQA, MCQA does not restrict the answer to be spans in context. This allowed MCQA can have more challenging questions than EQA, including but not limited to logical reasoning or summarization. The performance gap between BERT and human performance is still significant. In this paper, we focus on MCQA. A person who can read can deal with the MCQA task without further training, but this is not the case for a machine. The BERT-based models cannot be directly applied to solve the MCQA task without seeing any MCQA examples. Even for the models achieving human-level performance in EQA, they still need some MCQA examples with correct choices being labeled for fine-tuning. Although Keskar et al. (2019) ; Raffel et al. (2020) proposed the unified question answering model, they require unifying the multiple tasks to span extraction task.",
"cite_spans": [
{
"start": 138,
"end": 162,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 859,
"end": 879,
"text": "Keskar et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 882,
"end": 902,
"text": "Raffel et al. (2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The semi-supervised MCQA model training approach has been proposed (Chung et al., 2018) , in which an initial MCQA model is used to answer the unlabelled questions to generate pseudo labeled data. Then pseudo labeled data is used to fine-tune the MCQA model to improve the performance. However, the initial MCQA model still needs some labeled examples to train.",
"cite_spans": [
{
"start": 67,
"end": 87,
"text": "(Chung et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we study the possibility of unsupervised MCQA. Instead of starting from an initial MCQA model (Chung et al., 2018) , here, the machine starts with some prior knowledge. For example, a choice has a higher probability of being correct if the choice has word overlap with the document and question. With the basic rule, the machine knows that some choices have higher probabilities of being correct than others, and some choices can be ruled out. With these basic rules, an MCQA model can be trained without any labeled MCQA examples. With this approach, we got absolute gains of 4\u223c9% accuracy compared to the baseline methods on two MCQA benchmark corpora, RACE and MC500.",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(Chung et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider MCQA where we are given a question q, a passage p and a set of choices C = {c 1 , c 2 , ...c n }, where n is the number of choices, and machine needs to select an answer a \u2208 C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised MCQA",
"sec_num": "2"
},
{
"text": "We propose to address an unsupervised MCQA in a two-stage approach (Figure 1 ). First, we pick the candidate set T from choices by fundamental rule from human knowledge (sliding window) or a model trained without MCQA data (EQA model). Second, we train a model to pick the final answer from the candidates.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 76,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Unsupervised MCQA",
"sec_num": "2"
},
{
"text": "The candidate selection approaches give a score to each choice which represents the likelihood of being correct. We use two systems to calculate the scores, one using simple lexical features and another using a pre-trained EQA model. A choice is selected into candidate set T if the choice's score is higher than a threshold t, and is the top k scores among all the choices {c 1 , c 2 , ...c n } of a question q. In this way, each question has at most k candidates in T . k should be smaller than n (k < n) to rule out some less likely choices. A question will not have any choice in T if none of its choices pass the threshold t. Both t and k are the hyperparameters. Note that our methods do not guarantee the answer must be in the candidate set. The candidate sets are only used during training, and we do not need to choose candidates when testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidates Choosing",
"sec_num": "2.1"
},
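As a concrete illustration of the selection rule described above, the following is a minimal Python sketch (the helper name and variable names are ours, not the authors' code): given per-choice scores from SW or EQA matching, keep the choices that pass the threshold t and rank among the top k.

```python
def select_candidates(scores, t, k):
    """Return the candidate set T for one question.

    scores: one score per choice c_1..c_n (from SW or EQA matching).
    t: score threshold; k: maximum number of candidates (k < n).
    A question may end up with an empty candidate set if no score passes t.
    """
    passed = [(i, s) for i, s in enumerate(scores) if s >= t]
    passed.sort(key=lambda pair: pair[1], reverse=True)
    return passed[:k]  # at most k (choice index, score) pairs

# Example: 4 choices, threshold 0, keep at most 3 candidates (the RACE + SW setting)
print(select_candidates([2.1, -0.5, 0.7, 0.0], t=0, k=3))  # -> [(0, 2.1), (2, 0.7), (3, 0.0)]
```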
{
"text": "We follow the sliding window algorithm in Richardson (2013) , matching a bag of words constructed from the question and choices to the passage to compute the scores of choices. The algorithm's details are shown in Algorithm 1.",
"cite_spans": [
{
"start": 42,
"end": 59,
"text": "Richardson (2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sliding Window (SW)",
"sec_num": null
},
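A minimal sketch of the sliding-window scoring idea, assuming the inverse-count weighting IC(w) = log(1 + 1/Count(w)) from Richardson (2013); tokenization and lower-casing are simplified, and the function name is ours:

```python
import math
from collections import Counter

def sliding_window_score(passage_words, question_words, choice_words):
    """Score one choice by the best passage window that overlaps the bag of
    words built from the question and the choice, weighting rarer passage
    words more heavily via the inverse-count weight IC(w)."""
    counts = Counter(passage_words)
    ic = {w: math.log(1.0 + 1.0 / counts[w]) for w in counts}
    bag = set(question_words) | set(choice_words)
    size = len(bag)
    best = 0.0
    for start in range(max(1, len(passage_words) - size + 1)):
        window = passage_words[start:start + size]
        best = max(best, sum(ic[w] for w in window if w in bag))
    return best
```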
{
"text": "EQA Matching In this setting, we use a pretrained EQA model as our reference. Given a passage and a question, the EQA model outputs an answer A, which is a text span from the passage. Then we use a string-matching algorithm to compute the similarity between A and each candidate c, and the similarity serves as the score for each candidate. Gestalt Pattern Matching (Ratcliff and Metzener, July 1988) algorithm is the stringmatching algorithm used here. The algorithm's details are shown in Appendix B.",
"cite_spans": [
{
"start": 366,
"end": 400,
"text": "(Ratcliff and Metzener, July 1988)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sliding Window (SW)",
"sec_num": null
},
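Python's difflib implements Ratcliff/Obershelp (Gestalt) pattern matching, so the string-matching step can be sketched as below. The EQA model call is left out, and scaling the ratio to 0..100 (so the threshold of 50 from the training details is meaningful) is our assumption, not something stated in the paper.

```python
from difflib import SequenceMatcher

def eqa_match_scores(eqa_answer, choices):
    """Score each choice by its Gestalt-pattern-matching similarity (0..100)
    to the span A predicted by the extractive QA model."""
    return [100.0 * SequenceMatcher(None, eqa_answer.lower(), c.lower()).ratio()
            for c in choices]

# e.g. if the EQA span is "the school library", the choice "in the school library"
# receives a high similarity score and unrelated choices receive low ones.
```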
{
"text": "The candidates T selected in the last subsection are used as the ground truth to train an MCQA model. Because the candidates are not always correct, and each question can have multiple choices selected in the candidate set, the typical supervised learning approaches cannot be directly applied here. Therefore, the following learning methods are explored to form our objective function L for training the MCQA model from the candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Methods",
"sec_num": "2.2"
},
{
"text": "L = \u2212 log P (c max | p; q) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "where c max is the choice of a question q in the candidate set with the highest score. The approach here has no difference from typical supervised learning, except that the ground truth is from the candidate selection approaches, not human labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "Maximum Marginal Likelihood (MML) L = \u2212 log c i \u2208T P (c i | p; q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "In this objective, all the choices in the candidate set are considered correct. The learning target of the MCQA model is to maximize the probabilities that all the choices in the candidate set are labeled as correct. If there are more correct choices than the incorrect ones in the candidate set, the impact of the wrong choices in the candidate set can be mitigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "Hard-EM Proposed by Min et al. (2019) , this can be viewed as a variant of MML,",
"cite_spans": [
{
"start": 20,
"end": 37,
"text": "Min et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "L = \u2212 log max c i \u2208T P (c i | p; q)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
{
"text": "The underlying assumption of this objective can be understood as follows. For a question q, several choices are selected in the candidate set. Although we don't know which one is correct, we assume one of them is correct. Therefore, we want the MCQA model to learn to maximize the probability of one of the choices for a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Highest-Only",
"sec_num": null
},
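A minimal PyTorch-style sketch of the three objectives over the per-choice probabilities P(c_i | p; q); the function and variable names are illustrative, not the authors' code, and the candidate indices and scores come from the selection stage above.

```python
import torch
import torch.nn.functional as F

def mcqa_loss(logits, cand_idx, cand_scores, method="mml"):
    """logits: tensor of shape (num_choices,) from the MCQA model.
    cand_idx: list of indices of the candidate set T.
    cand_scores: the selection scores of those candidates."""
    log_probs = F.log_softmax(logits, dim=-1)          # log P(c_i | p; q)
    cand_log_probs = log_probs[cand_idx]
    if method == "highest_only":                       # treat the highest-scoring candidate as the label
        best = cand_idx[int(torch.tensor(cand_scores).argmax())]
        return -log_probs[best]
    if method == "mml":                                # -log sum_{c_i in T} P(c_i | p; q)
        return -torch.logsumexp(cand_log_probs, dim=-1)
    if method == "hard_em":                            # -log max_{c_i in T} P(c_i | p; q)
        return -cand_log_probs.max()
    raise ValueError(method)
```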
{
"text": "To evaluate the proposed method's effectiveness compared to supervised learning and other approaches that do not require training data, we experiment on two MCQA tasks, RACE and MCTest(MC500). RACE Lai et al. (2017b) introduced the RACE dataset, collected from the English exams for middle and high school Chinese students. RACE consists of near 28000 passages and nearly 100000 questions. Specifically, the dataset can be split into two parts: RACE-M, collected from English examinations designed for middle school students; and RACE-H, collected from English examinations de-signed for high students. RACE-H is more difficult than RACE-M; the length of the passages and the vocabulary size in the RACE-H are much larger than that of the RACE-M.",
"cite_spans": [
{
"start": 193,
"end": 216,
"text": "RACE Lai et al. (2017b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments Setup",
"sec_num": "3"
},
{
"text": "MC500 Richardson (2013) present MCTest which requires machines to answer multiple-choice reading comprehension questions about fictional stories. MCTest has two variants: MC160, which contains 160 stories, and MC500, which contains 500 stories. Moreover, MC500 can be subdivided into MC500-One and MC500-Multi. MC500-One refers to the questions that can be answered with one sentence. MC500-Multi refers to the questions that need evidence in multiple sentences to answer. The length of each story is approximately 150 to 300 words, and the topic of a story is a wide range. In our experiment, we evaluate our model on MC500 since there are only 280 questions in the MC160, which is not suitable in our setting.",
"cite_spans": [
{
"start": 6,
"end": 23,
"text": "Richardson (2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "Appendix A shows more details about both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "In this work, we used BERT-base as the pre-trained model for both the EQA system and the MCQA system in the following experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "EQA model The hyperparameters we used are the same as the official released for training SQuAD 1.1. For both datasets, the EQA model is trained on SQuAD 1.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
{
"text": "MCQA Model To fine-tune the BERT model on the MCQA datasets, we construct four input sequences, each containing the concatenation of the passage, the question, and one of the choices (Zellers et al., 2018) . The separator tokens [SEP] are added between the passage and the question. Next, we fed the [CLS] token representation to the classifier and got the scores for each choice. Table 1 shows the results of baselines and our methods on RACE and MC500.",
"cite_spans": [
{
"start": 183,
"end": 205,
"text": "(Zellers et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 229,
"end": 234,
"text": "[SEP]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 381,
"end": 388,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Model Description",
"sec_num": "3.2"
},
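A sketch of the input construction with the Hugging Face transformers library, assuming BERT-base and four choices. The exact tokenizer settings, and appending the choice to the question segment (with the tokenizer inserting [SEP] between the two segments), are our assumptions rather than details taken from the paper.

```python
import torch
from transformers import BertTokenizerFast, BertForMultipleChoice

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

def score_choices(passage, question, choices, max_len=320):
    # One sequence per choice: [CLS] passage [SEP] question choice [SEP]
    enc = tokenizer([passage] * len(choices),
                    [question + " " + c for c in choices],
                    truncation=True, padding=True, max_length=max_len,
                    return_tensors="pt")
    # BertForMultipleChoice expects tensors of shape (batch, num_choices, seq_len)
    inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
    with torch.no_grad():
        logits = model(**inputs).logits   # shape (1, num_choices)
    return logits.squeeze(0)              # one score per choice
```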
{
"text": "RACE Our methods outperform SW and EQA Match across all the datasets with absolute gain 4\u223c9% accuracy, which shows the MCQA model can improve itself from the noisy candidate sets. MML and Hard-EM outperform Highest-Only in all cases, which indicates that relying only on the single choice with the highest score is insufficient. The improvement with EQA Matching Algorithm is more significant than with SW Matching Algorithm. This implies Candidates Choosing stage plays a significant role in the performance; more details will be discussed later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "MC500 With the SW Matching algorithm, our methods outperform the performance baseline across all the datasets with absolute gains of 1\u223c5% accuracy. With the EQA Matching Algorithm, because on MC500, EQA has achieved a comparable result with supervised learning, the proposed approaches do not further improve EQA. The performance of our method drops in MC500-One because EQA models can better capture the information within a sentence than multiple sentences, leading MC500-One performance much better than MC500-Multi with EQA models. On the other hand, we improve the performance of MC500-Multiple by about 12%. This shows that our method can further improve EQA in the more difficult examples that the EQA model cannot answer correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4"
},
{
"text": "Candidate Set & Matching Methods Table 2 shows the average size of candidate sets chosen by EQA and SW Matching, and their Percent Including Answer, that is, the percent of candidate set including the correct answer. The Percent Including Answer is much better for SW than EQA on RACE because the candidate sets selected by SW are larger than EQA. We find that EQA gives more concentrated confidence scores to the choices than SW, leading to smaller candidate sets. Although the Percent Including Answer of SW is larger than by EQA (Table 2) , the candidates picked by EQA have higher quality than candidates picked by SW, as shown in Table 3 . Table 2 implies that MCQA models from the proposed learning strategy do not just randomly choose a prediction from the candidates. The performance of the proposed approaches in Table 1 is much higher than the performance of randomly sampling from the candidate set, that is, (B) / (A) in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 33,
"end": 40,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 532,
"end": 541,
"text": "(Table 2)",
"ref_id": "TABREF1"
},
{
"start": 635,
"end": 642,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 645,
"end": 652,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 822,
"end": 829,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 933,
"end": 940,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Question Types To see how our learning method works with respect to the type of question, we divided the questions in RACE into six types: why, what, where, when, who, and how. We choose to analyze RACE because it has more questions than MC 500. Figure 2 shows the accuracy of each question types. The results show that the proposed approach does not favor specific types of questions. We found that no matter the candidate set selection methods, the proposed method improved all types of questions, except \"where\" for EQA and \"when\" for SW. Understanding why some question types do not been improved by unsupervised MCQA in some cases is our future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "In this paper, we proposed an unsupervised MCQA method, which exploits the pseudo labels generated by some basic rules or external non-MCQA datasets. The proposed method significantly outperforms the baseline approaches on RACE and is even comparable with the supervised learning performance on MC500. We hope this paper sheds light on unsupervised learning in NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "Algorithm 1: Sliding Window Input :Threshold t, max numbers of candidates k, a set of passage words P , set of words in question Q, and a set of words in choices C 1...n .Define :Count(w) :where P i is the i-th word in passage P ; Define :IC(w) :sort candidates descending by score return first k elements of candidates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Matching Algorithms",
"sec_num": null
},
{
"text": "We finetuned all models with a linear learning rate decay schedule with 1000 warm-up steps. The batch size is 32, and the max length of the input size is 320. For RACE, we set the threshold to 0, the max number of candidates to 3 with SW Matching, and set the threshold to 50, the max number of candidates to 3 with the EQA Matching. For MC500, we set the threshold to 3, the max number of candidates to 2 with SW Matching, andAlgorithm 2: EQA Matching Input :Threshold t, max numbers of candidates k, a set of passage words P , set of words in question Q, and a set of words in choices C 1...n and a pre-trained EQA model M candidates \u2190 Array[] A \u2190 M.predict(P, Q) for i = 1 to n doif score i \u2265 t then candidates.append((i, score i )) sort candidates descending by score return first k elements of candidates the threshold to 50, the max number of candidates to 3 with the EQA Matching. Following Min et al. (2019) , when we use hard-EM as objective, we perform annealing: at training step t, the model use MML as objective with a probability of min(t/\u03c4, 0.8) and otherwise use hard-EM, where \u03c4 is a hyperparameter. We tried \u03c4 = 1000, 4000, and 8000.",
"cite_spans": [
{
"start": 898,
"end": 915,
"text": "Min et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "C Training Details",
"sec_num": null
}
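A small sketch of the annealing rule from the training details above: at training step t, use MML with probability min(t/tau, 0.8) and hard-EM otherwise. The function name is ours.

```python
import random

def pick_objective(step, tau):
    """Anneal between the objectives: early in training hard-EM dominates;
    later, MML is used with a probability capped at 0.8."""
    p_mml = min(step / tau, 0.8)
    return "mml" if random.random() < p_mml else "hard_em"

# e.g. with tau = 4000: at step 400 the chance of MML is 0.1; from step 3200 on it is 0.8
```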
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quac: Question answering in context",
"authors": [
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Wentau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2174--2184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. Quac: Question answering in context. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2174-2184.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Supervised and unsupervised transfer learning for question answering",
"authors": [
{
"first": "Yu-An",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Glass",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1585--1594",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1143"
]
},
"num": null,
"urls": [],
"raw_text": "Yu-An Chung, Hung-Yi Lee, and James Glass. 2018. Supervised and unsupervised transfer learning for question answering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1585-1594, New Orleans, Louisiana. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1601--1611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1601-1611.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unifying question answering, text classification, and regression via span extraction",
"authors": [
{
"first": "Nitish",
"middle": [
"Shirish"
],
"last": "Keskar",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "McCann",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question an- swering, text classification, and regression via span extraction.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Looking beyond the surface: A challenge set for reading comprehension over multiple sentences",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Khashabi",
"suffix": ""
},
{
"first": "Snigdha",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "252--262",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1023"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking be- yond the surface: A challenge set for reading com- prehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 252-262, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Natural questions: A benchmark for question answering research",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "453--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: A bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Race: Large-scale reading comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017a. Race: Large-scale reading comprehension dataset from examinations. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 785- 794.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "RACE: Large-scale ReAding comprehension dataset from examinations",
"authors": [
{
"first": "Guokun",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Qizhe",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "785--794",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017b. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A discrete hard EM approach for weakly supervised question answering",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM ap- proach for weakly supervised question answering. In EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploring the limits of transfer learning with a unified text-to",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "784--789",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2124"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784- 789, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pattern matching: The gestalt approach",
"authors": [
{
"first": "John",
"middle": [
"W"
],
"last": "Ratcliff",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Metzener",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John W. Ratcliff and David Metzener. July 1988. Pat- tern matching: The gestalt approach.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Coqa: A conversational question answering challenge",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "249--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249-266.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mctest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Emprical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson. 2013. Mctest: A challenge dataset for the open-domain machine comprehen- sion of text. In Proceedings of the 2013 Conference on Emprical Methods in Natural Language Process- ing (EMNLP 2013).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Newsqa: A machine comprehension dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- hension dataset. In Proceedings of the 2nd Work- shop on Representation Learning for NLP, pages 191-200.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards ai-complete question answering: Aset of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.05698"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merri\u00ebnboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: Aset of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2369--2380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369-2380.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Swag: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Overall training process.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Accuracy (%) on different type of question",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "Starting from SW Matching Algorithm SW 30.8 30.2 36.2 35.2 28.4 28.1 46.5 42.8 36.7 43.7 54.5 42,1 Highest-Only 31.8 30.8 37.5 36.4 29.4 28.5 46.0 42.3 44.4 41.5 47.2 43.0 MML 34.0 33.1 40.3 40.5 31.4 30.1 50.0 45.3 46.6 44.4 52.7Results on RACE and MC500 of MCTest. The evaluation measure is accuracy (%). The Supervised Learning was training with ground truth and used the same hyperparamter as others.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">RACE</td><td colspan=\"2\">RACE-M</td><td colspan=\"2\">RACE-H</td><td colspan=\"2\">MC500</td><td colspan=\"4\">MC500-One MC500-Multi.</td></tr><tr><td/><td>dev</td><td>test</td><td>dev</td><td>test</td><td>dev</td><td>test</td><td>dev</td><td>test</td><td>dev</td><td>test</td><td>dev</td><td>test</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>46.1</td></tr><tr><td>Hard-EM</td><td colspan=\"11\">34.3 34.0 41.0 41.2 31.5 31.0 51.5 45.7 44.4 47.7 57.3</td><td>44.0</td></tr><tr><td colspan=\"4\">Starting from EQA Matching Algorithm</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>EQA Match</td><td colspan=\"11\">32.3 32.2 40.3 40.5 28.9 28,8 62.5 64.1 75.6 80.9 51.8</td><td>49.8</td></tr><tr><td colspan=\"12\">Highest-Only 37.0 36.9 48.8 46.1 32.1 33.1 67.5 60.6 67.7 66.0 67.2</td><td>56.0</td></tr><tr><td>MML</td><td colspan=\"11\">38.6 39.4 49.7 49.6 34.0 35.2 65.5 61.3 67.8 67.1 63.6</td><td>56.3</td></tr><tr><td>Hard-EM</td><td colspan=\"11\">39.1 39.2 49.0 49.7 35.0 34.9 66.0 63.3 68.9 66.0 63.6</td><td>60.9</td></tr><tr><td>Supevised</td><td colspan=\"11\">64.9 65.5 70.0 71.0 64.0 63.3 70.0 64.3 75.6 69.0 60.4</td><td>65.4</td></tr><tr><td/><td/><td colspan=\"2\">RACE</td><td colspan=\"2\">MC500</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td>dev</td><td>test</td><td>dev</td><td>test</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">SW Matching Algorithm</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">(A) Avg. num. of candidates</td><td>3</td><td>3</td><td colspan=\"2\">1.98 1.85</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">(B) Percent Including Ans.</td><td colspan=\"4\">79.2 79.0 67.0 62.1</td><td/><td/><td/><td/><td/><td/></tr><tr><td>(B) / (A)</td><td/><td colspan=\"4\">26.4 26.3 33.8 33.6</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">EQA Matching Algorithm</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"6\">(A) Avg. num. of candidates 1.35 1.38 1.63 1.62</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">(B) Percent Including Ans.</td><td colspan=\"4\">40.9 41.8 73.0 71.5</td><td/><td/><td/><td/><td/><td/></tr><tr><td>(B) / (A)</td><td/><td colspan=\"4\">30.3 30.3 44.8 44.1</td><td/><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF1": {
"text": "The average size of candidate sets chosen by EQA and SW Matching.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Percent Including Answer</td></tr><tr><td colspan=\"2\">means the percent of candidate set including the labeled</td></tr><tr><td colspan=\"2\">answer. (B) / (A) is the accuracy of randomly selecting</td></tr><tr><td>a choice from a candidate set.</td><td/></tr><tr><td colspan=\"2\">EQA SW RACE-train MC500-train</td></tr><tr><td>29759</td><td>202</td></tr><tr><td>8461</td><td>194</td></tr></table>"
},
"TABREF2": {
"text": "Candidate Set Analysis of RACE and MC500 of MCTest. Case1: candidates chosen by EQA including the answer but candidates chosen by SW not including the answer. Case2: candidates chosen by SW including the answer but candidates chosen by EQA not including the answer.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}