|
{ |
|
"paper_id": "C12-1001", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:26:49.239921Z" |
|
}, |
|
"title": "Multi-dimensional feature merger for Question Answering", |
|
"authors": [], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we introduce new features for question-answering systems. These features are inspired by the fact that justification of the correct answer (out of many candidate answers) may be present in multiple passages. Our features attempt to combine evidence from multiple passages retrieved for a candidate answer. We present results on two data-sets: Jeopardy! and Doctor's Dilemma. In both data-sets, our features are ranked highest in correlation with gold class (in the training data) and significantly improve the performance of our existing QA system, Watson.", |
|
"pdf_parse": { |
|
"paper_id": "C12-1001", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we introduce new features for question-answering systems. These features are inspired by the fact that justification of the correct answer (out of many candidate answers) may be present in multiple passages. Our features attempt to combine evidence from multiple passages retrieved for a candidate answer. We present results on two data-sets: Jeopardy! and Doctor's Dilemma. In both data-sets, our features are ranked highest in correlation with gold class (in the training data) and significantly improve the performance of our existing QA system, Watson.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Most existing factoid question answering systems adopt search strategies and scoring algorithms with the assumption that a short passage exists in the reference corpus which contains sufficient information to answer each question. This assumption largely holds true for short and focused factoid questions such as those found in the TREC QA track (Voorhees and Tice, 2000) . Examples of TREC QA questions include \"When did Hawaii become a state?'\" and \"What strait separates North America from Asia?'\" However, some more complex factoid questions contain facts encompassing multiple facets of the answer, which often cannot be found together in a short text passage. Consider the following examples, selected from collections of Jeopardy! 1 and Doctor's Dilemma 2 questions, respectively:", |
|
"cite_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 372, |
|
"text": "(Voorhees and Tice, 2000)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) WHO'S WHO IN SPORTS: Born in 1956, this Swedish tennis player won 6 French Opens & 5 straight Wimbledons (A: Bj\u00f6rn Borg)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) CARDIOLOGY: Murmur associated with this condition is harsh, systolic, diamondshaped, and increases in intensity with Valsalva (A: Hypertrophic cardiomyopathy)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In both examples, information presented in the question can reasonably be expected to be in documents that describe the respective answer entities. However, it is quite unlikely that all the information will be present in one or two adjacent sentences in the document. More specifically, in example (1), we find birth year and nationality information in the basic biographic section of documents about Bj\u00f6rn Borg, while statistics about his tennis record can generally be found in a section about Borg's career. Similarly, for example (2), the descriptions of typical murmurs associated with hypertrophic cardiomyopathy (harsh, systolic, and diamond-shaped) may not fall under the same section as the impact of Valsalva maneuver on the murmur (which is a factor used to distinguish hypertrophic cardiomyopathy from aortic stenosis). As a result, a typical passage retrieved from most reference corpus would cover only a portion of the facts given in the question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These multi-faceted factoid questions present a challenge for existing question answering systems which make the aforementioned assumption. Consider the following short passages relevant to the question in example (2):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2.1 a) Hypertrophic cardiomyopathy generates a harsh late-systolic murmur, ending at S2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2.1 b) The straining phase of the Valsalva maneuver induces an increase in the intensity of the systolic ejection murmur of hypertrophic cardiomyopathy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2.2 a) A harsh, late-peaking, basal murmur radiating to the carotid arteries suggests aortic stenosis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2.2 b) A classic physical finding of aortic stenosis is a harsh, crescendo-decrescendo systolic murmur that is loudest over the second right intercostal space and radiates to the carotid arteries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing systems which evaluate each passage separately against the question would view each passage as having a similar degree of support for either hypertrophic cardiomyopathy or aortic stenosis as the answer to the question. However, these systems lose sight of a crucial fact, namely, that even though each passage covers half of the facts in the question, (2.1 a) and (2.1 b) cover disjoint subsets of the facts, while (2.2 a) and (2.2 b) address the same set of facts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we introduce the notion of multi-dimensional feature merger or MDM features, which allow for passage scoring results to be combined across different dimensions, such as question segments and different passage scoring algorithms. In this motivating example, MDM features that combine results across question segments would capture the broader coverage of passages (2.1 a) and (2.2 b), and thus enable the system to recognize hypertophic cardiomyopathy as a better answer for the question than aortic stenosis. We describe a general-purpose MDM feature merging framework that can be adopted in question answering systems that evaluate candidate answers by matching candidate-bearing passages against the question. We discuss our implementation of this MDM feature merging framework on top of our own question answering system, Watson. Finally, we demonstrate how passage scoring results can be merged across various dimensions in our system, resulting in 1) new features that are more highly correlated with correct answers than the base features from which they were derived, and 2) significant component level performance improvement and 3) end-to-end performance improvement. We present a comprehensive set of experiments for our current domain of interest -the medical domain and a less comprehensive set of experiments for Jeopardy! data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. In section 2, we describe our feature set. Since we build on existing state-of-the-art QA system, in section 3, we briefly describe the current system, focusing on the component of the system that we enhance in this paper. In section 4, we describe passage scorers in the current system, with specific examples of features that leverage scores assigned to passages by these scorers. In section 5, we presents a detailed description of the data we use for training and testing. Additionally, we present experiments and results to show the impact of our features. Section 6 presents a survey of current work in question answering. Finally, we conclude and present future direction of research in the last section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Multi-dimensional feature merger (MDM)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given a question, Q, each of its candidate answer, CA, has a set of supporting passages (Figure 1) . In a typical question-answering system, support of each passage for a candidate answer is quantified. Then a merging strategy is used to combine the support of all passages for a particular candidate answer. In this paper, we introduce a general framework for merging support from supporting passages.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 98, |
|
"text": "(Figure 1)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The methodology of calculating the support of a passage for a candidate answer is called passage scoring (Murdock et al., 2012a) . At an abstract level, a passage scorer is responsible for quantifying how well a passage matches a question. We represent a question and a passage as an ordered set of Table 1 : Standard formulae that constitute g(M ) Question large land animal has large ears P1.1 Figure 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 128, |
|
"text": "(Murdock et al., 2012a)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 306, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 404, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "sum( s) avg( s) std( s) max( s) min( s) non-zero( s) cols j=1 s j sum( s) cols cols j=1 (sj\u2212avg( s)) 2 cols\u22121 arg max j\u2208[1,cols] s j arg min j\u2208[1,cols] s j |{s j |s j = 0\u2200 j \u2208 [1, cols]}|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "x 1 x 2 x 3 x 4 x 5 x 6 f 1.1 P1.2 x 7 x 8 x 9 x 10 x 11 x 12 f 1.2 P2.1 x 13 x 14 x 15 x 16 x 17 x 18 f 2.1 P2.2 x 19 x 20 x 21 x 22 x 23 x 24 f 2.2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "terms (Q = {q 1 , q 2 , . . . , q n }), and (P = {p 1 , p 2 , . . . , p m }) respectively, Passage scorers align question terms to passage terms and assign a score based on how well the terms align. For example, a passage scorer will take as input Q and P and output a vector of scores that represents how well the passage matches the question. We denote this vector for P as f such that f i is the score of how well one of the passage terms matches the i th term in the question. Note the length of this vector is fixed per question but may vary across questions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We collect all these vectors per question, per candidate answer into a matrix, M . For example, CA 1 may be represented as a matrix where row i corresponds to the passage scoring vector for passage P i . An element of this matrix, f i, j is the score assigned by one of the passage scorers of how well passage P i aligns with the term j in the question Q.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This matrix is of variable dimensions for different candidate answers per question. Number of rows could be different because the number of supporting passages could be different for each candidate answer for the same question. Since different questions have different number of question terms, the number of columns could be different for candidate answers across questions. Therefore, we cannot capture the distribution of this matrix simply by linearizing the matrix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we define a function f : M \u2192 R N , that maps each matrix into feature vector of fixed length, N . This function is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "f (M ) =< g(M ), g(M \u2032 ) >", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "where M \u2032 is the transpose of matrix M and g is a function g : M \u2192 R N /2 that maps a matrix into feature vector of fixed length, defined as follows: This example is shown pictorially in Figure 2 Table 2 abstractly shows how passage scorers assign values to specific question terms for specific passages. For example, consider the P1.1 row, which represents how well the passage The African elephant is a very large land animal supports the answer elephant for the question This large land animal also has large ears. If the passage scorer is effective, it will give a high score to x 1 , x 2 and x 3 (because the passage does, indeed, provide strong justification for \"elephant\" satisfying the requirements of being large land animal). It will give a very small score (typically 0) to x 4 , x 5 , and x 6 , because the passage says nothing about elephants having large ears. However, some passage scorers may be mislead by the fact that the term \"large\" appears twice question and either one could align to the one occurrence in the passage. Often some passage scorers match too many terms and thus assign credit to terms that don't deserve it while others match too few and miss important content; this is why we have a diverse collection of scorers and let the classifier sort out how much to trust each of them.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "g(M ) =<sum( s), avg( s), std( s), max( s), min( s), dim( s), non-zero( s)>", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Using one of the existing merging strategy, say M AX , candidate answer 1, African Elephant, will get assigned a feature value equal to M AX {(", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "x 1 + x 2 + x 3 + x 4 + x 5 + x 6 ), (x 7 + x 8 + x 9 + x 10 + x 11 + x 12 )}.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "So either passage P1.1 or passage P1.2 will be selected as an optimal passage. As is apparent from this merger strategy, it does not attempt to leverage the complementary information in the two passages. Our merging strategy will attempt to capture the distribution of alignment across passages. For the matrix for African Elephant, g(M ) and g(M \u2032 ) will be the same, because", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "M , f (M ) =< g(M ), g(M \u2032 ) >. First dimension of vectors", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "sum( s) = sum( s \u2032 ) = 12 i=1 x i . But others will be different. For example, mean( s) = 1 6 * sum( s), whereas, mean( s \u2032 ) = 1 2 * sum( s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Note, the sum( s) feature is aggregating the information across passages. In a passage scorer, which assigns 1 for a match and 0 otherwise, it is clear why this feature will have a higher value for African Elephant, the correct answer, than Hippo (because Hippo's don't have large ears).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
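
{

"text": "To make this concrete, the following minimal sketch (ours, in Python with NumPy; not the actual Watson code) computes f(M) = < g(M), g(M\u2032) > for one candidate answer, taking s to be the vector of column sums of M, as the mean(s) = (1/6) * sum(s) example above implies:\n\nimport numpy as np\n\ndef g(M):\n    # s_j is the sum of column j of M: per-question-term totals for M,\n    # per-passage totals when M is transposed.\n    s = M.sum(axis=0)\n    return [s.sum(), s.mean(), s.std(ddof=1), s.max(), s.min(),\n            float(len(s)), float(np.count_nonzero(s))]\n\ndef f(M):\n    # Fixed-length feature vector: statistics of M followed by those of M'.\n    return np.array(g(M) + g(M.T))\n\n# Example: 2 supporting passages x 6 question terms, as in Figure 2.\nM = np.array([[1., 1., 1., 0., 0., 0.],   # hypothetical scores for passage P1.1\n              [0., 0., 1., 1., 1., 1.]])  # hypothetical scores for passage P1.2\nprint(f(M))  # 14 values, regardless of the number of passages or terms\n\nRegardless of how many supporting passages or question terms a candidate answer has, f(M) always has the same length, which is what allows these values to be used as features by the final ranking model.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},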
|
{ |
|
"text": "Our framework is general in three ways: 1) It is independent on the type of passage scorer, 2) More matrix operations (like rank(M)), may be easily added to the definition of function g(M ), and 3) Our framework is easily extensible to beyond two dimensions, which can be used to capture additional orthogonal feature dimensions (see future work section for an example).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the following sections, we first describe a specific, and state-of-the-art QA system, Watson. We present where our features fit in the larger architecture. Then we give an overview of specific passage scorers and merging strategies in the current system, followed by experiments and results showing that the new features we introduce add value to the current system. Figure 3 . We refer the reader to (Ferrucci et al., 2010) for a detailed description of the architecture. In this section, we present a high level overview of the system pointing out where our features fit in.", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 427, |
|
"text": "(Ferrucci et al., 2010)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 378, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The DeepQA system analyzes a question, Question Analysis (Lally et al., 2012) , and generates multiple possible candidate answers, Hypothesis Generation (Chu-Carroll et al., 2012).It then applies many different answer scoring algorithms, each of which produces features that are used to evaluate whether the answer is correct. One way in which DeepQA evaluates candidate answers is to first retrieve passages of text that contain the candidate answer, via a technique called Supporting Evidence Retrieval; each passage is then scored using a variety of algorithms called passage scorers Figure 4 : Training and test data for a question-answering system. Each question Q has multiple candidate answers, CA, where few, if any, are correct (class = 1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 77, |
|
"text": "(Lally et al., 2012)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 595, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Overview of Watson", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "< Q 1 , CA 1 , \u22121 >, < Q 1 , CA 2 , \u22121 >, . . . , < Q 1 , CA i , 1 >, . . . , < Q 1 , CA n 1 , \u22121 > < Q 2 , CA 1 , \u22121 >, < Q 2 , CA 2 , \u22121 >, . . . , < Q 2 , CA j , 1 >, . . . , < Q 2 , CA n 2 , \u22121 > . . . < Q m , CA 1 , \u22121 >, < Q m , CA 2 , \u22121 >, . . . , < Q m , CA k , 1 >, . . . , < Q m , CA n m , \u22121 >", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of Watson", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "in the Hypothesis and Evidence Scoring phase (Murdock et al., 2012a ). All of the features are sent to a Final Merging and Ranking (Gondek et al., 2012) component, which uses machine learning techniques to weigh and combine features to produce a single confidence value estimating the probability that the candidate answer is correct. The features we introduce are extracted and made available to the machine learning model in the Final Merging and Ranking component, where the scores assigned by different passage scorers are available. In the next section 4, we give details of existing passage scorers and their feature merging strategies used prior to the framework introduced in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 67, |
|
"text": "(Murdock et al., 2012a", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 131, |
|
"end": 152, |
|
"text": "(Gondek et al., 2012)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of Watson", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our question-answering system works by finding candidate answers, employing a variety of algorithms to compute feature values relating to those answers, and then using a statistical classifier to determine which candidate answer is correct. A question-answering scenario is shown in Figure 1 . For a given question Q, search components find a set of candidate answers {CA 1 , CA 2 , . . . , CA n }. The task of the classifier is to decide which of the candidate answers is the correct answer. Hence the training and test data for that classifier looks as in Figure 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 291, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 566, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Each candidate answer is associated with one or more passages that contain the candidate answer. A subset of the algorithms that compute feature values in our system are the passage scoring components. These components evaluate the evidence that a single passage provides relating to how well the candidate answer satisfies the requirements of the question. Thus among the feature values associated with a candidate answer, some will be passage scoring features. Our passage scorers are described in detail elsewhere (Murdock et al., 2012a ). Here we provide only a brief introduction to provide context for later sections of this paper. We have a variety of passage scoring algorithms that use different strategies for determining which parts of a question to attempt to match to each part of a passage and for determining whether two parts of a passage match. Some attempt to align question terms to passage terms using syntactic structure and/or semantic relations, while others use word order or ignore the relationship among terms completely (e.g., simply counting how many question terms appear in the passage, regardless of whether those terms are similarly arranged).", |
|
"cite_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 539, |
|
"text": "(Murdock et al., 2012a", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Watson's passage scorers leverage available annotation components developed for the DeepQA framework, such as dependency parsing, Named Entity (NE) recognition, coreference resolution and relation detection. The question and the passage are decomposed into sets of terms, where a term can either be a single token or a multiword token. All of these scorers try to determine the amount of overlap between the passage and the question by looking at which terms match. The individual scorers put different restrictions on when a term is considered to match.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Currently, there are four scorers being used in the system: 1. Passage Term Match: Assigns a score based on which question terms are included in the passage, regardless of word order or grammatical relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. Skip Bigram: Assigns a score based on whether pairs of terms that are connected or nearly connected in the syntactic-semantic structure of the question match corresponding pairs of terms in the passage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3. Textual Alignment: Assigns a score based on how well the word order of the passage aligns with that of the question, when the focus is replaced with the candidate answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Passage scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Targets high-precision matching between the syntactic structures of passages and questions, and is therefore quite restrictive concerning structural overlap of the question and the passage. Like Skip Bigram, it operates on syntactic-semantic structural graphs, which contain one node for each lexical item.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
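
{

"text": "As an illustration only (our simplification, not Watson's implementation), the simplest of these, Passage Term Match, can be sketched as a bag-of-words check that emits one score per question term, which is exactly the kind of per-term vector the MDM framework consumes:\n\ndef term_match_scores(question_terms, passage_terms):\n    # One score per question term: 1.0 if the term occurs anywhere in the\n    # passage (word order and syntax are ignored), else 0.0.\n    passage_set = {t.lower() for t in passage_terms}\n    return [1.0 if q.lower() in passage_set else 0.0 for q in question_terms]\n\nquestion = ['large', 'land', 'animal', 'has', 'large', 'ears']\npassage = ['the', 'african', 'elephant', 'is', 'a', 'very', 'large', 'land', 'animal']\nprint(term_match_scores(question, passage))  # [1.0, 1.0, 1.0, 0.0, 1.0, 0.0]\n\nNote that this naive matcher also credits the second occurrence of 'large', the over-matching behaviour discussed earlier; the real scorers apply much richer matching restrictions.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Passage scoring",

"sec_num": "4"

},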
|
{ |
|
"text": "Each passage scoring component produces a fixed number of feature value pairs for each candidate answer within each passage. Some of these values range from 0 to 1, where a high score indicates that the passage matches the question well based on that passage scorer's evaluation criteria; other passage scorers have other ranges. Watson's final answer merging and ranking component considers a pre-defined set of features and applies a machine learned model to score each candidate answer. However, since each candidate has multiple, and generally a varying number of supporting passages, we use a merger to combine passage scores for < candidate answer, passage > pairs into a fixed set of features. For example, if a candidate answer has three passages and a passage scorer assigns a value of 0.5, 0.6, and 0.7 to each passage, these scores may be merged using a merger strategy like M AX . Using this merger strategy, the feature added to the learning model for the candidate answer under consideration will be M AX (0.5, 0.6, 0.7) = 0.7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "We have the following three distinct algorithms that we use to merge features across passages (Gondek et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 115, |
|
"text": "(Gondek et al., 2012)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "1. Maximum: The final score for the candidate answer is the maximum score for that answer in any passages found for that answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "2. Sum: The final score for the candidate answer is the sum of the scores for that answer in each of the passages found for that answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "3. Decaying sum: The final score for the candidate answer is computed to be", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "m i=0 p i 2 i , where p 0 , p 1 , .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": ". . , p m are the scores of the passages that contain the answers, sorted in descending order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
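
{

"text": "A minimal sketch of these three merging strategies (ours, for illustration; not the DeepQA implementation) follows:\n\ndef merge_max(scores):\n    # Maximum: the best single passage determines the merged feature value.\n    return max(scores)\n\ndef merge_sum(scores):\n    # Sum: total support accumulated over all passages.\n    return sum(scores)\n\ndef merge_decaying_sum(scores):\n    # Decaying sum: sort descending, weight the i-th best score by 1 / 2**i.\n    ranked = sorted(scores, reverse=True)\n    return sum(p / 2 ** i for i, p in enumerate(ranked))\n\nscores = [0.5, 0.6, 0.7]           # one score per supporting passage\nprint(merge_max(scores))           # 0.7\nprint(merge_sum(scores))           # 1.8 (0.5 + 0.6 + 0.7, up to floating point)\nprint(merge_decaying_sum(scores))  # 0.7 + 0.6/2 + 0.5/4 = 1.125 (up to floating point)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Logical Form Answer Candidate Scorer (LFACS):",

"sec_num": "4."

},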
|
{ |
|
"text": "A key limitation of our earlier work is that the passage scorers capture limited complementary information that the passages have to offer. For example, in Figure 2 , a passage scoring component may assign scores s 1.1 , s 1.2 to passages P1.1 and P1.2 respectively. A merger strategy that takes maximum across passages will choose M AX (s 1.1 , s 1.2 ) as the optimal supporting passage. However, since these passages have complementary information to offer, it would be better to somehow aggregate this information. This is exactly where our multi-dimensional merging features come into the picture.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 164, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "As described in earlier publications (Gondek et al., 2012) , for each of our features, we have two other derived features: a feature for whether that feature is missing and a standardized version of the Feature name", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 58, |
|
"text": "(Gondek et al., 2012)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logical Form Answer Candidate Scorer (LFACS):", |
|
"sec_num": "4." |
|
}, |
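
{

"text": "A minimal sketch of the two derived features for a single base feature (ours; the helper name is hypothetical) is shown below: it marks missing values with a separate indicator and standardizes within the candidate answers of one question, not across the whole test set.\n\nimport numpy as np\n\ndef derived_features(values):\n    # values: one base feature value per candidate answer of a SINGLE question;\n    # None marks a candidate for which the feature did not apply.\n    base = np.array([0.0 if v is None else v for v in values])\n    missing = np.array([1.0 if v is None else 0.0 for v in values])\n    mean, std = base.mean(), base.std(ddof=1)\n    # Standardize per question so the learner sees how far each candidate\n    # deviates from a typical value for this question.\n    standardized = (base - mean) / std if std > 0 else np.zeros_like(base)\n    return base, missing, standardized\n\n# Four candidate answers for one question; the third had no applicable evidence.\nprint(derived_features([0.7, 0.2, None, 0.4]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Logical Form Answer Candidate Scorer (LFACS):",

"sec_num": "4."

},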
|
{ |
|
"text": "In terms of Table 2 MDM-TextualAlignmentsum-then-mean For each question term, compute the sum of the Textual Alignment scores across all passages, and then compute the mean of the sums", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 54, |
|
"text": "Table 2 MDM-TextualAlignmentsum-then-mean", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f (M ) = [(x 1 + x 7 ) + (x 2 + x 8 ) + (x 3 + x 9 ) + (x 4 + x 10 )+(x 5 + x 11 )+ (x 6 + x 1 2)]/6 MDM- SkipBigram- transpose-sum- then-mean", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each passage, compute the sum of the Skip-Bigram scores across all question terms, and then compute the mean of the sums", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f (M ) = [(x 1 + x 2 + . . . + x 6 ) + (x 7 + x 8 + . . . + x 12 )]/2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explanation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each question term, compute the maximum of the LFACS scores across all passages, and then compute the mean of the maxima", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDM-LFACSmax-then-sum", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f (M ) = ma x(x 1 , x 2 , . . . , x 6 ) + ma x(x 7 , x 8 , . . . , x 12 ) MDM- SkipBigramScore- transpose- sum-then- nonZeroColumns", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDM-LFACSmax-then-sum", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each passage, compute the sum of the Skip-Bigram scores across all question terms, and then compute the number of sums that are non-zero feature. When the value of a feature is missing, we assert a value of 0 for the feature and a value of 1 for the corresponding derived missing feature; this allows the learner to distinguish between cases where the feature actually has 0 value versus cases where it simply did not apply at all. The standardized version of a feature is computed by subtracting the mean value of that feature and dividing by the standard deviation for that feature. Both mean and standard deviation are computed across all answers to a single question, not across all answers to all questions in the test set. The purpose of the standardized feature is to encode how much the base feature differs from a typical value of that feature for a single question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDM-LFACSmax-then-sum", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Set cnt = 0. If (x 1 + x 2 + . . . + x 6 ) > 0, cnt = cnt + 1. If (x 7 + x 8 + . . . + x 12 ) > 0, cnt = cnt + 1. F (M ) = cnt.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MDM-LFACSmax-then-sum", |
|
"sec_num": null |
|
}, |
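
{

"text": "On the 2 x 6 example matrix for African Elephant (rows P1.1 and P1.2 of Figure 2, entries x_1 through x_12), the four example features above reduce to simple row and column reductions; the following sketch (ours) reproduces the formulae as printed:\n\nimport numpy as np\n\nM = np.arange(1.0, 13.0).reshape(2, 6)  # stand-in values for x_1 ... x_12\n\nsum_then_mean = M.sum(axis=0).mean()            # [(x1+x7) + ... + (x6+x12)] / 6\ntranspose_sum_then_mean = M.sum(axis=1).mean()  # [(x1+..+x6) + (x7+..+x12)] / 2\nmax_then_sum = M.max(axis=1).sum()              # max(x1..x6) + max(x7..x12)\nnon_zero_rows = np.count_nonzero(M.sum(axis=1)) # number of non-zero row sums (cnt)\n\nprint(sum_then_mean, transpose_sum_then_mean, max_then_sum, non_zero_rows)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "MDM-LFACSmax-then-sum",

"sec_num": null

},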
|
{ |
|
"text": "In Table 3 , we present examples of some top scoring (in terms of correlation with the gold class) MDM features. For a passage scoring feature X , we produce the following MDM features: MDM-X -sum-then-mean (av g( s)), MDM-X -transpose-sum-then-mean (av g( s \u2032 )), MDM-X -sum-then-max (max( s)) etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "MDM-LFACSmax-then-sum", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To demonstrate the generality of our approach, we experimented with two data sets, an open-domain question set and one focused on the medical domain. We briefly describe these data sets in this section. Our first open-domain test set is a randomly selected set of 3,505 Jeopardy! questions. Jeopardy! questions span a large number of domains, including arts and entertainment, history, geography, and science. These questions are also generally more complex, incorporating multiple loosely related facts about the correct answers, particularly as compared with typical questions from the TREC QA track. The last characteristic makes Jeopardy! questions an excellent test set for our MDM feature merging framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our second test set is a collection of 905 Doctor's Dilemma questions. Doctor's Dilemma, also Table 4 : Data distribution for our data-sets. #Question refers to number of questions. #Positive refers to number of positive instances i.e. correct answers to questions, #Negative refers to number of negative instances and #Average cand. per Q refers to the average number of candidates considered for a particular question. Note, this is simply total number of positive and negative examples divided by the number of questions in the data-set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 94, |
|
"end": 101, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "known as Medical Jeopardy, is a competition organized by the American College of Physicians for medical interns and residents and held each year at the Internal Medicine meeting. The format of these questions is modeled after Jeopardy!, while their content is focused solely on topics related to medicine. Although not as linguistically complex as Jeopardy! questions, Doctor's Dilemma questions generally also consists of multiple facts about the correct answer, making it suitable as a test set for MDM features. Following are some examples from the Doctor's Dilemma domain:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "1. The syndrome characterized by joint pain, abdominal pain, palpable purpura, and a nephritic sediment. Answer: Henoch-Schonlein Purpura.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "2. Familial adenomatous polyposis is caused by mutations of this gene. Answer: APC Gene.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "3. The syndrome characterized by narrowing of the extra-hepatic bile duct from mechanical compression by a gallstone impacted in the cystic duct. Answer: Mirizzi's Syndrome.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use a supervised learning paradigm, with features extracted as described in previous sections. We use logistic regression classifier for training and testing. We report results on a held-out test set for both data-sets. The distribution of training set for the two data-sets are in Table 4 . We test on 3,505 Jeopardy! questions and 905 DD questions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We present three types of analyses to show the usefulness of our features. First, we present the correlation of our features with the gold class (for the training set only) i.e. correctness of a candidate answer. Second, we present a component level analysis, where we add our features to a baseline QA system and show improvement. Third, we present results on the end-to-end Watson system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A standard way to judge the goodness of features is to look at the features' Pearson's r correlation with the gold class (Hall, 2000) . The Pearson's r correlation coefficient between feature X and gold standard Y is given by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 133, |
|
"text": "(Hall, 2000)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correlation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "r = n i=1 (X i \u2212X )(Y i \u2212\u0232 ) n i=1 (X i \u2212X ) 2 n i=1 (Y i \u2212\u0232 ) 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correlation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "whereX and\u0232 are the arithmetic mean of feature values and gold class values respectively. We refer to the degree of correlation between the feature and the gold class as the \"informativeness\" of the feature. Naturally, we would like to keep features that have high informativeness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Correlation", |
|
"sec_num": "5.1" |
|
}, |
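
{

"text": "As a quick sanity check (a sketch; the values here are made up), np.corrcoef reproduces the same Pearson's r that the formula above defines:\n\nimport numpy as np\n\nfeature = np.array([0.9, 0.1, 0.4, 0.8, 0.0])  # feature values, one per candidate answer\ngold = np.array([1.0, 0.0, 0.0, 1.0, 0.0])     # gold class (1 = correct answer)\n\nr = np.corrcoef(feature, gold)[0, 1]  # Pearson's r between feature and gold class\nprint(100 * r)                        # informativeness, reported as % correlation",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Correlation",

"sec_num": "5.1"

},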
|
{ |
|
"text": "Figure 5: Inform analysis comparison of MDM features with the existing features in the system trained on Jeopardy! data. X-axis is the feature index (in no specific order) and Y-axis is the % correlation of features with the gold class. Figure 5 presents the informativeness of existing features (red squared dots) and MDM features (blue diamond dots) for the Jeopardy! data-set. In figure 5 , the x-axis is the feature index (existing features indexed from 1 to 535 and new features indexed from 1 to 110) and the y-axis is the informativeness of the features. For example, the highest informativeness of existing features (square red dot) is 30% (100 \u2022 r), while the highest informativeness of MDM features is 43.2%. Many of the MDM features have higher informativeness than the most correlated feature in the existing system.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 245, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 383, |
|
"end": 391, |
|
"text": "figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correlation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Similar is the case with the medical domain data. Figure 6 presents the informativeness of existing features (red squared dots) and MDM features (blue diamond dots) for the Doctor's Dilemma data-set. The highest informativeness of MDM features is 21.5%, which is comparable to the three existing features with highest informativeness (between 20% to 21%). However, as the graph shows, the vast majority of MDM features have substantially higher informativeness than the original features. the Jeopardy! domain, many of the MDM features are more correlated with answer correctness than most of the original features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Correlation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "As described in section 3, we add new features in the final merger stage of the system. Our features are calculated for each of the four passage scorers described in section 4. In this section, we evaluate the impact of these MDM features when only a single passage scoring component is employed in the system. To do so, we create a component level baseline for each of our four passage scorers as follows: on top of the Watson answer-scoring baseline configuration (Ferrucci et al., 2010) , which includes all of the standard question analysis, search, and candidate generation, but only one answer scorer (which checks answer types using a named entity detector (Murdock et al., 2012b) ) and a simplified configuration for merging and ranking answers. We add each of our existing Passage Figure 6 : Inform analysis comparison of MDM features with the existing features in the system trained on Doctor's Dilemma data. X-axis is the feature index (in no specific order) and Y-axis is the % correlation of features with the gold class.", |
|
"cite_spans": [ |
|
{ |
|
"start": 466, |
|
"end": 489, |
|
"text": "(Ferrucci et al., 2010)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 687, |
|
"text": "(Murdock et al., 2012b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 790, |
|
"end": 798, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Term Match, Skip Bigram, Textual Alignment, and LFACS passage scoring, to create four baseline systems. We then compare each baseline to the system with our MDM features for the corresponding passage scorer and show a significant gain in Precision@70% and accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We often consider Precision@70% as a numerical measure that combines the ability to correctly answer questions and the ability to measure confidence; this metric corresponds to the precision when the system answers 70% of the questions of which it is most confident. Table 6 : End-to-End comparison for medical domain data, Doctor's Dilemma. Baseline refers to the configuration with all the current features in the system. With MDM features refers to the configuration when we add all our MDM features to the existing feature set. This difference in performance is statistically significant with p < 0.05, using McNemar's significance testing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 274, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "McNemar's significance test, these are statistically significant improvements over the baseline at p < 0.05. As is clear from the results, for each of the four passage scorers, adding MDM features that capture the distribution of the passage scores across multiple passages improves the performance, in terms of both Precision@70% and % accuracy, by a significant amount.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For the Jeopardy! data-set, for the LFACS passage scorer, Precision@70% improves from 64.9% to 71.3% and % Accuracy improves from 52.2% to 57.3%. Both these improvements are statistically significant at p < 0.05, using McNemar's significance testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Based on these experimental results, we conclude that addition of MDM features for passage scorers significantly improves the performance of our QA system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Component level analysis", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this section, we present results for running the full Watson system with and without MDM features. Table 6 shows the Precision@70% and % accuracy performance on the Doctor's Dilemma test set. The results show that by adding MDM features to existing system, we are able to get a statistically significantly better performance than the baseline system: Precision@70% improves from 37.2 to 40.2 and % accuracy improves from 29.2% to 31.3%.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 109, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "End-to-End Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Question answering has had a long history (Simmons, 1970) and has seen considerable advancement over the past decade (Maybury, 2004; Strzalkowski and Harabagiu, 2006) . However, to the best of our knowledge, there is no general purpose framework integrated into a QA system that is capable of aggregating information across multiple pieces of evidence, each analyzed using different analytics (features), and comparing this with coverage of terms/facts in the input question.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 57, |
|
"text": "(Simmons, 1970)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 117, |
|
"end": 132, |
|
"text": "(Maybury, 2004;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 166, |
|
"text": "Strzalkowski and Harabagiu, 2006)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature Survey", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A technique that is complementary to ours is corpus expansion (Schlaefer et al., 2011) , in which corpus documents are expanded to include topically related facts from an external resource (e.g. Web). Sometimes in this process, pseudo documents are created which contain aggregate information about a particular entity. This approach helps standard document search by providing better document-level evidence/scores for the input search terms. The system is more likely to find a single document that addresses all of the parts of the question in a corpus after it has been expanded. However, passage scoring still encounters the same underlying problem even with an expanded corpus: in some cases, there will not be any single passage that addresses all of the requirements of the question.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 86, |
|
"text": "(Schlaefer et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature Survey", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The second related approach is question decomposition (Kalyanpur et al., 2012; Felshin, 2005) , which aims at decomposing the question into different facts that need to be independently or sequentially solved in order to arrive at the correct answer. However, question decomposition does not deal with the issue of combining multiple pieces of evidence (possibly assessed using different analytics) for the same fact within a decomposed question (which our approach does). In addition, the process of decomposing a question into multiple subquestions is an extremely challenging linguistic one, and is very sensitive to how questions are phrased; a set of rules that are effective at formulating subquestions from Jeopardy! clues may not be as effective for other types of questions. Multi-dimensional merging also requires that the question be divided up, but it does not require that the parts of the question form coherent subquestions, since it is performed after all of the linguistic analysis and comparison to evidence. In our implementation of multi-dimensional merging, we simply divide up the question into single terms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 78, |
|
"text": "(Kalyanpur et al., 2012;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 93, |
|
"text": "Felshin, 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature Survey", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We consider both corpus expansion and question decomposition as complementary to our approach. Both approaches are included in our baseline Jeopardy! system, and corpus expansion is included in our baseline medical system. The fact that our results show postive impact on effective question answering shows that multi-dimensional merging can add value to a system that already uses both corpus expansion and question decomposition techniques.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literature Survey", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We introduced a general framework for aggregating evidence from different passages retrieved for a candidate answer. Moreover, we introduced a novel set of features, multi-dimensional feature merger or MDM features, that fit this framework and significantly improve the performance of the current state-of-the-art QA system, Watson. However, our framework is general and not restricted to Watson. It may be employed in any QA system that captures how well retrieved passages match the question under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and perspectives", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we only considered merging evidence across passages and question terms. However, this may be easily extended to merging evidence across passage scorers. There might be value in considering how different passage scorers match supporting passages with candidate answers. Using our framework, all that is required is adding a new dimension: depth to the two-dimensional matrix M , thus giving rise to a 3 \u2212 D matrix, say M 3D. Each two dimensional matrix, M in M 3D belongs one passage scorer. Therefore, depth of M 3D is the number of passage scorers used to match supporting passages with the question. In the future, we will explore decomposing and thus deriving features from this 3 \u2212 D matrix, possibly using Tensor algebra (Kolda and Bader, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 764, |
|
"text": "(Kolda and Bader, 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and perspectives", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.jeopardy.com; Jeopardy! is a registered trademark of Jeopardy! Productions, Inc. 2 http://www.acponline.org/residents_fellows/competitions/doctors_dilemma", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "modified for readability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Finding needles in the haystack: Search and candidate generation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Carmel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Sheinwald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal Research and Developement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chu-Carroll, J., Fan, J., Boguraev, B. K., Carmel, D., Sheinwald, D., and Welty, C. A. (2012). Finding needles in the haystack: Search and candidate generation. IBM Journal Research and Developement, 56.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Syntactic and semantic decomposition strategies for question answering from multiple resources", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. (2005). Syntactic and semantic decomposition strategies for question answering from multiple resources. AAAI.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Building watson: An overview of the deepqa project", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ferrucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Gondek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Prager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Schlaefer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "31", |
|
"issue": "", |
|
"pages": "59--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ferrucci, D. A., Brown, E. W., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A., Lally, A., Murdock, J. W., Nyberg, E., Prager, J. M., Schlaefer, N., and Welty, C. A. (2010). Building watson: An overview of the deepqa project. AI Magazine, 31:59-79.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A framework for merging and ranking of answers in deepqa", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Gondek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Duboue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Qiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal Research and Developement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gondek, D. C., Lally, A., Kalyanpur, A., Murdock, J. W., Duboue, P. A., Zhang, L., Pan, Y., Qiu, Z. M., and Welty, C. A. (2012). A framework for merging and ranking of answers in deepqa. IBM Journal Research and Developement, 56.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Correlation-based feature selection for discrete and numeric class machine learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "17th International Conference of Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hall, M. A. (2000). Correlation-based feature selection for discrete and numeric class machine learning. 17th International Conference of Machine Learning (ICML), pages 359-366.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Fact-based question decomposition in deepqa", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Carroll", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal Research and Developement", |
|
"volume": "56", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalyanpur, A., Patwardhan, S., Boguraev, B. K., Lally, A., and Chu-Carroll, J. (2012). Fact-based question decomposition in deepqa. IBM Journal Research and Developement, 56:13:1-13:11.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Question analysis: How watson reads a clue", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Prager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mccord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Fodor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Carroll", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal of Research and Development", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lally, A., Prager, J. M., McCord, M. C., Boguraev, B. K., Patwardhan, S., Fan, J., Fodor, P., and Chu-Carroll, J. (2012). Question analysis: How watson reads a clue. IBM Journal of Research and Development, 56.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "New Directions in Question-Answering", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Maybury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maybury, M. T. (2004). New Directions in Question-Answering. Melno Park CA: American Association for Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Textual evidence gathering and analysis", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Lally", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Shima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Boguraev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal Research and Developement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Murdock, J. W., Fan, J., Lally, A., Shima, H., and Boguraev, B. K. (2012a). Textual evidence gathering and analysis. IBM Journal Research and Developement, 56.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Typing candidate answers using type coercion", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Murdock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kalyanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Welty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ferrucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Gondek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Kanayama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "IBM Journal Research and Developement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Murdock, J. W., Kalyanpur, A., Welty, C. A., Fan, J., Ferrucci, D. A., Gondek, D. C., Zhang, L., and Kanayama, H. (2012b). Typing candidate answers using type coercion. IBM Journal Research and Developement, 56.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Statistical source expansion for question answering", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Schlaefer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Chu-Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Nyberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Zadrozny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ferrucci", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th ACM international conference on Information and knowledge management, CIKM '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "345--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Schlaefer, N., Chu-Carroll, J., Nyberg, E., Fan, J., Zadrozny, W., and Ferrucci, D. (2011). Statistical source expansion for question answering. In Proceedings of the 20th ACM international conference on Information and knowledge management, CIKM '11, pages 345-354, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Natural language question-answering systems: 1969", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Simmons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1970, |
|
"venue": "Commun. ACM", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "15--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Simmons, R. F. (1970). Natural language question-answering systems: 1969. Commun. ACM, 13:15-30.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Advances in Open-Domain Question-Answering", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Strzalkowski, T. and Harabagiu, S. (2006). Advances in Open-Domain Question-Answering. Berlin Germany: Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Building a question answering test collection. SIGIR", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Voorhees", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tice", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "200--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Voorhees, E. and Tice, D. (2000). Building a question answering test collection. SIGIR, pages 200-207.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Typical question answering scenario. Q refers to question. CA are candidate answers for question Q, and p refers to passages supporting candidate answers.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "(a) P1.1: The African Elephant is a very large land animal. (b) P1.2: African elephants have large ears. 2. Candidate answer 2: Hippo (a) P2.1: A hippo is a large land animal. (b) P2.2: Hippos have relatively small ears.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Architecture of Watson, state-of-the-art DeepQA system (taken from(Ferrucci et al., 2010)).", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "IBM undertook the challenge to build a question-answering system named Watson that is able to answer open domain questions, such as those posed in a U.S. quiz show Jeopardy!. An overview of the architecture of Watson is illustrated in", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Passage match scores for question and passages in", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Examples of MDM features. First column is the feature name, column 2 a natural language description of the feature and the third column is the exact mathematical formula in reference toTable 2for passages P1.1 and P1.2 belonging to the candidate answer 1.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">Component Level Baseline</td><td colspan=\"2\">With MDM features</td></tr><tr><td>Passage Scorer</td><td colspan=\"4\">Precision@70% %Accuracy Precision@70% %Accuracy</td></tr><tr><td>Passage Term Match</td><td>24.9</td><td>20.2</td><td>29.2</td><td>23.4</td></tr><tr><td>Skip Bigram</td><td>26.8</td><td>21.5</td><td>28.7</td><td>23.3</td></tr><tr><td>Textual Alignment</td><td>22.9</td><td>18.8</td><td>25.7</td><td>21.1</td></tr><tr><td>LFACS</td><td>25.7</td><td>20.3</td><td>28.5</td><td>22.4</td></tr></table>", |
|
"text": "present results for our component level analysis for Doctor's Dilemma questions. A component level baseline for each passage scorer was computed as described above. System performance improves across the board after adding MDM features for a passage scorer. Using", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Baseline</td><td/><td colspan=\"2\">With MDM features</td></tr><tr><td>Data-set</td><td colspan=\"4\">Precision@70% %Accuracy Precision@70% %Accuracy</td></tr><tr><td>Doctor's Dilemma</td><td>37.2</td><td>29.2</td><td>40.2</td><td>31.3</td></tr></table>", |
|
"text": "Component level comparison for Doctor's Dilemma data-set for each of the four passage scorers. Each component level baseline is the answer-scoring baseline plus features for one of the passage scorers. All the numbers after adding MDM features for a passage scorer are significantly better than the baseline by p < 0.05, using McNemar's significance testing.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |