{
"paper_id": "R09-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:00:08.376154Z"
},
"title": "Exploiting the use of Prior Probabilities for Passage Retrieval in Question Answering",
"authors": [
{
"first": "Surya",
"middle": [],
"last": "Ganesh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Document Retrieval assumes that a document is independent of its relevance, and non-relevance. Previous works showed that the same assumption is being considered for passage retrieval in the context of Question Answering. In this paper, we relax this assumption and describe a method for estimating the prior of a passage being relevant, and non-relevant to a question. These prior probabilities are used in the process of ranking passages. We also describe a trivial method for identifying relevant and nonrelevant text to a question using the Web and AQUAINT corpus as information sources. An empirical evaluation on TREC 2006 Question Answering test set showed that in the context of Question Answering prior probabilities are necessary in ranking the passages.",
"pdf_parse": {
"paper_id": "R09-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Document Retrieval assumes that a document is independent of its relevance, and non-relevance. Previous works showed that the same assumption is being considered for passage retrieval in the context of Question Answering. In this paper, we relax this assumption and describe a method for estimating the prior of a passage being relevant, and non-relevant to a question. These prior probabilities are used in the process of ranking passages. We also describe a trivial method for identifying relevant and nonrelevant text to a question using the Web and AQUAINT corpus as information sources. An empirical evaluation on TREC 2006 Question Answering test set showed that in the context of Question Answering prior probabilities are necessary in ranking the passages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Passage Retrieval is an intermediate step between document retrieval and answer extraction in a typical Question Answering (QA) system. It reduces the search space for finding an answer from a massive collection of documents to a fixed number of passages (say top 100). Unless the answer is present in one of the retrieved passages, QA systems will not find the answer to a given question. So, passage retrieval is considered as one of the most important components of a QA system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Probability Ranking Principle [13] states that a retrieval system should rank the documents in decreasing order of their probability of relevance to the query. According to the Language Modeling [11] decomposition [8] of this ranking principle, the documents should be ranked using the following equation: ",
"cite_spans": [
{
"start": 34,
"end": 38,
"text": "[13]",
"ref_id": "BIBREF12"
},
{
"start": 199,
"end": 203,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 218,
"end": 221,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here the first term p(Q|D, R) measures the likelihood of the query given a document that is relevant and Language Modeling is being used to estimate this value. The second term measures the prior probabilities of document being relevant, and non relevant. But, document retrieval assumes that a document is independent of its relevance, and non-relevance. So, documents are just ranked based on Language Modeling i.e., the probability of a query being generated by a document. Previous works [9] [10] showed that the same approach is being used even for passage retrieval in the context of QA. Previously Jagadeesh et al. [5] used prior probabilities in Query-Based Multi-Document Summarization task. They defined an entropy based measure called Information Measure to capture the prior of a sentence. This information measure was computed using external information sources like the Web and Wikipedia. Their experimental results showed that prior probabilities are necessary for ranking sentences in the summarization task. We use a similar approach to exploit the use of prior probabilities for passage retrieval in QA.",
"cite_spans": [
{
"start": 492,
"end": 495,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 496,
"end": 500,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 622,
"end": 625,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we describe a mutual information measure called KullbackLeibler divergence (KL divergence) [3] to compute the prior of a passage. We also describe a trivial method for identifying relevant and non-relevant text to a question using the Web and AQUAINT corpus (used in TREC 1 QA evaluations) as information sources. The rest of this paper is organized as follows: Section 2 describes the estimation of prior probabilities of passages; Section 3 describes the identification of relevant and non-relevant text to a question; Section 4 describes the experiments conducted and their results and Section 5 concludes the paper.",
"cite_spans": [
{
"start": 105,
"end": 108,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we assume that relevant (R) and nonrelevant (N ) text is identified for a given question. In Information Retrieval, KullbackLeibler divergence is often used to measure the distance between two language models [2] [14] . We use this mutual information measure to estimate prior probabilities of passages. Let U A denotes the unigram language model of passage A and U R , U N denote the unigram language models of relevant and non-relevant text respectively. KL divergence between U A , U R and U A , U N are computed as follows:",
"cite_spans": [
{
"start": 229,
"end": 233,
"text": "[14]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "D(U A ||U R ) = v\u2208V U A (v) log U A (v) U R (v) D(U A ||U N ) = v\u2208V U A (v) log U A (v) U N (v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "Where v is a term in the vocabulary V and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "U A (v), U R (v), U N (v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "are the unigram probabilities of v in the passage, relevant and non-relevant text respectively. With the increase in the divergence between passage and relevant text, the probability of passage being relevant decreases. So, the prior probabilities are estimated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "p(A|R) = 1 1 + D(U A ||U R ) p(A|N ) = 1 1 + D(U A ||U N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "As KL divergence is always non-negative, both p(A|R) and p(A|N ) always lie in the range [0, 1] . This satisfies the basic law of probability i.e., the probability of an event should always lie in the range [0, 1]. p(A|R) = 1 when U A = U R , as the divergence of two equivalent distributions is zero. Similarly, p(A|N ) = 1 when U A = U N . Substituting the above estimates for prior probabilities in equation 1 gives the final ranking ranking function for passage retrieval.",
"cite_spans": [
{
"start": 89,
"end": 92,
"text": "[0,",
"ref_id": null
},
{
"start": 93,
"end": 95,
"text": "1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "log rank(A) = log p(Q|A, R) \u2212 log 1 + D(U A ||U R ) 1 + D(U A ||U N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
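The following minimal Python sketch (not part of the paper) illustrates the estimation above: it builds unigram language models for a passage and for the relevant and non-relevant text, computes the two KL divergences, and derives p(A|R) and p(A|N). Whitespace tokenization and add-one smoothing over a shared vocabulary are our own simplifying assumptions; the paper does not specify these details.

```python
import math
from collections import Counter

def unigram_model(text, vocab):
    """Unigram probabilities over a shared vocabulary with add-one smoothing
    (a simplifying assumption; the paper does not specify its smoothing)."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) + len(vocab)
    return {v: (counts[v] + 1) / total for v in vocab}

def kl_divergence(p, q):
    """D(P || Q) = sum over v of P(v) * log(P(v) / Q(v))."""
    return sum(p[v] * math.log(p[v] / q[v]) for v in p)

def passage_priors(passage, relevant_text, nonrelevant_text):
    """Return (p(A|R), p(A|N)) for one passage, following the formulas above."""
    vocab = set((passage + " " + relevant_text + " " + nonrelevant_text).lower().split())
    u_a = unigram_model(passage, vocab)
    u_r = unigram_model(relevant_text, vocab)
    u_n = unigram_model(nonrelevant_text, vocab)
    p_rel = 1.0 / (1.0 + kl_divergence(u_a, u_r))      # p(A|R)
    p_nonrel = 1.0 / (1.0 + kl_divergence(u_a, u_n))   # p(A|N)
    return p_rel, p_nonrel
```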
{
"text": "3 Identifying relevant and nonrelevant text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "In the previous section we have assumed that the relevant and non-relevant text for a given question is known. Here we will discuss a method to extract the required information based on different query formulation strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimation of prior probability",
"sec_num": "2"
},
{
"text": "Breck et al. [1] noticed a correlation between the number of times an answer appeared in the TREC corpus and the average performance of TREC systems on that particular question. This shows that, the more times an answer appears in the text collection, the easier it is to find it. As a text collection, the Web is larger in size than any research corpus by several orders of magnitude. An important implication of this size is the amount of data redundancy inherent in the Web i.e., each item of information has been stated in a variety of ways in different documents in the Web. Data redundancy in the Web indicates that the answer for a given natural language question exists in many different forms in different documents. So, our methodology for extracting relevant text relies on Web search engines. Currently, the Yahoo search engine is used to retrieve this text from the Web. Assuming that an answer is likely to be found within the vicinity of set of keywords in the question, a query composed of keywords in it is given to the search engine. For example, given the question \"Which position did Warren Moon play in professional football? \", the following query \"position warren moon play professional football\" is given to the search engine. The top N snippets/summaries provided by the search engine are extracted to form relevant text.",
"cite_spans": [
{
"start": 13,
"end": 16,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant text",
"sec_num": "3.1"
},
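As an illustration of the keyword-query formulation described above, the sketch below strips question words and stop words from a question. The stop-word list is our own and only indicative; the resulting query string would then be submitted to a Web search engine (the paper uses Yahoo, but no particular search API is assumed here).

```python
# Illustrative stop-word list (an assumption, not the paper's exact list).
STOP_WORDS = {"which", "what", "who", "when", "where", "why", "how",
              "did", "do", "does", "is", "are", "was", "were",
              "in", "the", "a", "an", "of"}

def keyword_query(question: str) -> str:
    """Build a keyword query from a natural language question."""
    tokens = question.lower().strip("?").split()
    return " ".join(t for t in tokens if t not in STOP_WORDS)

# keyword_query("Which position did Warren Moon play in professional football?")
# -> "position warren moon play professional football"
```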
{
"text": "Most of the snippets provided by the search engine consist of broken sentences. These broken sentences may miss a part of answer pattern or entire answer pattern which is originally present in them. In either case, an automatic evaluation using a set of questions and their corresponding answer patterns will fail to show the actual quality of snippets. So, we manually examined the snippets for a set of 50 randomly selected questions from TREC 2006 test set [4] . We observed that on an average about 6 snippets out of top 10 snippets provided by the search engine are relevant to the question. As the quality of snippets is considerably high, we use them as relevant text to a given question.",
"cite_spans": [
{
"start": 460,
"end": 463,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant text",
"sec_num": "3.1"
},
{
"text": "The methodology for extracting non-relevant text is independent of the size of a text collection unlike the methodology for relevant text. Here the structure of a question is used to extract the required information. An input question is parsed to get POS tags corresponding to all the terms in it. We have used the stanford parser [6] [7] to get POS tag sequence corresponding to a question. Based on POS tags, all keywords in a question are splitted into two sets: Topic and Keyword.",
"cite_spans": [
{
"start": 332,
"end": 335,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "Topic: Typically, questions ask for a specific information within a broad topic. For example, the question \"Which position did Warren Moon play in professional football?\", asks for a specific information regarding \"Warren Moon\". A topic can be a person, location, organization, event or any other entity, which are proper nouns. So, a topic set consists of all the proper nouns within a question. And, in questions where there are no proper nouns like \"Which country is the leading producer of rice?\", nouns \"rice\" and \"country\" are considered as individual topics and these terms form topic set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "Keywords: This set contains all the keywords in a question which are not members of topic set. So, for the question \"Which position did Warren Moon play in professional football?\", the constituents of this set are \"position\", \"play\", \"professional \" and \"football \".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "Using the above two sets, two distinct queries are formulated which represent their non-relevance to a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "QUERY I: It is formulated using topic set terms alone, which is based on the idea that text which covers general information regarding a topic in the question can be considered as non-relevant to it. So, for the above example question \"warren moon\" is expected to retrieve non-relevant text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "QUERY II: It is formulated using terms from both topic and keyword sets. The idea behind this query formulation is that text which covers information about a topic in the question but does not contain any of the keywords in it, can be considered as non-relevant to it. So, for the above example question, \"warren moon -position -playprofessional -football \" is expected to retrieve nonrelevant text. The negative operator (-) in the above query restricts the Information Retrieval system to retrieve only information without terms in the query that succeed '-' operator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
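The following sketch illustrates one way the QUERY I / QUERY II formulation above could be implemented. It assumes the question is already POS-tagged (the paper uses the Stanford parser); the tag sets used to pick out topics and keywords are our own heuristics, not the paper's exact rules.

```python
def build_queries(tagged_question):
    """Build (QUERY I, QUERY II) from a list of (token, Penn Treebank tag) pairs."""
    # Tags excluded from the keyword set: question words, auxiliaries,
    # determiners, prepositions, punctuation (a heuristic choice).
    stop_tags = {"WDT", "WP", "WRB", "IN", "DT", "VBD", "VBZ", "MD", "."}
    topic = [w for w, t in tagged_question if t in ("NNP", "NNPS")]
    if not topic:  # no proper nouns: fall back to common nouns as topics
        topic = [w for w, t in tagged_question if t in ("NN", "NNS")]
    keywords = [w.lower() for w, t in tagged_question
                if t not in stop_tags and w not in topic]
    query_1 = " ".join(w.lower() for w in topic)
    query_2 = query_1 + " " + " ".join("-" + w for w in keywords)
    return query_1, query_2

# Example (tags assumed):
# tagged = [("Which","WDT"), ("position","NN"), ("did","VBD"), ("Warren","NNP"),
#           ("Moon","NNP"), ("play","VB"), ("in","IN"), ("professional","JJ"),
#           ("football","NN"), ("?",".")]
# build_queries(tagged)
# -> ("warren moon", "warren moon -position -play -professional -football")
```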
{
"text": "As the methodology is independent of the size of a corpus, two text collections which include Web and AQUAINT corpus, are used to extract the required information. An empirical evaluation using TREC 2006 QA test set was performed to test the quality of text extracted by using the two queries described previously. Redundancy, a passage retrieval performance evaluation metric, is used to measure the average number of answer bearing passages found within the top N passages retrieved for each query formulation. So, here the quality of text is inversely proportional to redundancy i.e., lower the redundancy value better is the quality of text extracted. All the FACTOID questions from the test set were used to measure redundancy. Table 1 shows the average redundancy scores for the top N passages retrieved from AQUAINT corpus in the test set. QUERY I and QUERY II are the query formulations from a question as described previously and QUERY is a keyword query formulated for retrieving relevant snippets from the Web. These results show that QUERY II produces better quality of non-relevant text than QUERY I. And, compared to QUERY both QUERY I and QUERY II have significantly lower redundancy scores. A similar evaluation could not be performed on snippets retrieved from Web because of broken sentences as described in the previous section. ",
"cite_spans": [],
"ref_spans": [
{
"start": 733,
"end": 740,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": ") = (1 \u2212 \u03b1) log p(Q|A, R) \u2212\u03b1 log 1 + D(U A ||U R ) 1 + D(U A ||U N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
{
"text": "Where \u03b1 is a weighting parameter which lies between 0 and 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-relevant text",
"sec_num": "3.2"
},
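A minimal sketch of the interpolated ranking function above, assuming the query-likelihood log-score and the two KL divergences D(U_A||U_R) and D(U_A||U_N) have already been computed (for instance, as in the sketch in Section 2). The default alpha is only an illustration drawn from the best-performing range reported later (0.3 to 0.5), not a value prescribed by the paper.

```python
import math

def interpolated_score(query_loglik, kl_to_relevant, kl_to_nonrelevant, alpha=0.4):
    """Linear interpolation of the query-likelihood log-score with the prior
    ratio, following the ranking function above."""
    prior_term = math.log((1.0 + kl_to_relevant) / (1.0 + kl_to_nonrelevant))
    return (1.0 - alpha) * query_loglik - alpha * prior_term

def rerank(passages_with_scores, kl_divergences, alpha=0.4):
    """Re-rank (passage, query-likelihood log-score) pairs, given a dict
    mapping each passage to its (D(U_A||U_R), D(U_A||U_N)) pair."""
    scored = [(interpolated_score(s, *kl_divergences[p], alpha), p)
              for p, s in passages_with_scores]
    return [p for _, p in sorted(scored, reverse=True)]
```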
{
"text": "In the context of QA, coverage and redundancy [12] are the two principal measures used to measure the performance of passage retrieval. The coverage gives the proportion of questions for which a correct answer can be found within the top N passages retrieved for each question. The redundancy gives the average number of answer bearing passages found within the top N passages retrieved for each question. In our experiments we have set N as 20 i.e., the top 20 passages are used for evaluation.",
"cite_spans": [
{
"start": 46,
"end": 50,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
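To make the two metrics concrete, here is a small sketch (our own, not the paper's evaluation code) that computes lenient coverage and redundancy at rank N from per-question retrieved passages and answer-pattern regular expressions; strict scoring would additionally check the supporting document ids.

```python
import re

def coverage_and_redundancy(retrieved, answer_patterns, n=20):
    """Lenient coverage@n and redundancy@n over a question set.
    retrieved: dict mapping question id -> ranked list of passage strings.
    answer_patterns: dict mapping question id -> list of answer regexes.
    (The data layout is our own assumption.)"""
    answered = 0
    total_hits = 0
    for qid, passages in retrieved.items():
        patterns = [re.compile(p, re.IGNORECASE) for p in answer_patterns[qid]]
        hits = sum(1 for passage in passages[:n]
                   if any(p.search(passage) for p in patterns))
        answered += 1 if hits > 0 else 0
        total_hits += hits
    num_questions = len(retrieved)
    return answered / num_questions, total_hits / num_questions  # (coverage, redundancy)
```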
{
"text": "The data used to test the effectiveness of prior probabilities of passages includes: AQUAINT corpus, factoid questions from TREC 2006 QA task, and answer judgments provided by NIST for these questions. The AQUAINT corpus consists of 1,033,461 documents taken from AP newswire, the New York Times newswire and the English portion of the Xinhua News Agency newswire. The documents in this corpus contain paragraph markers which are used as passage level boundaries for our experiments. The answer judgments consist of answer patterns and document ids in which they occur. This allows the evaluation to be performed under two criteria: strict and lenient. For strict scoring, the answer pattern must occur in the passage, and the passage must be from one of the documents listed as relevant in the answer judgments. For lenient scoring, the answer pattern must occur in the passage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We used two open source retrieval engines, Lucene and Indri, to test the effect of prior probabilities on passage retrieval. Lucene supports Boolean query language and ranked retrieval using BM25. Indri is a state-of-the-art retrieval engine that combines the merits of language model and inference network. We incorporated our approach for passage retrieval as a reranking step into these retrieval engines. After Lucene or Indri retrieves a ranked set of passages for a given question, top 200 passages are re-ranked, of which top 20 passages are considered for evaluation. The scores for top 20 passages returned by respective engines act as baseline to compare the re-ranked results using our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We performed two experiments in which QUERY and QUERY II were used to extract relevant and nonrelevant text respectively. In the first experiment, we compared the re-ranked and baseline results from the two retrieval engines, and they are shown in tables 2 and 3. Only Web was used to extract relevant text but for extracting non-relevant text both AQUAINT and Web were used. So, to analyze the effect of two text collections on computing the prior of a passage, we showed results for both of them. The results listed under AQUAINT and Web show considerable improvements over the baseline and in between the two, scores are marginally higher when Web was used. In the second experiment we tested our methodology for different values of weighting parameter (\u03b1) between 0.0 and 1.0 in the ranking function. Figure 1 shows the performance of passage retrieval for differ- ent \u03b1 values under strict and lenient criteria. In all the cases, the performance of passage retrieval improves over the baseline (\u03b1 = 0.0) for \u03b1 values between 0.0 and 0.8, and from then it is below the baseline. And, the performance reaches maximum for \u03b1 values between 0.3 and 0.5 which shows that performance is biased towards query likelihood scores. ",
"cite_spans": [],
"ref_spans": [
{
"start": 805,
"end": 813,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Question Answering aims at finding exact answers to natural language questions from a large collection of documents. Within a QA system, passage retrieval reduces the search space for finding an answer from such large collection of documents to a fixed number of passages. In this paper, we have explored the use of prior probabilities of a passage being relevant, and non-relevant to a question in the process of ranking passages. We described a method for estimating these prior probabilities using KullbackLeibler divergence, and a method for extracting relevant and non-relevant text to a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our experiments on factoid questions from TREC 2006 test set showed that in the context of QA, use of prior probabilities improves the performance of passage retrieval. The experimental results also showed that performance is biased towards query likelihood scores. This could be because the information used for computing prior of a passage is not strictly relevant or non-relevant. In the future, we aim to further enhance the performance of our passage retrieval methodology by exploring different text classification algorithms to derive better prior probability estimates, and different techniques to extract relevant and non-relevant information to a question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Text REtrieval Conference, http://trec.nist.gov",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Looking under the hood : Tools for diagnosing your question answering engine",
"authors": [
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Light",
"suffix": ""
},
{
"first": "G",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rooth",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Thelen",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Breck, M. Light, G. S. Mann, E. Riloff, B. Brown, P. Anand, M. Rooth, and M. Thelen. Looking under the hood : Tools for diagnosing your question answer- ing engine. CoRR, cs.CL/0107006, 2001.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An information-theoretic approach to automatic query expansion",
"authors": [
{
"first": "C",
"middle": [],
"last": "Carpineto",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Bigi",
"suffix": ""
}
],
"year": 2001,
"venue": "ACM Trans. Inf. Syst",
"volume": "19",
"issue": "1",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Carpineto, R. de Mori, G. Romano, and B. Bigi. An information-theoretic approach to automatic query expansion. ACM Trans. Inf. Syst., 19(1):1-27, 2001.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Elements of information theory",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Cover",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Thomas",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. M. Cover and J. A. Thomas. Elements of informa- tion theory. Wiley-Interscience, New York, NY, USA, 1991.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Overview of the trec 2006 question answering track 99",
"authors": [
{
"first": "H",
"middle": [
"T"
],
"last": "Dang",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Lin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kelly",
"suffix": ""
}
],
"year": 2006,
"venue": "TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. T. Dang, J. J. Lin, and D. Kelly. Overview of the trec 2006 question answering track 99. In TREC, 2006.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Capturing sentence prior for query-based multi-document summarization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jagarlamudi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pingali",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Jagarlamudi, P. Pingali, and V. Varma. Capturing sentence prior for query-based multi-document sum- marization. In D. Evans, S. Furui, and C. Soulupuy, editors, RIAO. CID, 2007.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "ACL '03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. Accurate unlexicalized parsing. In ACL '03: Proceedings of the 41st An- nual Meeting on Association for Computational Lin- guistics, pages 423-430, Morristown, NJ, USA, 2003. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fast exact inference with a factored model for natural language parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems 15 (NIPS)",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. Fast exact inference with a factored model for natural language parsing. In In Advances in Neural Information Processing Systems 15 (NIPS), pages 3-10. MIT Press, 2003.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Probabilistic Relevance Models Based on Document and Query Generation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2003,
"venue": "Kluwer International Series on Information Retrieval",
"volume": "13",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty and C. Zhai. Probabilistic Relevance Mod- els Based on Document and Query Generation, vol- ume 13. Kluwer International Series on Information Retrieval, 2003.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Passage retrieval based on language models",
"authors": [
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2002,
"venue": "CIKM '02: Proceedings of the eleventh international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "375--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Liu and W. B. Croft. Passage retrieval based on language models. In CIKM '02: Proceedings of the eleventh international conference on Information and knowledge management, pages 375-382, New York, NY, USA, 2002. ACM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A translation model for sentence retrieval",
"authors": [
{
"first": "V",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "684--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Murdock and W. B. Croft. A translation model for sentence retrieval. In HLT '05: Proceedings of the conference on Human Language Technology and Em- pirical Methods in Natural Language Processing, pages 684-691, Morristown, NJ, USA, 2005. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A language modeling approach to information retrieval",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Ponte",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. M. Ponte. A language modeling approach to infor- mation retrieval. Master's thesis, Amherst, MA, USA, 1998.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating passage retrieval approaches for question answering",
"authors": [
{
"first": "I",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of 26th European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "72--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Roberts and R. Gaizauskas. Evaluating passage re- trieval approaches for question answering. In In Pro- ceedings of 26th European Conference on Information Retrieval, pages 72-84, 2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The probability ranking principle in ir",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "281--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. E. Robertson. The probability ranking principle in ir. pages 281-286, 1997.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Model-based feedback in the language modeling approach to information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "CIKM '01: Proceedings of the tenth international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "403--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information re- trieval. In CIKM '01: Proceedings of the tenth in- ternational conference on Information and knowledge management, pages 403-410, New York, NY, USA, 2001. ACM.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "log rank(D) = log p(Q|D, R) + log p(D|R) p(D|N )",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Performance of passage retrieval for different \u03b1 values from 0.0 to 1.0 under strict and lenient criteria. In all the cases '(-*-)' and '(\u2022 \u2022 \u2022 *\u2022 \u2022 \u2022 )' denotes re-ranked scores from Indri and Lucene.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Redundancy scores for the passages retrieved from AQUAINT corpus using different queries As the extracted relevant and non-relevant text is not truly relevant and non-relevant to a question, a linear interpolation of Language Modeling score and prior probabilities are used to rank passages as shown in the equation below.",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"text": "Lucene evaluation results under strict and lenient criteria",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Indri evaluation results under strict and lenient criteria",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}