{
"paper_id": "A97-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:58.715616Z"
},
"title": "Fast Statistical Parsing of Noun Phrases for Document Indexing",
"authors": [
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": "",
"affiliation": {
"laboratory": "Laboratory for Computational Linguistics",
"institution": "Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Information Retrieval (IR) is an important application area of Natural Language Processing (NLP) where one encounters the genuine challenge of processing large quantities of unrestricted natural language text. While much effort has been made to apply NLP techniques to IR, very few NLP techniques have been evaluated on a document collection larger than several megabytes. Many NLP techniques are simply not efficient enough, and not robust enough, to handle a large amount of text. This paper proposes a new probabilistic model for noun phrase parsing, and reports on the application of such a parsing technique to enhance document indexing. The effectiveness of using syntactic phrases provided by the parser to supplement single words for indexing is evaluated with a 250 megabytes document collection. The experiment's results show that supplementing single words with syntactic phrases for indexing consistently and significantly improves retrieval performance.",
"pdf_parse": {
"paper_id": "A97-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "Information Retrieval (IR) is an important application area of Natural Language Processing (NLP) where one encounters the genuine challenge of processing large quantities of unrestricted natural language text. While much effort has been made to apply NLP techniques to IR, very few NLP techniques have been evaluated on a document collection larger than several megabytes. Many NLP techniques are simply not efficient enough, and not robust enough, to handle a large amount of text. This paper proposes a new probabilistic model for noun phrase parsing, and reports on the application of such a parsing technique to enhance document indexing. The effectiveness of using syntactic phrases provided by the parser to supplement single words for indexing is evaluated with a 250 megabytes document collection. The experiment's results show that supplementing single words with syntactic phrases for indexing consistently and significantly improves retrieval performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information Retrieval (IR) is an increasingly important application area of Natural Language Processing (NLP). An IR task can be described as to find, from a given document collection, a subset of documents whose content is relevant to the information need of a user as expressed by a query. As the documents and query are often natural language texts, an IR task can usually be regarded as a special NLP task, where the document text and the query text need to be processed in order to judge the relevancy. A general strategy followed by most IR systems is to transform documents and the query into certain level of representation. A query representation can then be compared with a document representation to decide if the document is relevant to the query. In practice, the level of representation in an IR system is quite \"shallow\" --often merely a set of word-like strings, or indexing terms. The process to extract indexing terms from each document in the collection is called indexing. A query is often subject to similax processing, and the relevancy is judged based on the matching of query terms and document terms. In most systems, weights are assigned to terms to indicate how well they can be used to discriminate relevant documents from irrelevant ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The challenge in applying NLP to IR is to deal with a large amount of unrestricted natural language text. The NLP techniques used must be very efficient and robust, since the amount of text in the databases accessed is typically measured in gigabytes. In the past, NLP techniques of different levels, including morphological, syntactic/semantic, and discourse processing, were exploited to enhance retrieval (Smeaton 92; Lewis and Spaxck Jones 96), but were rarely evaluated using collections of documents larger than several megabytes. Many NLP techniques are simply not efficient enough or are too labor-intensive to successfully handle a large size document set. However, there are some exceptions. Evans et al. used selective NLP techniques, that are especially robust and efficient, for indexing (Evans et al. 91 ). Strzalkowski reported a fast and robust parser called TTP in (Strzalkowski 92; Strzalkowski and Vauthey 92) . These NLP techniques have been successfully used to process quite large collections, as shown in a series of TREC conference reports by the CLARIT TM1 system group and the New York University (later GE/NYU) group (cf., for example, (Evans and Lefferts 95; Evans et al. 96) , and (Strzalkowski 95; Strzalkowski et al. 96)) These research efforts demonstrated the feasibility of using selective NLP to handle large collections. A special NLP track emphasizing the evaluation of NLP techniques for IR is currently held in the context of TREC (Hatman 96).",
"cite_spans": [
{
"start": 702,
"end": 719,
"text": "Evans et al. used",
"ref_id": null
},
{
"start": 801,
"end": 817,
"text": "(Evans et al. 91",
"ref_id": null
},
{
"start": 882,
"end": 899,
"text": "(Strzalkowski 92;",
"ref_id": null
},
{
"start": 900,
"end": 928,
"text": "Strzalkowski and Vauthey 92)",
"ref_id": null
},
{
"start": 1163,
"end": 1186,
"text": "(Evans and Lefferts 95;",
"ref_id": null
},
{
"start": 1187,
"end": 1203,
"text": "Evans et al. 96)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, a fast probabilistic noun phrase parser is described. The parser can be exploited to 1CLARIT is a registered trademark of CLARITECH Corporation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "automatically extract syntactic phrases from a large amount of documents for indexing. A 250-megabyte document set 2 is used to evaluate the effectiveness of indexing using the phrases extracted by the parser. The experiment's results show that using syntactic phrases to supplement single words for indexing improves the retrieval performance significantly. This is quite encouraging compared to earlier experiments on phrase indexing. The noun phrase parser provides the possibility of combining different kinds of phrases with single words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 discusses document indexing, and argues for the rationality of using syntactic phrases for indexing; Section 3 describes the fast noun phrase parser that we use to extract candidate phrases; Section 4 describes how we use a commercial IR system to perform the desired experiments; Section 5 reports and discusses the experiment results; Section 6 summarizes the conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Phrases for Document Indexing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "In most current IR systems, documents are primarily indexed by single words, sometimes supplemented by phrases obtained with statistical approaches, such as frequency counting of adjacent word pairs. However, single words are often ambiguous and not specific enough for accurate discrimination of documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "For example, only using the word \"baalS' and \"terminology\" for indexing is not enough to distinguish \"bank terminology\" from \"terminology baalS'. More specific indexing units are needed. Syntactic phrases (i.e., phrases with certain syntactic relations) are almost always more specific than single words and thus are intuitively attractive for indexing. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "if \"bank terminology\" occurs in the document, then, we can use the phrase \"bank terminology\" as an additional unit to supplement the single words \"banld' and \"terminology\" for indexing. In this way, a query with \"terminology banlZ' will match better with the document than one with \"bank terminology\", since the indexing phrase \"bank terminology\" provides extra discrimination. Despite the intuitive rationality of using phrases for indexing, syntactic phrases have been reported to show no significant improvement of retrieval performance (Lewis 91; Belkin and Croft 87; Fagan 87). Moreover Fagan (Fagan 87) found that syntactic phrases are not superior to simple statistical phrases. Lewis discussed why the syntactic phrase indexing has not worked and concluded that the problems with syntactic phrases are for the most part statistical (Lewis 91) . Indeed, many (perhaps most) syntactic phrases have very low frequency and tend to be over-weighted by the normal weighting method. However, the size of the collection used in 2the Wall Street Journal database in Tipster Disk2 (Harman 96) these early experiments is relatively small. We want to see if a much larger size of collection will make a difference. It is possible that a larger document collection might increase the frequency of most phrases, and thus alleviate the problem of low frequency.",
"cite_spans": [
{
"start": 598,
"end": 608,
"text": "(Fagan 87)",
"ref_id": null
},
{
"start": 840,
"end": 850,
"text": "(Lewis 91)",
"ref_id": null
},
{
"start": 1079,
"end": 1090,
"text": "(Harman 96)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
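To make the indexing argument above concrete, here is a minimal sketch (not from the paper; the term sets and the toy overlap score are illustrative assumptions) of how adding the syntactic phrase as an extra indexing unit separates "bank terminology" from "terminology bank".

```python
# Minimal illustration (assumed example, not the paper's system): indexing a
# document by single words only vs. single words plus a syntactic phrase term.

def index_terms(words, phrases=()):
    """Return the set of indexing terms: single words plus optional phrase units."""
    return set(words) | set(phrases)

def overlap(query_terms, doc_terms):
    """A toy matching score: number of shared indexing terms."""
    return len(query_terms & doc_terms)

doc   = index_terms(["bank", "terminology"], phrases=["bank terminology"])
q_ok  = index_terms(["bank", "terminology"], phrases=["bank terminology"])
q_bad = index_terms(["terminology", "bank"], phrases=["terminology bank"])

print(overlap(q_ok, doc))   # 3 -- the two words and the phrase all match
print(overlap(q_bad, doc))  # 2 -- only the single words match
```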
{
"text": "We only consider noun phrases and the subphrases derived from them. Specifically, we want to obtain the full modification structure of each noun phrase in the documents and query. From the viewpoint of NLP, the task is noun phrase parsing (i.e., the analysis of noun phrase structure). When the phrases are used only to supplement, not replace, the single words for indexing, some parsing errors may be tolerable. This means that the penalty for a parsing error may not be significant. The challenge, however, is to be able to parse gigabytes of text in practically feasible time and as accurately as possible. The previous work taking on this challenge includes (Evans et al. 91; Evans et Strzalkowski et al. 95) . In (Strzalkowski et al. 95) , the structure of a noun phrase is disambiguated based on certain statistical heuristics, but there seems to be no effort to assign a full structure to every noun phrase. Furthermore, manual effort is needed in constructing grammar rules. Thus, the approach in (Strzalkowski et M. 95) does not address the special need of scalability and robustness along with speed. Evans and Zhai explored a hybrid noun phrase analysis method and used a quite rich set of phrases for document indexing (Evans and Zhai 96) . The indexing method was evaluated using the Associated Press newswire 89 (AP89) database in Tipster Diskl, and a general improvement of retrieval performance over the indexing with single words and full noun phrases was reported. However, the phrase extraction system as reported in (Evans and Zhal 96) is still not fast enough to deal with document collections measured by gigabytes. 3",
"cite_spans": [
{
"start": 663,
"end": 680,
"text": "(Evans et al. 91;",
"ref_id": null
},
{
"start": 681,
"end": 689,
"text": "Evans et",
"ref_id": null
},
{
"start": 690,
"end": 713,
"text": "Strzalkowski et al. 95)",
"ref_id": null
},
{
"start": 719,
"end": 743,
"text": "(Strzalkowski et al. 95)",
"ref_id": null
},
{
"start": 1006,
"end": 1029,
"text": "(Strzalkowski et M. 95)",
"ref_id": null
},
{
"start": 1232,
"end": 1251,
"text": "(Evans and Zhai 96)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "We propose here a probabilistic model of noun phrase parsing. A fast statistical noun phrase parser has been developed based on the probabilistic model. The parser works fast and can be scaled up to parse gigabytes text within acceptable time. 4 Our goal is to generate different kinds of candidate syntactic 3It was reported to take about 3.5 hours to process 20 MB documents 4With a 133MH DEC alpha workstation, it is estimated to parse at a speed of 4 hours/gigabyte-text or 8 hours/gigabyte-nps, after 20 hours of training with 1 gigabyte text phrases from the structure of a noun phrase so that the effectiveness of different combinations of phrases and single words can be tested. in noun phrase structure analysis is to resolve such structural ambiguity. When a large corpus is available, which is true for an IR task, statistical preference of word combination or word modification can be a good clue for such disambiguation. As summarized in (Lauer 95) , there are two different models for corpus-based parsing of noun phrases: the adjacency model and the dependency model. The difference between the two models can be illustrated by the example compound noun \"informationsretrieval technique\". In the adjacency model, the structure would be decided by looking at the adjacency association of \"information retrievaF and \"retrieval technique\". \"information retrievat' will be grouped first, if \"information retrievaF has a stronger association than \"retrieval technique\", otherwise, \"retrieval technique\" will be grouped first. In the dependency model, however, the structure would be decided by looking at the dependency between \"information\" and \"retrievaP (i.e., the tendency for \"information\" to modify \"retrievat') and the dependency between \"information\" and \"technique\". If \"information\" has a stronger dependency association with \"retrievaP than with \"technique\", \"information retrievat' will be grouped first, otherwise, \"retrieval technique\" will be grouped first. The adjacency model dates at least from (Marcus 80) (Evans and Zhai 96) use primarily the adjacency model, but the association score also takes into account some degree of dependency. Lauer (Lauer 95) compared the adjacency model and the dependency model for compound noun disambiguation, and concluded that the SStrictly speaking, however, compound noun analysis is a special case of noun phrase analysis, but the same technique can oRen be used for both. dependency model provides a substantial advantage over the adjacency model.",
"cite_spans": [
{
"start": 951,
"end": 961,
"text": "(Lauer 95)",
"ref_id": null
},
{
"start": 2023,
"end": 2034,
"text": "(Marcus 80)",
"ref_id": null
},
{
"start": 2035,
"end": 2054,
"text": "(Evans and Zhai 96)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
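As an illustration of the two models contrasted above, the following sketch (not from the paper; the association scores are invented counts used only to show the two decision rules) parses a three-word compound under the adjacency and the dependency criteria.

```python
# Illustrative sketch: adjacency vs. dependency disambiguation of a three-word
# compound noun, with made-up association strengths.

assoc = {
    ("information", "retrieval"): 9.0,   # hypothetical association strengths
    ("retrieval", "technique"):   4.0,
    ("information", "technique"): 1.5,
}

def adjacency_parse(w1, w2, w3):
    """Group the adjacent pair with the stronger association first."""
    if assoc[(w1, w2)] >= assoc[(w2, w3)]:
        return f"[[{w1} {w2}] {w3}]"    # left branching
    return f"[{w1} [{w2} {w3}]]"        # right branching

def dependency_parse(w1, w2, w3):
    """Attach w1 to whichever of w2/w3 it modifies more strongly."""
    if assoc[(w1, w2)] >= assoc[(w1, w3)]:
        return f"[[{w1} {w2}] {w3}]"
    return f"[{w1} [{w2} {w3}]]"

print(adjacency_parse("information", "retrieval", "technique"))
print(dependency_parse("information", "retrieval", "technique"))
```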
{
"text": "We now propose a probabilistic model in which the dependency structure, or the modification structure, of a noun phrase is treated as \"hidden\", similar to the tree structure in the probabilistic context-free grammar (Jelinek et al. 90) . The basic idea is as follows.",
"cite_spans": [
{
"start": 216,
"end": 235,
"text": "(Jelinek et al. 90)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "A noun phrase can be assumed to be generated from a word modification structure (i.e., a dependency structure). Since noun phrases with more than two words are structurally ambiguous, if we only observe the noun phrase, then the actual structure that generates the noun phrase is \"hidden\". We treat the noun phrases with their possible structures as the complete data and the noun phrases occurring in the corpus (without the structures) as the observed incomplete data. In the training phase, an Expectation Maximization (EM) algorithm (Dempster et al. 77) can be used to estimate the parameters of word modification probabilities by iteratively maximizing the conditional expectation of the likelihood of the complete data given the observed incomplete data and a previous estimate of the parameters. In the parsing phase, a noun phrase is assigned the structure that has the maximum conditional probability given the noun phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
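A minimal sketch of the "hidden" structures treated as complete data above, under the assumptions used later in this section (a structure is a binary bracketing, the head of a constituent is its last word, and the head of the left part modifies the head of the right part); the function name is illustrative.

```python
# Sketch: enumerate the possible modification structures of a noun phrase as
# binary bracketings, together with the word modification pairs they imply.

def structures(words):
    """Yield (bracketing, modification_pairs, head) for every binary bracketing."""
    if len(words) == 1:
        yield words[0], frozenset(), words[0]
        return
    for split in range(1, len(words)):
        for left_b, left_pairs, left_head in structures(words[:split]):
            for right_b, right_pairs, right_head in structures(words[split:]):
                pairs = left_pairs | right_pairs | {(left_head, right_head)}
                yield f"[{left_b} {right_b}]", pairs, right_head

for bracketing, pairs, _ in structures(["information", "retrieval", "technique"]):
    print(bracketing, sorted(pairs))   # the two structures of a 3-word phrase
```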
{
"text": "Formally, assume that each noun phrase is generated using a word modification structure. For example, \"information retrieval technique\" may be generated using either the structure \" [XI[X2Xz] ]\" or the structure \"[[X1X2]X3]\". The log likelihood of generating a noun phrase, given the set of noun phrases observed in a corpus NP = {npi} can be written as:",
"cite_spans": [
{
"start": 182,
"end": 191,
"text": "[XI[X2Xz]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "L(\u00a2) = ~] c(npi)log ~ P\u00a2(npi, sj) npiENP sjES",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "where, S is the set of all the possible modification structures; c(npi) is the count of the noun phrase npi in the corpus; and P\u00a2 (npi, sj) gives the probability of deriving the noun phrase npi using the modification structure sj.",
"cite_spans": [
{
"start": 130,
"end": 139,
"text": "(npi, sj)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "With the simplification that generating a noun phrase from a modification structure is the same as generating all the corresponding word modification pairs in the noun phrase and with the assumption that each word modification pair in the noun phrase is generated independently, P\u00a2(npi, sj) can further be written as P\u00a2(npi, sj) = P\u00a2(sj) H PC (u, v) c(u'v;'~p''sD (u,v)eM(np~,sj) where, M(npi, sj) is the set of all word pairs (u, v) in npi such that u modifies (i.e., depends on) v according to sj. 6 c (u, v; npi, sj) is the count of the ~For example, if npl is \"information retrieval technique\", and sj is \"[[X1X~IX3]\", then, M(npi, sj) = {(information, retrieval), (retrieval, technique)}. modification pairs (u, v) being generated when npi is derived from sj. P\u00a2(sj) is the probability of structure sj; while Pc(u, v) is the probability of generating the word pair (u, v) given any word modification relation. P\u00a2(sj) and Pc(u, v) are subject to the constraint of summing up to 1 over all modification structures and over all possible word combinations respectively. 7",
"cite_spans": [
{
"start": 343,
"end": 379,
"text": "(u, v) c(u'v;'~p''sD (u,v)eM(np~,sj)",
"ref_id": null
},
{
"start": 427,
"end": 433,
"text": "(u, v)",
"ref_id": null
},
{
"start": 504,
"end": 519,
"text": "(u, v; npi, sj)",
"ref_id": null
},
{
"start": 814,
"end": 822,
"text": "Pc(u, v)",
"ref_id": null
},
{
"start": 870,
"end": 876,
"text": "(u, v)",
"ref_id": null
},
{
"start": 926,
"end": 934,
"text": "Pc(u, v)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
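A sketch of the generative model just defined and of the corpus log-likelihood L(phi); the toy corpus, the hard-coded structures of a three-word phrase, and the constant word-pair probability are illustrative assumptions, not the paper's data or estimates.

```python
import math
from collections import Counter

# Toy corpus of three-word noun phrases with counts.
np_counts = Counter({("information", "retrieval", "technique"): 3,
                     ("noun", "phrase", "parser"): 2})

def pair_sets(np):
    """The two possible structures of a 3-word NP, as modification-pair sets."""
    w1, w2, w3 = np
    return [{(w1, w2), (w2, w3)},      # [[w1 w2] w3]
            {(w1, w3), (w2, w3)}]      # [w1 [w2 w3]]

p_struct = {0: 0.5, 1: 0.5}            # P(s): uniform over the two structures
def p_pair(u, v):                      # P(u, v): illustrative constant value
    return 1e-3

def p_np_s(np, s_index):
    """P(np, s) = P(s) * product of P(u, v) over modification pairs in s."""
    prob = p_struct[s_index]
    for (u, v) in pair_sets(np)[s_index]:
        prob *= p_pair(u, v)
    return prob

log_likelihood = sum(c * math.log(sum(p_np_s(np, s) for s in (0, 1)))
                     for np, c in np_counts.items())
print(log_likelihood)
```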
{
"text": "The model is clearly a special case of the class of the algebraic language models, in which the probabilities are expressed as polynomials in the parameters (Lafferty 95) . For such models, the M-step in the EM algorithm can be carried out exactly, and the parameter update formulas are:",
"cite_spans": [
{
"start": 157,
"end": 170,
"text": "(Lafferty 95)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "P,+I(U, v) = A'{ 1 ~ c(npi) ~ P~(sjlnpi)c(u,v;np,,sj) npi6NP s16S = )~1 ~ c(npi)P,(sklnpi) npiENP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "where, A1 and A2 are the Lagrange multipliers corresponding to the two constraints mentioned above, and are given by the following formulas:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "(u,v)EWP rtpi 6NP sj ES 8kESnpi6NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "where, WP is the set of all possible word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "Pn(sj Inpi) can be computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "st) P.(np , st)",
"eq_num": ",,) ''j)"
}
],
"section": "2",
"sec_num": null
},
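A compact sketch of the EM training implied by the update formulas and the posterior above; the normalizers lambda_1 and lambda_2 are realized simply as renormalization of the expected counts, and the toy corpus, uniform initialization, and stopping threshold are illustrative assumptions (each modification pair occurs at most once per structure here, so the count c(u,v;np,s) reduces to 1).

```python
import math
from collections import Counter, defaultdict

corpus = Counter({("information", "retrieval", "technique"): 4,
                  ("information", "retrieval", "system"): 3,
                  ("stock", "market", "report"): 2})

def pair_sets(np):
    w1, w2, w3 = np
    return [{(w1, w2), (w2, w3)}, {(w1, w3), (w2, w3)}]

all_pairs = {p for np in corpus for s in pair_sets(np) for p in s}
p_pair = {p: 1.0 / len(all_pairs) for p in all_pairs}   # uniform start
p_struct = [0.5, 0.5]

def p_np_s(np, k):
    prob = p_struct[k]
    for p in pair_sets(np)[k]:
        prob *= p_pair[p]
    return prob

def log_likelihood():
    return sum(c * math.log(sum(p_np_s(np, k) for k in (0, 1)))
               for np, c in corpus.items())

prev = log_likelihood()
while True:
    pair_counts, struct_counts = defaultdict(float), [0.0, 0.0]
    for np, c in corpus.items():                       # E-step: posteriors P(s|np)
        joint = [p_np_s(np, k) for k in (0, 1)]
        z = sum(joint)
        for k in (0, 1):
            post = c * joint[k] / z
            struct_counts[k] += post
            for p in pair_sets(np)[k]:
                pair_counts[p] += post
    z1, z2 = sum(pair_counts.values()), sum(struct_counts)   # M-step: normalize
    p_pair = {p: pair_counts[p] / z1 for p in all_pairs}
    p_struct = [sc / z2 for sc in struct_counts]
    cur = log_likelihood()
    if cur - prev < 1e-6:                               # stop on tiny improvement
        break
    prev = cur

print(sorted(p_pair.items(), key=lambda kv: -kv[1])[:3])
```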
{
"text": ";'One problem with such simplification is that the model may generate a set of word modification pairs that do not form a noun phrase, although such \"illegal noun phrases\" are never observed. A better model would be to write the probability of each word modification pair as the conditional probability of the modifier (i.e., the modifying word) given the head (i.e., the word being modified). That is, P,(npi, st) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "1-[ P*(ulv)\u00a2(~'~;'\"~J) (u,v)EM(npi,sj) where h(np,) is the head (i.e., the last word) of the noun phrase npi (Lafferty 96) .",
"cite_spans": [
{
"start": 23,
"end": 38,
"text": "(u,v)EM(npi,sj)",
"ref_id": null
},
{
"start": 109,
"end": 122,
"text": "(Lafferty 96)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
{
"text": "The EM algorithm ensures that L(n+ 1) is greater than L(n). In other words, every step of parameter update increases the likelihood. Thus, at the time of training, the parser can first randomly initialize the parameters, and then, iteratively update the parameters according to the update formulas until the increase of the likelihood is smaller than some pre-set threshold, s In the implementation described here, the maximum length of any noun phrase is limited to six. In practice, this is not a very tight limit, since simple noun phrases with more than six words are quite rare. Summing over all the possible structures for any noun phrase is computed by enumerating all the possible structures with an equal length as the noun phrase. For example, in the case of a threeword noun phrase, only two structures need to be enumerated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
{
"text": "At the time of parsing noun phrases, the structure of any noun phrase np (S(np)) is determined by S(np) = argmaxsP(slnp) = argmax,P(np[s)P(s) = argmaxs H P(u, v)P(s) (u,v) eM (np,s) We found that the parameters may easily be biased owing to data sparseness. For example, the modification structure parameters naturally prefer left association to right association in the case of three-word noun phrases, when the data is sparse. Such bias in the parameters of the modification structure probability will be propagated to the word modification parameters when the parameters are iteratively updated using EM algorithm. In the experiments reported in this paper, an over-simplified solution is adopted. We simply fixed the modification structure parameter and assumed every dependency structure is equally likely.",
"cite_spans": [
{
"start": 166,
"end": 171,
"text": "(u,v)",
"ref_id": null
},
{
"start": 175,
"end": 181,
"text": "(np,s)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
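A small sketch of the parsing decision rule above, with the structure prior fixed to uniform as described; the word-pair probabilities and the back-off value for unseen pairs are illustrative assumptions.

```python
# Parsing sketch: pick the structure maximizing P(np|s)P(s), with P(s) uniform.
p_pair = {("information", "retrieval"): 0.02,
          ("retrieval", "technique"):   0.008,
          ("information", "technique"): 0.001}

def pair_sets(np):
    w1, w2, w3 = np
    return {"[[w1 w2] w3]": {(w1, w2), (w2, w3)},
            "[w1 [w2 w3]]": {(w1, w3), (w2, w3)}}

def parse(np, p_struct=0.5):
    def score(pairs):
        prob = p_struct
        for p in pairs:
            prob *= p_pair.get(p, 1e-6)   # back off for unseen pairs
        return prob
    return max(pair_sets(np).items(), key=lambda kv: score(kv[1]))[0]

print(parse(("information", "retrieval", "technique")))  # -> "[[w1 w2] w3]"
```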
{
"text": "Fast training is achieved by reading all the noun phrase instances into memory. 9 This forces us to split the whole noun phrase corpus into small chunks for training. In the experiments reported in this paper, we split the corpus into chunks of a size of around 4 megabytes. Each chunk has about 170,000 (or about 100,000 unique) raw multiple word noun phrases. The parameters estimated on each subcorpus are then merged (averaged). We do not know how much the merging of parameters affects the parameter estimation, but it seems that a majority of phrases are correctly parsed with the merged parameter estimation, based on a rough check of the parsing results. With this approach, it takes a 133-MHz DEC Alpha workstation about 5 hours to train the parser over the noun phrases from a 250-megabyte SFor the experiments reported in this paper, the threshold is 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
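The chunk-and-merge strategy described above can be sketched as follows; `train_chunk` is only a stand-in for the per-chunk EM estimation (here it returns relative frequencies of adjacent pairs so the merge has something concrete to average), and the chunk size and corpus are illustrative.

```python
from collections import defaultdict

def train_chunk(nps):
    """Stand-in for per-chunk EM training: relative frequencies of adjacent pairs."""
    counts = defaultdict(float)
    for np in nps:
        for u, v in zip(np, np[1:]):
            counts[(u, v)] += 1.0
    total = sum(counts.values()) or 1.0
    return {pair: c / total for pair, c in counts.items()}

def split_into_chunks(nps, chunk_size):
    for i in range(0, len(nps), chunk_size):
        yield nps[i:i + chunk_size]

def merge(parameter_dicts):
    """Average the parameter estimates over chunks, as described in the text."""
    summed, n = defaultdict(float), len(parameter_dicts)
    for params in parameter_dicts:
        for pair, prob in params.items():
            summed[pair] += prob
    return {pair: total / n for pair, total in summed.items()}

corpus = [("information", "retrieval", "technique")] * 3 + [("stock", "market", "report")] * 2
merged = merge([train_chunk(chunk) for chunk in split_into_chunks(corpus, chunk_size=2)])
print(merged)
```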
{
"text": "9An alternative way would be to keep the corpus in the disk. In this way, it is not necessary to split the corpus, unless it is extremely large. text corpus. Parsing is much faster, taking less than 1 hour to parse all noun phrases in the corpus of a 250-megabyte text. The parsing speed can be scaled up to gigabytes of text, even when the parser needs to be re-trained over the noun phrases in the whole corpus. However, the speed has not taken into account the time required for extracting the noun phrases for training. In the experiments described in the following section, the CLARIT noun phrase extractor is used to extract all the noun phrases from the 250-megabyte text corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
{
"text": "After the training on each chunk, the estimation of the parameter of word modifications is smoothed to account for the unseen word modification pairs. Smoothing is made by \"dropping\" a certain number of parameters that have the least probabilities, taking out the probabilities of the dropped parameters, and evenly distributing these probabilities among all the unseen word pairs as well as those pairs of the dropped parameters. It is unnecessary to keep the dropped parameters after smoothing, thus this method of smoothing helps reduce the memory overload when merging parameters. In the experiments reported in the paper, nearly half of the total number of word pairs seen in the training chunk were dropped. Since, word pairs with the least probabilities generally occur quite rarely in the corpus and usually represent semantically illegal word combinations, dropping such word pairs does not affect the parsing output so significantly as it seems. In fact, it may not affect the parsing decisions for the majority of noun phrases in the corpus at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
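A sketch of the smoothing scheme just described: drop the lowest-probability word pairs, pool their mass, and spread it evenly over every pair that is no longer explicitly stored (dropped or unseen). The 50% drop rate mirrors the "nearly half" figure in the text; the vocabulary size and toy parameters are assumptions.

```python
def smooth(p_pair, total_possible_pairs, drop_fraction=0.5):
    """Return (kept_parameters, default_probability_for_unstored_pairs)."""
    ranked = sorted(p_pair.items(), key=lambda kv: kv[1])
    n_drop = int(len(ranked) * drop_fraction)
    dropped_mass = sum(prob for _, prob in ranked[:n_drop])
    kept = dict(ranked[n_drop:])
    n_unstored = total_possible_pairs - len(kept)   # dropped + never-seen pairs
    default_prob = dropped_mass / n_unstored if n_unstored else 0.0
    return kept, default_prob

kept, default_prob = smooth({("a", "b"): 0.5, ("c", "d"): 0.3,
                             ("e", "f"): 0.15, ("g", "h"): 0.05},
                            total_possible_pairs=100)
print(kept, default_prob)
```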
{
"text": "The potential parameter space for the probabilistic model can be extremely large, when the size of the training corpus is getting larger. One solution to this problem is to use a class-based model similar to the one proposed in (Brown et al. 92) or use parameters of conceptual association rather than word association, as discussed in (Lauer 94) (Lauer 95) .",
"cite_spans": [
{
"start": 228,
"end": 245,
"text": "(Brown et al. 92)",
"ref_id": null
},
{
"start": 336,
"end": 346,
"text": "(Lauer 94)",
"ref_id": null
},
{
"start": 347,
"end": 357,
"text": "(Lauer 95)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "P\u00a2(si)P~(h(npi)JsJ)",
"sec_num": null
},
{
"text": "We used the CLARIT commercial retrieval system as a retrieval engine to test the effectiveness of different indexing sets. The CLARIT system uses the vector space retrieval model (Salton and McGill 83) , in which documents and the query are all represented by a vector of weighted terms (either single words or phrases), and the relevancy judgment is based on the similarity (measured by the cosine measure) between the query vector and any document vector (Evans et al. 93; Evans and Lefferts 95; Evans et al. 96) . The experiment procedure is described by Figure 1 .",
"cite_spans": [
{
"start": 179,
"end": 201,
"text": "(Salton and McGill 83)",
"ref_id": null
},
{
"start": 457,
"end": 474,
"text": "(Evans et al. 93;",
"ref_id": null
},
{
"start": 475,
"end": 497,
"text": "Evans and Lefferts 95;",
"ref_id": null
},
{
"start": 498,
"end": 514,
"text": "Evans et al. 96)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 558,
"end": 566,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
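The vector-space matching used by the retrieval engine can be sketched as follows; raw term frequencies stand in for whatever weighting the CLARIT system actually uses (the paper does not specify it), so the numbers are purely illustrative.

```python
import math
from collections import Counter

def cosine(query_terms, doc_terms):
    """Cosine similarity between two bags of (word or phrase) indexing terms."""
    q, d = Counter(query_terms), Counter(doc_terms)
    dot = sum(q[t] * d[t] for t in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

doc   = ["bank", "terminology", "bank terminology"]   # single words plus a phrase term
query = ["bank", "terminology", "bank terminology"]
print(cosine(query, doc))   # 1.0 for identical term vectors
```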
{
"text": "First, the original database is parsed to form different sets of indexing terms (say, using different combination of phrases). Then, each indexing set is passed to the CLARIT retrieval engine as a source document set. The CLARIT system is configured to accept the indexing set we passed as is to ensure that the actual indexing terms used inside the CLARIT system are exactly those generated. It is possible to generate three different kinds/levels of indexing units from a noun phrase:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
{
"text": "(1) single words; (2) head modifier pairs (i.e., any word pair in the noun phrase that has a linguistic modification relation); and (3) the full noun phrase. For example, from the phrase structure Different combinations of the three kinds of terms can be selected for indexing. In particular, the indexing set formed solely of single words is used as a baseline to test the effect of using phrases. In the experiments reported here, we generated four different combinations of phrases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
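A sketch of how the three kinds of indexing units could be read off a parsed noun phrase; the nested-pair representation, the example phrase, and the helper names are illustrative, following the head-final convention assumed earlier in the paper.

```python
# A parse is a nested pair (left, right); the head of a constituent is the head
# of its right part (ultimately the last word of the phrase).

def words(tree):
    """Indexing units of kind (1): the single words."""
    return [tree] if isinstance(tree, str) else words(tree[0]) + words(tree[1])

def head(tree):
    return tree if isinstance(tree, str) else head(tree[1])

def head_modifier_pairs(tree):
    """Indexing units of kind (2): modifier-head word pairs from the structure."""
    if isinstance(tree, str):
        return []
    left, right = tree
    return (head_modifier_pairs(left) + head_modifier_pairs(right)
            + [(head(left), head(right))])

parse = (("heavy", ("construction", "industry")), "group")   # illustrative structure
print(words(parse))                  # single words
print(head_modifier_pairs(parse))    # head modifier pairs
print(" ".join(words(parse)))        # kind (3): the full noun phrase
```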
{
"text": "--WD-SET : single word only (no phrases, baseline) --WD-HM-SET: The results from these different phrase sets are discussed in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Design",
"sec_num": "4"
},
{
"text": "We used, as our document set, the Wall Street Journal database in Tipster Disk2 (Harman 96 ) the size of which is about 250 megabytes. We performed the experiments by using the TREC-5 ad hoc topics (i.e., TREC topics 251-300). Each run involves an automatic feedback with the top 10 documents returned from the initial retrieval. The CLARIT automatic feedback is performed by adding terms from a query-specific thesaurus extracted from the top N documents returned from the initial retrieval (Evans and Lefferts 95) . The results are evaluated using the standard measures of recall and precision. Recall measures how many of the relevant documents have actually been retrieved. Precision measures how many of the retrieved documents are indeed relevant. They are calculated by the following simple formulas: We used the standard TREC evaluation package provided by Cornell University and used the judgedrelevant documents from the TREC evaluations as the gold standard (Harman 94) .",
"cite_spans": [
{
"start": 80,
"end": 90,
"text": "(Harman 96",
"ref_id": null
},
{
"start": 492,
"end": 515,
"text": "(Evans and Lefferts 95)",
"ref_id": null
},
{
"start": 969,
"end": 980,
"text": "(Harman 94)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "5"
},
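A sketch of the two evaluation measures exactly as defined above; the document identifiers are illustrative.

```python
def recall(retrieved, relevant):
    """Fraction of the relevant documents that were actually retrieved."""
    return len(set(retrieved) & set(relevant)) / len(relevant)

def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are indeed relevant."""
    return len(set(retrieved) & set(relevant)) / len(retrieved)

retrieved = ["d1", "d2", "d3", "d4"]
relevant  = ["d2", "d4", "d7"]
print(recall(retrieved, relevant))     # 2/3
print(precision(retrieved, relevant))  # 2/4
```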
{
"text": "In Table 1 , we give a summary of the results and compare the three phrase combination runs with the corresponding baseline run. In the table, \"Ret-rel\" means \"retrieved-relevant\" and refers to the total number of relevant documents retrieved. \"Init Prec\" means \"initial precision\" and refers to the highest level of precision over all the points of recall. \"Avg Prec\" means \"average precision\" and is the average of all the precision values computed after each new relevant document is retrieved.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "5"
},
{
"text": "It is clear that phrases help both recall and precision when supplementing single words, as can be seen from the improvement of all phrase runs (WD-HM-SET, WD-NP-SET, WD-I-IM-NP-SET) over the single word run WD-SET.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "5"
},
{
"text": "It can also be seen that when only one kind of phrase (either the full NPs or the head modifiers) is used to supplement the single words, each can lead to a great improvement in precision. However, when we combine the two kinds of phrases, the effect is a greater improvement in recall rather than precision. The fact that each kind of phrase can improve precision significantly when used separately shows that Table 1 : Effects of Phrases with feedback and TREC-5 topics these phrases are indeed very useful for indexing. The combination of phrases results in only a smaller precision improvement but causes a much greater increase in recall. This may indicate that more experiments are needed to understand how to combine and weight different phrases effectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "5"
},
{
"text": "The same parsing method has also been used to generate phrases from the same data for the CLARIT NLP track experiments in TREC-5 (Zhai et al. 97) , and similar results were obtained, although the WD-NP-SET was not tested. The results in (Zhai et al. 97) are not identical to the results here, because they are based on two separate training processes. It is possible that different training processes may result in slightly different parameter estimations, because the corpus is arbitrarily segmented into chunks of only roughly 4 megabytes for training, and the chunks actually used in different training processes may vary slightly.",
"cite_spans": [
{
"start": 129,
"end": 145,
"text": "(Zhai et al. 97)",
"ref_id": null
},
{
"start": 237,
"end": 253,
"text": "(Zhai et al. 97)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results analysis",
"sec_num": "5"
},
{
"text": "Information retrieval provides a good way to quantitatively (although indirectly) evaluate various NLP techniques. We explored the application of a fast statistical noun phrase parser to enhance document indexing in information retrieval. We proposed a new probabilistic model for noun phrase parsing and developed a fast noun phrase parser that can handle relatively large amounts of text efficiently. The effectiveness of enhancing document indexing with the syntactic phrases provided by the noun phrase parser was evaluated on the Wall Street Journal database in Tipster Disk2 using 50 TREC-5 ad hoc topics. Experiment results on this 250-megabyte document collection have shown that using different kinds of syntactic phrases provided by the noun phrase parser to supplement single words for indexing can significantly improve the retrieval performance, which is more encouraging than many early experiments on syntactic phrase indexing. Thus, using selective NLP, such as the noun phrase parsing technique we proposed, is not only feasible for use in information retrieval, but also effective in enhancing the retrieval performance./\u00b0 1\u00b0Whether such syntactic phrases are more effective than simple statistical phrases (e.g., high frequency word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "There are two lines of future work: First, the results from information retrieval experiments often show variances on different kinds of document collections and different sizes of collections. It is thus desirable to test the noun phrase parsing technique in other and larger collections. More experiments and analyses are also needed to better understand how to more effectively combine different phrases with single words. In addition, it is very important to study how such phrase effects interact with other useful IR techniques such as relevancy feedback, query expansion, and term weighting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Second, it is desirable to study how the parsing quality (e.g., in terms of the ratio of phrases parsed correctly) would affect the retrieval performance. It is very interesting to try the conditional probability model as mentioned in a footnote in section 3 The improvement of the probabilistic model of noun phrase parsing may result in phrases of higher quality than the phrases produced by the current noun phrase parser. Intuitively, the use of higher quality phrases might enhance document indexing more effectively, but this again needs to be tested.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The author is especially grateful to David A. Evans for his advising and supporting of this work. Thanks are also due to John Lafferty, Nata~a Milid-Frayling, Xiang Tong, and two anonymous reviewers for their useful comments. Naturally, the author alone is responsible for all the errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Retrieval techniques",
"authors": [
{
"first": "N",
"middle": [],
"last": "Belkin",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 1987,
"venue": "Annual Review of Information Science Technology",
"volume": "22",
"issue": "",
"pages": "110--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Belkin, N., and Croft, B. 1987. Retrieval techniques. In: Williams, Martha E.(Ed.), Annual Review of Information Science Technology, Vol. 22. Amster- dam, NL: Elsevier Science Publishers. 1987. 110- 145.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Class-based n-gram models of natural language",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P. et at. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4), December, 1992. 467-479.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
}
],
"year": 1977,
"venue": "Journal of the Royal Statistical Society",
"volume": "",
"issue": "",
"pages": "39--1977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A. P. et al. 1977. Maximum likelihood from incomplete data via the EM algorithm. Jour- nal of the Royal Statistical Society, 39 B, 1977. 1-38.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic indexing using selective NLP and first-order thesauri",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ginther-Webster",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hart",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lefferts",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Monarch",
"suffix": ""
}
],
"year": 1991,
"venue": "Intelligent Text and Image Handling. Proceedings of a Conference, RIAO '91",
"volume": "",
"issue": "",
"pages": "624--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. A., Ginther-Webster, K. , Hart, M., Lef- ferts, R., Monarch, I., 1991. Automatic indexing using selective NLP and first-order thesauri. In: A. Lichnerowicz (ed.), Intelligent Text and Im- age Handling. Proceedings of a Conference, RIAO '91. Amsterdam, NL: Elsevier. 1991. pp. 624-644.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A. bigrams) remains to be tested",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "R",
"middle": [
"G"
],
"last": "Lefferts",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "S",
"middle": [
"H"
],
"last": "Handerson",
"suffix": ""
},
{
"first": "W",
"middle": [
"R"
],
"last": "Hersh",
"suffix": ""
},
{
"first": "Archbold",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. A., Lefferts, R. G., Grefenstette, G., Han- derson, S. H., Hersh, W. R., and Archbold, A. bigrams) remains to be tested.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "CLARIT TREC design, experiments, and results",
"authors": [
{
"first": "A",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1993,
"venue": "Government Printing Office",
"volume": "",
"issue": "",
"pages": "494--501",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. 1993. CLARIT TREC design, experiments, and results. In: Donna K. Harman (ed.), The First Text REtrieval Conference (TREC-1). NIST Special Publication 500-207. Washington, DC: U.S. Government Printing Office, 1993. pp. 251- 286; 494-501.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "CLARIT-TREC experiments",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"G"
],
"last": "Lefferts",
"suffix": ""
}
],
"year": 1995,
"venue": "Information Processing and Management",
"volume": "31",
"issue": "3",
"pages": "385--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, David A. and Lefferts, Robert G. 1995. CLARIT-TREC experiments, Information Pro- cessing and Management, Vol. 31, No. 3, 1995. 385-395.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Fourth Text REtrieval Conference (TREC-~). NIST Special Publication 500-236",
"authors": [
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Milid-Frayling",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Lefferts",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "305--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D., Milid-Frayling, N., and Lefferts, R. 1996. CLARIT TREC-4 Experiments, in Donna K. Hat- man (Ed.), The Fourth Text REtrieval Confer- ence (TREC-~). NIST Special Publication 500- 236. Washington, DC: U.S. Government Printing Office, 1996. pp. 305-321.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Noun-phrase analysis in unrestricted text for information retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 34th Annual meeting of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evans, D. and Zhai, C. 1996. Noun-phrase analy- sis in unrestricted text for information retrieval. Proceedings of the 34th Annual meeting of Associ- ation for Computational Linguistics, Santa Cruz, University of California, June 24-28, 1996.17-24.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experiments in Automatic Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-syntactic methods",
"authors": [
{
"first": "Joel",
"middle": [
"L"
],
"last": "Fagan",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fagan, Joel L. 1987. Experiments in Automatic Phrase Indexing for Document Retrieval: A Com- parison of Syntactic and Non-syntactic methods, PhD thesis, Dept. of Computer Science, Cornell University, Sept. 1987.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Second Text REtrieval Conference (TREC-2), NIST Special publication 500-215",
"authors": [
{
"first": "D",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harman, D. 1994. The Second Text REtrieval Con- ference (TREC-2), NIST Special publication 500- 215. National Institute of Standards and Technol- ogy, 1994.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "TREC 5 Conference Notes",
"authors": [
{
"first": "D",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harman, D. 1996. TREC 5 Conference Notes, Nov. 20-22, 1996.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Basic methods of probabilistic context free grammars",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F., Lafferty, J.D., and Mercer, R. L. 1990. Basic methods of probabilistic context free gram- mars. Yorktown Heights,N.Y.: IBM T.J. Wat- son Research Center, 1990. Research report RC. 16374.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Notes on the EM Algorithm, Information Theory course notes",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, J. 1995. Notes on the EM Algorithm, In- formation Theory course notes, Carnegie Mellon University.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Personal Communications",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, J. 1996. Personal Communications.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Conceptual association for compound noun analysis",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Lauer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Student Session",
"volume": "",
"issue": "",
"pages": "337--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauer, Mark. 1994. Conceptual association for com- pound noun analysis. Proceedings of the 32nd An- nual Meeting of the Association for Computa- tional Linguistics, Student Session, Las Cruces, NM, 1994. 337-339.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Corpus statistics meet with the noun compound: Some empirical results",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Lauer",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauer, Mark. 1995. Corpus statistics meet with the noun compound: Some empirical results. Proceed- ings of the 33th Annual Meeting of the Association for Computational Linguistics, 1995.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Representation and Learning in Information Retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, D. 1991. Representation and Learning in In- formation Retrieval. Ph.D thesis, COINS Techni- cal Report 91-93, Univ. of Massachusetts, 1991.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Applications of natural language processing in information retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Sparck Jones",
"suffix": ""
}
],
"year": 1996,
"venue": "Communications of ACM",
"volume": "39",
"issue": "1",
"pages": "92--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, D. and Sparck Jones, K. 1996. Applications of natural language processing in information re- trieval. Communications of ACM, Vol. 39, No. 1, 1996, 92-101.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The stress and structure of modified noun phrases in English",
"authors": [
{
"first": "M",
"middle": [],
"last": "Liberman",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1992,
"venue": "Lexical Matters",
"volume": "",
"issue": "",
"pages": "131--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liberman, M. and Sproat, R. 1992. The stress and structure of modified noun phrases in English. In: Sag, I. and Szabolcsi, A. (Eds.), Lexical Matters, CSLI Lecture Notes No. 24. University of Chicago Press, 1992. 131-181.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Theory of Syntactic Rec ognition for Natural Language",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, Mitchell. 1980. A Theory of Syntactic Rec ognition for Natural Language. MIT Press, Cam- bridge, MA, 1980.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lexical semantic techniques for corpus analysis",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bergler",
"suffix": ""
},
{
"first": "Anick",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1993,
"venue": "Special Issue on Using Large Corpora II",
"volume": "19",
"issue": "",
"pages": "331--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pustejovsky, J., Bergler, S., and Anick, P. 1993. Lex- ical semantic techniques for corpus analysis. In: Computational Linguistics, Vol. 19 (2), Special Is- sue on Using Large Corpora II, 1993. 331-358.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Structural ambiguity and conceptual relations",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives",
"volume": "",
"issue": "",
"pages": "58--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. and Hearst, M. 1993. Structural ambi- guity and conceptual relations. In: Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, June 22, 1993. Ohio State University. 58-64.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Introduction to Modern Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salton, G. and McGill, M. 1983. Introduction to Modern Information Retrieval, New York, NY: McGraw-Hill, 1983.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Progress in application of natural language processing to information retrieval",
"authors": [
{
"first": "Alan",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
}
],
"year": 1992,
"venue": "The Computer Journal",
"volume": "35",
"issue": "3",
"pages": "268--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smeaton, Alan F. 1992. Progress in application of natural language processing to information re- trieval. The Computer Journal, Vol. 35, No. 3, 1992. 268-278.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "TTP: A fast and robust parser for natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the l~th International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. 1992. TTP: A fast and robust parser for natural language processing. Proceed- ings of the l~th International Conference on Com- putational Linguistics (COLING),Nantes, France, July, 1992. 198-204.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Information retrieval using robust natural language processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Vauthey",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 30th ACL Meeting",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. and Vauthey, B. 1992. Information retrieval using robust natural language processing. Proceedings of the 30th ACL Meeting, Neward, DE, June-July, 1992. 104-111.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Recent developments in natural language text retrieval",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carballo",
"suffix": ""
}
],
"year": 1994,
"venue": "The Second Text REtrieval Conference (TREC-2)",
"volume": "",
"issue": "",
"pages": "123--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. and Carballo, J. 1994. Recent de- velopments in natural language text retrieval. In: Harman, D. (Ed.), The Second Text REtrieval Conference (TREC-2), NIST Special Publication 500-215. 1994. 123-136.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Natural language information retrieval. Information Processing and Management",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "31",
"issue": "",
"pages": "397--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. 1995. Natural language informa- tion retrieval. Information Processing and Man- agement. Vol. 31, No. 3, 1995. 397-417.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The Third Text REtrieval Conference (TREC-3), NIST Special Publication 500-225",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "39--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. et al. 1995. Natural language in- formation retrieval: TREC-3 report. In: Har- man, D. (Ed.), The Third Text REtrieval Con- ference (TREC-3), NIST Special Publication 500- 225. 1995.39-53.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Fourth Text REtrieval Conference (TREC-4). NIST Special Publication 500-236",
"authors": [
{
"first": "T",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "245--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strzalkowski, T. et al. 1996. Natural language in- formation retrieval: TREC-4 report. In: Har- man, D. (Ed.), The Fourth Text REtrieval Con- ference (TREC-4). NIST Special Publication 500- 236. Washington, DC: U.S. Government Printing Office, 1996. pp. 245-258.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Evaluation of syntactic phrase indexing -CLARIT TREC5 NLP track report",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Tong",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Milid-Frayling",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Evans",
"suffix": ""
}
],
"year": 1997,
"venue": "The Fifth Text REtrieval Conference (TREC-5)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhai, C., Tong, X., Milid-Frayling, N., and Evans D. 1997. Evaluation of syntactic phrase indexing - CLARIT TREC5 NLP track report, to appear in The Fifth Text REtrieval Conference (TREC-5), NIST special publication, 1997, forthcoming.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Figure h Phrase indexing experiment procedure",
"uris": null
}
}
}
}