{
"paper_id": "U08-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:58.714408Z"
},
"title": "Weighted Mutual Exclusion Bootstrapping for Domain Independent Lexicon and Template Acquisition",
"authors": [
{
"first": "Tara",
"middle": [],
"last": "Mcintosh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the Weighted Mutual Exclusion Bootstrapping (WMEB) algorithm for simultaneously extracting precise semantic lexicons and templates for multiple categories. WMEB is capable of extracting larger lexicons with higher precision than previous techniques, successfully reducing semantic drift by incorporating new weighting functions and a cumulative template pool while still enforcing mutual exclusion between the categories. We compare WMEB and two state-of-theart approaches on the Web 1T corpus and two large biomedical literature collections. WMEB is more efficient and scalable, and we demonstrate that it significantly outperforms the other approaches on the noisy web corpus and biomedical text. DRESS, BODY, CHEMICAL, COLOUR, DRINK, FOOD, JEWEL and WEB terms.\u0130n the biomedical experiments we introduced four stop categories-AMINO ACID, ANIMAL, BODY and ORGANISM.",
"pdf_parse": {
"paper_id": "U08-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the Weighted Mutual Exclusion Bootstrapping (WMEB) algorithm for simultaneously extracting precise semantic lexicons and templates for multiple categories. WMEB is capable of extracting larger lexicons with higher precision than previous techniques, successfully reducing semantic drift by incorporating new weighting functions and a cumulative template pool while still enforcing mutual exclusion between the categories. We compare WMEB and two state-of-theart approaches on the Web 1T corpus and two large biomedical literature collections. WMEB is more efficient and scalable, and we demonstrate that it significantly outperforms the other approaches on the noisy web corpus and biomedical text. DRESS, BODY, CHEMICAL, COLOUR, DRINK, FOOD, JEWEL and WEB terms.\u0130n the biomedical experiments we introduced four stop categories-AMINO ACID, ANIMAL, BODY and ORGANISM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatically acquiring semantic lexicons and templates from raw text is essential for overcoming the knowledge bottleneck in many natural language processing tasks, e.g. question answering (Ravichandran and Hovy, 2002) . These tasks typically involve identifying named entity (NE) classes which are not found in annotated corpora and thus supervised NE recognition models are not always available. This issue becomes even more evident in new domains, such as biomedicine, where new semantic categories are often poorly represented in linguistic resources, if at all (Hersh et al., 2007) .",
"cite_spans": [
{
"start": 190,
"end": 219,
"text": "(Ravichandran and Hovy, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 567,
"end": 587,
"text": "(Hersh et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two common approaches to extract semantic lexicons: distributional similarity and template-based bootstrapping . In template-based bootstrapping algorithms, templates that express a particular semantic type are used to recognise new terms, and in turn these new terms help identify new templates iteratively . These algorithms are attractive as they are domain and language independent, require minimal linguistic preprocessing, are relatively efficient, and can be applied to raw text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, semantic drift often occurs when ambiguous or erroneous terms or patterns are introduced into the lexicon or set of templates. developed Mutual Exclusion Bootstrapping (MEB) to reduce semantic drift by forcing semantic classes to be mutually exclusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce a new algorithm, Weighted Mutual Exclusion Bootstrapping (WMEB), that automatically acquires multiple semantic lexicons and their templates simultaneously. It extends on the assumption of mutual exclusion between categories by incorporating a novel cumulative template pool and new term and template weighting functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We compare WMEB against two state-of-theart mutual bootstrapping algorithms, MEB and BASILISK (Thelen and Riloff, 2002) . We have evaluated the terms and templates these algorithms extract under a range of conditions from three raw text collections: noisy web text, biomedical abstracts, and full-text articles.",
"cite_spans": [
{
"start": 85,
"end": 93,
"text": "BASILISK",
"ref_id": null
},
{
"start": 94,
"end": 119,
"text": "(Thelen and Riloff, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We demonstrate that WMEB outperforms these existing algorithms in extracting precise lexicons and templates from all three datasets. WMEB is significantly less susceptible to semantic drift and so can produce large lexicons accurately and efficiently across multiple domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hearst (1992) pioneered the use of templates for information extraction, focussing on acquiring isa relations using manually devised templates like such W as X, ..., Y and/or Z where X, ..., Y, Z are hyponyms of W. Various automated template-based bootstrapping algorithms have since been developed to iteratively build semantic lexicons from texts. Riloff and Shepherd (1997) proposed Iterative Bootstrapping (IB) where seed instances of a semantic category are used to identify related terms that frequently co-occur.",
"cite_spans": [
{
"start": 350,
"end": 376,
"text": "Riloff and Shepherd (1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In Mutual Bootstrapping (MB) seed instances of a desired type are used to infer new templates, which in turn identify new lexicon entries. This process is repeated with the new terms identifying new templates. In each iteration, new terms and templates are selected based on a metric scoring their suitability for extracting additional templates and terms for the category. Unfortunately, if a term with multiple senses or a template which weakly constrains the semantic class is selected, semantic drift of the lexicon and templates occurs -the semantic class drifts into another category .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Extracting multiple semantic categories simultaneously has been proposed to reduce semantic drift. The bootstrapping instances compete with one another in an attempt to actively direct the categories away from each other (Thelen and Riloff, 2002; Yangarber et al., 2002; . This strategy is similar to the one sense per discourse assumption (Yarowsky, 1995) .",
"cite_spans": [
{
"start": 221,
"end": 246,
"text": "(Thelen and Riloff, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 247,
"end": 270,
"text": "Yangarber et al., 2002;",
"ref_id": "BIBREF16"
},
{
"start": 340,
"end": 356,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In BASILISK (Thelen and Riloff, 2002) , candidate terms for a category are ranked highly if they have strong evidence for the category and little or no evidence for another. It is possible for an ambiguous term to be assigned to the less dominant sense, and in turn less precise templates will be selected, causing semantic drift. Drift may also be introduced as templates can be selected by different categories in different iterations.",
"cite_spans": [
{
"start": 12,
"end": 37,
"text": "(Thelen and Riloff, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "NOMEN (Yangarber et al., 2002) was developed to extract generalized names such as diseases and drugs, with no capitalisation cues. NOMEN, like BASILISK, identifies semantic category lexicons in parallel, however NOMEN extracts the left and right contexts of terms independently and gener-alises the contexts. introduced the algorithm Mutual Exclusion Bootstrapping (MEB) which more actively defines the semantic boundaries of the lexicons extracted simultaneously. In MEB, the categories compete for both terms and templates. Semantic drift is reduced in two ways: by eliminating templates that collide with two or more categories in an iteration (from all subsequent iterations), and by ignoring colliding candidate terms (for an iteration). This effectively excludes general templates that can occur frequently with multiple categories, and reduces the chance of assigning ambiguous terms to their less dominant sense.",
"cite_spans": [
{
"start": 6,
"end": 30,
"text": "(Yangarber et al., 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The scoring metric for candidate terms and templates in MEB is simple and na\u00efve. Terms and templates which 1) match the most input instances, and 2) have the potential to generate the most new candidates, are preferred . This second criteria aims to increase recall, however the selected instances are highly likely to introduce drift. We introduce a new weighting scheme to effectively overcome this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Template-based bootstrapping algorithms have also been used in various Information Extraction (IE) tasks. Agichtein and Gravano (2000) developed the SNOWBALL system to identify the locations of companies, and Yu and Agichtein (2003) applied SNOWBALL to extract synonymous gene and protein terms. Pantel and Pennacchiotti (2006) used bootstrapping to identify numerous semantic relationships, such as is-a and part-of relationships. They incorporate the pointwise mutual information (MI) measure between the templates and instances to determine template reliability, as well as exploiting generic templates and the Web for filtering incorrect instances. We evaluate the effectiveness of MI as a weighting function for selecting terms and templates in WMEB.",
"cite_spans": [
{
"start": 106,
"end": 134,
"text": "Agichtein and Gravano (2000)",
"ref_id": "BIBREF0"
},
{
"start": 209,
"end": 232,
"text": "Yu and Agichtein (2003)",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 327,
"text": "Pantel and Pennacchiotti (2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In the biomedical domain, there is an increased interest in automatically extracting lexicons of biomedical entities such as antibodies and mutations, and the templates which extract such terms. This is primarily due to the lack, and scope, of annotated resources, and the introduction of new semantic categories which are severely underrepresented in corpora and lexicons. Meij and Ka-trenko (2007) applied MB to identify biomedical entities and their templates, which were both then used to find potential answer sentences for the TREC Genomics Track task (Hersh et al., 2007) . The accuracy of their extraction process was not evaluated, however their Information Retrieval system had performance gains in unambiguous and common entity types, where little semantic drift is likely to occur.",
"cite_spans": [
{
"start": 374,
"end": 399,
"text": "Meij and Ka-trenko (2007)",
"ref_id": null
},
{
"start": 558,
"end": 578,
"text": "(Hersh et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Our algorithm, Weighted Mutual Exclusion Bootstrapping (WMEB), extends MEB described in . MEB is a minimally supervised, mutual bootstrapping algorithm which reduces semantic drift by extracting multiple semantic categories with individual bootstrapping instances in parallel, and by forcing the categories to be mutually exclusive ( Figure 1 ). In MEB, the templates describe the context of a term (two terms to the left and right). Each MEB instance iterates simultaneously between two stages: template extraction and selection, and term extraction and selection. The key assumption of MEB is that terms only have a single sense and that templates only extract terms of a single sense. This is forced by excluding terms and templates from all categories if in one iteration they are selected by more than one category.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Weighted MEB (WMEB) Algorithm",
"sec_num": "3"
},
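{
"text": "To make the loop concrete, the following Python sketch (ours, not the original implementation; extract_templates, extract_terms and the category fields are illustrative assumptions) shows one bootstrapping iteration in which candidates selected by more than one category are treated as collisions and discarded:\n\nfrom collections import Counter\n\ndef bootstrap_iteration(categories, extract_templates, extract_terms, top_k=5):\n    # Stage 1: each category proposes its top-k candidate templates.\n    proposed = {c: extract_templates(c.lexicon)[:top_k] for c in categories}\n    # Templates proposed by several categories collide and are dropped\n    # (in MEB, colliding templates stay banned in later iterations too).\n    counts = Counter(t for ts in proposed.values() for t in ts)\n    for c in categories:\n        c.templates.update(t for t in proposed[c] if counts[t] == 1)\n    # Stage 2: the symmetric step selects terms from the chosen templates.\n    candidates = {c: extract_terms(c.templates)[:top_k] for c in categories}\n    counts = Counter(t for ts in candidates.values() for t in ts)\n    for c in categories:\n        c.lexicon.update(t for t in candidates[c] if counts[t] == 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted MEB (WMEB) Algorithm",
"sec_num": "3"
},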
{
"text": "In this section we describe the architecture of WMEB. WMEB employs a new weighting scheme, which identifies candidate templates and terms that are strongly associated with the lexicon terms and their templates respectively. In WMEB, we also introduce the concept of a cumulative template pool. These techniques reduce the semantic drift in WMEB more effectively than in MEB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted MEB (WMEB) Algorithm",
"sec_num": "3"
},
{
"text": "WMEB takes as input a set of manually labelled seed terms for each category. Each category's seed set forms it's initial lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "For each term in the category lexicon, WMEB extracts all candidate templates the term matches. To enforce mutually exclusive templates, candidate templates identified by multiple categories are excluded from the candidate set and all sub- sequent iterations. The remaining candidates are then ranked according to their reliability measure and their relevance weight (see Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction and Selection",
"sec_num": null
},
{
"text": "After manually inspecting the templates selected by MEB and BASILISK, we introduced the cumulative template pool (pool) in WMEB. In MEB and BASILISK (Thelen and Riloff, 2002) , the top-k 1 templates for each iteration are used to extract new candidate terms. We observed that as the lexicons grow, more general templates can drift into the top-k. This was also noted by . As a result the earlier precise templates lose their influence.",
"cite_spans": [
{
"start": 149,
"end": 174,
"text": "(Thelen and Riloff, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction and Selection",
"sec_num": null
},
{
"text": "WMEB successfully overcomes this by accumulating all selected templates from the current and all previous iterations in the pool, ensuring previous templates can contribute. The templates in the pool have equal weight in all iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction and Selection",
"sec_num": null
},
{
"text": "In WMEB, the top-k templates are selected for addition to the pool. If all top-k templates are already in the pool, then the next available top template is added. This ensures at least one new template is added in each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction and Selection",
"sec_num": null
},
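{
"text": "A minimal sketch of this pool update rule (our illustration; pool is assumed to be a set and ranked_templates a list sorted by the selection criteria):\n\ndef update_pool(pool, ranked_templates, top_k=5):\n    # Add this iteration's top-k templates to the cumulative pool.\n    new = [t for t in ranked_templates[:top_k] if t not in pool]\n    if not new:\n        # All top-k templates are already pooled: take the next\n        # available template so at least one new one is added.\n        new = [t for t in ranked_templates[top_k:] if t not in pool][:1]\n    pool.update(new)  # pooled templates keep equal weight in later iterations\n    return new",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Template Extraction and Selection",
"sec_num": null
},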
{
"text": "For each template in a category's pool, all available candidate terms matching the templates are identified. Like the candidate templates, terms which are extracted by multiple categories are also excluded. A colliding term will collide in all consecutive iterations due to the cumulative pool and thus WMEB creates a stricter term boundary between categories than MEB. The candidate terms are ranked with respect to their reliability and relevance weight, and the top-k terms are added to the category's lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term Extraction and Selection",
"sec_num": null
},
{
"text": "In MEB, candidate terms and templates are ranked according to their reliability measure and ties are broken using the productivity measure. The reliability of a term for a given category, is the number of input templates in an iteration that can extract the term. The productivity of a term is the number of potentially new templates it may add in the next iteration. These measures are symmetrical for both terms and templates. More reliable instances would theoretically have higher precision, while high productive instances will have a high recall. Unfortunately, high productive instances could potentially introduce drift.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "In WMEB we replace the productivity measure with a new relevance weight. We have investigated scoring metrics which prefer terms and templates that are highly associated with their input instances, including: the chi-squared (\u03c7 2 ) statistic and three variations of the pointwise mutual information (MI) measure (Manning and Schutze, 1999, Chapter 5) . Each of these estimates the strength of the co-occurance of a term and a template. They do not give the likelihood of the instance being a member of a semantic category.",
"cite_spans": [
{
"start": 312,
"end": 350,
"text": "(Manning and Schutze, 1999, Chapter 5)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "The first variation of MI we investigate is MI 2 , which scales the probability of the term (t) and template (c) pair to ensure more frequent combinations have a greater weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "MI 2 (t, c) = log 2 p(t, c) 2 p(t)p(c)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "Each of the probabilities are calculated directly from the relative frequencies without smoothing. The scores are set to 0 if their observed frequencies are less than 5, as these estimates are sensitive to low frequencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "The other variation of MI function we utilise is truncated MI (MIT), and is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "MIT(t, c) = MI 2 (t, c) : MI(t, c) > 0 0 : MI(t, c) \u2264 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
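{
"text": "The two MI variations, with the low-frequency cut-off described above, can be sketched as follows (our Python illustration; the exact normalisation of the relative frequencies is an assumption):\n\nimport math\n\ndef mi(p_tc, p_t, p_c):\n    return math.log2(p_tc / (p_t * p_c))\n\ndef mi2(p_tc, p_t, p_c):\n    # MI2 squares the joint probability, so frequent pairs weigh more.\n    return math.log2(p_tc ** 2 / (p_t * p_c))\n\ndef mit(p_tc, p_t, p_c):\n    # Truncated MI: MI2 where plain MI is positive, and zero otherwise.\n    return mi2(p_tc, p_t, p_c) if mi(p_tc, p_t, p_c) > 0 else 0.0\n\ndef score(f_tc, f_t, f_c, total, metric=mit, min_freq=5):\n    # Relative frequencies without smoothing; pairs observed fewer than\n    # min_freq times score 0, as the estimates are unreliable there.\n    if f_tc < min_freq:\n        return 0.0\n    return metric(f_tc / total, f_t / total, f_c / total)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},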
{
"text": "The overall relevance weight for a term or template is the sum of the scores of the pairs, where score corresponds to one of the scoring metrics, and C is the set of templates matching term t, and T is the set of terms matching template c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "weight(t) = c\u2208C score(t, c) weight(c) = t\u2208T score(c, t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
{
"text": "The terms and templates are ordered by their reliability, and ties are broken by their relevance weight. WMEB is much more efficient than BASILISK using these weighting scores -for all possible term and template pairs, the scores can be pre-calculated when the data is loaded, whereas in BASILISK, the scoring metric is more computationally expensive. In BASILISK, each individual calculation is dependent on the current state of the bootstrapping process, and therefore scores cannot be pre-calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},
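{
"text": "Because score(t, c) depends only on corpus counts, the relevance weights can be pre-computed once at load time, which is the efficiency advantage over BASILISK noted above. A sketch (ours; pair_scores is an assumed mapping from (term, template) pairs to their scores):\n\nfrom collections import defaultdict\n\ndef precompute_weights(pair_scores):\n    term_weight = defaultdict(float)\n    template_weight = defaultdict(float)\n    for (term, template), s in pair_scores.items():\n        term_weight[term] += s          # weight(t): sum over its templates C\n        template_weight[template] += s  # weight(c): sum over its terms T\n    return term_weight, template_weight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Term and Template Weighting",
"sec_num": "3.2"
},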
{
"text": "We evaluated the performance of BASILISK, MEB and WMEB using 5-grams from three raw text resources: the Google Web 1T corpus (Brants and Franz, 2006) , MEDLINE abstracts 2 and the TREC Genomics Track 2007 full-text articles (Hersh et al., 2007) . In our experiments, the term is the middle token of each 5-gram and the template is the two tokens on either side. Unlike and Yangarber (2003) , we do not use syntactic knowledge. Although we only extract unigrams, each algorithm can identify multi-term entities .",
"cite_spans": [
{
"start": 125,
"end": 149,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 224,
"end": 244,
"text": "(Hersh et al., 2007)",
"ref_id": "BIBREF5"
},
{
"start": 373,
"end": 389,
"text": "Yangarber (2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
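{
"text": "For illustration, splitting a 5-gram into a term and its template as described above can be done as follows (our sketch, not the authors' code):\n\ndef term_and_template(five_gram):\n    tokens = five_gram.split()\n    assert len(tokens) == 5\n    # The middle token is the candidate term; the two tokens on\n    # either side form the template context.\n    return tokens[2], (tokens[0], tokens[1], tokens[3], tokens[4])\n\n# e.g. term_and_template('pharmacokinetics of morphine in patients')\n# -> ('morphine', ('pharmacokinetics', 'of', 'in', 'patients'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},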
{
"text": "The Web 1T 5-grams were filtered by removing templates appearing with only one term and templates containing numbers. All 5-gram contexts with a non-titlecase term were also filtered as we are extracting proper nouns. Limited preprocessing was required to extract the 5-grams from MEDLINE and TREC. The TREC documents were converted from HTML to raw text, and both collections were tokenised using bio-specific NLP tools (Grover et al., 2006) . We did not exclude lowercase terms or templates containing numbers. Templates appearing with less than 7 (MEDLINE) or 3 (TREC) terms were removed. These frequencies were selected to permit the largest number of templates and terms loadable by BASILISK 3 , to allow a fair comparison.",
"cite_spans": [
{
"start": 421,
"end": 442,
"text": "(Grover et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The size of the resulting datasets are shown in Table 1 . Note that, Web 1T has far fewer terms but many more templates than the biomedical sets, and TREC articles result in more templates than MEDLINE for a similar number of terms.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "In the Web 1T experiments, we are extracting proper-noun NE and their templates. We use the categories from which are a subset of those in the BBN Pronoun Coreference and Entity Type Corpus (Weischedel and Brunstein, 2005) , and are shown in Table 2 .",
"cite_spans": [
{
"start": 190,
"end": 222,
"text": "(Weischedel and Brunstein, 2005)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semantic Categories & Stop Categories",
"sec_num": "4.2"
},
{
"text": "In the MEDLINE and TREC experiments we considered the TREC Genomics 2007 entities with a few modifications (Hersh et al., 2007) . We excluded the categories Toxicities, Pathways and Biological Substances, which are predominately multi-term entities, and the category Strains due to the difficulty for biologists to distinguish between strains and organisms. We combined the categories Genes and Proteins into PROT as there is a very high degree of metonymy between these, particularly once out of context. We were also interested in the fine grain distinction between types of cells and cell lines, so we split the Cell or Tissue Type category into CELL and CLNE entities. Five seed terms (as non-ambiguous as possible) were selected for each category based on the evaluators' knowledge of them and their high frequency counts in the collections, and are shown in Table 2 and 3. Separate MUTN seeds for MED-LINE and TREC were used as some high frequency MUTN terms in MEDLINE do not appear in TREC.",
"cite_spans": [
{
"start": 107,
"end": 127,
"text": "(Hersh et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 864,
"end": 871,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semantic Categories & Stop Categories",
"sec_num": "4.2"
},
{
"text": "As in and Yangarber (2003) , we used additional stop categories, which are extracted as well but then discarded. Stop categories help constrain the categories of interest by creating extra boundaries against semantic drift. For the Web 1T experiments we used the stop categories described in -AD-",
"cite_spans": [
{
"start": 10,
"end": 26,
"text": "Yangarber (2003)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Categories & Stop Categories",
"sec_num": "4.2"
},
{
"text": "Our evaluation process involved manually inspecting each extracted term and judging whether it was a member of the semantic class. The biomedical terms were evaluated by a domain expert. Unfamiliar terms were checked using online resources including MEDLINE, Medical Subject Headings (MeSH), Wikipedia and Google.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "Each ambiguous term was counted as correct if it was classified into one of its correct categories. If a single term was unambiguously part of a multi-word term we considered it correct. Modifiers such as cultured in cultured lymphocytes and chronic in chronic arthritis were marked as incorrect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "For comparing the accuracy of the systems we evaluated the precision of the first n selected terms for each category. In some experiments we report the average precision over each category (Av(n)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
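{
"text": "Av(n) can be computed as a simple mean over categories (our sketch; judgements is an assumed mapping from each category to the ordered list of correctness judgements for its extracted terms):\n\ndef average_precision_at_n(judgements, n):\n    # Precision of the first n extracted terms, averaged over categories.\n    precisions = [sum(j[:n]) / n for j in judgements.values()]\n    return sum(precisions) / len(precisions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},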
{
"text": "Our evaluation also includes judging the quality of the templates extracted. This is the first empirical evaluation of the templates identified by bootstrapping algorithms. We inspected the first 100 templates extracted for each category, and classified them into three groups. Templates where the context is semantically coherent with terms only from the assigned category are considered to accurately define the category and are classified as true matches (TM). Templates where another category's term could also be inserted were designated as possible matches (PM). For example, DRUG matches: pharmacokinetics of X in patients and mechanism of X action ., however the latter is a PM as it also matches PROT. Templates like compared with X for the and Value of X in the are false matches as they do not provide any contextual information for a specific entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "All of our experiments use the stop categories mentioned in \u00a74.3, unless otherwise stated. The maximum number of terms and templates (top-k) which can be added in each iteration is 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our first experiment investigates the weighting functions for WMEB on the Web 1T and MED-LINE data (Table 4) other measures and is more consistent. In Table 4 we also see the first difference between the two domains. The MEDLINE data scores are significantly higher, with little semantic drift down to 400 terms. For the remaining WMEB experiments we use the \u03c7 2 weighting function. Table 5 and 6 summarise the comparison of BASILISK, MEB and WMEB, on the Web 1T and MEDLINE data respectively. The category analysis is measured on the top 100 terms. BASILISK outperforms MEB on both datasets, whereas WMEB performs similarly to BASILISK on the Web 1T data. For the MEDLINE data, WMEB outperforms both BASILISK and MEB.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 108,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 151,
"end": 159,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 384,
"end": 391,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Each algorithm will stop extracting terms in a category if all candidate terms are exhausted. Templates may also become exhausted. Thus, each algorithm may be penalised when we evaluate past their stopping points. We have provided adjusted scores in brackets to take this into account. After adjustment, WMEB and BASILISK significantly outperform MEB, and WMEB is more accurate than BASILISK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terms",
"sec_num": "5.1"
},
{
"text": "It is clear that some categories are much easier than others to extract, e.g. LAST and CELL, while others are quite difficult, e.g. NORP and FUNC. For many categories there is a wide variation across the algorithms' performance, e.g. NORP and TUMR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terms",
"sec_num": "5.1"
},
{
"text": "The stop categories' seeds were optimised for MEB. When the stop categories are introduced BASILISK gains very little compared to MEB and WMEB. In BASILISK-NOSTOP categories rarely drift into unspecified categories, however they do drift into similar semantic categories of interest, e.g. TUMR drifts into CLNE and vice-versa, in early iterations. This is because BASILISK weakly defines the semantic boundaries. In WMEB, this rarely occurs as the boundaries are strict. We find DISE drifts into BODY and thus a significant per- formance gain is achieved with the BODY stop category. The remainder of the analysis will be performed on the biomedical data. In Table 7 , we can observe the degree of drift which occurs in each algorithm on MEDLINE for a given number of extracted terms. For example, the 101-200 row gives the accuracy of the 101-200th extracted terms. WMEB performs fairly consistently in each range, whereas MEB degrades quickly and BASILISK to a lesser extent. The terms extracted by WMEB in later stages are more accurate than the first 100 extracted terms identified by BASILISK and MEB.",
"cite_spans": [],
"ref_spans": [
{
"start": 659,
"end": 666,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Terms",
"sec_num": "5.1"
},
{
"text": "In practice, it is unlikely we would only have 5 seed terms. Thus, we investigated the impact of using 100 seeds as input for each algorithm (Table 8). Only BASILISK improved with these large seed sets, however it did not outperform WMEB with only 5 input seeds. WMEB and MEB do not gain from these additional seeds as they severely limit the search space by introducing many more colliding templates in the early iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terms",
"sec_num": "5.1"
},
{
"text": "Previous work has not evaluated the quality of the templates extracted, which is crucial for tasks like question answering that will utilise the templates. Our evaluation compares the first 100 templates identified by each algorithm. Table 9 shows the distribution of true and possible matches.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Templates",
"sec_num": "5.2"
},
{
"text": "Although each algorithm performs well on CELL and CLNE, the templates for these are predominately PM. This is due to the difficulty of disambiguating CELL from CLNE. Categories which are hard to identify have far more partial and false matches. For example, the majority of TUMR templates can also semantically identify BODY. WMEB still performs well on categories with few TM, in particular SIGN (99% with 54 PM) . This is a result of mutual exclusion which forces those templates to a single category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Templates",
"sec_num": "5.2"
},
{
"text": "BASILISK is noticeably less efficient than MEB (14 times slower on Web 1T, 5 times on MED-LINE) and WMEB (10 times slower on Web 1T, 4 times on MEDLINE). BASILISK cannot precalculate the scoring metrics as they are dependent on the state of the bootstrapping process. the baseline with no pool or weighting. WMEBpool corresponds to WMEB with weighting and without the cumulative pool, and WMEB-weight corresponds to WMEB with the pool and no weighting. The weighting is extremely effective with an approximate 7% performance gain over MEB. The pool also noticeably improved performance, especially in the later iterations where it is needed most. These two components combine effectively together to significantly outperform MEB and BASILISK.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Other experiments",
"sec_num": "5.3"
},
{
"text": "Our last evaluation is performed on the final test-set -the TREC Genomics full-text articles. We compare the performance of each algorithm on the TREC and MEDLINE collections (Table 11 ). WMEB performs consistently better than BASILISK and MEB, however each has a significant performance drop on TREC. This is due to the variation in language use. In abstracts, the content is more dense and precise, and thus contexts are likely to be less noisy. Full-text articles also contain less cohesive sections. In fact, ANTI with the largest performance drop for each algorithm (WMEB 95% MEDLINE, 30% TREC) extracted templates from the methods section, identifying companies that provide ANTI. ",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 184,
"text": "(Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Other experiments",
"sec_num": "5.3"
},
{
"text": "In this paper, we have proposed Weighted Mutual Exclusion Bootstrapping (WMEB), for efficient extraction of high precision lexicons, and the templates that identify them, from raw text. WMEB extracts the terms and templates of multiple categories simultaneously, based on the assumption of mutual exclusion. WMEB extends on MEB by incorporating more sophisticated scoring of terms and templates based on association strength and a cumulative template pool to keep templates active in the extraction process. As a result, WMEB is significantly more effective at reducing semantic drift than MEB, which uses a simple weighting function, and BASILISK, which does not strictly enforce mutual exclusion between categories. We have evaluated these algorithms using a variety of semantic categories on three different raw text collections. We show that WMEB extracts more reliable large semantic lexicons than MEB and BASILISK (even with far fewer seeds). WMEB is more robust within the biomedical domain, which has an immediate need for these tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In the future, we plan to further investigate the mutual exclusion assumption, and whether it can be weakened to increase recall without suffering semantic drift, and how it interacts with the term and template scoring functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our results demonstrate that WMEB can accurately and efficiently extract terms and templates in both general web text and domain-specific biomedical literature, and so will be useful in a wide range of NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "BASILISK also adds an additional template in each iteration, i.e. k is increased by one in each iteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The set contains all MEDLINE abstracts available up to Oct 2007 (16 140 000 abstracts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "BASILISK requires n times more memory to store the term and template frequency counts than MEB and WMEB, where n is the number of categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers and members of the LTRG at the University of Sydney, for their feedback. This work was supported by the CSIRO ICT Centre and ARC Discovery grant DP0665973.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Snowball: Extracting relations from large plain-text collections",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 5th ACM International Conference on Digital Libraries",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snow- ball: Extracting relations from large plain-text col- lections. In Proceedings of the 5th ACM Interna- tional Conference on Digital Libraries.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web 1T 5-gram version 1",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "Linguistics Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5- gram version 1. Technical Report LDC2006T13, Linguistics Data Consortium.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Minimising semantic drift with mutual exclusion bootstrapping",
"authors": [
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with mutual ex- clusion bootstrapping. In In Proceedings of the Conference of the Pacific Association for Compu- tational Linguistics, pages 172-180.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tools to address the interdependence between tokenisation and standoff annotation",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Tobin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Multi-dimensional Markup in Natural Language Processing (NLPXML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Grover, Michael Matthews, and Richard Tobin. 2006. Tools to address the interdependence be- tween tokenisation and standoff annotation. In Pro- ceedings of the Multi-dimensional Markup in Natu- ral Language Processing (NLPXML), Trento, Italy.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic aquisition of hyponyms from large text corpora",
"authors": [
{
"first": "A",
"middle": [],
"last": "Marti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst. 1992. Automatic aquisition of hy- ponyms from large text corpora. In Proceedings of the 14th International Conference on Computa- tional Linguistics, pages 539-545, Nantes, France, 23-28 July.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TREC 2007 Genomics Track Overview",
"authors": [
{
"first": "William",
"middle": [],
"last": "Hersh",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ruslen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Roberts",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th Text REtrieval Conference (TREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Hersh, Aaron Cohen, L. Ruslen, and P. Roberts. 2007. TREC 2007 Genomics Track Overview. In Proceedings of the 16th Text RE- trieval Conference (TREC).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bootstrapping for text learning tasks",
"authors": [
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the IJCAI-99 Workshop on Text Mining: Foundations, Techniques and Applications",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosie Jones, Andrew Mccallum, Kamal Nigam, and Ellen Riloff. 1999. Bootstrapping for text learning tasks. In Proceedings of the IJCAI-99 Workshop on Text Mining: Foundations, Techniques and Appli- cations.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Manning and Hinrich Schutze. 1999. Foundations of Statistical Natural Language Pro- cessing. MIT.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bootstrapping language associated with biomedical entities. The AID group at TREC Genomics",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Meij",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Katrenko",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of The 16th Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Meij and Sophia Katrenko. 2007. Bootstrap- ping language associated with biomedical entities. The AID group at TREC Genomics 2007. In Pro- ceedings of The 16th Text REtrieval Conference (TREC 2007).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experiments in mutual exclusion bootstrapping",
"authors": [
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Australasian Language Technology Workshop (ALTW)",
"volume": "",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tara Murphy and James R. Curran. 2007. Experi- ments in mutual exclusion bootstrapping. In Pro- ceedings of the Australasian Language Technology Workshop (ALTW), pages 66-74.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Conference on Computational Linguistics and the 46th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automat- ically harvesting semantic relations. In Proceed- ings of the Conference on Computational Linguis- tics and the 46th Annual Meeting of the Associa- tion for Computational Linguistics, pages 113-120, Sydney, Australia.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning surface text patterns for a question answering system",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Ravichandran",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "41--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question an- swering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Lin- guistics, pages 41-47, Philadelphia, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning dictionaries for information extraction by multi-level bootstrapping",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 16th National Conference on Artificial intelligence and the 11th Innovative Applications of Artificial Intelligence Conference",
"volume": "",
"issue": "",
"pages": "474--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dic- tionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th Na- tional Conference on Artificial intelligence and the 11th Innovative Applications of Artificial Intelli- gence Conference, pages 474-479, Orlando, FL, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A corpusbased approach for building semantic lexicons",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Shepherd",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "117--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Jessica Shepherd. 1997. A corpus- based approach for building semantic lexicons. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing, pages 117-124, Providence, RI.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A bootstrapping method for learning semantic lexicons using extraction pattern contexts",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Thelen",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "214--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Thelen and Ellen Riloff. 2002. A boot- strapping method for learning semantic lexicons us- ing extraction pattern contexts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 214-221, Philadelphia, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "BBN pronoun coreference and entity type corpus",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Ada",
"middle": [],
"last": "Brunstein",
"suffix": ""
}
],
"year": 2005,
"venue": "Linguistics Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Tech- nical Report LDC2005T33, Linguistics Data Con- sortium.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Unsupervised learning of generalized names",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "Winston",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "1135--1141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Yangarber, Winston Lin, and Ralph Grish- man. 2002. Unsupervised learning of general- ized names. In Proceedings of the 19th Inter- national Conference on Computational linguistics (COLING), pages 1135-1141, San Francisco.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Counter-training in discovery of semantic patterns",
"authors": [
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "240--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roman Yangarber. 2003. Counter-training in discov- ery of semantic patterns. In Proceedings of the 41st Annual Meeting of the Association for Computa- tional Linguistics (ACL), pages 240-247.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised word sense disambiguiation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguiation rivaling supervised methods. In Pro- ceedings of the 33rd Annual Meeting of the Associa- tion for Computational Linguistics, pages 189-196.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Extracting synonymous gene and protein terms from biological literature",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
}
],
"year": 2003,
"venue": "Bioinformatics",
"volume": "19",
"issue": "1",
"pages": "340--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Yu and Eugene Agichtein. 2003. Extracting synonymous gene and protein terms from biological literature. Bioinformatics, 19(1):i340-i349.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1: MEB architecture.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "Filtered 5-gram dataset statistics.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"text": "CAT DESCRIPTION FEM Person: female first name Mary Patricia Linda Barbara Elizabeth MALE Person: male first name James John Robert Michael William LAST Person: last name Smith Johnson Williams Jones Brown",
"num": null,
"html": null,
"content": "<table><tr><td>TTL Honorific title</td></tr><tr><td>General President Director King Doctor</td></tr><tr><td>NORP Nationality, Religion, Political (adjectival)</td></tr><tr><td>American European French British Western</td></tr><tr><td>FOG Facilities and Organisations</td></tr><tr><td>Ford Microsoft Sony Disneyland Google</td></tr><tr><td>PLCE Place: Geo-political entities and locations</td></tr><tr><td>Africa America Washington London Pacific</td></tr><tr><td>DAT Reference to a date or period</td></tr><tr><td>January May December October June</td></tr><tr><td>LANG Any named language</td></tr><tr><td>English Chinese Japanese Spanish Russian</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"text": ". For the Web 1T data, all of the measures are approximately equal at 400 extracted terms. On MEDLINE, \u03c7 2 outperforms the 78.5 74.2 69.1 65.2 87.1 89.5 89.0 89.4 MI 76.0 70.7 67.4 65.0 84.7 87.0 86.8 87.2 MI 2 80.7 72.7 68.1 64.7 84.3 82.8 82.3 80.7 MIT 79.3 74.4 69.1 65.5 86.1 84.4 84.3 83.8 Results comparing WMEB scoring functions.",
"num": null,
"html": null,
"content": "<table><tr><td>Web 1T</td><td>MEDLINE</td></tr><tr><td colspan=\"2\">100 200 300 400 100 200 300 400</td></tr><tr><td>\u03c7 2</td><td/></tr></table>",
"type_str": "table"
},
"TABREF8": {
"text": "Results comparing Web 1T terms.",
"num": null,
"html": null,
"content": "<table><tr><td>CAT</td><td colspan=\"6\">NO STOP BAS MEB WMEB BAS MEB WMEB STOP</td></tr><tr><td>ANTI</td><td>47</td><td>95</td><td>98</td><td>49</td><td>92</td><td>96</td></tr><tr><td>CELL</td><td>95</td><td>95</td><td>98</td><td>95</td><td>98</td><td>100</td></tr><tr><td>CLNE</td><td>91</td><td>93</td><td>96</td><td colspan=\"2\">81 100</td><td>100</td></tr><tr><td>DISE</td><td>77</td><td>33</td><td>49</td><td>82</td><td>39</td><td>76</td></tr><tr><td>DRUG</td><td>67</td><td>77</td><td>92</td><td>69</td><td>92</td><td>100</td></tr><tr><td>FUNC</td><td>73</td><td>60</td><td>71</td><td>73</td><td>61</td><td>81</td></tr><tr><td>MUTN</td><td>88</td><td>87</td><td>87</td><td>88</td><td>63</td><td>81</td></tr><tr><td>PROT</td><td>99</td><td>99</td><td>100</td><td colspan=\"2\">99 100</td><td>99</td></tr><tr><td>SIGN</td><td>96</td><td>55</td><td>67</td><td>97</td><td>95</td><td>99</td></tr><tr><td>TUMR</td><td>51</td><td>33</td><td>39</td><td>51</td><td>23</td><td>39</td></tr><tr><td colspan=\"3\">Av(100) 78.5 72.7</td><td colspan=\"3\">79.7 78.4 76.3</td><td>87.1</td></tr><tr><td colspan=\"3\">Av(200) 75.0 66.1</td><td colspan=\"3\">78.6 74.7 70.1</td><td>89.5</td></tr><tr><td colspan=\"3\">Av(300) 72.3 60.2</td><td colspan=\"3\">78.3 72.1 64.8</td><td>89.0</td></tr><tr><td colspan=\"3\">Av(400) 70.0 56.3</td><td colspan=\"3\">77.4 71.0 60.8</td><td>89.4</td></tr></table>",
"type_str": "table"
},
"TABREF9": {
"text": "Results comparing MEDLINE terms.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF11": {
"text": "Results comparing 100 seeds on MEDLINE.",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF12": {
"text": "shows the effectiveness of WMEB's individual components on MEDLINE. Here MEB is",
"num": null,
"html": null,
"content": "<table><tr><td>CAT</td><td>BAS TM</td><td>PM</td><td>MEB TM</td><td>PM</td><td colspan=\"2\">WMEB TM PM</td></tr><tr><td>ANTI</td><td>63</td><td>8</td><td>97</td><td>0</td><td>100</td><td>0</td></tr><tr><td>CELL</td><td>2</td><td>98</td><td>1</td><td>68</td><td>2</td><td>84</td></tr><tr><td>CLNE</td><td>1</td><td>99</td><td>79</td><td>21</td><td>78</td><td>22</td></tr><tr><td>DISE</td><td>80</td><td>15</td><td>5</td><td>81</td><td>95</td><td>5</td></tr><tr><td>DRUG</td><td>80</td><td>17</td><td>82</td><td>16</td><td>78</td><td>17</td></tr><tr><td>FUNC</td><td>62</td><td>33</td><td>10</td><td>42</td><td>49</td><td>50</td></tr><tr><td>MUTN</td><td>3</td><td>27</td><td>3</td><td>26</td><td>9</td><td>91</td></tr><tr><td>PROT</td><td>98</td><td>1</td><td>54</td><td>0</td><td>99</td><td>0</td></tr><tr><td>SIGN</td><td>93</td><td>6</td><td>90</td><td>5</td><td>12</td><td>54</td></tr><tr><td>TUMR</td><td>4</td><td>94</td><td>0</td><td>81</td><td>2</td><td>67</td></tr><tr><td colspan=\"7\">AV(100) 48.6 39.8 42.1 34.0 52.4 39.0</td></tr></table>",
"type_str": "table"
},
"TABREF13": {
"text": "Results comparing MEDLINE templates Effect of WMEB weights and pool.",
"num": null,
"html": null,
"content": "<table><tr><td/><td>100 200 500</td></tr><tr><td>MEB</td><td>76.3 70.0 58.6</td></tr><tr><td>WMEB-pool</td><td>83.2 81.7 77.8</td></tr><tr><td colspan=\"2\">WMEB-weight 82.3 79.5 76.4</td></tr><tr><td>WMEB</td><td>87.1 89.5 88.2</td></tr></table>",
"type_str": "table"
},
"TABREF14": {
"text": "Table 11: Results comparing MEDLINE and TREC.",
"num": null,
"html": null,
"content": "<table><tr><td>ALG</td><td colspan=\"4\">MEDLINE Av(100) Av(200) Av(100) Av(200) TREC</td></tr><tr><td>BAS</td><td>78.4</td><td>74.7</td><td>63.0</td><td>57.9</td></tr><tr><td>MEB</td><td>76.3</td><td>70.1</td><td>55.2</td><td>49.6</td></tr><tr><td>WMEB</td><td>87.1</td><td>89.5</td><td>67.8</td><td>66.0</td></tr></table>",
"type_str": "table"
}
}
}
}