{
"paper_id": "C10-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:56:31.018416Z"
},
"title": "Entity Disambiguation for Knowledge Base Population",
"authors": [
{
"first": "\u2020mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University University of Maryland -Baltimore County",
"location": {}
},
"email": "[email protected]"
},
{
"first": "\u2020paul",
"middle": [],
"last": "Mcnamee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University University of Maryland -Baltimore County",
"location": {}
},
"email": "[email protected]"
},
{
"first": "\u2020delip",
"middle": [],
"last": "Rao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University University of Maryland -Baltimore County",
"location": {}
},
"email": ""
},
{
"first": "\u2020adam",
"middle": [],
"last": "Gerber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University University of Maryland -Baltimore County",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University University of Maryland -Baltimore County",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The integration of facts derived from information extraction systems into existing knowledge bases requires a system to disambiguate entity mentions in the text. This is challenging due to issues such as non-uniform variations in entity names, mention ambiguity, and entities absent from a knowledge base. We present a state of the art system for entity disambiguation that not only addresses these challenges but also scales to knowledge bases with several million entries using very little resources. Further, our approach achieves performance of up to 95% on entities mentioned from newswire and 80% on a public test set that was designed to include challenging queries.",
"pdf_parse": {
"paper_id": "C10-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "The integration of facts derived from information extraction systems into existing knowledge bases requires a system to disambiguate entity mentions in the text. This is challenging due to issues such as non-uniform variations in entity names, mention ambiguity, and entities absent from a knowledge base. We present a state of the art system for entity disambiguation that not only addresses these challenges but also scales to knowledge bases with several million entries using very little resources. Further, our approach achieves performance of up to 95% on entities mentioned from newswire and 80% on a public test set that was designed to include challenging queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to identify entities like people, organizations and geographic locations (Tjong Kim Sang and De Meulder, 2003) , extract their attributes (Pasca, 2008) , and identify entity relations (Banko and Etzioni, 2008) is useful for several applications in natural language processing and knowledge acquisition tasks like populating structured knowledge bases (KB).",
"cite_spans": [
{
"start": 105,
"end": 122,
"text": "De Meulder, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 150,
"end": 163,
"text": "(Pasca, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 196,
"end": 221,
"text": "(Banko and Etzioni, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, inserting extracted knowledge into a KB is fraught with challenges arising from natural language ambiguity, textual inconsistencies, and lack of world knowledge. To the discerning human eye, the \"Bush\" in \"Mr. Bush left for the Zurich environment summit in Air Force One.\" is clearly the US president. Further context may reveal it to be the 43rd president, George W. Bush, and not the 41st president, George H. W. Bush. The ability to disambiguate a polysemous entity mention or infer that two orthographically different mentions are the same entity is crucial in updating an entity's KB record. This task has been variously called entity disambiguation, record linkage, or entity linking. When performed without a KB, entity disambiguation is called coreference resolution: entity mentions either within the same document or across multiple documents are clustered together, where each cluster corresponds to a single real world entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The emergence of large scale publicly available KBs like Wikipedia and DBPedia has spurred an interest in linking textual entity references to their entries in these public KBs. Bunescu and Pasca (2006) and Cucerzan (2007) presented important pioneering work in this area, but suffer from several limitations including Wikipedia specific dependencies, scale, and the assumption of a KB entry for each entity. In this work we introduce an entity disambiguation system for linking entities to corresponding Wikipedia pages designed for open domains, where a large percentage of entities will not be linkable. Further, our method and some of our features readily generalize to other curated KB. We adopt a supervised approach, where each of the possible entities contained within Wikipedia are scored for a match to the query entity. We also describe techniques to deal with large knowledge bases, like Wikipedia, which contain millions of entries. Furthermore, our system learns when to withhold a link when an entity has no matching KB entry, a task that has largely been neglected in prior research in cross-document entity coreference. Our system produces high quality predictions compared with recent work on this task.",
"cite_spans": [
{
"start": 190,
"end": 202,
"text": "Pasca (2006)",
"ref_id": "BIBREF5"
},
{
"start": 207,
"end": 222,
"text": "Cucerzan (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The information extraction oeuvre has a gamut of relation extraction methods for entities like persons, organizations, and locations, which can be classified as open-or closed-domain depending on the restrictions on extractable relations (Banko and Etzioni, 2008) . Closed domain systems extract a fixed set of relations while in open-domain systems, the number and type of relations are unbounded. Extracted relations still require processing before they can populate a KB with facts: namely, entity linking and disambiguation.",
"cite_spans": [
{
"start": 238,
"end": 263,
"text": "(Banko and Etzioni, 2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Motivated by ambiguity in personal name search, Mann and Yarowsky (2003) disambiguate person names using biographic facts, like birth year, occupation and affiliation. When present in text, biographic facts extracted using regular expressions help disambiguation. More recently, the Web People Search Task (Artiles et al., 2008) clustered web pages for entity disambiguation.",
"cite_spans": [
{
"start": 48,
"end": 72,
"text": "Mann and Yarowsky (2003)",
"ref_id": "BIBREF13"
},
{
"start": 306,
"end": 328,
"text": "(Artiles et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The related task of cross document coreference resolution has been addressed by several researchers starting from Bagga and Baldwin (1998) . Poesio et al. (2008) built a cross document coreference system using features from encyclopedic sources like Wikipedia. However, successful coreference resolution is insufficient for correct entity linking, as the coreference chain must still be correctly mapped to the proper KB entry.",
"cite_spans": [
{
"start": 114,
"end": 138,
"text": "Bagga and Baldwin (1998)",
"ref_id": "BIBREF1"
},
{
"start": 141,
"end": 161,
"text": "Poesio et al. (2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Previous work by Bunescu and Pasca (2006) and Cucerzan (2007) aims to link entity mentions to their corresponding topic pages in Wikipedia but the authors differ in their approaches. Cucerzan uses heuristic rules and Wikipedia disambiguation markup to derive mappings from surface forms of entities to their Wikipedia entries. For each entity in Wikipedia, a context vector is derived as a prototype for the entity and these vectors are compared (via dotproduct) with the context vectors of unknown entity mentions. His work assumes that all entities have a corresponding Wikipedia entry, but this assumption fails for a significant number of entities in news articles and even more for other genres, like blogs. Bunescu and Pasca on the other hand suggest a simple method to handle entities not in Wikipedia by learning a threshold to decide if the entity is not in Wikipedia. Both works mentioned rely on Wikipedia-specific annotations, such as category hierarchies and disambiguation links.",
"cite_spans": [
{
"start": 29,
"end": 41,
"text": "Pasca (2006)",
"ref_id": "BIBREF5"
},
{
"start": 46,
"end": 61,
"text": "Cucerzan (2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We just recently became aware of a system fielded by Li et al. at the TAC-KBP 2009 evaluation (2009 . Their approach bears a number of similarities to ours; both systems create candidate sets and then rank possibilities using differing learning methods, but the principal difference is in our approach to NIL prediction. Where we simply consider absence (i.e., the NIL candidate) as another entry to rank, and select the top-ranked option, they use a separate binary classifier to decide whether their top prediction is correct, or whether NIL should be output. We believe relying on features that are designed to inform whether absence is correct is the better alternative.",
"cite_spans": [
{
"start": 53,
"end": 82,
"text": "Li et al. at the TAC-KBP 2009",
"ref_id": null
},
{
"start": 83,
"end": 99,
"text": "evaluation (2009",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We define entity linking as matching a textual entity mention, possibly identified by a named entity recognizer, to a KB entry, such as a Wikipedia page that is a canonical entry for that entity. An entity linking query is a request to link a textual entity mention in a given document to an entry in a KB. The system can either return a matching entry or NIL to indicate there is no matching entry. In this work we focus on linking organizations, geo-political entities and persons to a Wikipedia derived KB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking",
"sec_num": "3"
},
{
"text": "There are 3 challenges to entity linking: Name Variations. An entity often has multiple mention forms, including abbreviations (Boston Symphony Orchestra vs. BSO), shortened forms (Osama Bin Laden vs. Bin Laden), alternate spellings (Osama vs. Ussamah vs. Oussama), and aliases (Osama Bin Laden vs. Sheikh Al-Mujahid). Entity linking must find an entry despite changes in the mention string. Entity Ambiguity. A single mention, like Springfield, can match multiple KB entries, as many entity names, like people and organizations, tend to be polysemous. Absence. Processing large text collections virtually guarantees that many entities will not appear in the KB (NIL), even for large KBs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Key Issues",
"sec_num": "3.1"
},
{
"text": "The combination of these challenges makes entity linking especially challenging. Consider an example of \"William Clinton.\" Most readers will immediately think of the 42nd US president. However, the only two William Clintons in Wikipedia are \"William de Clinton\" the 1st Earl of Huntingdon, and \"William Henry Clinton\" the British general. The page for the 42nd US president is actually \"Bill Clinton\". An entity linking system must decide if either of the William Clintons are correct, even though neither are exact matches. If the system determines neither matches, should it return NIL or the variant \"Bill Clinton\"? If variants are acceptable, then perhaps \"Clinton, Iowa\" or \"DeWitt Clinton\" should be acceptable answers?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Key Issues",
"sec_num": "3.1"
},
{
"text": "We address these entity linking challenges. Robust Candidate Selection. Our system is flexible enough to find name variants but sufficiently restrictive to produce a manageable candidate list despite a large-scale KB. Features for Entity Disambiguation. We developed a rich and extensible set of features based on the entity mention, the source document, and the KB entry. We use a machine learning ranker to score each candidate. Learning NILs. We modify the ranker to learn NIL predictions, which obviates hand tuning and importantly, admits use of additional features that are indicative of NIL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": "3.2"
},
{
"text": "Our contributions differ from previous efforts (Bunescu and Pasca, 2006; Cucerzan, 2007) in several important ways. First, previous efforts depend on Wikipedia markup for significant performance gains. We make no such assumptions, although we show that optional Wikipedia features lead to a slight improvement. Second, Cucerzan does not handle NILs while Bunescu and Pasca address them by learning a threshold. Our approach learns to predict NIL in a more general and direct way. Third, we develop a rich feature set for entity linking that can work with any KB. Finally, we apply a novel finite state machine method for learning name variations. 1 The remaining sections describe the candidate selection system, features and ranking, and our novel approach learning NILs, followed by an empirical evaluation.",
"cite_spans": [
{
"start": 56,
"end": 72,
"text": "and Pasca, 2006;",
"ref_id": "BIBREF5"
},
{
"start": 73,
"end": 88,
"text": "Cucerzan, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 647,
"end": 648,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": "3.2"
},
{
"text": "The first system component addresses the challenge of name variants. As the KB contains a large number of entries (818,000 entities, of which 35% are PER, ORG or GPE), we require an efficient selection of the relevant candidates for a query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "Previous approaches used Wikipedia markup for filtering -only using the top-k page categories (Bunescu and Pasca, 2006) -which is limited to Wikipedia and does not work for general KBs. We consider a KB independent approach to selection that also allows for tuning candidate set size. This involves a linear pass over KB entry names (Wikipedia page titles): a naive implementation took two minutes per query. The following section reduces this to under two seconds per query.",
"cite_spans": [
{
"start": 107,
"end": 119,
"text": "Pasca, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "For a given query, the system selects KB entries using the following approach:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "\u2022 Titles that are exact matches for the mention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "\u2022 Titles that are wholly contained in or contain the mention (e.g., Nationwide and Nationwide Insurance).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "\u2022 The first letters of the entity mention match the KB entry title (e.g., OA and Olympic Airlines).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "\u2022 The title matches a known alias for the entity (aliases described in Section 5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "\u2022 The title has a strong string similarity score with the entity mention. We include several measures of string similarity, including: character Dice score > 0.9, skip bigram Dice score > 0.6, and Hamming distance <= 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "We did not optimize the thresholds for string similarity, but these could obviously be tuned to minimize the candidate sets and maximize recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
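A minimal sketch in Python of the three string-similarity filters quoted above (character Dice > 0.9, skip bigram Dice > 0.6, Hamming distance <= 2); function names and details are illustrative assumptions, not the authors' code.

```python
from collections import Counter

def dice(a: Counter, b: Counter) -> float:
    """Dice coefficient over two multisets of features."""
    total = sum(a.values()) + sum(b.values())
    return 2.0 * sum((a & b).values()) / total if total else 0.0

def char_counts(s: str) -> Counter:
    return Counter(s.lower())

def skip_bigrams(s: str) -> Counter:
    """All ordered character pairs (i < j): bigrams that allow gaps."""
    s = s.lower()
    return Counter(s[i] + s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

def hamming(a: str, b: str) -> int:
    """Positional mismatches, counting the length difference as mismatches."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def similar_enough(mention: str, title: str) -> bool:
    """Keep the title as a candidate if any one measure clears its threshold."""
    return (dice(char_counts(mention), char_counts(title)) > 0.9
            or dice(skip_bigrams(mention), skip_bigrams(title)) > 0.6
            or hamming(mention.lower(), title.lower()) <= 2)
```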
{
"text": "All of the above features are general for any KB. However, since our evaluation used a KB derived from Wikipedia, we included a few Wikipedia specific features. We added an entry if its Wikipedia page appeared in the top 20 Google results for a query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "On the training dataset (Section 7) the selection system attained a recall of 98.8% and produced candidate lists that were three to four orders of magnitude smaller than the KB. Some recall errors were due to inexact acronyms: ABC (Arab Banking; 'Corporation' is missing), ASG (Abu Sayyaf; 'Group' is missing), and PCF (French Communist Party; French reverses the order of the pre-nominal adjectives). We also missed International Police (Interpol) and Becks (David Beckham; Mr. Beckham and his wife are collectively referred to as 'Posh and Becks').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Candidate Selection for Name Variants",
"sec_num": "4"
},
{
"text": "Our previously described candidate selection relied on a linear pass over the KB, but we seek more efficient methods. We observed that the above non-string similarity filters can be precomputed and stored in an index, and that the skip bigram Dice score can be computed by indexing the skip bigrams for each KB title. We omitted the other string similarity scores, and collectively these changes enable us to avoid a linear pass over the KB. Finally we obtained speedups by serving the KB concurrently 2 . Recall was nearly identical to the full system described above: only two more queries failed. Additionally, more than 95% of the processing time was consumed by Dice score computation, which was only required to correctly retrieve less than 4% of the training queries. Omitting the Dice computation yielded results in a few milliseconds. A related approach is that of canopies for scaling clustering for large amounts of bibliographic citations (McCallum et al., 2000) . In contrast, our setting focuses on alignment vs. clustering mentions, for which overlapping partitioning approaches like canopies are applicable.",
"cite_spans": [
{
"start": 951,
"end": 974,
"text": "(McCallum et al., 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling Candidate Selection",
"sec_num": "4.1"
},
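As a concrete illustration of the indexing idea, the following sketch precomputes a skip-bigram inverted index over KB titles so that candidate retrieval touches only titles sharing at least one skip bigram with the mention, avoiding a linear pass. Class and variable names are assumptions for illustration.

```python
from collections import Counter, defaultdict

def skip_bigrams(s):
    s = s.lower()
    return Counter(s[i] + s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

class SkipBigramIndex:
    def __init__(self, titles):
        self.grams = {}                 # title -> skip-bigram Counter
        self.index = defaultdict(set)   # skip bigram -> titles containing it
        for t in titles:
            g = skip_bigrams(t)
            self.grams[t] = g
            for gram in g:
                self.index[gram].add(t)

    def candidates(self, mention, threshold=0.6):
        g = skip_bigrams(mention)
        m_size = sum(g.values())
        # Only titles sharing at least one skip bigram can clear the
        # threshold, so score just the titles pulled from the index.
        touched = set()
        for gram in g:
            touched |= self.index[gram]
        out = []
        for t in touched:
            overlap = sum((g & self.grams[t]).values())
            score = 2.0 * overlap / (m_size + sum(self.grams[t].values()))
            if score > threshold:
                out.append(t)
        return out
```

Under this scheme, only titles pulled from the index incur the exact Dice computation, which is consistent with the observation above that the Dice score dominates processing time.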
{
"text": "We select a single correct candidate for a query using a supervised machine learning ranker. We represent each query by a D dimensional vector x, where x \u2208 R D , and we aim to select a single KB entry y, where y \u2208 Y, a set of possible KB entries for this query produced by the selection system above, which ensures that Y is small. The ith query is given by the pair {x i , y i }, where we assume at most one correct KB entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking as Ranking",
"sec_num": "5"
},
{
"text": "To evaluate each candidate KB entry in Y we create feature functions of the form f (x, y), dependent on both the example x (document and entity mention) and the KB entry y. The features address name variants and entity disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking as Ranking",
"sec_num": "5"
},
{
"text": "We take a maximum margin approach to learning: the correct KB entry y should receive a higher score than all other possible KB entrie\u015d y \u2208 Y,\u0177 = y plus some margin \u03b3. This learning constraint is equivalent to the ranking SVM algorithm of Joachims (2002) , where we define an ordered pair constraint for each of the incorrect KB entries\u0177 and the correct entry y. Training sets parameters such that score(y) \u2265 score(\u0177) + \u03b3. We used the library SVM rank to solve this optimization problem. 3 We used a linear kernel, set the slack parameter C as 0.01 times the number of training examples, and take the loss function as the total number of swapped pairs summed over all training examples. While previous work used a custom kernel, we found a linear kernel just as effective with our features. This has the advantage of efficiency in both training and prediction 4 -important considerations in a system meant to scale to millions of KB entries.",
"cite_spans": [
{
"start": 238,
"end": 253,
"text": "Joachims (2002)",
"ref_id": "BIBREF10"
},
{
"start": 487,
"end": 488,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Linking as Ranking",
"sec_num": "5"
},
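The ordered-pair constraints can be realized simply by writing training data in SVMrank's svm_light-style input format, where candidates for the same query share a qid and the correct entry receives a higher target value. A hedged sketch, assuming a feature extractor that returns {feature_index: value} dicts:

```python
# Hedged sketch: emit training data in SVMrank's svm_light-style format.
# Within each qid, the correct KB entry gets a higher target value than the
# other candidates, inducing the ordered-pair constraints described above.
# The feature extractor and file layout here are illustrative assumptions.
def write_svmrank_file(queries, path):
    """queries: iterable of (candidate_feature_dicts, correct_index), where
    each candidate is a dict mapping feature index (int) to value (float)."""
    with open(path, "w") as f:
        for qid, (candidates, correct) in enumerate(queries, start=1):
            for i, feats in enumerate(candidates):
                target = 2 if i == correct else 1  # correct entry ranks higher
                pairs = " ".join(f"{k}:{v}" for k, v in sorted(feats.items()))
                f.write(f"{target} qid:{qid} {pairs}\n")
```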
{
"text": "200 atomic features represent x based on each candidate query/KB pair. Since we used a linear kernel, we explicitly combined certain features (e.g., acroynym-match AND known-alias) to model correlations. This included combining each feature with the predicted type of the entity, allowing the algorithm to learn prediction functions specific to each entity type. With feature combinations, the total number of features grew to 26,569. The next sections provide an overview; for a detailed list see .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Entity Disambiguation",
"sec_num": "5.1"
},
{
"text": "Variation in entity name has long been recognized as a bane for information extraction systems. Poor handling of entity name variants results in low recall. We describe several features ranging from simple string match to finite state transducer matching. String Equality. If the query name and KB entry name are identical, this is a strong indication of a match, and in our KB entry names are distinct. However, similar or identical entry names that refer to distinct entities are often qualified with parenthetical expressions or short clauses. As an example, \"London, Kentucky\" is distinguished from \"London, Ontario\", \"London, Arkansas\", \"London (novel)\", and \"London\". Therefore, other string equality features were used, such as whether names are equivalent after some transformation. For example, \"Baltimore\" and \"Baltimore City\" are exact matches after removing a common GPE word like city; \"University of Vermont\" and \"University of VT\" match if VT is expanded. Approximate String Matching. Many entity mentions will not match full names exactly. We added features for character Dice, skip bigram Dice, and left and right Hamming distance scores. Features were set based on quantized scores. These were useful for detecting minor spelling variations or mistakes. Features were also added if the query was wholly contained in the entry name, or vice-versa, which was useful for handling ellipsis (e.g., \"United States Department of Agriculture\" vs. \"Department of Agriculture\"). We also included the ratio of the recursive longest common subsequence (Christen, 2006) to the shorter of the mention or entry name, which is effective at handling some deletions or word reorderings (e.g., \"Li Gong\" and \"Gong Li\"). Finally, we checked whether all of the letters of the query are found in the same order in the entry name (e.g., \"Univ Wisconsin\" would match \"University of Wisconsin\"). Acronyms. Features for acronyms, using dictionaries and partial character matches, enable matches between \"MIT\" and \"Madras Institute of Technology\" or \"Ministry of Industry and Trade.\" Aliases. Many aliases or nicknames are nontrivial to guess.",
"cite_spans": [
{
"start": 1558,
"end": 1574,
"text": "(Christen, 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Name Variants",
"sec_num": "5.2"
},
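Two of the checks above are simple enough to show directly. A small illustrative sketch (not the authors' code) of the letters-in-order test and a basic initial-letters acronym match:

```python
def letters_in_order(query: str, title: str) -> bool:
    """True if every letter of the query appears, in order, in the title
    (e.g., "Univ Wisconsin" vs. "University of Wisconsin")."""
    it = iter(title.lower())
    return all(ch in it for ch in query.lower() if ch.isalpha())

def acronym_match(query: str, title: str) -> bool:
    """True if an all-caps query spells the initial letters of the title
    (e.g., "BSO" vs. "Boston Symphony Orchestra")."""
    initials = "".join(w[0] for w in title.split() if w[:1].isalpha())
    return query.isupper() and query.lower() == initials.lower()
```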
{
"text": "For example JAVA is the stock symbol for Sun Microsystems, and \"Ginger Spice\" is a stage name of Geri Halliwell. A reasonable way to do this is to employ a dictionary and alias lists that are commonly available for many domains 5 . FST Name Matching. Another measure of surface similarity between a query and a candidate was computed by training finite-state transducers similar to those described in Dreyer et al. (2008) . These transducers assign a score to any string pair by summing over all alignments and scoring all contained character n-grams; we used n-grams of length 3 and less. The scores are combined using a global log-linear model. Since different spellings of a name may vary considerably in length (e.g., J Miller vs. Jennifer Miller) we eliminated the limit on consecutive insertions used in previous applications. 6",
"cite_spans": [
{
"start": 401,
"end": 421,
"text": "Dreyer et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Name Variants",
"sec_num": "5.2"
},
{
"text": "Most of our features do not depend on Wikipedia markup, but it is reasonable to include features from KB properties. Our feature ablation study shows that dropping these features causes a small but statistically significant performance drop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Features",
"sec_num": "5.3"
},
{
"text": "WikiGraph statistics. We added features derived from the Wikipedia graph structure for an entry, like indegree of a node, outdegree of a node, and Wikipedia page length in bytes. These statistics favor common entity mentions over rare ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Features",
"sec_num": "5.3"
},
{
"text": "Wikitology. KB entries can be indexed with human or machine generated metadata consisting of keywords or categories in a domain-appropriate taxonomy. Using a system called Wikitology, Syed et al. (2008) investigated use of ontology terms obtained from the explicit category system in Wikipedia as well as relationships induced from the hyperlink graph between related Wikipedia pages. Following this approach we computed topranked categories for the query documents and used this information as features. If none of the candidate KB entries had corresponding highlyranked Wikitology pages, we used this as a NIL feature (Section 6.1).",
"cite_spans": [
{
"start": 184,
"end": 202,
"text": "Syed et al. (2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Features",
"sec_num": "5.3"
},
{
"text": "Although it may be an unsafe bias to give preference to common entities, we find it helpful to provide estimates of entity popularity to our ranker as others have done (Fader et al., 2009) . Apart from the graph-theoretic features derived from the Wikipedia graph, we used Google's PageRank to by adding features indicating the rank of the KB entry's corresponding Wikipedia page in a Google query for the target entity mention.",
"cite_spans": [
{
"start": 168,
"end": 188,
"text": "(Fader et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Popularity",
"sec_num": "5.4"
},
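One plausible way to encode such rank information, shown purely as an assumption-labeled sketch, is to quantize the search rank into indicator features:

```python
# Assumption-labeled sketch: quantize a search rank into indicator features.
# Feature names and cutoffs are hypothetical, not the authors' exact scheme.
def rank_features(rank):
    """rank: 1-based position of the entry's Wikipedia page in the search
    results for the mention, or None if it did not appear."""
    feats = {"google:absent": 1.0 if rank is None else 0.0}
    for cutoff in (1, 3, 10, 20):
        feats[f"google:top{cutoff}"] = 1.0 if rank is not None and rank <= cutoff else 0.0
    return feats
```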
{
"text": "The mention document and text associated with a KB entry contain context for resolving ambiguity. Entity Mentions. Some features were based on presence of names in the text: whether the query appeared in the KB text and the entry name in the document. Additionally, we used a named-entity tagger and relation finder, SERIF (Boschee et al., 2005) , identified name and nominal mentions that were deemed co-referent with the entity mention in the document, and tested whether these nouns were present in the KB text. Without the NE analysis, accuracy on non-NIL entities dropped 4.5%. KB Facts. KB nodes contain infobox attributes (or facts); we tested whether the fact text was present in the query document, both locally to a mention, or anywhere in the text. Although these facts were derived from Wikipedia infoboxes, they could be obtained from other sources as well. Document Similarity We measured similarity between the query document and the KB text in two ways: cosine similarity with TF/IDF weighting (Salton and McGill, 1983) ; and using the Dice coefficient over bags of words. IDF values were approximated using counts from the Google 5gram dataset as by Klein and Nelson (2008) . Entity Types. Since the KB contained types for entries, we used these as features as well as the predicted NE type for the entity mention in the document text. Additionally, since only a small number of KB entries had PER, ORG, or GPE types, we also inferred types from Infobox class information to attain 87% coverage in the KB. This was helpful for discouraging selection of eponymous entries named after famous entities (e.g., the former U.S. president vs. \"John F. Kennedy International Airport\").",
"cite_spans": [
{
"start": 323,
"end": 345,
"text": "(Boschee et al., 2005)",
"ref_id": "BIBREF4"
},
{
"start": 1010,
"end": 1035,
"text": "(Salton and McGill, 1983)",
"ref_id": "BIBREF19"
},
{
"start": 1167,
"end": 1190,
"text": "Klein and Nelson (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document Features",
"sec_num": "5.5"
},
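The two document-similarity measures named above can be sketched as follows; the IDF table is assumed precomputed (the paper approximates IDF with Google 5-gram counts), and names are illustrative:

```python
import math
from collections import Counter

def cosine_tfidf(tokens_a, tokens_b, idf):
    """TF/IDF-weighted cosine similarity between two token lists."""
    wa = {w: tf * idf.get(w, 0.0) for w, tf in Counter(tokens_a).items()}
    wb = {w: tf * idf.get(w, 0.0) for w, tf in Counter(tokens_b).items()}
    dot = sum(v * wb.get(w, 0.0) for w, v in wa.items())
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def dice_bags(tokens_a, tokens_b):
    """Dice coefficient over bags (multisets) of words."""
    a, b = Counter(tokens_a), Counter(tokens_b)
    total = sum(a.values()) + sum(b.values())
    return 2.0 * sum((a & b).values()) / total if total else 0.0
```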
{
"text": "To take into account feature dependencies we created combination features by taking the crossproduct of a small set of diverse features. The attributes used as combination features included entity type; a popularity based on Google's rankings; document comparison using TF/IDF; coverage of co-referential nouns in the KB node text; and name similarity. The combinations were cascaded to allow arbitrary feature conjunctions. Thus it is possible to end up with a feature kbtypeis-ORG AND high-TFIDF-score AND low-namesimilarity. The combined features increased the number of features from roughly 200 to 26,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Combinations",
"sec_num": "5.6"
},
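A minimal sketch of cascaded cross-product combinations, using hypothetical feature names, shows how a conjunction like kbtype-is-ORG AND high-TFIDF-score AND low-name-similarity can arise:

```python
# Illustrative sketch of cascaded cross-product combinations; feature names
# are hypothetical. Each pass conjoins active attributes with every feature
# already present, so three-way conjunctions arise after two passes.
def conjoin(features, attributes):
    return {f"{attr}_AND_{name}": val
            for attr in attributes
            for name, val in features.items()}

feats = {"low-name-similarity": 1.0}
feats.update(conjoin(feats, ["kbtype-is-ORG"]))
feats.update(conjoin(feats, ["high-TFIDF-score"]))
# feats now contains, among others:
# "high-TFIDF-score_AND_kbtype-is-ORG_AND_low-name-similarity"
```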
{
"text": "So far we have assumed that each example has a correct KB entry; however, when run over a large corpus, such as news articles, we expect a significant number of entities will not appear in the KB. Hence it will be useful to predict NILs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting NIL Mentions",
"sec_num": "6"
},
{
"text": "We learn when to predict NIL using the SVM ranker by augmenting Y to include NIL, which then has a single feature unique to NIL answers. It can be shown that (modulo slack variables) this is equivalent to learning a single threshold \u03c4 for NIL predictions as in Bunescu and Pasca (2006) .",
"cite_spans": [
{
"start": 273,
"end": 285,
"text": "Pasca (2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting NIL Mentions",
"sec_num": "6"
},
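A sketch of how NIL can be folded into the candidate set: NIL receives a vector with a single indicator feature of its own, so the weight the ranker learns for that feature plays the role of the threshold tau. Names are illustrative assumptions.

```python
# Sketch of folding NIL into the candidate set: NIL's vector carries a single
# indicator feature of its own, so the weight the ranker learns for it acts
# as the otherwise hand-tuned threshold. The feature name is hypothetical.
NIL_FEATURE = "is-NIL"

def candidates_with_nil(candidate_feature_dicts):
    """Append a pseudo-candidate representing NIL; at prediction time, if the
    NIL vector is ranked highest, the system outputs NIL."""
    return candidate_feature_dicts + [{NIL_FEATURE: 1.0}]
```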
{
"text": "Incorporating NIL into the ranker has several advantages. First, the ranker can set the threshold optimally without hand tuning. Second, since the SVM scores are relative within a single example and cannot be compared across examples, setting a single threshold is difficult. Third, a threshold sets a uniform standard across all examples, whereas in practice we may have reasons to favor a NIL prediction in a given example. We design features for NIL prediction that cannot be captured in a single parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting NIL Mentions",
"sec_num": "6"
},
{
"text": "Integrating NIL prediction into learning means we can define arbitrary features indicative of NIL predictions in the feature vector corresponding to NIL. For example, if many candidates have good name matches, it is likely that one of them is correct. Conversely, if no candidate has high entrytext/article similarity, or overlap between facts and the article text, it is likely that the entity is absent from the KB. We included several features, such as a) the max, mean, and difference between max and mean for 7 atomic features for all KB candidates considered, b) whether any of the candidate entries have matching names (exact and fuzzy string matching), c) whether any KB entry was a top Wikitology match, and d) if the top Google match was not a candidate. Table 1 : Micro and macro-averaged accuracy for TAC-KBP data compared to best and median reported performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 765,
"end": 772,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "NIL Features",
"sec_num": "6.1"
},
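The aggregate statistics in (a) are straightforward to compute; a hedged sketch (atomic feature names assumed) follows:

```python
# Hedged sketch of the aggregate NIL features in (a): max, mean, and
# max-minus-mean of several atomic features across all KB candidates,
# attached to NIL's feature vector. Names are illustrative.
def nil_aggregates(candidate_feature_dicts, atomic_names):
    feats = {}
    for name in atomic_names:
        vals = [c.get(name, 0.0) for c in candidate_feature_dicts] or [0.0]
        mx, mean = max(vals), sum(vals) / len(vals)
        feats[f"nil:max:{name}"] = mx
        feats[f"nil:mean:{name}"] = mean
        feats[f"nil:max-mean:{name}"] = mx - mean
    return feats
```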
{
"text": "Results are shown for all features as well as removing a small number of features using feature selection on development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NIL Features",
"sec_num": "6.1"
},
{
"text": "We evaluated our system on two datasets: the Text Analysis Conference (TAC) track on Knowledge Base Population (TAC-KBP) (McNamee and Dang, 2009) and the newswire data used by Cucerzan (2007) (Microsoft News Data).",
"cite_spans": [
{
"start": 121,
"end": 145,
"text": "(McNamee and Dang, 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "Since our approach relies on supervised learning, we begin by constructing our own training corpus. 7 We highlighted 1496 named entity mentions in news documents (from the TAC-KBP document collection) and linked these to entries in a KB derived from Wikipedia infoboxes. 8 We added to this collection 119 sample queries from the TAC-KBP data. The total of 1615 training examples included 539 (33.4%) PER, 618 (38.3%) ORG, and 458 (28.4%) GPE entity mentions. Of the training examples, 80.5% were found in the KB, matching 300 unique entities. This set has a higher number of NIL entities than did Bunescu and Pasca (2006) (10%) but lower than the TAC-KBP test set (43%).",
"cite_spans": [
{
"start": 100,
"end": 101,
"text": "7",
"ref_id": null
},
{
"start": 271,
"end": 272,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "All system development was done using a train (908 examples) and development (707 examples) split. The TAC-KBP and Microsoft News data sets were held out for final tests. A model trained on all 1615 examples was used for experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "The KB is derived from English Wikipedia pages that contained an infobox. Entries contain basic descriptions (article text) and attributes. The TAC-KBP query set contains 3904 entity mentions for 560 distinct entities; entity type was only provided for evaluation. The majority of queries were for organizations (69%). Most queries were missing from the KB (57%). 77% of the distinct GPEs in the queries were present in the KB, but for PERs and ORGs these percentages were significantly lower, 19% and 30% respectively. Table 1 shows results on TAC-KBP data using all of our features as well a subset of features based on feature selection experiments on development data. We include scores for both microaveraged accuracy -averaged over all queries -and macro-averaged accuracy -averaged over each unique entity -as well as the best and median reported results for these data (McNamee and Dang, 2009) . We obtained the best reported results for macro-averaged accuracy, as well as the best results for NIL detection with microaveraged accuracy, which shows the advantage of our approach to learning NIL. See for additional experiments.",
"cite_spans": [
{
"start": 877,
"end": 901,
"text": "(McNamee and Dang, 2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 520,
"end": 527,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "TAC-KBP 2009 Experiments",
"sec_num": "7.1"
},
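For clarity, a small sketch of how the two reported metrics differ; this illustrates the metric definitions, not the official evaluation code:

```python
from collections import defaultdict

# Micro-averaging scores every query equally; macro-averaging first averages
# within each unique gold entity, then across entities.
def micro_macro_accuracy(results):
    """results: list of (gold_entity_id, is_correct) pairs, one per query."""
    micro = sum(c for _, c in results) / len(results)
    per_entity = defaultdict(list)
    for ent, c in results:
        per_entity[ent].append(c)
    macro = sum(sum(v) / len(v) for v in per_entity.values()) / len(per_entity)
    return micro, macro
```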
{
"text": "The candidate selection phase obtained a recall of 98.6%, similar to that of development data. Missed candidates included Iron Lady, which refers metaphorically to Yulia Tymoshenko, PCC, the Spanish-origin acronym for the Cuban Communist Party, and Queen City, a former nickname for the city of Seattle, Washington. The system returned a mean of 76 candidates per query, but the median was 15 and the maximum 2772 (Texas). In about 10% of cases there were four or fewer candidates and in 10% of cases there were more than 100 candidate KB nodes. We observed that ORGs were more difficult, due to the greater variation and complexity in their naming, and that they can be named after persons or locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TAC-KBP 2009 Experiments",
"sec_num": "7.1"
},
{
"text": "We performed two feature analyses on the TAC-KBP data: an additive study -starting from a small baseline feature set used in candidate selection we add feature groups and measure performance changes (omitting feature combinations), and an ablative study -starting from all features, remove a feature group and measure performance. Table 2 shows the most significant features in the feature addition experiments. The baseline includes only features based on string similarity or aliases and is not effective at finding correct entries and strongly favors NIL predictions. Inclusion of features based on analysis of namedentities, popularity measures (e.g., Google rankings), and text comparisons provided the largest gains. The overall changes are fairly small, roughly \u00b11%; however changes in non-NIL precision are larger.",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 338,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Feature Effectiveness",
"sec_num": "7.2"
},
{
"text": "The ablation study showed considerable redundancy across feature groupings. In several cases, performance could have been slightly improved by removing features. Removing all feature combinations would have improved overall performance to 81.05% by gaining on non-NIL for a small decline on NIL detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Effectiveness",
"sec_num": "7.2"
},
{
"text": "We downloaded the evaluation data used in Cucerzan (2007) 9 : 20 news stories from MSNBC with 642 entity mentions manually linked to Wikipedia and another 113 mentions not having any corresponding link to Wikipedia. 10 A significant percentage of queries were not of type PER, ORG, or GPE (e.g., \"Christmas\"). SERIF assigned entity types and we removed 297 queries not recognized as entities (counts in Table 3) .",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 411,
"text": "Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments on Microsoft News Data",
"sec_num": "7.3"
},
{
"text": "We learned a new model on the training data above using a reduced feature set to increase speed. 11 Using our fast candidate selection system, we resolved each query in 1.98 seconds (median). Query processing time was proportional to 9 http://research.microsoft.com/en-us/um/people/silviu/WebAssistant/TestData/ 10 One of the MSNBC news articles is no longer available so we used 759 total entities. 11 We removed Google, FST and conjunction features which reduced system accuracy but increased performance. the number of candidates considered. We selected a median of 13 candidates for PER, 12 for ORG and 102 for GPE. Accuracy results are in Table 3 . The high results reported for this dataset over TAC-KBP is primarily because we perform very well in predicting popular and rare entries -both of which are common in newswire text. One issue with our KB was that it was derived from infoboxes in Wikipedia's Oct 2008 version which has both new entities, 12 and is missing entities. 13 Therefore, we manually confirmed NIL answers and new answers for queries marked as NIL in the data. While an exact comparison is not possible (as described above), our results (94.7%) appear to be at least on par with Cucerzan's system (91.4% overall accuracy).With the strong results on TAC-KBP, we believe that this is strong confirmation of the effectiveness of our approach.",
"cite_spans": [
{
"start": 400,
"end": 402,
"text": "11",
"ref_id": null
},
{
"start": 986,
"end": 988,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 644,
"end": 652,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experiments on Microsoft News Data",
"sec_num": "7.3"
},
{
"text": "We presented a state of the art system to disambiguate entity mentions in text and link them to a knowledge base. Unlike previous approaches, our approach readily ports to KBs other than Wikipedia. We described several important challenges in the entity linking task including handling variations in entity names, ambiguity in entity mentions, and missing entities in the KB, and we showed how to each of these can be addressed. We described a comprehensive feature set to accomplish this task in a supervised setting. Importantly, our method discriminately learns when not to link with high accuracy. To spur further research in these areas we are releasing our entity linking system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "http://www.clsp.jhu.edu/ markus/fstrain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our Python implementation with indexing features and four threads achieved up to 80\u00d7 speedup compared to naive implementation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.cs.cornell.edu/people/tj/svm_light/svm_rank.html 4 Bunescu andPasca (2006) report learning tens of thousands of support vectors with their \"taxonomy\" kernel while a linear kernel represents all support vectors with a single weight vector, enabling faster training and prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used multiple lists, including class-specific lists (i.e., for PER, ORG, and GPE) lists extracted from Freebase(Bollacker et al., 2008) and Wikipedia redirects. PER, ORG, and GPE are the commonly used terms for entity types for people, organizations and geo-political regions respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Without such a limit, the objective function may diverge for certain parameters of the model; we detect such cases and learn to avoid them during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Data available from www.dredze.com 8 http://en.wikipedia.org/wiki/Help:Infobox",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "2008 vs. 2006 version used in Cucerzan (2007 We could not get the 2006 version from the author or the Internet.13 Since our KB was derived from infoboxes, entities not having an infobox were left out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Web people search: results of the first evaluation and the plan for the second",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": ""
}
],
"year": 2008,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier Artiles, Satoshi Sekine, and Julio Gonzalo. 2008. Web people search: results of the first evalu- ation and the plan for the second. In WWW.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Entitybased cross-document coreferencing using the vector space model",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Entity- based cross-document coreferencing using the vec- tor space model. In Conference on Computational Linguistics (COLING).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The tradeoffs between open and traditional relation extraction",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2008,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "SIGMOD Management of Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor. 2008. Freebase: a collaboratively cre- ated graph database for structuring human knowl- edge. In SIGMOD Management of Data.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic information extraction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Boschee",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zamanian",
"suffix": ""
}
],
"year": 2005,
"venue": "Conference on Intelligence Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Boschee, R. Weischedel, and A. Zamanian. 2005. Automatic information extraction. In Conference on Intelligence Analysis.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using encyclopedic knowledge for named entity disambiguation",
"authors": [
{
"first": "Razvan",
"middle": [
"C."
],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2006,
"venue": "European Chapter of the Assocation for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan C. Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disam- biguation. In European Chapter of the Assocation for Computational Linguistics (EACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A comparison of personal name matching: Techniques and practical issues",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter Christen",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Christen. 2006. A comparison of personal name matching: Techniques and practical issues. Techni- cal Report TR-CS-06-02, Australian National Uni- versity.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Large-scale named entity disambiguation based on wikipedia data",
"authors": [
{
"first": "",
"middle": [],
"last": "Silviu Cucerzan",
"suffix": ""
}
],
"year": 2007,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In Em- pirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Latent-variable modeling of string transductions with finite-state methods",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scaling Wikipedia-based named entity disambiguation to arbitrary web text",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2009,
"venue": "WikiAI09 Workshop at IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anthony Fader, Stephen Soderland, and Oren Etzioni. 2009. Scaling Wikipedia-based named entity dis- ambiguation to arbitrary web text. In WikiAI09 Workshop at IJCAI 2009.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Knowledge Discovery and Data Mining (KDD)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Knowledge Discovery and Data Mining (KDD).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparison of techniques for estimating IDF values to generate lexical signatures for the web",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"L"
],
"last": "Nelson",
"suffix": ""
}
],
"year": 2008,
"venue": "Workshop on Web Information and Data Management (WIDM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Klein and Michael L. Nelson. 2008. A com- parison of techniques for estimating IDF values to generate lexical signatures for the web. In Work- shop on Web Information and Data Management (WIDM).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "THU QUANTA at TAC 2009 KBP and RTE track",
"authors": [
{
"first": "Fangtao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhicheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Bu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangtao Li, Zhicheng Zhang, Fan Bu, Yang Tang, Xiaoyan Zhu, and Minlie Huang. 2009. THU QUANTA at TAC 2009 KBP and RTE track. In Text Analysis Conference (TAC).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised personal name disambiguation",
"authors": [
{
"first": "Gideon",
"middle": [
"S"
],
"last": "Mann",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2003,
"venue": "Conference on Natural Language Learning (CONLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon S. Mann and David Yarowsky. 2003. Unsuper- vised personal name disambiguation. In Conference on Natural Language Learning (CONLL).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient clustering of high-dimensional data sets with application to reference matching",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2000,
"venue": "Knowledge Discovery and Data Mining (KDD)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Kamal Nigam, and Lyle Ungar. 2000. Efficient clustering of high-dimensional data sets with application to reference matching. In Knowledge Discovery and Data Mining (KDD).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Overview of the TAC 2009 knowledge base population track",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Hoa",
"middle": [
"Trang"
],
"last": "Dang",
"suffix": ""
}
],
"year": 2009,
"venue": "Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee and Hoa Trang Dang. 2009. Overview of the TAC 2009 knowledge base population track. In Text Analysis Conference (TAC).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "HLTCOE approaches to knowledge base population at TAC",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Mcnamee",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Nikesh",
"middle": [],
"last": "Garera",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Piatko",
"suffix": ""
},
{
"first": "Delip",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
}
],
"year": 2009,
"venue": "Text Analysis Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul McNamee, Mark Dredze, Adam Gerber, Nikesh Garera, Tim Finin, James Mayfield, Christine Pi- atko, Delip Rao, David Yarowsky, and Markus Dreyer. 2009. HLTCOE approaches to knowledge base population at TAC 2009. In Text Analysis Con- ference (TAC).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Turning web text and search queries into factual knowledge: hierarchical class attribute extraction",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": 2008,
"venue": "National Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Pasca. 2008. Turning web text and search queries into factual knowledge: hierarchical class attribute extraction. In National Conference on Ar- tificial Intelligence (AAAI).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploiting lexical and encyclopedic resources for entity disambiguation: Final report",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Day",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Duncan",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Eidelman",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "Giuliano",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Hitzeman",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Jern",
"suffix": ""
},
{
"first": "Mijail",
"middle": [],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "Stanley",
"middle": [],
"last": "Yong",
"suffix": ""
},
{
"first": "Wai",
"middle": [],
"last": "Keong",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2008,
"venue": "JHU CLSP 2007 Summer Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio, David Day, Ron Artstein, Jason Dun- can, Vladimir Eidelman, Claudio Giuliano, Rob Hall, Janet Hitzeman, Alan Jern, Mijail Kabadjov, Stanley Yong, Wai Keong, Gideon Mann, Alessan- dro Moschitti, Simone Ponzetto, Jason Smith, Josef Steinberger, Michael Strube, Jian Su, Yannick Ver- sley, Xiaofeng Yang, and Michael Wick. 2008. Ex- ploiting lexical and encyclopedic resources for en- tity disambiguation: Final report. Technical report, JHU CLSP 2007 Summer Workshop.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Introduction to Modern Information Retrieval",
"authors": [
{
"first": "Gerard",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mcgill",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerard Salton and Michael McGill. 1983. Introduc- tion to Modern Information Retrieval. McGraw- Hill Book Company.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Conference on Natural Language Learning (CONLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: Language- independent named entity recognition. In Confer- ence on Natural Language Learning (CONLL).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Wikipedia as an ontology for describing documents",
"authors": [
{
"first": "Zareen",
"middle": [],
"last": "Syed",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "Anupam",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zareen Syed, Tim Finin, and Anupam Joshi. 2008. Wikipedia as an ontology for describing documents. In Proceedings of the Second International Confer- ence on Weblogs and Social Media. AAAI Press.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Additive analysis: micro-averaged accuracy.",
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"text": "Micro-average results for Microsoft data.",
"num": null
}
}
}
}