|
{ |
|
"paper_id": "Q16-1016", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:06:38.533750Z" |
|
}, |
|
"title": "J-NERD: Joint Named Entity Recognition and Disambiguation with Rich Linguistic Features", |
|
"authors": [ |
|
{ |
|
"first": "Dat", |
|
"middle": [ |
|
"Ba" |
|
], |
|
"last": "Nguyen", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Theobald", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Ulm", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL'03, ACE'05, and ClueWeb'09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.", |
|
"pdf_parse": { |
|
"paper_id": "Q16-1016", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL'03, ACE'05, and ClueWeb'09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Motivation: Methods for Named Entity Recognition and Disambiguation, NERD for short, typically proceed in two stages:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 At the NER stage, text spans of entity mentions are detected and tagged with coarse-grained types like Person, Organization, Location, etc. This is typically performed by a trained Conditional Random Field (CRF) over word sequences (e.g., Finkel et al. (2005) ). \u2022 At the NED stage, mentions are mapped to entities in a knowledge base (KB) based on contextual similarity measures and the semantic coherence of the selected entities (e.g., Cucerzan (2014); Hoffart et al. (2011) ; Ratinov et al. (2011) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 261, |
|
"text": "Finkel et al. (2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 479, |
|
"text": "Hoffart et al. (2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 503, |
|
"text": "Ratinov et al. (2011)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This two-stage approach has limitations. First, NER may produce false positives that can misguide NED. Second, NER may miss out on some entity mentions, and NED has no chance to compensate for these false negatives. Third, NED is not able to help NER, for example, by disambiguating \"easy\" mentions (e.g., of prominent entities with more or less unique names), and then using the entities and knowledge about them as enriched features for NER. Example: Consider the following sentences:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "David played for manu, real, and la galaxy. His wife posh performed with the spice girls. This is difficult for NER because of the absence of upper-case spelling, which is not untypical in social media, for example. Most NER methods will miss out on multi-word mentions or words that are also common nouns (\"spice\") or adjectives (\"posh\", \"real\"). Typically, NER would pass only the mentions \"David\", \"manu\", and \"la\" to the NED stage, which then is prone to many errors like mapping the first two mentions to any prominent people with first names David and Manu, and mapping the third one to the city of Los Angeles. With NER and NED performed jointly, the possible disambiguation of \"la galaxy\" to the soccer club can guide NER to tag the right mentions with the right types (e.g., recognizing that \"manu\" could be a short name for a soccer team), which in turn helps NED to map \"David\" to the right entity David Beckham. Contribution: This paper presents a novel kind of probabilistic graphical model for the joint recognition and disambiguation of named-entity mentions in natural-language texts. With this integrated approach to NERD, we aim to overcome the limitations of the two-stage NER/NED methods discussed above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our method, called J-NERD 1 , is based on a supervised, non-linear graphical model that combines multiple per-sentence models into an entitycoherence-aware global model. The global model detects mention spans, tags them with coarsegrained types, and maps them to entities in a single joint-inference step based on the Viterbi algorithm (for exact inference) or Gibbs sampling (for approximate inference). The J-NERD method comprises the following novel contributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 a tree-shaped model for each sentence, whose structure is derived from the dependency parse tree and thus captures linguistic context in a deeper way compared to prior work with CRF's for NER and NED; \u2022 richer linguistic features not considered in prior work, harnessing dependency parse trees and verbal patterns that indicate mention types as part of their nsubj or dobj arguments; \u2022 an inference method that maintains the uncertainty of both mention candidates (i.e., token spans) and entity candidates for competing mention candidates, and makes joint decisions, as opposed to fixing mentions before reasoning on their disambiguation. We present experiments with three major datasets: the CoNLL'03 collection of newswire articles, the ACE'05 corpus of news and blogs, and the ClueWeb'09-FACC1 corpus of web pages. Baselines that we compare J-NERD with include AIDAlight (Nguyen et al., 2014) , Spotlight (Daiber et al., 2013) , and TagMe (Ferragina and Scaiella, 2010) , and the recent joint NER/NED method of Durrett and Klein (2014) . J-NERD consistently outperforms these competitors in terms of both precision and recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 876, |
|
"end": 897, |
|
"text": "(Nguyen et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 910, |
|
"end": 931, |
|
"text": "(Daiber et al., 2013)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 974, |
|
"text": "(Ferragina and Scaiella, 2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1016, |
|
"end": 1040, |
|
"text": "Durrett and Klein (2014)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "NER: Detecting the boundaries of text spans that denote named entities has been mostly addressed by supervised CRF's over word sequences (McCallum and Li, 2003; Finkel et al., 2005) . The work of Ratinov and Roth (2009) improved these techniques by additional features from context aggregation and external lexical sources (gazetteers, etc.). Passos et 1 The J-NERD source is available at the URL http:// download.mpi-inf.mpg.de/d5/tk/jnerd-tacl.zip.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 160, |
|
"text": "(McCallum and Li, 2003;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 161, |
|
"end": 181, |
|
"text": "Finkel et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 219, |
|
"text": "Ratinov and Roth (2009)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 354, |
|
"text": "Passos et 1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "al. (2014) harnessed skip-gram features and external dictionaries for further improvement. An alternative line of NER techniques is based on dictionaries of name-entity pairs, including nicknames, shorthand names, and paraphrases (e.g., \"the first man on the moon\"). The work of Ferragina and Scaiella (2010) and Mendes et al. (2011) are examples of dictionary-based NER. The work of Spitkovsky and Chang (2012) is an example of a large-scale dictionary that can be harnessed by such methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 308, |
|
"text": "Ferragina and Scaiella (2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 333, |
|
"text": "Mendes et al. (2011)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "An additional output of the CRF's are type tags for the recognized word spans, typically limited to coarse-grained types like Person, Organization, and Location (and also Miscellaneous). The most widely used tool of this kind is the Stanford NER Tagger (Finkel et al., 2005) . Many NED tools use the Stanford NER Tagger in their first stage of detecting mentions. Mention Typing: The specific NER task of inferring semantic types has been further refined and extended by various works on fine-grained typing (e.g., politicians, musicians, singers, guitarists) for entity mentions and general noun phrases (Fleischman and Hovy, 2002; Rahman and Ng, 2010; Ling and Weld, 2012; Yosef et al., 2012; Nakashole et al., 2013) . Most of these works are based on supervised classification, using linguistic features from mentions and their surrounding text. One exception is the work of Nakashole et al. (2013) which is based on text patterns that connect entities of specific types, acquired by sequence mining from the Wikipedia fulltext corpus. In contrast to our work, those are simple surface patterns, and the task addressed here is limited to typing noun phrases that likely denote emerging entities that are not yet registered in a KB. NED: Methods and tools for NED go back to the seminal work of Dill et al. (2003) , Bunescu and Pasca (2006), Cucerzan (2007) , and Milne and Witten (2008) . More recent advances led to open-source tools like the Wikipedia Miner Wikifier (Milne and Witten, 2013) , the Illinois Wikifier (Ratinov et al., 2011) , Spotlight (Mendes et al., 2011) , Semanticizer (Meij et al., 2012) , TagMe (Ferragina and Scaiella, 2010; Cornolti et al., 2014) , and AIDA (Hoffart et al., 2011) with its improved variant AIDA-light (Nguyen et al., 2014) . We choose some, namely, Spotlight, TagMe and AIDA-light, as baselines for our experiments. These are the best-performing, publicly available systems for news and web texts. Most of these methods combine contextual similarity measures with some form of consideration for the coherence among a selected set of candidate entities for disambiguation. The latter aspect can be cast into a variety of computational models, like graph algorithms (Hoffart et al., 2011) , integer linear programming (Ratinov et al., 2011) , or probabilistic graphical models (Kulkarni et al., 2009) . All these methods use the Stanford NER Tagger or dictionarybased matching for their NER stages. Kulkarni et al. (2009) uses an ILP or LP solver (with rounding) for the NED inference, which is computationally expensive. Note that some of the NED tools aim to link not only named entities but also general concepts (e.g. \"world peace\") for which Wikipedia has articles. In this paper, we solely focus on proper entities. Joint NERD: There is little prior work on performing NER and NED jointly. Sil and Yates (2013) , and Durrett and Klein (2014) are the most notable methods. Sil and Yates (2013) first compile a liberal set of mention and entity candidates, and then perform joint ranking of the candidates. Durrett and Klein (2014) present a CRF model for coreference resolution, mention typing, and mention disambiguation. 
Our model is also based on CRF's, but distinguishes itself from prior work in three ways: 1) tree-shaped per-sentence CRF's derived from dependency parse trees, as opposed to merely having connections among mentions and entity candidates; 2) linguistic features about verbal phrases from dependency parse trees; 3) the maintaining of candidates for both mentions and entities and jointly reasoning on their uncertainty. Our experiments include comparisons with the method of Durrett and Klein (2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 274, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 632, |
|
"text": "(Fleischman and Hovy, 2002;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 653, |
|
"text": "Rahman and Ng, 2010;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 674, |
|
"text": "Ling and Weld, 2012;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 694, |
|
"text": "Yosef et al., 2012;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 718, |
|
"text": "Nakashole et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 878, |
|
"end": 901, |
|
"text": "Nakashole et al. (2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1297, |
|
"end": 1315, |
|
"text": "Dill et al. (2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1318, |
|
"end": 1329, |
|
"text": "Bunescu and", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1330, |
|
"end": 1359, |
|
"text": "Pasca (2006), Cucerzan (2007)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1366, |
|
"end": 1389, |
|
"text": "Milne and Witten (2008)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1472, |
|
"end": 1496, |
|
"text": "(Milne and Witten, 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1521, |
|
"end": 1543, |
|
"text": "(Ratinov et al., 2011)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1556, |
|
"end": 1577, |
|
"text": "(Mendes et al., 2011)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1593, |
|
"end": 1612, |
|
"text": "(Meij et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 1621, |
|
"end": 1651, |
|
"text": "(Ferragina and Scaiella, 2010;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1652, |
|
"end": 1674, |
|
"text": "Cornolti et al., 2014)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1686, |
|
"end": 1708, |
|
"text": "(Hoffart et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1735, |
|
"end": 1767, |
|
"text": "AIDA-light (Nguyen et al., 2014)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2209, |
|
"end": 2231, |
|
"text": "(Hoffart et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 2261, |
|
"end": 2283, |
|
"text": "(Ratinov et al., 2011)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 2320, |
|
"end": 2343, |
|
"text": "(Kulkarni et al., 2009)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 2442, |
|
"end": 2464, |
|
"text": "Kulkarni et al. (2009)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 2839, |
|
"end": 2859, |
|
"text": "Sil and Yates (2013)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 2866, |
|
"end": 2890, |
|
"text": "Durrett and Klein (2014)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 2921, |
|
"end": 2941, |
|
"text": "Sil and Yates (2013)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 3054, |
|
"end": 3078, |
|
"text": "Durrett and Klein (2014)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 3646, |
|
"end": 3670, |
|
"text": "Durrett and Klein (2014)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There are also benchmarking efforts on measuring the performance for end-to-end NERD (Cornolti et al., 2013; Carmel et al., 2014; Usbeck et al., 2015) , as opposed to assessing NER and NED separately. However, to the best of our knowledge, none of the participants in these competitions considered integrating NER and NED.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 108, |
|
"text": "(Cornolti et al., 2013;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 109, |
|
"end": 129, |
|
"text": "Carmel et al., 2014;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 130, |
|
"end": 150, |
|
"text": "Usbeck et al., 2015)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 J-NERD Factor Graph Model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To label a sequence of input tokens x 1 , . . . , x m with a sequence of output labels y 1 , . . . , y m , con-sisting of NER types and NED entities, we devise a family of linear-chain and tree-shaped probabilistic graphical models (Koller et al., 2007) . We employ these models to compactly encode a multivariate probability distribution over random variables X \u222a Y, where X denotes the set of input tokens x i we may observe, and Y denotes the set of output labels y i we may associate with these tokens. By writing x, we denote an assignment of tokens to X , while y denotes an assignment of labels to Y. In our running example, \"David\" is the first token x 1 with the desired label y 1 = PER:David Beckham where PER denotes the NER type Person and David Beckham is the entity of interest. Consecutive tokens with identical labels are considered to be entity mentions. For example, for x 5 = la and x 6 = galaxy, the output would ideally be y 5 = ORG:Los Angeles Galaxy and y 6 = ORG:Los Angeles Galaxy, denoting the soccer club. Upfront these are merely candidate labels, though. Our method may alternatively consider the labels y 5 = LOC:Los Angeles and y 6 = MISC:Samsung Galaxy. This would yield incorrect output with two single-token mentions and improper entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 253, |
|
"text": "(Koller et al., 2007)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The feature templates f 1 -f 17 we describe in detail in Section 4 each take the possible assignments x, y of tokens and labels, respectively, as input and give a binary value or real number as output. Binary values denote the presence or absence of a feature (e.g., a particular token); real-valued ones typically denote frequencies of observed features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For tractability, probabilistic graphical models are typically constrained by making conditional independence assumptions, thus imposing structure and locality on X \u222a Y. In our models, we postulate that the following conditional independence assumptions hold:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p(y i | x, y) = p(y i | x, y prev (i) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "That is, the label y i for the i th token directly depends only on the label y prev (i) of some previous token at position prev(i) and potentially on all input tokens. The case where prev (i) = i \u2212 1 is the standard setting for a linear-chain CRF, where the label of a token depends only on the label of its preceding token. We generalize this approach to considering prev(i) tokens based on the edges of a dependency parse tree and prev (i) tokens derived from co-references in preceding sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "By the Hammersley-Clifford Theorem, such a graphical model can be factorized into a product form where each factor captures a subset A \u2286 X \u222aY of the random variables. Typically, each factor considers only those X and Y variables that are coupled by a conditional (in-)dependence assumptions, with overlapping A sets of different factors. The probability distribution encoded by the graphical model can then be expressed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p(x, y) = 1 Z A F A (x A , y A )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Here, F A (x A , y A ) denotes the factors of the model, each of which is of the following form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "F A (x A , y A ) = exp k \u03bb k f A,k (x A , y A ) The normalization constant Z = x,y A F A (x A , y A )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "ensures that this distribution sums up to 1, while \u03bb k are the parameters of the model, which we aim to learn from various annotated background corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
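To make the factor form above concrete, here is a minimal Python sketch (not the authors' implementation): two toy feature functions stand in for the templates f_1-f_17, hand-picked weights stand in for the learned lambda_k, and the unnormalized score of an assignment is the product of per-factor values F_A = exp(sum_k lambda_k * f_{A,k}).

```python
import math

# Toy stand-ins for feature functions f_{A,k}(x_A, y_A): each maps the tokens
# and labels covered by one factor to a binary or real value.
def feat_uppercase(tokens, labels):
    return 1.0 if tokens[0][0].isupper() and labels[0] != "Other" else 0.0

def feat_label_pair(tokens, labels):
    # Fires when two coupled labels agree on the same entity.
    return 1.0 if len(labels) == 2 and labels[0] == labels[1] else 0.0

FEATURES = [feat_uppercase, feat_label_pair]
WEIGHTS = [0.8, 1.5]  # lambda_k; learned from training data in the real system

def factor_value(tokens, labels):
    """F_A(x_A, y_A) = exp(sum_k lambda_k * f_{A,k}(x_A, y_A))."""
    return math.exp(sum(w * f(tokens, labels) for w, f in zip(WEIGHTS, FEATURES)))

def unnormalized_score(factors):
    """Product over all factors; dividing by Z would give p(x, y)."""
    score = 1.0
    for tokens, labels in factors:
        score *= factor_value(tokens, labels)
    return score

# One single-token factor for "David" and one factor coupling "David" and "manu".
print(unnormalized_score([
    (["David"], ["PER:David_Beckham"]),
    (["David", "manu"], ["PER:David_Beckham", "ORG:Manchester_United"]),
]))
```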
|
{ |
|
"text": "Our inference objective then is to find the most probable sequence of labels y * when given the token sequence x as evidence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "y * = arg max y p(y | x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "That is, in our setting, we fix x = tok 1 , . . . , tok m to the observed token sequence, while y = y 1 , . . . , y m ranges over all possible sequences of associated labels. In our approach, which we hence coined J-NERD, each y i label represents a combination of NER type and NED entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "State-of-the-art NER methods, such as the Stanford NER Tagger, employ linear-chain factor graph, known as Conditional Random Fields (CRF's) (Sutton and McCallum, 2012). We also devise more sophisticated tree-shaped factor graphs whose structure is obtained from the dependency parse trees of the input sentences. These per-sentence models are optionally combined into a global factor graph by adding also cross-sentence dependencies (Finkel et al., 2005) . These cross-sentence dependencies are added whenever overlapping sets of entity candidates (i.e., potential co-references) are detected among the input sentences. Figure 3 gives an example of such a global graphical model for two sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 454, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 620, |
|
"end": 628, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The search space of candidate labels for our models depends on the candidates for mention spans (with the same NER type) and their NED entities. We use pruning heuristics to restrict this space: candidate spans for mentions are derived from dictionaries, and we consider only the top-20 entity candidates for each candidate mention. For a given sentence, this typically leads to a few thousand candidate labels over which the CRF inference runs. The candidates are determined independently for each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview", |
|
"sec_num": "3.1" |
|
}, |
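A minimal sketch of this pruning step, assuming a toy name-to-entity dictionary; the real system matches token sub-sequences against the YAGO2 dictionary and ranks candidates by string similarity, prior popularity, and context similarity.

```python
# Hypothetical, simplified candidate generation: look up token sub-sequences
# in a name-to-entity dictionary and keep only the top-n entities per mention.
NAME_DICT = {  # toy dictionary; the real system uses ~6M YAGO2 name-entity pairs
    "david":     [("David_Beckham", 0.6), ("David_Bowie", 0.9)],
    "manu":      [("Manchester_United", 0.7), ("Manu_Chao", 0.5)],
    "la galaxy": [("Los_Angeles_Galaxy", 0.8)],
    "la":        [("Los_Angeles", 0.9)],
    "galaxy":    [("Samsung_Galaxy", 0.4)],
}

def candidate_mentions(tokens, max_len=3, top_n=20):
    """Return {span_text: top-n candidate entities} for all matching sub-sequences."""
    candidates = {}
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1)):
            span = " ".join(tokens[i:j]).lower()
            if span in NAME_DICT:
                ranked = sorted(NAME_DICT[span], key=lambda e: -e[1])
                candidates[span] = [ent for ent, _ in ranked[:top_n]]
    return candidates

print(candidate_mentions(["David", "played", "for", "manu", ",", "real",
                          "and", "la", "galaxy"]))
```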
|
{ |
|
"text": "These models employ a variety of feature templates that generate the factors of the joint probability distribution. Some of the features are fairly standard for NER/NED, whereas others are novel.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Standard features include lexico-syntactic properties of tokens like POS tags, matches in dictionaries/gazetteers, and similarity measures between token strings and entity names. Also, entity-entity coherence is an important feature for NED -not exactly a standard feature, but used in some prior works. \u2022 Features about the topical domain of an input text (e.g., politics, sports, football, etc.) are obtained by a classifier based on \"easy mentions\": those mentions for which the NED decision can be made with very high confidence without advanced features. The use of domains for NED was introduced by Nguyen et al. (2014). Here, we further extend this technique by harnessing domain features for joint inference on NER and NED. \u2022 The third feature group captures typed dependencies from the sentence parsing. To our knowledge, these have not been used in prior work on NER and NED. The NER types that we consider are the standard types PER for person, LOC for location and ORG for organization. All other types that, for example, the Stanford NER Tagger would mark, are collapsed into a type MISC for miscellaneous. These include labels like date and money (which are not genuine entities anyway) and also entity types like events and creative works such as movies, songs, etc. (which are disregarded by the Stanford NER Tagger). We add two dedicated tags for tokens to express the case when no meaningful NER type or NED entity can be assigned. For tokens that should not be labeled as a named entity at all (e.g., \"played\" in our example), we use the tag Other. For tokens with a valid NER type, we add the virtual entity Out-of-KB (for \"out of knowledge base\") to its entity candidates, to prepare for the possible situation where the token (and its surrounding tokens) actually denotes an emerging or long-tail entity that is not contained in the knowledge base.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the local models, J-NERD works on each sentence S = tok 1 , . . . , tok m separately. We construct a linear-chain CRF (see Figure 1 ) by introducing an observed variable x i for each token tok i that represents a proper word. For each x i , we additionally introduce a variable y i that represents the combined NERD label. As in any CRF, the x i , y i and y i , y i+1 pairs are connected via factors F(x, y), whose weights we obtain from the feature functions described in Section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 134, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear-Chain Model", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "David played manu real la galaxy Figure 1 : Linear-chain model (CRF).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 41, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Linear-Chain Model", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The factor graph for the tree-shaped model is constructed in a similar way. However, here we add a factor that links a pair of labels y i , y j if their respective tokens tok i , tok j are connected via a typed dependency which we obtain from the Stanford parser. Figure 2 shows an example of such a tree model. Thus, while the linear-chain model adds factors between labels of adjacent tokens only based on their positions in the sentence, the tree model adds factors based on the dependency parse tree to enhance the coherence of labels across tokens.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 272, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Tree Model", |
|
"sec_num": "3.4" |
|
}, |
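The difference between the two local models lies only in which label pairs get coupled by a factor. Below is a small illustrative sketch, with hand-written dependency edges standing in for the Stanford parser output.

```python
# Sketch: which label pairs (y_i, y_j) are coupled by a factor.
# The dependency edges below are hand-written for illustration; in J-NERD they
# come from the Stanford dependency parser.
tokens = ["David", "played", "manu", "real", "la", "galaxy"]

def chain_edges(n):
    """Linear-chain model: couple each label with the label of the previous token."""
    return [(i - 1, i) for i in range(1, n)]

def tree_edges(dependencies):
    """Tree model: couple labels of tokens linked by a typed dependency."""
    return [(head, dep) for head, dep, _label in dependencies]

# (head index, dependent index, dependency type), e.g. played --nsubj--> David
DEPS = [(1, 0, "nsubj"), (1, 2, "prep_for"), (1, 3, "prep_for"),
        (1, 5, "prep_for"), (5, 4, "det")]

print("chain:", chain_edges(len(tokens)))
print("tree: ", tree_edges(DEPS))
```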
|
{ |
|
"text": "For global models, we consider an entire input text consisting of multiple sentences S 1 , . . . , S n = tok 1 , . . . , tok m , for augmenting either one of the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Models", |
|
"sec_num": "3.5" |
|
}, |
|
|
{ |
|
"text": "[nsubj] [p f ] [p f ] [p f ] [det] Figure 2: Tree model ([p f ] is [prep f or]).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Models", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "linear-chain model or tree-shaped model. As shown in Figure 3 , cross-sentence edges among pairs of labels y i , y j are introduced for candidate sets C i , C j that share at least one candidate entity, such as \"David\" and \"David Beckham\". Additionally, we introduce factors for all pairs of tokens in adjacent mentions within the same sentence, such as \"David\" and \"manu\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Global Models", |
|
"sec_num": "3.5" |
|
}, |
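A sketch of this edge-construction rule over invented candidate sets: it couples two label variables from different sentences whenever their candidate entity sets intersect (possible co-references).

```python
# Sketch of the global-model construction: add a cross-sentence edge between
# two label variables whenever their entity-candidate sets overlap.
from itertools import combinations

CANDIDATES = {           # (sentence id, token position) -> candidate entities (toy values)
    ("S1", 0): {"David_Beckham", "David_Bowie"},   # "David"
    ("S1", 2): {"Manchester_United", "Manu_Chao"}, # "manu"
    ("S2", 0): {"David_Beckham"},                  # "His" (co-reference candidate)
    ("S2", 1): {"Victoria_Beckham"},               # "posh"
}

def cross_sentence_edges(candidates):
    edges = []
    for (pos_a, cands_a), (pos_b, cands_b) in combinations(candidates.items(), 2):
        if pos_a[0] != pos_b[0] and cands_a & cands_b:  # different sentences, shared entity
            edges.append((pos_a, pos_b))
    return edges

print(cross_sentence_edges(CANDIDATES))  # couples "David" (S1) with "His" (S2)
```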
|
{ |
|
"text": "Our inference objective is to find the most probable sequence of NERD labels y * = arg max y p(y | x) according to the objective function we defined in Section 3. Instead of considering the actual distribution", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "p(x, y) = 1 Z A exp k \u03bb k f A,k (x A , y A )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "for this purpose, we aim to maximize an equivalent objective function as follows. Each factor A in our model couples a label variable y t with a variable y prev(t) : either its immediately preceding token in the same sentence, or a parsing-dependency-linked token in the same sentence, or a co-reference-linked token in a different sentence. Each of these factors has its feature functions, and we can regroup these features on a per-token basis given the log-linear nature of the objective function. This leads to the following optimization problem which has its maximum for the same label assignment as the original problem:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "y * = arg max y 1 ...ym exp m t=1 K k=1 \u03bb k feature k (y t , y prev(t) , x 1 . . . x m )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "\u2022 prev (t) is the index of label y j on which y t depends, \u2022 feature 1..K are the feature functions generated from templates f 1 -f 17 of Section 4, y1 y2 y3 y4 y5 y6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
|
{ |
|
"text": "x \u2022 and \u03bb k are the feature weights, i.e., the model parameters to be learned. The actual number of generated features, K, depends on the training corpus and the choice of the graphical model. For the CoNLL-YAGO2 training set, the tree models have K = 1, 767 parameters. Given a trained model, exact inference with respect to the above objective function can be efficiently performed by variants of the Viterbi algorithm (Sutton and McCallum, 2012) for the local models, both in the linear-chain and tree-shaped cases. For the global models, however, exact solutions are computationally intractable. Therefore, we employ Gibbs sampling (Finkel et al., 2005) to approximate the solution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 636, |
|
"end": 657, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
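For illustration, a minimal Viterbi decoder for the linear-chain case; the per-token and transition scoring functions below are placeholders for the learned sums of weighted features, not the actual J-NERD scores.

```python
# Minimal Viterbi decoder for a linear-chain model: y* maximizes the sum of
# per-token scores plus transition scores, i.e. the log of the objective above.
# score_token and score_trans stand in for sum_k lambda_k * feature_k(...).
def viterbi(tokens, labels, score_token, score_trans):
    best = [{y: score_token(tokens, 0, y) for y in labels}]
    back = [{}]
    for t in range(1, len(tokens)):
        best.append({})
        back.append({})
        for y in labels:
            prev_y = max(labels, key=lambda p: best[t - 1][p] + score_trans(p, y))
            best[t][y] = best[t - 1][prev_y] + score_trans(prev_y, y) + score_token(tokens, t, y)
            back[t][y] = prev_y
    # Follow back-pointers from the best final label.
    y = max(labels, key=lambda l: best[-1][l])
    path = [y]
    for t in range(len(tokens) - 1, 0, -1):
        y = back[t][y]
        path.append(y)
    return list(reversed(path))

LABELS = ["PER:David_Beckham", "ORG:Manchester_United", "Other"]
tok_score = lambda toks, t, y: 1.0 if toks[t][0].isupper() == (y != "Other") else 0.0
trans_score = lambda p, y: 0.5 if p == y else 0.0
print(viterbi(["David", "played", "manu"], LABELS, tok_score, trans_score))
```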
|
{ |
|
"text": "As for the model parameters, J-NERD learns the feature weights \u03bb k from the training data by maximizing a respective conditional likelihood function (Sutton and McCallum, 2012) , using a variant of the L-BFGS optimization algorithm (Liu and Nocedal, 1989) . We do this for each local model (linear-chain and tree models), and apply the same learned weights to the corresponding global models. Our implementation uses the RISO toolkit 2 for belief networks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 176, |
|
"text": "(Sutton and McCallum, 2012)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 255, |
|
"text": "(Liu and Nocedal, 1989)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference & Learning", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "We define feature templates for detecting the combined NER/NED labels of token that denote or are part of an entity mention. Once these labels are determined, the actual boundaries of the mentions, i.e., their token spans, are trivially derived by combining adjacent tokens with the same label (and disregarding all tokens with the tag Other). Language Preprocessing. We employ the Stan-2 http://riso.sourceforge.net/ ford CoreNLP tool suite 3 for processing input documents. This includes tokenization, sentence detection, POS tagging, lemmatization, and dependency parsing. All of these provide features for our graphical model. In particular, we harness dependency types between noun phrases (de Marneffe et al., 2006) , like nsubj, dobj, prep in, prep for, etc.", |
|
"cite_spans": [ |
|
{ |
|
"start": 695, |
|
"end": 721, |
|
"text": "(de Marneffe et al., 2006)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the following, we introduce the complete set of feature templates f 1 through f 17 used by our method. Templates are instantiated based on the observed input and the candidate space of possible labels for this input, and guided by distant resources like knowledge bases and dictionaries. Templates f 1 , f 8 -f 13 , f 17 generate real numbers as values derived from frequencies in training data; all other templates generate binary values denoting presence or absence of certain features. The generated feature values depend on the assignment of input tokens to variables x i \u2208 X . In addition, our graphical models often consider only a specific subset of candidate labels as assignments to the output variables y i \u2208 Y. Therefore, we formulate the feature-generation process as a set of feature functions that depend on both (per-factor subsets of) X and Y. Table 1 illustrates the feature generation by the set of active feature functions for the token \"manu\" in our running example, using three different candidate labels. Entity Repository and Name-Entity Dictionary. Many feature templates harness a knowledge base, namely, YAGO2 (Hoffart et al., 2013) , as an entity repository and as a dictionary of name-to-entity pairs (i.e., aliases and paraphrases). We import the YAGO2 means and hasName relations, a total of more than 6 Million name-entity pairs (for ca. 3 Million distinct entities). We derive additional Table 1 : Positive features (value set to true or real number > 0) for the token \"manu\" (x 3 ) with candidate labels ORG:Manchester United F.C. (y 3 ), PER:Manu Chao (y 3 ) and Other (y 3 ). The domain is Football and the linguistic pattern is prep for [played, *] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1139, |
|
"end": 1161, |
|
"text": "(Hoffart et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1676, |
|
"end": 1687, |
|
"text": "[played, *]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 863, |
|
"end": 870, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1423, |
|
"end": 1430, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "y 3 y 3 y 3 f 1 : Token-Type Prior f 2 : Current POS f 3 : In-Dictionary f 4 : Uppercase f 5 : Surrounding POS f 6 : Surrounding Tokens f 7 : Surrounding In-Dictionary f 8 : Token-Entity Prior f 9 : Token-Entity n-Gram Similarity f 10 : Token-Entity Token Contexts f 11 : Entity-Entity Token Coherence f 12 : Entity-Domain Coherence f 13 : Entity-Entity Type Coherence f 14 : Typed-Dependency f 15 : Typed-Dependency/POS f 16 : Typed-Dependency/In-Dictionary f 17 : Token-Entity Linguistic Contexts NER-type-specific phrase dictionaries from supporting phrases of GATE (Cunningham et al., 2011) , e.g., \"Mr.\", \"Mrs.\", \"Dr.\", \"President\", etc. for the type PER; \"city\", \"river\", \"park\", etc. for the type LOC; \"company\", \"institute\", \"Inc.\", \"Ltd.\", etc. for the type ORG.", |
|
"cite_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 594, |
|
"text": "(Cunningham et al., 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Pruning the Candidate Space. To reduce the dimensionality of the generated feature space and to make the factor-graph inference tractable, we use pruning techniques based on the knowledge base and the dictionaries. To determine if a token can be a mention or part of a mention, we first perform exact-match lookups of all sub-sequences against the name-entity dictionary. As an option (and by default), this can be limited to sub-sequences that are tagged as noun phrases by the Stanford parser. For higher recall, we then add partial-match lookups when a token sub-sequence matches only some but not all tokens of an entity name in the dictionary. For example, for the sentence \"David played for manu, real and la galaxy\", we obtain \"David\", \"manu\", \"real\", \"la galaxy\", \"la\", and \"galaxy\" as candidate mentions. For each such candidate mention, we look up the knowledge base for entities and consider only the best n (using n = 20 in our experiments) highest ranked candidate entities. The ranking is based on the string similarity between the mention and the entity name, the prior popularity of the entity, and the local context similarity (using feature functions f 8 , f 9 , f 10 described in Subsection 4.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the following definitions of the feature templates, let pos i denote the POS tag of tok i , dic i denote the NER tag from the dictionary lookup of tok i , and dep i denote the parsing dependency that connects tok i with another token. Further, we write sur i = tok i\u22121 , tok i , tok i+1 to refer to the sequence of tokens surrounding tok i . As for the possible labels, we denote by type i and ent i an NER type and candidate entity for the current token tok i , respectively. Token-Type Prior. Feature f 1 (type i , tok i ) captures a prior probability for tok i being of NER type type i . These probabilities are estimated from an NERannotated training corpus. In our experiments, we used training subsets of different test corpora such as CoNLL. For example, we may thus obtain a prior of f 1 (ORG, \"Ltd.\") = 0.8. Current POS. Template f 2 (type i , tok i ) generates a binary feature function if token tok i occurs in the training corpus with POS tag pos i and NER label type i . For example, f 2 (PER, \"David\") = 1 if the current token \"David\" has occurred with POS tag NNP and NER label PER in the training corpus. For combinations of tokens with POS tags and NER types that do not occur in the training corpus, no actual feature function is generated from the template (i.e., the value of function would be 0). For the rest of this section, we assume that all binary feature functions are generated from their feature templates in an analogous manner. In-Dictionary. Template f 3 (type i , tok i ) generates a binary feature function if the current token tok i occurs in the name-to-entity dictionary for some entity of NER label type i . Uppercase. Template f 4 (type i , tok i ) generates a binary feature function if the current token tok i appears in upper-case form and additionally has the NER label type i in the training corpus. Surrounding POS. Template f 5 (type i , tok i ) generates a binary feature function if the current token tok i and the POS sequence of its surrounding tokens sur i both appear in the training corpus, where tok i also has the NER label type i . Surrounding Tokens. Template f 6 (type i , tok i ) generates a binary feature function if the current token tok i has NER label type i , given that tok i also appears with surrounding tokens sur i in the training corpus. When instantiated, this template could possibly lead to a huge number of feature functions. For tractability, we thus ignore sequences that occur only once in the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standard Features", |
|
"sec_num": "4.1" |
|
}, |
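As an illustration of how such binary templates are instantiated, here is a sketch for an f_2-style (Current POS) template over an invented training corpus; a feature function is only generated for combinations observed in training.

```python
# Sketch: instantiate the binary "Current POS" template (f_2) from training data.
# A feature function is generated only for (token, POS, NER type) combinations
# that actually occur in the (toy) annotated corpus.
TRAIN = [  # (token, POS tag, gold NER type) -- invented examples
    ("David", "NNP", "PER"),
    ("Beckham", "NNP", "PER"),
    ("Madrid", "NNP", "LOC"),
    ("played", "VBD", "Other"),
]

seen = {(tok, pos, ner) for tok, pos, ner in TRAIN}

def make_f2(token, pos, ner_type):
    """Return a binary feature function, or None if the combination was never observed."""
    if (token, pos, ner_type) not in seen:
        return None
    return lambda cur_token, cur_pos: 1 if (cur_token, cur_pos) == (token, pos) else 0

f2_david_per = make_f2("David", "NNP", "PER")
print(f2_david_per("David", "NNP"))   # 1: feature fires
print(make_f2("David", "NNP", "LOC")) # None: no feature generated for this label
```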
|
{ |
|
"text": "In-Dictionary. Template f 7 (type i , tok i ) performs dictionary lookups for surrounding tokens in sur i . Similar to f 6 , it generates a binary feature function if the current token tok i and the dictionary lookups of its surrounding tokens sur i appear in the training corpus, where tok i also has NER label type i . Token-Entity Prior.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Feature f 8 (ent i , tok i ) captures a prior probability of tok i having NED label ent i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These probabilities are estimated from co-occurrence frequencies of name-to-entity pairs in the background corpus, thus harnessing link-anchor texts in Wikipedia. For example, we may have a prior of f 8 (David Beckham, \"Beckham\") = 0.7, as David is more popular (today) than his wife Victoria.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "On the other hand, f 8 (David Beckham, \"David\") may be lower than f 8 (David Bowie, \"David\"), for example, as this still active pop star is more frequently and prominently mentioned than the retired football player. Token-Entity n-Gram Similarity. Feature f 9 (ent i , tok i ) measures the Jaccard similarity of character-level n-grams of a name in the dictionary that includes tok i and is the primary (i.e., full and most frequently used) name of an entity ent j . For example, for n = 2 the value of f 9 (David Beckham, \"Becks\") is 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "11 . In our experiments, we set n = 3. Token-Entity Token Contexts. Feature f 10 (ent i , tok i ) measures the weighted overlap similarity between the token contexts (tok-cxt) of token tok i and entity ent j . Specifically, we use a weighted generalization of the standard overlap coefficient, WO, between two sets X, Y of weighted elements, X k \u2208 X and Y k \u2208 Y :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WO(X, Y ) = k min(X k , Y k ) min( k X k , k Y k )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We set the weights to be tf-idf scores, and hence we obtain:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f 10 (ent i , tok i ) = WO tok-cxt(ent i ), tok-cxt(tok i ) Entity-Entity Token Coherence. Feature f 11 (ent i , ent j ) measures the coherence between the token contexts of two entity candidates ent i and ent j :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f 11 (ent i , ent j ) = WO tok-cxt(ent i ), tok-cxt(ent j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f 11 allows us to establish cross-dependencies among labels in our graphical model. For example, the two entities David Beckham and Manchester United are highly coherent as they share many tokens in their contexts, such as \"champions\", \"league\", \"premier\", \"cup\", etc. Thus, they should mapped jointly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Surrounding", |
|
"sec_num": null |
|
}, |
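A small sketch of the weighted-overlap computation behind f_10 and f_11, with invented tf-idf-weighted contexts.

```python
# Weighted overlap WO(X, Y) = sum_k min(X_k, Y_k) / min(sum_k X_k, sum_k Y_k),
# over tf-idf-weighted token contexts. The contexts below are toy values.
def weighted_overlap(x, y):
    shared = set(x) & set(y)
    num = sum(min(x[k], y[k]) for k in shared)
    den = min(sum(x.values()), sum(y.values()))
    return num / den if den > 0 else 0.0

tok_cxt = {  # token or entity -> {context token: tf-idf weight}
    "David_Beckham":     {"champions": 2.1, "league": 1.7, "premier": 1.5, "cup": 1.2},
    "Manchester_United": {"champions": 2.4, "league": 1.9, "cup": 1.0, "trafford": 2.8},
    "manu":              {"played": 0.8, "league": 1.1},
}

# f_10: token context vs. entity context; f_11: entity context vs. entity context.
print(weighted_overlap(tok_cxt["manu"], tok_cxt["Manchester_United"]))
print(weighted_overlap(tok_cxt["David_Beckham"], tok_cxt["Manchester_United"]))
```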
|
{ |
|
"text": "We use WordNet domains, created by Miller (1995), Magnini and Cavagli (2000) , and Bentivogli et al. (2004) , to construct a taxonomy of 46 domains, including Politics, Economy, Sports, Science, Medicine, Biology, Art, Music, etc. We combine the domains with semantic types (classes of entities) provided by YAGO2, by assigning them to their respective domains. This is based on the manual assignment of WordNet synsets to domains, introduced by Magnini and Cavagli (2000) , and Bentivogli et al. (2004) , and extends to additional types in YAGO2. For example, Singer is assigned to Music, and Football Player to Football, a sub-domain of Sports. These types include the standard NER types Person (PER), Organization (ORG), Location (LOC), and Miscellaneous (MISC) which are further refined by the YAGO2 subclassOf hierarchy. In total, the 46 domains are enhanced with ca. 350,000 types imported from YAGO2. J-NERD classifies input texts into domains by means of \"easy mentions\". An easy mention is a match in the name-to-entity dictionary for which there exist at most three candidate entities (Nguyen et al., 2014) . Although the mention boundaries are not explicitly provided as input, J-NERD still can extract these easy mentions from the entirety of all mention candidates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 76, |
|
"text": "Magnini and Cavagli (2000)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 107, |
|
"text": "Bentivogli et al. (2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 472, |
|
"text": "Magnini and Cavagli (2000)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 503, |
|
"text": "Bentivogli et al. (2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1095, |
|
"end": 1116, |
|
"text": "(Nguyen et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In the following, let C * be the set of candidate entities for the \"easy\" mentions in the input document. For each domain d (see Section 3), we compute the coherence of the easy mentions M * = {m 1 , m 2 , . . . }:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "coh(M * ) = |C * \u2229 C d | |C * |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where C d is the set of all entities under domain d.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We classify the document into the domain with the highest coherence score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
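A sketch of this domain classifier over invented easy-mention candidates and domain memberships; the real taxonomy comes from WordNet domains and YAGO2 types.

```python
# Sketch of the domain classifier: coh(M*) = |C* intersect C_d| / |C*|, where C*
# are the candidate entities of the "easy" mentions and C_d the entities of
# domain d. Domain memberships below are toy data.
DOMAIN_ENTITIES = {
    "Football": {"David_Beckham", "Manchester_United", "Los_Angeles_Galaxy"},
    "Music":    {"Manu_Chao", "Spice_Girls", "David_Bowie"},
}

def classify_domain(easy_candidates):
    """Pick the domain with the highest coherence over the easy-mention candidates."""
    c_star = set().union(*easy_candidates)
    scores = {d: len(c_star & ents) / len(c_star) for d, ents in DOMAIN_ENTITIES.items()}
    return max(scores, key=scores.get), scores

easy = [{"Los_Angeles_Galaxy"}, {"Manchester_United"}, {"David_Beckham", "David_Bowie"}]
print(classify_domain(easy))  # Football wins with coherence 3/4
```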
|
{ |
|
"text": "Although the mentions and their entities may be inferred incorrectly, the domain classification still tends to work very reliably as it aggregates over all \"easy\" mention candidates. The following feature templates exploit domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Coherence. Template f 12 (ent i , tok i ) generates a binary feature function that captures the coherence between an entity candidate ent i of token tok i and the domain d which the input text is classified into. That is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity-Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f 12 (ent i , tok i ) = 1 if d \u2208 dom(ent i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity-Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Otherwise, the feature value is 0. Entity-Entity Type Coherence. Feature f 13 (ent i , ent j ) computes the relatedness between the Wikipedia categories of two candidate entities ent i \u2208 C i and ent j \u2208 C j , where C i , C j denote the two sets of candidate entities associated with tok i , tok j , respectively:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity-Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "f 13 (ent i , ent j ) = max cu\u2208cat(ent i ) cv\u2208cat(ent j ) rel(c u , c v )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity-Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where the function rel(c u , c v ) computes the reciprocal length of the shortest path between categories c u , c v in the domain taxonomy (Nguyen et al., 2014) . Recall that our domain taxonomy contains a few hundred thousands of Wikipedia categories integrated in the YAGO2 type hierarchy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 160, |
|
"text": "(Nguyen et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity-Domain", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recall that we harvest dependency-parsing patterns by using Wikipedia as a large background corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Here we harness that Wikipedia contains many mentions with explicit links to entities and that the knowledge base provides us with the NER types for these entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Typed-Dependency. Template f 14 (type i , tok i ) generates a binary feature function if the background corpus contains the pattern dep i = deptype(arg1 , arg2 ) where the current token tok i is either arg1 or arg2 , and tok i is labeled with NER label type i . Typed-Dependency/POS. Template f 15 (type i , tok i ) captures linguistic patterns that combine parsing dependencies (like in f 14 ) and POS tags (like in f 2 ) learned from an annotated training corpus. It generates binary features if the current token tok i appears in the dependency pattern dep i with POS tag pos i and this combination also occurs in the training data under NER label type i . Typed-Dependency/In-Dictionary. Template f 16 (type i , tok i ) captures linguistic patterns that combine parsing dependencies (like in f 14 ) and dictionary lookups (like in f 3 ) learned from an annotated training corpus. It generates a binary feature function if the current token tok i appears in the dependency pattern dep i and has an entry dic i in the name-to-entity dictionary for some entity with NER label type i . Token-Entity Linguistic Contexts. Feature f 17 (ent i , tok i ) measures the weighted overlap between the linguistic contexts (ling-cxt) of token tok i and candidate entity ent i :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "f 17 (ent i , tok i ) = WO ling-cxt(ent i ), ling-cxt(tok i ) 5 Experiments 5.1 Data Collections", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Our evaluation is mainly based on the CoNLL-YAGO2 corpus of newswire articles. Additionally, we report on experiments with an extended version of the ACE-2005 corpus and a large sample of the entity-annotated ClueWeb'09-FACC1 Web crawl. CoNLL-YAGO2 is derived from the CoNLL-YAGO corpus (Hoffart et al., 2011) 4 by removing tables where mentions in table cells do not have linguistic context; a typical example is sports results. The resulting corpus contains 1,244 documents with 20,924 mentions including 4,774 Out-of-KB entities. Ground-truth entities in YAGO2 are provided by Hoffart et al. (2011) . For a consistent ground-truth set, we derived the NER types from the NED ground-truth entities, fixing some errors in the original annotations related to metonymy (e.g., labeling the mentions in \"India beats Pakistan 2:1\" incorrectly as LOC, whereas the entities are the sports teams of type ORG). This makes the dataset not only cleaner but also more demanding, as metonymous mentions are among the most difficult cases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 309, |
|
"text": "(Hoffart et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 601, |
|
"text": "Hoffart et al. (2011)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For our evaluation, we use the \"testb\" subset of CoNLL-YAGO, which -after the removal of tables -has 231 documents with 5,616 mentions including 1,131 Out-of-KB entities. The other 1,045 documents with a total of 17,870 mentions (including 4,057 Out-of-KB mentions) are used for training. ACE is an extended variant of the ACE 2005 corpus 5 , with additional NED labels by Bentivogli et al. (2010) . We consider only proper entities and exclude mentions of general concepts such as \"revenue\", \"world economy\", \"financial crisis\", etc., as they do not correspond to individual entities in a knowledge base. This reduces the number of mentions, but gives the task a crisp focus. We disallow overlapping mention spans and consider only maximum-length mentions, following the rationale of the ERD Challenge 2014. The test set contains 117 documents with 2,958 mentions. ClueWeb contains two randomly sampled subsets of the ClueWeb'09-FACC1 6 corpus with Freebase annotations:", |
|
"cite_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 397, |
|
"text": "Bentivogli et al. (2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 ClueWeb: 1,000 documents (24,289 mentions) each with at least 5 entities. \u2022 ClueWeb long\u2212tail : 1,000 documents (49,604 mentions) each with at least 3 long-tail entities. We consider an entity to be \"long-tail\" if it has at most 10 incoming links in the English Wikipedia. Note that these Web documents are very different in style from the news-centric articles in CoNLL and ACE. Also note that the entity markup is automatically generated, but with emphasis on high precision. So the data captures only a small subset of the potential entity mentions, and it may contain a small fraction of false entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In addition to these larger test corpora, we ran experiments with several smaller datasets used in prior work: KORE , MSNBC (Cucerzan, 2007) , and a subset of AQUAINT (Milne and Witten, 2008) . Each of these has only a few hundred mentions, but they exhibit different characteristics. The findings on these datasets are fully in line with those of our main experiments; hence no explicit results are presented here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 140, |
|
"text": "(Cucerzan, 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 191, |
|
"text": "(Milne and Witten, 2008)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In all of these test datasets, the ground-truth considers only individual entities and excludes general concepts, such as \"climate change\", \"harmony\", \"logic\", \"algebra\", etc. These proper entities are identified by the intersection of Wikipedia articles and YAGO2 entities. This way, we focus on NERD. Systems that are designed for the broader task of \"Wikification\" are not penalized by their (typically lower) performance on inputs other than proper entity mentions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Features", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We compare J-NERD in its four variants (linear vs. tree and local vs. global) to various state-of-the-art NER/NED methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods under Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For NER (i.e., mention boundaries and types) we use the recent version 3.4.1 of the Stanford NER Tagger 7 (Finkel et al., 2005) and the recent version 2.8.4 of the Illinois Tagger 8 (Ratinov and Roth, 2009) as baselines. These systems have NER benchmark results on CoNLL'03 that are as good as the result reported in Passos et al. (2014) . We retrained this model by using the same corpus-specific training data that we use for J-NERD .", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 127, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 206, |
|
"text": "(Ratinov and Roth, 2009)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 337, |
|
"text": "Passos et al. (2014)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods under Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For NED, we compared J-NERD against the following methods for which we obtained open-source software or could call a Web service:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods under Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 Berkeley-entity (Durrett and Klein, 2014 ) uses a joint model for coreference resolution, NER and NED with linkage to Wikipedia. \u2022 AIDA-light (Nguyen et al., 2014) is an optimized variant of the AIDA system (Hoffart et al., 2011) , based on YAGO2. It uses the Stanford tool for NER. \u2022 TagMe (Ferragina and Scaiella, 2010 ) is a Wikifier that maps mentions to entities or concepts in Wikipedia. It uses a Wikipedia-derived dictionary for NER. \u2022 Spotlight (Mendes et al., 2011) links mentions to entities in DBpedia. It uses the LingPipe dictionary-based chunker for NER. Some systems use confidence thresholds to decide on when to map a mention to Out-of-KB. For each dataset, we used withheld data to tune these systemspecific thresholds. Figure 4 illustrates the sensitivity of the thresholds for the CoNLL-YAGO2 dataset. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 42, |
|
"text": "(Durrett and Klein, 2014", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 231, |
|
"text": "(Hoffart et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 322, |
|
"text": "(Ferragina and Scaiella, 2010", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 477, |
|
"text": "(Mendes et al., 2011)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 741, |
|
"end": 749, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methods under Comparison", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We evalute the output quality at the NER level alone and for the end-to-end NERD task. We do not evaluate NED alone, as this would require giving a ground-truth set of mentions to the systems to rule out that NER errors affect NED. Most competitors do not have interfaces for such a controlled NEDonly evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Each test collection has ground-truth annotations (G) consisting of text spans for mentions, NER types of the mentions, and mapping mentions to entities in the KB or to Out-of-KB. Recall that the Out-of-KB case captures entities that are not in the KB at all. Let X be the output of system X: detected mentions, NER types, NED mappings. Following the ERD 2014 Challenge (Carmel et al., 2014) , we define precision and recall of X for endto-end NERD as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 391, |
|
"text": "(Carmel et al., 2014)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Prec(X) = |X agrees with G|/|X| Rec(X) = |X agrees with G|/|G|", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where agreement means that X and G overlap in the text spans (i.e., have at least one token in common) for a mention, have the same NER type, and have the same mapping to an entity or Out-of-KB. The F 1 score of X is the harmonic mean of precision and recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
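The scoring just defined can be made concrete with a small sketch (an illustration under simplifying assumptions, not the official ERD scorer): a prediction agrees with a gold annotation if the spans share at least one token, the NER types match, and the entity (or Out-of-KB marker) matches.

```python
# Illustrative computation of the end-to-end NERD scores defined above.
# The data layout (tuples of (start, end, ner_type, entity)) is an
# assumption of this sketch, not the original evaluation code.

def agrees(pred, gold):
    (ps, pe, ptype, pent), (gs, ge, gtype, gent) = pred, gold
    overlap = max(ps, gs) < min(pe, ge)          # share at least one token
    return overlap and ptype == gtype and pent == gent

def nerd_scores(predictions, gold_annotations):
    matched_gold, correct = set(), 0
    for p in predictions:
        for i, g in enumerate(gold_annotations):
            if i not in matched_gold and agrees(p, g):
                matched_gold.add(i)
                correct += 1
                break
    prec = correct / len(predictions) if predictions else 0.0
    rec = correct / len(gold_annotations) if gold_annotations else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy usage: one correct mention, one NER-type error.
gold = [(0, 2, "PER", "David_Beckham"), (5, 6, "ORG", "England_cricket_team")]
pred = [(0, 2, "PER", "David_Beckham"), (5, 6, "LOC", "England")]
print(nerd_scores(pred, gold))   # (0.5, 0.5, 0.5)
```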
|
{ |
|
"text": "For evaluating the mention-boundary detection alone, we consider only the overlap of text spans; for evaluating NER completely, we consider both mention overlap and agreement based on the assigned NER types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Measures", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Our first experiment on CoNLL-YAGO2 is comparing the four CRF variants of J-NERD for three tasks: mention boundary detection, NER typing and endto-end NERD. Then, the best model of J-NERD is compared against various baselines and a pipelined configuration of our method. Finally, we test the influence of different features groups. Table 2 compares the different CRF variants. All CRFs have the same features, but differ in their factors. Therefore, some features are not effective for the linear model and the tree model. For the linear CRF, the parsing-based linguistic features and the cross-sentence features do not contribute; for the tree CRF, the cross-sentence features are not effective. We see that all variants perform very well on boundary detection and NER typing, with small differences only. For end-to-end NERD, however, J-NERD tree-global outperforms all other variants by a large margin. This results in achieving the best F 1 score of 78.7%, which is 2.6% higher than J-NERD linear-global . We performed a paired t-test between these two variants, and obtained a p-value of 0.01. The local variants of J-NERD lose around 4% of F 1 because they do not capture the coherence among mentions in different sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 332, |
|
"end": 339, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results for CoNLL-YAGO2", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In the rest of our experiments, we focus on J-NERD tree-global and the task of end-to-end NERD.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on CRF Variants", |
|
"sec_num": "5.4.1" |
|
}, |
|
{ |
|
"text": "In this subsection, we demonstrate the benefits of joint models against pipelined models including state-of-the-art baselines. In addition to the competitors introduced in Section 5.2, we add a pipelined configuration of J-NERD , coined P-NERD. That is, we first run J-NERD in NER mode (thus only considering NER features f 1..7 and f 14..16 ). The best sequence of NER labels is then given to J-NERD to run in NED mode (only considering NED features f 8..13 and f 17 ). The results are shown in Table 3 . J-NERD achieves the highest precision of 81.9% for endto-end NERD, outperforming all competitors by a significant margin. This results in achieving the best F 1 score of 78.7%, which is 1.2% higher than P-NERD and 1.4% higher than AIDA-light. Note that Nguyen et al. (2014) reported higher precision for AIDA-light, but that experiment did not consider Out-of-KB entities which pose an extra difficulty in our setting. Also, the test corpora -CoNLL-YAGO2 vs. CoNLL-YAGO -are not quite comparable (see above).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 503, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison of Joint vs. Pipelined Models and Baselines", |
|
"sec_num": "5.4.2" |
|
}, |
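The difference between the pipelined and the joint configuration can be summarized in a short sketch; run_crf and its arguments are placeholders for this illustration, not the actual interface of our implementation.

```python
# Hedged sketch of P-NERD (pipelined) vs. J-NERD (joint) as described above.
# The feature groups mirror the paper's indices; run_crf stands in for the
# CRF inference call and is not the real API.

NER_FEATURES = [f"f{i}" for i in (*range(1, 8), *range(14, 17))]   # f1..f7, f14..f16
NED_FEATURES = [f"f{i}" for i in (*range(8, 14), 17)]              # f8..f13, f17

def p_nerd(document, run_crf):
    """Pipelined baseline: decide NER first, then NED given fixed NER labels."""
    ner_labels = run_crf(document, features=NER_FEATURES, decide="ner")
    entities = run_crf(document, features=NED_FEATURES, decide="ned",
                       fixed_ner=ner_labels)
    return ner_labels, entities

def j_nerd(document, run_crf):
    """Joint model: one inference pass over both label layers."""
    return run_crf(document, features=NER_FEATURES + NED_FEATURES,
                   decide="joint")

# Toy stand-in for the CRF inference call, just to make the sketch executable.
def dummy_crf(document, features, decide, fixed_ner=None):
    return {"decide": decide, "n_features": len(features)}

print(p_nerd(["Beckham", "was", "born", "in", "London"], dummy_crf))
```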
|
{ |
|
"text": "TagMe and Spotlight are clearly inferior on this dataset (more than 20% lower in F 1 than J-NERD). These systems are more geared towards efficiency and coping with popular and thus frequent entities, whereas the CoNLL-YAGO2 dataset contains very difficult test cases. For the best F 1 score of J-NERD, we performed a paired t-test against the other methods' F 1 values and obtained a p-value of 0.075.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Joint vs. Pipelined Models and Baselines", |
|
"sec_num": "5.4.2" |
|
}, |
|
{ |
|
"text": "We also compared the NER performance of J-NERD against the state-of-the-art method for NER alone, the Stanford NER Tagger version 3.4.1 and the Illinois Tagger 2.8.4 (Table 4 ). For mention boundary detection, J-NERD achieved an F 1 score of 93.1% versus 93.4% by Stanford NER, 93.3% by ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 174, |
|
"text": "(Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison of Joint vs. Pipelined Models and Baselines", |
|
"sec_num": "5.4.2" |
|
}, |
|
{ |
|
"text": "To analyze the influence of the features, we performed an additional ablation study on the global J-NERD tree model, which is the best variant of J-NERD , as follows: Table 5 shows the results, demonstrating that linguistic features are crucial for both NER and NERD. For example, in the sentence \"Woolmer played 19 tests for England\", the mention \"England\" refers to an organization (the English cricket team), not to a location. The dependency-type feature prep for[play, England] is a decisive cue to handle such cases properly. Domain features help in NED to eliminate, for example, football teams when the domain is cricket.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 174, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Influence of Features", |
|
"sec_num": "5.4.3" |
|
}, |
|
|
{ |
|
"text": "For comparison to the recently developed Berkeleyentity system (Durrett and Klein, 2014) , the authors of that system provided us with detailed results for the entity-annotated ACE'2005 corpus, which allowed us to discount non-entity (so-called \"NOMtype\") mappings (see Subsection 5.1). All other systems, including the best J-NERD method, were run on the corpus under the same conditions. J-NERD outperforms P-NERD and Berkeleyentity: F 1 scores are 1.3% and 1.8% better, respectively, with a t-test p-value of 0.05 (Table 6 ). Following these three best-performing systems, AIDAlight also achieves decent results. The other systems show substantially inferior performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 88, |
|
"text": "(Durrett and Klein, 2014)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 525, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "End-to-End NERD on ACE", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "The performance gains that J-NERD achieves over Berkeley-entity can be attributed to two factors. First, the rich linguistic features of J-NERD help to correctly cope with more of the difficult cases, e.g., when common nouns are actually names of people. Second, the coherence features of global J-NERD help to properly couple decisions on related entity mentions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "End-to-End NERD on ACE", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "The results for ClueWeb are shown in Table 7 . Again, J-NERD outperforms all other systems with a t-test p-value of 0.05. The differences between J-NERD and fast NED systems such as TagMe or SpotLight become smaller as the number of prominent entities (i.e., prominent people, organizations and locations) is higher on ClueWeb than on CoNLL-YAGO2. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 44, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "End-to-End NERD on ClueWeb", |
|
"sec_num": "5.6" |
|
}, |
|
{ |
|
"text": "We have shown that coupling the tasks of NER and NED in a joint CRF-like model is beneficial. Our J-NERD method outperforms strong baselines on a variety of test datasets. The strength of J-NERD comes from three novel assets. First, our treeshaped models capture the structure of dependency parse trees, and we couple multiple such tree models across sentences. Second, we harness non-standard features about domains and novel features based on linguistic patterns derived from parsing. Third, our joint inference maintains uncertain candidates for both mentions and entities and makes decisions as late as possible. In our future work, we plan to explore more use cases for joint NERD, especially for content analytics over news streams and social media.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
|
{ |
|
"text": "http://stanfordnlp.github.io/CoreNLP/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.mpi-inf.mpg.de/yago-naga/yago/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://projects.ldc.upenn.edu/ace/ 6 http://lemurproject.org/clueweb09/FACC1/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "nlp.stanford.edu/software/CRF-NER.shtml 8 http://cogcomp.cs.illinois.edu/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Greg Durrett for helpful discussions about entity disambiguation on ACE. We also thank the anonymous reviewers, and our action editors Lillian Lee and Hwee Tou Ng for their very thoughtful and helpful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Revising the Wordnet Domains Hierarchy: Semantics, Coverage and Balancing", |
|
"authors": [ |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Forner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Pianta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luisa Bentivogli, Pamela Forner, Bernardo Magnini, and Emanuele Pianta. 2004. Revising the Wordnet Do- mains Hierarchy: Semantics, Coverage and Balancing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Proceedings of the Workshop on Multilingual Linguistic Ressources, MLR '04", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "In Proceedings of the Workshop on Multilingual Lin- guistic Ressources, MLR '04, pages 101-108. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Extending English ACE", |
|
"authors": [ |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Forner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudio", |
|
"middle": [], |
|
"last": "Giuliano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Marchetti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Pianta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kateryna", |
|
"middle": [], |
|
"last": "Tymoshenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luisa Bentivogli, Pamela Forner, Claudio Giuliano, Alessandro Marchetti, Emanuele Pianta, and Kateryna Tymoshenko. 2010. Extending English ACE", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Corpus Annotation with Ground-truth Links to Wikipedia", |
|
"authors": [], |
|
"year": null, |
|
"venue": "The People's Web Meets NLP: Collaboratively Constructed Semantic Resources '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corpus Annotation with Ground-truth Links to Wikipedia. In The People's Web Meets NLP: Collab- oratively Constructed Semantic Resources '10, pages 19-27. COLING.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using Encyclopedic Knowledge for Named Entity Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pasca", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "EACL '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Bunescu and Marius Pasca. 2006. Using En- cyclopedic Knowledge for Named Entity Disambigua- tion. In EACL '06, pages 9-16. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "ERD'14: Entity Recognition and Disambiguation Challenge", |
|
"authors": [], |
|
"year": null, |
|
"venue": "SIGIR '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ERD'14: Entity Recognition and Disambiguation Challenge. In SIGIR '14, page 1292. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Framework for Benchmarking Entity-annotation Systems", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Cornolti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ferragina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "WWW '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "249--260", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Cornolti, Paolo Ferragina, and Massimiliano Cia- ramita. 2013. A Framework for Benchmarking Entity-annotation Systems. In WWW '13, pages 249- 260. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The SMAPH System for Query Entity Recognition and Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Cornolti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ferragina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "R\u00fcd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the First International Workshop on Entity Recognition and Disambiguation, ERD '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Cornolti, Paolo Ferragina, Massimiliano Cia- ramita, Hinrich Sch\u00fctze, and Stefan R\u00fcd. 2014. The SMAPH System for Query Entity Recognition and Disambiguation. In Proceedings of the First Inter- national Workshop on Entity Recognition and Disam- biguation, ERD '14, pages 25-30. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Large-Scale Named Entity Disambiguation Based on Wikipedia Data", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Silviu Cucerzan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "EMNLP-CONLL '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "708--716", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Cucerzan. 2007. Large-Scale Named Entity Dis- ambiguation Based on Wikipedia Data. In EMNLP- CONLL '07, pages 708-716. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Name Entities Made Obvious: The Participation in the ERD 2014 Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Silviu Cucerzan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the First International Workshop on Entity Recognition and Disambiguation, ERD '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Cucerzan. 2014. Name Entities Made Obvious: The Participation in the ERD 2014 Evaluation. In Pro- ceedings of the First International Workshop on Entity Recognition and Disambiguation, ERD '14, pages 95- 100. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Improving Efficiency and Accuracy in Multilingual Entity Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Daiber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Jakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Hokamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Mendes", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 9th International Conference on Semantic Systems, I-SEMANTICS '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--124", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N. Mendes. 2013. Improving Efficiency and Accuracy in Multilingual Entity Extraction. In Pro- ceedings of the 9th International Conference on Se- mantic Systems, I-SEMANTICS '13, pages 121-124. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Generating typed dependency parses from phrase structure parses", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation, LREC '06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "449--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed de- pendency parses from phrase structure parses. In Pro- ceedings of the International Conference on Language Resources and Evaluation, LREC '06, pages 449-454. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SemTag and Seeker: Bootstrapping the Semantic Web via Automated Semantic Annotation", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Dill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadav", |
|
"middle": [], |
|
"last": "Eiron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gruhl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Guha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anant", |
|
"middle": [], |
|
"last": "Jhingran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapas", |
|
"middle": [], |
|
"last": "Kanungo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sridhar", |
|
"middle": [], |
|
"last": "Rajagopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Tomkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Tomlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Zien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "WWW '03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "178--186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Dill, Nadav Eiron, David Gibson, Daniel Gruhl, R. Guha, Anant Jhingran, Tapas Kanungo, Sridhar Ra- jagopalan, Andrew Tomkins, John A. Tomlin, and Ja- son Y. Zien. 2003. SemTag and Seeker: Bootstrap- ping the Semantic Web via Automated Semantic An- notation. In WWW '03, pages 178-186. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Joint Model for Entity Analysis: Coreference, Typing, and Linking", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "TACL '14. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2014. A Joint Model for Entity Analysis: Coreference, Typing, and Linking. In TACL '14. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "TAGME: On-the-fly Annotation of Short Text Fragments (by Wikipedia Entities)", |
|
"authors": [ |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ferragina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ugo", |
|
"middle": [], |
|
"last": "Scaiella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "CIKM '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1625--1628", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paolo Ferragina and Ugo Scaiella. 2010. TAGME: On-the-fly Annotation of Short Text Fragments (by Wikipedia Entities). In CIKM '10, pages 1625-1628. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sam- pling. In ACL '05, pages 363-370. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Fine Grained Classification of Named Entities", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Fleischman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "COLING '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Fleischman and Eduard Hovy. 2002. Fine Grained Classification of Named Entities. In COLING '02, pages 1-7. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Robust Disambiguation of Named Entities in Text", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [ |
|
"Amir" |
|
], |
|
"last": "Yosef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilaria", |
|
"middle": [], |
|
"last": "Bordino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hagen", |
|
"middle": [], |
|
"last": "F\u00fcrstenau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Spaniol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bilyana", |
|
"middle": [], |
|
"last": "Taneva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "782--792", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust Disambiguation of Named Entities in Text. In EMNLP '11, pages 782-792. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "KORE: Keyphrase Overlap Relatedness for Entity Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Seufert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "CIKM '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "545--554", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Mar- tin Theobald, and Gerhard Weikum. 2012. KORE: Keyphrase Overlap Relatedness for Entity Disam- biguation. In CIKM '12, pages 545-554. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabian", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Suchanek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Berberich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Artificial Intelligence", |
|
"volume": "194", |
|
"issue": "", |
|
"pages": "28--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. YAGO2: A Spa- tially and Temporally Enhanced Knowledge Base from Wikipedia. Artificial Intelligence, 194:28-61.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Graphical Models in a Nutshell", |
|
"authors": [ |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nir", |
|
"middle": [], |
|
"last": "Friedman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lise", |
|
"middle": [], |
|
"last": "Getoor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "An Introduction to Statistical Relational Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daphne Koller, Nir Friedman, Lise Getoor, and Benjamin Taskar. 2007. Graphical Models in a Nutshell. In An Introduction to Statistical Relational Learning. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Collective Annotation of Wikipedia Entities in Web Text", |
|
"authors": [ |
|
{ |
|
"first": "Sayali", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumen", |
|
"middle": [], |
|
"last": "Chakrabarti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "KDD '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "457--466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective Annotation of Wikipedia Entities in Web Text. In KDD '09, pages 457-466. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Fine-grained Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "AAAI '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine-grained En- tity Recognition. In AAAI '12. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "On the Limited Memory BFGS Method for Large Scale Optimization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorge", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Nocedal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Mathematical Programming", |
|
"volume": "45", |
|
"issue": "3", |
|
"pages": "503--528", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dong C. Liu and Jorge Nocedal. 1989. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(3):503-528.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Integrating Subject Field Codes into Wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriela", |
|
"middle": [], |
|
"last": "Cavagli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation, LREC '00", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1413--1418", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernardo Magnini and Gabriela Cavagli. 2000. Integrat- ing Subject Field Codes into Wordnet. In Proceed- ings of the International Conference on Language Re- sources and Evaluation, LREC '00, pages 1413-1418. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-enhanced Lexicons", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "HLT-NAACL '03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew McCallum and Wei Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-enhanced Lexi- cons. In HLT-NAACL '03, pages 188-191. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Adding Semantics to Microblog Posts", |
|
"authors": [ |
|
{ |
|
"first": "Edgar", |
|
"middle": [], |
|
"last": "Meij", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wouter", |
|
"middle": [], |
|
"last": "Weerkamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "WSDM '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "563--572", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edgar Meij, Wouter Weerkamp, and Maarten de Rijke. 2012. Adding Semantics to Microblog Posts. In WSDM '12, pages 563-572. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Dbpedia Spotlight: Shedding Light on the Web of Documents", |
|
"authors": [ |
|
{ |
|
"first": "Pablo", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Mendes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Jakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9s", |
|
"middle": [], |
|
"last": "Garc\u00eda-Silva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 7th International Conference on Semantic Systems, I-SEMANTICS '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pablo N. Mendes, Max Jakob, Andr\u00e9s Garc\u00eda-Silva, and Christian Bizer. 2011. Dbpedia Spotlight: Shedding Light on the Web of Documents. In Proceedings of the 7th International Conference on Semantic Systems, I-SEMANTICS '11, pages 1-8. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "WordNet: A Lexical Database for English", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller. 1995. WordNet: A Lexical Database for English. Communications of the ACM, 38(11):39- 41.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning to Link with Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Milne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "CIKM '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "509--518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Milne and Ian H. Witten. 2008. Learning to Link with Wikipedia. In CIKM '08, pages 509-518. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "An Open-source Toolkit for Mining Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Milne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Artificial Intelligence", |
|
"volume": "194", |
|
"issue": "", |
|
"pages": "222--239", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Milne and Ian H. Witten. 2013. An Open-source Toolkit for Mining Wikipedia. Artificial Intelligence, 194:222-239.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Fine-grained Semantic Typing of Emerging Entities", |
|
"authors": [ |
|
{ |
|
"first": "Ndapandula", |
|
"middle": [], |
|
"last": "Nakashole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Tylenda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1488--1497", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013. Fine-grained Semantic Typing of Emerging Entities. In ACL '13, pages 1488-1497. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "AIDA-light: High-Throughput Named-Entity Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Dat Ba Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Theobald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Workshop on Linked Data on the Web, LDOW '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dat Ba Nguyen, Johannes Hoffart, Martin Theobald, and Gerhard Weikum. 2014. AIDA-light: High- Throughput Named-Entity Disambiguation. In Pro- ceedings of the Workshop on Linked Data on the Web, LDOW '14. CEUR-WS.org.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Lexicon Infused Phrase Embeddings for Named Entity Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vineet", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "CONLL '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandre Passos, Vineet Kumar, and Andrew McCal- lum. 2014. Lexicon Infused Phrase Embeddings for Named Entity Resolution. In CONLL '14, pages 78- 86. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Inducing Finegrained Semantic Classes via Hierarchical and Collective Classification", |
|
"authors": [ |
|
{ |
|
"first": "Altaf", |
|
"middle": [], |
|
"last": "Rahman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "COLING '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "931--939", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Altaf Rahman and Vincent Ng. 2010. Inducing Fine- grained Semantic Classes via Hierarchical and Collec- tive Classification. In COLING '10, pages 931-939. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Design Challenges and Misconceptions in Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "CONLL '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design Challenges and Misconceptions in Named Entity Recognition. In CONLL '09, pages 147-155. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Local and Global Algorithms for Disambiguation to Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "HLT '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1375--1384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and Global Algorithms for Dis- ambiguation to Wikipedia. In HLT '11, pages 1375- 1384. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Re-ranking for Joint Named-Entity Recognition and Linking", |
|
"authors": [ |
|
{ |
|
"first": "Avirup", |
|
"middle": [], |
|
"last": "Sil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Yates", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CIKM '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2369--2374", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avirup Sil and Alexander Yates. 2013. Re-ranking for Joint Named-Entity Recognition and Linking. In CIKM '13, pages 2369-2374. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "A Cross-Lingual Dictionary for English Wikipedia Concepts", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Valentin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angel", |
|
"middle": [ |
|
"X" |
|
], |
|
"last": "Spitkovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Valentin I. Spitkovsky and Angel X. Chang. 2012. A Cross-Lingual Dictionary for English Wikipedia Con- cepts. In Proceedings of the International Conference on Language Resources and Evaluation, LREC '12. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "An Introduction to Conditional Random Fields. Foundations and Trends in Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "267--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles A. Sutton and Andrew McCallum. 2012. An Introduction to Conditional Random Fields. Founda- tions and Trends in Machine Learning, 4(4):267-373.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "GERBIL -General Entity Annotation Benchmark Framework", |
|
"authors": [ |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Usbeck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "R\u00f6der", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Axel-Cyrille Ngonga", |
|
"middle": [], |
|
"last": "Ngomo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ciro", |
|
"middle": [], |
|
"last": "Baron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Both", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Br\u00fcmmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [], |
|
"last": "Ceccarelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Cornolti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Didier", |
|
"middle": [], |
|
"last": "Cherix", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Eickmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Ferragina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Lemke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Moro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francesco", |
|
"middle": [], |
|
"last": "Piccinno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Rizzo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Rapha\u00ebl Troncy, J\u00f6rg Waitelonis, and Lars Wesemann", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ricardo Usbeck, Michael R\u00f6der, Axel-Cyrille Ngonga Ngomo, Ciro Baron, Andreas Both, Martin Br\u00fcmmer, Diego Ceccarelli, Marco Cornolti, Didier Cherix, Bernd Eickmann, Paolo Ferragina, Christiane Lemke, Andrea Moro, Roberto Navigli, Francesco Piccinno, Giuseppe Rizzo, Harald Sack, Ren\u00e9 Speck, Rapha\u00ebl Troncy, J\u00f6rg Waitelonis, and Lars Wesemann. 2015. GERBIL -General Entity Annotation Benchmark Framework. In WWW '15. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "HYENA: Hierarchical Type Classification for Entity Names", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Amir Yosef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandro", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Hoffart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Spaniol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "COLING '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1361--1370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: Hierarchical Type Classification for Entity Names. In COLING '12, pages 1361-1370. ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "F 1 for varying confidence thresholds.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td>[p f ]</td><td/><td>[prep in]</td></tr><tr><td>[nsubj]</td><td>[p f ]</td><td>[nsubjpass]</td><td/></tr><tr><td>[p f ]</td><td/><td/><td/></tr><tr><td/><td/><td>born</td><td/></tr><tr><td/><td>[det]</td><td>[nn]</td><td>[nn]</td></tr><tr><td/><td/><td/><td>10</td><td>x 11</td></tr><tr><td/><td>David</td><td>Beckham</td><td>London</td><td>England</td></tr><tr><td>Figure 3:</td><td/><td/><td/></tr></table>", |
|
"text": "Global model, linking two tree models ([p f ] is short for [prep f or])." |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Perspective Mention Boundary Detection NER Typing End-to-End NERD</td><td>Variants J-NERD linear-local J-NERD tree-local J-NERD linear-global 95.1 90.3 92.6 Prec Rec F 1 94.2 89.6 91.8 94.4 89.4 91.8 J-NERD tree-global 95.8 90.6 93.1 J-NERD linear-local 87.8 83.0 85.3 J-NERD tree-local 89.5 82.2 85.6 J-NERD linear-global 88.6 83.4 85.9 J-NERD tree-global 90.4 83.8 86.9 J-NERD linear-local 71.8 74.9 73.3 J-NERD tree-local 75.1 74.5 74.7 J-NERD linear-global 77.6 74.8 76.1 J-NERD tree-global 81.9 75.8 78.7</td></tr></table>", |
|
"text": "Experiments on CoNLL-YAGO2." |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Method P-NERD J-NERD AIDA-light TagMe</td><td>Prec 80.1 81.9 78.7 64.6</td><td>Rec 75.1 75.8 76.1 43.2</td><td>F 1 77.5 78.7 77.3 51.8</td></tr><tr><td>SpotLight</td><td>71.1</td><td>47.9</td><td>57.3</td></tr></table>", |
|
"text": "Comparison between joint models and pipelined models on end-to-end NERD." |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Perspective Mention Boundary Detection</td><td>Variants P-NERD J-NERD Stanford NER Illinois Tagger</td><td>Prec Rec F 1 95.6 90.5 92.9 95.8 90.6 93.1 95.6 91.3 93.4 95.5 91.2 93.3</td></tr><tr><td>NER Typing</td><td>P-NERD J-NERD Stanford NER Illinois Tagger</td><td>89.6 83.4 86.3 90.4 83.8 86.9 89.3 84.5 86.8 87.5 83.2 85.3</td></tr><tr><td colspan=\"3\">Illinois Tagger, and 92.9% by P-NERD. For NER</td></tr><tr><td colspan=\"3\">typing, J-NERD achieved an F 1 score of 86.9% ver-sus 86.8% by Stanford NER, 85.3% by Illinois Tag-</td></tr><tr><td colspan=\"3\">ger, and 86.3% by P-NERD. So we could not out-</td></tr><tr><td colspan=\"3\">perform the best prior method for NER alone, but</td></tr><tr><td colspan=\"3\">achieved very competitive results. Here, we do not</td></tr><tr><td colspan=\"3\">really leverage any form of joint inference (combin-</td></tr><tr><td colspan=\"3\">ing CRF's across sentences is used in Stanford NER,</td></tr><tr><td colspan=\"3\">too), but harness rich features on domains, entity</td></tr><tr><td colspan=\"3\">candidates, and linguistic dependencies.</td></tr></table>", |
|
"text": "Experiments on NER against state-of-the-art NER systems." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "features only include features introduced in Section 4.1. \u2022 Standard and domain features exclude the linguistic features f 14 , f 15 , f 16 , f 17 . \u2022 Standard and linguistic features excludes the domain features f 12 and f 13 . \u2022 All features is the full-fledged J-NERD tree-global model." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Perspective NER Typing End-to-End NERD</td><td>Setting Standard features Standard and domain features Standard and linguistic features 86.4 F 1 85.1 85.7 All features 86.9 Standard features 74.3 Standard and domain features 76.4 Standard and linguistic features 76.6 All features 78.7</td></tr></table>", |
|
"text": "Feature Influence on CoNLL-YAGO2." |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Method P-NERD J-NERD Berkeley-entity</td><td>Prec 68.2 69.1 65.6</td><td>Rec 60.8 62.3 61.8</td><td>F 1 64.2 65.5 63.7</td></tr><tr><td>AIDA-light</td><td>66.8</td><td>59.3</td><td>62.8</td></tr><tr><td>TagMe</td><td>60.6</td><td>43.5</td><td>50.7</td></tr><tr><td>SpotLight</td><td>68.7</td><td>29.6</td><td>41.4</td></tr></table>", |
|
"text": "NERD results on ACE." |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Dataset ClueWeb</td><td>Method P-NERD J-NERD AIDA-light 80.2 66.4 72.6 Prec Rec F 1 80.9 67.1 73.3 81.5 67.5 73.8 TagMe 78.4 60.5 68.3 SpotLight 79.7 57.1 66.5</td></tr><tr><td>ClueWeb long\u2212tail</td><td>P-NERD J-NERD AIDA-light 81.2 63.7 71.3 81.2 64.4 71.8 81.4 65.1 72.3 TagMe 78.4 58.3 66.9 SpotLight 81.2 56.3 66.5</td></tr></table>", |
|
"text": "NERD results on ClueWeb." |
|
} |
|
} |
|
} |
|
} |