|
{ |
|
"paper_id": "C02-1025", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:20:17.247134Z" |
|
}, |
|
"title": "Named Entity Recognition: A Maximum Entropy Approach Using Global Information", |
|
"authors": [ |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Leong Chieu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hwee", |
|
"middle": [ |
|
"Tou" |
|
], |
|
"last": "Ng", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a maximum entropy-based named entity recognizer (NER). It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentencebased classifier. In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC-6 and MUC-7 test data.", |
|
"pdf_parse": { |
|
"paper_id": "C02-1025", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a maximum entropy-based named entity recognizer (NER). It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentencebased classifier. In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC-6 and MUC-7 test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information. In MUC-6 and MUC-7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995) Machine learning systems in MUC-6 and MUC-7 achieved accuracy comparable to rule-based systems on the named entity task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 552, |
|
"end": 568, |
|
"text": "(Chinchor, 1998;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 569, |
|
"end": 584, |
|
"text": "Sundheim, 1995)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistical NERs usually find the sequence of tags that maximizes the probability \u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ", where \u00a7 is the sequence of words in a sentence, and \u00a3 is the sequence of named-entity tags assigned to the words in \u00a7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ". Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999) . We propose maximizing", |
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 290, |
|
"text": "(Mikheev et al., 1998;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 307, |
|
"text": "Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ", where \u00a3 is the sequence of namedentity tags assigned to the words in the sentence \u00a7 , and is the information that can be extracted from the whole document containing \u00a7 . Our system is built on a maximum entropy classifier. By making use of global context, it has achieved excellent results on both MUC-6 and MUC-7 official test data. We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. The use of global features has improved the performance on MUC-6 test data from 90.75% to 93.27% (27% reduction in errors), and the performance on MUC-7 test data from 85.22% to 87.24% (14% reduction in errors). These results are achieved by training on the official MUC-6 and MUC-7 training data, which is much less training data than is used by other machine learning systems that worked on the MUC-6 or MUC-7 named entity task (Bikel et al., 1997; Bikel et al., 1999; Borthwick, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 592, |
|
"end": 612, |
|
"text": "(Bikel et al., 1997;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 632, |
|
"text": "Bikel et al., 1999;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 633, |
|
"end": 649, |
|
"text": "Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first \"President George Bush\" then \"Bush\"). As such, global information from the whole context of a document is important to more accurately recognize named entities. Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Recently, statistical NERs have achieved results that are comparable to hand-coded systems. Since MUC-6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance. MUC-7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999) , notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a handcoded system), and came in fourth among all MUC-7 participants. MENE without Proteus, however, did not do very well and only achieved an Fmeasure of 84.22% (Borthwick, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 176, |
|
"text": "(Bikel et al., 1997)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 312, |
|
"text": "(Mikheev et al., 1998;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 329, |
|
"text": "Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 694, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Among machine learning-based NERs, Identi-Finder has proven to be the best on the official MUC-6 and MUC-7 test data. MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999) . Mikheev et al. (1998) did make use of information from the whole document. However, their system is a hybrid of hand-coded rules and machine learning methods. Another attempt at using global information can be found in (Borthwick, 1999) . He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution. Reference resolution involves finding words that co-refer to the same entity. In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. MENE is then trained on 80% of the training corpus, and tested on the remaining 20%. This process is repeated 5 times by rotating the data appropriately. Finally, the concatenated 5 * 20% output is used to train the reference resolution component. We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. On MUC-6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on similar amount of training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 367, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 391, |
|
"text": "Mikheev et al. (1998)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 606, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In Section 5, we try to compare results of MENE, IdentiFinder, and MENERGI. However, both MENE and IdentiFinder used more training data than we did (we used only the official MUC-6 and MUC-7 training data). On the MUC-6 data, Bikel et al. (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. Our results show that MENERGI performs as well as IdentiFinder when trained on comparable amount of training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 245, |
|
"text": "Bikel et al. (1997;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 251, |
|
"text": "1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The system described in this paper is similar to the MENE system of (Borthwick, 1999) . It uses a maximum entropy framework and classifies each word given its features. Each name class \u00a3 is subdivided into 4 sub- classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes 4 sub-classes \u00a1 1 not-a-name class).", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 85, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Description", |
|
"sec_num": "3" |
|
}, |
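{

"text": "To make the class set concrete, here is a tiny sketch (ours, not code from the paper; the class-name spelling is invented): the 29 classes are the cross product of the 7 MUC name classes and the 4 sub-classes, plus the not-a-name class.

NAME_CLASSES = ['person', 'organization', 'location',
                'date', 'time', 'money', 'percent']
SUB_CLASSES = ['begin', 'continue', 'end', 'unique']

# 7 name classes x 4 sub-classes + 1 not-a-name class = 29 classes
CLASSES = ['%s_%s' % (n, s) for n in NAME_CLASSES for s in SUB_CLASSES]
CLASSES.append('not-a-name')
assert len(CLASSES) == 29",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "System Description",

"sec_num": "3"

},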
|
{ |
|
"text": "The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 455, |
|
"end": 482, |
|
"text": "(Della Pietra et al., 1997)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Entropy", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a1 \u00a5 \u00a3 \u00a2 \u00a9 \u00a5 \u00a4 \u00a6 \u00a7 \u00a1 \u00a2 \u00a9 \u00a9 \" ! $ # & % ' ) (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Entropy", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where refers to the outcome, (Darroch and Ratcliff, 1972) . This is an iterative method that improves the estimation of the parameters at each iteration. We have used the Java-based opennlp maximum entropy package 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 57, |
|
"text": "(Darroch and Ratcliff, 1972)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Entropy", |
|
"sec_num": "3.1" |
|
}, |
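{

"text": "To make the exponential form and the GIS updates concrete, here is a toy sketch (ours, not the opennlp implementation; the features, data, and outcomes are invented). The four features are disjoint indicators, so exactly one fires per (h, o) event and the GIS constant is C = 1; each iteration scales alpha_j by the ratio of the empirical expectation of f_j to its expectation under the current model.

OUTCOMES = ['name', 'not-a-name']
FEATURES = [  # four disjoint indicator features f_j(h, o)
    lambda h, o: 1 if h['initCaps'] and o == 'name' else 0,
    lambda h, o: 1 if h['initCaps'] and o == 'not-a-name' else 0,
    lambda h, o: 1 if not h['initCaps'] and o == 'name' else 0,
    lambda h, o: 1 if not h['initCaps'] and o == 'not-a-name' else 0,
]
C = 1  # number of active features per event

DATA = ([({'initCaps': True}, 'name')] * 2 +
        [({'initCaps': True}, 'not-a-name')] +
        [({'initCaps': False}, 'not-a-name')] * 2 +
        [({'initCaps': False}, 'name')])

def p(o, h, alphas):
    # exponential form: p(o|h) = (1/Z(h)) * prod_j alphas[j] ** f_j(h, o)
    def weight(oo):
        v = 1.0
        for f, a in zip(FEATURES, alphas):
            v *= a ** f(h, oo)
        return v
    return weight(o) / sum(weight(oo) for oo in OUTCOMES)

alphas = [1.0] * len(FEATURES)
emp = [sum(f(h, o) for h, o in DATA) / len(DATA) for f in FEATURES]
for _ in range(50):  # each iteration improves the parameter estimates
    mod = [sum(p(oo, h, alphas) * f(h, oo)
               for h, _ in DATA for oo in OUTCOMES) / len(DATA)
           for f in FEATURES]
    alphas = [a * (e / m) ** (1.0 / C) for a, e, m in zip(alphas, emp, mod)]

print(round(p('name', {'initCaps': True}, alphas), 3))  # 0.667, the empirical 2/3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Maximum Entropy",

"sec_num": "3.1"

},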
|
{ |
|
"text": "During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes \u00a1 \u00a2 \u00a1 \u00a5 \u00a9 to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u00a4 \u00a3 \u00a5 \u00a3 \u00a5 \u00a3 \u00a7 \u00a6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "assigned to the words in a sentence\u00a8in a document is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u00a1 \u00a5 \u00a3 \u00a4 \u00a3 \u00a5 \u00a3 \u00a6 \u00a5 \u00a9 \u00a5 \u00a4 \u00a6 \u00a1 \u00a1 \u00a1 \u00a5 \u00a9 \u00a9 \u00a1 \u00a4 \u00a1 \u00a5 \u00a1 \u00a9 where \u00a1 \u00a4 \u00a1 \u00a5 \u00a9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Testing", |
|
"sec_num": "3.2" |
|
}, |
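{

"text": "A compact sketch of this decoding step (ours; the class set is trimmed to the person classes plus not-a-name, the admissibility rule is simplified, and the classifier probabilities P(c_i|s,D) are stubbed with made-up numbers):

CLASSES = ['person_begin', 'person_continue', 'person_end',
           'person_unique', 'not-a-name']
STARTABLE = ('person_begin', 'person_unique', 'not-a-name')

def admissible(prev, cur):
    # transition probability P(cur|prev): 1 if admissible, 0 otherwise
    if prev in ('person_begin', 'person_continue'):
        return cur in ('person_continue', 'person_end')
    return cur in STARTABLE

def decode(token_probs):
    # token_probs[i][c] = P(c|s, D) from the maximum entropy classifier
    best = {c: (token_probs[0][c] if c in STARTABLE else 0.0, [c])
            for c in CLASSES}
    for probs in token_probs[1:]:
        best = {cur: max(((score * admissible(prev, cur) * probs[cur],
                           path + [cur])
                          for prev, (score, path) in best.items()),
                         key=lambda t: t[0])
                for cur in CLASSES}
    return max(best.values(), key=lambda t: t[0])[1]

probs = [{'person_begin': .1, 'person_continue': .05, 'person_end': .05,
          'person_unique': .1, 'not-a-name': .7},
         {'person_begin': .2, 'person_continue': .2, 'person_end': .1,
          'person_unique': .4, 'not-a-name': .1},
         {'person_begin': .05, 'person_continue': .05, 'person_end': .05,
          'person_unique': .05, 'not-a-name': .8}]
print(decode(probs))  # ['not-a-name', 'person_unique', 'not-a-name']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Testing",

"sec_num": "3.2"

},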
|
{ |
|
"text": "The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Description", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999) . However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999) . This might be because our features are more comprehensive than those used by Borthwick. In IdentiFinder, there is a priority in the feature assignment, such that if one feature is used for a token, another feature lower in priority will not be used. In the maximum entropy framework, there is no such constraint. Multiple features can be used for the same token.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 92, |
|
"text": "BBN's IdentiFinder (Bikel et al., 1999)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 118, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 343, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Description", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each feature group can be made up of many binary features. For each token , zero, one, or more of the features in each feature group are set to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Description", |
|
"sec_num": "4" |
|
}, |
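{

"text": "A minimal sketch of the cutoff (ours; the cutoff value and the feature representation are assumptions):

from collections import Counter

CUTOFF = 3  # hypothetical 'small count'

def select_features(training_events):
    # keep only (feature, outcome) pairs seen at least CUTOFF times
    counts = Counter((feat, outcome)
                     for feats, outcome in training_events
                     for feat in feats)
    return {fo for fo, n in counts.items() if n >= CUTOFF}

events = ([({'initCaps', 'zone-TXT'}, 'person_begin')] * 4 +
          [({'allCaps', 'zone-HL'}, 'org_unique')] * 2)
print(select_features(events))  # pairs seen only twice are dropped",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Description",

"sec_num": "4"

},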
|
{ |
|
"text": "The local feature groups are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Non-Contextual Feature: This feature is set to 1 for all tokens. This feature imposes constraints Table 1 : Features based on the token string that are based on the probability of each name class during training. Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC-6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 105, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init-Caps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it starts with a lower case letter, and contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3 total number of possible zones) features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
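{

"text": "A small sketch of this feature group (ours; zone names follow the MUC-6 example above):

ZONES = ['TXT', 'HL', 'DATELINE', 'DD']  # the four MUC-6 zones

def case_and_zone(token, zone):
    # return the (case, zone) features set to 1 for this token
    feats = set()
    if token[0].isupper():
        feats.add(('initCaps', zone))
    if token.isupper():
        feats.add(('allCaps', zone))  # an allCaps token is also initCaps
    if token[0].islower() and token != token.lower():
        feats.add(('mixedCaps', zone))
    return feats

print(case_and_zone('IBM', 'TXT'))   # initCaps and allCaps in zone TXT
print(case_and_zone('eBay', 'TXT'))  # mixedCaps in zone TXT",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Features",

"sec_num": "4.1"

},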
|
{ |
|
"text": "Case and Zone of and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ": Similarly, if (or ) is initCaps, a feature (initCaps, zone) ! # \" % $ (or (initCaps, zone)& # ' ! ) (", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ") is set to 1, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Token Information: This group consists of 10 features based on the string , as listed in Table 1 . For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 96, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
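{

"text": "Since the full list of the 10 features is in Table 1, the sketch below implements only a few of them, with regular expressions that are our guesses at the intended patterns:

import re

TOKEN_FEATURES = {
    'InitCapPeriod':   r'[A-Z].*\.',   # e.g., Mr.
    'AllCapsPeriod':   r'[A-Z]+\.',    # e.g., CORP.
    'ContainsDigit':   r'.*[0-9].*',
    'FourDigitNumber': r'[0-9]{4}',    # e.g., 1999
}

def token_information(token):
    # names of the token-string features set to 1 for this token
    return {name for name, pat in TOKEN_FEATURES.items()
            if re.fullmatch(pat, token)}

print(token_information('Mr.'))   # {'InitCapPeriod'}
print(token_information('1999'))  # {'ContainsDigit', 'FourDigitNumber'}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Features",

"sec_num": "4.1"

},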
|
{ |
|
"text": "First Word: This feature group contains only one feature firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Lexicon Feature: The string of the token is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used with the initCaps information of . If has initCaps, then a feature (initCaps,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ") ! # \" % $ is set to 1. If is not initCaps, then (not-initCaps, ) ! # \" % $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "is set to 1. Same for . In the case where the next token is a hyphen, then is also used as a feature:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(init- Caps, ) ! # \" % $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999) . The sources of our dictionaries are listed in Table 2 . For all lists except locations, the lists are processed into a list of tokens (unigrams). Location list is processed into a list of unigrams and bigrams (e.g., New York). For locations, tokens are matched against unigrams, and sequences of two consecutive tokens are matched against bigrams. A list of words occurring more than 10 times in the training data is also collected (commonWords). Only tokens with initCaps not found in commonWords are tested against each list in Table 2 . If they are found in a list, then a feature for that list will be set to 1. For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1. For example, if is found in the list of person first names, the feature PersonFirstName Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the \"frequency\" of Corp. is 2). The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix-List. A Person-Prefix-List is compiled in an analogous way. For MUC-6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp.\u00a1 and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms ", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 236, |
|
"text": "(Mikheev et al., 1999)", |
|
"ref_id": "BIBREF8" |
|
}
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 769, |
|
"end": 776, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Local Features", |
|
"sec_num": "4.1" |
|
}, |
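{

"text": "Because the \"frequency\" here is the number of distinct preceding tokens rather than a raw count, a short sketch may help (ours; top_n is an arbitrary cutoff):

from collections import defaultdict

def corporate_suffix_list(org_names, top_n=15):
    # rank last tokens of organization names by the number of DISTINCT
    # preceding tokens: Electric Corp. (3x) and Manufacturing Corp. (5x)
    # give Corp. a 'frequency' of 2, not 8
    preceders = defaultdict(set)
    for name in org_names:
        toks = name.lower().split()
        if len(toks) >= 2:
            preceders[toks[-1]].add(toks[-2])
    return sorted(preceders, key=lambda t: len(preceders[t]),
                  reverse=True)[:top_n]

orgs = ['Electric Corp.'] * 3 + ['Manufacturing Corp.'] * 5 + ['Acme Inc.']
print(corporate_suffix_list(orgs))  # ['corp.', 'inc.']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Local Features",

"sec_num": "4.1"

},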
|
{ |
|
"text": "Context from the whole document can be important in classifying a named entity. A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998) . We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. 3In sentence (1), McCann can be a person or an organization. Sentence (2) and (3) help to disambiguate one way or the other. If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.", |
|
"cite_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 366, |
|
"text": "(Borthwick, 1999;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 388, |
|
"text": "Mikheev et al., 1998)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own. For example, in the sentence that starts with \"Bush put a freeze on . . . \", because Bush is the first word, the initial caps might be due to its position (as in \"They put a freeze on . . . \"). If somewhere else in the document we see \"restrictions put in place by President Bush\", then we can be surer that Bush is a name.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Features", |
|
"sec_num": "4.2" |
|
}, |
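{

"text": "A sketch of the ICOC lookup (ours; the zone labels and first-word flags are assumed to come from preprocessing):

def icoc_feature(tokens, first_word, zones, i):
    # case of the first occurrence of tokens[i] in an unambiguous
    # position: a non-first word in a TXT or TEXT zone
    w = tokens[i].lower()
    for j, t in enumerate(tokens):
        if (t.lower() == w and not first_word[j]
                and zones[j] in ('TXT', 'TEXT')):
            return 'other-initCaps' if t[0].isupper() else 'other-notInitCaps'
    return None

tokens = ['Bush', 'put', 'a', 'freeze', 'on', 'spending', '.',
          'President', 'Bush', 'said', 'so', '.']
first_word = [True] + [False] * 11
zones = ['TXT'] * 12
print(icoc_feature(tokens, first_word, zones, 0))  # other-initCaps",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Global Features",

"sec_num": "4.2"

},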
|
{ |
|
"text": "Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization. On the other hand, if it is seen as McCann Pte. Ltd., then organization will be more probable. With the same Corporate-Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Global Features", |
|
"sec_num": "4.2" |
|
}, |
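{

"text": "A sketch of the acronym matching (ours; simplified to exact initial-letter matching over whitespace tokens):

import re

def acronym_features(tokens):
    # mark initCaps sequences whose initials spell an all-caps token
    # found elsewhere in the document
    feats = [set() for _ in tokens]
    acronyms = [i for i, t in enumerate(tokens)
                if re.fullmatch(r'[A-Z]{2,}', t)]
    for i in acronyms:
        n = len(tokens[i])
        for j in range(len(tokens) - n + 1):
            span = tokens[j:j + n]
            if (j != i and all(w[:1].isupper() for w in span)
                    and ''.join(w[0] for w in span) == tokens[i]):
                feats[i].add('A_unique')
                feats[j].add('A_begin')
                for k in range(j + 1, j + n - 1):
                    feats[k].add('A_continue')
                feats[j + n - 1].add('A_end')
    return feats

doc = 'The Federal Communications Commission said FCC rules apply'.split()
for tok, f in zip(doc, acronym_features(doc)):
    if f:
        print(tok, sorted(f))
# Federal ['A_begin'] / Communications ['A_continue']
# Commission ['A_end'] / FCC ['A_unique']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Global Features",

"sec_num": "4.2"

},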
|
{ |
|
"text": "In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name. However, it is unlikely that other occurrences of News Broadcasting Corp. in the same document also co-occur with Even. This group of features attempts to capture such information. For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified. For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequence of Initial Caps (SOIC):", |
|
"sec_num": null |
|
}, |
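{

"text": "A sketch of the substring search (ours; simplified to exact token-sequence matching, trying longer substrings first):

def soic_features(tokens):
    # for each maximal run of initCaps tokens, mark its longest token
    # sub-sequence that also occurs elsewhere in the document
    runs, start = [], None
    for i, t in enumerate(tokens + ['']):  # sentinel closes a final run
        if t[:1].isupper():
            start = i if start is None else start
        elif start is not None:
            runs.append((start, i))
            start = None
    feats = [set() for _ in tokens]
    for a, b in runs:
        for n in range(b - a, 0, -1):  # longest first
            hits = [s for s in range(a, b - n + 1)
                    for i in range(len(tokens) - n + 1)
                    if tokens[i:i + n] == tokens[s:s + n] and not a <= i < b]
            if hits:
                s = hits[0]
                feats[s].add('I_begin')
                for k in range(s + 1, s + n - 1):
                    feats[k].add('I_continue')
                feats[s + n - 1].add('I_end')
                break
    return feats

doc = ('Even News Broadcasting Corp. , noted for its accurate reporting , '
       'said News Broadcasting Corp. is expanding').split()
for tok, f in zip(doc, soic_features(doc)):
    if f:
        print(tok, sorted(f))  # News I_begin, Broadcasting I_continue, Corp. I_end",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Sequence of Initial Caps (SOIC):",

"sec_num": null

},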
|
{ |
|
"text": "Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. needs to be in initCaps to be considered for this feature. If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears. As we will see from Table 3 , not much improvement is derived from this feature.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 302, |
|
"end": 309, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sequence of Initial Caps (SOIC):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The baseline system in Table 3 refers to the maximum entropy system that uses only local features. As each global feature group is added to the list of features, we see improvements to both MUC-6 and . ICOC and CSPP contributed the greatest improvements. The effect of UNIQ is very small on both data sets. All our results are obtained by using only the official training data provided by the MUC conferences. The reason why we did not train with both MUC-6 and MUC-7 training data at the same time is because the task specifications for the two tasks are not identical. As can be seen in Table 4 , our training data is a lot less than those used by MENE and IdentiFinder 3 . In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997 ), IdentiFinder '99 (Bikel et al., 1999 , and MENE (Borthwick, 1999) . Iden-tiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC-7 is published in (Miller et al., 1998) . MENE has only been tested on MUC-7.", |
|
"cite_spans": [ |
|
{ |
|
"start": 746, |
|
"end": 782, |
|
"text": "IdentiFinder '97 (Bikel et al., 1997", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 783, |
|
"end": 822, |
|
"text": "), IdentiFinder '99 (Bikel et al., 1999", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 851, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 983, |
|
"end": 1004, |
|
"text": "(Miller et al., 1998)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 23, |
|
"end": 30, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 596, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6). Besides size of training data, the use of dictionaries is another factor that might affect performance. Bikel et al. (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. Borth-2 MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu 3 Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 95, |
|
"text": "(Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Size of training data F-measure SRA '95", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hand-coded 96.4% IdentiFinder '99 650,000 words 94.9% MENERGI 160,000 tokens 93.27% IdentiFinder '99 200,000 words About 93% (from graph) IdentiFinder '97 450,000 words 93% IdentiFinder '97 about 100,000 words 91%-92% Krupka, 1995) . In (Bikel et al., 1997) and (Bikel et al., 1999) , performance was plotted against training data size to show how performance improves with training data size. We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 218, |
|
"end": 231, |
|
"text": "Krupka, 1995)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 257, |
|
"text": "(Bikel et al., 1997)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 282, |
|
"text": "(Bikel et al., 1999)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For MUC-7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles. In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. Both BBN and NYU have tagged their own data to supplement the official training data. Even with less training data, MENERGI outperforms Borthwick's MENE + reference resolution (Borthwick, 1999) . Except our own and MENE + reference resolution, the results in Table 6 are all official MUC-7 results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 498, |
|
"end": 515, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 581, |
|
"end": 588, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The effect of a second reference resolution classifier is not entirely the same as that of global features. A secondary reference resolution classifier has information on the class assigned by the primary classifier. Such a classification can be seen as a not-always-correct summary of global features. The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. Hence we decided to restrict ourselves to only information from the same document. Mikheev et al. (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. The overall performance of the LTG system was outstanding, but the system consists of a sequence of many hand-coded rules and machine-learning modules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 348, |
|
"text": "(Borthwick, 1999)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 905, |
|
"text": "Mikheev et al. (1998)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": null |
|
}
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We have shown that the maximum entropy framework is able to use global information directly. This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997) . Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. Information from a sentence is sometimes insufficient to classify a name correctly. Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. Borthwick (1999) successfully made use of other handcoded systems as input for his MENE system, and achieved excellent results. However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. We believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.", |
|
"cite_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 286, |
|
"text": "(Bikel et al., 1997)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Nymble: A highperformance learning name-finder", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Daniel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Bikel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: A high- performance learning name-finder. In Proceed- ings of the Fifth Conference on Applied Natural Language Processing, pages 194-201.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An algorithm that learns what's in a name", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Daniel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Bikel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "211--231", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning, 34(1/2/3):211-231.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A Maximum Entropy Approach to Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Borthwick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, Computer Science Department, New York University.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "MUC-7 named entity task definition, version 3.5", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Chinchor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Seventh Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Chinchor. 1998. MUC-7 named entity task definition, version 3.5. In Proceedings of the Sev- enth Message Understanding Conference.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Generalized iterative scaling for log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Darroch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ratcliff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1972, |
|
"venue": "Annals of Mathematical Statistics", |
|
"volume": "43", |
|
"issue": "5", |
|
"pages": "1470--1480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. N. Darroch and D. Ratcliff. 1972. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43(5):1470-1480.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Inducing features of random fields", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Stephen Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
|
"volume": "19", |
|
"issue": "4", |
|
"pages": "380--393", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of ran- dom fields. IEEE Transactions on Pattern Analy- sis and Machine Intelligence, 19(4):380-393.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "SRA: Description of the SRA system as used for MUC-6", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Krupka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Sixth Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George R. Krupka. 1995. SRA: Description of the SRA system as used for MUC-6. In Proceedings of the Sixth Message Understanding Conference, pages 221-235.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Description of the LTG system used for MUC-7", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Mikheev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Seventh Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei Mikheev, Claire Grover, and Marc Moens. 1998. Description of the LTG system used for MUC-7. In Proceedings of the Seventh Message Understanding Conference.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Named entity recognition without gazetteers", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Mikheev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Grover", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei Mikheev, Marc Moens, and Claire Grover. 1999. Named entity recognition without gazetteers. In Proceedings of the Ninth Confer- ence of the European Chapter of the Association for Computational Linguistics, pages 1-8.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Algorithms that learn to extract information BBN: Description of the SIFT system as used for MUC-7", |
|
"authors": [ |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Crystal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heidi", |
|
"middle": [], |
|
"last": "Fox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Stone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Seventh Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scott Miller, Michael Crystal, Heidi Fox, Lance Ramshaw, Richard Schwartz, Rebecca Stone, Ralph Weischedel, and the Annotation Group. 1998. Algorithms that learn to extract informa- tion BBN: Description of the SIFT system as used for MUC-7. In Proceedings of the Seventh Message Understanding Conference.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Named entity task definition, version 2.1", |
|
"authors": [ |
|
{ |
|
"first": "Beth", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Sundheim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the Sixth Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--332", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beth M. Sundheim. 1995. Named entity task def- inition, version 2.1. In Proceedings of the Sixth Message Understanding Conference, pages 319- 332.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "function. For example, in predicting if a word belongs to a word class, is either true or false, and", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Ifis initCaps and is one of January, February, . . . , December, then the feature MonthName is set to 1. If is one of Monday, Tuesday, . . . , Sun-day, then the feature DayOfTheWeek is set to 1. If is a number string (such as one, two, etc), then the feature NumberString is set to 1.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": ", the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.", |
|
"content": "<table><tr><td/><td/><td/><td/><td>.\u00a1 . For</td></tr><tr><td>a token</td><td colspan=\"4\">that is in a consecutive sequence of init-</td></tr><tr><td colspan=\"2\">Caps tokens tokens from</td><td>\u00a1</td><td>\u00a3 \u00a2 to</td><td>\u00a5 \u00a3 \u00a4 \u00a3 \u00a5 \u00a3 \u00a7 \u00a6 is in Corporate-Suffix-List, \u00a4 \u00a3 \u00a5 \u00a3 \u00a2 \u00a3 \u00a6 , if any of the \u00a9</td></tr><tr><td colspan=\"5\">then a feature Corporate-Suffix is set to 1. If any of</td></tr><tr><td colspan=\"3\">the tokens from</td><td colspan=\"2\">\u00a3 \u00a2</td><td>to</td><td>is in Person-Prefix-</td></tr><tr><td colspan=\"5\">List, then another feature Person-Prefix is set to 1.</td></tr><tr><td colspan=\"5\">Note that we check for</td><td>\u00a3 \u00a2</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "/www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names", |
|
"content": "<table><tr><td>Description</td><td>Source</td></tr><tr><td>Location Names</td><td>http:/</td></tr><tr><td/><td>For example:</td></tr><tr><td/><td>McCann initiated a new global system. (1)</td></tr><tr><td/><td>CEO of McCann . . . (2)</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Sources of DictionariesThe McCann family . . .", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"5\">: F-measure after successive addition of each</td></tr><tr><td colspan=\"2\">global feature group</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">MUC-6</td><td colspan=\"2\">MUC-7</td></tr><tr><td>Systems</td><td>No. of</td><td>No. of</td><td>No. of</td><td>No. of</td></tr><tr><td/><td colspan=\"4\">Articles Tokens Articles Tokens</td></tr><tr><td>MENERGI</td><td>318</td><td>160,000</td><td>200</td><td>180,000</td></tr><tr><td>IdentiFinder</td><td>-</td><td>650,000</td><td>-</td><td>790,000</td></tr><tr><td>MENE</td><td>-</td><td>-</td><td>350</td><td>321,000</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"content": "<table><tr><td>: Training Data</td></tr><tr><td>MUC-7 test accuracy. 2 For MUC-6, the reduction in</td></tr><tr><td>error due to global features is 27%, and for MUC-7,</td></tr><tr><td>14%</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Comparison of results for MUC-6", |
|
"content": "<table><tr><td>Systems</td><td colspan=\"2\">Size of training data F-measure</td></tr><tr><td>LTG system '98</td><td>Hybrid hand-coded</td><td>93.39%</td></tr><tr><td>IdentiFinder '98</td><td>790,000 words</td><td>90.44%</td></tr><tr><td>MENE + Proteus</td><td>Hybrid hand-coded</td><td>88.80%</td></tr><tr><td>'98</td><td>321,000 tokens</td><td/></tr><tr><td>MENERGI</td><td>180,000 tokens</td><td>87.24%</td></tr><tr><td>MENE+reference-</td><td>321,000 tokens</td><td>86.56%</td></tr><tr><td>resolution '99</td><td/><td/></tr><tr><td>MENE '98</td><td>321,000 tokens</td><td>84.22%</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "", |
|
"content": "<table><tr><td>: Comparison of results for MUC-7</td></tr><tr><td>wick (1999) reported using dictionaries of person</td></tr><tr><td>first names, corporate names and suffixes, colleges</td></tr><tr><td>and universities, dates and times, state abbrevia-</td></tr><tr><td>tions, and world regions.</td></tr><tr><td>In MUC-6, the best result is achieved by SRA</td></tr><tr><td>(</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |