Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "W06-0206",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:02:51.587050Z"
},
"title": "Data Selection in Semi-supervised Learning for Name Tagging",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University New York",
"location": {
"postCode": "10003",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University New York",
"location": {
"postCode": "10003",
"region": "NY",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present two semi-supervised learning techniques to improve a state-of-the-art multilingual name tagger. For English and Chinese, the overall system obtains 1.7%-2.1% improvement in F-measure, representing a 13.5%-17.4% relative reduction in the spurious, missing, and incorrect tags. We also conclude that simply relying upon large corpora is not in itself sufficient: we must pay attention to unlabeled data selection too. We describe effective measures to automatically select documents and sentences.",
"pdf_parse": {
"paper_id": "W06-0206",
"_pdf_hash": "",
"abstract": [
{
"text": "We present two semi-supervised learning techniques to improve a state-of-the-art multilingual name tagger. For English and Chinese, the overall system obtains 1.7%-2.1% improvement in F-measure, representing a 13.5%-17.4% relative reduction in the spurious, missing, and incorrect tags. We also conclude that simply relying upon large corpora is not in itself sufficient: we must pay attention to unlabeled data selection too. We describe effective measures to automatically select documents and sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When applying machine learning approaches to natural language processing tasks, it is timeconsuming and expensive to hand-label the large amounts of training data necessary for good performance. Unlabeled data can be collected in much larger quantities. Therefore, a natural question is whether we can use unlabeled data to build a more accurate learner, given the same amount of labeled data. This problem is often referred to as semi-supervised learning. It significantly reduces the effort needed to develop a training set. It has shown promise in improving the performance of many tasks such as name tagging (Miller et al., 2004) , semantic class extraction (Lin et al., 2003) , chunking (Ando and Zhang, 2005) , coreference resolution (Bean and Riloff, 2004) and text classification (Blum and Mitchell, 1998) .",
"cite_spans": [
{
"start": 612,
"end": 633,
"text": "(Miller et al., 2004)",
"ref_id": "BIBREF12"
},
{
"start": 662,
"end": 680,
"text": "(Lin et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 692,
"end": 714,
"text": "(Ando and Zhang, 2005)",
"ref_id": "BIBREF0"
},
{
"start": 740,
"end": 763,
"text": "(Bean and Riloff, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 788,
"end": 813,
"text": "(Blum and Mitchell, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, it is not clear, when semi-supervised learning is applied to improve a learner, how the system should effectively select unlabeled data, and how the size and relevance of data impact the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we apply two semi-supervised learning algorithms to improve a state-of-the-art name tagger. We run the baseline name tagger on a large unlabeled corpus (bootstrapping) and the test set (self-training), and automatically generate high-confidence machine-labeled sentences as additional 'training data'. We then iteratively retrain the model on the increased 'training data'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first investigated whether we can improve the system by simply using a lot of unlabeled data. By dramatically increasing the size of the corpus with unlabeled data, we did get a significant improvement compared to the baseline system. But we found that adding off-topic unlabeled data sometimes makes the performance worse. Then we tried to select relevant documents from the unlabeled data in advance, and got clear further improvements. We also obtained significant improvement by self-training (bootstrapping on the test data) without any additional unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, in contrast to the claim in (Banko and Brill, 2001) , we concluded that, for some applications, effective use of large unlabeled corpora demands good data selection measures. We propose and quantify some effective measures to select documents and sentences in this paper.",
"cite_spans": [
{
"start": 39,
"end": 62,
"text": "(Banko and Brill, 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows. Section 2 briefly describes the efforts made by previous researchers to use semi-supervised learning as well as the work of (Banko and Brill, 2001) . Section 3 presents our baseline name tagger. Section 4 describes the motivation for our approach while Section 5 presents the details of two semi-supervised learning methods. Section 6 presents and discusses the experimental results on both English and Chinese. Section 7 presents our conclusions and directions for future work. that all focus on reducing annotation requirements. For the specific task of named entity annotation, some researchers have emphasized the creation of taggers from minimal seed sets (Strzalkowski and Wang, 1996; Collins and Singer, 1999; Lin et al., 2003) while another line of inquiry (which we are pursuing) has sought to improve on high-performance baseline taggers (Miller et al., 2004) . Banko and Brill (2001) suggested that the development of very large training corpora may be most effective for progress in empirical natural language processing. Their experiments show a logarithmic trend in performance as corpus size increases without performance reaching an upper bound. Recent work has replicated their work on thesaurus extraction (Curran and Moens, 2002) and is-a relation extraction (Ravichandran et al., 2004) , showing that collecting data over a very large corpus significantly improves system performance. However, and (Curran and Osborne, 2002) claimed that the choice of statistical model is more important than relying upon large corpora.",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "(Banko and Brill, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 710,
"end": 739,
"text": "(Strzalkowski and Wang, 1996;",
"ref_id": "BIBREF16"
},
{
"start": 740,
"end": 765,
"text": "Collins and Singer, 1999;",
"ref_id": "BIBREF6"
},
{
"start": 766,
"end": 783,
"text": "Lin et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 897,
"end": 918,
"text": "(Miller et al., 2004)",
"ref_id": "BIBREF12"
},
{
"start": 921,
"end": 943,
"text": "Banko and Brill (2001)",
"ref_id": "BIBREF1"
},
{
"start": 1273,
"end": 1297,
"text": "(Curran and Moens, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 1327,
"end": 1354,
"text": "(Ravichandran et al., 2004)",
"ref_id": "BIBREF13"
},
{
"start": 1467,
"end": 1493,
"text": "(Curran and Osborne, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The performance of name taggers has been limited in part by the amount of labeled training data available. How can an unlabeled corpus help to address this problem? Based on its original training (on the labeled corpus), there will be some tags (in the unlabeled corpus) that the tagger will be very sure about. For example, there will be contexts that were always followed by a person name (e.g., \"Capt.\") in the training corpus. If we find a new token T in this context in the unlabeled corpus, we can be quite certain it is a person name. If the tagger can learn this fact about T, it can successfully tag T when it appears in the test corpus without any indicative context. In the same way, if a previously-unseen context appears consistently in the unlabeled corpus before known person names, the tagger should learn that this is a predictive context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "We have adopted a simple learning approach: we take the unlabeled text about which the tagger has greatest confidence in its decisions, tag it, add it to the training set, and retrain the tagger. This process is performed repeatedly to bootstrap ourselves to higher performance. This approach can be used with any supervised-learning tagger that can produce some reliable measure of confidence in its decisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "3"
},
{
"text": "Our baseline name tagger is based on an HMM that generally follows the Nymble model (Bikel et al, 1997) . Then it uses best-first search to generate NBest hypotheses, and also computes the margin -the difference between the log probabilities of the top two hypotheses. This is used as a rough measure of confidence in our name tagging. 1 In processing Chinese, to take advantage of name structures, we do name structure parsing using an extended HMM which includes a larger number of states (14). This new HMM can handle name prefixes and suffixes, and transliterated foreign names separately. We also augmented the HMM model with a set of post-processing rules to correct some omissions and systematic errors. The name tagger identifies three name types: Person (PER), Organization (ORG) and Geopolitical (GPE) entities (locations which are also political units, such as countries, counties, and cities).",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "(Bikel et al, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 336,
"end": 337,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Multi-lingual Name Tagger",
"sec_num": "4"
},
{
"text": "We have applied this bootstrapping approach to two sources of data: first, to a large corpus of unlabeled data and second, to the test set. To distinguish the two, we shall label the first \"bootstrapping\" and the second \"self-training\". We begin (Sections 5.1 and 5.2) by describing the basic algorithms used for these two processes. We expected that these basic methods would provide a substantial performance boost, but our experiments showed that, for best gain, the additional training data should be related to the target problem, namely, our test set. We present measures to select documents (Section 5.3) and sentences (Section 5.4), and show (in Section 6) the effectiveness of these measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Semi-Supervised Learning Methods for Name Tagging",
"sec_num": "5"
},
{
"text": "We divided the large unlabeled corpus into segments based on news sources and dates in order to: 1) create segments of manageable size; 2) separately evaluate the contribution of each segment (using a labeled development test set) and reject those which do not help; and 3) apply the latest updated best model to each subsequent segment. The procedure can be formalized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping",
"sec_num": "5.1"
},
{
"text": "1. Select a related set RelatedC from a large corpus of unlabeled data with respect to the test set TestT, using the document selection method described in section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping",
"sec_num": "5.1"
},
{
"text": "2. Split RelatedC into n subsets and mark them C 1 , C 2 \u2026C n . Call the updated HMM name tagger NameM (initially the baseline tagger), and a development test set DevT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bootstrapping",
"sec_num": "5.1"
},
{
"text": "(1) Run NameM on C i ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "3."
},
{
"text": "(2) For each tagged sentence S in C i , if S is tagged with high confidence, then keep S; otherwise remove S;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "3."
},
{
"text": "(3) Relabel the current name tagger (NameM) as OldNameM, add C i to the training data, and retrain the name tagger, producing an updated model NameM;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "3."
},
{
"text": "(4) Run NameM on DevT; if the performance gets worse, don't use C i and reset NameM = OldNameM;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "3."
},
{
"text": "An analogous approach can be used to tag the test set. The basic intuition is that the sentences in which the learner has low confidence may get support from those sentences previously labeled with high confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training",
"sec_num": "5.2"
},
{
"text": "Initially, we build the baseline name tagger from the labeled examples, then gradually add the most confidently tagged test sentences into the training corpus, and reuse them for the next iteration, until all sentences are labeled. The procedure can be formalized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training",
"sec_num": "5.2"
},
{
"text": "1. Cluster the test set TestT into n clusters T 1 , T 2 , \u2026,T n , by collecting document pairs with low cross entropy (described in section 5.3.2) into the same cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-training",
"sec_num": "5.2"
},
{
"text": "(1) NameM = baseline HMM name tagger;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "2."
},
{
"text": "(2) While (there are new sentences tagged with confidence higher than a threshold) a. Run NameM on T i ; b. Set an appropriate threshold for margin; c. For each tagged sentence S in T i , if S is tagged with high confidence, add S to the training data; d. Retrain the name tagger NameM with augmented training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "2."
},
{
"text": "At each iteration, we lower the threshold so that about 5% of the sentences (with the largest margin) are added to the training corpus. 2 As an example, this yielded the following gradually improving performance for one English cluster including 7 documents and 190 sentences. Table 1 . Incremental Improvement from Self-training (English)",
"cite_spans": [
{
"start": 136,
"end": 137,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "For i=1 to n",
"sec_num": "2."
},
{
"text": "Self-training can be considered a cache model variant, operating across the entire test collection. But it uses confidence measures as weights for each name candidate, and relies on names tagged with high confidence to re-adjust the prediction of the remaining names, while in a cache model, all name candidates are equally weighted for voting (independent of the learner's confidence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No. of iterations",
"sec_num": null
},
{
"text": "To further investigate the benefits of using very large corpora in bootstrapping, and also inspired by the gain from the \"essence\" of self-training, which aims to gradually emphasize the predictions from related sentences within the test set, we reconsidered the assumptions of our approach. The bootstrapping method implicitly assumes that the unlabeled data is reliable (not noisy) and uniformly useful, namely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unlabeled Document Selection",
"sec_num": "5.3"
},
{
"text": "\u2022 The unlabeled data supports the acquisition of new names and contexts, to provide new evidence to be incorporated in HMM and reduce the sparse data problem; \u2022 The unlabeled data won't make the old estimates worse by adding too many names whose tags are incorrect, or at least are incorrect in the context of the labeled training data and the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unlabeled Document Selection",
"sec_num": "5.3"
},
{
"text": "If the unlabeled data is noisy or unrelated to the test data, it can hurt rather than improve the learner's performance on the test set. So it is necessary to coarsely measure the relevance of the unlabeled data to our target test set. We define an IR (information retrieval) -style relevance measure between the test set TestT and an unlabeled document d as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unlabeled Document Selection",
"sec_num": "5.3"
},
{
"text": "We model the information expected from the unlabeled data by a 'bag of words' technique. We construct a query term set from the test corpus TestT to check whether each unlabeled document d is useful or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Query set' construction",
"sec_num": "5.3.1"
},
{
"text": "\u2022 We prefer not to use all the words in TestT as key words, since we are only concerned about the distribution of name candidates. (Adding off-topic documents may in fact introduce noise into the model). For example, if one document in TestT talks about the presidential election in France while d talks about the presidential election in the US, they may share many common words such as 'election', 'voting', 'poll', and 'camp', but we would expect more gain from other unlabeled documents talking about the French election, since they may share many name candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Query set' construction",
"sec_num": "5.3.1"
},
{
"text": "\u2022 On the other hand it is insufficient to only take the name candidates in the top one hypothesis for each sentence (since we are particularly concerned with tokens which might be names but are not so labeled in the top hypothesis).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Query set' construction",
"sec_num": "5.3.1"
},
{
"text": "So our solution is to take all the name candidates in the top N best hypotheses for each sentence to construct a query set Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'Query set' construction",
"sec_num": "5.3.1"
},
{
"text": "Using Q, we compute the cross entropy H (TestT, d) between TestT and d by:",
"cite_spans": [
{
"start": 40,
"end": 50,
"text": "(TestT, d)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-entropy Measure",
"sec_num": "5.3.2"
},
{
"text": "\u2211 \u2208 \u00d7 \u2212 = Q x d x prob TestT x prob d TestT H ) | ( log ) | ( ) , ( 2 where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-entropy Measure",
"sec_num": "5.3.2"
},
{
"text": "x is a name candidate in Q, and prob(x|TestT) is the probability (frequency) of x appearing in TestT while prob(x|d) is the probability of x in d. If H(T, d) is smaller than a threshold then we consider d a useful unlabeled document 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 157,
"text": "If H(T, d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-entropy Measure",
"sec_num": "5.3.2"
},
{
"text": "We don't want to add all the tagged sentences in a relevant document to the training corpus because incorrectly tagged or irrelevant sentences can lead to degradation in model performance. The value of larger corpora is partly dependent on how much new information is extracted from each sentence of the unlabeled data compared to the training corpus that we already have.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "5.4"
},
{
"text": "The following confidence measures were applied to assist the semi-supervised learning algorithm in selecting useful sentences for re-training the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Selection",
"sec_num": "5.4"
},
{
"text": "For each sentence, we compute the HMM hypothesis margin (the difference in log probabilities) between the first hypothesis and the second hypothesis. We select the sentences with margins larger than a threshold 4 to be added to the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Margin to find reliable sentences",
"sec_num": "5.4.1"
},
{
"text": "Unfortunately, the margin often comes down to whether a specific word has previously been observed in training; if the system has seen the word, it is certain, if not, it is uncertain. Therefore the sentences with high margins are a mix of interesting and uninteresting samples. We need to apply additional measures to remove the uninteresting ones. On the other hand, we may have confidence in a tagging due to evidence external to the HMM, so we explored measures beyond the HMM margin in order to recover additional sentences. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Margin to find reliable sentences",
"sec_num": "5.4.1"
},
{
"text": "Names introduced in an article are likely to be referred to again, so a name coreferred to by more other names is more likely to have been correctly tagged. In this paper, we use simple coreference resolution between names such as substring matching and name abbreviation resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name coreference to find more reliable sentences",
"sec_num": "5.4.2"
},
{
"text": "In the bootstrapping method we apply singledocument coreference for each individual unlabeled text. In self-training, in order to further benefit from global contexts, we consider each cluster of relevant texts as one single big document, and then apply cross-document coreference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name coreference to find more reliable sentences",
"sec_num": "5.4.2"
},
{
"text": "Assume S is one sentence in the document, and there are k names tagged in S: {N 1 , N 2 .\u2026.. N k }, which are coreferred to by {CorefNum 1 , Coref-Num 2 , \u2026CorefNum k } other names separately. Then we use the following average name coreference count AveCoref as a confidence measure for tagging S:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name coreference to find more reliable sentences",
"sec_num": "5.4.2"
},
{
"text": "5 \u2211 = = k i i k CorefNum AveCoref 1 / ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name coreference to find more reliable sentences",
"sec_num": "5.4.2"
},
{
"text": "In bootstrapping on unlabeled data, the margin criterion often selects some sentences which are too short or don't include any names. Although they are tagged with high confidence, they may make the model worse if added into the training data (for example, by artificially increasing the probability of non-names). In our experiments we don't use a sentence if it includes fewer than six words, or doesn't include any names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Name count and sentence length to remove uninteresting sentences",
"sec_num": "5.4.3"
},
{
"text": "We depict the above two semi-supervised learning methods in Figure 1 and Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 73,
"end": 81,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Flow",
"sec_num": "5.5"
},
{
"text": "We evaluated our system on two languages: English and Chinese. Table 2 shows the data used in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "We present in section 6.2 -6.4 the overall performance of precision (P), recall (R) and Fmeasure (F) for both languages, and also some diagnostic experiment results. For significance testing (using the sign test), we split the test set into 5 folders, 20 texts in each folder of English, and 18 texts in each folder of Chinese. Table 3 and Table 4 present the overall performance 6 by applying the two semi-supervised learning methods, separately and in combination, to our baseline name tagger. For English, the overall system achieves a 13.4% relative reduction on the spurious and incorrect tags, and 12.9% reduction in the missing rate. For Chinese, it achieves a 16.9% relative reduction on the spurious and incorrect tags, and 16.9% reduction in the missing rate. 7 For each of the five folders, we found that both bootstrapping and self-training produced an improvement in F score for each folder, and the combination of two methods is always better than each method alone. This allows us to reject the hypothesis that these improvements were random at a 95% confidence level. Figure 3 and 4 below show the results as each segment of the unlabeled data is added to the training corpus. We can see some flattening of the gain at the end, particularly for the larger English corpus, and that some segments do not help to boost the performance (reflected as dips in the Dev Set curve and gaps in the Test Set curve).",
"cite_spans": [],
"ref_spans": [
{
"start": 328,
"end": 335,
"text": "Table 3",
"ref_id": null
},
{
"start": 340,
"end": 347,
"text": "Table 4",
"ref_id": null
},
{
"start": 1084,
"end": 1092,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "6.1"
},
{
"text": "In order to investigate the contribution of document selection in bootstrapping, we performed diagnostic experiments for Chinese, whose results are shown in Table 5 . All the bootstrapping tests (rows 2 -4) use margin for sentence selection; row 4 augments this with the selection methods described in sections 5.4.2 and 5.4.3. Table 5 . Impact of Data Selection (Chinese)",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 5",
"ref_id": null
},
{
"start": 328,
"end": 335,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of Data Selection",
"sec_num": "6.3.2"
},
{
"text": "Comparing row 2 with row 3, we find that not using document selection, even though it multiplies the size of the corpus, results in 0.3% lower performance (0.3-0.4% loss for each folder). This leads us to conclude that simply relying upon large corpora is not in itself sufficient. Effective use of large corpora demands good confidence measures for document selection to remove offtopic material. By adding sentence selection (results in row 4) the system obtained 0.5% further improvement in F-Measure (0.4-0.7% for each folder). All improvements are statistically significant at the 95% confidence level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Data Selection",
"sec_num": "6.3.2"
},
{
"text": "We have applied and evaluated different measures to extract high-confidence sentences in selftraining. The contributions of these confidence measures to F-Measure are presented in Table 6 . It shows that Chinese benefits more from adding name coreference, mainly because there are more coreference links between name abbreviations and full names. And we also can see that the margin is an important measure for both languages. All differences are statistically significant at the 95% confidence level except for the gain using cross-document information for the Chinese name tagging.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Self-training",
"sec_num": "6.4"
},
{
"text": "This paper demonstrates the effectiveness of two straightforward semi-supervised learning methods for improving a state-of-art name tagger, and investigates the importance of data selection for this application. Banko and Brill (2001) suggested that the development of very large training corpora may be central to progress in empirical natural language processing. When using large amounts of unlabeled data, as expected, we did get improvement by using unsupervised bootstrapping. However, exploiting a very large corpus did not by itself produce the greatest performance gain. Rather, we observed that good measures to select relevant unlabeled documents and useful labeled sentences are important.",
"cite_spans": [
{
"start": 212,
"end": 234,
"text": "Banko and Brill (2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "The work described here complements the active learning research described by (Scheffer et al., 2001) . They presented an effective active learning approach that selects \"difficult\" (small margin) sentences to label by hand and then add to the training set. Our approach selects \"easy\" sentences -those with large margins -to add automatically to the training set. Combining these methods can magnify the gains possible with active learning.",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "(Scheffer et al., 2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "In the future we plan to try topic identification techniques to select relevant unlabeled documents, and use the downstream information extraction components such as coreference resolution and relation detection to measure the confidence of the tagging for sentences. We are also interested in applying clustering as a preprocessing step for bootstrapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "7"
},
{
"text": "We have also used this metric in the context of rescoring of name hypotheses(Ji and Grishman, 2005);Scheffer et al. (2001) used a similar metric for active learning of name tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To be precise, we repeatedly reduce the threshold by 0.1 until an additional 5% or more of the sentences are included; however, if more than an additional 20% of the sentences are captured because many sentences have the same margin, we add back 0.1 to the threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried a single match method, using the query set to find all the relevant documents that include any names belonging to Q, and got approximately the same result as cross-entropy. In addition to this relevance selection, we used one other simple filter: we removed a document if it includes fewer than five names, because it is unlikely to be news.4 In bootstrapping, this margin threshold is selected by testing on the development set, to achieve more than 93% F-Measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For the experiments reported here, sentences were selected if AveCoref > 3.1 (or 3.1\u00d7number of documents for crossdocument coreference) or the sentence margin exceeded the margin threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only names which exactly match the key in both extent and type are counted as correct; unlike MUC scoring, no partial credit is given.7 The performance achieved should be considered in light of human performance on this task. The ACE keys used for the evaluations were obtained by dual annotation and adjudication. A single annotator, evaluated against the key, scored F=93.6% to 94.1% for English and 92.5% to 92.7% for Chinese. A second key, created independently by dual annotation and adjudication for a small amount of the English data, scored F=96.5% against the original key.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. HR0011-06-C-0023, and the National Science Foundation under Grant IIS-00325657. Any opinions, findings and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the U. S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A High-Performance Semi-Supervised Learning Methods for Text Chunking",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Ando",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL2005",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Ando and Tong Zhang. 2005. A High- Performance Semi-Supervised Learning Methods for Text Chunking. Proc. ACL2005. pp. 1-8. Ann Arbor, USA",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scaling to very very large corpora for natural language disambiguation",
"authors": [
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ACL2001",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disam- biguation. Proc. ACL2001. pp. 26-33. Toulouse, France",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised Learning of Contextual Role Knowledge for",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bean",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bean and Ellen Riloff. 2004. Unsupervised Learning of Contextual Role Knowledge for",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coreference Resolution. Proc. HLT-NAACL2004",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "297--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Coreference Resolution. Proc. HLT-NAACL2004. pp. 297-304. Boston, USA",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Nymble: a highperformance Learning Name-finder",
"authors": [
{
"first": "M",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Fifth Conf. on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "194--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Bikel, Scott Miller, Richard Schwartz, and Ralph Weischedel. 1997. Nymble: a high- performance Learning Name-finder. Proc. Fifth Conf. on Applied Natural Language Processing. pp.194-201. Washington D.C., USA",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining Labeled and Unlabeled Data with Co-training",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the Workshop on Computational Learning Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-training. Proc. of the Workshop on Computational Learning The- ory. Morgan Kaufmann Publishers",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Unsupervised Models for Named Entity Classification",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of EMNLP/VLC-99",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Yoram Singer. 1999. Unsuper- vised Models for Named Entity Classification. Proc. of EMNLP/VLC-99.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Scaling context space",
"authors": [
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran and Marc Moens. 2002. Scaling con- text space. Proc. ACL 2002. Philadelphia, USA",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ensemble Methods for Automatic Thesaurus Extraction",
"authors": [
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran. 2002. Ensemble Methods for Auto- matic Thesaurus Extraction. Proc. EMNLP 2002. Philadelphia, USA",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A very very large corpus doesn't always yield reliable estimates",
"authors": [
{
"first": "R",
"middle": [],
"last": "James",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Curran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ACL 2002 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James R. Curran and Miles Osborne. 2002. A very very large corpus doesn't always yield reliable es- timates. Proc. ACL 2002 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguis- tics. Philadelphia, USA",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving Name Tagging by Reference Resolution and Relation Detection",
"authors": [
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL2005",
"volume": "",
"issue": "",
"pages": "411--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heng Ji and Ralph Grishman. 2005. Improving Name Tagging by Reference Resolution and Relation De- tection. Proc. ACL2005. pp. 411-418. Ann Arbor, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bootstrapping Learning of Semantic Classes from Positive and Negative Examples",
"authors": [
{
"first": "Winston",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Yangarber",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ICML-2003 Workshop on The Continuum from Labeled to Unlabeled Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Winston Lin, Roman Yangarber and Ralph Grishman. 2003. Bootstrapping Learning of Semantic Classes from Positive and Negative Examples. Proc. ICML-2003 Workshop on The Continuum from La- beled to Unlabeled Data. Washington, D.C.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Name Tagging with Word Clusters and Discriminative Training",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jethran",
"middle": [],
"last": "Guinness",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Zamanian",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. HLT-NAACL2004",
"volume": "",
"issue": "",
"pages": "337--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Miller, Jethran Guinness and Alex Zamanian. 2004. Name Tagging with Word Clusters and Dis- criminative Training. Proc. HLT-NAACL2004. pp. 337-342. Boston, USA",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Terascale Challenge. Proc. KDD Workshop on Mining for and from the Semantic Web (MSW-04)",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Ravichandran",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2004. The Terascale Challenge. Proc. KDD Workshop on Mining for and from the Semantic Web (MSW-04). pp. 1-11. Seattle, WA, USA",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Rosie",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. AAAI/IAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning Diction- aries for Information Extraction by Multi-Level Bootstrapping. Proc. AAAI/IAAI",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Active Hidden Markov Models for Information Extraction",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Scheffer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Decomain",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Wrobel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. Int'l Symposium on Intelligent Data Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Scheffer, Christian Decomain, and Stefan Wrobel. 2001. Active Hidden Markov Models for Information Extraction. Proc. Int'l Symposium on Intelligent Data Analysis (IDA-2001).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Self-Learning Universal Concept Spotter",
"authors": [
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomek Strzalkowski and Jin Wang. 1996. A Self- Learning Universal Concept Spotter. Proc. COLING.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Bootstrapping for Name Tagging Figure 2. Self-Training for Name Tagging",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Impact of Data Size (English)",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Impact of Data Size (Chinese)",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}