|
{ |
|
"paper_id": "E12-1017", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:36:12.130978Z" |
|
}, |
|
"title": "Recall-Oriented Learning of Named Entities in Arabic Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Behrang", |
|
"middle": [], |
|
"last": "Mohit", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "behrang@" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "nschneid@cs." |
|
}, |
|
{ |
|
"first": "Rishav", |
|
"middle": [], |
|
"last": "Bhowmick", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "rishavb@qatar." |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We consider the problem of NER in Arabic Wikipedia, a semisupervised domain adaptation setting for which we have no labeled training data in the target domain. To facilitate evaluation, we obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard categories. Standard supervised learning on newswire text leads to poor target-domain recall. We train a sequence model and show that a simple modification to the online learner-a loss function encouraging it to \"arrogantly\" favor recall over precisionsubstantially improves recall and F 1. We then adapt our model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the selftraining stage yields marginal gains. 1 1 The annotated dataset and a supplementary document with additional details of this work can be found at: http://www.ark.cs.cmu.edu/AQMAR delineated. One hallmark of this divergence between Wikipedia and the news domain is a difference in the distributions of named entities. Indeed, the classic named entity types (person, organization, location) may not be the most apt for articles in other domains (e.g., scientific or social topics). On the other hand, Wikipedia is a large dataset, inviting semisupervised approaches. In this paper, we describe advances on the problem of NER in Arabic Wikipedia. The techniques are general and make use of well-understood building blocks. Our contributions are:", |
|
"pdf_parse": { |
|
"paper_id": "E12-1017", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We consider the problem of NER in Arabic Wikipedia, a semisupervised domain adaptation setting for which we have no labeled training data in the target domain. To facilitate evaluation, we obtain annotations for articles in four topical groups, allowing annotators to identify domain-specific entity types in addition to standard categories. Standard supervised learning on newswire text leads to poor target-domain recall. We train a sequence model and show that a simple modification to the online learner-a loss function encouraging it to \"arrogantly\" favor recall over precisionsubstantially improves recall and F 1. We then adapt our model with self-training on unlabeled target-domain data; enforcing the same recall-oriented bias in the selftraining stage yields marginal gains. 1 1 The annotated dataset and a supplementary document with additional details of this work can be found at: http://www.ark.cs.cmu.edu/AQMAR delineated. One hallmark of this divergence between Wikipedia and the news domain is a difference in the distributions of named entities. Indeed, the classic named entity types (person, organization, location) may not be the most apt for articles in other domains (e.g., scientific or social topics). On the other hand, Wikipedia is a large dataset, inviting semisupervised approaches. In this paper, we describe advances on the problem of NER in Arabic Wikipedia. The techniques are general and make use of well-understood building blocks. Our contributions are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "This paper considers named entity recognition (NER) in text that is different from most past research on NER. Specifically, we consider Arabic Wikipedia articles with diverse topics beyond the commonly-used news domain. These data challenge past approaches in two ways:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "First, Arabic is a morphologically rich language (Habash, 2010) . Named entities are referenced using complex syntactic constructions (cf. English NEs, which are primarily sequences of proper nouns). The Arabic script suppresses most vowels, increasing lexical ambiguity, and lacks capitalization, a key clue for English NER.", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 63, |
|
"text": "(Habash, 2010)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Second, much research has focused on the use of news text for system building and evaluation. Wikipedia articles are not news, belonging instead to a wide range of domains that are not clearly \u2022 A small corpus of articles annotated in a new scheme that provides more freedom for annotators to adapt NE analysis to new domains; \u2022 An \"arrogant\" learning approach designed to boost recall in supervised training as well as self-training; and \u2022 An empirical evaluation of this technique as applied to a well-established discriminative NER model and feature set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Experiments show consistent gains on the challenging problem of identifying named entities in Arabic Wikipedia text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most of the effort in NER has been focused around a small set of domains and general-purpose entity classes relevant to those domains-especially the categories PER(SON), ORG(ANIZATION), and LOC(ATION) (POL), which are highly prominent in news text. Arabic is no exception: the publicly available NER corpora-ACE (Walker et al., 2006) , ANER (Benajiba et al., 2008) , and OntoNotes (Hovy et al., 2006) Claudio Filippone (PER) ; Linux (SOFTWARE) ; Spanish League (CHAMPIONSHIPS) ; proton (PARTICLE) ; nuclear radiation (GENERIC-MISC) ; Real Zaragoza (ORG) appropriate entity classes will vary widely by domain; occurrence rates for entity classes are quite different in news text vs. Wikipedia, for instance (Balasuriya et al., 2009) . This is abundantly clear in technical and scientific discourse, where much of the terminology is domain-specific, but it holds elsewhere. Non-POL entities in the history domain, for instance, include important events (wars, famines) and cultural movements (romanticism). Ignoring such domain-critical entities likely limits the usefulness of the NE analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 333, |
|
"text": "(Walker et al., 2006)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 364, |
|
"text": "(Benajiba et al., 2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 400, |
|
"text": "(Hovy et al., 2006)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 461, |
|
"end": 476, |
|
"text": "(CHAMPIONSHIPS)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 731, |
|
"text": "(Balasuriya et al., 2009)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Wikipedia NE Annotation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recognizing this limitation, some work on NER has sought to codify more robust inventories of general-purpose entity types (Sekine et al., 2002; Weischedel and Brunstein, 2005; Grouin et al., 2011) or to enumerate domain-specific types (Settles, 2004; Yao et al., 2003) . Coarse, general-purpose categories have also been used for semantic tagging of nouns and verbs (Ciaramita and Johnson, 2003 ). Yet as the number of classes or domains grows, rigorously documenting and organizing the classes-even for a single language-requires intensive effort. Ideally, an NER system would refine the traditional classes (Hovy et al., 2011) or identify new entity classes when they arise in new domains, adapting to new data. For this reason, we believe it is valuable to consider NER systems that identify (but do not necessarily label) entity mentions, and also to consider annotation schemes that allow annotators more freedom in defining entity classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 144, |
|
"text": "(Sekine et al., 2002;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 145, |
|
"end": 176, |
|
"text": "Weischedel and Brunstein, 2005;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 197, |
|
"text": "Grouin et al., 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 251, |
|
"text": "(Settles, 2004;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 269, |
|
"text": "Yao et al., 2003)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 395, |
|
"text": "(Ciaramita and Johnson, 2003", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 629, |
|
"text": "(Hovy et al., 2011)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Wikipedia NE Annotation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our aim in creating an annotated dataset is to provide a testbed for evaluation of new NER models. We will use these data as development and sible NEs (Sekine et al., 2002) . Nezda et al. (2006) annotated and evaluated an Arabic NE corpus with an extended set of 18 classes (including temporal and numeric entities); this corpus has not been released publicly. testing examples, but not as training data. In \u00a74 we will discuss our semisupervised approach to learning, which leverages ACE and ANER data as an annotated training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 172, |
|
"text": "(Sekine et al., 2002)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 194, |
|
"text": "Nezda et al. (2006)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Wikipedia NE Annotation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We conducted a small annotation project on Arabic Wikipedia articles. Two college-educated native Arabic speakers annotated about 3,000 sentences from 31 articles. We identified four topical areas of interest-history, technology, science, and sports-and browsed these topics until we had found 31 articles that we deemed satisfactory on the basis of length (at least 1,000 words), cross-lingual linkages (associated articles in English, German, and Chinese 3 ), and subjective judgments of quality. The list of these articles along with sample NEs are presented in table 1. These articles were then preprocessed to extract main article text (eliminating tables, lists, info-boxes, captions, etc.) for annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Strategy", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our approach follows ACE guidelines (LDC, 2005) in identifying NE boundaries and choosing POL tags. In addition to this traditional form of annotation, annotators were encouraged to articulate one to three salient, article-specific entity categories per article. For example, names of particles (e.g., proton) are highly salient in the Atom article. Annotators were asked to read the entire article first, and then to decide which nontraditional classes of entities would be important in the context of article. In some cases, annotators reported using heuristics (such as being proper nouns or having an English translation which is conventionally capitalized) to help guide their determination of non-canonical entities and entity classes. Annotators produced written descriptions of their classes, including example instances. This scheme was chosen for its flexibility: in contrast to a scenario with a fixed ontology, annotators required minimal training beyond the POL conventions, and did not have to worry about delineating custom categories precisely enough that they would extend straightforwardly to other topics or domains. Of course, we expect interannotator variability to be greater for these openended classification criteria.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 47, |
|
"text": "(LDC, 2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Strategy", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "During annotation, two articles (Prussia and Amman) were reserved for training annotators on the task. Once they were accustomed to annotation, both independently annotated a third article. We used this 4,750-word article (Gulf War, ) to measure inter-annotator agreement. Table 2 provides scores for tokenlevel agreement measures and entity-level F 1 between the two annotated versions of the article. 4 These measures indicate strong agreement for locating and categorizing NEs both at the token and chunk levels. Closer examination of agreement scores shows that PER and MIS classes have the lowest rates of agreement. That the miscellaneous class, used for infrequent or articlespecific NEs, receives poor agreement is unsurprising. The low agreement on the PER class seems to be due to the use of titles and descriptive terms in personal names. Despite explicit guidelines to exclude the titles, annotators disagreed on the inclusion of descriptors that disambiguate the NE (e.g., the father in : George Bush, the father). Table 3 : Custom NE categories suggested by one or both annotators for 10 articles. Article titles are translated from Arabic. \u2022 indicates that both annotators volunteered a category for an article; \u2022 indicates that only one annotator suggested the category. Annotators were not given a predetermined set of possible categories; rather, category matches between annotators were determined by post hoc analysis. NAME ROMAN indicates an NE rendered in Roman characters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 403, |
|
"end": 404, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 280, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1028, |
|
"end": 1035, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Quality Evaluation", |
|
"sec_num": "2.2" |
|
}, |
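The entity-level agreement reported above can be reproduced by treating one annotator's entity spans as gold and the other's as predictions. The following sketch is illustrative only; the function name and the (start, end, class) span representation are assumptions, not the authors' code:

```python
# Entity-level precision/recall/F1 over exact-match spans, as used for
# inter-annotator agreement: one annotator's spans serve as "gold", the
# other's as "predicted". The (start, end, label) triple representation
# is an assumption of this sketch.
def entity_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # spans matching on both boundaries and class
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```

Chunk-level F1 of this exact-match kind is stricter than the token-level agreement measures also reported in Table 2, since a one-token boundary disagreement counts as a full miss.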
|
{ |
|
"text": "History: Gulf War, Prussia, Damascus, Crusades WAR CONFLICT \u2022 \u2022 \u2022 Science: Atom, Periodic table THEORY \u2022 CHEMICAL \u2022 \u2022 NAME ROMAN \u2022 PARTICLE \u2022 \u2022 Sports: Football, Ra\u00fal Gonz\u00e1les SPORT \u2022 CHAMPIONSHIP \u2022 AWARD \u2022 NAME ROMAN \u2022 Technology: Computer, Richard Stallman COMPUTER VARIETY \u2022 SOFTWARE \u2022 COMPONENT \u2022", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Quality Evaluation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To investigate the variability between annotators with respect to custom category intuitions, we asked our two annotators to independently read 10 of the articles in the data (scattered across our four focus domains) and suggest up to 3 custom categories for each. We assigned short names to these suggestions, seen in table 3. In 13 cases, both annotators suggested a category for an article that was essentially the same (\u2022); three such categories spanned multiple articles. In three cases a category was suggested by only one annotator (\u2022). 5 Thus, we see that our annotators were generally, but not entirely, consistent with each other in their creation of custom categories. Further, almost all of our article-specific categories correspond to classes in the extended NE taxonomy of (Sekine et al., 2002) , which speaks to the reasonableness of both sets of categories-and by extension, our open-ended annotation process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 788, |
|
"end": 809, |
|
"text": "(Sekine et al., 2002)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating Category Intuitions", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Our annotation of named entities outside of the traditional POL classes creates a useful resource for entity detection and recognition in new domains. Even the ability to detect non-canonical types of NEs should help applications such as QA and MT (Toral et al., 2005; Babych and Hartley, 2003) . Possible avenues for future work include annotating and projecting non-canonical NEs from English articles to their Arabic counterparts (Hassan et al., 2007) , automatically clustering non-canonical types of entities into articlespecific or cross-article classes (cf. Frietag, 2004), or using non-canonical classes to improve the (author-specified) article categories in Wikipedia.", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 268, |
|
"text": "(Toral et al., 2005;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 294, |
|
"text": "Babych and Hartley, 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 454, |
|
"text": "(Hassan et al., 2007)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating Category Intuitions", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Hereafter, we merge all article-specific categories with the generic MIS category. The proportion of entity mentions that are tagged as MIS, while varying to a large extent by document, is a major indication of the gulf between the news data (<10%) and the Wikipedia data (53% for the development set, 37% for the test set).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Validating Category Intuitions", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Below, we aim to develop entity detection models that generalize beyond the traditional POL entities. We do not address here the challenges of automatically classifying entities or inferring noncanonical groupings. Table 4 summarizes the various corpora used in this work. 6 Our NE-annotated Wikipedia subcorpus, described above, consists of several Arabic Wikipedia articles from four focus domains. 7 We do not use these for supervised training data; they serve only as development and test data. A larger set of Arabic Wikipedia articles, selected on the basis of quality heuristics, serves as unlabeled data for semisupervised learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 401, |
|
"end": 402, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 222, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Validating Category Intuitions", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Our out-of-domain labeled NE data is drawn from the ANER (Benajiba et al., 2007) and ACE-2005 (Walker et al., 2006 newswire corpora. Entity types in this data are POL categories (PER, ORG, LOC) and MIS. Portions of the ACE corpus were held out as development and test data; the remainder is used in training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 80, |
|
"text": "(Benajiba et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 93, |
|
"text": "ACE-2005", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 94, |
|
"end": 114, |
|
"text": "(Walker et al., 2006", |
|
"ref_id": "BIBREF52" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our starting point for statistical NER is a featurebased linear model over sequences, trained using the structured perceptron (Collins, 2002) . 8 In addition to lexical and morphological 9 fea- 6 Additional details appear in the supplement. 7 We downloaded a snapshot of Arabic Wikipedia (http://ar.wikipedia.org) on 8/29/2009 and preprocessed the articles to extract main body text and metadata using the mwlib package for Python (PediaPress, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 141, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 145, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 195, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 242, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "8 A more leisurely discussion of the structured perceptron and its connection to empirical risk minimization can be found in the supplementary document. 9 We obtain morphological analyses from the MADA tool (Habash and Rambow, 2005; Roth et al., 2008) . We use a first-order structured perceptron; none of our features consider more than a pair of consecutive BIO labels at a time. The model enforces the constraint that NE sequences must begin with B (so the bigram O, I is disallowed).", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 154, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 232, |
|
"text": "(Habash and Rambow, 2005;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 251, |
|
"text": "Roth et al., 2008)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4" |
|
}, |
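The first-order decoding just described, with the constraint that entity sequences must begin with B (the bigram O, I disallowed), can be enforced directly in Viterbi decoding. A minimal sketch, not the authors' implementation: the label inventory, function name, and the use of precomputed emission/transition score matrices (standing in for the local parts of w \u00b7 g(x, y)) are all assumptions.

```python
import numpy as np

# Illustrative first-order Viterbi decoder over BIO labels. Emission
# scores stand in for the per-token parts of w . g(x, y); transition
# scores for the label-bigram parts. The decoder enforces the paper's
# constraint that entities begin with B: the bigram O -> I (and
# starting a sentence with I) is forbidden.

LABELS = ["O", "B", "I"]
NEG = -1e9  # effectively minus infinity

def viterbi_bio(emissions, transitions):
    """emissions: (M, 3) array of per-token label scores;
    transitions: (3, 3) array of label-bigram scores.
    Returns the best valid label sequence as a list of strings."""
    M, K = emissions.shape
    trans = transitions.astype(float).copy()
    trans[LABELS.index("O"), LABELS.index("I")] = NEG  # forbid O -> I
    delta = np.full((M, K), NEG)
    back = np.zeros((M, K), dtype=int)
    delta[0] = emissions[0]
    delta[0, LABELS.index("I")] = NEG  # cannot start inside an entity
    for i in range(1, M):
        for k in range(K):
            scores = delta[i - 1] + trans[:, k]
            back[i, k] = int(np.argmax(scores))
            delta[i, k] = scores[back[i, k]] + emissions[i, k]
    path = [int(np.argmax(delta[M - 1]))]
    for i in range(M - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return [LABELS[k] for k in reversed(path)]
```

The same routine serves for cost-augmented decoding (\u00a74.1) if the per-position cost is added to the emission scores before decoding.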
|
{ |
|
"text": "Training this model on ACE and ANER data achieves performance comparable to the state of the art (F 1 -measure 11 above 69%), but fares much worse on our Wikipedia test set (F 1 -measure around 47%); details are given in \u00a75.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "By augmenting the perceptron's online update with a cost function term, we can incorporate a task-dependent notion of error into the objective, as with structured SVMs (Taskar et al., 2004; Tsochantaridis et al., 2005) . Let c(y, y ) denote a measure of error when y is the correct label sequence but y is predicted. For observed sequence x and feature weights (model parameters) w, the structured hinge loss is", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 189, |
|
"text": "(Taskar et al., 2004;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 218, |
|
"text": "Tsochantaridis et al., 2005)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "hinge (x, y, w) = max y w g(x, y ) + c(y, y ) \u2212 w g(x, y) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The maximization problem inside the parentheses is known as cost-augmented decoding. If c fac-tors similarly to the feature function g(x, y), then we can increase penalties for y that have more local mistakes. This raises the learner's awareness about how it will be evaluated. Incorporating cost-augmented decoding into the perceptron leads to this decoding step:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "y \u2190 arg max y w g(x, y ) + c(y, y ) , (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "which amounts to performing stochastic subgradient ascent on an objective function with the Eq. 1 loss (Ratliff et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 125, |
|
"text": "(Ratliff et al., 2006)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this framework, cost functions can be formulated to distinguish between different types of errors made during training. For a tag sequence y = y 1 , y 2 , . . . , y M , Gimpel and Smith (2010b) define word-local cost functions that differently penalize precision errors (i.e., y i = O \u2227\u0177 i = O for the ith word), recall errors (y i = O \u2227\u0177 i = O), and entity class/position errors (other cases where y i =\u0177 i ). As will be shown below, a key problem in cross-domain NER is poor recall, so we will penalize recall errors more severely:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "c(y, y ) = M i=1 \uf8f1 \uf8f2 \uf8f3 0 if y i = y i \u03b2 if y i = O \u2227 y i = O 1 otherwise (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "for a penalty parameter \u03b2 > 1. We call our learner the \"recall-oriented\" perceptron (ROP).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
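Because the cost in Eq. 3 decomposes over word positions, cost-augmented decoding (Eq. 2) only requires adding the local cost to each candidate label's score before running the usual decoder. The following is a hedged sketch, not the authors' code; the integer label encoding with O = 0 and the function names are assumptions:

```python
import numpy as np

# Sketch of the recall-oriented cost (Eq. 3) and of folding it into
# cost-augmented decoding (Eq. 2). Labels are assumed integer-encoded,
# with O = 0 by convention in this sketch only.

O = 0  # index of the non-entity label (an assumption of this sketch)

def recall_oriented_cost(gold, pred, beta):
    """Eq. 3: 0 for a correct label; beta for a recall error (gold is
    an entity label but O is predicted); 1 for any other error."""
    total = 0.0
    for y_i, yhat_i in zip(gold, pred):
        if y_i == yhat_i:
            continue
        total += beta if (y_i != O and yhat_i == O) else 1.0
    return total

def cost_augment_emissions(emissions, gold, beta):
    """Because c(y, y') decomposes per position, adding c(y_i, k) to the
    local score of each candidate label k lets plain argmax decoding
    solve the cost-augmented maximization of Eq. 2."""
    aug = np.asarray(emissions, dtype=float).copy()
    M, K = aug.shape
    for i in range(M):
        for k in range(K):
            if k == gold[i]:
                continue
            aug[i, k] += beta if (gold[i] != O and k == O) else 1.0
    return aug
```

Raising \u03b2 makes recall errors dominate the cost-augmented argmax during training, so the perceptron update fires against them more often; this is what injects the "arrogance" toward predicting entities.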
|
{ |
|
"text": "We note that Minkov et al. (2006) similarly explored the recall vs. precision tradeoff in NER. Their technique was to directly tune the weight of a single feature-the feature marking O (nonentity tokens); a lower weight for this feature will incur a greater penalty for predicting O. Below we demonstrate that our method, which is less coarse, is more successful in our setting. 12 In our experiments we will show that injecting \"arrogance\" into the learner via the recall-oriented loss function substantially improves recall, especially for non-POL entities ( \u00a75.3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 33, |
|
"text": "Minkov et al. (2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 381, |
|
"text": "12", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Perceptron", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As we will show experimentally, the differences between news text and Wikipedia text call for domain adaptation. In the case of Arabic Wikipedia,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training and Semisupervised Learning", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Input: labeled data x (n) , y (n) N n=1 ; unlabeled data x (j) J j=1 ; supervised learner L; number of iterations T Output: w w \u2190 L( x (n) , y (n) N n=1 ) for t = 1 to T do for j = 1 to J d\u00f4 y (j) \u2190 arg max y w g(x (j) , y) w \u2190 L( x (n) , y (n) N n=1 \u222a x (j) ,\u0177 (j) J j=1 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training and Semisupervised Learning", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1: Self-training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training and Semisupervised Learning", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "there is no available labeled training data. Yet the available unlabeled data is vast, so we turn to semisupervised learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training and Semisupervised Learning", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Here we adapt self-training, a simple technique that leverages a supervised learner (like the perceptron) to perform semisupervised learning (Clark et al., 2003; Mihalcea, 2004; McClosky et al., 2006) . In our version, a model is trained on the labeled data, then used to label the unlabeled target data. We iterate between training on the hypothetically-labeled target data plus the original labeled set, and relabeling the target data; see Algorithm 1. Before self-training, we remove sentences hypothesized not to contain any named entity mentions, which we found avoids further encouragement of the model toward low recall.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 161, |
|
"text": "(Clark et al., 2003;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 177, |
|
"text": "Mihalcea, 2004;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 200, |
|
"text": "McClosky et al., 2006)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training and Semisupervised Learning", |
|
"sec_num": "4.2" |
|
}, |
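Algorithm 1, together with the sentence-filtering heuristic just described, can be sketched as a short loop. The learner and decoder here are generic stand-ins (assumptions), not the structured perceptron and its decoder:

```python
# Sketch of Algorithm 1 (self-training) plus the paper's filtering
# heuristic: hypothetically-labeled sentences containing no entity
# mention are dropped before retraining, to avoid reinforcing low
# recall. `learn` and `decode` are generic stand-ins.

def self_train(labeled, unlabeled, learn, decode, iterations):
    """labeled: list of (tokens, labels) pairs; unlabeled: list of tokens.
    learn: dataset -> model; decode: (model, tokens) -> labels."""
    model = learn(labeled)
    for _ in range(iterations):
        hypothesized = []
        for x in unlabeled:
            y_hat = decode(model, x)
            if any(label != "O" for label in y_hat):  # keep entity-bearing sentences
                hypothesized.append((x, y_hat))
        # retrain on the original labels plus the current hypotheses
        model = learn(labeled + hypothesized)
    return model
```

A toy learner that memorizes token-to-label mappings is enough to exercise the loop; in the paper's setting, `learn` would be (recall-oriented or regular) perceptron training and `decode` Viterbi decoding.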
|
{ |
|
"text": "We investigate two questions in the context of NER for Arabic Wikipedia:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Loss function: Does integrating a cost function into our learning algorithm, as we have done in the recall-oriented perceptron ( \u00a74.1), improve recall and overall performance on Wikipedia data? \u2022 Semisupervised learning for domain adaptation: Can our models benefit from large amounts of unlabeled Wikipedia data, in addition to the (out-of-domain) labeled data? We experiment with a self-training phase following the fully supervised learning phase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We report experiments for the possible combinations of the above ideas. These are summarized in table 5. Note that the recall-oriented perceptron can be used for the supervised learning phase, for the self-training phase, or both. This leaves us with the following combinations:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 reg/none (baseline): regular supervised learner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 ROP/none: recall-oriented supervised learner. Figure 1 : Tuning the recall-oriented cost parameter for different learning settings. We optimized for development set F 1 , choosing penalty \u03b2 = 200 for recall-oriented supervised learning (in the plot, ROP/*-this is regardless of whether a stage of self-training will follow); \u03b2 = 100 for recalloriented self-training following recall-oriented supervised learning (ROP/ROP); and \u03b2 = 3200 for recall-oriented self-training following regular supervised learning (reg/ROP).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 56, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 reg/reg: standard self-training setup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 ROP/reg: recall-oriented supervised learner, followed by standard self-training. \u2022 reg/ROP: regular supervised model as the initial labeler for recall-oriented self-training. \u2022 ROP/ROP (the \"double ROP\" condition): recalloriented supervised model as the initial labeler for recall-oriented self-training. Note that the two ROPs can use different cost parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
        "text": "For evaluating our models we consider the named entity detection task, i.e., recognizing which spans of words constitute entities. This is measured by per-entity precision, recall, and F 1 . 13 To measure statistical significance of differences between models we use Gimpel and Smith's (2010) implementation of the paired bootstrap resampler of (Koehn, 2004), taking 10,000 samples for each comparison.",
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 193, |
|
"text": "13", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 292, |
|
"text": "Gimpel and Smith's (2010)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 358, |
|
"text": "(Koehn, 2004)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
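The paired bootstrap test described above can be sketched in a few lines. This is a minimal illustration of a Koehn (2004)-style resampler, not the Gimpel and Smith (2010) implementation the paper actually used; the `metric` callback and item-level pairing are assumptions of this sketch.

```python
import random

def paired_bootstrap(metric, gold, sys_a, sys_b, n_samples=10000, seed=0):
    """Resample the test set with replacement and count how often system A's
    metric exceeds system B's; a high win rate suggests a significant gain."""
    rng = random.Random(seed)
    n, wins = len(gold), 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # one bootstrap replicate
        a = metric([gold[i] for i in idx], [sys_a[i] for i in idx])
        b = metric([gold[i] for i in idx], [sys_b[i] for i in idx])
        wins += a > b
    return 1.0 - wins / n_samples  # approximate one-sided p-value

# Example metric: token accuracy (the paper scores per-entity P/R/F1 instead).
accuracy = lambda g, s: sum(x == y for x, y in zip(g, s)) / len(g)
```

With per-entity F1 as the metric and 10,000 replicates, this matches the comparison protocol described above.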
|
{ |
|
        "text": "Our baseline is the perceptron, trained on the POL entity boundaries in the ACE+ANER corpus (reg/none). 14 Development data was used to select the number of iterations (10). We performed 3-fold cross-validation on the ACE data and found wide variance in the in-domain entity detection performance of this model (fold 1 corresponds to the ACE test set described in table 4). We also trained the model to perform POL detection and classification, achieving nearly identical results in the 3-way cross-validation of ACE data. From these data we conclude that our baseline is on par with the state of the art for Arabic NER on ACE news text (Abdul-Hamid and Darwish, 2010) . 15 The performance of the baseline entity detection model on our 20-article test set 16 is as follows: unsurprisingly, performance on Wikipedia data varies widely across article domains and is much lower than in-domain performance. Precision scores fall between 60% and 72% for all domains, but recall in most cases is far worse. Miscellaneous class recall, in particular, suffers badly (under 10%), which partially accounts for the poor recall in science and technology articles (they have by far the highest proportion of MIS entities).",
|
"cite_spans": [ |
|
{ |
|
"start": 638, |
|
"end": 669, |
|
"text": "(Abdul-Hamid and Darwish, 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 672, |
|
"end": 674, |
|
"text": "15", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
        "text": "[Table: per-fold precision (P), recall (R), and F 1 of the baseline in 3-fold cross-validation on ACE.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
        "text": "Following Clark et al. (2003), we applied self-training as described in Algorithm 1, with the perceptron as the supervised learner. Our unlabeled data consists of 397 Arabic Wikipedia articles (1 million words) selected at random from all articles exceeding a simple length threshold (1,000 words); see table 4. We used only one iteration (T = 1), as experiments on development data showed no benefit from additional rounds; several rounds of self-training hurt performance. 15 Abdul-Hamid and Darwish report as their best result a macroaveraged F1-score of 76. As they do not specify which data they used for their held-out test set, we cannot perform a direct comparison. However, our feature set is nearly a superset of their best feature set, and their result lies well within the range of results seen in our cross-validation folds.",
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "Clark et al. (2003)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 477, |
|
"text": "15", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Self-Training", |
|
"sec_num": "5.2" |
|
}, |
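The self-training regimen just described (Algorithm 1 is referenced above but not reproduced in this excerpt) reduces to a short loop; `train_fn`, standing in for perceptron training, is an assumption of this sketch.

```python
def self_train(train_fn, labeled, unlabeled, T=1):
    """Self-training sketch: train on gold-labeled data, then label the
    unlabeled pool with the current model and retrain on the union.
    As described in the text, every automatically labeled example is kept
    (no confidence-based filtering)."""
    model = train_fn(labeled)                      # supervised phase
    for _ in range(T):
        auto = [(x, model(x)) for x in unlabeled]  # self-label the pool
        model = train_fn(labeled + auto)           # retrain on gold + auto
    return model
```

With T = 1 this matches the setup used here; as the text notes, additional rounds risk "semantic drift."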
|
{ |
|
        "text": "16 Our Wikipedia evaluations use models trained on POLM entity boundaries in ACE. Per-domain and overall scores are microaverages across articles. Table 5 : Entity detection precision, recall, and F 1 for each learning setting, microaveraged across the 24 articles in our Wikipedia test set. Rows differ in the supervised learning condition on the ACE+ANER data (regular vs. recall-oriented perceptron). Columns indicate whether this supervised learning phase was followed by self-training on unlabeled Wikipedia data, and if so, which version of the perceptron was used for self-training. The degradation from repeated rounds of self-training is attested in earlier research (Curran et al., 2007) and is sometimes known as \"semantic drift.\" Results are shown in table 5. We find that standard self-training (the middle column) has very little impact on performance. 17 Why is this the case? We venture that poor baseline recall and the domain variability within Wikipedia are to blame.",
|
"cite_spans": [ |
|
{ |
|
"start": 627, |
|
"end": 648, |
|
"text": "(Curran et al., 2007)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 154, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Self-Training", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
        "text": "The recall-oriented bias can be introduced in either or both of the stages of our semisupervised learning framework: in the supervised learning phase, modifying the objective of our baseline (\u00a75.1); and within the self-training algorithm (\u00a75.2). 18 As noted in \u00a74.1, the aim of this approach is to discourage recall errors (false negatives), which are the chief difficulty for the news-text-trained model in the new domain. We selected the value of the false positive penalty for cost-augmented decoding, \u03b2, using the development data (figure 1).",
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 250, |
|
"text": "18", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Learning", |
|
"sec_num": "5.3" |
|
}, |
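As a concrete sketch of how the recall-oriented bias enters training, the update below decodes with an added cost whenever a gold entity token would be tagged O, so that outputs containing recall errors win the training-time argmax and trigger corrective updates. It is deliberately simplified: the paper's model is a full sequence model with Viterbi decoding, and the per-token independence and exact cost definition here are assumptions.

```python
from collections import defaultdict

def rop_update(weights, feats, gold_tags, tags=('B', 'I', 'O'), beta=200.0):
    """One recall-oriented perceptron update over a sentence (simplified to
    per-token decoding). Cost-augmented decoding adds beta to the score of
    tagging a gold entity token as 'O', so recall errors are preferentially
    surfaced during training and then penalized by the update."""
    pred = []
    for f, g in zip(feats, gold_tags):
        def score(t):
            s = sum(weights[(feat, t)] for feat in f)
            if t == 'O' and g != 'O':  # this choice would be a false negative
                s += beta              # inflate its score during training only
            return s
        pred.append(max(tags, key=score))
    for f, g, p in zip(feats, gold_tags, pred):
        if g != p:  # standard perceptron update toward the gold tag
            for feat in f:
                weights[(feat, g)] += 1.0
                weights[(feat, p)] -= 1.0
    return pred
```

At prediction time the cost term is dropped (the footnotes note that cost-augmented decoding only makes sense in learning); the term itself behaves like one extra feature with a fixed weight.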
|
{ |
|
        "text": "The results in table 5 demonstrate improvements due to the recall-oriented bias in both stages of learning. 19 When used in the supervised phase (bottom left cell), the recall gains are substantial: nearly 9% over the baseline. Integrating this bias within self-training (last column of the table) produces a more modest improvement (less than 3%) relative to the baseline. In both cases, the improvements to recall more than compensate for the amount of degradation to precision. This trend is robust: wherever the recall-oriented perceptron is added, we observe improvements in both recall and F 1 . Perhaps surprisingly, these gains are somewhat additive: using the ROP in both learning phases gives a small (though not always significant) gain over alternatives (standard supervised perceptron, no self-training, or self-training with a standard perceptron). In fact, when the standard supervised learner is used, recall-oriented self-training succeeds despite the ineffectiveness of standard self-training.",
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 110, |
|
"text": "19", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Learning", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
        "text": "Performance breakdowns by (gold) class, figure 2, and domain, figure 3, further attest to the robustness of the overall results. The most dramatic gains are in miscellaneous class recall: each form of the recall bias produces an improvement, and using this bias in both the supervised and self-training phases is clearly most successful for miscellaneous entities. Correspondingly, the technology and science domains (in which this class dominates: 83% and 61% of mentions, versus 6% and 12% for history and sports, respectively) receive the biggest boost. Still, the gaps between domains are not entirely removed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Learning", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Most improvements relate to the reduction of false negatives, which fall into three groups: (a) entities occurring infrequently or partially in the labeled training data (e.g. uranium); (b) domain-specific entities sharing lexical or contextual features with the POL entities (e.g. Linux, titanium); and (c) words with Latin characters, common in the science and technology domains. (a) and (b) are mostly transliterations into Arabic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Learning", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
        "text": "An alternative, simpler approach to controlling the precision-recall tradeoff is the Minkov et al. (2006) strategy of tuning a single feature weight subsequent to learning (see \u00a74.1 above). We performed an oracle experiment to determine how this compares to recall-oriented learning in our setting. An oracle trained with the method of Minkov et al. outperforms the three models in table 5 that use the regular perceptron for the supervised phase of learning, but underperforms the supervised ROP conditions. 20 Overall, we find that incorporating the recall-oriented bias in learning is fruitful for adapting to Wikipedia because the gains in recall outpace the damage to precision.",
|
"cite_spans": [ |
|
{ |
|
"start": 88, |
|
"end": 108, |
|
"text": "Minkov et al. (2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 514, |
|
"text": "20", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recall-Oriented Learning", |
|
"sec_num": "5.3" |
|
}, |
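The Minkov et al. (2006) strategy just mentioned can be sketched as a one-dimensional sweep: a single bias on the O (non-entity) tag's score is tuned after learning, trading precision for recall without retraining. The triple-based interface and token-level F1 below are assumptions for illustration; the paper's evaluation is per-entity.

```python
def tune_o_bias(dev_items, deltas):
    """Post-hoc precision/recall tuning sketch: dev_items holds
    (score_O, score_entity, gold_is_entity) triples; we pick the bias
    subtracted from the O score that maximizes token-level F1 on dev data."""
    def f1(delta):
        tp = fp = fn = 0
        for s_o, s_e, gold in dev_items:
            pred = s_e > s_o - delta        # larger delta -> more entities
            tp += pred and gold
            fp += pred and not gold
            fn += (not pred) and gold
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0
    return max(deltas, key=f1)
```

Unlike the recall-oriented perceptron, this adjusts only one weight after the fact; the cost function in this paper instead reshapes the whole model during learning.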
|
{ |
|
        "text": "To our knowledge, this work is the first suggestion that substantively modifying the supervised learning criterion in a resource-rich domain can reap benefits in subsequent semisupervised application in a new domain. Past work has looked at regularization (Chelba and Acero, 2006) and feature design (Daum\u00e9 III, 2007); we alter the loss function. Not surprisingly, the double-ROP approach harms performance on the original domain (on ACE data, we achieve 55.41% F 1 , far below the standard perceptron). Yet we observe that models can be prepared for adaptation even before a learner is exposed to a new domain, at the cost of performance in the original domain.",
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 280, |
|
"text": "(Chelba and Acero, 2006)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 317, |
|
"text": "(Daum\u00e9 III, 2007)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The recall-oriented bias is not merely encouraging the learner to identify entities already seen in training. As recall increases, so does the number of new entity types recovered by the model: of the 2,070 NE types in the test data that were never seen in training, only 450 were ever found by the baseline, versus 588 in the reg/ROP condition, 632 in the ROP/none condition, and 717 in the double-ROP condition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We note finally that our method is a simple extension to the standard structured perceptron; cost-augmented inference is often no more expensive than traditional inference, and the algorithmic change is equivalent to adding one additional feature. Our recall-oriented cost function is parameterized by a single value, \u03b2; recall is highly sensitive to the choice of this value (figure 1 shows how we tuned it on development data), and thus we anticipate that, in general, such tuning will be essential to leveraging the benefits of arrogance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our approach draws on insights from work in the areas of NER, domain adaptation, NLP with Wikipedia, and semisupervised learning. As all are broad areas of research, we highlight only the most relevant contributions here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
        "text": "Research in Arabic NER has focused on compiling and optimizing the gazetteers and feature sets for standard sequential modeling algorithms (Benajiba et al., 2008; Farber et al., 2008; Shaalan and Raza, 2008; Abdul-Hamid and Darwish, 2010). We make use of features identified in this prior work to construct a strong baseline system. We are unaware of any Arabic NER work that has addressed diverse text domains like Wikipedia. Both the English and Arabic versions of Wikipedia have been used, however, as resources in service of traditional NER (Kazama and Torisawa, 2007; Benajiba et al., 2008). Attia et al. (2010) heuristically induce a mapping between Arabic Wikipedia and Arabic WordNet to construct Arabic NE gazetteers. Balasuriya et al. (2009) highlight the substantial divergence between entities appearing in English Wikipedia versus traditional corpora, and the effects of this divergence on NER performance. There is evidence that models trained on Wikipedia data generalize and perform well on corpora with narrower domains; Balasuriya et al. (2009) show that NER models trained on both automatically and manually annotated Wikipedia corpora perform reasonably well on news corpora. The reverse scenario does not hold for models trained on news text, a result we also observe in Arabic NER. Other work has gone beyond the entity detection problem: Florian et al. (2004) additionally predict within-document entity coreference for Arabic, Chinese, and English ACE text, while Cucerzan (2007) aims to resolve every mention detected in English Wikipedia pages to a canonical article devoted to the entity in question.",
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 168, |
|
"text": "(Benajiba et al., 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 189, |
|
"text": "Farber et al., 2008;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 213, |
|
"text": "Shaalan and Raza, 2008;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 244, |
|
"text": "Abdul-Hamid and Darwish, 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 552, |
|
"end": 579, |
|
"text": "(Kazama and Torisawa, 2007;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 602, |
|
"text": "Benajiba et al., 2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 624, |
|
"text": "Attia et al. (2010)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 759, |
|
"text": "Balasuriya et al. (2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1050, |
|
"end": 1074, |
|
"text": "Balasuriya et al. (2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1373, |
|
"end": 1394, |
|
"text": "Florian et al. (2004)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
        "text": "The domain and topic diversity of NEs has been studied in the framework of domain adaptation research. A group of these methods use self-training and select the most informative features and training instances to adapt a source-domain learner to the new target domain. Wu et al. (2009) bootstrap the NER learner with a subset of unlabeled instances that bridge the source and target domains. Jiang and Zhai (2006) and Daum\u00e9 III (2007) make use of some labeled target-domain data to tune or augment the features of the source model towards the target domain. Here, in contrast, we use labeled target-domain data only for tuning and evaluation. Another important distinction is that domain variation in this prior work is restricted to topically related corpora (e.g. newswire vs. broadcast news), whereas in our work, major topical differences distinguish the training and test corpora, and consequently, their salient NE classes. In these respects our NER setting is closer to that of Florian et al. (2010) , who recognize English entities in noisy text; to (Surdeanu et al., 2011) , which concerns information extraction in a topically distinct target domain; and to (Dalton et al., 2011) , which addresses English NER in noisy and topically divergent text.",
|
"cite_spans": [ |
|
{ |
|
"start": 268, |
|
"end": 284, |
|
"text": "Wu et al. (2009)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 411, |
|
"text": "Jiang and Zhai (2006)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 432, |
|
"text": "Daum\u00e9 III (2007)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 982, |
|
"end": 1003, |
|
"text": "Florian et al. (2010)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1075, |
|
"text": "(Surdeanu et al., 2011)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 1159, |
|
"end": 1180, |
|
"text": "(Dalton et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Self-training (Clark et al., 2003; Mihalcea, 2004; McClosky et al., 2006) is widely used in NLP and has inspired related techniques that learn from automatically labeled data (Liang et al., 2008; Petrov et al., 2010) . Our self-training procedure differs from some others in that we use all of the automatically labeled examples, rather than filtering them based on a confidence score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 34, |
|
"text": "(Clark et al., 2003;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 35, |
|
"end": 50, |
|
"text": "Mihalcea, 2004;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 51, |
|
"end": 73, |
|
"text": "McClosky et al., 2006)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 195, |
|
"text": "(Liang et al., 2008;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 216, |
|
"text": "Petrov et al., 2010)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Cost functions have been used in nonstructured classification settings to penalize certain types of errors more than others (Chan and Stolfo, 1998; Domingos, 1999; Kiddon and Brun, 2011) . The goal of optimizing our structured NER model for recall is quite similar to the scenario explored by Minkov et al. (2006) , as noted above.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 147, |
|
"text": "(Chan and Stolfo, 1998;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 163, |
|
"text": "Domingos, 1999;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 186, |
|
"text": "Kiddon and Brun, 2011)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 293, |
|
"end": 313, |
|
"text": "Minkov et al. (2006)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
        "text": "We explored the problem of learning an NER model suited to domains for which no labeled training data are available. A loss function to encourage recall over precision during supervised discriminative learning substantially improves recall and overall entity detection performance, especially when combined with a semisupervised learning regimen incorporating the same bias. We have also developed a small corpus of Arabic Wikipedia articles via a flexible entity annotation scheme spanning four topical domains (publicly available at http://www.ark.cs.cmu.edu/AQMAR).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "OntoNotes contains news-related text. ACE includes some text from blogs. In addition to the POL classes, both corpora include additional NE classes such as facility, event, product, vehicle, etc. These entities are infrequent and may not be comprehensive enough to cover the larger set of pos-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These three languages have the most articles on Wikipedia. Associated articles here are those that have been manually hyperlinked from the Arabic page as cross-lingual correspondences. They are not translations, but if the associations are accurate, these articles should be topically similar to the Arabic page that links to them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The position and boundary measures ignore the distinctions between the POLM classes. To avoid artificial inflation of the token and token position agreement rates, we exclude the 81% of tokens tagged by both annotators as not belonging to an entity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When it came to tagging NEs, one of the two annotators was assigned to each article. Custom categories only suggested by the other annotator were ignored.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
        "text": "A gazetteer ought to yield further improvements in line with previous findings in NER (Ratinov and Roth, 2009). 11 Though optimizing NER systems for F1 has been called into question (Manning, 2006), no alternative metric has achieved widespread acceptance in the community.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The distinction between the techniques is that our cost function adjusts the whole model in order to perform better at recall on the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
        "text": "Only entity spans that exactly match the gold spans are counted as correct. We calculated these scores with the conlleval.pl script from the CoNLL 2003 shared task. 14 In keeping with prior work, we ignore non-POL categories for the ACE evaluation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
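The exact-match criterion in this footnote can be made concrete: an entity counts as correct only if its predicted boundaries exactly match a gold span. The simplified BIO handling below (class labels ignored, as in the detection-only evaluation; a stray I- tag treated as a span start) is an assumption of this sketch; the paper used the conlleval.pl script itself.

```python
def spans(tags):
    """Extract (start, end) entity spans from a BIO tag sequence."""
    out, start = [], None
    for i, t in enumerate(tags):
        if t == 'B':
            if start is not None:
                out.append((start, i))  # close the previous entity
            start = i
        elif t == 'I':
            if start is None:           # stray I- treated as a span start
                start = i
        else:                           # 'O' closes any open entity
            if start is not None:
                out.append((start, i))
                start = None
    if start is not None:
        out.append((start, len(tags)))
    return out

def span_f1(gold_tags, pred_tags):
    """Per-entity precision, recall, and F1 with exact span matching."""
    g, p = set(spans(gold_tags)), set(spans(pred_tags))
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

A partially overlapping prediction earns no credit under this criterion, which is why missed or truncated entities depress recall so sharply in the Wikipedia results.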
|
{ |
|
        "text": "In neither case does regular self-training produce a significantly different F1 score than no self-training. 18 Standard Viterbi decoding was used to label the data within the self-training algorithm; note that cost-augmented decoding only makes sense in learning, not as a prediction technique, since it deliberately introduces errors relative to a correct output that must be provided. 19 In terms of F1, the worst of the 3 models with the ROP supervised learner significantly outperforms the best model with the regular supervised learner (p < 0.005). The improvements due to self-training are marginal, however: ROP self-training produces a significant gain only following regular supervised learning (p < 0.05).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
        "text": "Tuning the O feature weight to optimize for F1 on our test set, we found that oracle precision would be 66.2, recall would be 39.0, and F1 would be 49.1. The F1 score of our best model is nearly 3 points higher than the Minkov et al.-style oracle, and over 4 points higher than the non-oracle version where the development set is used for tuning.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Mariem Fekih Zguir and Reham Al Tamime for assistance with annotation, Michael Heilman for his tagger implementation, and Nizar Habash and colleagues for the MADA toolkit. We thank members of the ARK group at CMU, Hal Daum\u00e9, and anonymous reviewers for their valuable suggestions. This publication was made possible by grant NPRP-08-485-1-083 from the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Simplified feature set for Arabic named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "-", |
|
"middle": [], |
|
"last": "Hamid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Named Entities Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "110--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Abdul-Hamid and Kareem Darwish. 2010. Simplified feature set for Arabic named entity recognition. In Proceedings of the 2010 Named En- tities Workshop, pages 110-115, Uppsala, Sweden, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "An automatically built named entity lexicon for Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Mohammed", |
|
"middle": [], |
|
"last": "Attia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lamia", |
|
"middle": [], |
|
"last": "Tounsi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monica", |
|
"middle": [], |
|
"last": "Monachini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Van Genabith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammed Attia, Antonio Toral, Lamia Tounsi, Mon- ica Monachini, and Josef van Genabith. 2010. An automatically built named entity lexicon for Arabic. In Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Ste- lios Piperidis, Mike Rosner, and Daniel Tapias, ed- itors, Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Improving machine translation quality with automatic named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Bogdan", |
|
"middle": [], |
|
"last": "Babych", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Hartley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 7th International EAMT Workshop on MT and Other Language Technology Tools, EAMT '03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bogdan Babych and Anthony Hartley. 2003. Im- proving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT Workshop on MT and Other Language Technology Tools, EAMT '03.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Named entity recognition in Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Dominic", |
|
"middle": [], |
|
"last": "Balasuriya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicky", |
|
"middle": [], |
|
"last": "Ringland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Re- sources, pages 10-18, Suntec, Singapore, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "ANERsys: an Arabic named entity recognition system based on maximum entropy", |
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9 Miguel", |
|
"middle": [], |
|
"last": "Bened\u00edruiz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of CICLing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba, Paolo Rosso, and Jos\u00e9 Miguel Bened\u00edRuiz. 2007. ANERsys: an Arabic named entity recognition system based on maximum en- tropy. In Alexander Gelbukh, editor, Proceedings of CICLing, pages 143-153, Mexico City, Mexio. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Arabic named entity recognition using optimized feature sets", |
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "284--293", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba, Mona Diab, and Paolo Rosso. 2008. Arabic named entity recognition using optimized feature sets. In Proceedings of the 2008 Confer- ence on Empirical Methods in Natural Language Processing, pages 284-293, Honolulu, Hawaii, Oc- tober. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Toward scalable learning with non-uniform class and cost distributions: a case study in credit card fraud detection", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Philip", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salvatore", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Stolfo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip K. Chan and Salvatore J. Stolfo. 1998. To- ward scalable learning with non-uniform class and cost distributions: a case study in credit card fraud detection. In Proceedings of the Fourth Interna- tional Conference on Knowledge Discovery and Data Mining, pages 164-168, New York City, New York, USA, August. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Adaptation of maximum entropy capitalizer: Little data can help a lot", |
|
"authors": [ |
|
{ |
|
"first": "Ciprian", |
|
"middle": [], |
|
"last": "Chelba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Acero", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computer Speech and Language", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "382--399", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ciprian Chelba and Alex Acero. 2006. Adaptation of maximum entropy capitalizer: Little data can help a lot. Computer Speech and Language, 20(4):382- 399.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Supersense tagging of unknown nouns in WordNet", |
|
"authors": [ |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Su- persense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 168-175.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bootstrapping POS-taggers using unlabelled data", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Osborne", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark, James Curran, and Miles Osborne. 2003. Bootstrapping POS-taggers using unlabelled data. In Walter Daelemans and Miles Osborne, editors, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 49-55.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: theory and experi- ments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1- 8, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Large-scale named entity disambiguation based on Wikipedia data", |
|
"authors": [ |
|
{ |

"first": "Silviu", |

"middle": [], |

"last": "Cucerzan", |

"suffix": "" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "708--716", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 708-716, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Minimising semantic drift with Mutual Exclusion Bootstrapping", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Scholz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of PA-CLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with Mutual Exclusion Bootstrapping. In Proceedings of PA- CLING, 2007.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Passage retrieval for incorporating global evidence in sequence labeling", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dalton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM '11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Dalton, James Allan, and David A. Smith. 2011. Passage retrieval for incorporating global evidence in sequence labeling. In Proceedings of the 20th ACM International Conference on Infor- mation and Knowledge Management (CIKM '11), pages 355-364, Glasgow, Scotland, UK, October. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Frustratingly easy domain adaptation", |
|
"authors": [ |
|
{ |

"first": "Hal", |

"middle": [], |

"last": "Daum\u00e9", |

"suffix": "III" |

} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Lin- guistics, pages 256-263, Prague, Czech Republic, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "MetaCost: a general method for making classifiers cost-sensitive", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Domingos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "155--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Domingos. 1999. MetaCost: a general method for making classifiers cost-sensitive. Proceedings of the Fifth ACM SIGKDD International Confer- ence on Knowledge Discovery and Data Mining, pages 155-164.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving NER in Arabic using a morphological tagger", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Farber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dayne", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Sixth International Language Resources and Evaluation (LREC'08)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2509--2514", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Farber, Dayne Freitag, Nizar Habash, and Owen Rambow. 2008. Improving NER in Arabic using a morphological tagger. In Nicoletta Calzo- lari, Khalid Choukri, Bente Maegaard, Joseph Mar- iani, Jan Odjik, Stelios Piperidis, and Daniel Tapias, editors, Proceedings of the Sixth International Lan- guage Resources and Evaluation (LREC'08), pages 2509-2514, Marrakech, Morocco, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A statistical model for multilingual entity detection and tracking", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abraham", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyan", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nanda", |
|
"middle": [], |
|
"last": "Kambhatla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoqiang", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Nicolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Susan Dumais, Daniel Marcu, and Salim Roukos, editors, Proceedings of the Hu- man Language Technology Conference of the North American Chapter of the Association for Compu- tational Linguistics: HLT-NAACL 2004, page 18, Boston, Massachusetts, USA, May. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Improving mention detection robustness to noisy input", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Pitrelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Imed", |
|
"middle": [], |
|
"last": "Zitouni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of EMNLP 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "335--345", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radu Florian, John Pitrelli, Salim Roukos, and Imed Zitouni. 2010. Improving mention detection ro- bustness to noisy input. In Proceedings of EMNLP 2010, pages 335-345, Cambridge, MA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Trained named entity recognition using distributional clusters", |
|
"authors": [ |
|
{ |
|
"first": "Dayne", |
|
"middle": [], |
|
"last": "Freitag", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of EMNLP 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "262--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dayne Freitag. 2004. Trained named entity recog- nition using distributional clusters. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 262-269, Barcelona, Spain, July. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Softmaxmargin CRFs: Training log-linear models with loss functions", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Human Language Technologies Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "733--736", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel and Noah A. Smith. 2010a. Softmax- margin CRFs: Training log-linear models with loss functions. In Proceedings of the Human Language Technologies Conference of the North American Chapter of the Association for Computational Lin- guistics, pages 733-736, Los Angeles, California, USA, June.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Softmax-margin training for structured loglinear models", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Softmax-margin training for structured log- linear models. Technical Report CMU-LTI- 10-008, Carnegie Mellon University. http: //www.lti.cs.cmu.edu/research/ reports/2010/cmulti10008.pdf.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Proposal for an extension of traditional named entities: from guidelines to evaluation, an overview", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Grouin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sophie", |
|
"middle": [], |
|
"last": "Rosset", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Zweigenbaum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karn", |
|
"middle": [], |
|
"last": "Fort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Galibert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Quintard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 5th Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "92--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Grouin, Sophie Rosset, Pierre Zweigenbaum, Karn Fort, Olivier Galibert, and Ludovic Quin- tard. 2011. Proposal for an extension of tradi- tional named entities: from guidelines to evaluation, an overview. In Proceedings of the 5th Linguis- tic Annotation Workshop, pages 92-100, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop", |
|
"authors": [ |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "573--580", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nizar Habash and Owen Rambow. 2005. Arabic to- kenization, part-of-speech tagging and morpholog- ical disambiguation in one fell swoop. In Proceed- ings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics (ACL'05), pages 573-580, Ann Arbor, Michigan, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Introduction to Arabic Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nizar Habash. 2010. Introduction to Arabic Natural Language Processing. Morgan and Claypool Pub- lishers.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Improving named entity translation by exploiting comparable and parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haytham", |
|
"middle": [], |
|
"last": "Fahmy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Conference on Recent Advances in Natural Language Processing (RANLP '07)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Hassan, Haytham Fahmy, and Hany Hassan. 2007. Improving named entity translation by ex- ploiting comparable and parallel corpora. In Pro- ceedings of the Conference on Recent Advances in Natural Language Processing (RANLP '07), Borovets, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "OntoNotes: the 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL (HLT-NAACL), pages 57-60, New York City, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Unsupervised discovery of domain-specific knowledge from text", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chunliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anselmo", |
|
"middle": [], |
|
"last": "Peas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1466--1475", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy, Chunliang Zhang, Eduard Hovy, and Anselmo Peas. 2011. Unsupervised discovery of domain-specific knowledge from text. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1466-1475, Portland, Oregon, USA, June. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Exploiting domain structure for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jing Jiang and ChengXiang Zhai. 2006. Exploit- ing domain structure for named entity recognition. In Proceedings of the Human Language Technol- ogy Conference of the NAACL (HLT-NAACL), pages 74-81, New York City, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Exploiting Wikipedia as external knowledge for named entity recognition", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "698--707", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Exploiting Wikipedia as external knowledge for named entity recognition. In Proceedings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL), pages 698-707, Prague, Czech Republic, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "That's what she said: double entendre identification", |
|
"authors": [ |
|
{ |
|
"first": "Chloe", |
|
"middle": [], |
|
"last": "Kiddon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuriy", |
|
"middle": [], |
|
"last": "Brun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chloe Kiddon and Yuriy Brun. 2011. That's what she said: double entendre identification. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 89-94, Portland, Ore- gon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Statistical significance tests for machine translation evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of EMNLP 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "388--395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Automatic Content Extraction) Arabic annotation guidelines for entities, version 5.3.3. Linguistic Data Consortium", |
|
"authors": [], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "592--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "LDC. 2005. ACE (Automatic Content Extraction) Arabic annotation guidelines for entities, version 5.3.3. Linguistic Data Consortium, Philadelphia. Percy Liang, Hal Daum\u00e9 III, and Dan Klein. 2008. Structure compilation: trading structure for fea- tures. In Proceedings of the 25th International Con- ference on Machine Learning (ICML), pages 592- 599, Helsinki, Finland.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Doing named entity recognition? Don't optimize for F 1", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Manning. 2006. Doing named entity recogni- tion? Don't optimize for F 1 . http://nlpers. blogspot.com/2006/08/doing-named- entity-recognition-dont.html.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Effective self-training for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David McClosky, Eugene Charniak, and Mark John- son. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152-159, New York City, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Co-training and self-training for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "HLT-NAACL 2004 Workshop: Eighth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea. 2004. Co-training and self-training for word sense disambiguation. In HLT-NAACL 2004 Workshop: Eighth Conference on Computa- tional Natural Language Learning (CoNLL-2004), Boston, Massachusetts, USA.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "NER systems that suit user's preferences: adjusting the recall-precision trade-off for entity extraction", |
|
"authors": [ |
|
{ |
|
"first": "Einat", |
|
"middle": [], |
|
"last": "Minkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Tomasic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Einat Minkov, Richard Wang, Anthony Tomasic, and William Cohen. 2006. NER systems that suit user's preferences: adjusting the recall-precision trade-off for entity extraction. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 93-96, New York City, USA, June. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "What in the world is a Shahab? Wide coverage named entity recognition for Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Nezda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarmad", |
|
"middle": [], |
|
"last": "Fayyaz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proccedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luke Nezda, Andrew Hickl, John Lehmann, and Sar- mad Fayyaz. 2006. What in the world is a Shahab? Wide coverage named entity recognition for Arabic. In Proccedings of LREC, pages 41-46.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Analysing Wikipedia and gold-standard corpora for NER training", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "612--620", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joel Nothman, Tara Murphy, and James R. Curran. 2009. Analysing Wikipedia and gold-standard cor- pora for NER training. In Proceedings of the 12th Conference of the European Chapter of the Associ- ation for Computational Linguistics (EACL 2009), pages 612-620, Athens, Greece, March. Associa- tion for Computational Linguistics. PediaPress. 2010. mwlib. http://code. pediapress.com/wiki/wiki/mwlib.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Uptraining for accurate deterministic question parsing", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pi-Chuan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Ringgaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiyan", |
|
"middle": [], |
|
"last": "Alshawi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "705--713", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate de- terministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705-713, Cambridge, MA, October. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Design challenges and misconceptions in named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design chal- lenges and misconceptions in named entity recog- nition. In Proceedings of the Thirteenth Confer- ence on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Subgradient methods for maximum margin structured learning", |
|
"authors": [ |
|
{ |
|
"first": "Nathan", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Ratliff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Andrew" |
|
], |
|
"last": "Bagnell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Zinkevich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ICML Workshop on Learning in Structured Output Spaces", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathan D. Ratliff, J. Andrew Bagnell, and Martin A. Zinkevich. 2006. Subgradient methods for maxi- mum margin structured learning. In ICML Work- shop on Learning in Structured Output Spaces, Pittsburgh, Pennsylvania, USA.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Arabic morphological tagging, diacritization, and lemmatization using lexeme models and feature ranking", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cynthia", |
|
"middle": [], |
|
"last": "Rudin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "117--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic morpho- logical tagging, diacritization, and lemmatization using lexeme models and feature ranking. In Pro- ceedings of ACL-08: HLT, pages 117-120, Colum- bus, Ohio, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Extended named entity hierarchy", |
|
"authors": [ |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kiyoshi", |
|
"middle": [], |
|
"last": "Sudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chikashi", |
|
"middle": [], |
|
"last": "Nobata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Pro- ceedings of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Biomedical named entity recognition using conditional random fields and rich feature sets", |
|
"authors": [ |
|
{ |
|
"first": "Burr", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "COLING 2004 International Joint workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP) 2004", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Burr Settles. 2004. Biomedical named entity recogni- tion using conditional random fields and rich feature sets. In Nigel Collier, Patrick Ruch, and Adeline Nazarenko, editors, COLING 2004 International Joint workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP) 2004, pages 107-110, Geneva, Switzerland, Au- gust. COLING.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Arabic named entity recognition from diverse text types", |
|
"authors": [ |
|
{ |
|
"first": "Khaled", |
|
"middle": [], |
|
"last": "Shaalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hafsa", |
|
"middle": [], |
|
"last": "Raza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Khaled Shaalan and Hafsa Raza. 2008. Arabic named entity recognition from diverse text types. In Advances in Natural Language Processing, pages 440-451. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Customizing an information extraction system to a new domain", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mcclosky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mason", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrey", |
|
"middle": [], |
|
"last": "Gusev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the ACL 2011 Workshop on Relational Models of Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, David McClosky, Mason R. Smith, Andrey Gusev, and Christopher D. Manning. 2011. Customizing an information extraction system to a new domain. In Proceedings of the ACL 2011 Workshop on Relational Models of Semantics, Port- land, Oregon, USA, June. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Max-margin Markov networks", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Neural Information Processing Systems 16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin Markov networks. In Sebastian Thrun, Lawrence Saul, and Bernhard Sch\u00f6lkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Improving question answering using named entity recognition. Natural Language Processing and Information Systems", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Toral", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elisa", |
|
"middle": [], |
|
"last": "Noguera", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Llopis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafael", |
|
"middle": [], |
|
"last": "Mu\u00f1oz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "3513", |
|
"issue": "", |
|
"pages": "181--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonio Toral, Elisa Noguera, Fernando Llopis, and Rafael Mu\u00f1oz. 2005. Improving question an- swering using named entity recognition. Natu- ral Language Processing and Information Systems, 3513/2005:181-191.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Large margin methods for structured and interdependent output variables", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Tsochantaridis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasemin", |
|
"middle": [], |
|
"last": "Altun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1453--1484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453-1484, September.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "ACE 2005 multilingual training corpus. LDC2006T06, Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Strassel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Medero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuaki", |
|
"middle": [], |
|
"last": "Maeda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multi- lingual training corpus. LDC2006T06, Linguistic Data Consortium, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "BBN pronoun coreference and entity type corpus. LDC2005T33, Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ada", |
|
"middle": [], |
|
"last": "Brunstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type cor- pus. LDC2005T33, Linguistic Data Consortium, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Domain adaptive bootstrapping for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wee", |
|
"middle": [], |
|
"last": "Sun Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Ye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [ |
|
"Leong" |
|
], |
|
"last": "Chieu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1523--1532", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Wu, Wee Sun Lee, Nan Ye, and Hai Leong Chieu. 2009. Domain adaptive bootstrapping for named entity recognition. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing, pages 1523-1532, Singapore, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "CHINERS: a Chinese named entity recognition system for the sports domain", |
|
"authors": [ |
|
{ |
|
"first": "Tianfang", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregor", |
|
"middle": [], |
|
"last": "Erbach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianfang Yao, Wei Ding, and Gregor Erbach. 2003. CHINERS: a Chinese named entity recognition sys- tem for the sports domain. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 55-62, Sapporo, Japan, July. As- sociation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure 3: Supervised learner precision vs. recall as evaluated on Wikipedia test data in different topical domains. The regular perceptron (baseline model) is contrasted with ROP. No self-training is applied.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>History</td><td>Science</td><td>Sports</td><td>Technology</td></tr><tr><td>dev: Damascus</td><td>Atom</td><td>Ra\u00fal Gonz\u00e1les</td><td>Linux</td></tr><tr><td colspan=\"2\">Imam Hussein Shrine Nuclear power</td><td>Real Madrid</td><td>Solaris</td></tr><tr><td>test: Crusades</td><td>Enrico Fermi</td><td colspan=\"2\">2004 Summer Olympics Computer</td></tr><tr><td colspan=\"2\">Islamic Golden Age Light</td><td>Christiano Ronaldo</td><td>Computer Software</td></tr><tr><td>Islamic History</td><td>Periodic Table</td><td>Football</td><td>Internet</td></tr><tr><td>Ibn Tolun Mosque</td><td>Physics</td><td colspan=\"2\">Portugal football team Richard Stallman</td></tr><tr><td>Ummaya Mosque</td><td colspan=\"2\">Muhammad al-Razi FIFA World Cup</td><td>X Window System</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "-all are in the news domain. 2 However," |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Amman were reserved</td></tr><tr><td colspan=\"2\">for training annotators,</td></tr><tr><td colspan=\"2\">and Gulf War for esti-</td></tr><tr><td>mating</td><td>inter-annotator</td></tr><tr><td>agreement.</td><td/></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Translated titles of Arabic Wikipedia articles in our development and test sets, and some NEs with standard and article-specific classes. Additionally, Prussia and" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "Inter-annotator agreement measurements." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>tures known to work well for Arabic NER (Be-</td></tr><tr><td>najiba et al., 2008; Abdul-Hamid and Darwish,</td></tr><tr><td>2010), we incorporate some additional features</td></tr><tr><td>enabled by Wikipedia. We do not employ a</td></tr><tr><td>gazetteer, as the construction of a broad-domain</td></tr><tr><td>gazetteer is a significant undertaking orthogo-</td></tr><tr><td>nal to the challenges of a new text domain like</td></tr><tr><td>Wikipedia. 10 A descriptive list of our features is</td></tr><tr><td>available in the supplementary document.</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"text": "Number of words (entity mentions) in data sets." |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null, |
|
"text": "SELF-TRAINING SUPERVISED none reg ROP reg 66.3 35.9 46.59 66.7 35.6 46.41 59.2 40.3 47.97 ROP 60.9 44.7 51.59 59.8 46.2 52.11 58.0 47.4 52.16" |
|
} |
|
} |
|
} |
|
} |