|
{ |
|
"paper_id": "Q15-1018", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:07:51.424300Z" |
|
}, |
|
"title": "Combining Minimally-supervised Methods for Arabic Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Maha", |
|
"middle": [], |
|
"last": "Althobaiti", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Essex Colchester", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Essex Colchester", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Essex Colchester", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Supervised methods can achieve high performance on NLP tasks, such as Named Entity Recognition (NER), but new annotations are required for every new domain and/or genre change. This has motivated research in minimally supervised methods such as semisupervised learning and distant learning, but neither technique has yet achieved performance levels comparable to those of supervised methods. Semi-supervised methods tend to have very high precision but comparatively low recall, whereas distant learning tends to achieve higher recall but lower precision. This complementarity suggests that better results may be obtained by combining the two types of minimally supervised methods. In this paper we present a novel approach to Arabic NER using a combination of semi-supervised and distant learning techniques. We trained a semi-supervised NER classifier and another one using distant learning techniques, and then combined them using a variety of classifier combination schemes, including the Bayesian Classifier Combination (BCC) procedure recently proposed for sentiment analysis. According to our results, the BCC model leads to an increase in performance of 8 percentage points over the best base classifiers.", |
|
"pdf_parse": { |
|
"paper_id": "Q15-1018", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Supervised methods can achieve high performance on NLP tasks, such as Named Entity Recognition (NER), but new annotations are required for every new domain and/or genre change. This has motivated research in minimally supervised methods such as semisupervised learning and distant learning, but neither technique has yet achieved performance levels comparable to those of supervised methods. Semi-supervised methods tend to have very high precision but comparatively low recall, whereas distant learning tends to achieve higher recall but lower precision. This complementarity suggests that better results may be obtained by combining the two types of minimally supervised methods. In this paper we present a novel approach to Arabic NER using a combination of semi-supervised and distant learning techniques. We trained a semi-supervised NER classifier and another one using distant learning techniques, and then combined them using a variety of classifier combination schemes, including the Bayesian Classifier Combination (BCC) procedure recently proposed for sentiment analysis. According to our results, the BCC model leads to an increase in performance of 8 percentage points over the best base classifiers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Supervised learning techniques are very effective and widely used to solve many NLP problems, including NER (Sekine and others, 1998; Benajiba et al., 2007a; Darwish, 2013) . The main disadvantage of supervised techniques, however, is the need for a large annotated corpus. Although a considerable amount of annotated data is available for many languages, including Arabic (Zaghouani, 2014) , changing the domain or expanding the set of classes always requires domain-specific experts and new annotated data, both of which demand time and effort. Therefore, much of the current research on NER focuses on approaches that require minimal human intervention to export the named entity (NE) classifiers to new domains and to expand NE classes (Nadeau, 2007; Nothman et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 133, |
|
"text": "(Sekine and others, 1998;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 157, |
|
"text": "Benajiba et al., 2007a;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 172, |
|
"text": "Darwish, 2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 390, |
|
"text": "(Zaghouani, 2014)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 740, |
|
"end": 754, |
|
"text": "(Nadeau, 2007;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 776, |
|
"text": "Nothman et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Semi-supervised (Abney, 2010) and distant learning approaches (Mintz et al., 2009; Nothman et al., 2013) are alternatives to supervised methods that do not require manually annotated data. These approaches have proved to be effective and easily adaptable to new NE types. However, the performance of such methods tends to be lower than that achieved with supervised methods (Althobaiti et al., 2013; Nadeau, 2007; Nothman et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 29, |
|
"text": "(Abney, 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 62, |
|
"end": 82, |
|
"text": "(Mintz et al., 2009;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 104, |
|
"text": "Nothman et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 399, |
|
"text": "(Althobaiti et al., 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 400, |
|
"end": 413, |
|
"text": "Nadeau, 2007;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "Nothman et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We propose combining these two minimally supervised methods in order to exploit their respective strengths and thereby obtain better results. Semisupervised learning tends to be more precise than distant learning, which in turn leads to higher recall than semi-supervised learning. In this work, we use various classifier combination schemes to combine the minimal supervision methods. Most previous studies have examined classifier combination schemes to combine multiple supervisedlearning systems (Florian et al., 2003; Saha and Ekbal, 2013) , but this research is the first to combine minimal supervision approaches. In addition, we report our results from testing the recently proposed Independent Bayesian Classifier Combination (IBCC) scheme (Kim and Ghahramani, 2012; Levenberg et al., 2014) and comparing it with traditional voting methods for ensemble combination.", |
|
"cite_spans": [ |
|
{ |
|
"start": 500, |
|
"end": 522, |
|
"text": "(Florian et al., 2003;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 544, |
|
"text": "Saha and Ekbal, 2013)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 775, |
|
"text": "(Kim and Ghahramani, 2012;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 776, |
|
"end": 799, |
|
"text": "Levenberg et al., 2014)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A lot of research has been devoted to Arabic NER over the past ten years. Much of the initial work employed hand-written rule-based techniques (Mesfar, 2007; Shaalan and Raza, 2009; Elsebai et al., 2009) . More recent approaches to Arabic NER are based on supervised learning techniques. The most common supervised learning techniques investigated for Arabic NER are Maximum Entropy (ME) (Benajiba et al., 2007b) , Support Vector Machines (SVMs) , and Conditional Random Fields (CRFs) Abdul-Hamid and Darwish, 2010) . Darwish (2013) presented cross-lingual features for NER that make use of the linguistic properties and knowledge bases of another language. In his study, English capitalisation features and an English knowledge base (DBpedia) were exploited as discriminative features for Arabic NER. A large Machine Translation (MT) phrase table and Wikipedia cross-lingual links were used for translation between Arabic and English. The results showed an overall F-score of 84.3% with an improvement of 5.5% over a strong baseline system on a standard dataset (the ANERcorp set collected by Benajiba et al. (2007a) ). Abdallah et al. (2012) proposed a hybrid NER system for Arabic that integrates a rule-based system with a decision tree classifier. Their integrated approach increased the F-score by between 8% and 14% when compared to the original rule based system and the pure machine learning technique. Oudah and Shaalan (2012) also developed hybrid Arabic NER systems that integrate a rulebased approach with three different supervised techniques: decision trees, SVMs, and logistic regression. Their best hybrid system outperforms state-ofthe-art Arabic NER systems Abdallah et al., 2012) on standard test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 157, |
|
"text": "(Mesfar, 2007;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 181, |
|
"text": "Shaalan and Raza, 2009;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 203, |
|
"text": "Elsebai et al., 2009)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 412, |
|
"text": "(Benajiba et al., 2007b)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 515, |
|
"text": "Abdul-Hamid and Darwish, 2010)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 532, |
|
"text": "Darwish (2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1094, |
|
"end": 1117, |
|
"text": "Benajiba et al. (2007a)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1121, |
|
"end": 1143, |
|
"text": "Abdallah et al. (2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1412, |
|
"end": 1436, |
|
"text": "Oudah and Shaalan (2012)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1677, |
|
"end": 1699, |
|
"text": "Abdallah et al., 2012)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic NER", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Much current research seeks adequate alternatives to expensive corpus annotation that address the limitations of supervised learning methods: the need for substantial human intervention and the limited number of NE classes that can be handled by the system. Semi-supervised techniques and distant learning are examples of methods that require minimal supervision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimal Supervision and NER", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Semi-supervised learning (SSL) (Abney, 2010) has been used for various NLP tasks, including NER (Nadeau, 2007) . 'Bootstrapping' is the most common semi-supervised technique. Bootstrapping involves a small degree of supervision, such as a set of seeds, to initiate the learning process (Nadeau and Sekine, 2007) . An early study that introduced mutual bootstrapping and proved highly influential is (Riloff and Jones, 1999) . They presented an algorithm that begins with a set of seed examples of a particular entity type. Then, all contexts found around these seeds in a large corpus are compiled, ranked, and used to find new examples. Pasca et al. (2006) used the same bootstrapping technique as Riloff and Jones (1999) , but applied the technique to very large corpora and managed to generate one million facts with a precision rate of about 88%. Ab-delRahman et al. (2010) proposed to integrate bootstrapping semi-supervised pattern recognition and a Conditional Random Fields (CRFs) classifier. They used semi-supervised pattern recognition in order to generate patterns that were then used as features in the CRFs classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 110, |
|
"text": "(Nadeau, 2007)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 311, |
|
"text": "(Nadeau and Sekine, 2007)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 423, |
|
"text": "(Riloff and Jones, 1999)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 638, |
|
"end": 657, |
|
"text": "Pasca et al. (2006)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 699, |
|
"end": 722, |
|
"text": "Riloff and Jones (1999)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 851, |
|
"end": 877, |
|
"text": "Ab-delRahman et al. (2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimal Supervision and NER", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Distant learning (DL) is another popular paradigm that avoids the high cost of supervision. It depends on the use of external knowledge (e.g., encyclopedias such as Wikipedia, unlabelled large corpora, or external semantic repositories) to increase the performance of the classifier, or to automatically create new resources for use in the learning process (Mintz et al., 2009; Nguyen and Moschitti, 2011) . Nothman et al. (2013) automatically created massive, multilingual training annotations for NER by exploiting the text and internal structure of Wikipedia. They first categorised Wikipedia articles into a specific set of named entity types across nine languages: Dutch, English, French, German, Italian, Polish, Portuguese, Rus-sian, and Spanish. Then, Wikipedia's links were transformed into named entity annotations based on the NE types of the target articles. Following this approach, millions of words were annotated in the aforementioned nine languages. Their method for automatically deriving corpora from Wikipedia outperformed the methods proposed by Richman and Schone (2008) and Mika et al. (2008) when testing the Wikipedia-trained models on CONLL shared task data and other gold-standard corpora. Alotaibi and Lee (2013) presented a methodology to automatically build two NE-annotated sets from Arabic Wikipedia. The corpora were built by transforming links into NE annotations according to the NE type of the target articles. POS-tagging, morphological analysis, and linked NE phrases were used to detect other mentions of NEs that appear without links in text. Their Wikipedia-trained model performed well when tested on various newswire test sets, but it did not surpass the performance of the supervised classifier that is trained and tested on data sets drawn from the same domain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 377, |
|
"text": "(Mintz et al., 2009;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 405, |
|
"text": "Nguyen and Moschitti, 2011)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 429, |
|
"text": "Nothman et al. (2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1067, |
|
"end": 1092, |
|
"text": "Richman and Schone (2008)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1097, |
|
"end": 1115, |
|
"text": "Mika et al. (2008)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1217, |
|
"end": 1240, |
|
"text": "Alotaibi and Lee (2013)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Minimal Supervision and NER", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We are not aware of any previous work combining minimally supervised methods for NER task in Arabic or any other natural language, but there are many studies that have examined classifier combination schemes to combine various supervisedlearning systems. Florian et al. (2003) presented the best system at the NER CoNLL 2003 task, with an F-score value equal to 88.76%. They used a combination of four diverse NE classifiers: the transformation-based learning classifier, a Hidden Markov Model classifier (HMM), a robust risk minimization classifier based on a regularized winnow method (Zhang et al., 2002) , and a ME classifier. The features they used included tokens, POS and chunk tags, affixes, gazetteers, and the output of two other NE classifiers trained on richer datasets. Their methods for combining the results of the four NE classifiers improved the overall performance by 17-21% when compared with the best performing classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 276, |
|
"text": "Florian et al. (2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 607, |
|
"text": "(Zhang et al., 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Combination and NER", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Saha and Ekbal (2013) studied classifier combination techniques for various NER models under single and multi-objective optimisation frameworks. They used seven diverse classifiers -naive Bayes, decision tree, memory based learner, HMM, ME, CRFs, and SVMs -to build a number of voting models based on identified text features that are selected mostly without domain knowledge. The combination methods used were binary and real vote-based ensembles. They reported that the proposed multiobjective optimisation classifier ensemble with real voting outperforms the individual classifiers, the three baseline ensembles, and the corresponding single objective classifier ensemble.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Combination and NER", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Two main minimally supervised approaches have been used for NER: semi-supervised learning (Althobaiti et al., 2013) and distant supervision (Nothman et al., 2013) . We developed state-of-the-art classifiers of both types that will be used as base classifiers in this paper. Our implementations of these classifiers are explained in Section 3.1 and Section 3.2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 115, |
|
"text": "(Althobaiti et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 140, |
|
"end": 162, |
|
"text": "(Nothman et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Two Minimally Supervised NER Classifiers", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As previously mentioned, the most common SSL technique is bootstrapping, which only requires a set of seeds to initiate the learning process. We used an algorithm adapted from Althobaiti et al. (2013) and contains three components, as shown in Figure 1 . The algorithm begins with a list of a few examples of a given NE type (e.g., 'London' and 'Paris' can be used as seed examples for location entities) and learns patterns (P) that are used to find more examples (candidate NEs). These examples are eventually sorted and used again as seed examples for the next iteration.", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 200, |
|
"text": "Althobaiti et al. (2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 252, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semi-supervised Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our algorithm does not use plain frequencies since absolute frequency does not always produce good examples. This is because bad examples will be extracted by one pattern, however unwantedly, as many times as the bad examples appear in the text in relatively similar contexts. Meanwhile, good exam-ples are best extracted using more than one pattern, since they occur in a wider variety of contexts in the text. Instead, our algorithm ranks candidate NEs according to the number of different patterns that are used to extract them, since pattern variety is a better cue to semantics than absolute frequency (Baroni et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 607, |
|
"end": 628, |
|
"text": "(Baroni et al., 2010)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semi-supervised Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "After sorting the examples according to the number of distinct patterns, all examples but the top m are discarded, where m is set to the number of examples from the previous iteration, plus one. These m examples will be used in the next iteration, and so on. For example, if we start the algorithm with 20 seed instances, the following iteration will start with 21, and the next one will start with 22, and so on. This procedure is necessary in order to carefully include examples from one iteration to another and to ensure that bad instances are not passed on to the next iteration. The same procedure was applied by (Althobaiti et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 644, |
|
"text": "(Althobaiti et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semi-supervised Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For distant learning we follow the state of the art approach to exploit Wikipedia for Arabic NER, as in (Althobaiti et al., 2014) . Our distant learning system exploits many of Wikipedia's features, such as anchor texts, redirects, and inter-language links, in order to automatically develop an Arabic NE annotated corpus, which is used later to train a state-ofthe-art supervised classifier. The three steps of this approach are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 129, |
|
"text": "(Althobaiti et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distant Learning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. Classify Wikipedia articles into a set of NE types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Distant Learning", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Identify and label matching text in the title and the first sentence of each article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Label linked phrases in the text according to the NE type of the target article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Compile a list of alternative titles for articles and filter out ambiguous ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "\u2022 Identify and label matching phrases in the list and the Wikipedia text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "3. Filter sentences to prevent noisy sentences from being included in the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We briefly explain these steps in the following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotate the Wikipedia text as follows:", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The Wikipedia articles in the dataset need to be classified into the set of named entity types in the classification scheme. We conduct an experiment that uses simple bag-of-words features extracted from different portions of the Wikipedia document and metadata such as categories, the infobox table, and tokens from the article title and first sentence of the document. To improve the accuracy of document classification, tokens are distinguished based on their location in the document. Therefore, categories and infobox features are marked with suffixes to differentiate them from tokens extracted from the article's body text (Tardif et al., 2009) . The feature set is represented by Term Frequency-Inverse Document Frequency (TF-IDF). In order to develop a Wikipedia document classifier to categorise Wikipedia documents into CoNLL NE types, namely person, location, organisation, miscellaneous, or other, we use a set of 4,000 manually classified Wikipedia articles that are available free online (Alotaibi and Lee, 2012) . 80% of the 4,000 hand-classified Wikipedia articles are used for training, and 20% for evaluation. The Wikipedia document classifier that we train performs well, achieving an F-score of 90%. The classifier is then used to classify all Wikipedia articles. At the end of this stage, we obtain a list of pairs containing each Wikipedia article and its NE Type in preparation for the next stage: developing the NE-tagged training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 630, |
|
"end": 651, |
|
"text": "(Tardif et al., 2009)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1027, |
|
"text": "(Alotaibi and Lee, 2012)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifying Wikipedia Articles", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "To begin the Annotation Process we identify matching terms in the article title and the first sentence and then tag the matching phrases with the NE-type of the article. The system adopts partial matching where all corresponding words in the title and the first sentence should first be identified. Then, the system annotates them and all words in between (Althobaiti et al., 2014) . The next step is to transform the links between Wikipedia articles into NE annotations according to the NE-type of the link target.", |
|
"cite_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 381, |
|
"text": "(Althobaiti et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Annotation Process", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Wikipedia also contains a fair amount of NEs without links. We follow the technique proposed by Nothman et al. (2013) , which suggests inferring additional links using the aliases for each article.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 117, |
|
"text": "Nothman et al. (2013)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Annotation Process", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Thus, we compile a list of alternative titles, including anchor texts and NE redirects (i.e., the linked phrases and redirected pages that refer to NE articles). It is necessary to filter the list, however, to remove noisy alternative titles, which usually appear due to (a) one-word meaningful named entities that are ambiguous when taken out of context and (b) multi-word alternative titles that contain apposition words (e.g., 'President', 'Vice Minister'). To this end we use the filtering algorithm proposed by Althobaiti et al. (2014) (see Algorithm 1). In this algorithm a capitalisation probability measure for Arabic is introduced. This involves finding the English gloss for each one-word alternative name and then computing its probability of being capitalised in the English Wikipedia. In order to find the English gloss for Arabic words, Wikipedia Arabic-to-English cross-lingual links are exploited. In case the English gloss for the Arabic word could not be found using inter-language links, an online translator is used. Before translating the Arabic word, a light stemmer is used to remove prefixes and conjunctions in order to acquire the translation of the word itself without its associated affixes. The capitalisation probability is computed as follows", |
|
"cite_spans": [ |
|
{ |
|
"start": 516, |
|
"end": 540, |
|
"text": "Althobaiti et al. (2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Annotation Process", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "P r[EN ] = f (EN ) isCapitalised f (EN ) isCapitalised +f (EN ) notCapitalised", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Annotation Process", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "where EN is the English gloss of the alternative name; f (EN ) isCapitalised is the number of times the English gloss EN is capitalised in the English Wikipedia; and f (EN ) notCapitalised is the number of times the English gloss EN is not capitalised in the English Wikipedia. By specifying a capitalisation threshold constraint, ambiguous one-word titles are prevented from being included in the list of alternative titles. The capitalisation threshold is set to 0.75 as suggested in (Althobaiti et al., 2014) . The multi-word alternative name is also omitted if any of its words belong to the list of apposition words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 486, |
|
"end": 511, |
|
"text": "(Althobaiti et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Annotation Process", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "The last stage is to incorporate sentences into the final corpus. We refer to this dataset as the Wikipedia-derived corpus (WDC). It contains 165,119 sentences of around 6 million tokens. Our model was then trained on the WDC corpus. In this paper we refer to this model as the DL classifier. The WDC dataset is available online 1 . We also plan to make the models available to the research community.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Building The Corpus", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "In what follows we use SSL to refer to our semisupervised classifier (see Section 3.1) and DL to refer to our distant learning classifier (see Section 3.2). Table 1 shows the results of both classifiers when tested on the ANERcorp test set (see Section 5 for details about the dataset). As is apparent in Table 1 , the SSL classifier tends to be more precise at the expense of recall. The dis-tant learning technique is lower in precision than the semi-supervised learning technique, but higher in recall. Generally, preference is given to the distant supervision classifier in terms of F-score.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 164, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 312, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The classifiers have different strengths. Our semisupervised algorithm iterates between pattern extraction and candidate NEs extraction and selection. Only the candidate NEs that the classifier is most confident of are added at each iteration, which results in the high precision. The SSL classifier performs better than distant learning in detecting NEs that appear in reliable/regular patterns. These patterns are usually learned easily during the training phase, either because they contain important NE indicators 2 or because they are supported by many reliable candidate NEs. For example, the SSL classifier has a high probability to successfully detect \"Obama\" and \"Louis van Gaal\" as person names in the following sentences:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "\"President Obama said on a visit to Britain ...\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 \"Louis van Gaal the manager of Manchester United said that ...\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The patterns extracted from such sentences in the newswire domain are learned easily during the training phase, as they contain good NE indicators like \"president\" and \"manager\". Our distant learning method relies on Wikipedia structure and links to automatically create NE annotated data. It also depends on Wikipedia features, such as inter-language links and redirects, to handle the rich morphology of Arabic without the need to perform excessive pre-processing steps (e.g., POStagging, deep morphological analysis), which has a slight negative effect on the precision of the DL classifier. The recall, however, of the DL classifier is high, covering as many NEs as possible in all possible domains. Therefore, the DL classifier is better than the SSL classifier in detecting NEs that appear in ambiguous contexts (they can be used for different NE types) and with no obvious clues (NE indicators). For example, detecting \"Ferrari\" and \"Nokia\" as organization names in the following sentences:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 \"Alonso got ahead of the Renault driver who prevented Ferrari from ... \"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 \"Nokia's speech came a day after the completion of the deal\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The strengths and weaknesses of the SSL and DL classifiers indicates that a classifier ensemble could perform better than its individual components.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Case for Classifier Combination", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Classifier combination methods are suitable when we need to make the best use of the predictions of multiple classifiers to enable higher accuracy classifications. Dietterich (2000a) reviews many methods for constructing ensembles and explains why classifier combination techniques can often gain better performance than any base classifier. Tulyakov et al. (2008) introduce various categories of classifier combinations according to different criteria including the type of the classifier's output and the level at which the combinations operate. Several empirical and theoretical studies have been conducted to compare ensemble methods such as boosting, randomisation, and bagging techniques (Maclin and Opitz, 1997; Dietterich, 2000b; Bauer and Kohavi, 1999) . Ghahramani and Kim (2003) explore a general framework for a Bayesian model combination that explicitly models the relationship between each classifier's output and the unknown true label. As such, multiclass Bayesian Classifier Combination (BCC) models are developed to combine predictions of multiple classifiers. Their proposed method for BCC in the machine learning context is derived directly from the method proposed in (Haitovsky et al., 2002) for modelling disagreement between human assessors, which in turn is an extension of (Dawid and Skene, 1979). Similar studies for modelling data annotation using a variety of methods are presented in (Carpenter, 2008; Cohn and Specia, 2013) . Simpson et al. (2013) present a variant of BCC in which they consider the use of a principled approximate Bayesian method, variational Bayes (VB), as an inference technique instead of using Gibbs Sampling.", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 364, |
|
"text": "Tulyakov et al. (2008)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 694, |
|
"end": 718, |
|
"text": "(Maclin and Opitz, 1997;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 719, |
|
"end": 737, |
|
"text": "Dietterich, 2000b;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 738, |
|
"end": 761, |
|
"text": "Bauer and Kohavi, 1999)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 789, |
|
"text": "Ghahramani and Kim (2003)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1189, |
|
"end": 1213, |
|
"text": "(Haitovsky et al., 2002)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1414, |
|
"end": 1431, |
|
"text": "(Carpenter, 2008;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1432, |
|
"end": 1454, |
|
"text": "Cohn and Specia, 2013)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Combination Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "They also alter the model so as to use point values for hyper-parameters, instead of placing exponential hyper-priors over them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Combination Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The following sections detail the combination methods used in this paper to combine the minimally supervised classifiers for Arabic NER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classifier Combination Methods", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Voting is the most common method in classifier combination because of its simplicity and acceptable results (Van Halteren et al., 2001; Van Erp et al., 2002) . Each classifier is allowed to vote for the class of its choice. It is common to take the majority vote, where each base classifier is given one vote and the class with the highest number of votes is chosen. In the case of a tie, when two or more classes receive the same number of votes, a random selection is taken from among the winning classes. It is useful, however, if base classifiers are distinguished by their quality. For this purpose, weights are used to encode the importance of each base classifier (Van Erp et al., 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 135, |
|
"text": "(Van Halteren et al., 2001;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 157, |
|
"text": "Van Erp et al., 2002)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 671, |
|
"end": 693, |
|
"text": "(Van Erp et al., 2002)", |
|
"ref_id": "BIBREF48" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Equal voting assumes that all classifiers have the same quality (Van Halteren et al., 2001 ). Weighted voting, on the other hand, gives more weight to classifiers of better quality. So, each classifier is weighted according to its overall precision, or its precision and recall on the class it suggests.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 90, |
|
"text": "(Van Halteren et al., 2001", |
|
"ref_id": "BIBREF49" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Formally, given K classifiers, a widely used combination scheme is through the linear interpolation of the classifiers' class probability distribution as follows", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "P (C |S K 1 (w )) = K k =1 P k (C |S k (w )) \u2022 \u03bb k (w )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "where P k (C|S k (w)) is an estimation of the probability that the correct classification is C given S k (w), the class for the word w as suggested by classifier k. \u03bb k (w) is the weight that specifies the importance given to each classifier k in the combination. P k (C|S k (w)) is computed as follows", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "P k (C|S k (w)) = 1, if S k (w) = C 0, otherwise", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "For equal voting, each classifier should have the same weight (e.g., \u03bb k (w) = 1/K). In case of weighted voting, the weight associated with each classifier can be computed from its precision and/or recall as illustrated above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.2.1" |
|
}, |
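The voting scheme above can be sketched in a few lines (an illustrative sketch, not the authors' implementation; the labels and weight values are hypothetical):

```python
import random
from collections import defaultdict

def weighted_vote(suggestions, weights, rng=random):
    """Combine the label suggestions S_k(w) of K classifiers.

    Each classifier contributes P_k(C|S_k(w)) * lambda_k(w), where the
    indicator P_k is 1 only for the class it suggested; ties are broken
    randomly, as described above.
    """
    scores = defaultdict(float)
    for label, weight in zip(suggestions, weights):
        scores[label] += weight
    best = max(scores.values())
    return rng.choice([c for c, s in scores.items() if s == best])

# Equal voting: lambda_k(w) = 1/K for every classifier.
print(weighted_vote(["PER", "PER", "O"], [1/3, 1/3, 1/3]))  # -> "PER"
# Weighted voting: lambda_k(w) set to each classifier's (hypothetical)
# precision, so one strong vote can outweigh two weak ones.
print(weighted_vote(["PER", "O", "O"], [0.95, 0.4, 0.4]))   # -> "PER"
```

With equal weights this reduces to simple majority voting; with precision-based weights, preference shifts toward the more reliable classifier.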
|
{ |
|
"text": "Using a Bayesian approach to classifier combination (BCC) provides a mathematical combination framework in which many classifiers, with various distributions and training features, can be combined to provide more accurate information. This framework explicitly models the relationship between each classifier's output and the unknown true label (Levenberg et al., 2014) . This section describes the Bayesian approach to the classifier combination we adopted in this paper which, like the work of Levenberg et al. (2014) , is based on Simpson et al. (2013) simplification of Ghahramani and Kim (2003) model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 369, |
|
"text": "(Levenberg et al., 2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 519, |
|
"text": "Levenberg et al. (2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 599, |
|
"text": "Ghahramani and Kim (2003)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "For ith data point, true label t i is assumed to be generated by a multinomial distribution with the parameter \u03b4: p(t i = j|\u03b4) = \u03b4 j , which models the class proportions. True labels may take values t i = 1...J, where J is the number of true classes. It is also assumed that there are K base classifiers. The output of the classifiers are assumed to be discrete with values l = 1...L, where L is the number of possible outputs. The output c (k) i of the classifier k is assumed to be generated by a multinomial distribution with parameters \u03c0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "(k) j : p(c (k) i = l|t i = j, \u03c0 (k) j ) = \u03c0 (k) j,l", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "where \u03c0 (k) is the confusion matrix for the classifier k, which quantifies the decision-making abilities of each base classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "As in Simpson et al. (2013) study, we assume that parameters \u03c0 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "0,j = [\u03b1 (k) 0,j1 , \u03b1 (k) 0,j2 , ..., \u03b1 (k) 0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": ",jL ] and \u03bd = [\u03bd 0,1 , \u03bd 0,2 , ..., \u03bd 0,J ] respectively. Given the observed class labels and based on the above prior, the joint distribution over all variables for the IBCC model is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "p(\u03b4, \u03a0, t, c|A 0 , \u03bd) = I i=1 {\u03b4 t i K k=1 \u03c0 (k) t i ,c (k) i }p(\u03b4|\u03bd)p(\u03a0|A),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "where \u03a0 = {\u03c0 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "p(t i = j) = \u03c1 ij J y=1 \u03c1 iy ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "\u03c1 ij = \u03b4 j K k=1 \u03c0 j,c (k) i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
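The posterior above is straightforward to compute for a single data point (a minimal sketch with hypothetical values for δ and the confusion matrices):

```python
import numpy as np

def ibcc_posterior(delta, Pi, outputs):
    """p(t_i = j) = rho_ij / sum_y rho_iy, where
    rho_ij = delta_j * prod_k pi^(k)[j, c_i^(k)].

    delta: class proportions, shape (J,).
    Pi: K confusion matrices, each of shape (J, L).
    outputs: the K observed classifier outputs c_i^(k).
    """
    rho = np.array(delta, dtype=float)
    for pi_k, c_k in zip(Pi, outputs):
        rho = rho * np.asarray(pi_k)[:, c_k]
    return rho / rho.sum()

# Two classes and two classifiers that are each right 80% of the time
# (hypothetical confusion matrices); both output class 0.
pi = [[0.8, 0.2], [0.2, 0.8]]
post = ibcc_posterior([0.5, 0.5], [pi, pi], [0, 0])
print(post.round(3))  # posterior mass concentrates on class 0 (about 0.941)
```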
|
{ |
|
"text": "In our implementation we used point values for A 0 as in (Simpson et al., 2013) . The values of hyperparameters A 0 offered a natural method to include any prior knowledge. Thus, they can be regarded as pseudo-counts of prior observations and they can be chosen to represent any prior level of uncertainty in the confusion matrices, \u03a0. Our inference technique for the unknown variables (\u03b4, \u03c0, and t) was Gibbs sampling as in (Ghahramani and Kim, 2003; Simpson et al., 2013) . Figure 2 shows the directed graphical model for IBCC. The c ", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 79, |
|
"text": "(Simpson et al., 2013)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 451, |
|
"text": "(Ghahramani and Kim, 2003;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 473, |
|
"text": "Simpson et al., 2013)", |
|
"ref_id": "BIBREF44" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 484, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Independent Bayesian Classifier Combination (IBCC)", |
|
"sec_num": "4.2.2" |
|
}, |
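A compact Gibbs sampler for this model might look as follows (an illustrative sketch, not the authors' implementation; it uses point-valued hyper-parameters and can ground some labels with known values, as the paper does with validation data):

```python
import numpy as np

def ibcc_gibbs(C, J, L, known=None, n_iter=500, alpha0=1.0, nu0=1.0, seed=0):
    """Gibbs sampling for IBCC.

    C: (I, K) matrix of classifier outputs in {0..L-1}.
    known: optional length-I array of grounded true labels (-1 = unknown).
    Returns the posterior mean of p(t_i = j), shape (I, J).
    """
    rng = np.random.default_rng(seed)
    I, K = C.shape
    t = rng.integers(0, J, size=I)                  # initialise true labels
    mask = None if known is None else np.asarray(known) >= 0
    if mask is not None:
        t[mask] = np.asarray(known)[mask]
    t_counts = np.zeros((I, J))
    for _ in range(n_iter):
        # Sample delta | t ~ Dirichlet(nu0 + class counts).
        delta = rng.dirichlet(nu0 + np.bincount(t, minlength=J))
        # Sample each confusion-matrix row pi^(k)_j | t, c.
        Pi = np.empty((K, J, L))
        for k in range(K):
            for j in range(J):
                Pi[k, j] = rng.dirichlet(
                    alpha0 + np.bincount(C[t == j, k], minlength=L))
        # Sample t_i | delta, Pi, c_i proportional to rho_ij.
        rho = np.tile(delta, (I, 1))
        for k in range(K):
            rho *= Pi[k][:, C[:, k]].T              # pi^(k)[j, c_i^(k)]
        rho /= rho.sum(axis=1, keepdims=True)
        t = (rho.cumsum(axis=1) > rng.random((I, 1))).argmax(axis=1)
        if mask is not None:
            t[mask] = np.asarray(known)[mask]       # keep grounded labels
        t_counts[np.arange(I), t] += 1
    return t_counts / n_iter
```

Without grounded labels the latent classes are only identified up to a permutation; grounding even a small number of known t_i (as the paper does with the validation set) fixes the labelling.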
|
{ |
|
"text": "In this section, we describe the two datasets we used:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Validation set 3 (NEWS + BBCNEWS): 90% of this dataset is used to estimate the weight of each base classifier and 10% is used to perform error analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Test set (ANERcorp test set): This dataset is used to evaluate different classifier combination methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The validation set is composed of two datasets: NEWS and BBCNEWS. The NEWS set contains around 15k tokens collected by Darwish (2013) 3 Also known as development set. from the RSS feed of the Arabic (Egypt) version of news.google.com from October 2012. We created the BBCNEWS corpus by collecting a representative sample of news from BBC in May 2014. It contains around 3k tokens and covers different types of news such as politics, economics, and entertainment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 133, |
|
"text": "Darwish (2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The ANERcorp test set makes up 20% of the whole ANERcorp set. The ANERcorp set is a newswire corpus built and manually tagged especially for the Arabic NER task by Benajiba et al. (2007a) and contains around 150k tokens. This test set is commonly used in the Arabic NER literature to evaluate supervised classifiers Abdul-Hamid and Darwish, 2010; Abdallah et al., 2012; Oudah and Shaalan, 2012) and minimallysupervised classifiers (Alotaibi and Lee, 2013; Althobaiti et al., 2013; Althobaiti et al., 2014) , which allows us to review the performance of the combined classifiers and compare it to the performance of each base classifier.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 187, |
|
"text": "Benajiba et al. (2007a)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 316, |
|
"end": 346, |
|
"text": "Abdul-Hamid and Darwish, 2010;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 369, |
|
"text": "Abdallah et al., 2012;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 394, |
|
"text": "Oudah and Shaalan, 2012)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 455, |
|
"text": "(Alotaibi and Lee, 2013;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 480, |
|
"text": "Althobaiti et al., 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 505, |
|
"text": "Althobaiti et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "6 Experimental Analysis", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the IBCC model, the validation data was used as known t i to ground the estimates of model parameters. The hyper-parameters were set as \u03b1 (k) j = 1 and \u03bd j = 1 (Kim and Ghahramani, 2012; Levenberg et al., 2014) . The initial values for random variables were set as follows: (a) the class proportion \u03b4 was initialised to the result of counting t i and (b) the confusion matrix \u03c0 was initialised to the result of counting t i and the output of each classifier c (k) . Gibbs sampling was run well past stability (i.e., 1000 iterations). Stability was actually reached in approximately 100 iterations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 189, |
|
"text": "(Kim and Ghahramani, 2012;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 213, |
|
"text": "Levenberg et al., 2014)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 466, |
|
"text": "(k)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "All parameters required in voting methods were specified using the validation set. We examined two different voting methods: equal voting and weighted voting. In the case of equal voting, each classifier was given an equal weight, (1/K) where K was the number of classifiers to be combined. In weighted voting, total precision was used in order to give preference to classifiers with good quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "A proposed combined classifier simply and straightforwardly makes decisions based on the agreed decisions of the base classifiers, namely the SSL classifier and DL classifier. That is, if the base classifiers agree on the NE type of a certain word, then it is annotated by an agreed NE type. In the case of disagreement, the word is considered not named entity. Table 2 shows the results of this combined classifier, which is considered a baseline in this paper. The results of the combined classifier shows very high precision, which indicates that both base classifiers are mostly accurate. The base classifiers also commit different errors that are evident in the low recall. The accuracy and diversity of the single classifiers are the main conditions for a combined classifier to have better accuracy than any of its components (Dietterich, 2000a) . Therefore, in the next section we take into consideration various classifier combination methods in order to aggregate the best decisions of SSL and DL classifiers, and to improve overall performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 833, |
|
"end": 852, |
|
"text": "(Dietterich, 2000a)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 362, |
|
"end": 369, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Simple Baseline Combined Classifier", |
|
"sec_num": "6.2.1" |
|
}, |
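The baseline's decision rule is a one-liner (illustrative; "O" stands for the not-a-named-entity label):

```python
def agreement_baseline(ssl_label, dl_label):
    """Keep a prediction only when the SSL and DL classifiers agree;
    on any disagreement, label the word as not a named entity ("O")."""
    return ssl_label if ssl_label == dl_label else "O"

print(agreement_baseline("PER", "PER"))  # -> "PER"
print(agreement_baseline("PER", "ORG"))  # -> "O"
print(agreement_baseline("LOC", "O"))    # -> "O"
```

Requiring agreement keeps only predictions supported by both classifiers, which matches the very high precision and low recall reported for this baseline.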
|
{ |
|
"text": "The SSL and DL classifiers are trained with two different algorithms using different training data. The SSL classifier is trained on ANERcorp training data, while the DL classifier is trained on a corpus automatically derived from Arabic Wikipedia, as explained in Section 3.1 and 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combined Classifiers: Classifier Combination Methods", |
|
"sec_num": "6.2.2" |
|
}, |
|
{ |
|
"text": "We combine the SSL and DL classifiers using the three classifier combination methods, namely equal voting, weighted voting, and IBCC. Table 3 shows the results of these classifier combination methods. The IBCC scheme outperforms all voting techniques and base classifiers in terms of F-score. Regard-ing precision, voting techniques show the highest scores. However, the high precision is accompanied by a reduction in recall for both voting methods. The IBCC combination method also has relatively high precision compared to the precision of base classifiers. Much better recall is registered for IBCC, but it is still low. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 141, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combined Classifiers: Classifier Combination Methods", |
|
"sec_num": "6.2.2" |
|
}, |
|
{ |
|
"text": "An error analysis of the validation set shows that 10.01% of the NEs were correctly detected by the semi-supervised classifier, but considered not NEs by the distant learning classifier. At the same time, the distant learning classifier managed to correctly detect 25.44% of the NEs that were considered not NEs by the semi-supervised classifier. We also noticed that false positive rates, i.e. the possibility of considering a word NE when it is actually not NE, are very low (0.66% and 2.45% for the semisupervised and distant learning classifiers respectively). These low false positive rates and the high percentage of the NEs that are detected and missed by the two classifiers in a mutually exclusive way can be exploited to obtain better results, more specifically, to increase recall without negatively affecting precision. Therefore, we restricted the combi-nation process to only include situations where the base classifiers agree or disagree on the NE type of a certain word. The combination process is ignored in cases where the base classifiers only disagree on detecting NEs. For example, if the base classifiers disagree on whether a certain word is an NE or not, the word is automatically considered an NE. Figure 3 provides some examples that illustrate the restrictions we applied to the combination process. The annotations in the examples are based on the CoNLL 2003 annotation guidelines (Chinchor et al., 1999) . Restricting the combination process in this way increases recall without negatively affecting the precision, as seen in Table 4 . The increase in recall makes the overall F-score for all combination methods higher than those of base classifiers. This way of using the IBCC model results in a performance level that is superior to all of the individual classifiers and other voting-based combined classifiers. 
Therefore, the IBCC model leads to a 12% increase in the performance of the best base classifier, while voting methods increase the performance by around 7% -10%. These results highlight the role of restricting the combination, which affects the performance of combination methods and gives more control over how and when the predictions of base classifiers should be combined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1411, |
|
"end": 1434, |
|
"text": "(Chinchor et al., 1999)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1224, |
|
"end": 1233, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1557, |
|
"end": 1564, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combined Classifiers: Restriction of the Combination Process", |
|
"sec_num": "6.2.3" |
|
}, |
|
{ |
|
"text": "Statistical Significance of Results", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Combined Classifiers:", |
|
"sec_num": "6.2.4" |
|
}, |
|
{ |
|
"text": "We tested whether the difference in performance between the three classifier combination methodsequal voting, weighted voting, and IBCC -is significant using two different statistical tests over the results of these combination methods on an ANERcorp test set. The alpha level of 0.01 was used as a significance criterion for all statistical tests. First, We ran a non-parametric sign test. The small pvalue (p 0.01) for each pair of the three combina- tion methods, as seen in Table 5 , suggests that these methods are significantly different. The only comparison where no significance was found is equal voting vs. weighted voting, when we used them to combine the data without any restrictions (p = 0.3394). Second, we used a bootstrap sampling (Efron and Tibshirani, 1994) , which is becoming the de facto standards in NLP (S\u00f8gaard et al., 2014) . Table 6 compares each pair of the three combination methods using a bootstrap sampling over documents with 10,000 replicates. It shows the p-values and confidence intervals of the difference between means. The differences in performance between almost all the three methods of combination are highly significant. The one exception is the comparison between equal voting and weighted voting, when they are used as a combination method without restriction, which shows a non-significant difference (pvalue = 0.508, CI = -0.365 to 0.349).", |
|
"cite_spans": [ |
|
{ |
|
"start": 748, |
|
"end": 776, |
|
"text": "(Efron and Tibshirani, 1994)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 849, |
|
"text": "(S\u00f8gaard et al., 2014)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 478, |
|
"end": 485, |
|
"text": "Table 5", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 852, |
|
"end": 859, |
|
"text": "Table 6", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Combined Classifiers:", |
|
"sec_num": "6.2.4" |
|
}, |
|
{ |
|
"text": "Generally, the IBCC scheme performs significantly better than voting-based combination methods whether we impose restrictions on the combination process or not, as can be seen in Table 3 and Table 4 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 199, |
|
"text": "Table 3 and Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Combined Classifiers:", |
|
"sec_num": "6.2.4" |
|
}, |
|
{ |
|
"text": "Major advances over the past decade have occurred in Arabic NER with regard to utilising various supervised systems, exploring different features, and producing manually annotated corpora that mostly cover the standard set of NE types. More effort and time for additional manual annotations are required when expanding the set of NE types, or exporting NE classifiers to new domains. This has motivated research in minimally supervised methods, such as semi-supervised learning and distant learning, but the performance of such methods is lower than that achieved by supervised methods. However, semi-supervised methods and distant learning tend to have different strengths, which suggests that better results may be obtained by combining these methods. Therefore, we trained two classifiers based on distant learning and semi-supervision techniques, and then combined them using a variety of classifier combination schemes. Our main contributions in-clude the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 We presented a novel approach to Arabic NER using a combination of semi-supervised learning and distant supervision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 We used the Independent Bayesian Classifier Combination (IBCC) scheme for NER, and compared it to traditional voting methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "\u2022 We introduced the classifier combination restriction as a means of controlling how and when the predictions of base classifiers should be combined.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "This research demonstrated that combining the two minimal supervision approaches using various classifier combination methods leads to better results for NER. The use of IBCC improves the performance by 8 percentage points over the best base classifier, whereas the improvement in the performance when using voting methods is only 4 to 6 percentage points. Although all combination methods result in an accurate classification, the IBCC model achieves better recall than other traditional combination methods. Our experiments also showed how restricting the combination process can increase the recall ability of all the combination methods without negatively affecting the precision. The approach we proposed in this paper can be easily adapted to new NE types and different domains without the need for human intervention. In addition, there are many ways to restrict the combination process according to the applications' preferences, either producing high accuracy or recall. For example, we may obtain a highly accurate combined classifier if we do not combine the predictions of all base classifiers for a certain word and automatically consider it not NE when one of the base classifier considers this word not NE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Transactions of the Association for Computational Linguistics, vol. 3, pp. 243-255, 2015. Action Editor: Ryan McDonald. Submission batch: 1/2015; Revision batch 4/2015; Published 5/2015. c 2015 Association for Computational Linguistics. Distributed under a CC-BY-NC-SA 4.0 license.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://sites.google.com/site/mahajalthobaiti/resources", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Also known as trigger words which help in identifying NEs within text", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Integrating rule-based system with classification for Arabic named entity recognition",
|
"authors": [ |
|
{ |
|
"first": "Sherief", |
|
"middle": [], |
|
"last": "Abdallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khaled", |
|
"middle": [], |
|
"last": "Shaalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Shoaib", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--322", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sherief Abdallah, Khaled Shaalan, and Muhammad Shoaib. 2012. Integrating rule-based system with classification for Arabic named entity recognition. In Computational Linguistics and Intelligent Text Processing, pages 311-322. Springer.",
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Integrated machine learning techniques for Arabic named entity recognition",
|
"authors": [ |
|
{ |
|
"first": "Samir",

"middle": [],

"last": "AbdelRahman",

"suffix": ""

},

{

"first": "Mohamed",

"middle": [],

"last": "Elarnaoty",

"suffix": ""

},

{

"first": "Marwa",

"middle": [],

"last": "Magdy",

"suffix": ""

},

{

"first": "Aly",

"middle": [],

"last": "Fahmy",
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IJCSI", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "27--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samir AbdelRahman, Mohamed Elarnaoty, Marwa Magdy, and Aly Fahmy. 2010. Integrated machine learning techniques for Arabic named entity recognition. IJCSI, 7:27-36.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Simplified feature set for Arabic named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdul-Hamid",
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Named Entities Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "110--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Abdul-Hamid and Kareem Darwish. 2010. Simplified feature set for Arabic named entity recognition. In Proceedings of the 2010 Named Entities Workshop, pages 110-115. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Semisupervised learning for computational linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Abney. 2010. Semisupervised learning for computational linguistics. CRC Press.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Mapping Arabic Wikipedia into the named entities taxonomy", |
|
"authors": [ |
|
{ |
|
"first": "Fahd", |
|
"middle": [], |
|
"last": "Alotaibi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COLING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fahd Alotaibi and Mark Lee. 2012. Mapping Arabic Wikipedia into the named entities taxonomy. In Proceedings of COLING 2012: Posters, pages 43-52, Mumbai, India, December. The COLING 2012 Organizing Committee.",
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatically Developing a Fine-grained Arabic Named Entity Corpus and Gazetteer by utilizing Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Fahd", |
|
"middle": [], |
|
"last": "Alotaibi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "IJCNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fahd Alotaibi and Mark Lee. 2013. Automatically Developing a Fine-grained Arabic Named Entity Corpus and Gazetteer by utilizing Wikipedia. In IJCNLP.",
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A semi-supervised learning approach to Arabic named entity recognition",
|
"authors": [ |
|
{ |
|
"first": "Maha", |
|
"middle": [], |
|
"last": "Althobaiti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. 2013. A semi-supervised learning approach to Arabic named entity recognition. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP 2013, pages 32-40, Hissar, Bulgaria, September. INCOMA Ltd. Shoumen, Bulgaria.",
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic Creation of Arabic Named Entity Annotated Corpus Using Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Maha", |
|
"middle": [], |
|
"last": "Althobaiti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Udo", |
|
"middle": [], |
|
"last": "Kruschwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "106--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. 2014. Automatic Creation of Arabic Named Entity Annotated Corpus Using Wikipedia. In Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 106-115, Gothenburg.",
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Strudel: A Corpus-Based Semantic Model Based on Properties and Types", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Barbu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Cognitive Science", |
|
"volume": "34", |
|
"issue": "2", |
|
"pages": "222--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Baroni, Brian Murphy, Eduard Barbu, and Massimo Poesio. 2010. Strudel: A Corpus-Based Semantic Model Based on Properties and Types. Cognitive Science, 34(2):222-254.",
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An empirical comparison of voting classification algorithms: Bagging, boosting, and variants", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Kohavi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Machine learning", |
|
"volume": "36", |
|
"issue": "1-2", |
|
"pages": "105--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Bauer and Ron Kohavi. 1999. An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine learning, 36(1-2):105-139.",
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Arabic named entity recognition using conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proc. of Workshop on HLT & NLP within the Arabic World, LREC", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "143--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba and Paolo Rosso. 2008. Arabic named entity recognition using conditional random fields. In Proc. of Workshop on HLT & NLP within the Arabic World, LREC, volume 8, pages 143-153.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Anersys: An Arabic Named Entity Recognition System based on Maximum Entropy", |
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9 Miguel", |
|
"middle": [], |
|
"last": "Bened\u00edruiz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba, Paolo Rosso, and Jos\u00e9 Miguel Bened\u00edruiz. 2007a. Anersys: An Arabic Named Entity Recognition System based on Maximum Entropy. In Computational Linguistics and Intelligent Text Processing, pages 143-153. Springer.",
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Anersys: An Arabic named entity recognition system based on maximum entropy",
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9 Miguel", |
|
"middle": [], |
|
"last": "Bened\u00edruiz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics and Intelligent Text Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba, Paolo Rosso, and Jos\u00e9 Miguel Bened\u00edruiz. 2007b. Anersys: An Arabic named entity recognition system based on maximum entropy. In Computational Linguistics and Intelligent Text Processing, pages 143-153. Springer.",
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Arabic named entity recognition: An SVM-based approach",
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of 2008 Arab International Conference on Information Technology (ACIT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba, Mona Diab, Paolo Rosso, et al. 2008. Arabic named entity recognition: An SVM-based approach. In Proceedings of 2008 Arab International Conference on Information Technology (ACIT), pages 16-18.",
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Multilevel Bayesian models of categorical data annotation",
|
"authors": [ |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bob Carpenter. 2008. Multilevel Bayesian models of categorical data annotation. Unpublished manuscript. Available online at http://lingpipe-blog.com/lingpipe-white-papers/, last accessed 15-March-2015.",
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Named Entity Recognition Task Definition. MITRE and SAIC", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Chinchor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erica", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patty", |
|
"middle": [], |
|
"last": "Robinson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Chinchor, Erica Brown, Lisa Ferro, and Patty Robinson. 1999. 1999 Named Entity Recognition Task Definition. MITRE and SAIC.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Modelling annotator bias with multi-task Gaussian processes: An application to machine translation quality estimation",
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn and Lucia Specia. 2013. Modelling annotator bias with multi-task Gaussian processes: An application to machine translation quality estimation. In ACL, pages 32-42.",
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Named Entity Recognition using Cross-lingual Resources: Arabic as an Example", |
|
"authors": [ |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1558--1567", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kareem Darwish. 2013. Named Entity Recognition using Cross-lingual Resources: Arabic as an Example. In ACL, pages 1558-1567.",
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Maximum likelihood estimation of observer error-rates using the EM algorithm",
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": ["Philip"],

"last": "Dawid",
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Allan",

"middle": ["M"],
|
"last": "Skene", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "Applied statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied statistics, pages 20-28.",
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Ensemble methods in machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Dietterich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Multiple Classifier Systems", |
|
"volume": "1857", |
|
"issue": "", |
|
"pages": "1--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas G. Dietterich. 2000a. Ensemble methods in machine learning. In Multiple Classifier Systems, volume 1857 of Lecture Notes in Computer Science, pages 1-15. Springer Berlin Heidelberg.",
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization", |
|
"authors": [ |
|
{ |
|
"first": "Thomas",

"middle": ["G"],

"last": "Dietterich",
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Machine learning", |
|
"volume": "40", |
|
"issue": "2", |
|
"pages": "139--157", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas G Dietterich. 2000b. An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization. Machine learning, 40(2):139-157.",
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "An introduction to the bootstrap", |
|
"authors": [ |
|
{ |
|
"first": "Bradley", |
|
"middle": [], |
|
"last": "Efron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert",

"middle": ["J"],

"last": "Tibshirani",
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press.",
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A rule based persons names Arabic extraction system",
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Elsebai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Farid", |
|
"middle": [], |
|
"last": "Meziane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fatma Zohra", |
|
"middle": [], |
|
"last": "Belkredim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Communications of the IBIMA", |
|
"volume": "11", |
|
"issue": "6", |
|
"pages": "53--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Elsebai, Farid Meziane, and Fatma Zohra Belkredim. 2009. A rule based persons names Arabic extraction system. Communications of the IBIMA, 11(6):53-59.",
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Named entity recognition through classifier combination", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abe", |
|
"middle": [], |
|
"last": "Ittycheriah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongyan", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tong", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "168--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4, pages 168-171. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Bayesian classifier combination", |
|
"authors": [ |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyun-Chul", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zoubin Ghahramani and Hyun-Chul Kim. 2003. Bayesian classifier combination. Technical report, University College London.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Modelling disagreements among and within raters assessments from the Bayesian point of view",
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Haitovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A",
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y",
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Draft. Presented at the Valencia meeting", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y Haitovsky, A Smith, and Y Liu. 2002. Modelling disagreements among and within raters assessments from the Bayesian point of view. In Draft. Presented at the Valencia meeting 2002.",
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Bayesian classifier combination", |
|
"authors": [], |
|
"year": null, |
|
"venue": "International conference on artificial intelligence and statistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "619--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bayesian classifier combination. In International conference on artificial intelligence and statistics, pages 619-627.",
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Predicting economic indicators from web text using sentiment composition", |
|
"authors": [ |
|
{ |
|
"first": "Abby", |
|
"middle": [], |
|
"last": "Levenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Pulman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karo", |
|
"middle": [], |
|
"last": "Moilanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edwin", |
|
"middle": [], |
|
"last": "Simpson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Journal of Computer and Communication Engineering", |
|
"volume": "3", |
|
"issue": "2", |
|
"pages": "109--115", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abby Levenberg, Stephen Pulman, Karo Moilanen, Edwin Simpson, and Stephen Roberts. 2014. Predicting economic indicators from web text using sentiment composition. International Journal of Computer and Communication Engineering, 3(2):109-115.",
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "An empirical evaluation of bagging and boosting", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Maclin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Opitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "546--551", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Maclin and David Opitz. 1997. An empirical evaluation of bagging and boosting. AAAI/IAAI, 1997:546-551.",
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Named entity recognition for Arabic using syntactic grammars",
|
"authors": [ |
|
{ |
|
"first": "Slim", |
|
"middle": [], |
|
"last": "Mesfar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Natural Language Processing and Information Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "305--316", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slim Mesfar. 2007. Named entity recognition for Arabic using syntactic grammars. In Natural Language Processing and Information Systems, pages 305-316. Springer.",
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Learning to Tag and Tagging to Learn: A Case Study on Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Mika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hugo", |
|
"middle": [], |
|
"last": "Zaragoza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordi", |
|
"middle": [], |
|
"last": "Atserias", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "23", |
|
"issue": "", |
|
"pages": "26--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Mika, Massimiliano Ciaramita, Hugo Zaragoza, and Jordi Atserias. 2008. Learning to Tag and Tagging to Learn: A Case Study on Wikipedia. volume 23, pages 26-33.",
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Distant supervision for relation extraction without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Mintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1003--1011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, ACL '09, pages 1003-1011, Stroudsburg, PA, USA. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A survey of named entity recognition and classification. Lingvisticae Investigationes", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Nadeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "3--26", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisti- cae Investigationes, 30(1):3-26.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Semi-supervised named entity recognition: learning to recognize 100 entity types with little supervision", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Nadeau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Nadeau. 2007. Semi-supervised named entity recognition: learning to recognize 100 entity types with little supervision.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "End-to-end relation extraction using distant supervision from external semantic repositories", |
|
"authors": [ |
|
{ |
|
"first": "Truc-Vien",

"middle": ["T"],

"last": "Nguyen",

"suffix": ""

},

{

"first": "Alessandro",

"middle": [],

"last": "Moschitti",
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "277--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Truc-Vien T. Nguyen and Alessandro Moschitti. 2011. End-to-end relation extraction using distant supervision from external semantic repositories. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 277-282, Stroudsburg, PA, USA. Association for Computational Linguistics.",
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Learning multilingual Named Entity Recognition from Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicky", |
|
"middle": [], |
|
"last": "Ringland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James R", |
|
"middle": [], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Artificial Intelligence", |
|
"volume": "194", |
|
"issue": "", |
|
"pages": "151--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual Named Entity Recognition from Wikipedia. Artificial Intelligence, 194:151-175.",
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A pipeline Arabic named entity recognition using a hybrid approach",
|
"authors": [ |
|
{ |
|
"first": "Mai", |
|
"middle": [], |
|
"last": "Oudah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Khaled", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Shaalan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2159--2176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mai Oudah and Khaled F Shaalan. 2012. A pipeline Arabic named entity recognition using a hybrid approach. In COLING, pages 2159-2176.",
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Organizing and searching the world wide web of facts-step one: the one-million fact extraction challenge", |
|
"authors": [ |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pasca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Bigham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "AAAI", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "1400--1405", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marius Pasca, Dekang Lin, Jeffrey Bigham, Andrei Lifchits, and Alpa Jain. 2006. Organizing and searching the world wide web of facts-step one: the one-million fact extraction challenge. In AAAI, volume 6, pages 1400-1405.",
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Mining Wiki Resources for Multilingual Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Alexander",

"middle": ["E"],

"last": "Richman",

"suffix": ""

},

{

"first": "Patrick",

"middle": [],

"last": "Schone",
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander E Richman and Patrick Schone. 2008. Mining Wiki Resources for Multilingual Named Entity Recognition. In ACL, pages 1-9.",
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Learning dictionaries for information extraction by multi-level bootstrapping", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rosie", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "AAAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "474--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionar- ies for information extraction by multi-level bootstrap- ping. In AAAI, pages 474-479.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Combining multiple classifiers using vote based classifier ensemble technique for named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Sriparna", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asif", |
|
"middle": [], |
|
"last": "Ekbal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Data & Knowledge Engineering", |
|
"volume": "85", |
|
"issue": "", |
|
"pages": "15--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sriparna Saha and Asif Ekbal. 2013. Combining mul- tiple classifiers using vote based classifier ensemble technique for named entity recognition. Data & Knowledge Engineering, 85:15-39.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "NYU: Description of the Japanese NE system used for MET-2", |
|
"authors": [ |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sekine", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the Seventh Message Understanding Conference", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Satoshi Sekine et al. 1998. NYU: Description of the Japanese NE system used for MET-2. In Proceed- ings of the Seventh Message Understanding Confer- ence (MUC-7), volume 17.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Nera: Named entity recognition for arabic", |
|
"authors": [ |
|
{ |
|
"first": "Khaled", |
|
"middle": [], |
|
"last": "Shaalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hafsa", |
|
"middle": [], |
|
"last": "Raza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of the American Society for Information Science and Technology", |
|
"volume": "60", |
|
"issue": "8", |
|
"pages": "1652--1663", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Khaled Shaalan and Hafsa Raza. 2009. Nera: Named entity recognition for arabic. Journal of the Ameri- can Society for Information Science and Technology, 60(8):1652-1663.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Dynamic bayesian combination of multiple imperfect classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Edwin", |
|
"middle": [], |
|
"last": "Simpson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Psorakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arfon", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Decision Making and Imperfection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edwin Simpson, Stephen Roberts, Ioannis Psorakis, and Arfon Smith. 2013. Dynamic bayesian combination of multiple imperfect classifiers. In Decision Making and Imperfection, pages 1-35. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Whats in a p-value in nlp?", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Johannsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hector", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the eighteenth conference on computational natural language learning (CONLL14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martinez. 2014. Whats in a p-value in nlp? In Proceedings of the eighteenth conference on computational natural language learning (CONLL14), pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Improved Text Categorisation for Wikipedia Named Entities", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Tardif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tara", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Australasian Language Technology Association Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Tardif, James R. Curran, and Tara Murphy. 2009. Improved Text Categorisation for Wikipedia Named Entities. In Proceedings of the Australasian Language Technology Association Workshop, pages 104-108.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Review of classifier combination methods", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Tulyakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Jaeger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Venu", |
|
"middle": [], |
|
"last": "Govindaraju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Doermann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Machine Learning in Document Analysis and Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "361--386", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Tulyakov, Stefan Jaeger, Venu Govindaraju, and David Doermann. 2008. Review of classifier combi- nation methods. In Machine Learning in Document Analysis and Recognition, pages 361-386. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "An overview and comparison of voting methods for pattern recognition", |
|
"authors": [ |
|
{ |
|
"first": "Merijn", |
|
"middle": [], |
|
"last": "Van Erp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Vuurpijl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lambert", |
|
"middle": [], |
|
"last": "Schomaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Eighth International Workshop on Frontiers in Handwriting Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "195--200", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Merijn Van Erp, Louis Vuurpijl, and Lambert Schomaker. 2002. An overview and comparison of voting methods for pattern recognition. In Eighth International Work- shop on Frontiers in Handwriting Recognition, pages 195-200. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Improving accuracy in word class tagging through the combination of machine learning systems", |
|
"authors": [ |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Van Halteren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Zavrel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Computational linguistics", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "199--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hans Van Halteren, Walter Daelemans, and Jakub Za- vrel. 2001. Improving accuracy in word class tagging through the combination of machine learning systems. Computational linguistics, 27(2):199-229.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Critical Survey of the Freely Available Arabic Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Wajdi", |
|
"middle": [], |
|
"last": "Zaghouani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Workshop on Free/Open-Source Arabic Corpora and Corpora Processing Tools", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "615--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wajdi Zaghouani. 2014. Critical Survey of the Freely Available Arabic Corpora. In Workshop on Free/Open-Source Arabic Corpora and Corpora Pro- cessing Tools, pages 1-8, Reykjavik, Iceland. Tong Zhang, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of winnow. The Journal of Machine Learning Research, 2:615- 637.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "The Three Components of SSL System.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "|j = 1...J, k = 1...K}. The conditional probability of a test data point t i being assigned class j is given by", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"text": "circular nodes are variables with distributions and square nodes are variables instantiated with point values. The directed graph of IBCC.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"text": "Examples of restricting the combination process.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>3</td><td colspan=\"2\">if (T.size() >= 2) then</td><td/></tr><tr><td>4 5</td><td/><td colspan=\"2\">/ * All tokens of T do not belong to apposition list * / if (! containAppositiveWord(T)) then add l i to the set RL</td></tr><tr><td>6 7 8</td><td>else</td><td>lightstem \u2190 findLightStem(l i ) english gloss \u2190 translate(lightstem) / * Compute Capitalisation Probability for English gloss</td><td>* /</td></tr></table>", |
|
"type_str": "table", |
|
"text": "Filtering Alternative NamesInput: A set L = {l 1 , l 2 , . . . , ln} of all alternative names of Wikipedia articles Output: A set RL = {rl 1 , rl 2 , . . . , rln} of reliable alternative names 1 for i \u2190 1 to n do 2 T \u2190 split l i into tokens", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The results of SSL and DL classifiers on the ANERcorp test set.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The results of the baseline", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The performances of various combination methods.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The performances of various combination methods when restricting the combination process.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF11": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The sign test results (exact p values) for the pairwise comparisons of the combination methods.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF12": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The bootstrap test results (p-values and CI) for the pairwise comparisons of the combination methods.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |