{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:20.086810Z"
},
"title": "Dynamic Ensembles in Named Entity Recognition for Historical Arabic Texts",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Majadly",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Haifa Haifa",
"location": {
"country": "Israel"
}
},
"email": ""
},
{
"first": "Tomer",
"middle": [],
"last": "Sagi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Haifa Haifa",
"location": {
"country": "Israel"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The use of Named Entity Recognition (NER) over archaic Arabic texts is steadily increasing. However, most tools have been either developed for modern English or trained over English language documents and are limited over historical Arabic text. Even Arabic NER tools are often trained on modern web-sourced text, making their fit for a historical task questionable. To mitigate historic Arabic NER resource scarcity, we propose a dynamic ensemble model utilizing several learners. The dynamic aspect is achieved by utilizing predictors and features over NER algorithm results that identify which have performed better on a specific task in real-time. We evaluate our approach against state-of-the-art Arabic NER and static ensemble methods over a novel historical Arabic NER task we have created. Our results show that our approach improves upon the state-of-the-art and reaches a 0.8 F-score on this challenging task.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The use of Named Entity Recognition (NER) over archaic Arabic texts is steadily increasing. However, most tools have been either developed for modern English or trained over English language documents and are limited over historical Arabic text. Even Arabic NER tools are often trained on modern web-sourced text, making their fit for a historical task questionable. To mitigate historic Arabic NER resource scarcity, we propose a dynamic ensemble model utilizing several learners. The dynamic aspect is achieved by utilizing predictors and features over NER algorithm results that identify which have performed better on a specific task in real-time. We evaluate our approach against state-of-the-art Arabic NER and static ensemble methods over a novel historical Arabic NER task we have created. Our results show that our approach improves upon the state-of-the-art and reaches a 0.8 F-score on this challenging task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Digitized historical literature is an essential resource in facilitating historical and social research. Information extraction, the task of automatically extracting structured information from digitized documents, has emerged as a method to scale the analysis of texts and allow the integration of information from different sources (e.g., (Ehrmann et al., 2020) , (Ren et al., 2017) ). One of the cardinal tasks in information extraction is NER, extracting entities from text and categorizing them into predefined categories. Numerous NER algorithms have been suggested, from rule-based approaches (Mesfar, 2007) to machine learning (ML) approaches e.g., (Zhou and Su, 2002) . However, an overwhelming majority of the algorithms have been developed over modern English text. Rule-based approaches designed using modern English grammatical rules are irrelevant to Arabic (Shaalan, 2014) . ML approaches rely on large modern English corpora sourced from international news outlets and social media. Training these tools to extract named entities from historical texts written in ancient Arabic requires large amounts of tagged text in the same language and preferably the same dialect, which are sorely missing.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "(Ehrmann et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 366,
"end": 384,
"text": "(Ren et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 600,
"end": 614,
"text": "(Mesfar, 2007)",
"ref_id": "BIBREF33"
},
{
"start": 657,
"end": 676,
"text": "(Zhou and Su, 2002)",
"ref_id": "BIBREF53"
},
{
"start": 872,
"end": 887,
"text": "(Shaalan, 2014)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we explore two avenues to mitigate the weakness of existing Arabic NER tools over historical Arabic texts (Shaalan, 2014) . First, we utilize an approach named dynamic prediction, that was used in adjacent fields such as schema matching (Sagi and Gal, 2013) , business process matching (Weidlich et al., 2013) , and pattern recognition (Ko et al., 2008) . In dynamic prediction, the results of different techniques are combined according to each result's (or component of the result) predicted quality. Here, we build an ensemble learning model based on dynamic prediction for NER algorithms. To alleviate the lack of resources in Arabic historical NER, we have created a novel dataset -the Bedaya Corpus and examine how training ML-based NER tools on historic Arabic impacts their performance instead of training them over modern Arabic text.",
"cite_spans": [
{
"start": 120,
"end": 135,
"text": "(Shaalan, 2014)",
"ref_id": "BIBREF45"
},
{
"start": 251,
"end": 271,
"text": "(Sagi and Gal, 2013)",
"ref_id": "BIBREF42"
},
{
"start": 300,
"end": 323,
"text": "(Weidlich et al., 2013)",
"ref_id": "BIBREF50"
},
{
"start": 350,
"end": 367,
"text": "(Ko et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contribution of this paper can, therefore, be summarized as follows. 1) We introduce a dynamic ensemble model for NER over historical Arabic texts. 2) We present Bedaya corpus, an Arabic historical dataset. 3) We perform a detailed empirical evaluation of our approach over baseline and stateof-the-art methods. We now provide background and preliminary definitions of NER and ensemblelearning in Section 2 and present related work in Section 3. In Section 4 we present our predictors and ensemble learning technique. In Section 5, we present the datasets. In Section 6, we report the results of our empirical evaluation, concluding our work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Named Entity Recognition (NER) seeks to locate and classify named entities mentioned in unstructured text into categories such as person names, organizations, locations, events, and others. For example, the output for the sentence \"Romeo and Juliet meet in Verona.\" is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "[Romeo](Person) and [Juliet] ",
"cite_spans": [
{
"start": 20,
"end": 28,
"text": "[Juliet]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "(Person) meet in [Verona](Location).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "Some NER algorithms provide a confidence value (usually over [0, 1] ) associated with each mapping. Others provide a set of confidence values for each token, one for each possible class. There are two broad approaches to constructing NER algorithms. Rule based approaches, rely on rules and patterns manually created in order to capture terms from input documents. For example, Mikheev et al. (Mikheev et al., 1999) suggest the following rule:",
"cite_spans": [
{
"start": 61,
"end": 64,
"text": "[0,",
"ref_id": null
},
{
"start": 65,
"end": 67,
"text": "1]",
"ref_id": null
},
{
"start": 393,
"end": 415,
"text": "(Mikheev et al., 1999)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "[Xxxx] (sequence of capitalized words) + 'is' + 'a' + [JJ*] (sequence of zero or more adjectives) + [REL] (relative). then [Xxxx] is (Person), e.g., \"[John White] is a beloved brother\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
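{
"text": "As an illustration only (ours, not part of the original rule set), such a rule can be approximated in Python with a regular expression; the Person label and the small list of relative nouns below are assumptions made for the sketch.\n\nimport re\n\n# Hypothetical, simplified version of the Mikheev-style rule: a sequence of\n# capitalized words followed by 'is a', optional adjectives, and a relative\n# noun is tagged as a Person.\nRELATIVES = ['brother', 'sister', 'father', 'mother']\nRULE = re.compile('((?:[A-Z][a-z]+ )+)is a (?:[a-z]+ )*(' + '|'.join(RELATIVES) + ')')\n\ndef tag_person(sentence):\n    match = RULE.search(sentence)\n    return match.group(1).strip() if match else None\n\nprint(tag_person('John White is a beloved brother'))  # -> John White",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},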
{
"text": "Rule-based systems achieve high precision but require a significant time investment to develop. Moreover, transferring rules from one language to another or between domains is very challenging (Jiang et al., 2016) .",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Jiang et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "In machine-learning (ML)-based approaches, an algorithm containing rules or some other internal representation is learned from training examples (Mansouri et al., 2008) . This approach can be further categorized into supervised, unsupervised, semi-supervised. Supervised approaches rely on manually tagged examples. Unsupervised approaches attempt to divide the tokens into similar groups, hopefully grouping the same class's entities. Semi-supervised approaches use a small number of tagged examples to guide the otherwise unsupervised process. Several features are used in ML-based approaches, such as part of speech, capitalized words, special marks (punctuation, numbers, dates, and titles), and word length. Recent NER systems rely on (deep) neural-networks over sequences of words (Li et al., 2020) . Gazetteers, or entity dictionaries, are an essential resource for NER that support entity tagging (Zamin and Oxley, 2011) by allowing to look up words and phrases that are commonly used as named entities (e.g., (Shaalan and Raza, 2009) , (Sajadi and Minaei, 2017)).",
"cite_spans": [
{
"start": 145,
"end": 168,
"text": "(Mansouri et al., 2008)",
"ref_id": "BIBREF32"
},
{
"start": 787,
"end": 804,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 905,
"end": 928,
"text": "(Zamin and Oxley, 2011)",
"ref_id": "BIBREF52"
},
{
"start": 1018,
"end": 1042,
"text": "(Shaalan and Raza, 2009)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "NER over Arabic text has proved much more challenging than other languages due to the complexity of Arabic morphology, absence of capital letters, and the lack of resources on which to train ML-based tools (Shaalan, 2014) . For example, while performance on CONLL 2000, a common NER task in English has reached an F1-score of 97.3% (Liu et al., 2019) , the best performance on a modern Arabic corpus ANERcorp only achieves an F1-score of 89.9% (Balla and Delany, 2020). Arabic texts can be categorized into classical Arabic, Modern Standard Arabic (MSA), and Arabic dialects (Shaalan, 2014) . Historical Islamic literature is written in classical Arabic (Habash, 2010). According to (Hetzron, 1997) , MSA differs from classical Arabic in vocabulary, syntax and styles. moreover they present 9 common morphonology changes that happened upon the time that lead for MSA from classical Arabic, thus changes leads to changes in syntax and words meanings. Researchers must consider these differences when using modern datasets and NER tools to build NER tools for historical Arabic texts.",
"cite_spans": [
{
"start": 206,
"end": 221,
"text": "(Shaalan, 2014)",
"ref_id": "BIBREF45"
},
{
"start": 332,
"end": 350,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 575,
"end": 590,
"text": "(Shaalan, 2014)",
"ref_id": "BIBREF45"
},
{
"start": 683,
"end": 698,
"text": "(Hetzron, 1997)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 NER",
"sec_num": "2"
},
{
"text": "Ensemble learning algorithms (Definition 2.1) aim to combine the outputs of several base-algorithms (e.g., classifiers) to get better predictive performance. Ensemble methods have been demonstrated to provide superior predictive performance versus single algorithms on a variety of problems. The potential for performance improvement is higher when the algorithms' performance is diverse, such that different algorithms succeed and fail on different tasks (Oza and Russell, 2001 ).",
"cite_spans": [
{
"start": 456,
"end": 478,
"text": "(Oza and Russell, 2001",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
{
"text": "Definition 2.1 (NER ensemble learning algorithm). Let D \u2286 D be a set of training documents and let f m : T D \u2192 C be the expected mapping between each token in this set and a class (supervised labels). Let F be the set of all possible NER algorithms and let F \u2282 F be the set of base NER algorithms. Let M D be the set of all such possible mappings for D and let m F \u2282 M D be the set of NER algorithm results returned by F over D. A NER ensemble learning algorithm is a function f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
{
"text": "2 D \u00d7 M D \u00d7 2 M D \u2192 F.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
{
"text": "Given a specific training set D, an expected mapping f m , a set of NER algorithms F , and a set of NER algorithm results m F , f outputs a NER algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
{
"text": "Different ensemble learning approaches have been attempted. In voting based approaches, each classifier votes for a class, and the class with the most votes is chosen (Dietterich, 2000) . Linear approaches assign a weight to each classifier, and the class with the highest combined score is then chosen. Several ensemble methods are based on heuristics. Bagging (Breiman, 1996) algorithms train each model in the ensemble using a randomly drawn subset of the training set. In contrast, Boosting (Schapire, 1990) algorithms gradually build the model by training each new model instance to highlight the cases that previous models misclassified.",
"cite_spans": [
{
"start": 167,
"end": 185,
"text": "(Dietterich, 2000)",
"ref_id": "BIBREF12"
},
{
"start": 362,
"end": 377,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 495,
"end": 511,
"text": "(Schapire, 1990)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
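{
"text": "For concreteness, the following is a minimal sketch (ours, not the paper's) of a static voting ensemble in the sense of Definition 2.1: each base NER result votes for a class per token, and the majority class wins.\n\nfrom collections import Counter\n\ndef majority_vote(token, base_results):\n    # base_results: list of dicts mapping token -> predicted class ('O' = non-entity)\n    votes = Counter(result.get(token, 'O') for result in base_results)\n    return votes.most_common(1)[0][0]\n\nbase_results = [\n    {'Verona': 'Location'},\n    {'Verona': 'Location'},\n    {'Verona': 'Person'},\n]\nprint(majority_vote('Verona', base_results))  # -> Location",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},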
{
"text": "However, the above mentioned approaches are trained once, and their combined algorithm remains static, regardless of base classifiers' results. This static approach can lead to unexpected results. For example, a reliable classifier utilizes a word's capitalization as a major signal. In a long sentence that is, for some reason, capitalized throughout (perhaps for emphasis), this classifier may decide that all tokens belong to one long, named entity. The result would appear wrong to human eyes but will be taken heavily into account due to the classifier's static high importance in the ensemble. In section 4.2, we present our dynamic approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Learning",
"sec_num": "2.2"
},
{
"text": "Speck and Ngomo 2014 explore the use of NER ensemble learning in English, combining four NER algorithms using 15 different ensemble learning algorithms. They evaluate these methods over five different English language datasets and show that ensembles can reduce the error rate by an average of 40%. However, all of the ensemble learning methods evaluated were static methods. In this work, we employ predictors to assess the quality of NER results dynamically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "For Arabic NER, Abdallah et al. 2012 integrate an ML-based approach with a rule-based one. They train a decision tree model to combine the results, achieving an F1-score of 87.8%. (Sajadi and Minaei, 2017) learn a static ensemble for named entity recognition over classical Arabic text and rely on the Adaboost algorithm, an implementation of the multi-class boosting method. Using their novel NOORcorp (5) historic Arabic corpus, the system was evaluated over its ability to recognize three classes: location, person, organization. Ekbal and Bandyopadhyay 2010 propose a NER for Arabic based on Support Vector Machines (SVM) (Hearst et al., 1998) used to classify feature vectors for every word.",
"cite_spans": [
{
"start": 626,
"end": 647,
"text": "(Hearst et al., 1998)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Recent NER systems rely on (deep) neuralnetworks over sequences of words. These systems infer features from raw sentences by using several layers of components allowing to represent higher levels of abstraction over the word sequence (see survey (Li et al., 2020) ). The current state of the art are NER systems based on the contextual word embedding approach (Peters et al., 2018) and especially systems which employ BERT (Devlin et al., 2018) , a novel pre-trained deeply bidirectional, unsupervised language representation, that takes into account the context for each occurrence of a given word. Al-Smadi et al., 2020 propose an Arabic NER system using a six-layer deep neural network model based on the transfer learning architectures among deep neural networks (Yosinski et al., 2014) , (Devlin et al., 2018) . The system achieves an overall F1-score of 90% Compared with their baseline BI-LSTM-CRF model which reached an overall F1-score of 73%. Their work is trained and tested over MSA, they used WikiFANE-Gold (Alotaibi and Lee, 2014) as a dataset that builds over the Arabic version of Wikipedia. While our work focus on historical text, and emphasizes (see 6.4) the non improving NER over historical Arabic text using MSA.",
"cite_spans": [
{
"start": 246,
"end": 263,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 423,
"end": 444,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 767,
"end": 790,
"text": "(Yosinski et al., 2014)",
"ref_id": "BIBREF51"
},
{
"start": 793,
"end": 814,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Dynamic prediction has been previously proposed for schema matching (Sagi and Gal, 2013) and pattern matching (Ko et al., 2008) . To the best of our knowledge, it has not been used to combine NER algorithms, as proposed here.",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(Sagi and Gal, 2013)",
"ref_id": "BIBREF42"
},
{
"start": 110,
"end": 127,
"text": "(Ko et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "3"
},
{
"text": "Static ensemble methods rely on the base NER algorithms' results to generate a NER algorithm that combines their results such that on every document set, the results of the base NER algorithms are combined in the same way. Predictors have been explored for schema matching (Sagi and Gal, 2013) and pattern recognition (Ko et al., 2008) as a method for fine-tuning the ensemble creation method to the task at hand. Predictors assess the result of each base algorithm on the task and provide a score. The ensemble model can use this score to determine whether or not the algorithm has succeeded in this specific occasion to identify the correct class. Formally:",
"cite_spans": [
{
"start": 273,
"end": 293,
"text": "(Sagi and Gal, 2013)",
"ref_id": "BIBREF42"
},
{
"start": 318,
"end": 335,
"text": "(Ko et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
{
"text": "Definition 4.1 (Predictor). Let D be a set of documents and let {T D |\u2200t \u2208 T D : t \u2208 d \u2227 d \u2208 D} be the set of tokens contained in these documents. Let C be a set of classes, and let f : T D \u2192 C \u00d7 R be a NER algorithm result assigning a real number and a class to each token. Let N be the set of all possible such NER algorithm results and let n \u2208 D be some natural non-zero number, then a predictor is a function f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
{
"text": "N \u2192 R n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
{
"text": "Thus, a predictor assesses the quality of the algorithm's outputs without knowing the expected results. Predictors are defined over different result cardinalities. Dataset-level predictors output a single number for the entire dataset. Sentence-level predictors output one number for each sentence, and token-level predictors output a number for each token. Furthermore, as mentioned in Section 2, a NER algorithm result may include a confidence score for each token for each of the possible classes. In this case the NER algorithm result (and predictor input) is a function f :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
{
"text": "T D \u2192 (C \u00d7 R) |C| .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
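{
"text": "In code, Definition 4.1 amounts to a function from a NER algorithm result to one or more real numbers. The sketch below (ours, for illustration) fixes one possible representation of a result with per-class confidences and shows a toy dataset-level predictor.\n\nfrom typing import Callable, Dict, List\n\n# A NER result with per-class confidences: token -> {class: confidence}\nNerResult = Dict[str, Dict[str, float]]\n# A predictor maps a NER result to a vector of n real numbers\nPredictor = Callable[[NerResult], List[float]]\n\ndef mean_top_confidence(result: NerResult) -> List[float]:\n    # Toy dataset-level predictor (n = 1): average of each token's top confidence\n    tops = [max(scores.values()) for scores in result.values()]\n    return [sum(tops) / len(tops)] if tops else [0.0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},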
{
"text": "Here, we propose the following predictors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Employing Predictors for Dynamic Ensemble Building",
"sec_num": "4"
},
{
"text": "Token-level Predictors Our token-level predictors are calculated over the NER algorithm's confidence rates for each token and class. Thus, an algorithm tasked with classifying tokens into four classes will report four confidence values for each token, representing the confidence in its prediction for every class. The first predictor is max confidence rate token (MCT), which takes the value of the maximum confidence rate per token. Usually, this is the reported confidence for the chosen class. This predictor indicates how confident the NER tool is in its outcome, which leads it to prefer NER tools with higher confidence rates. The second predictor is the Difference confidence Rate Token (DCT), which measures the difference between the token's maximal confidence and second-best confidence rate, another indication of how confident the NER tool is in its outcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictors",
"sec_num": "4.1"
},
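{
"text": "As a minimal sketch (ours), the two token-level predictors can be computed directly from a per-class confidence dictionary.\n\ndef mct(confidences):\n    # MCT: maximum confidence reported for the token\n    return max(confidences.values())\n\ndef dct(confidences):\n    # DCT: gap between the best and second-best confidence\n    top_two = sorted(confidences.values(), reverse=True)[:2]\n    return top_two[0] - top_two[1]\n\nsherlock = {'Person': 0.9, 'Location': 0.7, 'Organization': 0.8, 'Other': 0.3}\nprint(mct(sherlock))  # 0.9\nprint(dct(sherlock))  # 0.1 (up to floating-point rounding)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predictors",
"sec_num": "4.1"
},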
{
"text": "We explore two approaches to constructing sentence-level predictors. The first two predictors rely on the confidence rate information as in the token-level predictors. The latter two rely on counting results. Binary distance (BD) measures the sum of distances between the reported confidence value and the closest binary value. BD indicates the number of confident tokens in a sentence. A higher value for confident tokens leads to a lower value of BD. For each token's sentence, the confidence value is rounded towards the closest binary value, and the difference is taken and summed. For example, in Figure 1 , given the confidence values from two NER's output, the lower BD indicates a confident NER. This predictor's rationale is that good NER results are those in which the NER algorithm is confident and either marks a token as belonging to the class or not. Half-hearted values are penalized. Following a similar rationale, difference confidence rate sentence DCS is the sum of differences between the highest and second-highest confidence rate for every token in the sentence. Like DCT, DCS indicates the NER tool's confidence in its outcome. The second group of sentence-level predictors utilizes a counting representation of the sentence over the number of named entities in general or the number of named entities from a specific class. With these predictors, the ensemble model can prefer a NER Tool with good performance for recognizing a type of Named Entity or can ignore others that are bad at recognizing this type. Sentence number of named entities (SNN) counts the number of tokens marked as belonging to named entities in the sentence normalized over the sentence's length. We similarly define three additional predictors: sentence number of persons (SNP), sentence number of organizations (SNO), and sentence Number of Locations (SNL). Each predictor counts the number of tokens from their respective class. Finally, SLD measures the proportion of tokens associated with the most prevalent class in the sentence from all tokens in the sentence. Table 1 contains examples for predictor calculation over the sentence \"Sherlock Holmes lives on Baker Street\". Predictor Name",
"cite_spans": [],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2066,
"end": 2073,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Sentence-level Predictors",
"sec_num": null
},
{
"text": "If the confidence rate for 'Sherlock' is 0.9 for Person class and 0.7, 0.8, and 0.3 for Location, Organization, and Other classes respectively, its MCT will be 0.9. DCT If the confidence rate for 'Sherlock' is 0.9 for Person class and 0.7, 0.8, and 0.3 for Location, Organization and Other classes respectively, its DCT will be 0.9 -0.8 = 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MCT",
"sec_num": null
},
{
"text": "In figure 1 we describe two NER tools N ER 1 and N ER 2 . N ER 2 is judged superior by BD since it is very confident for three tokens and only uncertain for two versus N ER 1 that is only confident for two tokens. DCS If the confidence rate for every token in the sentence (six in our example) is 0.9 for the Person class and 0.7, 0.5, and 0.3 for the Location, Organization, and Other classes respectively. DSC for this sentence will be 6 n=1 (0.9 \u2212 0.7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BD",
"sec_num": null
},
{
"text": "If N ER 1 recognizes just Sherlock and Holmes as Person (or another NE) the predictor score will be 2 for this NER tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SNN",
"sec_num": null
},
{
"text": "If N ER 1 recognizes just Sherlock and Holmes as Person (or another NE) the predictor score will be 2 for Person, and zero for Organization and Location. SLD If N ER 1 recognizes Sherlock, Holmes as Person, and Baker as Location (or another named entity) the predictor score will be 2/3. Figure 2 shows the workflow of our technique. After training NER base algorithms over the training data, predictors are calculated over the results. Both the predictor value and the raw NER algorithm results serve as inputs for the ensemblelearning model, aiming to learn an ensemble method. When the learned method is employed at test/usage, the dataset is first fed into the NER tools to get the NER results, then the predictors are calculated, and the learned ensemble method combines both to give a class label for each token. ",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 296,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "SNP, SNO, SNL",
"sec_num": null
},
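{
"text": "A corresponding sketch (ours) of three of the sentence-level predictors; here a sentence is a list of per-token confidence dictionaries, and tagged is the list of predicted classes, with SNN following the length-normalized definition given above.\n\ndef bd(sentence):\n    # Binary distance: distance of each token's top confidence from the nearest binary value\n    return sum(min(c, 1 - c) for c in (max(token.values()) for token in sentence))\n\ndef dcs(sentence):\n    # Sum over tokens of (best confidence - second-best confidence)\n    total = 0.0\n    for token in sentence:\n        best, second = sorted(token.values(), reverse=True)[:2]\n        total += best - second\n    return total\n\ndef snn(tagged, other='Other'):\n    # Named-entity tokens normalized by sentence length\n    return sum(1 for c in tagged if c != other) / len(tagged)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-level Predictors",
"sec_num": null
},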
{
"text": "There is an apparent lack of Arabic corpora (Abouenour et al., 2010). Moreover, few of these corpora have been made freely and publicly available for research purposes (Bies et al., 2012) . while others are available but under licenses (Mostefa et al., 2009) Thus, researchers often rely on private corpora, which precludes reproducing previous work and comparing its performance over a consistent benchmark.",
"cite_spans": [
{
"start": 168,
"end": 187,
"text": "(Bies et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 236,
"end": 258,
"text": "(Mostefa et al., 2009)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "In this work we use three corpora, NoorCorp (Sajadi and Minaei, 2017) based on a different historical book circa 800 AD, ANERcorp (Benajiba et al., 2007) , a modern corpus which has become a standard in Arabic NER works. ANERcorp is based upon web-documents written in modern Arabic collected from 316 articles from newspapers such as bbc 1 and aljazeera 2 . Table 2 compares the corpora's token counts and class distributions. The datasets are tagged into four classes, Person, Organization, Location, and Other. The latter is assigned to tokens that do not belong to the first three classes. The third corpus is a historic one contributed in this work.",
"cite_spans": [
{
"start": 130,
"end": 153,
"text": "(Benajiba et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 359,
"end": 366,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "The scarcity of public Historic NER datasets has led us to create a new corpus. The Bedaya corpus 3 , is based on Ibn Kathir's historical Arabic opus Al-Bid\u0101ya wa-al-Nih\u0101ya (The Beginning and The End). Kathir's ten volumes contain an extensive description of the world's history from its creation and are widely used in Islamic studies. The dataset is taken continuously from the 7th part of the book. It contains 20,500 tokens with 5,664 different tokens. Named entities were annotated manually by one of the authors and verified by an Arabic language expert that reviewed 50% of the dataset with a 99.998% inter-rater agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bedaya corpus",
"sec_num": null
},
{
"text": "Over a 80-20% training-test partition (Sajadi and Minaei, 2017) over NOORCorp dataset reached approximately 97% F1-score. In our evaluation of the NOORCorp dataset, we challenge our tools with a harder task, which is to train over one dataset and to be tested on a different one. 4 Thus, we ensure the generality of our algorithm and obtain enough space for ensemble algorithms improving.",
"cite_spans": [
{
"start": 280,
"end": 281,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bedaya corpus",
"sec_num": null
},
{
"text": "We now present a series of empirical evaluations examining the following questions. Which predictors are the most correlated with our desired quality measures ( \u00a7 6.2)? Does our dynamic ensemble approach outperform the baseline approaches of using a single NER algorithm and using static ensemble learning ( \u00a7 6.3)? What is the performance impact of training NER algorithms over modern Arabic text versus historic Arabic for a historic Arabic NER task ( \u00a7 6.4)? We begin by introducing the experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "We employ C4.5 (Quinlan, 2014) as our ensemble learning algorithm. C4.5 generates a decision tree, a learned classifier that uses a tree-like graph or model of decisions and possible outcomes. Each internal node represents an \"examination\" on an attribute, each branch represents the outcome of the examination, and each leaf node represents a class label. The paths from the root to the leaves represent classification rules. Our DEM approach utilizes the J48 implementation of C4.5 in Weka (Hall et al., 2009) .",
"cite_spans": [
{
"start": 15,
"end": 30,
"text": "(Quinlan, 2014)",
"ref_id": "BIBREF39"
},
{
"start": 492,
"end": 511,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Algorithm",
"sec_num": "6.1.1"
},
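{
"text": "Outside of Weka, the same idea can be sketched as follows (ours, assuming scikit-learn is available; DecisionTreeClassifier is a CART-style stand-in for C4.5/J48). Each training row holds the base NER predictions for one token, and predictor values (MCT, DCS, etc.) would be appended as additional numeric columns.\n\nfrom sklearn.preprocessing import OrdinalEncoder\nfrom sklearn.tree import DecisionTreeClassifier\n\n# One row per token: the class predicted by each base NER tool\nbase_predictions = [\n    ['Person', 'Person', 'Other'],\n    ['Other', 'Other', 'Other'],\n    ['Location', 'Other', 'Location'],\n]\ngold = ['Person', 'Other', 'Location']\n\nencoder = OrdinalEncoder()\nX = encoder.fit_transform(base_predictions)\n\nensemble = DecisionTreeClassifier()\nensemble.fit(X, gold)\nprint(ensemble.predict(encoder.transform([['Person', 'Person', 'Other']])))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Algorithm",
"sec_num": "6.1.1"
},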
{
"text": "The following are the NER algorithms used in this work. Pattern is a rule-based tool. The tool uses pattern comparison with a small lexicon containing the 50 most prevalent words for every named entitytag in the historical Arabic dataset NoorCourp. Additionally, the tool employs six rules related to the Arabic language and historic Arabic specifications taken from (Sajadi and Minaei, 2017) and (Shaalan and Raza, 2009) with some minor changes. Table 3 presents the rules we use. The rest of the tools utilize an ML approach. CRF uses the conditional random field technique (Lafferty et al., 2001 ), a discriminative model for sequence labeling. It models the dependency between each sample and the entire input sequence. CRF uses the following extracted features: Part of speech (Stanford POS tagger (Manning et al., 2014) ), special flag marks (is punctuation?, is a number?), words, word parts (last three, two, one letter), word length, and nearby words features. LSTM-CRF (Huang et al., 2015 ) combines a bidirectional long short-term memory neural network (Hochreiter and Schmidhuber, 1997) with CRF. Long short-term memory (LSTM) is an artificial recurrent neural network (Rumelhart et al., 1986) . Unlike standard feedforward neural networks, LSTM has feedback connections and can process entire sequences of data. Polyglot (Al-Rfou et al., 2015) is a multilingual, semi-supervised ML-based NER that uses word embedding. Word embeddings are representations of words acquired by using vast amounts of raw text. These representations capture information about words' syntactic functionality and semantics. are trained on Wikipedia, the text consists of the most frequent 100K words for each language, and the word representation consists of 64 dimensions. The polyglot model learns a simple three-layer neural network using the word embedding representation for classification, taking as features the embeddings for the words in a nearby interval of text around each word. The last three algorithms (SVM, LR, and DT) use three different machine learning classifiers over the same feature space: part of speech tags, and word length. SVM (Chang and Lin, 2011) works by finding a hyperplane in N-dimensional space (N number of features) that distinctly classifies the data points. Logistic Regression (LR) (Hosmer Jr et al., 2013) uses a logistic function to model a binary dependent variable. Decision Tree (DT) (Quinlan, 1986 ) is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes. To ensure diversity in our ensemble approach, we chose these non-sequence ML-based algorithms that consider token-level information together with the sequence-based algorithms CRF, LSTM-CRF, and Polyglot. The classifiers use the following extracted features: Part Of Speech tags, the word itself, words, and word length.",
"cite_spans": [
{
"start": 367,
"end": 392,
"text": "(Sajadi and Minaei, 2017)",
"ref_id": "BIBREF43"
},
{
"start": 397,
"end": 421,
"text": "(Shaalan and Raza, 2009)",
"ref_id": "BIBREF46"
},
{
"start": 577,
"end": 599,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF27"
},
{
"start": 783,
"end": 826,
"text": "(Stanford POS tagger (Manning et al., 2014)",
"ref_id": null
},
{
"start": 980,
"end": 999,
"text": "(Huang et al., 2015",
"ref_id": "BIBREF23"
},
{
"start": 1065,
"end": 1099,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF21"
},
{
"start": 1182,
"end": 1206,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF41"
},
{
"start": 2146,
"end": 2167,
"text": "(Chang and Lin, 2011)",
"ref_id": "BIBREF10"
},
{
"start": 2420,
"end": 2434,
"text": "(Quinlan, 1986",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 447,
"end": 455,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "NER Tools",
"sec_num": "6.1.2"
},
{
"text": "Ensemble methods have been demonstrated to provide superior predictive performance versus single algorithms on a variety of problems. The potential for performance improvement is higher when the algorithms' performance is diverse, such that different algorithms succeed and fail on different tasks (Oza and Russell, 2001 ). To ensure diversity, we choose both rule-based and machine learning algorithms, and within the latter group, a diverse set of principles and models. Empirical diversity was measured over this task by using the Pearson product-moment correlation coefficient (PPMCC) (Steel et al., 1960) between each pair of NER algorithms results over the Bedaya dataset. An absolute value of 1.0 indicates a perfect correlation, and 0.0 indicates no correlation at all. CRF is positively correlated with LSTM-CRF at 0.7 and less correlated with POLYGLOT at 0.38. Polyglot is highly correlated with LR with 0.7. DT with LR and SVM with LSTM-CRF are very correlated at 0.84. Pattern and LR have a 0.78 correlation between them. The most correlated tools are SVM NER and DT with 0.96.",
"cite_spans": [
{
"start": 298,
"end": 320,
"text": "(Oza and Russell, 2001",
"ref_id": "BIBREF36"
},
{
"start": 589,
"end": 609,
"text": "(Steel et al., 1960)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Diversity",
"sec_num": "6.1.3"
},
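{
"text": "A minimal sketch (ours) of how such a pairwise correlation can be computed, here over hypothetical per-token correctness indicators for two tools.\n\nfrom scipy.stats import pearsonr\n\n# 1 = the tool classified the token correctly, 0 = it did not\ncrf_correct = [1, 1, 0, 1, 0, 1, 1, 0]\nlstm_correct = [1, 1, 0, 0, 0, 1, 1, 1]\n\nr, p_value = pearsonr(crf_correct, lstm_correct)\nprint(round(r, 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER Diversity",
"sec_num": "6.1.3"
},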
{
"text": "We measure NER performance using the commonly employed precision, recall, and F1-score. Precision is the portion of correctly classified tokens (true positives) among the tokens classified as belonging to a named entity by the NER system (predicted positive). In contrast, recall is the fraction of true positives predicted by the NER system among the total number of tokens belonging to named entities in the dataset (actual positive). F1-score is the harmonic mean of precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},
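{
"text": "A small sketch (ours) of these token-level measures, treating every token whose class is not Other as a named-entity token.\n\ndef precision_recall_f1(gold, predicted, other='Other'):\n    # gold, predicted: lists of class labels, one per token\n    tp = sum(1 for g, p in zip(gold, predicted) if p != other and p == g)\n    predicted_pos = sum(1 for p in predicted if p != other)\n    actual_pos = sum(1 for g in gold if g != other)\n    precision = tp / predicted_pos if predicted_pos else 0.0\n    recall = tp / actual_pos if actual_pos else 0.0\n    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f1\n\nprint(precision_recall_f1(['Person', 'Other', 'Location'], ['Person', 'Other', 'Other']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},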
{
"text": "Oracle We use the concept of an oracle (Ko et al., 2008) to measure the upper limit for the ensemble of classifiers' performance. An oracle is the union of true positive results of all classifiers. If one classifier can correctly classify a given input, then an ensemble of classifiers that can classify this input will not be better than the oracle.",
"cite_spans": [
{
"start": 39,
"end": 56,
"text": "(Ko et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},
{
"text": "Definition 6.1 (Oracle). Let T be a token and let C be the set of classes where c \u2208 C is the expected class of token T and let O \u2208 C be the class for tokens labeled as a non named entity. The oracle over NER tools [N 1 , ..., N n ] is defined as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},
{
"text": "Oracle(T ) = c \u2203i \u2208 [1, .., n]|N i (T ) = c O else",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},
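{
"text": "A direct translation of this definition (our sketch; each NER tool is modeled as a function from a token to a class).\n\ndef oracle(token, expected_class, ner_tools, non_entity='O'):\n    # Returns the expected class if at least one tool predicts it, else the non-entity label O\n    for tool in ner_tools:\n        if tool(token) == expected_class:\n            return expected_class\n    return non_entity\n\ntools = [lambda t: 'Person', lambda t: 'Location']\nprint(oracle('Verona', 'Location', tools))  # -> Location",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},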
{
"text": "The oracle's performance over Bedaya dataset is 94% Precision, 75% Recall, and F1-score 83%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measures",
"sec_num": "6.1.4"
},
{
"text": "To assess the quality of a predictor, we use the correlation between the predictor value and the result's eventual quality. In our case, we measure it using PPMCC and Point Biserial Correlation (PBC) (Tate, 1954) . PPMCC can be used between two continuous variables to assess their linear relationship. PBC is used to measure the correlation between a binary (is entity?) and continuous variable (prediction). We split our dataset (Bedaya) into sentences for sentence-level predictors, run all base NER algorithms over each sentence, and compare the predictor value over each sentence to precision and recall calculated on it using PPMCC. For token-level predictors (DCT and MCT), we use PBC with true/false token predictions. Correlation measurement is done on two levels, for all the tools together and for each tool individually. Predictors correlated with all the tools are more general and make the system more effective. On the other hand, predictors correlated with one tool can give helpful information to the ensemble model about a specific tool. Table 4 summarizes correlation results for predictors with all the tools reporting their correlation over 10,000 tokens and 500 sentences using seven different NER tools with quality measures precision and recall. The best-correlated sentence-level predictor is DCS, and the best token-level predictor is MCT. A cross-correlational analysis revealed that BD is correlated strongly with DCS, SNN is strongly correlated with SNP, and DCT is correlated with MCT. Other predictor pairs are weakly correlated.",
"cite_spans": [
{
"start": 200,
"end": 212,
"text": "(Tate, 1954)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 1056,
"end": 1063,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Evaluating Predictors",
"sec_num": "6.2"
},
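{
"text": "Both correlation measures are available in scipy; a minimal sketch (ours, with made-up numbers) of evaluating a token-level predictor with PBC and a sentence-level predictor with PPMCC.\n\nfrom scipy.stats import pearsonr, pointbiserialr\n\n# Token level: correlate MCT values with binary correct/incorrect outcomes\nmct_values = [0.9, 0.55, 0.8, 0.4, 0.95]\ntoken_correct = [1, 0, 1, 0, 1]\nprint(pointbiserialr(token_correct, mct_values).correlation)\n\n# Sentence level: correlate DCS values with per-sentence precision\ndcs_values = [1.2, 0.4, 0.9, 0.3]\nsentence_precision = [0.8, 0.5, 0.7, 0.4]\nprint(pearsonr(dcs_values, sentence_precision)[0])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Predictors",
"sec_num": "6.2"
},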
{
"text": "When applying NER algorithms over Bedaya (Table 5) with NoorCorp (Table 5) as a train dataset, somewhat surprisingly, classic CRF achieves the best score while the state-of-the-art ML-model LSTM-CRF NER is only the fourth-best tool. Using DEM results in an ensemble that outperforms CRF by three percentage points on recall and 2.4 percentage points on F1-score. In compared with the following static ensemble methods: C4.5 (Quinlan, 2014) , Adaboost (M1) (Freund and Schapire, 1996) and bagging (BG) (Breiman, 1996) with RandomForest (Breiman, 2001) , REP-Tree, and C4.5 as the base classifiers. The rest of the static ensemble methods are Logistic Model Trees (LMT) (Landwehr et al., 2005) , Sequential Minimal Optimization (SMO) (Hastie and Tibshirani, 1998) , and Naive Bayes (NB) (John and Langley, 1995). As we chose the C4.5 algorithm as our dynamic ensemble method, we can see the impact of utilizing C4.5 with predictors versus using it without predictors by comparing the other eight static models' results. It seems that in this setup and over this task, DEM can improve both the recall and the resulting F1-score by approximately three percentage points compared with the best static method's performance. When comparing to the Oracle, one can see that DEM manages to close half of the distance between the best performing NER and the Oracle in terms of F1-score. In contrast, the next best ensemble method only manages to close a quarter of this distance. ",
"cite_spans": [
{
"start": 424,
"end": 439,
"text": "(Quinlan, 2014)",
"ref_id": "BIBREF39"
},
{
"start": 456,
"end": 483,
"text": "(Freund and Schapire, 1996)",
"ref_id": "BIBREF15"
},
{
"start": 501,
"end": 516,
"text": "(Breiman, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 535,
"end": 550,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF9"
},
{
"start": 668,
"end": 691,
"text": "(Landwehr et al., 2005)",
"ref_id": "BIBREF28"
},
{
"start": 732,
"end": 761,
"text": "(Hastie and Tibshirani, 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 65,
"end": 74,
"text": "(Table 5)",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Evaluating DEM",
"sec_num": "6.3"
},
{
"text": "Of the seven NER algorithms we evaluate, five are ML-based. Here, we evaluate their performance when trained on modern and historical datasets. Figure 3 shows the algorithms' performance when tested over the Bedaya dataset and trained over two different datasets: ANER (Benajiba et al., 2007) a modern corpus, NoorCorp (Sajadi and Minaei, 2017) a historical corpus, and over both. The results demonstrate the importance of historic Arabic datasets as algorithms trained on historic data achieve better performance than when trained upon modern data. When trained over a mix of historical and modern data, didn't lead to improvement, thus relatively poor performance highlights the need to find better ways to transfer learning from modern to historical texts. Table 7 shows some outputs of tokens from Bedaya dataset by CRF-NER tool (the best tool in our ensemble model) when it trained over modern and historic text.",
"cite_spans": [
{
"start": 269,
"end": 292,
"text": "(Benajiba et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 760,
"end": 767,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Modern Text Versus Historical Text",
"sec_num": "6.4"
},
{
"text": "We have proposed a dynamic ensemble model for NER and demonstrated its efficacy over a historical Arabic task. We have shown that training MLbased NER algorithms over modern Arabic text negatively impacts their performance over historical text, even when the training is combined with historical sources. This result highlights the need for more tagged historical datasets, such as the Bedaya corpus contributed by this work. In future work, we intend to enhance the dynamic ensemble model by exploring additional predictors and alternative ensemble learning methods. We further intend to explore different ways to utilize modern Arabic corpora in historical text analysis tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "https://www.bbc.com/arabic 2 https://www.aljazeera.net/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "available at https://github.com/ muhammad-majadly/Bedaya-dataset 4 It should be noted, that in preliminary work we were able to reproduce these same results over an 80-20 split of NOORCorp using both static and dynamic ensembles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Integrating rule-based system with classification for Arabic named entity recognition",
"authors": [
{
"first": "Sherief",
"middle": [],
"last": "Abdallah",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Shoaib",
"suffix": ""
}
],
"year": 2012,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics (CICLing '12)",
"volume": "",
"issue": "",
"pages": "311--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sherief Abdallah, Khaled Shaalan, and Muhammad Shoaib. 2012. Integrating rule-based system with classification for Arabic named entity recognition. In International Conference on Intelligent Text Pro- cessing and Computational Linguistics (CICLing '12), pages 311-322. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using the yago ontology as a resource for the enrichment of Named Entities in Arabic WordNet",
"authors": [
{
"first": "Lahsen",
"middle": [],
"last": "Abouenour",
"suffix": ""
},
{
"first": "Karim",
"middle": [],
"last": "Bouzoubaa",
"suffix": ""
},
{
"first": "Paolo",
"middle": [
"Rosso"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of The Seventh International Conference on Language Resources and Evaluation (LREC 2010) Workshop on Language Resources and Human Language Technology for Semitic Languages",
"volume": "",
"issue": "",
"pages": "27--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lahsen Abouenour, Karim Bouzoubaa, and Paolo Rosso. 2010. Using the yago ontology as a re- source for the enrichment of Named Entities in Ara- bic WordNet. In Proceedings of The Seventh In- ternational Conference on Language Resources and Evaluation (LREC 2010) Workshop on Language Resources and Human Language Technology for Semitic Languages, pages 27-31.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Polyglot-ner: Massive multilingual named entity recognition",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the society of industrial applied mathematics (SIAM '15) International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "586--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot-ner: Massive mul- tilingual named entity recognition. In Proceed- ings of the society of industrial applied mathematics (SIAM '15) International Conference on Data Min- ing, pages 586-594. SIAM.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transfer learning for arabic named entity recognition with deep neural networks",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Al-Smadi",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Al-Zboon",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Jararweh",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Juola",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "37736--37745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Al-Smadi, Saad Al-Zboon, Yaser Jarar- weh, and Patrick Juola. 2020. Transfer learning for arabic named entity recognition with deep neural networks. IEEE Access, 8:37736-37745.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hybrid approach to features representation for fine-grained arabic named entity recognition",
"authors": [
{
"first": "Fahd",
"middle": [],
"last": "Alotaibi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "984--995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fahd Alotaibi and Mark Lee. 2014. A hybrid ap- proach to features representation for fine-grained arabic named entity recognition. In Proceedings of COLING 2014, the 25th International Confer- ence on Computational Linguistics: Technical Pa- pers, pages 984-995.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exploration of approaches to arabic named entity recognition",
"authors": [
{
"first": "Amn",
"middle": [],
"last": "Husamelddin",
"suffix": ""
},
{
"first": "Sarah",
"middle": [
"Jane"
],
"last": "Balla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Delany",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Husamelddin AMN Balla and Sarah Jane Delany. 2020. Exploration of approaches to arabic named entity recognition.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Anersys: An Arabic named entity recognition system based on maximum entropy",
"authors": [
{
"first": "Yassine",
"middle": [],
"last": "Benajiba",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Jos\u00e9 Miguel",
"middle": [],
"last": "Bened\u00edruiz",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics (CICLing '07)",
"volume": "",
"issue": "",
"pages": "143--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yassine Benajiba, Paolo Rosso, and Jos\u00e9 Miguel Bened\u00edruiz. 2007. Anersys: An Arabic named entity recognition system based on maximum entropy. In International Conference on Intelligent Text Process- ing and Computational Linguistics (CICLing '07), pages 143-153. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic resources for Arabic machine translation",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Denise",
"middle": [],
"last": "Dipersio",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
}
],
"year": 2012,
"venue": "John Benjamins",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Denise DiPersio, and Mohamed Maamouri. 2012. Linguistic resources for Arabic machine translation. John Benjamins, Amsterdam, The Netherlands.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bagging predictors",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 1996,
"venue": "Machine learning",
"volume": "24",
"issue": "2",
"pages": "123--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 1996. Bagging predictors. Machine learning, 24(2):123-140.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Random forests. Machine learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine learn- ing, 45(1):5-32.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Libsvm: A library for support vector machines",
"authors": [
{
"first": "Chih-Chung",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Chih-Jen",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM transactions on intelligent systems and technology (TIST '11)",
"volume": "2",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: A library for support vector machines. ACM trans- actions on intelligent systems and technology (TIST '11), 2(3):1-27.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Ensemble methods in machine learning",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the First International Workshop on Multiple Classifier Systems, (MCS '00)",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas G. Dietterich. 2000. Ensemble methods in ma- chine learning. In Proceedings of the First Inter- national Workshop on Multiple Classifier Systems, (MCS '00), page 1-15, Berlin, Heidelberg. Springer- Verlag.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Extended overview of clef hipe 2020: named entity processing on historical newspapers",
"authors": [
{
"first": "Maud",
"middle": [],
"last": "Ehrmann",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Romanello",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Fl\u00fcckiger",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Clematide",
"suffix": ""
}
],
"year": 2020,
"venue": "CLEF 2020 Working Notes. Conference and Labs of the Evaluation Forum",
"volume": "2696",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maud Ehrmann, Matteo Romanello, Alex Fl\u00fcckiger, and Simon Clematide. 2020. Extended overview of clef hipe 2020: named entity processing on histori- cal newspapers. In CLEF 2020 Working Notes. Con- ference and Labs of the Evaluation Forum, volume 2696. CEUR.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Named entity recognition using support vector machine: A language independent approach",
"authors": [
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Sivaji",
"middle": [],
"last": "Bandyopadhyay",
"suffix": ""
}
],
"year": 2010,
"venue": "International Journal of Electrical, Computer, and Systems Engineering",
"volume": "4",
"issue": "",
"pages": "155--170",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asif Ekbal and Sivaji Bandyopadhyay. 2010. Named entity recognition using support vector machine: A language independent approach. International Jour- nal of Electrical, Computer, and Systems Engineer- ing, 4(2):155-170.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Experiments with a new boosting algorithm",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Thirteenth International Conference on International Conference on Machine Learning, ICML'96",
"volume": "",
"issue": "",
"pages": "148--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Freund and Robert E. Schapire. 1996. Exper- iments with a new boosting algorithm. In Pro- ceedings of the Thirteenth International Conference on International Conference on Machine Learning, ICML'96, page 148-156, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introduction to Arabic natural language processing",
"authors": [
{
"first": "Nizar",
"middle": [
"Y"
],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "3",
"issue": "1",
"pages": "1--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Y Habash. 2010. Introduction to Arabic natural language processing. Synthesis Lectures on Human Language Technologies, 3(1):1-187.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "11",
"issue": "1",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10- 18.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Classification by pairwise coupling",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, (NIPS '97)",
"volume": "",
"issue": "",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Hastie and Robert Tibshirani. 1998. Classifi- cation by pairwise coupling. In Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10, (NIPS '97), page 507-513, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Support vector machines",
"authors": [
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Osuna",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Platt",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Scholkopf",
"suffix": ""
}
],
"year": 1998,
"venue": "IEEE Intelligent Systems and their applications",
"volume": "13",
"issue": "",
"pages": "18--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A. Hearst, Susan T Dumais, Edgar Osuna, John Platt, and Bernhard Scholkopf. 1998. Support vec- tor machines. IEEE Intelligent Systems and their ap- plications, 13(4):18-28.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The Semitic Languages",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Hetzron",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Hetzron. 1997. The Semitic Languages. Taylor & Francis.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Applied logistic regression",
"authors": [
{
"first": "David",
"middle": [
"W"
],
"last": "Hosmer",
"suffix": "Jr"
},
{
"first": "Stanley",
"middle": [],
"last": "Lemeshow",
"suffix": ""
},
{
"first": "Rodney",
"middle": [
"X"
],
"last": "Sturdivant",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "398",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David W Hosmer Jr, Stanley Lemeshow, and Rodney X Sturdivant. 2013. Applied logistic regression, vol- ume 398. John Wiley & Sons.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bidirectional lstm-crf models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.01991"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating and combining name entity recognition systems",
"authors": [
{
"first": "Ridong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Association for Computational Linguistics (ACL '16)",
"volume": "",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ridong Jiang, Rafael E Banchs, and Haizhou Li. 2016. Evaluating and combining name entity recognition systems. In Proceedings of the Sixth Named Entity Workshop, pages 21-27. Association for Computa- tional Linguistics (ACL '16).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Estimating continuous distributions in bayesian classifiers",
"authors": [
{
"first": "George",
"middle": [
"H"
],
"last": "John",
"suffix": ""
},
{
"first": "Pat",
"middle": [],
"last": "Langley",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, (UAI '95)",
"volume": "",
"issue": "",
"pages": "338--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George H. John and Pat Langley. 1995. Estimating con- tinuous distributions in bayesian classifiers. In Pro- ceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, (UAI '95), page 338-345, San Francisco, CA, USA. Morgan Kaufmann Pub- lishers Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "From dynamic classifier selection to dynamic ensemble selection",
"authors": [
{
"first": "Albert",
"middle": [
"H",
"R"
],
"last": "Ko",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Sabourin",
"suffix": ""
},
{
"first": "Alceu",
"middle": [
"Souza"
],
"last": "Britto",
"suffix": "Jr"
}
],
"year": 2008,
"venue": "Pattern recognition",
"volume": "41",
"issue": "5",
"pages": "1718--1731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert HR Ko, Robert Sabourin, and Alceu Souza Britto Jr. 2008. From dynamic classifier selection to dynamic ensemble selection. Pattern recognition, 41(5):1718-1731.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, page 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Logistic model trees",
"authors": [
{
"first": "Niels",
"middle": [],
"last": "Landwehr",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine learning",
"volume": "59",
"issue": "1-2",
"pages": "161--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niels Landwehr, Mark Hall, and Eibe Frank. 2005. Lo- gistic model trees. Machine learning, 59(1-2):161- 205.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey on deep learning for named entity recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jianglei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Chenliang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge and Data Engineering.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Gcdt: A global context enhanced deep transition architecture for sequence labeling",
"authors": [
{
"first": "Yijin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.02437"
]
},
"num": null,
"urls": [],
"raw_text": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. Gcdt: A global context enhanced deep transition architecture for se- quence labeling. arXiv preprint arXiv:1906.02437.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The stanford corenlp natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "McClosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguis- tics: system demonstrations, pages 55-60.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Named entity recognition approaches",
"authors": [
{
"first": "Alireza",
"middle": [],
"last": "Mansouri",
"suffix": ""
},
{
"first": "Lilly",
"middle": [],
"last": "Suriani Affendey",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Mamat",
"suffix": ""
}
],
"year": 2008,
"venue": "International Journal of Computer Science and Network Security IJCSNS",
"volume": "8",
"issue": "2",
"pages": "339--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alireza Mansouri, Lilly Suriani Affendey, and Ali Ma- mat. 2008. Named entity recognition approaches. International Journal of Computer Science and Net- work Security IJCSNS, 8(2):339-344.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Named entity recognition for Arabic using syntactic grammars",
"authors": [
{
"first": "Slim",
"middle": [],
"last": "Mesfar",
"suffix": ""
}
],
"year": 2007,
"venue": "International Conference on Application of Natural Language to Information Systems (NLDB '07)",
"volume": "",
"issue": "",
"pages": "305--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slim Mesfar. 2007. Named entity recognition for Arabic using syntactic grammars. In International Conference on Application of Natural Language to Information Systems (NLDB '07), pages 305-316. Springer.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Named entity recognition without gazetteers",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Mikheev",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Moens",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Grover",
"suffix": ""
}
],
"year": 1999,
"venue": "Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL '99)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Mikheev, Marc Moens, and Claire Grover. 1999. Named entity recognition without gazetteers. In Ninth Conference of the European Chapter of the Association for Computational Linguistics (EACL '99).",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A multilingual named entity corpus for Arabic",
"authors": [
{
"first": "Djamel",
"middle": [],
"last": "Mostefa",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "La\u00efb",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Chaudiron",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Choukri",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Chalendar",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Djamel Mostefa, Mariama La\u00efb, St\u00e9phane Chaudiron, Khalid Choukri, and G Chalendar. 2009. A multi- lingual named entity corpus for Arabic, English and French. MEDAR, 2009:2nd.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Online ensemble learning",
"authors": [
{
"first": "Nikunj",
"middle": [
"Chandrakant"
],
"last": "Oza",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikunj Chandrakant Oza and Stuart Russell. 2001. On- line ensemble learning. University of California, Berkeley.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Dissecting contextual word embeddings: Architecture and representation",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Wen-tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.08949"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. arXiv preprint arXiv:1808.08949.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Induction of decision trees. Machine learning",
"authors": [
{
"first": "J",
"middle": [
"Ross"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "1",
"issue": "",
"pages": "81--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ross Quinlan. 1986. Induction of decision trees. Ma- chine learning, 1(1):81-106.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "C4. 5: programs for machine learning",
"authors": [
{
"first": "J Ross",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Ross Quinlan. 2014. C4. 5: programs for machine learning. Elsevier.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Life-inet: A structured network-based knowledge exploration and analytics system for life sciences",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jiaming",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zeqiu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Fangbo",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Liem",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4010"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Ren, Jiaming Shen, Meng Qu, Xuan Wang, Ze- qiu Wu, Qi Zhu, Meng Jiang, Fangbo Tao, Saurabh Sinha, David Liem, et al. 2017. Life-inet: A struc- tured network-based knowledge exploration and an- alytics system for life sciences. In Proceedings of ACL 2017, System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning representations by backpropagating errors",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Rumelhart",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "323",
"issue": "6088",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1986. Learning representations by back- propagating errors. Nature, 323(6088):533-536.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Schema matching prediction with applications to data source discovery and dynamic ensembling",
"authors": [
{
"first": "Tomer",
"middle": [],
"last": "Sagi",
"suffix": ""
},
{
"first": "Avigdor",
"middle": [],
"last": "Gal",
"suffix": ""
}
],
"year": 2013,
"venue": "The VLDB Journal",
"volume": "22",
"issue": "5",
"pages": "689--710",
"other_ids": {
"DOI": [
"10.1007/s00778-013-0325-y"
]
},
"num": null,
"urls": [],
"raw_text": "Tomer Sagi and Avigdor Gal. 2013. Schema matching prediction with applications to data source discov- ery and dynamic ensembling. The VLDB Journal, 22(5):689-710.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Arabic named entity recognition using boosting method",
"authors": [
{
"first": "Mohamad",
"middle": [],
"last": "Bagher Sajadi",
"suffix": ""
},
{
"first": "Behrooz",
"middle": [],
"last": "Minaei",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 Artificial Intelligence and Signal Processing Conference (AISP '17)",
"volume": "",
"issue": "",
"pages": "281--288",
"other_ids": {
"DOI": [
"10.1109/AISP.2017.8324098"
]
},
"num": null,
"urls": [],
"raw_text": "Mohamad Bagher Sajadi and Behrooz Minaei. 2017. Arabic named entity recognition using boosting method. In 2017 Artificial Intelligence and Signal Processing Conference (AISP '17), pages 281-288, Shiraz, Iran. IEEE.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The strength of weak learnability",
"authors": [
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1990,
"venue": "Machine learning",
"volume": "5",
"issue": "2",
"pages": "197--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert E Schapire. 1990. The strength of weak learn- ability. Machine learning, 5(2):197-227.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "A survey of Arabic named entity recognition and classification",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "2",
"pages": "469--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khaled Shaalan. 2014. A survey of Arabic named en- tity recognition and classification. Computational Linguistics, 40(2):469-510.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Nera: Named entity recognition for Arabic",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
},
{
"first": "Hafsa",
"middle": [],
"last": "Raza",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "60",
"issue": "8",
"pages": "1652--1663",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khaled Shaalan and Hafsa Raza. 2009. Nera: Named entity recognition for Arabic. Journal of the Ameri- can Society for Information Science and Technology, 60(8):1652-1663.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Ensemble learning for named entity recognition (iswc '13). In International semantic web conference",
"authors": [
{
"first": "Ren\u00e9",
"middle": [],
"last": "Speck",
"suffix": ""
},
{
"first": "Axel-Cyrille Ngonga",
"middle": [],
"last": "Ngomo",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "519--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ren\u00e9 Speck and Axel-Cyrille Ngonga Ngomo. 2014. Ensemble learning for named entity recognition (iswc '13). In International semantic web confer- ence, pages 519-534. Springer.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Principles and procedures of statistics",
"authors": [
{
"first": "Robert George Douglas",
"middle": [],
"last": "Steel",
"suffix": ""
},
{
"first": "James",
"middle": [
"Hiram"
],
"last": "Torrie",
"suffix": ""
}
],
"year": 1960,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert George Douglas Steel, James Hiram Torrie, et al. 1960. Principles and procedures of statistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Correlation between a discrete and a continuous variable. point-biserial correlation",
"authors": [
{
"first": "Robert",
"middle": [
"F"
],
"last": "Tate",
"suffix": ""
}
],
"year": 1954,
"venue": "The Annals of mathematical statistics",
"volume": "25",
"issue": "3",
"pages": "603--607",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert F Tate. 1954. Correlation between a discrete and a continuous variable. point-biserial correlation. The Annals of mathematical statistics, 25(3):603- 607.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Predicting the quality of process model matching",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Weidlich",
"suffix": ""
},
{
"first": "Tomer",
"middle": [],
"last": "Sagi",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Leopold",
"suffix": ""
},
{
"first": "Avigdor",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Mendling",
"suffix": ""
}
],
"year": 2013,
"venue": "Business Process Management (BPM '13)",
"volume": "",
"issue": "",
"pages": "203--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Weidlich, Tomer Sagi, Henrik Leopold, Avig- dor Gal, and Jan Mendling. 2013. Predicting the quality of process model matching. In Business Process Management (BPM '13), pages 203-210. Springer.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "How transferable are features in deep neural networks? arXiv preprint",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Clune",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Hod",
"middle": [],
"last": "Lipson",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1411.1792"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? arXiv preprint arXiv:1411.1792.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Building a corpus-derived gazetteer for named entity recognition",
"authors": [
{
"first": "Norshuhani",
"middle": [],
"last": "Zamin",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Oxley",
"suffix": ""
}
],
"year": 2011,
"venue": "International Conference on Software Engineering and Computer Systems",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Norshuhani Zamin and Alan Oxley. 2011. Building a corpus-derived gazetteer for named entity recog- nition. In International Conference on Software Engineering and Computer Systems, pages 73-80. Springer.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Named entity recognition using an HMM-based chunk tagger",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2002,
"venue": "proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "GuoDong Zhou and Jian Su. 2002. Named entity recognition using an HMM-based chunk tagger. In proceedings of the 40th Annual Meeting on Associa- tion for Computational Linguistics, pages 473-480. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "BD-sentence example."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Workflow of the proposed approach."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Comparative results of NER trained over modern (ANER) and historic (NoorCorp) data."
},
"TABREF0": {
"html": null,
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"num": null,
"text": "Dataset Token Counts and Class Distribution.",
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"num": null,
"text": "Pattern-NER rules.",
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"text": "DEM is",
"content": "<table><tr><td/><td>Pearson</td><td colspan=\"2\">Biserial</td></tr><tr><td colspan=\"3\">Predictor Precision Recall R pb</td><td>p value</td></tr><tr><td>BD</td><td>-0.181</td><td>-0.233</td></tr><tr><td>DCS</td><td>0.182</td><td>0.255</td></tr><tr><td>SNN</td><td>0.121</td><td>0.147</td></tr><tr><td>SNP</td><td>0.096</td><td>0.120</td></tr><tr><td>SNO</td><td>0.110</td><td>0.120</td></tr><tr><td>SNL</td><td>0.040</td><td>0.070</td></tr><tr><td>SLD</td><td>0.020</td><td>-0.014</td></tr><tr><td>DCT</td><td/><td colspan=\"2\">0.42 &lt; 0.001</td></tr><tr><td>MCT</td><td/><td colspan=\"2\">0.40 &lt; 0.001</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Correlation results for our proposed predictors. Pearson Correlation with Precision and Recall, and Biserial Correlation with true/false token prediction.",
"content": "<table><tr><td>Type</td><td colspan=\"2\">Precision Recall F1</td></tr><tr><td>CRF</td><td>90.0%</td><td>68.0% 77.4%</td></tr><tr><td>LSTMCRF</td><td>81.3%</td><td>64.6% 73.0%</td></tr><tr><td>SVM</td><td>86.3%</td><td>66.0% 74.8%</td></tr><tr><td>LR</td><td>88.3%</td><td>50.0% 63.8%</td></tr><tr><td>DT</td><td>85.0%</td><td>65.3% 75.2%</td></tr><tr><td>Pattern</td><td>84.0%</td><td>30.0% 42.0%</td></tr><tr><td>PolyGlot</td><td>59.0%</td><td>39.0% 47.0%</td></tr><tr><td>DEM</td><td colspan=\"2\">90.8% 71.0% 79.8%</td></tr><tr><td>Oracle</td><td colspan=\"2\">94.0% 75.0% 83.0%</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"html": null,
"num": null,
"text": "Performance of DEM vs. single NER Tools.",
"content": "<table/>",
"type_str": "table"
},
"TABREF9": {
"html": null,
"num": null,
"text": "Performance of DEM vs. Static Models.",
"content": "<table/>",
"type_str": "table"
},
"TABREF11": {
"html": null,
"num": null,
"text": "Output Examples for CRF-NER when trained over modern and historical text.",
"content": "<table/>",
"type_str": "table"
}
}
}
}