{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:53:51.788841Z"
},
"title": "Serbian NER&Beyond: The Archaic and the Modern Intertwinned",
"authors": [
{
"first": "Branislava",
"middle": [],
"last": "\u0160andrih",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Belgrade",
"location": {}
},
"email": "[email protected]"
},
{
"first": "",
"middle": [],
"last": "Todorovi\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Belgrade",
"location": {}
},
"email": ""
},
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Belgrade",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Belgrade",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Milica",
"middle": [
"Ikoni\u0107"
],
"last": "Ne\u0161i\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Belgrade",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this work, we present a Serbian literary corpus that is being developed under the umbrella of the \"Distant Reading for European Literary History\" COST Action CA16204. Using this corpus of novels written more than a century ago, we have developed and made publicly available a Named Entity Recognizer (NER) trained to recognize 7 different named entity types with a Convolutional Neural Network (CNN) architecture, achieving an F1 score of \u224891% on the test dataset. This model has been further assessed on a separate evaluation dataset. We wrap up with a comparison of the developed model with the existing one, followed by a discussion of the pros and cons of both models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this work, we present a Serbian literary corpus that is being developed under the umbrella of the \"Distant Reading for European Literary History\" COST Action CA16204. Using this corpus of novels written more than a century ago, we have developed and made publicly available a Named Entity Recognizer (NER) trained to recognize 7 different named entity types with a Convolutional Neural Network (CNN) architecture, achieving an F1 score of \u224891% on the test dataset. This model has been further assessed on a separate evaluation dataset. We wrap up with a comparison of the developed model with the existing one, followed by a discussion of the pros and cons of both models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The \"Distant Reading for European Literary History\" 1 (COST Action CA16204) started in 2017 with the purpose of using computational methods to analyse large collections of literary texts (Frontini et al., 2020) . The main goal of this ongoing action is to compile a multilingual open-source collection, named the European Literary Text Collection (ELTeC), containing linguistically annotated sub-collections of 100 novels per language written more than 100 years ago.",
"cite_spans": [
{
"start": 191,
"end": 213,
"text": "Frontini et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the collection of Serbian texts in this corpus, named SrpELTeC. Alongside it, we describe our efforts in developing its Named Entity (NE) layer, previously defined as one of the action's main deliverables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this purpose, we adjusted and used the existing rule-based NE recognizer for Serbian, dubbed SrpNER, which we describe in Section 2 together with some approaches to NE recognition in literary texts. This SrpNER model was applied to the raw version of the selected texts from the SrpELTeC collection, presented in Section 3. Based on specifically tailored guidelines, different evaluators performed careful checks and corrections, yielding a gold standard (SrpELTeC-gold). This enabled us to train a CNN-based NE recognizer, named SrpCNNER, presented in Section 4. Having the gold dataset, prepared as described in Subsection 4.1, we trained (Subsection 4.2) and evaluated the model in two different settings: first, we discuss our model's performance on the SrpELTeC-gold test subset, as shown in Subsection 4.3. Afterwards, we carry out a detailed evaluation on a collection of novels that were not present in the gold standard, named SrpELTeC-eval, with the findings and a thorough discussion given in Section 5. Finally, conclusions and plans for future work are stated in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The existence of large-scale lexical resources for Serbian, e-dictionaries in particular (Krstev, 2008) , coupled with local grammars in the form of finite-state transducers (Vitas and Krstev, 2012) , enabled the development of a comprehensive rule-based NER system, SrpNER. This system, presented by Krstev et al. (2014) , targeted 11 classes of NEs: dates and time (moments and periods), money and measurement expressions, geopolitical names (countries, settlements, oronyms and hydronyms), and personal names (one or more last names with or without first names and nicknames). The system was developed to recognize NEs in newspapers and similar texts. It was manually evaluated on a sample of unseen newspaper texts. The overall F1 score of the model was \u2248 96%. To the best of our knowledge, there have so far been no attempts to produce a NER system for Serbian literary texts.",
"cite_spans": [
{
"start": 89,
"end": 103,
"text": "(Krstev, 2008)",
"ref_id": "BIBREF5"
},
{
"start": 174,
"end": 198,
"text": "(Vitas and Krstev, 2012)",
"ref_id": "BIBREF17"
},
{
"start": 304,
"end": 324,
"text": "Krstev et al. (2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The enhanced version of SrpNER was later utilized by \u0160andrih et al. (2019) for the preparation of a gold standard annotated with personal names, which was used for building training sets for 4 different levels of annotation, on which two ML-based NE recognizers were trained and evaluated (spaCy and Stanford). As support for the developed NER models, \u0160andrih et al. (2019) combined several existing tools and various newly developed ones into a web platform, NER&Beyond. 2 Although NER systems in general have been developed mostly for newspaper and similar texts, there have been some endeavours to produce functional systems for literary texts as well. The enrichment of French Renaissance texts with proper names (Maurel et al., 2014) faced two challenges: text diversity due to various spellings of words, and the need to deal with numerous XML-TEI tags used to preserve the format of the original editions. The authors' solution was based on cascades of finite-state automata and on both general dictionaries and dictionaries built specifically for the project. The evaluation showed that the slot error rate of name tagging was 6.1%.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "\u0160andrih et al. (2019)",
"ref_id": null
},
{
"start": 354,
"end": 375,
"text": "\u0160andrih et al. (2019)",
"ref_id": null
},
{
"start": 480,
"end": 481,
"text": "2",
"ref_id": null
},
{
"start": 711,
"end": 732,
"text": "(Maurel et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A dataset of literary entities comprising 210,532 tokens evenly drawn from 100 different English literary texts, annotated with ACE entity categories (person, location, geo-political entity, facility, organization, and vehicle), 3 was published in (Bamman et al., 2019) . The authors' main motivation was to assess NER models' performance on different types of texts. Their conclusion was that recognition improved for almost all entity types when literary texts were used for both training and evaluation (on average P = 75.1%, R = 62.6% and F1 = 68.3%), whilst for training on general texts, such as news data, and testing on literary texts the results were much poorer (on average P = 57.8%, R = 37.7% and F1 = 45.7%). The SHINRA2020-ML shared task (Sekine et al., 2020) targeted the categorization of Wikipedia entities using the Extended Named Entity (ENE) hierarchy in 30 languages (Serbian was not one of them). ENE included about 220 fine-grained categories of NEs in a hierarchy of up to four layers. Some traditional NE types such as location were specified as either geopolitical location (\"city\", \"province\", \"country\", etc.) or geological region (\"mountain\", \"river\", \"lake\", etc.). ENE also included some new NE types like \"products\", \"event\", \"position\", etc. Dekker et al. (2019) experimented with different off-the-shelf NER tools for the extraction of social network graphs from classic and modern English fiction novels. The authors wanted to find out to what extent these tools are suitable for identifying fictional characters in novels, and what differences and similarities can be discovered between the social networks extracted for different novels.",
"cite_spans": [
{
"start": 246,
"end": 267,
"text": "(Bamman et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 751,
"end": 772,
"text": "(Sekine et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 1274,
"end": 1294,
"text": "Dekker et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The Distant Reading Training School for Named Entity Recognition and Geo-Tagging for Literary Analysis, organized within the COST Action 16204, 4 covered NER approaches in general, annotation campaigns, practical work with NER tools, annotating NEs in TEI, analyzing NE annotation for literary characters and place names, and NER data analysis. Different types of NER systems were tested for several languages, some based on symbolic methods, relying on rules developed by experts and on dictionaries (gazetteers), others using statistical, data-driven approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The NE layer of the ELTeC corpus has presently been produced for three languages: Hungarian, Portuguese and Slovene. The NER annotation of the Portuguese sub-collection of the ELTeC corpus has also been reported on. The authors used the PALAVRAS-NER parser, a Constraint Grammar (CG) system in which NER is an integrated task of grammatical tagging, implemented with a basic tagset of 6 NE categories (person, organization, place, event, semantic products and objects) with about 20 subcategories at three levels, disambiguated by CG rules: known lexical entries and gazetteer lists, pattern-based name type prediction, and context-based name type inference for unknown words. This system was applied to eight novels that were fully revised by humans. Evaluation results varied for precision from 64.6% to 80.8%, and for recall from 64.3% to 82.0%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "At the mentioned Distant Reading training school it was concluded that the spaCy module 5 for Python had been used for training NER models for many of the involved languages, as it already offers tagsets that can be mapped to the ELTeC annotation scheme, elaborated later in Section 3. Partalidou et al. (2019) developed a POS-tagger and a NER for Greek using spaCy, based on newspaper articles and a Wikipedia dataset, able to recognize the following entity types: location, organization, person and facility. Jabbari et al. (2020) created a corpus consisting of news articles in French, which served as a dataset for training and evaluating NER and relation extraction algorithms using spaCy. Modrzejewski et al. (2020) incorporated a NER model trained in spaCy into an English/German Machine Translation system, with the aim of improving NE translation.",
"cite_spans": [
{
"start": 677,
"end": 703,
"text": "Modrzejewski et al. (2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Moreover, Jiang et al. (2016) conducted a comparative evaluation of different publicly available NER tools. Based on different criteria, the authors concluded that spaCy was among the best performing across all tested datasets. Having all this in mind, we decided on spaCy as the framework for developing a Serbian NER model on a collection of old literary texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As described earlier in Section 1, the focus of the COST Action CA16204 is to compile the ELTeC corpus, containing collections of old European novels published between 1840 and 1920 in various languages. In order to make these sub-collections good representatives of their corresponding languages, the novels were selected to evenly represent: a) novels of various sizes: short, medium and long; b) the four twenty-year time periods within the examined time span; c) canonical novels as well as those not known to a wider audience or completely forgotten, as judged by the number of reprints; and d) female and male authors (Frontini et al., 2020) .",
"cite_spans": [
{
"start": 615,
"end": 638,
"text": "(Frontini et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Serbian Collection in the ELTeC",
"sec_num": "3"
},
{
"text": "The last version of the ELTeC (v. 1.1.0) was released in April 2021. 6 It contained 14 language sub-collections, each with at least 50 novels, while 8 collections reached the targeted 100 novels per language.",
"cite_spans": [
{
"start": 69,
"end": 70,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Serbian Collection in the ELTeC",
"sec_num": "3"
},
{
"text": "The SrpELTeC corpus 7 in the latest ELTeC release has 90 novels. The work on this collection is still in progress, with the aim of obtaining the complete collection by the end of the project. Contrary to a number of other European languages involved in this action, the Serbian corpus is being produced from scratch, because the vast majority of novels from the selected time period had not been digitized before, had not been digitized in the proper manner, or were not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serbian Collection in the ELTeC",
"sec_num": "3"
},
{
"text": "This preparation procedure involved several steps: selection of novels, retrieval of hard copies, scanning, OCR, automatic correction of OCR errors (for which a specialized tool based on the Serbian morphological dictionaries was produced), correction of the remaining errors by a number of volunteer readers, and production of metadata.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serbian Collection in the ELTeC",
"sec_num": "3"
},
{
"text": "One of the important aspects of this ELTeC collection is to feature annotations of certain named entities. At this moment, annotation of named entities is carried out for nine languages, including Serbian. According to the guidelines, the common NER tagset includes the following 7 categories: demonyms (DEMO), professions and titles (ROLE), works of art (WORK), person names (PERS), places (LOC), events (EVENT) and organizations (ORG). 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Serbian Collection in the ELTeC",
"sec_num": "3"
},
{
"text": "In this section we first explain how we turned the SrpELTeC corpus into a dataset for NER. Afterwards, we describe the training of the NER model SrpCNNER, followed by a detailed evaluation. Web users can navigate to http://ner.jerteh.rs/ in order to apply the SrpCNNER model directly to input text. The model can also be applied to a custom-sized collection of text files using the previously mentioned NER&Beyond web platform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SrpCNNER Model for Serbian",
"sec_num": "4"
},
{
"text": "The SrpNER system for Serbian, introduced in Section 2, was used in the first stage of the gold standard preparation (dubbed SrpELTeC-gold) in order to automatically annotate the SrpELTeC collection. The tagset used by SrpNER differed from the simplified tagset used in the ELTeC project - the tags are more refined, e.g. toponyms are classified as oronyms, hydronyms, settlements, etc., and nesting of tags is allowed. Thus, the tags produced by SrpNER had to be mapped to ELTeC tags as illustrated in Figure 1 : Before text annotation, we took advantage of the adjustability of rule-based NER systems and adapted SrpNER to these specific texts, which differ significantly from the newspaper texts for which SrpNER was primarily developed, in order to improve its performance and facilitate the work of evaluators. Some modifications of the rules and lexicons were done for the whole collection (e.g. Danas 'today' cannot be the name of an organization, since this publishing house was established only some 20 years ago), while others were novel-specific (e.g. Una can be a first name or the name of a river - we retained only the possibility appropriate to the particular novel).",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 504,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Gold Standard: SrpELTeC-gold",
"sec_num": "4.1"
},
{
"text": "The EVENT named entity is somewhat special: SrpNER does not recognize this entity, so the evaluators were asked to identify and annotate such entities where they occur in the text. SrpNER does not recognize the WORK entity either, but these annotations were in many cases added by volunteer readers during text correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard: SrpELTeC-gold",
"sec_num": "4.1"
},
{
"text": "Afterwards, students were given different novel chapters along with the annotation guidelines presented briefly in Table 1 . Following these instructions and under constant supervision of their professors, students manually corrected the automatically annotated chapters.",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Gold Standard: SrpELTeC-gold",
"sec_num": "4.1"
},
{
"text": "The evaluators were divided into two groups: the first group performed corrections using the BRAT annotation tool, 9 while the second group used INCEpTION. 10 We wanted to receive user feedback on both platforms, in order to make the annotation process as comfortable and efficient as possible in the future, but also to give the annotators a choice. The fundamental difference was the input format these platforms require: the BRAT tool uses the standoff format, whilst INCEpTION relies on the CoNLL-2002 verticalized format. 11 In order to convert from one format into the other, we used the NER&Beyond web application. Table 2 displays the distribution of different entity types over the SrpELTeC-gold novels. The first four digits of a text identifier represent the year of the first publication of the novel. For some novels, NER was not performed on the whole text, but rather on randomly selected chapters. These annotated samples were also included in the gold standard. The cumulative values of entities over all samples are indicated in the first row (ID \"sample\"). Column tok indicates a novel's size in terms of tokens.",
"cite_spans": [
{
"start": 160,
"end": 162,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Gold Standard: SrpELTeC-gold",
"sec_num": "4.1"
},
{
"text": "We trained our SrpCNNER model on the SrpELTeC-gold corpus using the spaCy Python module, version 3.0. In order to prepare the dataset for training, we first segmented the texts into sentences, ending up with 43,129 sentences in total, including sentences that did not contain named entities. Afterwards, we randomly shuffled and split these sentences into training, test and development sets with a ratio of 8:1:1, i.e. 34,503 sentences in the training set, and the same number of sentences, 4,313, in each of the test and development sets. These sentences were prepared as Python list-objects containing tuples as elements. An example of such a tuple is the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "\" Had\u017ei-\u0110era je za to vreme u\u0161ao u sobu agama , da im nazove dobro jutro, a manastirski sluga po\u010de prislu\u017eivati rakiju i kafu.\", 12 'entities': [(0, 10, 'PERS'), (39, 44, 'ROLE'), (86, 91, 'ROLE')] spaCy v3.0 enables the specification of a custom neural network architecture within a simple text file. Using the quick-start widget, 13 a user can easily set up the default configuration. In our case, the model's language was Serbian, containing the ner component only, trained on CPU. We made the following adjustments to the default configuration (referring to the corresponding file blocks):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "[components.tok2vec.model.encode] changed the size of the token-to-vector layer from 96 to 300, which is the maximum recommended value (the width parameter);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "[components.ner.model] changed the width of the hidden layer from 64 to 300 (the hidden_width parameter);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "[components.ner.model.tok2vec] set the architecture (@architectures) to HashEmbedCNN 14 (https://spacy.io/api/architectures#HashEmbedCNN), with the input and output width equal to 300 (width), 8 convolutional layers (depth), 10,000 rows in the hash embedding tables (embed_size), the recommended 1 token on either side to concatenate during the convolutions (window_size), and no pretrained static vectors (pretrained_vectors = null).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Model training ended after 11 epochs (the number of epochs is determined automatically), with precision, recall and F1 score of 93.33%, 90.14% and 91.71%, respectively, on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "Afterwards, we examined our model's performance on the test set. We ran the previously trained model on raw, non-annotated sentences from the SrpELTeC. After comparing the obtained annotations with the ones given in the test subset of SrpELTeC-gold, we obtained the precision (P ), recall (R) and F1 scores displayed in Table 3 . The normalized confusion matrix is given in Figure 2 ('O' represents tokens that are not NEs). One can observe that WORK and EVENT were frequently missed or confused with PERS.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 379,
"end": 387,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "Despite the encouraging results obtained on the SrpELTeC-gold, shown in Subsection 4.3, we wanted to further assess our model's performance. For this purpose, we prepared an independent evaluation set, dubbed SrpELTeC-eval, containing corrected annotated chapters from three novels that were not included in the training procedure. Table 4 displays the entity distribution over SrpELTeC-eval (columns P, R, L, D, O, W and tok): 19070: 44, 55, 23, 23, 3, 0, 2,027 tokens; 19180: 18, 13, 2, 5, 0, 5, 3,928 tokens; 19121: 33, 18, 14, 2, 0, 0, 3,045 tokens. Named entities are represented by their first letter (e.g. P represents PERS). It should be noted that the EVENT type did not occur in this dataset. We applied the same evaluation procedure for both recognizers. After running them on SrpELTeC-eval, we took the strictest approach and differentiated between the following three situations:",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 603,
"end": 683,
"text": "tok 19070 44 55 23 23 3 0 2,027 19180 18 13 2 5 0 5 3,928 19121 33 18 14",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Separate Evaluation Set",
"sec_num": "5"
},
{
"text": "[TP] an entity is recognized exactly as it should be, compared to the gold standard (the text and the named entity type match) - true positives;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID P R L D O W",
"sec_num": null
},
{
"text": "[FP] there are three cases here: 1) an entity is recognized, but not with the correct type (e.g. PERS mistaken for ROLE); 2) an entity is recognized with the correct type but an incorrect scope (e.g. only a first name is recognized as PERS, although a full name is given); or 3) the model annotated something that is not present in the gold standard - false positives;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID P R L D O W",
"sec_num": null
},
{
"text": "[FN] an entity present in the gold standard was not recognized -false negatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID P R L D O W",
"sec_num": null
},
{
"text": "In the subsections that follow, we analyze the performance of our newly trained model SrpCNNER and of the adjusted SrpNER on the SrpELTeC-eval corpus. Finally, we discuss their strengths and weaknesses and make certain statements about their applicability in different contexts and situations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ID P R L D O W",
"sec_num": null
},
{
"text": "The overall results for the SrpCNNER are displayed in the upper part of Table 5 . As previously explained, a specific FP situation occurs when something is recognized, but not with the correct entity type. Such cases are indicated by the number in parentheses in the FP column (therefore, the TP and FN values and the number given in parentheses in the FP column sum up to the total number of entities given in Table 4 ). Values of precision (P ), recall (R) and F1 scores over each entity are shown in the upper part of Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 438,
"end": 445,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 549,
"end": 557,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "SrpCNNER vs. SrpELTeC-eval",
"sec_num": "5.1"
},
{
"text": "The overall results for the SrpNER are displayed in the lower part of Table 5 . Values of precision (P ), recall (R) and F1 scores over each entity are shown in the lower part of Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 180,
"end": 188,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "SrpNER vs. SrpELTeC-eval",
"sec_num": "5.2"
},
{
"text": "From the obtained results it is obvious that SrpNER was not nearly as successful as when applied to newspaper texts. This could well be expected, since each novel has its own specifics, and one cannot say that novels in general share common language features, as newspapers do. Also, one can observe that the results are very different for each of the three samples; however, we cannot draw firm conclusions, since the samples used were rather small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SrpNER vs. SrpELTeC-eval",
"sec_num": "5.2"
},
{
"text": "Based on the results shown in Figure 3 (upper part) and Table 5 , it becomes obvious that SrpCNNER does not perform so well on unseen texts. In order to understand the reasons for that, we observed each and single case in isolation, which brought us to certain findings.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 56,
"end": 63,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "SrpCNNER performed rather well in recognizing personal names (e.g. Ana, Nikola, Gavra \u0110akovi\u0107, Ismail), roles and titles (e.g. car 'tsar', sultan, princeza 'princess', sve\u0161tenik 'priest'), locations (e.g. Beograd, Pariz, Ni\u0161), and demonyms (e.g. \u0160vaba 'German' (pejorative), ruskom 'Russian', francuskom 'French'). However, the number of FP cases was intriguing, due to the ambiguity of use. For example, the model recognized all occurrences of the word otac 'father' as a ROLE, although it can represent both a male parent (which according to the guidelines should not be annotated) and a priest (which should be annotated). Similar is the case with \u010dika 'uncle', which in Serbian, when used before a personal name, has the meaning of mister/sir (familiarly). Both words are used rather frequently, and out of 33 false positives for the novel 19180, 13 were occurrences of exactly these two words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "The novel 19070 revealed some new weak points. For example, occurrences such as Fati-Sultan, Ismail-beg and Ahmed-hafuz are specific to this novel; they represent a combination of PERS-ROLE entities, a construction that is not usual in Serbian, where the ROLE PERS order is preferred. SrpCNNER recognized these two-part entities as a single PERS, WORK or LOC entity (among the 43 false positives for 19070, 7 were these names in various inflected forms), or did not recognize them at all (14 times).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "We also noticed that some false positives were due to specific characteristics of the texts. Namely, the orthography in the old novels was not stable, leading to occurrences that are incorrect according to contemporary usage; for instance, the word gospode 'god' was, according to the evaluator's decision, considered an FP because it was written with a lower-case g, while the same word written with an upper-case G, Gospode 'God', was found among the true positives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "It should be noted that in literary texts it is not always easy to decide what the right type of an NE is. For instance, in a sentence from 19180: Sa Tolstojem sam se pomirila i obo\u017eavam ga za Anu Karenjinu 'I reconciled with Tolstoy and I adore him for Anna Karenina', Ana Karenjina can refer to the novel (WORK) or to its main character (PERS), and it is open to interpretation. Similarly, the names of saints (PERS) were sometimes difficult to distinguish from the festivities that celebrate them (EVENT). One such example from 18950 is: Mi slavimo Svetog Nikolu, ovog letnjeg. 'We celebrate Saint Nicolas, the one that comes in summer.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "Finally, we have noticed that our gold standard has flaws, introduced by evaluators, especially when facing some of the tricky cases mentioned before. It would certainly have been better if we could have engaged two evaluators for each text, but our human resources were limited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "The overall conclusion is that SrpCNNER performs satisfactorily on similar texts, which can be seen from the model's performance on the test set displayed in Table 3 . Since this collection of novels contains very diverse texts, both lexically and syntactically, SrpCNNER did not generalize as well to unseen texts.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "We presented the corpus of old Serbian novels, which served as a basis for training a CNN-based NER model SrpCNNER using the spaCy module's framework for Python. After comparing this newly developed model for Serbian with the existing rule-based SrpNER, we came to the conclusion that the previously developed one performs better on this type of texts, due to its adaptability. However, it is not easy to set it up and use it, while the model trained in spaCy can be easily and efficiently applied to the large text collections, and there is still a lot of room for improvement. First of all we need to remove observed flaws from SrpELTeC-gold. Moreover, in the future we intend to use the pre-trained word embedding vectors instead of the default tok2vec layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The integration of POS-tagging and lemmatization with NER into TEI ELTeC level 2 schema 15 is an ongoing activity, where a pipeline starts with SrpNER annotation, followed by POS-tagging and lemmatization by a Tree-Tagger (Schmid, 1999; . As a result, first 16 novels from SrpELTeC collection were annotated with POS, lemmas, and NE in a format agreed by the COST action.",
"cite_spans": [
{
"start": 222,
"end": 236,
"text": "(Schmid, 1999;",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Distant Reading, https://www.distant-reading.net",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "NER&Beyond, http://nerbeyond.jerteh.rs/ 3 ACE (Automatic Content Extraction) 2005 Multilingual Training Corpus, https://catalog.ldc.upenn. edu/LDC2006T06",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Materials for the NER Training School, https://github.com/distantreading/WG2/tree/ master/NER_TS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "spaCy, https://spacy.io/ 6 ELTeC (Distant Reading for European Literary Hi-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "story), https://zenodo.org/communities/eltec 7 SrpELTeC, https://distantreading.github.io/ELTeC/srp/ index.html 8 ELTeC Collections with NE-annotations, http:// brat.jerteh.rs/index.xhtml#/eltec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "BRAT, https://brat.nlplab.org 10 INCEpTION annotation tool, https://inception-project.github.io/11 Among other CoNLL and XML variants that this tool supports.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Translates as: In the meantime, Haji-\u0110era entered the room to wish agas good morning, when the monastery servant started offering coffee and brandy.13 Quick-start spaCy3 widget, https://spacy.io/usage/training#quickstart",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Encoding Guidelines for the ELTeC: level 2, https://distantreading.github.io/Schema/ eltec--2.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was done in the scope of the COST action CA16204 \"Distant Reading for European Literary History\". We thank the students of the Department of Library and Information Sciences, Faculty of Philology, master students of Social Sciences and Computing at Multidisciplinary Graduate Studies and PhD students of Intelligent systems program (University of Belgrade) for their help in evaluating the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Annotated Dataset of Literary Entities",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Sejal",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2138--2144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Sejal Popat, and Sheng Shen. 2019. An Annotated Dataset of Literary En- tities. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Langua- ge Technologies, volume 1, pages 2138-2144.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating Named Entity Recognition Tools for Extracting Social Networks from Novels",
"authors": [
{
"first": "Niels",
"middle": [],
"last": "Dekker",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Marieke",
"middle": [],
"last": "Van Erp",
"suffix": ""
}
],
"year": 2019,
"venue": "PeerJ Computer Science",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niels Dekker, Tobias Kuhn, and Marieke van Erp. 2019. Evaluating Named Entity Recognition To- ols for Extracting Social Networks from Novels. PeerJ Computer Science, 5:e189.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Named Entity Recognition for Distant Reading in ELTeC",
"authors": [
{
"first": "Francesca",
"middle": [],
"last": "Frontini",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Brando",
"suffix": ""
},
{
"first": "Joanna",
"middle": [],
"last": "Byszuk",
"suffix": ""
},
{
"first": "Ioana",
"middle": [],
"last": "Galleron",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "CLARIN Annual Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesca Frontini, Carmen Brando, Joanna Bys- zuk, Ioana Galleron, Diana Santos, and Ranka Stankovi\u0107. 2020. Named Entity Recognition for Distant Reading in ELTeC. In CLARIN Annual Conference 2020.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A French Corpus and Annotation Schema for Named Entity Recognition and Relation Extraction of Financial News",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Jabbari",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Sauvage",
"suffix": ""
},
{
"first": "Hamada",
"middle": [],
"last": "Zeine",
"suffix": ""
},
{
"first": "Hamza",
"middle": [],
"last": "Chergui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12 th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2293--2299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Jabbari, Olivier Sauvage, Hamada Zeine, and Hamza Chergui. 2020. A French Corpus and An- notation Schema for Named Entity Recognition and Relation Extraction of Financial News. In Proceedings of the 12 th Language Resources and Evaluation Conference, pages 2293-2299, Marse- ille, France. European Language Resources As- sociation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Evaluating and Combining Name Entity Recognition Systems",
"authors": [
{
"first": "Ridong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 6 th Named Entity Workshop",
"volume": "",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ridong Jiang, Rafael E Banchs, and Haizhou Li. 2016. Evaluating and Combining Name Entity Recognition Systems. In Proceedings of the 6 th Named Entity Workshop, pages 21-27.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automata, Texts and Electronic Dictionaries. Faculty of",
"authors": [
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cvetana Krstev. 2008. Processing of Serbian. Au- tomata, Texts and Electronic Dictionaries. Fa- culty of Philology of the University of Belgrade.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Analysis of the first Serbian Literature Corpus of the Late 19 th and Early 20 th century with the TXM platform",
"authors": [
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "Jelena",
"middle": [],
"last": "Ja\u0107imovi\u0107",
"suffix": ""
},
{
"first": "Branislava",
"middle": [],
"last": "\u0160andrih",
"suffix": ""
},
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "DH_BUDAPEST_2019",
"volume": "",
"issue": "",
"pages": "36--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cvetana Krstev, Jelena Ja\u0107imovi\u0107, Branislava \u0160andrih, and Ranka Stankovi\u0107. 2019. Ana- lysis of the first Serbian Literature Cor- pus of the Late 19 th and Early 20 th century with the TXM platform. In DH_BUDAPEST_2019, pages 36-37. Centre for Digital Humanities -E\u00f6tv\u00f6s Lor\u00e1nd Univer- sity.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A System for Named Entity Recognition Based on Local Grammars",
"authors": [
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Obradovi\u0107",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Utvi\u0107",
"suffix": ""
},
{
"first": "Du\u0161ko",
"middle": [],
"last": "Vitas",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Logic and Computation",
"volume": "24",
"issue": "2",
"pages": "473--489",
"other_ids": {
"DOI": [
"10.1093/logcom/exs079"
]
},
"num": null,
"urls": [],
"raw_text": "Cvetana Krstev, Ivan Obradovi\u0107, Milo\u0161 Utvi\u0107, and Du\u0161ko Vitas. 2014. A System for Named Entity Recognition Based on Local Grammars. Journal of Logic and Computation, 24(2):473-489.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Old or New, we Repair, Adjust and Alter (Texts)",
"authors": [
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Infotheca -Journal for Digital Humanities",
"volume": "19",
"issue": "2",
"pages": "61--80",
"other_ids": {
"DOI": [
"10.18485/infotheca.2019.19.2.3"
]
},
"num": null,
"urls": [],
"raw_text": "Cvetana Krstev and Ranka Stankovi\u0107. 2020. Old or New, we Repair, Adjust and Alter (Te- xts). Infotheca -Journal for Digital Humanities, 19(2):61-80.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Enrichment of Renaissance Texts with Proper Names",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Maurel",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Friburger",
"suffix": ""
},
{
"first": "Iris",
"middle": [],
"last": "Eshkol-Taravella",
"suffix": ""
}
],
"year": 2014,
"venue": "INFOtheca: Journal of Information and Library Science",
"volume": "15",
"issue": "1",
"pages": "15--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Maurel, Nathalie Friburger, and Iris Eshkol- Taravella. 2014. Enrichment of Renaissance Te- xts with Proper Names. INFOtheca: Journal of Information and Library Science, 15(1):15-27.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Incorporating External Annotation to improve Named Entity Translation in NMT",
"authors": [
{
"first": "Maciej",
"middle": [],
"last": "Modrzejewski",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Exel",
"suffix": ""
},
{
"first": "Bianka",
"middle": [],
"last": "Buschbeck",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22 nd Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "45--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maciej Modrzejewski, Miriam Exel, Bianka Busch- beck, Thanh-Le Ha, and Alexander Waibel. 2020. Incorporating External Annotation to im- prove Named Entity Translation in NMT. In Proceedings of the 22 nd Annual Conference of the European Association for Machine Transla- tion, pages 45-51, Lisboa, Portugal. European Association for Machine Translation.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Design and Implementation of an Open Source Greek POS-Tagger and Entity Recognizer Using spaCy",
"authors": [
{
"first": "Eleni",
"middle": [],
"last": "Partalidou",
"suffix": ""
},
{
"first": "Eleftherios",
"middle": [],
"last": "Spyromitros-Xioufis",
"suffix": ""
},
{
"first": "Stavros",
"middle": [],
"last": "Doropoulos",
"suffix": ""
},
{
"first": "Stavros",
"middle": [],
"last": "Vologiannidis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Diamantaras",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence, WI '19",
"volume": "",
"issue": "",
"pages": "337--341",
"other_ids": {
"DOI": [
"10.1145/3350546.3352543"
]
},
"num": null,
"urls": [],
"raw_text": "Eleni Partalidou, Eleftherios Spyromitros-Xioufis, Stavros Doropoulos, Stavros Vologiannidis, and Konstantinos Diamantaras. 2019. Design and Implementation of an Open Source Greek POS- Tagger and Entity Recognizer Using spaCy. In IEEE/WIC/ACM International Conference on Web Intelligence, WI '19, page 337-341, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Avaliando Entidades Mencionadas na Cole\u00e7\u00e3o ELTeC-por",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Eckhard",
"middle": [],
"last": "Bick",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Wlodek",
"suffix": ""
}
],
"year": 2020,
"venue": "Linguam\u00e1tica",
"volume": "12",
"issue": "2",
"pages": "29--49",
"other_ids": {
"DOI": [
"10.21814/lm.12.2.336"
]
},
"num": null,
"urls": [],
"raw_text": "Diana Santos, Eckhard Bick, and Marcin Wlodek. 2020. Avaliando Entidades Mencionadas na Co- le\u00e7\u00e3o ELTeC-por. Linguam\u00e1tica, 12(2):29-49.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improvements in Part-of-Speech Tagging with an Application to German",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
}
],
"year": 1999,
"venue": "Natural language processing using very large corpora",
"volume": "",
"issue": "",
"pages": "13--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Schmid. 1999. Improvements in Part-of- Speech Tagging with an Application to German. In Natural language processing using very large corpora, pages 13-25. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of SHINRA2020-ML Task",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Masako",
"middle": [],
"last": "Nomoto",
"suffix": ""
},
{
"first": "Kouta",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Asuka",
"middle": [],
"last": "Sumida",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Maya",
"middle": [],
"last": "Ando",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the NTCIR-15 Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoshi Sekine, Masako Nomoto, Kouta Nakayama, Asuka Sumida, Koji Matsuda, and Maya Ando. 2020. Overview of SHINRA2020-ML Task. In Proceedings of the NTCIR-15 Conference.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine Learning and Deep Neural Network-Based Lemmatization and Morphosyntactic Tagging for Serbian",
"authors": [
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
},
{
"first": "Branislava",
"middle": [],
"last": "\u0160andrih",
"suffix": ""
},
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "Milo\u0161",
"middle": [],
"last": "Utvi\u0107",
"suffix": ""
},
{
"first": "Mihailo",
"middle": [],
"last": "\u0160kori\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12 th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "3954--3962",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranka Stankovi\u0107, Branislava \u0160andrih, Cvetana Kr- stev, Milo\u0161 Utvi\u0107, and Mihailo \u0160kori\u0107. 2020. Machine Learning and Deep Neural Network- Based Lemmatization and Morphosyntactic Tag- ging for Serbian. In Proceedings of the 12 th Lan- guage Resources and Evaluation Conference, pa- ges 3954-3962.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Named Entity Recognition for Distant Reading in Several European Literatures",
"authors": [
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Frontini",
"suffix": ""
},
{
"first": "Tomaz",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Brando",
"suffix": ""
}
],
"year": 2019,
"venue": "DH Budapest",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ranka Stankovi\u0107, Diana Santos, Francesca Fronti- ni, Tomaz Erjavec, and Carmen Brando. 2019. Named Entity Recognition for Distant Reading in Several European Literatures. In DH Buda- pest 2019.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Processing of Corpora of Serbian Using Electronic Dictionaries",
"authors": [
{
"first": "Du\u0161ko",
"middle": [],
"last": "Vitas",
"suffix": ""
},
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
}
],
"year": 2012,
"venue": "Prace Filologiczne",
"volume": "63",
"issue": "",
"pages": "279--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Du\u0161ko Vitas and Cvetana Krstev. 2012. Processing of Corpora of Serbian Using Electronic Dictiona- ries. Prace Filologiczne, 63:279-292.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Development and Evaluation of Three Named Entity Recognition Systems for Serbian -The Case of Personal Names",
"authors": [
{
"first": "Branislava",
"middle": [],
"last": "\u0160andrih",
"suffix": ""
},
{
"first": "Cvetana",
"middle": [],
"last": "Krstev",
"suffix": ""
},
{
"first": "Ranka",
"middle": [],
"last": "Stankovi\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1060--1068",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_122"
]
},
"num": null,
"urls": [],
"raw_text": "Branislava \u0160andrih, Cvetana Krstev, and Ranka Stankovi\u0107. 2019. Development and Evaluation of Three Named Entity Recognition Systems for Serbian -The Case of Personal Names. In Pro- ceedings of the International Conference on Re- cent Advances in Natural Language Processing (RANLP 2019), pages 1060-1068, Varna, Bulga- ria. INCOMA Ltd.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "SrpNER tags mapped to ELTeC tags (Russian tzar Nikolai).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Confusion matrix on the test set.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "SrpCNNER vs. SrpELTeC-eval (upper) and SrpNER vs. SrpELTeC-eval (lower).",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>Entity</td><td>Explanation</td></tr><tr><td>PERS</td><td/></tr><tr><td>Personal names</td><td/></tr><tr><td>LOC Locations</td><td>Continents, countries, regions, populated places, oronyms, water sur-faces, names of celestial bodies, city locations.</td></tr><tr><td>DEMO Origin or residence</td><td>Residents of states, cities, regions, or ethnic groups; adjectives derived from the names of locations.</td></tr><tr><td>ORG</td><td/></tr><tr><td>Organizations,</td><td>Company names, politic parties, educational institutions, sport teams,</td></tr><tr><td>institutions, societies</td><td>hospitals, museums, libraries, hotels, cafes, churches and shrines.</td></tr><tr><td>WORK Art works</td><td>Titles of books, plays, poems, paintings, sculptures, newspapers.</td></tr></table>",
"num": null,
"type_str": "table",
"text": "First names, surnames, nicknames and their combinations (of real people and fictional characters, including gods and saints). Possessive adjectives from personal names should not be annotated.ROLE Occupations and titlesOccupations, titles and responsibilities: doctor, teacher; king; director.",
"html": null
},
"TABREF1": {
"content": "<table><tr><td>ID</td><td colspan=\"7\">PERS ROLE LOC DEMO ORG WORK EVENT</td><td>tok</td></tr><tr><td>samples</td><td>707</td><td>207</td><td>156</td><td>105</td><td>8</td><td>4</td><td>14</td><td>19,274</td></tr><tr><td>18750</td><td>1,688</td><td>1,050</td><td>388</td><td>239</td><td>29</td><td>10</td><td>21</td><td>31,743</td></tr><tr><td>18871</td><td>1,612</td><td>1,509</td><td>328</td><td>229</td><td>52</td><td>60</td><td>18</td><td>34,324</td></tr><tr><td>18880</td><td>1,372</td><td>986</td><td>271</td><td>201</td><td>32</td><td>59</td><td>10</td><td>26,642</td></tr><tr><td>18881</td><td>935</td><td>619</td><td>95</td><td>105</td><td>12</td><td>14</td><td>1</td><td>13,898</td></tr><tr><td>18890</td><td>804</td><td>714</td><td>36</td><td>56</td><td>1</td><td>0</td><td>0</td><td>29,337</td></tr><tr><td>18932</td><td>1,521</td><td>259</td><td>46</td><td>35</td><td>0</td><td>5</td><td>2</td><td>16,821</td></tr><tr><td>18950</td><td>764</td><td>581</td><td>51</td><td>103</td><td>12</td><td>6</td><td>33</td><td>14,454</td></tr><tr><td>19021</td><td>1,647</td><td>2,285</td><td>123</td><td>58</td><td>82</td><td>4</td><td>15</td><td>40,804</td></tr><tr><td>19040</td><td>1,655</td><td>917</td><td>221</td><td>281</td><td>1</td><td>3</td><td>7</td><td>32,367</td></tr><tr><td>19140</td><td>770</td><td>412</td><td>240</td><td>94</td><td>45</td><td>5</td><td>7</td><td>31,583</td></tr><tr><td>19190</td><td>1,181</td><td>797</td><td>8</td><td>13</td><td>49</td><td>24</td><td>19</td><td>33,562</td></tr><tr><td>total</td><td colspan=\"3\">14,788 10,405 1,979</td><td>1,568</td><td>323</td><td>198</td><td colspan=\"2\">149 330,119</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Annotation guidelines.",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "SrpELTeC-gold NE distribution.",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "SrpCNNER on the test set.",
"html": null
},
"TABREF5": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "SrpELTeC-eval NE distribution.",
"html": null
},
"TABREF7": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Evaluation results SrpELTeC-eval.",
"html": null
}
}
}
}