|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:10:55.296131Z" |
|
}, |
|
"title": "Using a Frustratingly Easy Domain and Tagset Adaptation for Creating Slavic Named Entity Recognition Systems", |
|
"authors": [ |
|
{ |
|
"first": "Luis", |
|
"middle": [ |
|
"Adri\u00e1n" |
|
], |
|
"last": "Cabrera-Diego", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "La Rochelle Universit\u00e9", |
|
"location": { |
|
"addrLine": "La Rochelle", |
|
"postCode": "L3i, 17031", |
|
"country": "France luis.cabrera" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Jose", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Moreno", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e9 Paul Sabatier", |
|
"location": { |
|
"postCode": "31062", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Doucet", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "La Rochelle Universit\u00e9", |
|
"location": { |
|
"addrLine": "L3i, La Rochelle", |
|
"postCode": "17031", |
|
"country": "France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a collection of Named Entity Recognition (NER) systems for six Slavic languages: Bulgarian, Czech, Polish, Slovenian, Russian and Ukrainian. These NER systems have been trained using different BERT models and a Frustratingly Easy Domain Adaptation (FEDA). FEDA allow us creating NER systems using multiple datasets without having to worry about whether the tagset (e.g. Location, Event, Miscellaneous, Time) in the source and target domains match, while increasing the amount of data available for training. Moreover, we boosted the prediction on named entities by marking uppercase words and predicting masked words. Participating in the 3 rd Shared Task on SlavNER 1 , our NER systems reached a strict micro F-score of up to 0.908. The results demonstrate good generalization, even in named entities with weak regularity, such as book titles, or entities that were never seen during the training.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a collection of Named Entity Recognition (NER) systems for six Slavic languages: Bulgarian, Czech, Polish, Slovenian, Russian and Ukrainian. These NER systems have been trained using different BERT models and a Frustratingly Easy Domain Adaptation (FEDA). FEDA allow us creating NER systems using multiple datasets without having to worry about whether the tagset (e.g. Location, Event, Miscellaneous, Time) in the source and target domains match, while increasing the amount of data available for training. Moreover, we boosted the prediction on named entities by marking uppercase words and predicting masked words. Participating in the 3 rd Shared Task on SlavNER 1 , our NER systems reached a strict micro F-score of up to 0.908. The results demonstrate good generalization, even in named entities with weak regularity, such as book titles, or entities that were never seen during the training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Named Entity Recognition (NER) is a fundamental task in domain of Natural Language Processing (NLP) that consists of extracting entities that semantically refer to aspects such as locations, people or organizations (Luoma et al., 2020) . Since the creation of BERT (Devlin et al., 2019) , multiple NER systems have brought the state of the art to new levels of performance. Nonetheless, there are many challenges that still need to be faced, especially in the case of less-resources languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 235, |
|
"text": "(Luoma et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 286, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the 2 nd Shared Task on SlavNER (Piskorski et al., 2019) , the top-two systems in the detection Named Entities (NEs), Tsygankova et al. (2019) and , managed to reach a relaxed partial micro F-score of 0.9, followed by two other systems with values slightly better than 0.8 (Moreno et al., 2019) . For the 3 rd Shared Task on SlavNER, we consider that in order to improve the scores, in terms of the strict evaluation, and NEs related to products and events, it is necessary to include additional data that could improve the generalization of the models to any kind of topic.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 59, |
|
"text": "(Piskorski et al., 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 145, |
|
"text": "Tsygankova et al. (2019)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 297, |
|
"text": "(Moreno et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While in the literature there are multiple techniques for training models over additional datasets, such as transfer learning and domain adaptation, using these techniques might pose additional questions. For example, to determine which layers to freeze, fine-tune or substitute. Furthermore, different datasets might use dissimilar tagsets, which might be incompatible (Nozza et al., 2021) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 390, |
|
"text": "(Nozza et al., 2021)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present the participation of laboratory L3i in the 3 rd Shared Task on SlavNER. Specifically, we participate with multiple NER systems for Slavic languages using different BERT models and training over diverse datasets through a Frustratingly Easy Domain Adaptation (FEDA) algorithm (Daum\u00e9 III, 2007; Kim et al., 2016) . 2 The FEDA algorithm has for objective to learn common and domain-specific patterns between multiple datasets, while keeping separately patterns belonging only to the domain-specific data (Daum\u00e9 III, 2007) . Particularly, the use of FEDA allow us sharing the knowledge and patterns found in multiple datasets without having to worry about which different tagsets are used among them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 318, |
|
"text": "(Daum\u00e9 III, 2007;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 336, |
|
"text": "Kim et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 340, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 544, |
|
"text": "(Daum\u00e9 III, 2007)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Apart from the FEDA algorithm, we explore some other techniques that might improve the performance of our NER system based on the ideas of Cabrera-Diego et al. (2021) . Specifically, we analyze whether the marking and enrichment of uppercase tokens can improve the detection of NEs. As well, we use the prediction of masked tokens as a way to improve NER systems' generalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 166, |
|
"text": "Cabrera-Diego et al. (2021)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. In Section 2, we introduce the background for the proposed work. This is followed by the methodology in Section 3. The data and the experimental settings are described in Section 4 and Section 5, respectively. In Section 6, we present the results obtained. Finally, the conclusions and future work are detailed in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Uppercase sentences: Although most of the NER corpora found in the literature provide texts following standard case rules, it is not infrequent to find datasets containing some sentences in which all the words are in uppercase, e.g. English CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) or SSJ500k (Krek et al., 2019) . In NLP systems based on BERT or similar, where Byte Pair Encoding (BPE) tokenizers are used, the presence of uppercase sentences might pose a greater challenge than standard case sentences. The reason is that an uppercase word have different BPE tokens with respect to its lower and title-case versions, and in consequence different dense representation (Powalski and Stanislawek, 2020; Sun et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 289, |
|
"text": "(Tjong Kim Sang and De Meulder, 2003)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 301, |
|
"end": 320, |
|
"text": "(Krek et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 709, |
|
"text": "(Powalski and Stanislawek, 2020;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 727, |
|
"text": "Sun et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Weak generalization: One of the most challenging aspects of NER systems is to deal with NEs that have a weak or zero regularity, such as names of movies, and NEs that were never seen during training (Lin et al., 2020b) . Some methods found in the literature for improving generalization consists of learning manually defined triggers (Lin et al., 2020a) , but also permuting NEs and reducing context such as in Lin et al. (2020b) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 218, |
|
"text": "(Lin et al., 2020b)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 353, |
|
"text": "(Lin et al., 2020a)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 429, |
|
"text": "Lin et al. (2020b)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "FEDA: Originally proposed by Daum\u00e9 III (2007) , the FEDA was firstly designed for sparse machine learning algorithms. Later, Kim et al. (2016) , proposed a neural network version of this domain adaptation algorithm. While the former resides in duplicating input features, the latter consists of activating specific neural network layers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 45, |
|
"text": "Daum\u00e9 III (2007)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 125, |
|
"end": 142, |
|
"text": "Kim et al. (2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Consider D = {D 1 , D 2 , . . . , D n |n > 1} a collection of datasets from which we want to train a model. Furthermore, consider a classifier C a stack of two linear layers in which in between we set an activation layer ReLU and a dropout. The first linear layer has a size of 512, while the output h produced by C has a size of l, which is the number of different labels found in D. Thus, the proposed model for doing the FEDA consists of adding on top of BERT n + 1 classifiers such that we have C = {C 0 , C 1 , C 2 , . . . , C n }. The classifier C 0 represents a general classifier that will receive as input the sentences from all the datasets in D, while C k \u2208 {C|0 < k \u2264 n} represent a specialized classifier that will focus only on the sentences that belong to the dataset D k \u2208 {D|0 < k \u2264 n}. For each sentence belonging to a dataset D k , we do the element-wise sum between h 0 and h k , i.e. H k = h 0 + h k . Finally, H k is introduced it into a CRF layer, which will determine the labels of each word in a sentence. Figure 1 depicts the proposed architecture.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1031, |
|
"end": 1039, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
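
{

"text": "To make the architecture concrete, the following PyTorch code is a minimal sketch of the FEDA head described above; the class name, the variable names and the routing by a dataset identifier are our own illustrative choices, and the exact implementation (available at github.com/EMBEDDIA/NER FEDA) may differ.\n\nimport torch.nn as nn\n\nclass FEDAHead(nn.Module):\n    # n + 1 classifiers on top of BERT: C_0 is shared, C_1..C_n are dataset-specific.\n    def __init__(self, hidden_size, num_labels, n_datasets, dropout=0.1):\n        super().__init__()\n        def make_classifier():\n            # Two linear layers with a ReLU activation and a dropout in between.\n            return nn.Sequential(\n                nn.Linear(hidden_size, 512),\n                nn.ReLU(),\n                nn.Dropout(dropout),\n                nn.Linear(512, num_labels),\n            )\n        self.classifiers = nn.ModuleList([make_classifier() for _ in range(n_datasets + 1)])\n\n    def forward(self, bert_output, dataset_id):\n        # h_0 comes from the general classifier C_0; h_k from the classifier\n        # specialized on the dataset the sentence belongs to (1 <= dataset_id <= n).\n        h_0 = self.classifiers[0](bert_output)\n        h_k = self.classifiers[dataset_id](bert_output)\n        # Element-wise sum H_k = h_0 + h_k, which is then decoded by the CRF layer.\n        return h_0 + h_k\n\nAt inference time the same routing applies: a sentence from dataset D_k is scored by C_0 and C_k, and the summed emissions H_k are decoded by the CRF.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3"

},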
|
{ |
|
"text": "For increasing the generalization of our NER systems, we explore the prediction of masked tokens during the training as proposed by Cabrera-Diego et al. (2021) . Firstly, this method converts randomly selected tokens, within a sentence, into BERT's special token [MASK] . Then, the NER system has to predict correctly the sentence's NEs, despite the missing information, as well as predicting the masked tokens. The prediction of masked tokens is done by introducing BERT's output into a linear layer, which has the same size of the pretrained vocabulary. During training, the loss produced by the prediction of masked tokens is added to the loss produced by the recognition of NEs; during testing, this layer is inactive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 159, |
|
"text": "Cabrera-Diego et al. (2021)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 269, |
|
"text": "[MASK]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
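
{

"text": "To illustrate how the two losses are combined, the following sketch assumes a CRF module in the style of the torchcrf package, whose forward call returns the log-likelihood of a tag sequence; the names, the example sizes and the unweighted sum of the two losses are our assumptions rather than the exact implementation.\n\nimport torch.nn as nn\n\nHIDDEN_SIZE, VOCAB_SIZE = 768, 119547  # e.g. BERT base with a multilingual vocabulary\n\n# Linear layer over BERT's output, with the same size as the pretrained vocabulary.\nmlm_head = nn.Linear(HIDDEN_SIZE, VOCAB_SIZE)\n\ndef training_loss(bert_output, emissions, tags, mask_positions, mask_labels, crf):\n    # NER loss: negative log-likelihood of the gold tag sequence under the CRF.\n    ner_loss = -crf(emissions, tags)\n    # Masked-token loss: cross-entropy at the positions replaced by [MASK].\n    logits = mlm_head(bert_output[mask_positions])\n    mlm_loss = nn.functional.cross_entropy(logits, mask_labels)\n    # During training both losses are added; at test time the MLM head is inactive.\n    return ner_loss + mlm_loss",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3"

},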
|
{ |
|
"text": "Although Powalski and Stanislawek (2020) propose UniCase, an architecture for training a language model that learns the casing of a word separately to the tokenization, in this work, we use a simpler method that does not require to retrain a language model. Specifically, we use a marking and enrichment approach, where an uppercase word is tagged with two special BERT's tokens, defined by us as [UP] and [up] , and where we include additional case versions. For instance, the word \"ROME\" becomes \"", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 40, |
|
"text": "Powalski and Stanislawek (2020)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 401, |
|
"text": "[UP]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 410, |
|
"text": "[up]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is important to indicate that the prediction of the NE type is done uniquely over the first token, which correspond to the special token [UP] . In other words, the output produced by BERT for the rest of the tokens is masked. The marking of the uppercase words is based on the ideas proposed by Cabrera-Diego et al. (2021).", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 144, |
|
"text": "[UP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
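
{

"text": "The following sketch illustrates the marking and enrichment step on a single word, assuming a Hugging Face-style tokenizer to which the special tokens [UP] and [up] have been added; the function name and the guard on one-character words are our own assumptions.\n\ndef mark_uppercase(word, tokenizer):\n    # Words that are not fully uppercase are tokenized as usual.\n    if not word.isupper() or len(word) < 2:  # skipping one-letter words is an assumption\n        return tokenizer.tokenize(word)\n    # Wrap the uppercase word between [UP] and [up] and enrich it with its\n    # title-case and lowercase versions, e.g.\n    # 'ROME' -> [UP] ROM ##E Rome r ##ome [up]\n    pieces = (tokenizer.tokenize(word)\n              + tokenizer.tokenize(word.title())\n              + tokenizer.tokenize(word.lower()))\n    # The NE type is predicted only over the first token ([UP]); BERT's output\n    # for the remaining tokens is masked.\n    return ['[UP]'] + pieces + ['[up]']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methodology",

"sec_num": "3"

},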
|
{ |
|
"text": "We use the data provided by the organizers for the 3 rd Shared Task on SlavNER. However, for the Czech Named Entity Corpus 2.0 (CNEC) (\u0160ev\u010d\u00edkov\u00e1 et al., 2007) : Czech corpus annotated with fine-grained NE. In this work, we have used 6 types of NE: Location, Organization, Media, Artifact, Person and Time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 158, |
|
"text": "(\u0160ev\u010d\u00edkov\u00e1 et al., 2007)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "FactRuEval 4 : Russian corpus annotated with three NE types: Location, Organization and Person.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Finnish NER (Luoma et al., 2020) : Although Finnish is not a language to process in SlavNER, it has similar NE types to those used in the shared task: Date, Event, Location, Organization, Person, Product and Time. We use this dataset to enrich the NEs knowledge, specially on events and products. Polish dataset annotated with nine super NE types, from these six were chosen: Event, Location, Organization, Person, Place and Product. Location and Place were merged as the former.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 32, |
|
"text": "(Luoma et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "SSJ500k (Krek et al., 2019) : Slovene corpus annotated with four types of NE: Location, Miscellaneous, Organization and Person.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 27, |
|
"text": "(Krek et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Wikiann (Pan et al., 2017) : It is a multilingual NER corpus based on Wikipedia articles; it was annotated automatically using three types of NEs: Location, Organization and Person. We use of the corpus partitions used by Rahimi et al. (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 26, |
|
"text": "(Pan et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 242, |
|
"text": "Rahimi et al. (2019)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use for all the additional corpora their training, development and testing partitions; if these are not provided, we create them using a stratified approach to ensure a proportional number of NEs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Regarding BERT, we use different pre-trained models: CroSloEngual (Ul\u010dar and Robnik-\u0160ikonja, 2020), Polish BERT 8 , RuBERT and Language-Agnostic BERT Sentence Embedding (LaBSE) (Feng et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 196, |
|
"text": "(Feng et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "All the files coming from SlavNER are tokenized and, those used for training and development are annotated at token-level. For Bulgarian and Slovene, we tokenize the documents using Reldi-Tokenizer 9 , while for the rest of languages, we use the neural parser proposed by Kanerva et al. (2018) . Further-more, we over-tokenize all the files, i.e. we separate all the punctuation from tokens within a sentence, to solve some cases where abbreviation periods or dashes were not considered as part of a NE. For example, in Slovene, Roman numerals are followed by a period, such as in Benedikt XVI. nevertheless, some NE annotations did not consider the period. Some rules and manual corrections were applied to the tokenization where we determined the fix was critical. For instance, in Polish, W. Brytania (Great Britain) was being split into two sentences by the tokenizer. We automatically annotated the files by searching the longest match in the tokenized format and the annotation file. In case of ambiguity, the annotation tool requested a manual intervention. For the final submission, we converted the token-level output to a document-level one.", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 293, |
|
"text": "Kanerva et al. (2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
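
{

"text": "As a simple illustration of the over-tokenization step, the following sketch separates every punctuation character from the tokens of a sentence; the regular expression is an assumption, since the exact rules and manual corrections we applied are language-specific.\n\nimport re\n\ndef over_tokenize(tokens):\n    # Split every non-alphanumeric, non-space character into its own token, so\n    # that abbreviation periods or dashes excluded from a NE annotation can be\n    # detached.\n    out = []\n    for token in tokens:\n        out.extend(piece for piece in re.split(r'([^\\w\\s])', token) if piece)\n    return out\n\nprint(over_tokenize(['Benedikt', 'XVI.']))  # ['Benedikt', 'XVI', '.']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "5"

},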
|
{ |
|
"text": "All the NEs types are encoded using BIOES (Beginning, Inside, Outside/Other, End, Single). As well, to reduce the number of entities types, we normalize those where the theoretical meaning is the same, i.e. PERS into PER or EVENT into EVT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
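
{

"text": "To make the encoding concrete, the following is a minimal sketch of BIOES labelling with type normalization; the mapping shown is only an illustrative subset, since the full normalization depends on the tagset of each dataset.\n\n# Illustrative normalization of entity types whose theoretical meaning is the same.\nTYPE_MAP = {'PERS': 'PER', 'EVENT': 'EVT'}\n\ndef to_bioes(n_tokens, spans):\n    # spans: list of (start, end, entity_type) with token indices, end exclusive.\n    labels = ['O'] * n_tokens\n    for start, end, etype in spans:\n        etype = TYPE_MAP.get(etype, etype)\n        if end - start == 1:\n            labels[start] = 'S-' + etype  # Single-token entity\n        else:\n            labels[start] = 'B-' + etype  # Beginning\n            labels[end - 1] = 'E-' + etype  # End\n            for i in range(start + 1, end - 1):\n                labels[i] = 'I-' + etype  # Inside\n    return labels\n\nprint(to_bioes(4, [(0, 2, 'PERS'), (3, 4, 'EVENT')]))  # ['B-PER', 'E-PER', 'O', 'S-EVT']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "5"

},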
|
{ |
|
"text": "For the models where masked tokens have to be predicted, we only affect sentences in the training partitions that are longer than 3 actual tokens, i.e. not BPE tokens. At each epoch, we select randomly 25% of each sentence's tokens and substitute them with [MASK] . If a token after being processed by BERT's tokenizer produces more than one BPE token, we mask only one of them. 10 Regarding the models that are trained with marked uppercase tokens, at each training epoch, we randomly convert 5% of all the sentences into uppercase. This is done to provide some examples of uppercase sentences to datasets that do not present this phenomenon.", |
|
"cite_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 263, |
|
"text": "[MASK]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 381, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
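
{

"text": "The following sketch summarizes how a training sentence can be prepared at each epoch under the percentages given above; it again assumes a Hugging Face-style tokenizer, and the function names are ours.\n\nimport random\n\ndef maybe_uppercase(tokens, rate=0.05):\n    # 5% of all sentences are converted to uppercase at each epoch, to provide\n    # uppercase examples to datasets that do not present this phenomenon.\n    return [t.upper() for t in tokens] if random.random() < rate else tokens\n\ndef mask_sentence(tokens, tokenizer, rate=0.25):\n    pieces_per_token = [tokenizer.tokenize(t) for t in tokens]\n    # Only sentences longer than 3 actual (non-BPE) tokens are masked.\n    if len(tokens) > 3:\n        selected = random.sample(range(len(tokens)), max(1, int(len(tokens) * rate)))\n        for i in selected:\n            if not pieces_per_token[i]:\n                continue\n            # If a token produces more than one BPE token, mask only one of them.\n            j = random.randrange(len(pieces_per_token[i]))\n            pieces_per_token[i][j] = '[MASK]'\n    return [piece for pieces in pieces_per_token for piece in pieces]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "5"

},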
|
{ |
|
"text": "In Table 2 , we present the final models created for recognizing NEs. As well, we detail which are the datasets used for training them and which are the additional features that they make use. The combinations of datasets and features used for the final models were selected according to their performance on internal models. To enrich the knowledge in Bulgarian, we added the Macedonian Wikiann dataset, as both languages are considered as mutually intelligible. All the models were trained up to 20 epochs using an early stop approach. In Table 1 , we present a summary of the hyperparameters used for training the NER systems. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 548, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In Table 3 , we present the performance of our systems in terms of strict micro F-score. We can observe, that the marking of uppercase words worked better, in general, for the Covid-19 topic, specially on the Cyrillic-2 model. As well, single language models worked better on the Covid-19 topic, while the model Latin-1 worked better on the U.S. Elections topic. In most languages, the hardest NEs to predict were related to products and events due to their weak regularity or because they never appeared on the training datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "From a manual inspection, we have observed that multiple events were considered as products, such as Miss USA, Pizzagate and Covid-19. Some products were marked as organizations such as Zoom, COVAX, Apple TV+, although fewer organizations were tagged as products, such as Pfizer/Moderna and BBC. Nonetheless, many of these NEs could be both types depending on the context in which happen. In certain documents, organizations were marked as locations and viceversa, such as Ostravsk\u00e9 Fakultn\u00ed Nemocnice (Ostrava University Hospital) and Szpitala Wojskowego w Szczecinie (Military Hospital in Szczecin).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have found interesting examples regarding products despite their irregularity. For example, the Cyrillic and Latin models managed to detect partially the 2020 book \"Nelojalen: resni\u010dna zgodba nekdanjega osebnega odvetnika predsednika Donalda Trumpa\" (Disloyal: A Memoir: The True Story of the Former Personal Attorney to President Donald J. Trump). Specifically, the entity was Table 3 : Strict micro F-scores obtained by each model for every language and topic. The Global column is the strict micro F-score regarding all the test data. split into two \"Nelojalen: resni\u010dna zgodba nekdanjega osebnega odvetnika predsednika\" as a product and Donalda Trumpa (Donald Trump) as a person. But there were some exact matches, such as the book \"Cyberwar: How Russian Hackers and Trolls Helped Elect a President\" or the document \"Preve\u010d in nikoli dovolj: kako je moja dru\u017eina ustvarila najnevarnej\u0161ega mo\u017ea na svetu\" (Treaty on Measures for the Further Reduction and Limitation of Strategic Offensive Arms). Furthermore, some scientific articles were tagged as products, such as \"A Study to Evaluate Efficacy, Safety, and Immunogenicity of mRNA-1273 Vaccine in Adults Aged 18 Years and Older to Prevent COVID-19\", although they did not appear in the gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 388, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Some models considered BioNTech as an organization and Instagram as a product despite these NEs were never seen during the training. As well, some medication-related products were correctly found such as AZD1222, \u043a\u0430\u043d\u0430\u043a\u0438\u043d\u0443\u043c\u0430\u0431 (Canakinumab), Remdesivir or Zithromax, even if they did not exist on the training corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We observed, specially in Cyrillic-scripted languages, that some named entities were incorrect because they were predicted without punctuation marks. For example: Moderna Inc vs Moderna Inc., \u0433\u0430\u043c-\u043a\u043e\u0432\u0438\u0434-\u0432\u0430\u043a vs \u0433\u0430\u043c-\u043a\u043e\u0432\u0438\u0434-\u0432\u0430\u043a and \u0441\u043f\u0443\u0442\u043d\u0438\u043a\u043e\u043c vs \"\u0441\u043f\u0443\u0442\u043d\u0438\u043a\u043e\u043c\". In Latin-scripted languages, we observed the opposite although less frequently. For instance, Roberta F. Kennedyho Jr. vs Roberta F. Kennedyho Jr. In some documents the punctuation mark is included in certain NEs but not in others, such as in Korea P\u0142n. vs Korea P\u0142n but Korei P\u0142n..", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "This work presented the participation of Laboratory L3i in the 3 rd Shared Task on SlavNER. Specifically, we proposed a collection of BERT-based NER systems that were trained using multiple datasets through FEDA.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The results showed us that our NER systems worked better on the U.S. Elections topic (strict micro F-score between 0.762 and 0.908) than on the Covid-19 topic (0.666 -0.775). Overall, a competitive strength of our NER systems is that they managed to predict named entities occurring with weak regularity or that were never seen before.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the future, we will apply the proposed architecture on other languages and datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "bsnlp.cs.helsinki.fi/shared-task.html, last visited on 9 March 2021", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "github.com/EMBEDDIA/NER FEDA", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "labinform.ru/pub/named entities 4 github.com/dialogue-evaluation/factRuEval-2016 5 nkjp.pl 6 github.com/lang-uk/ner-uk", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "clarin-pl.eu/dspace/handle/11321/270 8 huggingface.co/dkleczek/bert-base-polish-cased-v1 9 github.com/clarinsi/reldi-tokeniser", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For Polish BERT, we mask all the tokens as this model was trained using whole word masking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been supported by the European Union's Horizon 2020 research and innovation program under grants 770299 (NewsEye) and 825153 (Embeddia).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tuning Multilingual Transformers for Language-Specific Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Trofimova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Sorokin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3712" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Arkhipov, Maria Trofimova, Yuri Kuratov, and Alexey Sorokin. 2019. Tuning Multilingual Trans- formers for Language-Specific Named Entity Recog- nition. In Proceedings of the 7th Workshop on Balto- Slavic Natural Language Processing, pages 89-93, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Simple ways to improve NER in every language using markup", |
|
"authors": [ |
|
{

"first": "Luis",

"middle": [

"Adri\u00e1n"

],

"last": "Cabrera-Diego",

"suffix": ""

},

{

"first": "Jose",

"middle": [

"G"

],

"last": "Moreno",

"suffix": ""

},

{

"first": "Antoine",

"middle": [],

"last": "Doucet",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 2nd International Workshop on Cross-lingual Event-centric Open Analytics, Online. CEUR-WS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luis Adri\u00e1n Cabrera-Diego, Jose G. Moreno, and An- toine Doucet. 2021. Simple ways to improve NER in every language using markup. In Proceedings of the 2nd International Workshop on Cross-lingual Event-centric Open Analytics, Online. CEUR-WS.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Frustratingly Easy Domain Adaptation", |
|
"authors": [ |
|
{

"first": "Hal",

"middle": [],

"last": "Daum\u00e9",

"suffix": "III"

}
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly Easy Domain Adaptation. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, pages 256-263, Prague, Czech Republic. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Languageagnostic BERT Sentence Embedding", |
|
"authors": [ |
|
{ |
|
"first": "Fangxiaoyu", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naveen", |
|
"middle": [], |
|
"last": "Arivazhagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic BERT Sentence Embedding. ArXiv cs.CL eprint: 2007.01852.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Turku Neural Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Miekka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akseli", |
|
"middle": [], |
|
"last": "Leino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K18-2013" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku Neu- ral Parser Pipeline: An End-to-End System for the CoNLL 2018 Shared Task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Frustratingly Easy Neural Domain Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Young-Bum", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruhi", |
|
"middle": [], |
|
"last": "Sarikaya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "387--396", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly Easy Neural Domain Adapta- tion. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguis- tics: Technical Papers, pages 387-396, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Kuratov and Mikhail Arkhipov. 2019. Adapta- tion of Deep Bidirectional Multilingual Transform- ers for Russian Language. ArXiv cs.CL eprint: 1905.07213.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recognition", |
|
"authors": [ |
|
{

"first": "Bill",

"middle": [

"Yuchen"

],

"last": "Lin",

"suffix": ""

},

{

"first": "Dong-Ho",

"middle": [],

"last": "Lee",

"suffix": ""

},

{

"first": "Ming",

"middle": [],

"last": "Shen",

"suffix": ""

},

{

"first": "Ryan",

"middle": [],

"last": "Moreno",

"suffix": ""

},

{

"first": "Xiao",

"middle": [],

"last": "Huang",

"suffix": ""

},

{

"first": "Prashant",

"middle": [],

"last": "Shiralkar",

"suffix": ""

},

{

"first": "Xiang",

"middle": [],

"last": "Ren",

"suffix": ""

}
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8503--8511", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.752" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill Yuchen Lin, Dong-Ho Lee, Ming Shen, Ryan Moreno, Xiao Huang, Prashant Shiralkar, and Xi- ang Ren. 2020a. TriggerNER: Learning with Entity Triggers as Explanations for Named Entity Recog- nition. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 8503-8511, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?", |
|
"authors": [ |
|
{ |
|
"first": "Hongyu", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaojie", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jialong", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xianpei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhicheng", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas Jing", |
|
"middle": [], |
|
"last": "Yuan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7291--7300", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.592" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, and Nicholas Jing Yuan. 2020b. A Rigorous Study on Named Entity Recog- nition: Can Fine-tuning Pretrained Model Lead to the Promised Land? In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 7291-7300, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Broad-coverage Corpus for Finnish Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "Luoma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miika", |
|
"middle": [], |
|
"last": "Oinonen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pyyk\u00f6nen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Laippala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4615--4624", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jouni Luoma, Miika Oinonen, Maria Pyyk\u00f6nen, Veronika Laippala, and Sampo Pyysalo. 2020. A Broad-coverage Corpus for Finnish Named Entity Recognition. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4615- 4624, Marseille, France. European Language Re- sources Association.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Polish Corpus of Wroc\u0142aw University of Technology 1.2. CLARIN-PL digital repository", |
|
"authors": [ |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Marci\u0144czuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Oleksy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marek", |
|
"middle": [], |
|
"last": "Maziarz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Wieczorek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominika", |
|
"middle": [], |
|
"last": "Fikus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Agnieszka", |
|
"middle": [], |
|
"last": "Turek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Wolski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Berna\u015b", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Koco\u0144", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawe\u0142", |
|
"middle": [], |
|
"last": "Kedzia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha\u0142 Marci\u0144czuk, Marcin Oleksy, Marek Maziarz, Jan Wieczorek, Dominika Fikus, Agnieszka Turek, Micha\u0142 Wolski, Tomasz Berna\u015b, Jan Koco\u0144, and Pawe\u0142 Kedzia. 2016. Polish Corpus of Wroc\u0142aw University of Technology 1.2. CLARIN-PL digital repository.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "TLR at BSNLP2019: A multilingual named entity recognition system", |
|
"authors": [ |
|
{ |
|
"first": "Jose", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Moreno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elvys", |
|
"middle": [], |
|
"last": "Linhares Pontes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micka\u00ebl", |
|
"middle": [], |
|
"last": "Coustaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Doucet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--88", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3711" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jose G. Moreno, Elvys Linhares Pontes, Micka\u00ebl Coustaty, and Antoine Doucet. 2019. TLR at BSNLP2019: A multilingual named entity recog- nition system. In Proceedings of the 7th Work- shop on Balto-Slavic Natural Language Processing, pages 83-88, Florence, Italy. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Two-stage approach in Russian named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Valerie", |
|
"middle": [], |
|
"last": "Mozharova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Loukachevitch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "2016 Conference on Intelligence, Social Media and Web (ISMW FRUCT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/FRUCT.2016.7584769" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Valerie Mozharova and Natalia Loukachevitch. 2016. Two-stage approach in Russian named entity recog- nition. In 2016 Conference on Intelligence, Social Media and Web (ISMW FRUCT), pages 1-6, St. Pe- tersburg, Russia.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LearningToAdapt with word embeddings: Domain adaptation of Named Entity Recognition systems", |
|
"authors": [ |
|
{ |
|
"first": "Debora", |
|
"middle": [], |
|
"last": "Nozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pikakshi", |
|
"middle": [], |
|
"last": "Manchanda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elisabetta", |
|
"middle": [], |
|
"last": "Fersini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matteo", |
|
"middle": [], |
|
"last": "Palmonari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enza", |
|
"middle": [], |
|
"last": "Messina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Information Processing & Management", |
|
"volume": "58", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.ipm.2021.102537" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Debora Nozza, Pikakshi Manchanda, Elisabetta Fersini, Matteo Palmonari, and Enza Messina. 2021. LearningToAdapt with word embeddings: Domain adaptation of Named Entity Recognition systems. Information Processing & Management, 58(3):102537.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Crosslingual Name Tagging and Linking for 282 Languages", |
|
"authors": [ |
|
{ |
|
"first": "Xiaoman", |
|
"middle": [], |
|
"last": "Pan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boliang", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "May", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Nothman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1946--1958", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1178" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual Name Tagging and Linking for 282 Lan- guages. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946-1958, Van- couver, Canada. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The Second Cross-Lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Piskorski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laska", |
|
"middle": [], |
|
"last": "Laskova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Micha\u0142", |
|
"middle": [], |
|
"last": "Marci\u0144czuk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidia", |
|
"middle": [], |
|
"last": "Pivovarova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "P\u0159ib\u00e1\u0148", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Steinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Yangarber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--74", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3709" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakub Piskorski, Laska Laskova, Micha\u0142 Marci\u0144czuk, Lidia Pivovarova, Pavel P\u0159ib\u00e1\u0148, Josef Steinberger, and Roman Yangarber. 2019. The Second Cross- Lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages. In Proceedings of the 7th Work- shop on Balto-Slavic Natural Language Processing, pages 63-74, Florence, Italy. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The First Cross-Lingual Challenge on Recognition, Normalization, and Matching of Named Entities in Slavic Languages", |
|
"authors": [ |
|
{ |
|
"first": "Jakub", |
|
"middle": [], |
|
"last": "Piskorski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lidia", |
|
"middle": [], |
|
"last": "Pivovarova", |
|
"suffix": "" |
|
}, |
|
{

"first": "Jan",

"middle": [],

"last": "\u0160najder",

"suffix": ""

},

{

"first": "Josef",

"middle": [],

"last": "Steinberger",

"suffix": ""

},

{

"first": "Roman",

"middle": [],

"last": "Yangarber",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--85", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W17-1412" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jakub Piskorski, Lidia Pivovarova, Jan\u0160najder, Josef Steinberger, and Roman Yangarber. 2017. The First Cross-Lingual Challenge on Recognition, Normal- ization, and Matching of Named Entities in Slavic Languages. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing, pages 76-85, Valencia, Spain. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Uni-Case -Rethinking Casing in Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Powalski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Stanislawek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rafal Powalski and Tomasz Stanislawek. 2020. Uni- Case -Rethinking Casing in Language Models. ArXiv cs.CL eprint: 2010.11936.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Narodowy korpus jezyka polskiego", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Przepi\u00f3rkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miros\u0142aw", |
|
"middle": [], |
|
"last": "Ba\u0144ko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafa\u0142", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "G\u00f3rski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Lewandowska-Tomaszczyk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Naukowe PWN", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Przepi\u00f3rkowski, Miros\u0142aw Ba\u0144ko, Rafa\u0142 L. G\u00f3rski, and Barbara Lewandowska-Tomaszczyk. 2012. Narodowy korpus jezyka polskiego. Naukowe PWN, Warsaw, Poland.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Massively Multilingual Transfer for NER", |
|
"authors": [ |
|
{ |
|
"first": "Afshin", |
|
"middle": [], |
|
"last": "Rahimi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Mas- sively Multilingual Transfer for NER. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. ArXiv", |
|
"authors": [ |
|
{ |
|
"first": "Lichao", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuma", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akari", |
|
"middle": [], |
|
"last": "Asai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, and Caiming Xiong. 2020. Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT. ArXiv cs.CL eprint: 2003.04985.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition", |
|
"authors": [ |
|
{

"first": "Erik",

"middle": [

"F"

],

"last": "Tjong Kim Sang",

"suffix": ""

},
|
{ |
|
"first": "Fien", |
|
"middle": [], |
|
"last": "De Meulder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proceedings of the Seventh Conference on Natu- ral Language Learning at HLT-NAACL 2003, pages 142-147.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "BSNLP2019 Shared Task Submission: Multisource Neural NER Transfer", |
|
"authors": [ |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Tsygankova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--82", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3710" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tatiana Tsygankova, Stephen Mayhew, and Dan Roth. 2019. BSNLP2019 Shared Task Submission: Mul- tisource Neural NER Transfer. In Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing, pages 75-82, Florence, Italy. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "FinEst BERT and CroSloEngual BERT", |
|
"authors": [ |
|
{ |
|
"first": "Matej", |
|
"middle": [], |
|
"last": "Ul\u010dar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marko", |
|
"middle": [], |
|
"last": "Robnik-\u0160ikonja", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceeding of the 23rd International Conference on Text, Speech, and Dialogue (TSD 2020)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-030-58323-1_11" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matej Ul\u010dar and Marko Robnik-\u0160ikonja. 2020. FinEst BERT and CroSloEngual BERT. In Proceeding of the 23rd International Conference on Text, Speech, and Dialogue (TSD 2020), pages 104-111, Brno, Czech Republic. Springer International Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Named Entities in Czech: Annotating Data and Developing NE Tagger", |
|
"authors": [ |
|
{

"first": "Magda",

"middle": [],

"last": "\u0160ev\u010d\u00edkov\u00e1",

"suffix": ""

},

{

"first": "Zden\u011bk",

"middle": [],

"last": "\u017dabokrtsk\u00fd",

"suffix": ""

},
|
{ |
|
"first": "Old\u0159ich", |
|
"middle": [], |
|
"last": "Kr\u016fza", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Text, Speech and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "188--195", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/978-3-540-74628-7_26" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magda\u0160ev\u010d\u00edkov\u00e1, Zden\u011bk\u017dabokrtsk\u00fd, and Old\u0159ich Kr\u016fza. 2007. Named Entities in Czech: Annotating Data and Developing NE Tagger. In Text, Speech and Dialogue, pages 188-195, Pilsen, Czech Repub- lic. Springer Berlin Heidelberg.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Our FEDA-based architecture for NER with BERT. development of our internal models, we use the topics of Nord Stream and Ryanair as testing partition, while the rest as training and development. For the final models, all the data provided is split into training and development sets.Besides the data provided by SlavNER's organizers, we use the follwing NER corpora:SlavNER 2017(Piskorski et al., 2017): Slavic Corpus annotated with 4 NE types: Location, Miscellaneous, Organization and Person.Collection Named Entities 5 (CNE5)(Mozharova and Loukachevitch, 2016) 3 : Russian NER corpus manually annotated with five NE types: Geopolitical, Location, Media, Person and Organization.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "National Corpus of Polish (NKJP) 5 (Przepi\u00f3rkowski et al., 2012): Polish corpus tagged with five NE types: Person, Organization, Geopolitical, Location, Date and Time. NER-UK 6 : Collection of 264 Ukrainian docu-ments manually annotated with four types of NE: Location, Miscellaneous, Organization and Person. Polish Corpus of Wroc\u0142aw University of Technology (KPWr) 7 (Marci\u0144czuk et al., 2016):", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"text": "Hyperparameters used for training the models.", |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Datasets used for training each of the model explored in this work. The number of classifiers (C) consider both the general and specialized ones used in the architecture.", |
|
"content": "<table><tr><td/><td/><td/><td/><td>Covid-19</td><td/><td/><td/><td/><td/><td colspan=\"3\">U.S. Elections</td><td/><td/><td/></tr><tr><td>Model</td><td>Bg</td><td>Cs</td><td>Pl</td><td>Ru</td><td>Sl</td><td>Uk</td><td>All</td><td>Bg</td><td>Cs</td><td>Pl</td><td>Ru</td><td>Sl</td><td>Uk</td><td>All</td><td>Global</td></tr><tr><td>Cyrillic-1</td><td colspan=\"15\">0.716 0.714 0.760 0.657 0.732 0.722 0.715 0.843 0.837 0.841 0.741 0.837 0.787 0.793 0.764</td></tr><tr><td>Cyrillic-2</td><td colspan=\"15\">0.720 0.730 0.783 0.642 0.744 0.727 0.721 0.865 0.857 0.849 0.746 0.858 0.813 0.807 0.775</td></tr><tr><td>Latin-1</td><td colspan=\"15\">0.730 0.765 0.791 0.662 0.752 0.706 0.733 0.850 0.890 0.908 0.762 0.898 0.789 0.824 0.790</td></tr><tr><td>Latin-2</td><td colspan=\"15\">0.733 0.763 0.792 0.666 0.758 0.688 0.734 0.854 0.890 0.891 0.759 0.884 0.782 0.819 0.787</td></tr><tr><td colspan=\"16\">Single lang. 0.725 0.766 0.793 0.611 0.775 0.701 0.729 0.813 0.889 0.887 0.742 0.891 0.781 0.807 0.778</td></tr></table>", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |