|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:10:50.456976Z" |
|
}, |
|
"title": "BSNLP Shared Task 2021 submission: Multilingual Slavic Named Entity Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Rinalds", |
|
"middle": [], |
|
"last": "V\u012bksna", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Inguna", |
|
"middle": [], |
|
"last": "Skadi\u0146", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Named entity recognition, in particular for morphologically rich languages, is a challenging task due to the richness of inflected forms and their ambiguity. This challenge is addressed by the SlavNER Shared Task. In this paper we describe the system we submitted to this task. Our system uses the pre-trained multilingual BERT language model and is fine-tuned for the six Slavic languages of this task on texts distributed by the organizers. Our multilingual NER model achieves an F1 score of 83.7 over all corpora, with the best result for Polish (88.8) and the worst for Russian (79.1). The entity linking module achieved an F1 score of 48.8 as evaluated by the BSNLP 2021 organizers.",
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Named entity recognition, in particular for morphologically rich languages, is a challenging task due to the richness of inflected forms and their ambiguity. This challenge is addressed by the SlavNER Shared Task. In this paper we describe the system we submitted to this task. Our system uses the pre-trained multilingual BERT language model and is fine-tuned for the six Slavic languages of this task on texts distributed by the organizers. Our multilingual NER model achieves an F1 score of 83.7 over all corpora, with the best result for Polish (88.8) and the worst for Russian (79.1). The entity linking module achieved an F1 score of 48.8 as evaluated by the BSNLP 2021 organizers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Named entity recognition, in particular for morphologically rich languages, is a challenging task due to the richness of inflected forms and their ambiguity. Evaluation results are usually lower for these languages than for morphologically simpler ones. For instance, for Finnish, Virtanen et al. (2019) report an F1 score of 92.4 on in-domain data and 81.47 on out-of-domain data; for Latvian, the state-of-the-art NER system (Znoti\u0146\u0161 and Barzdins, 2020) achieves an F1 score of 82.6; while for English, the LUKE model (Yamada et al., 2020) achieves an F1 score of 94.3 on the CoNLL-2003 dataset. In this paper we present our submission to the SlavNER Shared Task on the analysis of named entities in multilingual Web documents in Slavic languages. Our submission implements a modular architecture consisting of three modules that correspond to the tasks of the shared task: named entity recognition, entity normalization and multilingual entity linking.",
|
"cite_spans": [ |
|
{ |
|
"start": 299, |
|
"end": 321, |
|
"text": "Virtanen et al. (2019)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 470, |
|
"text": "(Znoti\u0146\u0161 and Barzdins, 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 549, |
|
"text": "(Yamada et al., 2020)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 598, |
|
"text": "CoNLL-2003 dataset.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Results of previous challenges on named entity recognition show that fine-tuning a large language model leads to the best overall results. Using this approach in the previous shared task, the best result was achieved by Arkhipov et al. (2019), reaching F1 scores of 87.2 for Bulgarian, 87.3 for Russian, 93.2 for Polish and 93.9 for Czech. For the named entity linking task we use a dynamic knowledge base, which is built at run time from identified entity mentions and their embeddings, similar to Yamada et al. (2016). Our model uses the pre-trained LaBSE model (Feng et al., 2020) to obtain aligned embeddings in different languages. We achieve an average F1 score of 48.8 (51.98 on the US election 2020 dataset and 42.3 on the COVID-19 dataset).",
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 243, |
|
"text": "Arkhipov et al. (2019)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 538, |
|
"text": "Yamada et al. (2016)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 599, |
|
"text": "(Feng et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We use the data provided by the shared task organizers to train the named entity recognition (NER) component. The data consist of raw text documents with some metadata (language, source, title, date, file-id) and annotation documents. Each annotation file contains a file-id linking it to the respective text document and a list of the named entities present in that document.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In order to train the NER component, we transformed the data into a CoNLL-2003-like format. First, raw documents were split into sentences using the NLTK library (Bird et al., 2009). Language-specific NLTK models were used for sentence segmentation where available, and the Russian model was applied when no language-specific model was available.",
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 159, |
|
"text": "(Bird et al., 2009)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Each token in a sentence is labeled either as belonging to one of the named entity classes used in this task or with the label \"O\". Although the documents in this dataset are categorized into five topics -\"asia-bibi\", \"brexit\", \"nord-stream\", \"ryanair\" and \"other\" -we train a single model for all topics.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As additional training data, we explored various data sets in the languages covered by BSNLP 2021; however, in our opinion none of the publicly available data sets match the entity types required in this task. We found a data set in Latvian (Znoti\u0146\u0161, 2015), which includes the same entity types as this task (person, location, organization, event and product).",
|
"cite_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 252, |
|
"text": "(Znoti\u0146\u0161, 2015)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Since Latvian is also a morphologically rich, inflected language, we decided to train a NER system using this data in addition to the data provided by the shared task organizers.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Preparation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The architecture of our system is modular, consisting of three modules (Figure 1). First, the NER component identifies candidate entity mentions. Then, each entity mention is either linked to an already found entity mention or added to a list as a new entity. Finally, a base form for the given entity mention is obtained.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 80, |
|
"text": "(Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Architecture and Modules", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We treat mention detection as a sequence labeling problem, aiming to label each token of the sequence. In our implementation we modify the BERT (Devlin et al., 2019) model by adding a dense layer and a CRF layer on top of the BERT model for named entity detection. We use the multilingual BERT 1 provided by Google to fine-tune a single model for all six languages (Bulgarian, Czech, Polish, Russian, Slovene and Ukrainian) covered by the shared task. The provided data were split into a 90% training set and a 10% test set. We train a single NER model using data from all six languages and evaluate each language separately. The results of this internal evaluation are summarized in Table 1. This named entity mention recognition model achieved an average F1 score of 93.",
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 660, |
|
"end": 667, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We expected that the test data for the shared task would be crawled from the Web, and that the input might therefore be noisy and include lowercase, uppercase and mixed-case named entity mentions. Knowing that state-of-the-art NER models demonstrate good performance on grammatically correct datasets while performing poorly on noisy data, in particular data containing capitalization errors (Mayhew et al., 2020), we augmented the training data with noisy copies. To minimize the impact of noisy data on NER performance, we augment the training data using the method described by Bodapati et al. (2019), i.e., adding uppercased and lowercased text variants. We prepared four datasets for training:",
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 388, |
|
"text": "(Mayhew et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 547, |
|
"end": 569, |
|
"text": "Bodapati et al. (2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 original data (TLD1),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 original data augmented with casing variants (TLD2),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 original data + Latvian (TLD3),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 original data + Latvian and augmented with casing variants (TLD4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We trained four corresponding systems, which were used in the final evaluation (Table 2). The best performing system was TLD1. System TLD3 was trained using the additional Latvian corpus, which allowed it to detect Event and Person entities better; however, this was not enough to reach a better overall result. Systems TLD2 and TLD4 were trained on datasets augmented with their lower-cased and upper-cased versions. The augmentation with noisy data led to a performance decrease of 2.6 F1 points for both systems, apparently because there are few casing errors in the test data. Detection of Event and Product entities is poor for all systems, as they mostly failed to detect unseen events and products (e.g., covid, sputnik, coronavirus, inauguration, election). Table 3 provides more detailed evaluation results for the TLD1 system. The system performs better on Slovene, Polish and Czech texts, which use the Latin script, while for Bulgarian, Russian and Ukrainian, which use the Cyrillic script, results are lower but still acceptable. For all languages the Event type is poorly identified. This could be explained by entities from the medical domain (e.g., covid-19) which were not part of the training data and thus were the most challenging for our recognizer. The relatively poor results for Product and Event detection in Russian and Ukrainian can be partially explained by the fact that the evaluation script rejected entities without quotation marks (e.g., Sputnik V is considered wrong, since \"Sputnik V\" is expected).",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 84, |
|
"text": "Table 2)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 721, |
|
"end": 728, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Mention Detection", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The goal of the entity linking task is to associate entity mentions found in a text with the corresponding entries in a Knowledge Base (KB) (Zheng et al., 2010). Entity linking consists of three sub-tasks: candidate generation, candidate ranking and unlinkable mention prediction (Shen et al., 2015). When performed without a knowledge base, entity linking reduces to entity coreference resolution, where entity mentions across one or multiple documents are clustered, based on the entity mention and its context, into clusters that each represent a specific entity. For entity linking we use a mention-ranking model (Rahman and Ng, 2009) to decide whether or not an active mention is coreferent with a candidate antecedent.",
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 156, |
|
"text": "(Zheng et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 277, |
|
"end": 296, |
|
"text": "(Shen et al., 2015)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 616, |
|
"end": 637, |
|
"text": "(Rahman and Ng, 2009)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For each entity mention we obtain an embedding using the language-agnostic BERT sentence embedding model LaBSE. Each candidate mention is compared with the entities already in the linked entities list by calculating a cosine similarity score. We use entity type information as a hard consistency check, which filters out mentions that do not have the same type (Khosla and Rose, 2020).",
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 365, |
|
"text": "(Khosla and Rose, 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We use two similarity thresholds as hyperparameters: one for early stopping, when the cosine similarity is over 0.95, and a second for unlinkable entity detection, set at 0.6. An entity with a similarity score higher than the early stopping value is considered to be the same entity as the candidate antecedent, and no further comparison is needed. An entity whose similarity score is lower than the unlinkable threshold for every antecedent is considered new and is added to the list of entities found. For entities with similarity scores between these two thresholds, the most similar entity is selected and linked.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The best results in the entity linking task were achieved by the TLD3 system. Evaluation results for this system are summarized in Table 4. Since this task depends on the results of the mention detection task, results for the Product and Event classes are poor. We observe reasonable or even good precision, while recall is very poor for almost all entity types.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Linking", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For entity normalization we apply the language-specific lemmatizers of Stanza (Qi et al., 2020). Since Stanza performs lemmatization at the word level, each word in a multi-word named entity is lemmatized separately. This approach worked well for person names, but failed for other categories of named entities, in particular long organization names.",
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 58, |
|
"text": "(Qi et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Normalization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Evaluation results for the normalization task are summarized in Table 5. In almost all cases the system trained on the data provided by the shared task organizers (TLD1) achieved the highest F-score. For most languages the results are between 43 (Czech) and 52 (Ukrainian), the exception being Bulgarian with an F-score of only 15. There are several reasons for these low results: first, errors in detecting entity mentions translate directly into missing normalized forms; second, multi-word entities are normalized by converting each word to its base form separately; and third, the Stanza models used for lemmatization have varying performance across languages.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 71, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Normalization", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this paper we proposed a modular architecture that finds named entities in six Slavic languages and links identified entities to the same entity in other documents in different languages. Each module can be updated separately to improve system performance.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As a next step, the constituent modules of this system could be improved, for example by adding entity normalization rules for multi-word entities or by using a longer context when obtaining entity embeddings for linking.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The research has been supported by the European Regional Development Fund within the research project \"Multilingual Artificial Intelligence Based Human Computer Interaction\" No. 1.1.1.1/18/A/148.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tuning multilingual transformers for language-specific named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Mikhail", |
|
"middle": [], |
|
"last": "Arkhipov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Trofimova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Kuratov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexey", |
|
"middle": [], |
|
"last": "Sorokin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--93", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-3712" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikhail Arkhipov, Maria Trofimova, Yuri Kuratov, and Alexey Sorokin. 2019. Tuning multilingual trans- formers for language-specific named entity recogni- tion. In Proceedings of the 7th Workshop on Balto- Slavic Natural Language Processing, pages 89-93, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ewan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: An- alyzing Text with the Natural Language Toolkit. O'Reilly, Beijing.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Robustness to capitalization errors in named entity recognition", |
|
"authors": [ |
|
{ |
|
"first": "Sravan", |
|
"middle": [], |
|
"last": "Bodapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyokun", |
|
"middle": [], |
|
"last": "Yun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Al-Onaizan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "237--242", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-5531" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sravan Bodapati, Hyokun Yun, and Yaser Al-Onaizan. 2019. Robustness to capitalization errors in named entity recognition. In Proceedings of the 5th Work- shop on Noisy User-generated Text (W-NUT 2019), pages 237-242, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Languageagnostic bert sentence embedding", |
|
"authors": [ |
|
{ |
|
"first": "Fangxiaoyu", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinfei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naveen", |
|
"middle": [], |
|
"last": "Arivazhagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic bert sentence embedding.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Using type information to improve entity coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Sopan", |
|
"middle": [], |
|
"last": "Khosla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carolyn", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the First Workshop on Computational Approaches to Discourse", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--31", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.codi-1.3" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sopan Khosla and Carolyn Rose. 2020. Using type in- formation to improve entity coreference resolution. In Proceedings of the First Workshop on Computa- tional Approaches to Discourse, pages 20-31, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Robust named entity recognition with truecasing pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Mayhew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference", |
|
"volume": "2020", |
|
"issue": "", |
|
"pages": "8480--8487", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Mayhew, Nitish Gupta, and Dan Roth. 2020. Robust named entity recognition with truecasing pre- training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty- Second Innovative Applications of Artificial Intelli- gence Conference, IAAI 2020, The Tenth AAAI Sym- posium on Educational Advances in Artificial Intel- ligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8480-8487. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Stanza: A Python natural language processing toolkit for many human languages", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuhui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Supervised models for coreference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Altaf", |
|
"middle": [], |
|
"last": "Rahman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "968--977", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised mod- els for coreference resolution. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing, pages 968-977, Singa- pore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Entity linking with a knowledge base: Issues, techniques, and solutions", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "27", |
|
"issue": "2", |
|
"pages": "443--460", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TKDE.2014.2327028" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Shen, J. Wang, and J. Han. 2015. Entity linking with a knowledge base: Issues, techniques, and so- lutions. IEEE Transactions on Knowledge and Data Engineering, 27(2):443-460.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Multilingual is not enough: Bert for finnish", |
|
"authors": [ |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Virtanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Ilo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "Luoma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juhani", |
|
"middle": [], |
|
"last": "Luotolahti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "LUKE: Deep contextualized entity representations with entityaware self-attention", |
|
"authors": [ |
|
{ |
|
"first": "Ikuya", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akari", |
|
"middle": [], |
|
"last": "Asai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroyuki", |
|
"middle": [], |
|
"last": "Shindo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideaki", |
|
"middle": [], |
|
"last": "Takeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6442--6454", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-main.523" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity- aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 6442-6454, On- line. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Joint learning of the embedding of words and entities for named entity disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Ikuya", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroyuki", |
|
"middle": [], |
|
"last": "Shindo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideaki", |
|
"middle": [], |
|
"last": "Takeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshiyasu", |
|
"middle": [], |
|
"last": "Takefuji", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "250--259", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/K16-1025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 250-259, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Learning to link entities with knowledge base", |
|
"authors": [ |
|
{ |
|
"first": "Zhicheng", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fangtao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minlie", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "483--491", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhicheng Zheng, Fangtao Li, Minlie Huang, and Xiaoyan Zhu. 2010. Learning to link entities with knowledge base. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 483-491, Los Angeles, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "NLP-PIPE: Latvian NLP tool pipeline. CLARIN-LV digital library at MII, University of Latvia", |
|
"authors": [ |
|
{ |
|
"first": "Art\u016brs", |
|
"middle": [], |
|
"last": "Znoti\u0146\u0161", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Art\u016brs Znoti\u0146\u0161. 2015. NLP-PIPE: Latvian NLP tool pipeline. CLARIN-LV digital library at MII, University of Latvia.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "LVBERT: Transformer-Based Model for Latvian Language Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Art\u016brs", |
|
"middle": [], |
|
"last": "Znoti\u0146\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guntis", |
|
"middle": [], |
|
"last": "Barzdins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--115", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3233/FAIA200610" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Art\u016brs Znoti\u0146\u0161 and Guntis Barzdins. 2020. LVBERT: Transformer-Based Model for Latvian Language Understanding, pages 111-115. IOS Press Ebooks.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"num": null, |
|
"text": "Overall System Architecture", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>1 https://github.com/google-</td></tr><tr><td>research/bert/blob/master/multilingual.md</td></tr></table>", |
|
"text": "98.5 96.1 84.9 94.0 89.3 LOC 98.6 96.7 98.4 88.2 94.3 95.9 ORG 96.5 92.5 94.4 95.4 83.5 95.0 PER 97.3 96.1 96.6 95.2 92.0 96.9 PRO 85.6 94.9 89.2 61.5 66.6 66.6 ALL 96.8 95.4 96.0 87.3 89.4 95.3 Internal evaluation results.", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"6\">: Four systems evaluated on shared task test data (Relaxed partial matching)</td></tr><tr><td/><td>bg</td><td>cs</td><td>pl</td><td>ru</td><td>sl</td><td>uk</td></tr><tr><td>PER</td><td>89.3</td><td>95.7</td><td>93.9</td><td>87.6</td><td>95.5</td><td>96.5</td></tr><tr><td>LOC</td><td>94.5</td><td>93.5</td><td>96.4</td><td>91.2</td><td>91.4</td><td>95.1</td></tr><tr><td>ORG</td><td>79.7</td><td>86.6</td><td>83.7</td><td>72.7</td><td>81.7</td><td>81.0</td></tr><tr><td>PRO</td><td>61.1</td><td>66.8</td><td>75.7</td><td>51.4</td><td>64.3</td><td>52.6</td></tr><tr><td>EVT</td><td>23.6</td><td>18.0</td><td>37.5</td><td>21.2</td><td>41.8</td><td>09.8</td></tr><tr><td>All</td><td>83.9</td><td>85.7</td><td>88.8</td><td>79.1</td><td>87.1</td><td>82.7</td></tr></table>", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Entity linking results evaluated on SlavNER</td></tr><tr><td>test data (Document level, system TLD3)</td></tr></table>", |
|
"text": "", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "TLD1 TLD2 TLD3 TLD4 bg 15.76 14.80 15.85 13.71 cs 43.36 40.56 42.74 40.03 pl 48.40 45.94 47.45 45.96 ru 44.12 42.20 43.64 42.28 sl 32.07 30.31 31.57 29.92 uk 52.10 49.78 50.71 50.25", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Evaluation results (F-score) for normalization task", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |