{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:58.610118Z"
},
"title": "ALEM at CASE 2021 Task 1: Multilingual Text Classification on News Articles",
"authors": [
{
"first": "Alaeddin",
"middle": [
"Sel\u00e7uk"
],
"last": "G\u00fcrel",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We participated CASE shared task in ACL-IJCNLP 2021. This paper is a summary of our experiments and ideas about this shared task. For each subtask we shared our approach, successful and failed methods and our thoughts about them. We submit our results once for every subtask, except for subtask3, in task submission system and present scores based on our validation set formed from given training samples in this paper. Techniques and models we mentioned includes BERT, Multilingual BERT, oversampling, undersampling, data augmentation and their implications with each other. Most of the experiments we came up with were not completed, as time did not permit, but we share them here as we plan to do them as suggested in the future work part of document.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We participated CASE shared task in ACL-IJCNLP 2021. This paper is a summary of our experiments and ideas about this shared task. For each subtask we shared our approach, successful and failed methods and our thoughts about them. We submit our results once for every subtask, except for subtask3, in task submission system and present scores based on our validation set formed from given training samples in this paper. Techniques and models we mentioned includes BERT, Multilingual BERT, oversampling, undersampling, data augmentation and their implications with each other. Most of the experiments we came up with were not completed, as time did not permit, but we share them here as we plan to do them as suggested in the future work part of document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper includes review and explanations about our ideas and experiments for the CASE shared task in ACL-IJCNLP 2021. The main purpose and goal for this shared task is to identify and classify sociopolitical and crisis event information at multiple levels and languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Main categories for subtasks are document classification (subtask1), sentence classification (sub-task2), event sentence coreference identification (subtask3) and event extraction (subtask4). Each subtask has three batches of training data which are in English, Spanish and Portuguese (H\u00fcrriyetoglu et al., 2020 (H\u00fcrriyetoglu et al., , 2019a . Document classification and sentence classification tasks are binary classification tasks which aim to classify news articles and sentences respectively. The classification criteria of the document classification task is whether news article contains at least one past or ongoing event. Sentence classification is also a binary classification task, sentences are labeled as 1 if they contain event triggers within them. Event sentence coreference identification task aims to identify which event sentences are referring the same event. The objective of the event extraction task is to gather event trigger information and event information from given news article.",
"cite_spans": [
{
"start": 285,
"end": 311,
"text": "(H\u00fcrriyetoglu et al., 2020",
"ref_id": "BIBREF4"
},
{
"start": 312,
"end": 341,
"text": "(H\u00fcrriyetoglu et al., , 2019a",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We participated in subtask1, subtask2 and sub-task4. The training data for subtask3 was not sufficient for us to build and optimize the model for the given time schedule, since it was not possible to get exact results for test data. Our results are based on validation data that we constructed from given training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a multilingual BERT Model (Devlin et al., 2018) for the shared task 1 (H\u00fcrriyetoglu et al., 2021a,b) . We trained and measured the performance of our model which is fine-tuned in English, Spanish and Portuguese. The model is formed by using and modifying multiple pretrained BERT models for each subtask and language we participated for 1 .",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 81,
"end": 111,
"text": "(H\u00fcrriyetoglu et al., 2021a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Training data includes three languages for each subtask, English, Spanish and Portuguese. The data distributions are given below for each level. For both document classification and sentence classification tasks, training data was shared in JSON Lines text format. In this data, each document/sentence has an ID, text and label. The data of event extraction task was shared in similar format to CoNLL format. In token level data, documents are starting with SAMPLE START token, document and sentences are separated by empty lines and [SEP] token respectively. There are seven different categories in event extraction dataset which are etime (Event time), fname (Facility name), organizer, participant, place, target and trigger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.1"
},
{
"text": "We used Huggingface's transformers (Wolf et al., 2020) library in order to fine-tune our BERT model for each subtask. We fine-tuned separate BERT models, each model pre-trained using a corpus in their respective language. The training data provided was quite unbalanced for every language in terms of both sample size and label distribution. We have tried over and under sampling techniques using imbalanced-learn package (Lema\u00eetre et al., 2017) to form a better training split. Both of the methods for our case affected the results in a negligible amount. So we decided to use naive random sampling for our experiments.",
"cite_spans": [
{
"start": 35,
"end": 54,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 422,
"end": 445,
"text": "(Lema\u00eetre et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "One other obstacle we worked on is BERT's maximum token size for its inputs. Tokenized input given to BERT is trimmed if it includes more than 512 tokens. This is a huge data loss for our subtasks, especially for document level classification. Many documents are trimmed by default configuration, so we tried a populating method to avoid losing any data with cost of extra labelling process. The idea is to split the data to be trimmed into chunks less than 512 tokens and label each one as it was labeled before splitting. This may cause a incorrect labeling process since the document is now cut into texts and each one of them may be against its parent label by its own in the training process. As a practical example of this method, let's say we have a text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "Z = X 0 \u2022 X 1 \u2022 ... \u2022 X n ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "where each X i are strings that form Z when concatenated. Tokenized length of Z is greater than 512 and it is labeled as 0 in training set. We split Z into X i s to obtain less than 512 tokens for each part and set the labels of each X i as 0. This blind labelling process may cause incorrectly assigned labels for some X i s, since label 1 may be more suitable for their individual meanings. However we did not observe a significant change on the results for any of the languages. Considering this method did not improve the results, we did not use it for our final tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "We also used this method in the prediction phase. The texts were splitted similarly as in the given example. The final prediction was decided by majority of votes method e.g. if 3 texts are labeled as 1,1,0, then their parent prediction is 1 as it has higher vote.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 English -BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "\u2022 Spanish -BETO (Ca\u00f1ete et al., 2020) \u2022 Portuguese -BERTimbau Base (Souza et al., 2020) For the multilingual BERT experiments we have used the pretrained mBERT model in order to finetune our data for subtasks. We used BERT tokenizer which is based on WordPiece tokenization algorithm. We splitted training data with the purpose of forming a test set before submitting the final results to shared task system. The split for train and test data distributed 80% to 20% respectively. The method we use concatenates all English, Spanish and Portuguese data and train them altogether. The split is deterministic and stayed same for all of our experiments for all models in order to obtain results for the same test data.",
"cite_spans": [
{
"start": 16,
"end": 37,
"text": "(Ca\u00f1ete et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 67,
"end": 87,
"text": "(Souza et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The scores we demonstrate on the document classification and sentence classification are based on f1-macro metric. The evaluation criteria that we used in event extraction for validation data is f1 score. We experimented with various epoch numbers and batch sizes with the intent of optimizing the hyper-parameters. We made our decisions to use these epoch numbers and batch sizes based on Language etime fname organizer participant place target trigger English 1209 1201 1261 2663 1570 1470 4595 Spanish 40 49 25 88 15 64 157 Portuguese 41 48 19 73 61 32 122 Table 3 : Label distribution of training data in token level our experimental setup. The epoch and batch parameters given to training phase for BERT Base for document classification task with epoch as 5 and batch as 32, sentence classification task with epoch as 3 and batch as 64. For Multilingual BERT we fine-tuned parameters as 3 epochs and 32 batches for document classification task and 5 epochs and 32 batches for sentence classification task. English BERT gives better results in comparison with multilingual BERT model by 0.09%. In our experiments we observed that multilingual BERT model has superior results for Spanish Language by 2.5% when compared to Spanish BERT model used in terms of our measurement criteria. Portuguese BERT has a higher f1-macro score by 0.42% when we compare it with its counterpart, multilingual BERT. There is no significant gap between the f1-macro scores of multilingual BERT and BERT Base models which are pretrained with their respective languages. Table 6 : Results for token level There isn't enough data points for Spanish and Portuguese languages for training and evaluation of event extraction task. We think that we need different approaches in order to train and evaluate this data for further testing, but we share the evaluation performance results for English language since it has enough data points to form an acceptable model when compared to the other languages. 
We made our document, sentence and event extraction submissions based on BERT base models which are trained with their respective languages for each . We used f1-score metric with the purpose of analysing event extraction performance for each token category.",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 590,
"text": "English 1209 1201 1261 2663 1570 1470 4595 Spanish 40 49 25 88 15 64 157 Portuguese 41 48 19 73 61 32 122 Table 3",
"ref_id": "TABREF1"
},
{
"start": 1575,
"end": 1582,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "This paper describes our system description for submission for CASE @ ACL-IJCNLP 2021: Socio-Political and Crisis Events Detection shared task. In training phase, we performed our experiments using separate pretrained language models with different training data. We report their performance for 3 tasks with the addition of the results for multilingual BERT model. We also compared our models with the other BERT models which are trained with their respective language data. We tested our fine-tuned language models with the test data provided by shared task organizers and made our submissions for document classification and sentence classification tasks. We achieved 80.82, 72.98 and 46.47 f1-macro scores in document classification. f1-macro scores of the sentence classification task are 79.67, 42.79 and 45.30 for English, Portuguese and Spanish respectively. We didn't make submission for token classification task due time limitations, but shared the results we observed in tests on our validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "One of the important issues with BERT is to optimize the training data in order to align with its maximum token size while training. In some tasks, especially in document level classification, this is a significant factor for pre-processing, since the length of the input texts are too long for being tokenized to fit BERT as whole. This situation leads to an experiment devoted for managing this limitation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Following our experiments in over-and undersampling methods, we would like to use data augmentation for future training methods in order to achieve an equilibrium in terms of training data labels. Augmenting method may be text generation from already given documents and sentences, but we do not expect this method being successful for languages other than English since our sample data is not as much for the other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "One another method we considered applying for future experiments was ensemble learning. The idea is training different models for the same task and observe their differentiated scores and group them by their success on predicting particular inputs. This method has a cost of training many models and measuring their prediction success with respect to the others, however after forming an optimal set of models, we can use them to unite on a cumulative score on a single input by assigning a weight for each of their individual output. This idea of combining many models can be also used for BERT initiated environment by constructing a system where the structure is built on top of BERT and inserting custom networks into its embedding layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "There are many improvements and analysis to be done in order to understand strengths and weaknesses of this system and further improvements might be added on top of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "Code that we used for this shared task submission can be found at https://github.com/alaeddingurel/ALEM-CASE2021",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ca\u00f1ete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jou-Hui",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Hojin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multilingual protest news detection -shared task 1, case 2021",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Farhana Ferdousi Liza",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ratan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Osman Mutlu, Farhana Ferdousi Liza, Erdem Y\u00f6r\u00fck, Ritesh Kumar, and Shyam Ratan. 2021a. Multilingual protest news detection -shared task 1, case 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Auto- mated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Challenges and applications of automated extraction of socio-political events from text (case 2021): Workshop and shared task report",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Vanni",
"middle": [],
"last": "Zavarella",
"suffix": ""
},
{
"first": "Jakub",
"middle": [],
"last": "Piskorski",
"suffix": ""
},
{
"first": "Reyyan",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Sociopolitical Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Hristo Tanev, Vanni Zavarella, Jakub Piskorski, Reyyan Yeniterzi, and Erdem Y\u00f6r\u00fck. 2021b. Challenges and applications of automated extraction of socio-political events from text (case 2021): Workshop and shared task report. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio- political Events from Text (CASE 2021), online. As- sociation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-context news corpus for protest events related knowledge base construction",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Y\u00fcret",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "F\u0131rat",
"middle": [],
"last": "Agr\u0131 Yoltar",
"suffix": ""
},
{
"first": "Burak",
"middle": [],
"last": "Duru\u015fan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "G\u00fcrel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.24432/C5D59R"
]
},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Deniz Y\u00fcret, Osman Mutlu, \u00c7 agr\u0131 Yoltar, F\u0131rat Duru\u015fan, and Burak G\u00fcrel. 2020. Cross-context news corpus for protest events related knowledge base construction. In Automated Knowledge Base Construction.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A task set proposal for automatic protest information collection across multiple countries",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Y\u00fcret",
"suffix": ""
},
{
"first": "Burak",
"middle": [],
"last": "Agr\u0131 Yoltar",
"suffix": ""
},
{
"first": "F\u0131rat",
"middle": [],
"last": "G\u00fcrel",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Duru\u015fan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mutlu",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Information Retrieval",
"volume": "",
"issue": "",
"pages": "316--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Deniz Y\u00fcret, \u00c7 agr\u0131 Yoltar, Burak G\u00fcrel, F\u0131rat Duru\u015fan, and Osman Mutlu. 2019a. A task set proposal for automatic protest information collection across multiple coun- tries. In Advances in Information Retrieval, pages 316-323, Cham. Springer International Publishing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Overview of clef 2019 lab protestnews: Extracting protests from news in a cross-context setting",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Y\u00fcret",
"suffix": ""
},
{
"first": "Burak",
"middle": [],
"last": "Agr\u0131 Yoltar",
"suffix": ""
},
{
"first": "F\u0131rat",
"middle": [],
"last": "G\u00fcrel",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Duru\u015fan",
"suffix": ""
},
{
"first": "Arda",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Akdemir",
"suffix": ""
}
],
"year": 2019,
"venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction",
"volume": "",
"issue": "",
"pages": "425--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Deniz Y\u00fcret, \u00c7 agr\u0131 Yoltar, Burak G\u00fcrel, F\u0131rat Duru\u015fan, Osman Mutlu, and Arda Akdemir. 2019b. Overview of clef 2019 lab protestnews: Extracting protests from news in a cross-context setting. In Experimental IR Meets Multilinguality, Multimodality, and Interac- tion, pages 425-432, Cham. Springer International Publishing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lema\u00eetre",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Christos",
"middle": [
"K"
],
"last": "Aridas",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Machine Learning Research",
"volume": "18",
"issue": "17",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lema\u00eetre, Fernando Nogueira, and Chris- tos K. Aridas. 2017. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. Journal of Machine Learning Research, 18(17):1-5.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERTimbau: pretrained BERT models for Brazilian Portuguese",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Lotufo",
"suffix": ""
}
],
"year": 2020,
"venue": "9th Brazilian Conference on Intelligent Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23 (to appear).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Lhoest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural Language Processing: System Demonstrations",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Label distribution of training data in document level",
"content": "<table><tr><td colspan=\"4\">The total number of documents, sentences and to-</td></tr><tr><td colspan=\"4\">kens provided for the English Language was much</td></tr><tr><td colspan=\"3\">larger than other source languages.</td><td/></tr><tr><td>Language</td><td>0</td><td>1</td><td>Total</td></tr><tr><td>English</td><td colspan=\"3\">18602 4223 22825</td></tr><tr><td>Spanish</td><td>2232</td><td>509</td><td>2741</td></tr><tr><td>Portuguese</td><td>961</td><td>221</td><td>1182</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF2": {
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "Results for document level",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF6": {
"text": "Results for sentence level",
"content": "<table><tr><td>BERT models pretrained with respective lan-</td></tr><tr><td>guages has greatest scores with comparison with</td></tr><tr><td>multilingual BERT for all languages in sentence</td></tr><tr><td>classification task.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}