{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:14:14.652417Z"
},
"title": "Dialect Identification through Adversarial Learning and Knowledge Distillation on Romanian BERT",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"country": "Romanian Academy"
}
},
"email": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"country": "Romanian Academy"
}
},
"email": "[email protected]"
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"country": "Romanian Academy"
}
},
"email": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Politehnica of Bucharest",
"location": {
"country": "Romanian Academy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dialect identification is a task with applicability in a vast array of domains, ranging from automatic speech recognition to opinion mining. This work presents our architectures used for the VarDial 2021 Romanian Dialect Identification subtask. We introduced a series of solutions based on Romanian or multilingual Transformers, as well as adversarial training techniques. At the same time, we experimented with a knowledge distillation tool in order to check whether a smaller model can maintain the performance of our best approach. Our best solution managed to obtain a weighted F1-score of 0.7324, allowing us to obtain the 2 nd place on the leaderboard.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Dialect identification is a task with applicability in a vast array of domains, ranging from automatic speech recognition to opinion mining. This work presents our architectures used for the VarDial 2021 Romanian Dialect Identification subtask. We introduced a series of solutions based on Romanian or multilingual Transformers, as well as adversarial training techniques. At the same time, we experimented with a knowledge distillation tool in order to check whether a smaller model can maintain the performance of our best approach. Our best solution managed to obtain a weighted F1-score of 0.7324, allowing us to obtain the 2 nd place on the leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialect identification has attracted researchers from both the fields of speech and natural language processing, because of its wide appliance in different tasks such as automatic speech recognition (Biadsy, 2011), machine translation (Salloum et al., 2014) , or opinion mining (Salamah and Elkhlifi, 2014) . VarDial (Zampieri et al., 2020) is an yearly workshop that deals with boosting the research in this direction by creating computational resources for languages that are closely related with each other, varieties in language, and dialects. This year's edition (Chakravarthi et al., 2021) was composed of four subtasks: (1) Dravidian Language Identification (DLI), (2) Romanian Dialect Identification (RDI), (3) Social Media Variety Geolocation (SMG), and (4) Uralic Language Identification (ULI). We chose to participate in the second subtask of the workshop, the RDI subtask. This subtask was also proposed in the previous edition of the workshop (Gaman et al., 2020) , but this time the participants are given an augmented version of the Moldavian and Romanian Dialectal Corpus (MOROCO) dataset (Butnaru and Ionescu, 2019) that contains texts from the news domain. Also, as in the previous edition, the test set comes from another domain, so cross-domain algorithms must be employed in order to maximize the results.",
"cite_spans": [
{
"start": 235,
"end": 257,
"text": "(Salloum et al., 2014)",
"ref_id": "BIBREF32"
},
{
"start": 278,
"end": 306,
"text": "(Salamah and Elkhlifi, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 317,
"end": 340,
"text": "(Zampieri et al., 2020)",
"ref_id": "BIBREF45"
},
{
"start": 568,
"end": 595,
"text": "(Chakravarthi et al., 2021)",
"ref_id": null
},
{
"start": 956,
"end": 976,
"text": "(Gaman et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 1105,
"end": 1132,
"text": "(Butnaru and Ionescu, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Romanian language resources are in an ongoing process of maturity, and the language is slowly starting to gain the datasets necessary to not be considered under-resourced anymore. Some of the recent publicly available Romanian corpora include the Large Romanian Sentiment DataSet (LaRoSeDa) (Tache et al., 2021), the Romanian Named Entity Corpus (RONEC) (Dumitrescu and , and the Romanian version of the Cross-lingual Question Answering Dataset (xQuAD) (Artetxe et al., 2020) . Moreover, with the rise of Transformer-based pretrained language models (Vaswani et al., 2017) , some Romanian model versions have also been created Masala et al., 2020) . The speech resources are also starting to catch-up with the introduction of the Romanian Speech Corpus (RSC) (Georgescu et al., 2020) which was recently released for public usage, counting around 100 hours of speech, and with the introduction of a deep neural network architecture based on Deep-Speech2 (Amodei et al., 2016) for automatic speech recognition (Avram et al., 2020b) .",
"cite_spans": [
{
"start": 453,
"end": 475,
"text": "(Artetxe et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 550,
"end": 572,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 627,
"end": 647,
"text": "Masala et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 759,
"end": 783,
"text": "(Georgescu et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 953,
"end": 974,
"text": "(Amodei et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1008,
"end": 1029,
"text": "(Avram et al., 2020b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we proposed a series of models based on Transformers, pre-trained on the Romanian language, and aimed to tackle the dialect identification task. By using different techniques, including adversarial training (Goodfellow et al., 2014b) , knowledge distillation (Hinton et al., 2015) , or Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a), we managed to obtain good scores, allowing us to classify 2 nd in the RDI subtask organized at VarDial 2021.",
"cite_spans": [
{
"start": 221,
"end": 247,
"text": "(Goodfellow et al., 2014b)",
"ref_id": null
},
{
"start": 273,
"end": 294,
"text": "(Hinton et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The current work is structured as follows. The next section presents the state of the art regarding the Romanian-Moldavian dialect identification task. Section 3 outlines the methods created by us in order to tackle the previously mentioned challenge, while section 4 describes the results and displays the error analysis we conducted. Section 5 concludes the work and features some future improvements that can be made to further increase the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The third edition of the VarDial evaluation campaign (Zampieri et al., 2019) took place in 2019 and presented another challenging task for the Romanian language -Moldavian vs. Romanian Crossdialect Topic identification. The task was composed of three smaller subtasks in which the participants had to (1) discriminate between the Moldavian (MD) and the Romanian (RO) dialects, (2) use Moldavian samples to classify Romanian samples by topic (MD \u2192 RO), and (3) use Romanian samples to classify Moldavian samples by topic (RO \u2192 MD). The highest macro F1-score on the first subtask was 89.50%, achieved by the DTeam team (Tudoreanu, 2019 ) with an ensemble model that combines a skip-gram convolutional neural network (CNN) (Kim, 2014) using the softmax loss and a CNN that was trained using a triplet loss. The highest scores for the cross-dialect subtasks 2 and 3 were obtained by the tearsofjoy team (Wu et al., 2019) that used a linear Support Vector Machine (SVM) classifier trained on a combination of character and word n-gram features. They obtained a 61.15% F1-score (macro) for the MD \u2192 RO subtasks and a 55.33% F1-score (macro) for the RO \u2192 MD subtask. Onose et al. (2019) adopted a non-Transformer approach and employed the usage of neural network models with Bidirectional Long Short-Term Memory cells (Hochreiter and Schmidhuber, 1997) , Bidirectional Gated Recurrent units (Cho et al., 2014) , as well as a Hierarchical Attention Network (Yang et al., 2016) .",
"cite_spans": [
{
"start": 53,
"end": 76,
"text": "(Zampieri et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 618,
"end": 634,
"text": "(Tudoreanu, 2019",
"ref_id": "BIBREF36"
},
{
"start": 721,
"end": 732,
"text": "(Kim, 2014)",
"ref_id": "BIBREF23"
},
{
"start": 900,
"end": 917,
"text": "(Wu et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1312,
"end": 1346,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF20"
},
{
"start": 1385,
"end": 1403,
"text": "(Cho et al., 2014)",
"ref_id": null
},
{
"start": 1450,
"end": 1469,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The fourth edition of VarDial (Gaman et al., 2020) occured in 2020 and came with the RDI task, a binary classification task where participants had to identify the dialect of a given text -either Romanian or Moldavian. To differentiate from the previous edition, the evaluation set was taken from another domain, namely Twitter messages. The highest F1-score (macro) of 78.75% was achieved by the Tubingen team (\u00c7\u00f6ltekin, 2020) by using an ensemble of SVMs trained on word and character n-grams. Also, this edition saw the appliance of the Romanian Bidirectional Encoder Representations from Transformers (BERT) by two teams (Popa and S , tef\u0203nescu, 2020; Zaharia et al., 2020a) , with the best performing variant obtaining a 77.51% F1-score (macro).",
"cite_spans": [
{
"start": 30,
"end": 50,
"text": "(Gaman et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 624,
"end": 654,
"text": "(Popa and S , tef\u0203nescu, 2020;",
"ref_id": "BIBREF30"
},
{
"start": 655,
"end": 677,
"text": "Zaharia et al., 2020a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our architectures are based on multiple Transformer-based models, considering their performance on various natural language processing (NLP) tasks (Avram et al., 2020a; Dima et al., 2020; Ionescu et al., 2020; Paraschiv et al., 2020; Paraschiv and Cercel, 2019; Tanase et al., 2020b,a; . For Romanian Dialect Identification, we employed the usage of a Transformer model extensively pre-trained on the Romanian language , as well as two multilingual models, namely XLM-RoBERTa (Conneau et al., 2019) and multilingual BERT (mBERT) (Pires et al., 2019) . Their performance on the Romanian language is lower when compared to Romanian BERT, however, in an ensemble, their predictions can prove to increase the score of our approach.",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(Avram et al., 2020a;",
"ref_id": null
},
{
"start": 169,
"end": 187,
"text": "Dima et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 188,
"end": 209,
"text": "Ionescu et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 210,
"end": 233,
"text": "Paraschiv et al., 2020;",
"ref_id": "BIBREF28"
},
{
"start": 234,
"end": 261,
"text": "Paraschiv and Cercel, 2019;",
"ref_id": "BIBREF27"
},
{
"start": 262,
"end": 285,
"text": "Tanase et al., 2020b,a;",
"ref_id": null
},
{
"start": 476,
"end": 498,
"text": "(Conneau et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 529,
"end": 549,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based Models",
"sec_num": "3.1"
},
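As a concrete illustration of the ensemble described above, the following is a minimal sketch of probability averaging over the three Transformer backbones. The checkpoint names are the public HuggingFace identifiers for Romanian BERT, XLM-RoBERTa, and mBERT; the averaging rule and the assumption that each checkpoint has already been fine-tuned on the RDI data are ours, as the paper does not specify the ensembling details.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public checkpoints for the three backbones; in practice each would first be
# fine-tuned on the RDI training data (the averaging rule below is an assumption).
CHECKPOINTS = [
    "dumitrescustefan/bert-base-romanian-cased-v1",  # Romanian BERT
    "xlm-roberta-base",                              # XLM-RoBERTa
    "bert-base-multilingual-cased",                  # mBERT
]

def ensemble_predict(text: str) -> int:
    """Average the RO/MD softmax probabilities of the individual models."""
    probs = []
    for ckpt in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)
        model.eval()
        inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        with torch.no_grad():
            probs.append(torch.softmax(model(**inputs).logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))  # 0 = RO, 1 = MD
```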
{
"text": "Moreover, using a training technique similar to the ones employed by GANs proved to improve the performance of several NLP models (Croce et al., 2020) . Similarly, we augment our Romanian BERT architecture with a generator, as well as a discriminator. The generator receives as input a 100-dimensional noise vector and produces an output vector as similar to real inputs as possible. Moreover, the discriminator acts as a classifier but, instead of only being forced to distinguish between the two classes of the RDI task (i.e., RO or MD), it also has the purpose to identify whether the input is fake or not and classify it accordingly, into a third class. The discriminator is penalized if it classifies a fake input as a true one or vice versa. After the training process, the generator components and the third output of the discriminator are disabled, therefore our architecture works as a classifier based on Romanian BERT features.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "(Croce et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Adversarial Network Applied on Romanian BERT",
"sec_num": "3.2"
},
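A minimal sketch of this generator/discriminator pair, following the GAN-BERT recipe of Croce et al. (2020), is shown below. The hidden size matches Romanian BERT's 768-dimensional representation; the layer count, activations, and dropout rate are illustrative assumptions.

```python
import torch.nn as nn

NOISE_DIM, HIDDEN = 100, 768  # 768 = Romanian BERT hidden size

class Generator(nn.Module):
    """Maps a 100-dimensional noise vector to a fake BERT-like representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, HIDDEN), nn.LeakyReLU(0.2),
            nn.Linear(HIDDEN, HIDDEN),
        )

    def forward(self, noise):
        return self.net(noise)

class Discriminator(nn.Module):
    """Classifies a representation into RO, MD, or FAKE (three outputs)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(0.2), nn.Dropout(0.1),
        )
        self.head = nn.Linear(HIDDEN, 3)  # index 2 = FAKE; disabled at inference

    def forward(self, reps):
        return self.head(self.body(reps))
```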
{
"text": "We continued our experiments by applying a knowledge distillation technique, in order to check whether the high performance of Romanian BERT is maintained after it is distilled into a smaller model. For this approach, we used TextBrewer (Yang et al., 2020) , a tool that receives as input a teacher model and trains a student model such that the latter is able to closely replicate the behavior of the former. We used Romanian BERT as the teacher, a Transformer-based model with 12 hidden layers, thus implying a large number of parameters. Moreover, the student model is also based on Transformers, however, it is not pre-trained on any corpus and it has only 3 hidden layers, instead of 12. This reduction greatly decreases the computational resources required for further fine-tuning or prediction.",
"cite_spans": [
{
"start": 237,
"end": 256,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Distillation Applied on Romanian BERT",
"sec_num": "3.3"
},
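For reference, the classic distillation objective of Hinton et al. (2015) that underlies this setup is sketched below; TextBrewer implements a generalization of it with additional intermediate-layer losses. The temperature and mixing weight are illustrative values, not taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Soft-target distillation loss mixed with the hard-label cross-entropy."""
    # KL divergence between temperature-softened teacher and student distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Standard cross-entropy on the gold RO/MD labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```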
{
"text": "Adversarial training has the purpose of enhancing the robustness of the model and, as a consequence, increase its performance in certain scenarios (Karimi et al., 2020) . The system works by introducing adversarial perturbations at the level of the Transformer embeddings. The process turns into a minimization problem, with the purpose of determining the worst perturbations while minimizing the loss function.",
"cite_spans": [
{
"start": 147,
"end": 168,
"text": "(Karimi et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "The following formulas present the process of obtaining the adversarial perturbations, based on the gradient of the loss function g:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g = \u2207 x log p(y|x;\u03b8)",
"eq_num": "(1)"
}
],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "r adv = \u2212 g ||g|| 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "where\u03b8 is a copy of the model's parameters, r adv are the perturbations, and is the dimension of the perturbations. After computing the perturbations, they are then added to the Transformer embeddings and an adversarial loss is obtained, given by Eq. 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2212log p(y|x + r adv ; \u03b8)",
"eq_num": "(3)"
}
],
"section": "Adversarial Training",
"sec_num": "3.4"
},
{
"text": "The final loss represents the sum between the adversarial loss and the simple loss obtained by passing the unaltered input through the neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Training",
"sec_num": "3.4"
},
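A minimal sketch of one adversarial training step on the embedding layer, implementing Eqs. (1)-(3) for a HuggingFace-style classifier, is given below. The value of epsilon is an assumption, as the paper does not report it; the per-token gradient normalization is one common variant.

```python
import torch

def adversarial_step(model, inputs, labels, epsilon: float = 1.0):
    """Return the sum of the standard and adversarial losses (Eqs. 1-3)."""
    # Forward pass on the unaltered embeddings
    embeds = model.get_input_embeddings()(inputs["input_ids"])
    out = model(inputs_embeds=embeds,
                attention_mask=inputs["attention_mask"], labels=labels)
    # Eq. (1): gradient w.r.t. the embeddings; since the loss is -log p,
    # this gradient equals -g in the paper's notation
    grad = torch.autograd.grad(out.loss, embeds, retain_graph=True)[0]
    # Eq. (2): r_adv = -eps * g / ||g||_2 == +eps * grad / ||grad||_2
    r_adv = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    # Eq. (3): adversarial loss on the perturbed embeddings
    adv = model(inputs_embeds=embeds + r_adv.detach(),
                attention_mask=inputs["attention_mask"], labels=labels)
    # Final loss: sum of the two losses
    return out.loss + adv.loss
```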
{
"text": "The most important element that influenced the performance of our models is represented by the way we selected the training entries, as well as how we established the threshold for which an entry can be considered Romanian or Moldavian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Selection Technique",
"sec_num": "3.5"
},
{
"text": "Firstly, the training dataset contains long entries, most of them surpassing the 512-token limit input by our Transformer models. At the same time, the validation entries are much shorter, with an average length of just 20 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Selection Technique",
"sec_num": "3.5"
},
{
"text": "Therefore, the first step we performed was to split the training entries into sentences and label each sentence with the label of the initial entry. However, the number of training entries was greatly increased, from 39,487 to 431,875. Considering this large number, we decided to filter the training entries, inasmuch as only the most relevant ones were kept for the final fine-tuning process. To do this, we initially trained our architecture, based on the Romanian BERT model, on the original validation entries. After four epochs, we tested the model on the split training entries and we selected only the ones that predicted Romanian or Moldavian with the confidence of over 95%. This way, we were able to select only the entries that are the closest in structure and context to the one from the validation dataset and, presumably, from the test one. We reduced the number of the split training entries to 158,363.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Selection Technique",
"sec_num": "3.5"
},
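A sketch of this confidence-based filtering step is shown below, assuming the split sentences have already been scored by the validation-trained model; the array layout is illustrative.

```python
import numpy as np

def filter_split_entries(probs, sentences, labels, confidence: float = 0.95):
    """Keep only split sentences predicted (as RO or MD) with >95% confidence."""
    probs = np.asarray(probs)               # shape: (n_sentences, 2) softmax outputs
    keep = probs.max(axis=1) > confidence   # max class probability above threshold
    kept_sentences = [s for s, k in zip(sentences, keep) if k]
    kept_labels = [y for y, k in zip(labels, keep) if k]
    return kept_sentences, kept_labels
```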
{
"text": "For determining the prediction threshold, we trained our architectures on the previously mentioned entries and, after the final epoch, we discovered a prediction threshold that maximized the performance. We performed the selection by trying different values such as, if the confidence of a prediction surpasses the threshold, then it is classified as Moldavian, for example, if not, it is classified as Romanian. The optimal value of the threshold was 0.21.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Custom Selection Technique",
"sec_num": "3.5"
},
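The threshold search itself can be sketched as a simple sweep over candidate values on the validation set; the 0.01 grid below is an assumption, while the reported optimum is 0.21.

```python
import numpy as np
from sklearn.metrics import f1_score

def find_threshold(md_probs: np.ndarray, gold: np.ndarray) -> float:
    """Label an entry Moldavian (1) when its MD confidence exceeds the
    threshold, Romanian (0) otherwise, and keep the best weighted F1."""
    best_t, best_f1 = 0.5, 0.0
    for t in np.arange(0.01, 1.0, 0.01):
        preds = (md_probs > t).astype(int)
        score = f1_score(gold, preds, average="weighted")
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t  # the paper reports an optimum of 0.21
```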
{
"text": "We also experimented with various machine learning techniques, such as SVMs, Random Forest, Multinomial Naive Bayes, and Logistic Regression, alongside character n-gram features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learning Approaches",
"sec_num": "3.6"
},
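A minimal scikit-learn sketch of such a baseline is given below. The C=3 value matches the best SVM reported in Section 4.3; the TF-IDF weighting, the (1, 5) n-gram range, and the LinearSVC variant are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Character n-gram + SVM baseline; vectorizer settings are illustrative.
baseline = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 5)),
    LinearSVC(C=3),
)
# Usage: baseline.fit(train_texts, train_labels); baseline.predict(test_texts)
```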
{
"text": "The dataset is the one provided for the RDI subtask of the VarDial 2021 competition. There are three subsets, one for training, one for validation, and one for testing. The validation and testing datasets (5,237 and 5,282 entries, respectively) contain very short entries, with an average length of 20 words. At the same time, the training dataset (39,487 entries) contains much longer texts, many of them surpassing 512 words. The class distribution is relatively balanced, with 21,366 Romanian entries and 18,121 Moldavian entries in the training dataset plus 2,625 and 2,612 entries, respectively, in the validation one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Analysis and Preprocessing",
"sec_num": "4.1"
},
{
"text": "In terms of preprocessing, we standardized the punctuation, by removing repeated characters such as question marks. Moreover, we cleaned the entries that contained an unnecessary number of whitespaces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Analysis and Preprocessing",
"sec_num": "4.1"
},
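The preprocessing described above can be sketched with two regular expressions; the exact rules used by the authors are not given, so this is an approximation.

```python
import re

def preprocess(text: str) -> str:
    """Standardize punctuation and whitespace."""
    text = re.sub(r"([?!.,])\1+", r"\1", text)  # collapse repeated punctuation
    text = re.sub(r"\s+", " ", text).strip()    # collapse repeated whitespace
    return text
```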
{
"text": "For running the Transformer-based models we used Adam with weight decay optimizer (AdamW) (Kingma and Ba, 2014) and an epsilon value of 1e-8. We fine-tuned them for four epochs with a learning rate of 2e-5. Moreover, for the GAN approach, we allowed the generator to backpropagate its loss every 200 steps, such that the discriminator, which also performs as the classifier, had a small advantage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "4.2"
},
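The optimizer setup follows directly from these hyperparameters; a sketch using PyTorch's AdamW is shown below (the choice of torch.optim.AdamW over other implementations is an assumption).

```python
import torch

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    """AdamW with the Section 4.2 hyperparameters: lr 2e-5, eps 1e-8,
    used to fine-tune each Transformer for four epochs."""
    return torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8)
```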
{
"text": "Table 1 presents all the results obtained with neural network approaches. The best results are obtained by the ensemble used by taking the predictions from the six deep learning models used in our experiments. With a weighted F1-score of 0.7324, the ensemble slightly surpasses the Romanian BERT model trained under adversarial circumstances, which scored a weighted F1 of 0.7318. The small performance improvement is noticeable on the validation dataset, as well, the ensemble scoring a 0.8564 weighted F1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The adversarial training technique helps the Romanian BERT model improve its performance, considering the higher weighted F1-score on the validation dataset, 0.8559, obtained by the adversarial model when compared to the standard counterpart (0.8492). In contrast, the GAN training approach does not help the model achieve improved performance. The weighted F1-score is slightly higher than the standard Romanian BERT. However, when compared to the adversarial training technique, GAN+Romanian BERT lacks behind in terms of performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The distilled model obtained by using TextBrewer with Romanian BERT comes last in terms of performance in the group of the Romanian BERT-based models. The weighted F1-scores of 0.7891 and 0.6744 obtained on the validation and test datasets are behind all the scores obtained by using variations of the Romanian BERT. The lack of performance can be attributed to the lower number of parameters of the distilled model, which it is not able to grasp the distinctive features of Romanian and Moldavian entries as well as the full model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The last models in terms of performance are Multilingual BERT and XLM-RoBERTa. Even though the pre-training corpus of XLM-RoBERTa is bigger, the model is surpassed by mBERT. One reason for this can be represented by the inclusion of more Romanian entries in the mBERT pre-training corpus. Table 2 contains the results obtained by using various machine learning techniques alongside character n-gram features. The scores are much lower when compared to the ones achieved by the neural network approaches. The best performing model is the SVM trained with the parameter C equal to 3. The model achieves a weighted F1score of 0.6298, 0.1324 lower than the worst score obtained by the neural network approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 296,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Most misclassifications come from the inability of the models to identify dialect-specific features in the input entries. Taking as an example the validation dataset, many entries that are classified as Romanian or Moldavian have no surface differences in terms of structure or the used words (i.e., \"Ultimele s , tiri despre coronavirus $NE$\" is labeled with the Romanian dialect, while \"Cum te protejez\u00ee \u0131mpotriva coronavirusului $NE$\" is Moldavian). At the same time, the entries that are classified with high confidence as either Romanian or Moldavian are the ones with unmasked named entities, that are also present in the training dataset (i.e., \"Arafat\", \"Ceban\", \"Igor\") or dialect-specific words (i.e., \"raional\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "Some entires are obstructed by the masked named entities (i.e., \"Este criz\u0203\u00een $NE$ s , i $NE$ $NE$\", \"FOTO-VIDEO. Ministrul $NE$ $NE$ $NE$ s , i $NE$ $NE$ $NE$ la $NE$ $NE$ Nu este vorba\") and therefore our models cannot properly identify features specific to one dialect or the other. Moreover, the performance difference between the validation and test sets (i.e., 0.8564 vs. 0.7324) can be attributed to the selection technique we used for filtering the training entries. The new inputs are chosen such that they are similar to the validation entries, not the test ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.4"
},
{
"text": "This work presents our approaches for the Romanian Dialect Identification subtask organized by VarDial 2021. We proposed a series of systems based on state-of-the-art, Transformer-based models, which imply the usage of adversarial techniques for improving the robustness. At the same time, we also experimented with TextBrewer, a knowledge distillation tool that allows us to compress a teacher model into a student model such that the performance can be maintained while reducing the size. Moreover, by using an ensemble of all the models we experimented with, we managed to improve the overall performance. For future work, we intend to experiment with different variants of adversarial training for increasing the scores obtained by our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep speech 2: End-to-end speech recognition in english and mandarin",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Rishita",
"middle": [],
"last": "Sundaram Ananthanarayanan",
"suffix": ""
},
{
"first": "Jingliang",
"middle": [],
"last": "Anubhai",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Battenberg",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Case",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Catanzaro",
"suffix": ""
},
{
"first": "Guoliang",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2016,
"venue": "In International conference on machine learning",
"volume": "",
"issue": "",
"pages": "173--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guo- liang Chen, et al. 2016. Deep speech 2: End-to-end speech recognition in english and mandarin. In In- ternational conference on machine learning, pages 173-182. PMLR.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the cross-lingual transferability of monolingual representations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4623--4637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 4623-4637.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dumitru-Clementin Cercel, and Costin-Gabriel Chiru. 2020a. Upb at semeval-2020 task 6: Pretrained language models for definitionextraction",
"authors": [
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.05603"
]
},
"num": null,
"urls": [],
"raw_text": "Andrei-Marius Avram, Dumitru-Clementin Cercel, and Costin-Gabriel Chiru. 2020a. Upb at semeval-2020 task 6: Pretrained language models for definitionex- traction. arXiv preprint arXiv:2009.05603.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards a romanian end-to-end automatic speech recognition based on deepspeech2",
"authors": [
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "P\u0203is",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Vasile",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tufis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. Rom. Acad. Ser. A",
"volume": "21",
"issue": "",
"pages": "395--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei-Marius Avram, P\u0202IS , Vasile, and Dan Tufis. 2020b. Towards a romanian end-to-end automatic speech recognition based on deepspeech2. In Proc. Rom. Acad. Ser. A, volume 21, pages 395-402.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic dialect and accent recognition and its application to speech recognition",
"authors": [
{
"first": "Fadi",
"middle": [],
"last": "Biadsy",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fadi Biadsy. 2011. Automatic dialect and accent recognition and its application to speech recogni- tion. Ph.D. thesis, Columbia University.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Moroco: The moldavian and romanian dialectal corpus",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Butnaru",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Butnaru and Radu Tudor Ionescu. 2019. Mo- roco: The moldavian and romanian dialectal corpus. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 688- 698.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Bharathi Raja Chakravarthi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "G\u0203man",
"suffix": ""
},
{
"first": "Tudor",
"middle": [],
"last": "Radu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Ruba",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Priyadharshini",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Purschke",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharathi Raja Chakravarthi, Mihaela G\u0203man, Radu Tu- dor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Ruba Priyadharshini, Christoph Purschke, Eswari Rajagopal, Yves Scherrer, and Marcos Zampieri. 2021. Findings of the VarDial Evaluation Campaign 2021. In Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dialect identification under domain shift: Experiments with discriminating romanian and moldavian",
"authors": [
{
"first": "",
"middle": [],
"last": "\u00c7 Agr\u0131 \u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "186--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 agr\u0131 \u00c7\u00f6ltekin. 2020. Dialect identification under do- main shift: Experiments with discriminating roma- nian and moldavian. In Proceedings of the 7th Work- shop on NLP for Similar Languages, Varieties and Dialects, pages 186-192.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gan-bert: Generative adversarial learning for robust text classification with a bunch of labeled examples",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Castellucci",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2114--2119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Giuseppe Castellucci, and Roberto Basili. 2020. Gan-bert: Generative adversarial learn- ing for robust text classification with a bunch of la- beled examples. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 2114-2119.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Approaching smm4h 2020 with ensembles of bert flavours",
"authors": [
{
"first": "George-Andrei",
"middle": [],
"last": "Dima",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task",
"volume": "",
"issue": "",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Andrei Dima, Andrei-Marius Avram, and Dumitru-Clementin Cercel. 2020. Approaching smm4h 2020 with ensembles of bert flavours. In Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task, pages 153-157.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The birth of romanian bert",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dumitrescu",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "4324--4328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of romanian bert. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 4324-4328.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Introducing ronec-the romanian named entity corpus",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Daniel Dumitrescu",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4436--4443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Daniel Dumitrescu and Andrei-Marius Avram. 2020. Introducing ronec-the romanian named en- tity corpus. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 4436- 4443.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A report on the vardial evaluation campaign 2020",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Gaman",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Tudor",
"middle": [],
"last": "Radu",
"suffix": ""
},
{
"first": "Heidi",
"middle": [],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Krister",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Lind\u00e9n",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Partanen",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Purschke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihaela Gaman, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, et al. 2020. A report on the vardial evaluation campaign 2020. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 1-14.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Rsc: A romanian read speech corpus for automatic speech recognition",
"authors": [
{
"first": "Alexandru-Lucian",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Horia",
"middle": [],
"last": "Cucu",
"suffix": ""
},
{
"first": "Andi",
"middle": [],
"last": "Buzo",
"suffix": ""
},
{
"first": "Corneliu",
"middle": [],
"last": "Burileanu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6606--6612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandru-Lucian Georgescu, Horia Cucu, Andi Buzo, and Corneliu Burileanu. 2020. Rsc: A romanian read speech corpus for automatic speech recognition. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6606-6612.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Generative adversarial networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.2661"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014a. Gen- erative adversarial networks. arXiv preprint arXiv:1406.2661.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adversarial examples",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodfellow",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6572"
]
},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014b. Explaining and harnessing adver- sarial examples. arXiv preprint arXiv:1412.6572.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Upb at fincausal-2020, tasks 1 & 2: Causality analysis in financial documents using pretrained language models",
"authors": [],
"year": 2020,
"venue": "Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation",
"volume": "",
"issue": "",
"pages": "55--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Ionescu, Andrei-Marius Avram, George-Andrei Dima, Dumitru-Clementin Cercel, and Mihai Das- calu. 2020. Upb at fincausal-2020, tasks 1 & 2: Causality analysis in financial documents using pre- trained language models. In Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation, pages 55- 59.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adversarial training for aspect-based sentiment analysis with bert",
"authors": [
{
"first": "Akbar",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Leonardo",
"middle": [],
"last": "Rossi",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Prati",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Full",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.11316"
]
},
"num": null,
"urls": [],
"raw_text": "Akbar Karimi, Leonardo Rossi, Andrea Prati, and Katharina Full. 2020. Adversarial training for aspect-based sentiment analysis with bert. arXiv preprint arXiv:2001.11316.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Robert-a romanian bert model",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Masala",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Dascalu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6626--6637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Masala, Stefan Ruseti, and Mihai Dascalu. 2020. Robert-a romanian bert model. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6626-6637.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sc-upb at the vardial 2019 evaluation campaign: Moldavian vs. romanian crossdialect topic identification",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Onose",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumitru-Clementin",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Trausan-Matu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "172--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Onose, Dumitru-Clementin Cercel, and Stefan Trausan-Matu. 2019. Sc-upb at the vardial 2019 evaluation campaign: Moldavian vs. romanian cross- dialect topic identification. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Va- rieties and Dialects, pages 172-177.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Upb at germeval-2019 task 2: Bert-based offensive language classification of german tweets",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Paraschiv",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Paraschiv and Dumitru-Clementin Cercel. 2019. Upb at germeval-2019 task 2: Bert-based offensive language classification of german tweets. In KON- VENS.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Upb at semeval-2020 task 11: Propaganda detection with domain-specific trained bert",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Paraschiv",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumitru-Clementin",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dascalu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.05289"
]
},
"num": null,
"urls": [],
"raw_text": "Andrei Paraschiv, Dumitru-Clementin Cercel, and Mi- hai Dascalu. 2020. Upb at semeval-2020 task 11: Propaganda detection with domain-specific trained bert. arXiv preprint arXiv:2009.05289.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "How multilingual is multilingual bert?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Applying multilingual and monolingual transformer-based models for dialect identification",
"authors": [
{
"first": "Cristian",
"middle": [],
"last": "Popa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vlad",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "193--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristian Popa and Vlad S , tef\u0203nescu. 2020. Apply- ing multilingual and monolingual transformer-based models for dialect identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Va- rieties and Dialects, pages 193-201.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Microblogging opinion mining approach for kuwaiti dialect",
"authors": [
{
"first": "Jana",
"middle": [],
"last": "Ben Salamah",
"suffix": ""
},
{
"first": "Aymen",
"middle": [],
"last": "Elkhlifi",
"suffix": ""
}
],
"year": 2014,
"venue": "The International Conference on Computing Technology and Information Management (ICC-TIM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jana Ben Salamah and Aymen Elkhlifi. 2014. Mi- croblogging opinion mining approach for kuwaiti di- alect. In The International Conference on Comput- ing Technology and Information Management (ICC- TIM), page 388. Society of Digital Information and Wireless Communication.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Sentence level dialect identification for machine translation system selection",
"authors": [
{
"first": "Wael",
"middle": [],
"last": "Salloum",
"suffix": ""
},
{
"first": "Heba",
"middle": [],
"last": "Elfardy",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Alamir-Salloum",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "772--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wael Salloum, Heba Elfardy, Linda Alamir-Salloum, Nizar Habash, and Mona Diab. 2014. Sentence level dialect identification for machine translation system selection. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 772-778.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Clustering word embeddings with self-organizing maps. application on larosedaa large romanian sentiment data set",
"authors": [
{
"first": "Mihaela",
"middle": [],
"last": "Anca Maria Tache",
"suffix": ""
},
{
"first": "Radu Tudor",
"middle": [],
"last": "Gaman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.04197"
]
},
"num": null,
"urls": [],
"raw_text": "Anca Maria Tache, Mihaela Gaman, and Radu Tu- dor Ionescu. 2021. Clustering word embeddings with self-organizing maps. application on laroseda- a large romanian sentiment data set. arXiv preprint arXiv:2101.04197.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Upb at semeval-2020 task 12: Multilingual offensive language detection on social media by fine-tuning a variety of bertbased models",
"authors": [
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Mircea-Adrian Tanase",
"suffix": ""
},
{
"first": "Costing-Gabriel",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiru",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.13609"
]
},
"num": null,
"urls": [],
"raw_text": "Mircea-Adrian Tanase, Dumitru-Clementin Cercel, and Costing-Gabriel Chiru. 2020a. Upb at semeval- 2020 task 12: Multilingual offensive language detec- tion on social media by fine-tuning a variety of bert- based models. arXiv preprint arXiv:2010.13609.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Detecting aggressiveness in mexican spanish social media content by fine-tuning transformer-based models",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Mircea-Adrian Tanase",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dascalu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020) co-located with 36th Conference of the Spanish Society for Natural Language Processing (SEPLN) 2020",
"volume": "",
"issue": "",
"pages": "236--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mircea-Adrian Tanase, George-Eduard Zaharia, Dumitru-Clementin Cercel, and Mihai Dascalu. 2020b. Detecting aggressiveness in mexican spanish social media content by fine-tuning transformer-based models. In Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020) co-located with 36th Conference of the Spanish Society for Natural Language Processing (SEPLN) 2020, pages 236-245.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Dteam@ vardial 2019: Ensemble based on skip-gram and triplet loss neural networks for moldavian vs. romanian cross-dialect topic identification",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Tudoreanu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "202--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Tudoreanu. 2019. Dteam@ vardial 2019: En- semble based on skip-gram and triplet loss neural networks for moldavian vs. romanian cross-dialect topic identification. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 202-208.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Upb at semeval-2020 task 8: Joint textual and visual modeling in a multi-task learning architecture for memotion analysis",
"authors": [
{
"first": "George-Alexandru",
"middle": [],
"last": "Vlad",
"suffix": ""
},
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "Costin-Gabriel",
"middle": [],
"last": "Chiru",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Trausan-Matu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.02779"
]
},
"num": null,
"urls": [],
"raw_text": "George-Alexandru Vlad, George-Eduard Zaharia, Dumitru-Clementin Cercel, Costin-Gabriel Chiru, and Stefan Trausan-Matu. 2020. Upb at semeval- 2020 task 8: Joint textual and visual modeling in a multi-task learning architecture for memotion analysis. arXiv preprint arXiv:2009.02779.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Language discrimination and transfer learning for similar languages: experiments with feature combinations and adaptation",
"authors": [
{
"first": "Nianheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "DeMattos",
"suffix": ""
},
{
"first": "Kwok Him",
"middle": [],
"last": "So",
"suffix": ""
},
{
"first": "Pin-zhen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "\u00c7a\u011fr\u0131",
"middle": [],
"last": "\u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianheng Wu, Eric DeMattos, Kwok Him So, Pin-zhen Chen, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2019. Language discrim- ination and transfer learning for similar languages: experiments with feature combinations and adapta- tion. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 54-63.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Hierarchical attention networks for document classification",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies",
"volume": "",
"issue": "",
"pages": "1480--1489",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computa- tional linguistics: human language technologies, pages 1480-1489.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Textbrewer: An open-source knowledge distillation toolkit for natural language processing",
"authors": [
{
"first": "Ziqing",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12620"
]
},
"num": null,
"urls": [],
"raw_text": "Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, and Guoping Hu. 2020. Textbrewer: An open-source knowledge distilla- tion toolkit for natural language processing. arXiv preprint arXiv:2002.12620.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Exploring the power of romanian bert for dialect identification",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Andrei-Marius",
"middle": [],
"last": "Avram",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "232--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Eduard Zaharia, Andrei-Marius Avram, Dumitru-Clementin Cercel, and Traian Rebedea. 2020a. Exploring the power of romanian bert for dialect identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 232-241.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Cross-lingual transfer learning for complex word identification",
"authors": [
{
"first": "George-Eduard",
"middle": [],
"last": "Zaharia",
"suffix": ""
},
{
"first": "Dumitru-Clementin",
"middle": [],
"last": "Cercel",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Dascalu",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI)",
"volume": "",
"issue": "",
"pages": "384--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George-Eduard Zaharia, Dumitru-Clementin Cercel, and Mihai Dascalu. 2020b. Cross-lingual trans- fer learning for complex word identification. In 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), pages 384-390. IEEE.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A report on the third vardial evaluation campaign",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Samardzic",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Klyueva",
"suffix": ""
},
{
"first": "Tung-Le",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Radu",
"middle": [
"Tudor"
],
"last": "Ionescu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Shervin Malmasi, Yves Scherrer, Tanja Samardzic, Francis Tyers, Miikka Silfverberg, Natalia Klyueva, Tung-Le Pan, Chu-Ren Huang, Radu Tudor Ionescu, et al. 2019. A report on the third vardial evaluation campaign. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 1-16.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Natural language processing for similar languages, varieties, and dialects: A survey",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": 2020,
"venue": "Natural Language Engineering",
"volume": "26",
"issue": "6",
"pages": "595--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Zampieri, Preslav Nakov, and Yves Scherrer. 2020. Natural language processing for similar lan- guages, varieties, and dialects: A survey. Natural Language Engineering, 26(6):595-612.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Model</td><td colspan=\"6\">Validation macro-F1 weighted-F1 micro-F1 macro-F1 weighted-F1 micro-F1 Test</td></tr><tr><td>Romanian BERT</td><td>0.8492</td><td>0.8492</td><td>0.8495</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Romanian BERT + Adversarial Training</td><td>0.8558</td><td>0.8559</td><td>0.8564</td><td>0.7319</td><td>0.7318</td><td>0.7319</td></tr><tr><td>Multilingual BERT</td><td>0.8097</td><td>0.8097</td><td>0.8098</td><td>-</td><td>-</td><td>-</td></tr><tr><td>XLM-RoBERTa</td><td>0.7619</td><td>0.7619</td><td>0.7622</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Romanian BERT + GAN</td><td>0.8516</td><td>0.8516</td><td>0.8523</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Romanian BERT + TextBrewer</td><td>0.7891</td><td>0.7891</td><td>0.7893</td><td>0.6743</td><td>0.6744</td><td>0.6749</td></tr><tr><td>Ensemble</td><td>0.8564</td><td>0.8564</td><td>0.8566</td><td>0.7324</td><td>0.7324</td><td>0.7324</td></tr></table>",
"text": "Deep learning results.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"content": "<table><tr><td>Model</td><td colspan=\"3\">Validation macro-F1 weighted-F1 micro-F1</td></tr><tr><td>SVM</td><td>0.6297</td><td>0.6298</td><td>0.6324</td></tr><tr><td>Random Forest</td><td>0.5216</td><td>0.5218</td><td>0.5379</td></tr><tr><td>Multinomial Naive Bayes</td><td>0.5726</td><td>0.5728</td><td>0.5921</td></tr><tr><td>Logistic Regression</td><td>0.6250</td><td>0.6251</td><td>0.6270</td></tr></table>",
"text": "Machine learning results on the validation dataset.",
"type_str": "table",
"num": null,
"html": null
}
}
}
}