|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:14:30.242998Z" |
|
}, |
|
"title": "Exploring the Power of Romanian BERT for Dialect Identification", |
|
"authors": [ |
|
{ |
|
"first": "George-Eduard", |
|
"middle": [], |
|
"last": "Zaharia", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University Politehnica of Bucharest", |
|
"location": { |
|
"country": "Romanian Academy" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Andrei-Marius", |
|
"middle": [], |
|
"last": "Avram", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University Politehnica of Bucharest", |
|
"location": { |
|
"country": "Romanian Academy" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dumitru-Clementin", |
|
"middle": [], |
|
"last": "Cercel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University Politehnica of Bucharest", |
|
"location": { |
|
"country": "Romanian Academy" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Traian", |
|
"middle": [], |
|
"last": "Rebedea", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University Politehnica of Bucharest", |
|
"location": { |
|
"country": "Romanian Academy" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Dialect identification represents a key aspect for improving a series of tasks, such as opinion mining, considering that the location of the speaker can greatly influence the attitude towards a subject. In this work, we describe the systems developed by our team for VarDial 2020: Romanian Dialect Identification, a task specifically created for challenging participants to solve the dialect identification problem for an under-resourced language, such as Romanian. More specifically, we introduce a series of neural architectures based on Transformers, that combine a BERT model exclusively pre-trained on the Romanian language with several other techniques, such as adversarial training or character-level embeddings. By using a custom Romanian BERT model, we were able to reach a macro-F1 score of 64.75 on the test dataset, thus allowing us to be ranked 5 th out of 8 participant teams. Moreover, we improved the F1-scores reported by the authors of MOROCO with over 1.7%, obtaining a 96.23% macro-F1 score, alongside micro and weighted F1 scores of 96.25%.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Dialect identification represents a key aspect for improving a series of tasks, such as opinion mining, considering that the location of the speaker can greatly influence the attitude towards a subject. In this work, we describe the systems developed by our team for VarDial 2020: Romanian Dialect Identification, a task specifically created for challenging participants to solve the dialect identification problem for an under-resourced language, such as Romanian. More specifically, we introduce a series of neural architectures based on Transformers, that combine a BERT model exclusively pre-trained on the Romanian language with several other techniques, such as adversarial training or character-level embeddings. By using a custom Romanian BERT model, we were able to reach a macro-F1 score of 64.75 on the test dataset, thus allowing us to be ranked 5 th out of 8 participant teams. Moreover, we improved the F1-scores reported by the authors of MOROCO with over 1.7%, obtaining a 96.23% macro-F1 score, alongside micro and weighted F1 scores of 96.25%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Currently, the Romanian language is still considered an under-resourced language, although in the recent years, several datasets were created that tried to mitigate this problem such as the reference corpus of the Contemporary Romanian Language (CoRoLa) (Mititelu et al., 2018) , the Romanian Named Entity Corpus (RONEC) (Dumitrescu and Avram, 2019), the Biomedical Gold Standard Corpus (MoNERo) (Mitrofan et al., 2019) , the Romanian Speech Corpus (RSC) (Georgescu et al., 2020) , and the Romanian WordNet (Dumitrescu et al., 2018) . With the rise of attention-based language models, the first Romanian Bidirectional Encoder Representations from Transformer (Ro-BERT) appeared and it outperformed Multilingual BERT (M-BERT) (Pires et al., 2019) on all the evaluation tasks (Dumitrescu et al., 2020) 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 277, |
|
"text": "(Mititelu et al., 2018)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 419, |
|
"text": "(Mitrofan et al., 2019)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 479, |
|
"text": "(Georgescu et al., 2020)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 507, |
|
"end": 532, |
|
"text": "(Dumitrescu et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 725, |
|
"end": 745, |
|
"text": "(Pires et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the most addressed and challenging tasks in natural language processing research is text dialect identification. As a response to this challenge, Butnaru and Ionescu (2019) introduced the Moldavian and Romanian Dialectal Corpus (MOROCO), a dataset that contains 33,564 samples of text collected from news websites, grouped in two dialects using the top level domain of the websites: Romanian and Moldavian (\".ro\" and \".md\"). Moreover, a shared task, called Romanian Dialect Identification (RDI), was proposed at VarDial 2020 (G\u0203man et al., 2020) and it aimed to evaluate the performance of each participant system on this corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 179, |
|
"text": "Butnaru and Ionescu (2019)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 552, |
|
"text": "(G\u0203man et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Starting from the MOROCO dataset, the RDI competition introduces the challenge of properly identifying the Romanian or Moldavian dialect, considering that the test dataset is from a different domain. That is, the validation dataset contains long texts, written in either the Romanian or the Moldavian dialect, while the test dataset is composed of short entries, based on tweets. Therefore, this difference influenced the performance of our models, considering that we were able to obtain a 97.04% macro-F1 score on the validation set, while the evaluation on the test set yielded a 64.75% macro-F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work is structured as follows. In Section 2, we perform an analysis of existing solutions for closely related dialect identification tasks. Section 3 outlines our solutions for the dialect identification issue, while Section 4 details the performed experiments, experimental setup, and error analysis. Finally, we draw conclusions in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are various approaches regarding the language dialect identification task. Some of them are centered around the Romanian language, while others are focused on different ones, such as the Arabic or German dialects. However, they are equally important, considering that some techniques can cross the language barrier and be used as universal dialect identification methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For example, previous work (Onose et al., 2019) in Romanian dialect identification employed the usage of various deep learning models, including Recurrent Neural Networks (RNNs) (Elaraby and Abdul-Mageed, 2018a) , Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) , and Gated Recurrent Units (GRUs) (Cho et al., 2014) , alongside various word embeddings. Furthermore, Tudoreanu (2019) applied an ensemble of neural networks that uses a triplet loss alongside Convolutional Neural Networks (CNNs) (Kim, 2014) , with the purpose of maximizing the distance between an anchor sample and a negative example while minimizing the difference between the anchor and the positive example. Wu et al. (2019) also considered Support Vector Machines (Cortes and Vapnik, 1995) , but paired with character n-grams.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 47, |
|
"text": "(Onose et al., 2019)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 211, |
|
"text": "(Elaraby and Abdul-Mageed, 2018a)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 253, |
|
"end": 287, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 323, |
|
"end": 341, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 408, |
|
"text": "Tudoreanu (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 531, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 719, |
|
"text": "Wu et al. (2019)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 785, |
|
"text": "(Cortes and Vapnik, 1995)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Romanian Dialect Identification", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Aiming to tackle the Arab dialect identification problem by participating at the MADAR shared task (Bouamor et al., 2019) , Abdul-Mageed et al. (2019) introduced a series of solutions based on traditional, deep learning, Natural Language Processing (NLP) techniques, like GRUs and, at the same time, state-of-the-art, Transformer-based methods, i.e., BERT . Moreover, Salameh et al. (2018) engaged in the same problem by employing a solution based on features, including character and word n-grams and applying a Multinomial Naive Bayes classifier. Similar traditional methods were also applied by Elaraby and Abdul-Mageed (2018b) using logistic regression, SVMs, and, moreover, models based on RNNs. Other work (Butnaru and Ionescu, 2018) proposed string kernel functions (Lodhi et al., 2002) that capture the similarity between text samples based on character n-grams, while a different approach (Ali, 2018) simply implies the usage of CNNs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 121, |
|
"text": "(Bouamor et al., 2019)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 389, |
|
"text": "Salameh et al. (2018)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 712, |
|
"end": 739, |
|
"text": "(Butnaru and Ionescu, 2018)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 773, |
|
"end": 793, |
|
"text": "(Lodhi et al., 2002)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 898, |
|
"end": 909, |
|
"text": "(Ali, 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialect Identification for Other Languages", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Employing similar techniques, but switching the language, Malmasi and Zampieri (2017) addressed the German dialect identification issue by also using traditional machine learning techniques, but, furthermore, adding different ensemble classifiers. Further focusing on the German language, Gaman and Ionescu (2020) proposed several methods for approaching the previously mentioned subject, including character-level CNNs, Support Vector Regressors based on string kernels and ensemble learning systems (Chen and Guestrin, 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 313, |
|
"text": "Gaman and Ionescu (2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 526, |
|
"text": "(Chen and Guestrin, 2016)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialect Identification for Other Languages", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We focused our approaches around Transformers (Vaswani et al., 2017) , considering that they represent state-of-the-art solutions for solving NLP problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 68, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Aimed for multilingual NLP problems, M-BERT is a variant of BERT (Devlin et al., 2018) , pre-trained on over 100 languages, thus ensuring good performance for all of them, not only for the English language. M-BERT can be used for a wide array of tasks, including sequence classification, therefore allowing us to fine-tune the model for our problem, dialect identification in Romanian.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 86, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multilingual BERT", |
|
"sec_num": "3.1.1" |
|
}, |
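{

"text": "As a minimal sketch of how such a language model can be fine-tuned for binary dialect classification, assuming the HuggingFace transformers library (the model identifier and the example input are illustrative, not our exact experimental setup):\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n# Either of the two language models discussed in this section can be plugged in.\nname = \"bert-base-multilingual-cased\"  # or the released Ro-BERT checkpoint\ntokenizer = AutoTokenizer.from_pretrained(name)\nmodel = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)\n\n# The corpus masks named entities with $NE$, so we register it as an extra token.\ntokenizer.add_tokens([\"$NE$\"])\nmodel.resize_token_embeddings(len(tokenizer))\n\nbatch = tokenizer([\"Un exemplu de text $NE$\"], truncation=True, max_length=512, return_tensors=\"pt\")\nlogits = model(**batch).logits  # shape (1, 2): Romanian vs. Moldavian",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multilingual BERT",

"sec_num": "3.1.1"

},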
|
{ |
|
"text": "We also experimented with the embeddings obtained from the Ro-BERT model. Ro-BERT was trained on three publicly available corpora: OPUS (Tiedemann, 2012) , OSCAR (Su\u00e1rez et al., 2019) , and Wikipedia, using masked language modeling and next sentence prediction as training objectives. To validate the resulted model, the performance of Ro-BERT was compared with the performance of M-BERT on three tasks from Romanian corpora: (1) Simple Universal Dependencies -the models had to predict independently the Universal Part-of-Speech (UPOS) and the eXtended Part-of-Speech, (2) Joint Universal Dependencies -the models had to jointly predict the UPOS, Universal Features, Lemmas and Dependency Parsing, and (3) Named Entity Recognition -the models had to predict the BIO labels. For the first two tasks, the authors used the Romanian RRT corpus (Barbu Mititelu et al., 2016) , while for the last one RONEC (Dumitrescu and Avram, 2019). The evaluation results showed that Ro-BERT outperformed M-BERT on all tasks with values ranging between 1% and 3%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 153, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 162, |
|
"end": 183, |
|
"text": "(Su\u00e1rez et al., 2019)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 870, |
|
"text": "(Barbu Mititelu et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Romanian BERT", |
|
"sec_num": "3.1.2" |
|
}, |
|
{ |
|
"text": "To use the Transformer-based language models on the competition dataset, we firstly tokenized the sentences by using the Byte-Pair Encoding (BPE) tokenizer with the additional $NE$ token. As depicted in Table 1 , some of the sequences can be very long, so applying the model directly on them as described in Devlin et al. (2018) is not optimal. To mitigate this problem, we applied the model on consecutive sequences of 512 tokens that share the first 128 tokens with the previous sequence. Then, to create a binary output, we averaged the embeddings of all tokens out of each 512 token sequence and projected it into a scalar. This process is further depicted in Figure 1 . We will further reference this system under the name of Custom-Ro-BERT-FT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 328, |
|
"text": "Devlin et al. (2018)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 210, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 672, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Custom Ro-BERT Fine-tuning", |
|
"sec_num": "3.2.1" |
|
}, |
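{

"text": "A minimal sketch of this overlapping-window scheme, assuming a PyTorch BERT encoder; the window size (512) and overlap (128) follow the description above, while the helper names and the pooling head are illustrative:\n\nimport torch\n\ndef windows(token_ids, size=512, overlap=128):\n    # Consecutive 512-token windows, each sharing its first 128 tokens\n    # with the previous one (stride = 512 - 128 = 384).\n    stride = size - overlap\n    return [token_ids[i:i + size] for i in range(0, max(len(token_ids) - overlap, 1), stride)]\n\ndef classify(bert, proj, token_ids):\n    # Average the token embeddings of every window, then project the pooled\n    # vector into a scalar and squash it with a sigmoid (proj: nn.Linear(768, 1)).\n    pooled = [bert(torch.tensor([w])).last_hidden_state.mean(dim=1) for w in windows(token_ids)]\n    return torch.sigmoid(proj(torch.cat(pooled).mean(dim=0)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Custom Ro-BERT Fine-tuning",

"sec_num": "3.2.1"

},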
|
{ |
|
"text": "Next, we intended to enhance the word representations with information at the morpheme-level, therefore, we needed to use character-level embeddings. By breaking each word into a sequence of characters, Figure 2 : The architecture of the EC model. then mapping them to a series of indexes and then feeding them into a Bidirectional LSTM (BiLSTM), we were able to obtain another set of representations for the inputs. The character-level embeddings allowed us to identify structural similarities between different words, an important aspect when tackling a dialect identification problem.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 211, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Concatenation", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Furthermore, we also added pre-trained fastText word embeddings (Bojanowski et al., 2017) . The three representations (i.e., Transformer embeddings, word embeddings, and character embeddings) were concatenated, making sure that the first dimension is identical for all of them, representing the number of input tokens. The resulted tensor was then fed to a BiLSTM network and then to a linear layer, thus obtaining the final representation for the input sequence. Finally, we used the sigmoid activation function for obtaining the final class. Figure 2 depicts the previously described architecture, called EC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 89, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 544, |
|
"end": 552, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Concatenation", |
|
"sec_num": "3.2.2" |
|
}, |
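{

"text": "A minimal PyTorch sketch of the EC model; apart from the BiLSTM hidden size of 500 (see Section 4.2), all dimensions and module names are illustrative:\n\nimport torch\nimport torch.nn as nn\n\nclass EC(nn.Module):\n    # Concatenates Transformer, fastText, and character-level embeddings per\n    # token, then applies a BiLSTM, a linear layer, and a sigmoid output.\n    def __init__(self, n_chars, char_dim=64, word_dim=300, bert_dim=768, hidden=500):\n        super().__init__()\n        self.char_emb = nn.Embedding(n_chars, char_dim)\n        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)\n        self.lstm = nn.LSTM(bert_dim + word_dim + 2 * char_dim, hidden, bidirectional=True, batch_first=True)\n        self.out = nn.Linear(2 * hidden, 1)\n\n    def forward(self, bert_emb, word_emb, char_ids):\n        b, t, c = char_ids.shape  # (batch, tokens, characters per token)\n        _, (h, _) = self.char_lstm(self.char_emb(char_ids).view(b * t, c, -1))\n        char_vec = h.transpose(0, 1).reshape(b, t, -1)  # one vector per token\n        x = torch.cat([bert_emb, word_emb, char_vec], dim=-1)\n        out, _ = self.lstm(x)\n        return torch.sigmoid(self.out(out[:, -1]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Embedding Concatenation",

"sec_num": "3.2.2"

},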
|
{ |
|
"text": "Initially applied for improving computer vision solutions, adversarial training (Goodfellow et al., 2014) represents a technique that intentionally alters a percentage of training entries with perturbations. Even though the changes are minimal, the effects on the performance of the system can be major, since the perturbations can lead to missclassifications. The previously mentioned process can be applied to both text and image models. Because of the generalization it creates, it can lead to improved performance for the first category. Therefore, language models achieve better results when trained under this approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 105, |
|
"text": "(Goodfellow et al., 2014)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Training", |
|
"sec_num": "3.2.3" |
|
}, |
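{

"text": "A minimal sketch of this idea, assuming a HuggingFace-style model that accepts inputs_embeds; the single normalized ascent step and the epsilon value are illustrative simplifications of full procedures such as FreeLB, described next:\n\nimport torch\n\ndef adversarial_loss(model, embeddings, labels, loss_fn, epsilon=1e-2):\n    # Perturb the input embeddings in the direction that increases the loss,\n    # then return the loss on the perturbed batch for the training step.\n    delta = torch.zeros_like(embeddings, requires_grad=True)\n    loss = loss_fn(model(inputs_embeds=embeddings + delta).logits, labels)\n    grad, = torch.autograd.grad(loss, delta)\n    delta = epsilon * grad / (grad.norm() + 1e-12)  # normalized ascent direction\n    return loss_fn(model(inputs_embeds=embeddings + delta).logits, labels)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Adversarial Training",

"sec_num": "3.2.3"

},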
|
{ |
|
"text": "Since we intended to also maximize the performance of our models, we resorted to an adversarial training technique. Therefore, we used FreeLB (Zhu et al., 2019) , an enhanced adversarial training method for natural language models. FreeLB performs adversarial training by introducing adversarial perturbations at word-level embeddings and then minimizing the adversarial loss resulted from the input samples. The model receives training data in batches that are affected by the adversarial algorithm, namely, they are augmented with extra adversarial entries. Each iteration creates some outputs, the purpose of the FreeLB algorithm being to take the gradients of these outputs and to average them. Furthermore, FreeLB minimizes the maximum risk at each ascent step, with the advantage of creating an insignificant overhead. MOROCO was created by collecting texts from the top news websites in Romania and Moldavia, and by automatically labeling them using the Internet domain, resulting in 33,564 samples (45.89% Moldavian and 54.11% Romanian) having a total of more than 10 million tokens. The news were selected from six domains: culture, finance, politics, science, sports, and tech. The authors further processed the text by removing all HTML tags and by replacing the named entities with the $NE$ token in order to prevent the models from classifying based on features that are not specific to the dialect, but to the environment in which the dialect is used. Moreover, in order to provide a proper comparison with other similar corpora, five tasks were created on the MOROCO data set: binary classification by dialect (MOROCO-RDI), intra-dialect classification by topic using the Romanian or the Moldavian samples, and cross-dialect topic classification by training a model on the samples of one dialect and testing on the other dialect set of samples. The dataset was also split into training, validation and testing, resulting in subsets that contained 21,719, 5,921 and 5,924 number of samples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 160, |
|
"text": "(Zhu et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Training", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "At the evaluation phase of the RDI task, a new data set was used to evaluate the performance of the submitted models. The new set contained 5,022 samples, mostly taken from social media.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial Training", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "Further, we analyzed the two datasets (MOROCO and RDI) in Table 1 by computing the average number of tokens and the maximum number of tokens, using the white space tokenizer, the Romanian BERT uncased tokenizer, and the M-BERT uncased tokenizer. We note that the change in domain led to a significant difference in the number of tokens, of several orders of magnitude, which in turn made our models to perform much worse on the RDI test set than we initially estimated on the MOROCO test set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial Training", |
|
"sec_num": "3.2.3" |
|
}, |
|
{ |
|
"text": "For the EC solution, we considered the Adam optimizer (Kingma and Ba, 2014) with a 0.001 learning rate. Furthermore, the BiLSTM hidden size is 500, while the input maximum length is 280 tokens. We trained the model for 8 epochs, by using an early stopping policy. At the same time, for the adversarial training method, we used an initial learning rate of 5e-5, alongside the Adam optimizer. Moreover, the weight decay and the epsilon parameters were kept with their default values 0.0 and 1e-8 respectively, and the training process spanned over 12 epochs. For the standard Ro-BERT and also for the custom Ro-BERT fine-tuning process, we employed the Adam with weight decay (AdamW) optimizer (Loshchilov and Hutter, 2017 ) with a 2e-5 learning rate, for 4 epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 692, |
|
"end": 720, |
|
"text": "(Loshchilov and Hutter, 2017", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "4.2" |
|
}, |
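{

"text": "The hyperparameters above map onto standard PyTorch optimizers as in the following sketch (the two model objects are placeholders for the actual EC and Ro-BERT networks):\n\nimport torch\n\nec_model = torch.nn.LSTM(1196, 500, bidirectional=True)  # stand-in for EC\nbert_model = torch.nn.Linear(768, 2)  # stand-in for Ro-BERT\n\n# EC: Adam, learning rate 0.001, 8 epochs with early stopping.\nec_opt = torch.optim.Adam(ec_model.parameters(), lr=1e-3)\n\n# FreeLB: Adam, learning rate 5e-5, weight decay 0.0, epsilon 1e-8, 12 epochs.\nfreelb_opt = torch.optim.Adam(bert_model.parameters(), lr=5e-5, weight_decay=0.0, eps=1e-8)\n\n# Standard and custom Ro-BERT fine-tuning: AdamW, learning rate 2e-5, 4 epochs.\nft_opt = torch.optim.AdamW(bert_model.parameters(), lr=2e-5)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation Details",

"sec_num": "4.2"

},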
|
{ |
|
"text": "The first experiment we conducted was a comparison between M-BERT and Ro-BERT on the MOROCO-RDI test dataset in order to choose a language model to work with. At this stage, because the entries had a high number of tokens, we also experimented with various N , i.e., the number of consecutive sequences with 512 tokens that share 128 tokens with the previous sequence, on which the language model is applied. The maximum number of tokens (determined by N ) and by using the Ro-BERT and M-BERT tokenizers is presented in Table 2 , together with the percentage of samples that have fewer tokens than the maximum. Also, the results are depicted in Figure 3 . The left figure presents the case where all samples that have more tokens than the maximum allowed, are dropped both from the train set Table 2 : Maximum number of tokens allowed for a given N and the percentage of samples that satisfy this condition. and the test set, while the right figure keeps all the samples from the test set, but also drops the samples from the training set that do not meet the requirements. It can be observed that in both cases Ro-BERT offers a better performance for all values of N and that the rate of change in performance of M-BERT improves faster than the performance of Ro-BERT, maybe even surpassing Ro-BERT for a large number of tokens 2 . At evaluation time, for the RDI dataset, the sequences were much smaller than the ones from the initial dataset (MOROCO-RDI), and in order to avoid overfitting on long sequences, we used the system that applies Ro-BERT only 3 times instead of 4 times. Moreover, our choice is further motivated by the fact that the difference in performance between N = 3 and N = 4 is rather small (0.6%).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 520, |
|
"end": 527, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 653, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
}
|
], |
|
"eq_spans": [], |
|
"section": "Custom Language Model Comparison", |
|
"sec_num": "4.3" |
|
}, |
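{

"text": "Under the windowing scheme of Section 3.2.1, the maximum input length covered by a given N follows directly: the first window holds 512 tokens and each additional window contributes 512 - 128 = 384 new ones, as in this small sketch (the listed values are derived from the formula, not copied from Table 2):\n\ndef max_tokens(n, size=512, overlap=128):\n    # n overlapping windows cover size + (n - 1) * (size - overlap) tokens.\n    return size + (n - 1) * (size - overlap)\n\nassert [max_tokens(n) for n in (1, 2, 3, 4)] == [512, 896, 1280, 1664]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Custom Language Model Comparison",

"sec_num": "4.3"

},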
|
{ |
|
"text": "Next, we experimented with four different types of architectures, out of which we used the best three for our VarDial submissions. Table 3 presents the results obtained on the entire test datasets. The MOROCO-RDI dataset contains entries similar to the ones used for training and validation. On the other hand, the RDI shared task counterpart contains much shorter entries, obtained from tweets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 138, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Submitted Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The best results obtained for the RDI dataset are yielded by using the Custom-Ro-BERT-FT technique, with a weighted-F1 score of 64.80%, alongside a 64.75% micro-F1 and a 67.10% macro-F1. The closest result further obtained by our experiments comes at a difference of 8.41% in terms of weighted-F1 score, provided by the FreeLB technique, with a value of 56.39%. Also, the same experiment produced a 60.77% micro-F1 score and a 56.32% macro-F1 score. Furthermore, the EC solution proves to offer poorer results, considering the increased width and depth of the neural network and thus the large number of parameters that needed to be fine-tuned. The main metric, the weighted-F1 score, has a value of 46.59%, while the others, the macro and micro F1 measures, have values of 46.48% and 55.07%, respectively. If we focus our attention on the MOROCO-RDI test dataset format, we can see that the performance difference is considerable. With a 96.25% weighted-F1 score and very close values for macro and micro F1, the Custom-Ro-BERT-FT technique offers the best results, closely followed by FreeLB at a margin of 0.1% in weighted-F1 score, with a value of 96.15%. Moreover, the micro and macro F1 scores have values of 96.15% and 96.12%, respectively. The EC model offers a value of 86.43% weighted-F1 alongside 86.17% and 86.31% micro and macro F1 scores. Furthermore, the standard Ro-BERT finetuning comes second to last in terms of performance, with a 1.73% difference in weighted-F1 score when compared to the Custom-Ro-BERT-FT model. Table 4 presents examples of entries correctly or wrongly classified. As seen, most of the incorrect entries are part of the Moldavian dialect. The main reason behind the misclassifications is represented by the domain and length differences between the train and development datasets and the evaluation dataset. For training and validation, the average number of tokens is about 310, while for the evaluation dataset, the number is around 15. This discrepancy does not allow the models to properly detect the dialect form the test entries, considering that, in some situations, there are no proper key features that can point towards a Romanian or a Moldavian dialect. For example, the last two examples of misclassified entries from Table 4 do not show any defining aspects of either one of the dialects. Moreover, the fourth one contains only one proper word, \"Primele\" (eng. \"The first\"), while the other words are masked by the $NE$ token. This may be an important problem for the task and dataset at hand, as the differences between Moldavian and Romanian are minor and might not arise in short fragments of text such as tweets. For future versions, these datasets should be manually curated to contain more relevant samples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1535, |
|
"end": 1542, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2270, |
|
"end": 2277, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Submitted Results", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "On the other hand, some entries have clear indicators that point towards a certain dialect. As an example, the root word \"raion\" (eng. \"district\"), specific to the Moldavian dialect, is a clear indicator of the origin of that input. Additionally, some named entities are not masked and are also present in the training dataset, thus clearing the origin of the including text (e.g., the named entities \"Dodon\" or \"Chicu\" in the first two correctly classified samples).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "This paper presented our approaches regarding the Romanian Dialect Identification task, organized by VarDial 2020. We proposed a series of Transformer-based architectures that intended to solve the dialect identification issue. All the solutions employ the usage of Ro-BERT, a Transformer model pre-trained on Romanian language corpora. By fine-tuning Ro-BERT with two different techniques, standard and custom, we were able to achieve good scores on both the MOROCO-RDI test dataset and the RDI dataset, used for this year's competition. Also, by using an adversarial training technique (FreeLB) on Ro-BERT, we improved the state-of-the-art score on the MOROCO-RDI dataset, while the performance", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "decreased on the RDI set. Moreover, employing an embedding concatenation technique does not help with performance, yielding the poorest results among the four techniques we experimented with. For future work, we intend to also experiment with multi-task learning approaches (Caruana, 1997), considering that, usually, an auxiliary task can help the model detect additional features that lead to increased performance. Another aspect we plan to test is CoRoLa-based word embeddings, which can replace their counterpart in the embedding concatenation experiment.",

"cite_spans": [

{

"start": 274,

"end": 289,

"text": "(Caruana, 1997)",

"ref_id": "BIBREF7"

}

],

"ref_spans": [],

"eq_spans": [],

"section": "Conclusion and Future Work",

"sec_num": "5"

},

{

"text": "Table 4: Examples of correctly and wrongly classified entries. MD: Moldavian, RO: Romanian.\nCorrect 1) Dodon: $NE$ a avut \u00een trecut un guvern, un stat capturat, a urmat un $NE$ condus de $NE$ (eng. Dodon: In the past, $NE$ had a government, a captured state, followed by a $NE$ led by $NE$) | true label: MD\nCorrect 2) Chicu crede crearea platformelor industriale \u00een fiecare centru raional o solu\u021bie de rena\u0219tere a economiei na\u021bionale (eng. Chicu believes that the creation of industrial platforms in each district center represents a solution for the rebirth of the national economy) | true label: MD\nCorrect 3) Seri de tango \u0219i saloane de flori la un spital de psihiatrie din $NE$ Un psiholog aduce speran\u021b\u0203 unor oameni ca (eng. Tango evenings and flower salons at a psychiatric hospital in $NE$ A psychologist brings hope to people like) | true label: RO\nWrong 1) Pericol pentru $NE$ $NE$ vor revenirea a $NE$ $NE$ de militari ru\u0219i \u00een zona de securitate (eng. Danger to $NE$ $NE$ want the return of $NE$ $NE$ Russian military in the security zone) | true label: MD\nWrong 2) FOTO $NE$ $NE$ $NE$ iarna a devenit prim\u0203var\u0203. Un arbust ornamental a \u00eenflorit, $NE$ de vremea cald\u0203 (eng. PHOTO $NE$ $NE$ $NE$ winter has become spring. An ornamental shrub bloomed, $NE$ because of the warm weather) | true label: RO\nWrong 3) Cum te protejezi \u00eempotriva coronavirusului $NE$ (eng. How to protect yourself against the coronavirus $NE$) | true label: MD\nWrong 4) Primele $NE$ $NE$ $NE$ $NE$ $NE$ $NE$ (eng. The first $NE$ $NE$ $NE$ $NE$ $NE$ $NE$) | true label: MD",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error Analysis",

"sec_num": null

},
|
{ |
|
"text": "Unfortunately, we could not make this analysis due to the lack of computational resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Dianet: Bert and hierarchical attention multi-task learning of fine-grained dialect", |
|
"authors": [ |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chiyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelrahim", |
|
"middle": [], |
|
"last": "Elmadany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arun", |
|
"middle": [], |
|
"last": "Rajendran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lyle", |
|
"middle": [], |
|
"last": "Ungar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.14243" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, AbdelRahim Elmadany, Arun Rajendran, and Lyle Ungar. 2019. Dianet: Bert and hierarchical attention multi-task learning of fine-grained dialect. arXiv preprint arXiv:1910.14243.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Character level convolutional neural network for arabic dialect identification", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Ali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "122--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Ali. 2018. Character level convolutional neural network for arabic dialect identification. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 122-127.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The romanian treebank annotated according to universal dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "V Barbu Mititelu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Ion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elana", |
|
"middle": [], |
|
"last": "Simionescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cenel-Augusto", |
|
"middle": [], |
|
"last": "Irimia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Perez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the tenth international conference on natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V Barbu Mititelu, Radu Ion, Radu Simionescu, Elana Irimia, and Cenel-Augusto Perez. 2016. The romanian treebank annotated according to universal dependencies. In Proceedings of the tenth international conference on natural language processing (hrtal2016).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The madar shared task on arabic fine-grained dialect identification", |
|
"authors": [ |
|
{ |
|
"first": "Houda", |
|
"middle": [], |
|
"last": "Bouamor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "199--207", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Houda Bouamor, Sabit Hassan, and Nizar Habash. 2019. The madar shared task on arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 199-207.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Unibuckernel reloaded: First place in arabic dialect identification for the second year in a row", |
|
"authors": [ |
|
{

"first": "Andrei",

"middle": [

"M"

],

"last": "Butnaru",

"suffix": ""

},

{

"first": "Radu",

"middle": [

"Tudor"

],

"last": "Ionescu",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.04876" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei M Butnaru and Radu Tudor Ionescu. 2018. Unibuckernel reloaded: First place in arabic dialect identifica- tion for the second year in a row. arXiv preprint arXiv:1805.04876.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Moroco: The moldavian and romanian dialectal corpus", |
|
"authors": [ |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Butnaru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu Tudor", |
|
"middle": [], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "688--698", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrei Butnaru and Radu Tudor Ionescu. 2019. Moroco: The moldavian and romanian dialectal corpus. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 688-698.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Multitask learning. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "41--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Xgboost: A scalable tree boosting system", |
|
"authors": [ |
|
{ |
|
"first": "Tianqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "785--794", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pages 785-794.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1406.1078" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Support vector machine", |
|
"authors": [ |
|
{ |
|
"first": "Corinna", |
|
"middle": [], |
|
"last": "Cortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Machine learning", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "273--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support vector machine. Machine learning, 20(3):273-297.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Introducing ronec-the romanian named entity corpus", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Daniel Dumitrescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei-Marius", |
|
"middle": [], |
|
"last": "Avram", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.01247" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Daniel Dumitrescu and Andrei-Marius Avram. 2019. Introducing ronec-the romanian named entity corpus. arXiv preprint arXiv:1909.01247.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Rowordneta python api for the romanian wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Daniel Dumitrescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [ |
|
"Marius" |
|
], |
|
"last": "Avram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luciana", |
|
"middle": [], |
|
"last": "Morogan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan-Adrian", |
|
"middle": [], |
|
"last": "Toma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Daniel Dumitrescu, Andrei Marius Avram, Luciana Morogan, and Stefan-Adrian Toma. 2018. Rowordnet- a python api for the romanian wordnet. In 2018 10th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), pages 1-6. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The birth of romanian bert", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Daniel Dumitrescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei-Marius", |
|
"middle": [], |
|
"last": "Avram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2009.08712" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Daniel Dumitrescu, Andrei-Marius Avram, and Sampo Pyysalo. 2020. The birth of romanian bert. arXiv preprint arXiv:2009.08712.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deep models for arabic dialect identification on benchmarked data", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Elaraby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Elaraby and Muhammad Abdul-Mageed. 2018a. Deep models for arabic dialect identification on benchmarked data. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 263-274.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Deep models for arabic dialect identification on benchmarked data", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Elaraby", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "263--274", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Elaraby and Muhammad Abdul-Mageed. 2018b. Deep models for arabic dialect identification on benchmarked data. In Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 263-274.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Combining deep learning and string kernels for the localization of swiss german tweets", |
|
"authors": [ |
|
{ |
|
"first": "Mihaela", |
|
"middle": [], |
|
"last": "Gaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu Tudor", |
|
"middle": [], |
|
"last": "Ionescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.03614" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihaela Gaman and Radu Tudor Ionescu. 2020. Combining deep learning and string kernels for the localization of swiss german tweets. arXiv preprint arXiv:2010.03614.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Rsc: A romanian read speech corpus for automatic speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "Alexandru-Lucian", |
|
"middle": [], |
|
"last": "Georgescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Horia", |
|
"middle": [], |
|
"last": "Cucu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andi", |
|
"middle": [], |
|
"last": "Buzo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Corneliu", |
|
"middle": [], |
|
"last": "Burileanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "6606--6612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexandru-Lucian Georgescu, Horia Cucu, Andi Buzo, and Corneliu Burileanu. 2020. Rsc: A romanian read speech corpus for automatic speech recognition. In Proceedings of The 12th Language Resources and Evalua- tion Conference, pages 6606-6612.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Generative adversarial nets", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Pouget-Abadie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mehdi", |
|
"middle": [], |
|
"last": "Mirza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Warde-Farley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sherjil", |
|
"middle": [], |
|
"last": "Ozair", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2672--2680", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672-2680.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020", |
|
"authors": [ |
|
{

"first": "Mihaela",

"middle": [],

"last": "G\u0203man",

"suffix": ""

},

{

"first": "Dirk",

"middle": [],

"last": "Hovy",

"suffix": ""

},

{

"first": "Radu",

"middle": [

"Tudor"

],

"last": "Ionescu",

"suffix": ""

},

{

"first": "Heidi",

"middle": [],

"last": "Jauhiainen",

"suffix": ""

},

{

"first": "Tommi",

"middle": [],

"last": "Jauhiainen",

"suffix": ""

},

{

"first": "Krister",

"middle": [],

"last": "Lind\u00e9n",

"suffix": ""

},

{

"first": "Nikola",

"middle": [],

"last": "Ljube\u0161i\u0107",

"suffix": ""

},

{

"first": "Niko",

"middle": [],

"last": "Partanen",

"suffix": ""

},

{

"first": "Christoph",

"middle": [],

"last": "Purschke",

"suffix": ""

},

{

"first": "Yves",

"middle": [],

"last": "Scherrer",

"suffix": ""

},

{

"first": "Marcos",

"middle": [],

"last": "Zampieri",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihaela G\u0203man, Dirk Hovy, Radu Tudor Ionescu, Heidi Jauhiainen, Tommi Jauhiainen, Krister Lind\u00e9n, Nikola Ljube\u0161i\u0107, Niko Partanen, Christoph Purschke, Yves Scherrer, and Marcos Zampieri. 2020. A Report on the VarDial Evaluation Campaign 2020. In Proceedings of the Seventh Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1408.5882" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{

"first": "Diederik",

"middle": [

"P"

],

"last": "Kingma",

"suffix": ""

},

{

"first": "Jimmy",

"middle": [],

"last": "Ba",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Text classification using string kernels", |
|
"authors": [ |
|
{ |
|
"first": "Huma", |
|
"middle": [], |
|
"last": "Lodhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Saunders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Shawe-Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nello", |
|
"middle": [], |
|
"last": "Cristianini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Watkins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "419--444", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2(Feb):419-444.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "German dialect identification in interview transcriptions", |
|
"authors": [ |
|
{ |
|
"first": "Shervin", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "164--169", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shervin Malmasi and Marcos Zampieri. 2017. German dialect identification in interview transcriptions. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 164-169.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The reference corpus of the contemporary romanian language (corola)", |
|
"authors": [ |
|
{

"first": "Verginica",

"middle": [

"Barbu"

],

"last": "Mititelu",

"suffix": ""

},

{

"first": "Dan",

"middle": [],

"last": "Tufi\u015f",

"suffix": ""

},

{

"first": "Elena",

"middle": [],

"last": "Irimia",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Verginica Barbu Mititelu, Dan Tufi\u015f, and Elena Irimia. 2018. The reference corpus of the contemporary roma- nian language (corola). In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Monero: a biomedical gold standard corpus for the romanian language", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Mitrofan", |
|
"suffix": "" |
|
}, |
|
{

"first": "Verginica",

"middle": [

"Barbu"

],

"last": "Mititelu",

"suffix": ""

},

{

"first": "Grigorina",

"middle": [],

"last": "Mitrofan",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Mitrofan, Verginica Barbu Mititelu, and Grigorina Mitrofan. 2019. Monero: a biomedical gold standard corpus for the romanian language. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 71-79.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Sc-upb at the vardial 2019 evaluation campaign: Moldavian vs. romanian cross-dialect topic identification", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Onose", |
|
"suffix": "" |
|
}, |
|
{

"first": "Dumitru-Clementin",

"middle": [],

"last": "Cercel",

"suffix": ""

},

{

"first": "Stefan",

"middle": [],

"last": "Trausan-Matu",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "172--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Onose, Dumitru-Clementin Cercel, and Stefan Trausan-Matu. 2019. Sc-upb at the vardial 2019 evaluation campaign: Moldavian vs. romanian cross-dialect topic identification. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 172-177.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "How multilingual is multilingual bert", |
|
"authors": [ |
|
{ |
|
"first": "Telmo", |
|
"middle": [], |
|
"last": "Pires", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Schlinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.01502" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? arXiv preprint arXiv:1906.01502.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Fine-grained arabic dialect identification", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Houda", |
|
"middle": [], |
|
"last": "Bouamor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1332--1344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Salameh, Houda Bouamor, and Nizar Habash. 2018. Fine-grained arabic dialect identification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1332-1344.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures", |
|
"authors": [ |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7). Leibniz-Institut f\u00fcr Deutsche Sprache", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7). Leibniz-Institut f\u00fcr Deutsche Sprache.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Parallel data, tools and interfaces in opus", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Lrec", |
|
"volume": "2012", |
|
"issue": "", |
|
"pages": "2214--2218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Lrec, volume 2012, pages 2214-2218.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Dteam@ vardial 2019: Ensemble based on skip-gram and triplet loss neural networks for moldavian vs. romanian cross-dialect topic identification", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Tudoreanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "202--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana Tudoreanu. 2019. Dteam@ vardial 2019: Ensemble based on skip-gram and triplet loss neural networks for moldavian vs. romanian cross-dialect topic identification. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 202-208.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Language discrimination and transfer learning for similar languages: experiments with feature combinations and adaptation", |
|
"authors": [ |
|
{

"first": "Nianheng",

"middle": [],

"last": "Wu",

"suffix": ""

},

{

"first": "Eric",

"middle": [],

"last": "DeMattos",

"suffix": ""

},

{

"first": "Kwok",

"middle": [

"Him"

],

"last": "So",

"suffix": ""

},

{

"first": "Pin-zhen",

"middle": [],

"last": "Chen",

"suffix": ""

},

{

"first": "\u00c7a\u011fr\u0131",

"middle": [],

"last": "\u00c7\u00f6ltekin",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nianheng Wu, Eric DeMattos, Kwok Him So, Pin-zhen Chen, and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2019. Language discrimi- nation and transfer learning for similar languages: experiments with feature combinations and adaptation. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 54-63.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "No army, no navy: Bert semi-supervised learning of arabic dialects", |
|
"authors": [ |
|
{ |
|
"first": "Chiyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "279--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chiyu Zhang and Muhammad Abdul-Mageed. 2019. No army, no navy: Bert semi-supervised learning of arabic dialects. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 279-284.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Freelb: Enhanced adversarial training for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Gan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siqi", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingjing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. 2019. Freelb: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The architecture of Custom-Ro-BERT-FT. The Emb' notation shows that the respective embedding is different from the embedding obtained on the same tokens, on the previous sequence." |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "M-BERT and Ro-BERT comparison with various N -the number of times each language model is applied, trained on the MOROCO-RDI dataset and tested on samples from the test set that do not have a token length longer than the maximum allowed in the training set (left) or on the whole test set (right)." |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>4 Experiments</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">4.1 Dataset Analysis</td><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"2\">White Space Tokenizer</td><td colspan=\"2\">Ro-BERT Tokenizer</td><td colspan=\"2\">M-BERT Tokenizer</td></tr><tr><td colspan=\"2\">Dataset Avg. Train (MOROCO) 310.04</td><td>15988</td><td>356.08</td><td>18456</td><td>449.70</td><td>24169</td></tr><tr><td>Valid (MOROCO)</td><td>309.92</td><td>10809</td><td>355.87</td><td>12578</td><td>450.02</td><td>16676</td></tr><tr><td>Test (MOROCO)</td><td>313.65</td><td>13213</td><td>360.87</td><td>15313</td><td>455.50</td><td>20151</td></tr><tr><td>Test (RDI)</td><td>15.63</td><td>25</td><td>21.71</td><td>42</td><td>26.65</td><td>44</td></tr></table>", |
|
"text": "Tokens Max. Tokens Avg. Tokens Max. Tokens Avg. Tokens Max. Tokens" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Statistics of the datasets we used in our experiments, MOROCO-RDI and RDI." |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>1</td><td>512</td><td>72.38%</td><td>83.60%</td></tr><tr><td>2</td><td>896</td><td>92.28%</td><td>95.50%</td></tr><tr><td>3</td><td>1280</td><td>96.62%</td><td>98.04%</td></tr><tr><td>4</td><td>1536</td><td>97.78%</td><td>98.64%</td></tr></table>", |
|
"text": "No. of Apply (N ) No. of Tokens M-BERT Perc. Ro-BERT Perc." |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Results obtained by our models on the test sets." |
|
} |
|
} |
|
} |
|
} |