{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:07.564493Z"
},
"title": "Identifying Nuanced Dialect for Arabic Tweets with Deep Learning and Reverse Translation Corpus Extension System",
"authors": [
{
"first": "Rawan",
"middle": [],
"last": "Tahssin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alexandria University",
"location": {}
},
"email": ""
},
{
"first": "Youssef",
"middle": [],
"last": "Kishk",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alexandria University",
"location": {}
},
"email": ""
},
{
"first": "Marwan",
"middle": [],
"last": "Torki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Alexandria University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present our work for the NADI Shared Task (Abdul-Mageed et al., 2020): Nuanced Arabic Dialect Identification for Subtask-1: country-level dialect identification. We introduce a Reverse Translation Corpus Extension Systems (RTCES) to handle data imbalance along with reported results on several experimented approaches of word and document representations and different models architectures. The top scoring model was based on the Transformer-based Model for Arabic Language Understanding (AraBERT) (Antoun et al., 2020), with our modified extended corpus based on reverse translation of the given Arabic tweets. The selected system achieved a macro average F1 score of 20.34% on the test set, which places our team CodeLyoko as the 7 th out of 18 teams in the final ranking Leaderboard.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present our work for the NADI Shared Task (Abdul-Mageed et al., 2020): Nuanced Arabic Dialect Identification for Subtask-1: country-level dialect identification. We introduce a Reverse Translation Corpus Extension Systems (RTCES) to handle data imbalance along with reported results on several experimented approaches of word and document representations and different models architectures. The top scoring model was based on the Transformer-based Model for Arabic Language Understanding (AraBERT) (Antoun et al., 2020), with our modified extended corpus based on reverse translation of the given Arabic tweets. The selected system achieved a macro average F1 score of 20.34% on the test set, which places our team CodeLyoko as the 7 th out of 18 teams in the final ranking Leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Arabic is one of the most complex languages, which presents significant challenges for natural language processing. Like other languages, Arabic has a number of dialectal varieties. Many of these varieties of Arabic have started being widely represented in the written form with the emergence of social media. Arabic language speakers use Modern Standard Arabic (MSA) as the official language in very formal situations , while they use an Arabic Dialect for everyday conversation. Dialect identification is the task of detecting the source variety of a given text or speech segment automatically. Previous work on Arabic dialect identification has focused on country-level varieties such as the Arabic Fine-Grained Dialect Identification task (MADAR) co-located with The Fourth Arabic Natural Language Processing Workshop (WANLP 2019) (Bouamor et al., 2019) . The classification task remains challenging as it covers 21 different Arabic dialects with high similarities and common words. Throughout the paper, we propose an approach for data balancing and augmentation without using any external manually-labelled data sets. We also report the different systems that were experimented in feature extraction and word embedding such as Term Frequency-Inverse Document Frequency (TF-IDF) and fastText (Mikolov et al., 2018) . For the tweets classification, Logistic Regression, Bi-directional Long Short Term Memory (LSTM) (Graves and Schmidhuber, 2005) and AraBERT were evaluated to reach the top score.",
"cite_spans": [
{
"start": 835,
"end": 857,
"text": "(Bouamor et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1297,
"end": 1319,
"text": "(Mikolov et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 1419,
"end": 1449,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data used in all of the proposed systems is based on the official available dataset for Subtask-1 with no external data sets used. Table 1 shows the distribution of available data across different sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Dataset Description",
"sec_num": "2.1"
},
{
"text": "# Tweets 21000 4957 5000 The available data is covering the dialects of 21 Arab countries with the distribution in Figure 1 for the training set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Train Dev Test",
"sec_num": null
},
{
"text": "As the available dataset was collected from general tweets, thus, it required a generic transformation before its usage as an input to our systems. A pre-processing phase (Shoukry and Rafea, 2012) was implemented to remove punctuation, vowel elongation, URLs, mentions and diacritization. English and French words along with emojis were kept to be used as features.",
"cite_spans": [
{
"start": 171,
"end": 196,
"text": "(Shoukry and Rafea, 2012)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.2"
},
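The paper gives no code for this phase; the following is a minimal Python sketch of the pre-processing described above. The regular expressions and the `preprocess` helper are our own illustrative choices, not the authors' implementation.

```python
import re
import string

# Arabic diacritics (tashkeel) plus the tatweel (elongation) character.
DIACRITICS = re.compile(r"[\u064B-\u065F\u0670\u0640]")
URL = re.compile(r"https?://\S+|www\.\S+")
MENTION = re.compile(r"@\w+")
# ASCII punctuation plus the Arabic comma, semicolon and question mark.
PUNCT = str.maketrans("", "", string.punctuation + "\u060C\u061B\u061F")

def preprocess(tweet: str) -> str:
    """Strip URLs, mentions, punctuation, diacritics and letter elongation.
    English/French words and emojis are deliberately kept as features."""
    tweet = URL.sub(" ", tweet)
    tweet = MENTION.sub(" ", tweet)
    tweet = DIACRITICS.sub("", tweet)
    tweet = tweet.translate(PUNCT)
    tweet = re.sub(r"(.)\1{2,}", r"\1\1", tweet)  # squeeze elongations, e.g. "heeey" -> "heey"
    return " ".join(tweet.split())
```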
{
"text": "The presence of class imbalance between countries labels within the training corpus was highly noticed as shown in Figure 1 . Accordingly, a reverse translation approach was taken to handle this imbalance and augmentation. The approach consisted of a number of steps, starting from pre-processing module till the new generated sentence as shown in Figure 2 . First, the entire pre-processed data is translated to English using Google's NMT API (Wu et al., 2016) to provide an equivalent corpus in English. The next step is the reverse translation of the newly created English corpus to translate back the whole data to Arabic. As a final step, the extraction of the difference between the original Arabic tweet and the newly generated Arabic tweet from the reverse translation;to create a new sentence. An example of the steps applied on a tweet form the corpus is shown in Table 2 .",
"cite_spans": [
{
"start": 444,
"end": 461,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 115,
"end": 123,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 348,
"end": 356,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 874,
"end": 881,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reverse Translation Corpus Extension System (RTCES)",
"sec_num": "2.3"
},
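A minimal Python sketch of RTCES, under stated assumptions: `translate` stands in for Google's NMT API (the paper does not show its client code), and the word-set difference is one plausible reading of the sentence-difference step.

```python
def reverse_translate(tweet_ar: str, translate) -> str:
    """Steps 2-3: Arabic -> English -> Arabic round trip."""
    tweet_en = translate(tweet_ar, source="ar", target="en")
    return translate(tweet_en, source="en", target="ar")

def sentence_difference(original_ar: str, back_translated_ar: str) -> str:
    """Step 4: keep the words of the original tweet that the round trip
    changed; MSA words tend to survive intact, dialectal words do not."""
    survived = set(back_translated_ar.split())
    return " ".join(w for w in original_ar.split() if w not in survived)

def rtces(tweet_ar: str, translate) -> str:
    return sentence_difference(tweet_ar, reverse_translate(tweet_ar, translate))
```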
{
"text": "Step 1 (Pre-processing); Step 2 (English translated): \"Too much difference, review the difference in time and circumstances of life, and you will know\"; Step 3 (Reverse Translated); Step 4 (Sentence Difference).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Steps Output Sentence",
"sec_num": null
},
{
"text": "Step 4 (Sentence Difference) Table 2 : RTCES applied on an Example from training set One of the main observations that made this approach interesting, was the ability to filter out parts of the words based on Modern Standard Arabic (MSA) and keep the words reflecting the Arabic dialects of each country. This filtering served the purpose of our task and allowed the formation of new sentences for the classes with lower occurrences.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Steps Output Sentence",
"sec_num": null
},
{
"text": "For our explored document and word representations as well as the classification model approaches, an extended corpus has been used. The new extended corpus consisted of the initial training set, added to it the new sentences generated from RTCES excluding the classes with higher occurrences (Egypt, Iraq and Saudi Arabia) to provide a more balanced distribution of classes. Figure 2 shows the complete system architecture. Finally, our extended training corpus is composed of 32,417 training sentences whose distribution is shown in Figure 3 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 384,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 535,
"end": 543,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Steps Output Sentence",
"sec_num": null
},
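A minimal sketch of the corpus-extension step, assuming `train` and `generated` are lists of (sentence, country_label) pairs; the exact label spellings are assumptions.

```python
from collections import Counter

MAJORITY_CLASSES = {"Egypt", "Iraq", "Saudi_Arabia"}  # highest-count labels, spellings assumed

def extend_corpus(train, generated):
    """Append RTCES-generated sentences for the non-majority classes only."""
    extended = list(train)
    extended += [(s, y) for s, y in generated if y not in MAJORITY_CLASSES]
    return extended

# Inspecting the result shows the flatter distribution of Figure 3:
# Counter(label for _, label in extend_corpus(train, generated))
```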
{
"text": "The aim of this work is to design a system that can classify 21 different Arabic dialects efficiently. In this section, we describe some selected experimented approaches and architectures out of various attempts to reach the goal of NADI Shared Task (Abdul-Mageed et al., 2020) . All these approaches were applied to the output of the RTCES and their results were reported.",
"cite_spans": [
{
"start": 250,
"end": 277,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems",
"sec_num": "3"
},
{
"text": "Extracted features using TF-IDF and Logistic Regression Model from (Pedregosa et al., 2011) with the tuned parameters from k-fold cross validation as shown in Figure 4 (a) .",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 159,
"end": 171,
"text": "Figure 4 (a)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "TF-IDF with Logistic Regression Model",
"sec_num": "3.1"
},
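A minimal scikit-learn sketch of this system; the analyzer choice and the parameter grid are assumptions, since the paper only states that parameters were tuned by k-fold cross-validation, and `train_texts`/`train_labels`/`dev_texts` stand for the extended corpus and the development set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4))),  # assumed analyzer
    ("clf", LogisticRegression(max_iter=1000)),
])
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]},
                      scoring="f1_macro", cv=5)
search.fit(train_texts, train_labels)  # extended corpus from Section 2.3
dev_predictions = search.predict(dev_texts)
```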
{
"text": "fastText (Mikolov et al., 2018 ) is a deep learning-based approach for efficient learning of word representations. It was selected as it returns a vector representation to non-existing words in its vocabulary by computing the closest word based on the character level n-gram. The implementation of Gensim (\u0158eh\u016f\u0159ek and Sojka, 2010) was used. We trained fastText over the extra data corpus of 10M unlabelled tweets, after the pre-processing phase shown in section 2.1; to obtain efficient vector representations for each word. The returned vectors were averaged to obtain a representation suitable for the Logistic Regression input, the obtained result are shown in Figure 4(b) .",
"cite_spans": [
{
"start": 9,
"end": 30,
"text": "(Mikolov et al., 2018",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 664,
"end": 675,
"text": "Figure 4(b)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "FastText averaged word embeddings with Logistic Regression Model",
"sec_num": "3.2"
},
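A minimal Gensim sketch of the averaged-embedding representation; the fastText hyper-parameters are assumptions (the paper does not report them), and `unlabelled_tokenized_tweets` stands for the pre-processed 10M-tweet corpus.

```python
import numpy as np
from gensim.models import FastText

# Train on the 10M unlabelled tweets (each tweet given as a list of tokens).
ft = FastText(sentences=unlabelled_tokenized_tweets,
              vector_size=300, window=5, min_count=2, epochs=5)

def doc_vector(tokens):
    """Average the word vectors; out-of-vocabulary words still get a
    vector composed from their character n-grams."""
    return np.mean([ft.wv[w] for w in tokens], axis=0)

X_train = np.stack([doc_vector(t.split()) for t in train_texts])
# X_train then feeds the same Logistic Regression as in Section 3.1.
```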
{
"text": "The word vectors and labels were sequenced, padded and passed to a bi-directional LSTM (Graves and Schmidhuber, 2005) model which is able to exploit previous and future context of a given word and calculated the loss from the concatenation of the last hidden layer in both directions as shown in Figure 4 (b) . The bi-directional LSTM model was built using Keras (Chollet, 2015) . ",
"cite_spans": [
{
"start": 87,
"end": 117,
"text": "(Graves and Schmidhuber, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 364,
"end": 379,
"text": "(Chollet, 2015)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 296,
"end": 309,
"text": "Figure 4 (b)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "FastText word embeddings with Bi-directional LSTM",
"sec_num": "3.3"
},
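A minimal Keras sketch of the bidirectional LSTM classifier; the sequence length, hidden size and training settings are assumptions. Inputs are the padded sequences of fastText word vectors described above.

```python
from tensorflow.keras import layers, models

MAX_LEN, EMB_DIM, NUM_CLASSES = 64, 300, 21  # MAX_LEN and the LSTM size are assumed

model = models.Sequential([
    layers.Input(shape=(MAX_LEN, EMB_DIM)),
    # merge_mode="concat" (the default) concatenates the last hidden
    # states of the forward and backward directions.
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train_seq, y_train, validation_data=(X_dev_seq, y_dev), epochs=5)
```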
{
"text": "Our top submission model was based on AraBERT (Antoun et al., 2020) , which is an Arabic language model based on Multilingual Bidirectional Encoder Representations from Transformers (BERT) trained on 70M sentences or 23GB of Arabic text with 3B words from a collection of publically available large scale raw Arabic text. We applied first a tokenization and segmentation phase using Fast and Accurate Arabic Word Segmenter (Farasa) (Abdelali et al., 2016 ) on the extended corpus described in section 2.3. The pre-trained AraBERT model is fine-tuned with one additional output layer of 21 classes, then the model is trained on our corpus which reaches a 20.34 test-set F1 score.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Antoun et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 432,
"end": 454,
"text": "(Abdelali et al., 2016",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fined tuned AraBERT",
"sec_num": "3.4"
},
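A minimal HuggingFace transformers sketch of the fine-tuning step; the checkpoint name matches the public AraBERT release but is an assumption, and `segmented_texts` stands for the Farasa-segmented extended corpus.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

NAME = "aubmindlab/bert-base-arabert"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME, num_labels=21)

class TweetDataset(torch.utils.data.Dataset):
    """Wraps the tokenized tweets and their country labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="arabert-nadi", num_train_epochs=3),
    train_dataset=TweetDataset(segmented_texts, train_labels),
)
trainer.train()
```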
{
"text": "Interesting observations at the beginning of conducting our work showed that TF-IDF and a simple Logistic Regression model performed better than NN-based models. However, with more experiments, NN-based models outperformed it. The results of the approaches described in the previous section are shown in Table 3 . Table 3 : Results (in %) on Dev. set",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 3",
"ref_id": null
},
{
"start": 314,
"end": 321,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "The fine-tuned AraBERT score was the highest. Accordingly, its predictions were selected to be our final submission for the NADI Shared Task 2020 Subtask-1. Moreover, one of the observations of our approach was that our results on the development set are quite close to those on the test set. This indicates that no over-fitting took place as shown in Table 4 . The final Macro average F1, accuracy, precision and recall scores for the best-performing model were addressed in section 3.4. In an attempt to improve the reported results, the 21 countries were clustered to 5 super classes inspired by (Fares et al., 2019) according to the origin of the dialect labels as shown in Table 5 . A two-level hierarchical prediction structure inspired by (de Francony et al., 2019) was implemented. The predicted labels from the first five super classes level are passed to the second level to output the prediction of the corresponding countries in each of the five origins. This is a general structure that can be used on different models. However, we reported its results on the first level classes using system explained in Section 3.1 using TF-IDF with Logistic Regression Model as shown in Figure 5 . The result of the second level were close to that obtained from the system explained in Section 3.2 which is 14.241%. We aim to enhance this structure and report its results on other models as a future work. ",
"cite_spans": [
{
"start": 599,
"end": 619,
"text": "(Fares et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 678,
"end": 685,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 1187,
"end": 1195,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
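A minimal sketch of the two-level prediction structure; `make_model` stands for any of the scikit-learn-style systems in Section 3, and `origin_of` is the country-to-origin mapping of Table 5 (not reproduced here).

```python
def fit_hierarchy(make_model, texts, countries, origin_of):
    """Level 1 predicts one of the five origin super-classes; a separate
    level-2 model per origin then picks the country within it."""
    origins = [origin_of[c] for c in countries]
    level1 = make_model().fit(texts, origins)
    level2 = {}
    for o in set(origins):
        idx = [i for i, g in enumerate(origins) if g == o]
        level2[o] = make_model().fit([texts[i] for i in idx],
                                     [countries[i] for i in idx])
    return level1, level2

def predict_hierarchy(level1, level2, texts):
    return [level2[level1.predict([t])[0]].predict([t])[0] for t in texts]
```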
{
"text": "We introduced a reverse Arabic translation solution to handle unbalanced data and small data set, a hierarchical architecture to enhance the efficiency and deal with the 21 classes classification, several neural network based models built on different word and document representations. Future work will include trying to ensemble the mentioned models, enhance the two-level hierarchical prediction structure and exploring the effect of adding a named entity recognition system module for better focus on highly effective words that identifies each country such as places, food, public figures, etc. Moreover, we will examine more data augmentation methods such as suggested in (Fares et al., 2019; Ibrahim et al., 2018; Ibrahim et al., 2020) .",
"cite_spans": [
{
"start": 678,
"end": 698,
"text": "(Fares et al., 2019;",
"ref_id": null
},
{
"start": 699,
"end": 720,
"text": "Ibrahim et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 721,
"end": 742,
"text": "Ibrahim et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank Ms. Samaa Abdelaal, our language editor for her dedicated work and efforts on this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Farasa: A fast and furious segmenter for arabic",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Nadir",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
}
],
"year": 2016,
"venue": "HLT-NAACL Demos",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and H. Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In HLT-NAACL Demos.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Houda Bouamor, and Nizar Habash. 2020. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, Houda Bouamor, and Nizar Habash. 2020. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task. In Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020), Barcelona, Spain.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Arabert: Transformer-based model for arabic language understanding",
"authors": [
{
"first": "Wissam",
"middle": [],
"last": "Antoun",
"suffix": ""
},
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. Arabert: Transformer-based model for arabic language understanding.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The MADAR shared task on Arabic fine-grained dialect identification",
"authors": [
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Sabit",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houda Bouamor, Sabit Hassan, and Nizar Habash. 2019. The MADAR shared task on Arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 199-207, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical deep learning for Arabic dialect identification",
"authors": [
{
"first": "Gael",
"middle": [],
"last": "de Francony",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Guichard",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Haithem",
"middle": [],
"last": "Afli",
"suffix": ""
},
{
"first": "Abdessalam",
"middle": [],
"last": "Bouchekif",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "249--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gael de Francony, Victor Guichard, Praveen Joshi, Haithem Afli, and Abdessalam Bouchekif. 2019. Hierar- chical deep learning for Arabic dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 249-253, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Arabic dialect identification with deep learning and hybrid frequency based features",
"authors": [],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "224--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Youssef Fares, Zeyad El-Zanaty, Kareem Abdel-Salam, Muhammed Ezzeldin, Aliaa Mohamed, Karim El-Awaad, and Marwan Torki. 2019. Arabic dialect identification with deep learning and hybrid frequency based features. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 224-228, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Framewise phoneme classification with bidirectional lstm networks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings. 2005 IEEE International Joint Conference on Neural Networks",
"volume": "4",
"issue": "",
"pages": "2047--2052",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Graves and J. Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm networks. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 4, pages 2047- 2052 vol. 4.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Imbalanced toxic comments classification using data augmentation and deep learning",
"authors": [
{
"first": "Mai",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Marwan",
"middle": [],
"last": "Torki",
"suffix": ""
},
{
"first": "Nagwa",
"middle": [],
"last": "El-Makky",
"suffix": ""
}
],
"year": 2018,
"venue": "17th IEEE International Conference on Machine Learning and Applications (ICMLA)",
"volume": "",
"issue": "",
"pages": "875--878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mai Ibrahim, Marwan Torki, and Nagwa El-Makky. 2018. Imbalanced toxic comments classification using data augmentation and deep learning. In 2018 17th IEEE International Conference on Machine Learning and Ap- plications (ICMLA), pages 875-878, Dec.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Alexu-backtranslation-tl at semeval-2020 task [12]: Improving offensive language detection using data augmentation and transfer learning",
"authors": [
{
"first": "Mai",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Marwan",
"middle": [],
"last": "Torki",
"suffix": ""
},
{
"first": "Nagwa",
"middle": [],
"last": "El-Makky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the International Workshop on Semantic Evaluation (SemEval)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mai Ibrahim, Marwan Torki, and Nagwa El-Makky. 2020. Alexu-backtranslation-tl at semeval-2020 task [12]: Improving offensive language detection using data augmentation and transfer learning. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Advances in pretraining distributed word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Puhrsch",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre- training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "Duchesnay",
"middle": [],
"last": "And\u00e9douard",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Math- ieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cour- napeau, Matthieu Brucher, Matthieu Perrot, and\u00c9douard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12(null):2825-2830, November.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Software Framework for Topic Modelling with Large Corpora",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Radim\u0159eh\u016f\u0159ek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sojka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceed- ings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Preprocessing egyptian dialect tweets for sentiment mining",
"authors": [
{
"first": "Amira",
"middle": [],
"last": "Shoukry",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Rafea",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amira Shoukry and Ahmed Rafea. 2012. Preprocessing egyptian dialect tweets for sentiment mining. 11.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Training set classes distribution"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Reverse Translation Corpus Extension System"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Modified corpus classes distribution"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "(a) TF-IDF based system Architecture (b) fastText-based system Architecture"
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Implemented approaches Architectures"
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Confusion matrix for structure in 4.1 on first level classes"
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://</td></tr><tr><td>creativecommons.org/licenses/by/4.0/.</td></tr></table>",
"text": "Available dataset distribution"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>4.1 Two-level Hierarchical Prediction Structure</td></tr></table>",
"text": "Final Submitted results (in %) of AraBERT on Test and Dev. sets"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "Two-level classes distribution"
}
}
}
}