|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:58:55.556335Z" |
|
}, |
|
"title": "Team Alexa at NADI Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "Mutaz", |
|
"middle": [], |
|
"last": "Bni", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Jordan University of Science and Technology Irbid", |
|
"location": { |
|
"country": "Jordan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Younes", |
|
"middle": [], |
|
"last": "Nour", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Jordan University of Science and Technology Irbid", |
|
"location": { |
|
"country": "Jordan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Al-Khdour", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Jordan University of Science and Technology Irbid", |
|
"location": { |
|
"country": "Jordan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we discuss our team's work on the NADI Shared Task, which requires classifying Arabic tweets among 21 dialects. We tested different approaches, and the best one was also the simplest: our best submission used a Multinomial Naive Bayes classifier with n-grams as features, achieving a 17% F1-score and 35% accuracy in the test phase. However, in the post-evaluation phase we used an ensemble model combining BERT and the Multinomial Naive Bayes classifier, which outperformed the top submission on the task, achieving a 27.73% F1-score and 40.90% accuracy.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we discuss our team's work on the NADI Shared Task, which requires classifying Arabic tweets among 21 dialects. We tested different approaches, and the best one was also the simplest: our best submission used a Multinomial Naive Bayes classifier with n-grams as features, achieving a 17% F1-score and 35% accuracy in the test phase. However, in the post-evaluation phase we used an ensemble model combining BERT and the Multinomial Naive Bayes classifier, which outperformed the top submission on the task, achieving a 27.73% F1-score and 40.90% accuracy.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Research interest in Arabic natural language processing (NLP) is currently focused on dialect identification at several levels: region, country, and province. Most previous work focused on Modern Standard Arabic (MSA) (Elfardy and Diab, 2013) (Al-Sabbagh and Girju, 2012) because it is commonly used in formal writing across Arab countries. Much previous work on Arabic dialect classification combined word-level and character-level n-grams with Multinomial Naive Bayes, such as (Meftouh et al., 2019; Talafha et al., 2019). Eldesouki et al. (2016) successfully applied SVM. Zhang and Abdul-Mageed (2019) proposed a semi-supervised model with BERT and obtained the top rank in the MADAR Twitter User Dialect Identification subtask of the MADAR Shared Task (Bouamor et al., 2019). The MADAR corpus was the first large-scale resource built for Arabic dialects.",
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 289, |
|
"text": "(Elfardy and Diab, 2013)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 318, |
|
"text": "(Al-Sabbagh and Girju, 2012)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 535, |
|
"end": 556, |
|
"text": "Meftouh et al., 2019;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 557, |
|
"end": 578, |
|
"text": "Talafha et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 604, |
|
"text": "Eldesouki et al. (2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 631, |
|
"end": 660, |
|
"text": "Zhang and Abdul-Mageed (2019)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 831, |
|
"text": "(Bouamor et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Nuanced Arabic Dialect Identification (NADI) shared task (Abdul-Mageed et al., 2020) provided a labeled dataset of Arabic tweets with two subtasks: the first is country-level dialect identification and the second is province-level dialect identification. The dataset was challenging, and the same dataset was used for both subtasks; even algorithms that achieved high results on similar tasks did not obtain satisfying results. In this paper, we focus on the first subtask. The Alexa model was built as a weighted ensemble with n-gram features at the word and character levels. The ensemble consists of a OneVsRest classifier with MNB, MNB, and Logistic Regression. Our model obtained a 17% F1-score and 35% accuracy, ranking twelfth by F1-score out of 18 participants.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper is organized as follows: Section 2 presents the data analysis, Section 3 describes the Alexa model, Sections 4 and 5 discuss the results for subtask 1, and Section 6 concludes.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The NADI shared task contains two subtasks; both use the same training and development data but differ in labels. The sizes of the training, development, and testing datasets are shown in Table 1. The first subtask uses country-level dialects as labels, while the second subtask uses province-level dialects as labels. The dataset covers a total of 100 provinces from 21 Arab countries.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 192, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The dataset was collected from tweets in different domains. It is highly imbalanced; the number of tweets for each country is shown in Table 2.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 147, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We experimented with different pre-processing settings, such as removing links, @usernames, extra white space, punctuation, English letters, emojis, diacritics, and repeated characters, and filtering out non-Arabic tweets written in the Arabic script, such as Pashto, Urdu, and Persian. However, contrary to our expectations, pre-processing had a negative effect: the results decreased. We therefore concluded that training the classifiers without pre-processing would be more effective.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pre-processing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The organizers included 10 million unlabeled tweets. Some previous work used such data to generate more training samples. In Zhang and Abdul-Mageed (2019), the authors used a self-learning method to augment the training dataset. This method increased their baseline's accuracy by 3% and their F1-score by 6%. However, we did not use this dataset, since none of our experiments exploiting it yielded any improvement.",
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 154, |
|
"text": "Zhang and Abdul-Mageed (2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unlabeled Tweets", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In this section, we describe our system (the Alexa model), which consists of multiple steps. At the feature extraction level, we extract features from the tweets using language models (n-grams) (Brown et al., 1992), which assign probabilities to sequences of words or characters. Many experiments have combined different sets of language models at the word and character levels (Talafha et al., 2019) and (Meftouh et al., 2019). At the word level, unigrams and bigrams (1,2) performed best; at the character level, we ended up with n-grams ranging from 1 to 5 characters using two types of TF-IDF vectorizers, \"char\" and \"char wb\" (characters within word boundaries), as shown in Figure 1. The features were weighted as follows: the word-level unigram and bigram features were weighted 0.8, the character-level \"char wb\" TfidfVectorizer 1.1, and the \"char\" TfidfVectorizer 1.0.",
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 205, |
|
"text": "(Brown et al., 1992)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 426, |
|
"text": "(Talafha et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 431, |
|
"end": 453, |
|
"text": "(Meftouh et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Then, the extracted features were concatenated to train the ensemble model; the ensemble model is composed of One Vs Rest strategy with MNB (Small and Hsiao, 1985) , MNB, and Logistic regression (Kleinbaum et al., 2002). The predictions from the three classifiers are summed to produce one label for each tweet. The parameters used to train each classifier were: 1. One Vs Rest strategy with MNB: alpha equal to 0.05 and the default values for the remaining parameters.",
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 163, |
|
"text": "(Small and Hsiao, 1985)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 219, |
|
"text": "(Kleinbaum et al., 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2. MNB: alpha equal to 0.02 and the default values for the remaining parameters.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3. Logistic regression: multi_class='ovr' and the default values for the remaining parameters. When multi_class is set to 'ovr', the algorithm uses a one-vs-rest (OvR) scheme.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "It is worth mentioning that using different pre-trained embedding models with this dataset did not perform well. We used BERT (Devlin et al., 2018) multilingual model and the AraVec model (Soliman et al., 2017) to generate embeddings for the dataset. Both achieved low results, whether used as main features or as extra features. On the other hand, using these models for training outperformed the best submission on the task; we discuss the model and the results in the post-evaluation section.",
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 147, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We tested different ideas in the post-evaluation phase. We merged similar dialects into one label and used it to obtain extra features for the data. We ended up with four main dialect groups: \"GULF\", \"AFRICA\", \"LEVANT\", and \"MAGRIB\". Stacking the probability of each dialect group with the n-gram features did not noticeably boost the results. Another approach we tested was adding hand-written rules; adding a few rules boosted the F1-score by 2%. For example, we increase the probability of a tweet being labeled \"Jordan\" if we see the word \"jordan\" in it. However, some rules could lead to misclassifying some tweets. Figure 2 shows our ensemble model, which concatenates weighted predicted probabilities from the bert-large-arabic model (Safaya et al., 2020) and the MNB classifier. The bert-large-arabic model is one of the ArabicBERT models, which were released in four different sizes (Large, Base, Medium, and Mini) and trained on nearly 95 GB of Arabic text from the Open Super-large Crawled (OSCAR) corpus and Wikipedia. Furthermore, the training data contained both Modern Standard Arabic and dialectal Arabic; in our opinion, this is the main reason for the enhanced dialect classification performance on this task, as it produces meaningful embedding representations.",
|
"cite_spans": [ |
|
{ |
|
"start": 756, |
|
"end": 777, |
|
"text": "(Safaya et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 643, |
|
"end": 651, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The predicted probabilities were multiplied by weights in order to obtain the highest possible F1-score on the development dataset; the final weights for both classifiers were determined through multiple experiments: 0.35 for the MNB probabilities and 1.4 for the BERT probabilities. This model outperformed the best submission on the task, achieving a 27.73% F1-score and 40.90% accuracy. The parameters used to train each classifier were: 1. One Vs Rest strategy with MNB: alpha equal to 0.01 and the default values for the remaining parameters. The features were weighted as follows: the word-level unigram and bigram features were weighted 0.8, the character-level \"char wb\" TfidfVectorizer 1.1, and the \"char\" TfidfVectorizer 1.0.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble model", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "2. BERT-large-arabic: num_train_epochs equal to 2, learning_rate equal to 2e-5, and the default values for the remaining parameters.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ensemble model", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "The provided dataset did not include a Modern Standard Arabic (MSA) label in the training and development sets; see Figure 3, which contains wrongly classified tweets; the tweets in Figure 3 are written in MSA. Our team investigated this problem: we used a model trained to detect MSA text to extract the wrongly labeled tweets in our dataset. To train this model, we used the MADAR corpus, because it contains MSA tweets as well as other dialects. Our model achieves an accuracy score higher than the top submitted score on the MADAR Shared Task, so we assume that it can accurately classify MSA tweets. We also used different Arabic datasets to check whether this claim holds and obtained similar results. Based on our system, more than 20% of the training and development tweets were MSA tweets; Table 3 shows our numbers. To test our findings, we re-labeled the MSA tweets as \"MSA\" and then applied a Multinomial Naive Bayes classifier to the development dataset; the results are shown in Table 4. We noticed that the F1-score decreased when re-labeling the dataset. This is because some tweets are written mostly in MSA but may contain a few dialect words; without the MSA label, such tweets are classified correctly into their provided class, but when the MSA label is added, they are classified as MSA, which decreases the overall F1-score for many classes while increasing it for the MSA class.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 190, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 819, |
|
"end": 826, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1023, |
|
"end": 1030, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we tested different approaches on an Arabic dialect classification task, the NADI shared task. Our best submitted score in the test phase was obtained using an ensemble model that contains Multinomial Naive Bayes, OneVsRest MNB, and logistic regression, with both CountVectorizer and TfidfVectorizer features; this submission achieved a 17% F1-score. However, in the post-evaluation phase, we used an ensemble model which outperformed the best submitted score on the task: a weighted combination of BERT's probabilities and the MNB probabilities, achieving a 27.73% F1-score and 40.90% accuracy. We also noticed that many tweets were wrongly labeled because they were written in MSA and there was no MSA label. In addition, six countries have fewer than 240 tweets each, which leads the model to never predict these dialects. For future work, we would like to overcome these issues by finding ways to deal with the imbalanced dataset and the MSA problem.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task",
|
"authors": [ |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chiyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
},

{

"first": "Houda",

"middle": [],

"last": "Bouamor",

"suffix": ""

},

{

"first": "Nizar",

"middle": [],

"last": "Habash",

"suffix": ""

}
|
], |
|
"year": 2020,
|
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, Houda Bouamor, and Nizar Habash. 2020. NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task. In Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP 2020), Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Yadac: Yet another dialectal arabic corpus", |
|
"authors": [ |
|
{

"first": "Rania",

"middle": [],

"last": "Al-Sabbagh",

"suffix": ""

},
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2882--2889", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rania Al-Sabbagh and Roxana Girju. 2012. Yadac: Yet another dialectal arabic corpus. In LREC, pages 2882- 2889.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The MADAR Arabic dialect corpus and lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Houda", |
|
"middle": [], |
|
"last": "Bouamor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wajdi", |
|
"middle": [], |
|
"last": "Zaghouani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dana", |
|
"middle": [], |
|
"last": "Abdulrahim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ossama", |
|
"middle": [], |
|
"last": "Obeid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salam", |
|
"middle": [], |
|
"last": "Khalifa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fadhl", |
|
"middle": [], |
|
"last": "Eryani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Erdmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kemal", |
|
"middle": [], |
|
"last": "Oflazer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Houda Bouamor, Nizar Habash, Mohammad Salameh, Wajdi Zaghouani, Owen Rambow, Dana Abdulrahim, Os- sama Obeid, Salam Khalifa, Fadhl Eryani, Alexander Erdmann, and Kemal Oflazer. 2018. The MADAR Arabic dialect corpus and lexicon. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The MADAR Shared Task on Arabic Fine-Grained Dialect Identification", |
|
"authors": [ |
|
{ |
|
"first": "Houda", |
|
"middle": [], |
|
"last": "Bouamor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabit", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop (WANLP19)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Houda Bouamor, Sabit Hassan, and Nizar Habash. 2019. The MADAR Shared Task on Arabic Fine-Grained Di- alect Identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop (WANLP19), Florence, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Class-based n-gram models of natural language", |
|
"authors": [ |
|
{

"first": "Peter F",

"middle": [],

"last": "Brown",

"suffix": ""

},

{

"first": "Vincent J",

"middle": [],

"last": "Della Pietra",

"suffix": ""

},

{

"first": "Peter V",

"middle": [],

"last": "Desouza",

"suffix": ""

},

{

"first": "Jennifer C",

"middle": [],

"last": "Lai",

"suffix": ""

},

{

"first": "Robert L",

"middle": [],

"last": "Mercer",

"suffix": ""

}
|
], |
|
"year": 1992, |
|
"venue": "Computational linguistics", |
|
"volume": "18", |
|
"issue": "4", |
|
"pages": "467--480", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F Brown, Vincent J Della Pietra, Peter V Desouza, Jennifer C Lai, and Robert L Mercer. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467-480.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Qcri@ dsl 2016: Spoken arabic dialect identification using textual features", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Eldesouki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Eldesouki, Fahim Dalvi, Hassan Sajjad, and Kareem Darwish. 2016. Qcri@ dsl 2016: Spoken arabic dialect identification using textual features. In Proceedings of the Third Workshop on NLP for Similar Lan- guages, Varieties and Dialects (VarDial3), pages 221-226.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Sentence level dialect identification in arabic", |
|
"authors": [ |
|
{ |
|
"first": "Heba", |
|
"middle": [], |
|
"last": "Elfardy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "456--461", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heba Elfardy and Mona Diab. 2013. Sentence level dialect identification in arabic. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 456-461.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Logistic regression", |
|
"authors": [ |
|
{

"first": "David G",

"middle": [],

"last": "Kleinbaum",

"suffix": ""

},

{

"first": "K",

"middle": [],

"last": "Dietz",

"suffix": ""

},

{

"first": "M",

"middle": [],

"last": "Gail",

"suffix": ""

},

{

"first": "Mitchel",

"middle": [],

"last": "Klein",

"suffix": ""

},

{

"first": "Mitchell",

"middle": [],

"last": "Klein",

"suffix": ""

}
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David G Kleinbaum, K Dietz, M Gail, Mitchel Klein, and Mitchell Klein. 2002. Logistic regression. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The smart classifier for arabic fine-grained dialect identification",
|
"authors": [ |
|
{ |
|
"first": "Karima", |
|
"middle": [], |
|
"last": "Meftouh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karima", |
|
"middle": [], |
|
"last": "Abidi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salima", |
|
"middle": [], |
|
"last": "Harrat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kamel", |
|
"middle": [], |
|
"last": "Smaili", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karima Meftouh, Karima Abidi, Salima Harrat, and Kamel Smaili. 2019. The smart classifier for arabic fine- grained dialect identification.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Safaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moutasem", |
|
"middle": [], |
|
"last": "Abdullatif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. Kuisail at semeval-2020 task 12: Bert-cnn for offensive speech identification in social media.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Fine-grained arabic dialect identification", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Houda", |
|
"middle": [], |
|
"last": "Bouamor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1332--1344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Salameh, Houda Bouamor, and Nizar Habash. 2018. Fine-grained arabic dialect identification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1332-1344.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Multinomial logit specification tests", |
|
"authors": [ |
|
{

"first": "Kenneth A",

"middle": [],

"last": "Small",

"suffix": ""

},

{

"first": "Cheng",

"middle": [],

"last": "Hsiao",

"suffix": ""

}
|
], |
|
"year": 1985, |
|
"venue": "International economic review", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "619--627", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth A Small and Cheng Hsiao. 1985. Multinomial logit specification tests. International economic review, pages 619-627.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Aravec: A set of arabic word embedding models for use in arabic nlp", |
|
"authors": [ |
|
{

"first": "Abu Bakr",

"middle": [],

"last": "Soliman",

"suffix": ""

},

{

"first": "Kareem",

"middle": [],

"last": "Eissa",

"suffix": ""

},

{

"first": "Samhaa R",

"middle": [],

"last": "El-Beltagy",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Procedia Computer Science", |
|
"volume": "117", |
|
"issue": "", |
|
"pages": "256--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abu Bakr Soliman, Kareem Eissa, and Samhaa R El-Beltagy. 2017. Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Team just at the madar shared task on arabic fine-grained dialect identification", |
|
"authors": [ |
|
{ |
|
"first": "Bashar", |
|
"middle": [], |
|
"last": "Talafha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Fadel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahmoud", |
|
"middle": [], |
|
"last": "Al-Ayyoub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Jararweh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Al-Smadi", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Juola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "285--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bashar Talafha, Ali Fadel, Mahmoud Al-Ayyoub, Yaser Jararweh, AL-Smadi Mohammad, and Patrick Juola. 2019. Team just at the madar shared task on arabic fine-grained dialect identification. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 285-289.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "No army, no navy: Bert semi-supervised learning of arabic dialects", |
|
"authors": [ |
|
{ |
|
"first": "Chiyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "279--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chiyu Zhang and Muhammad Abdul-Mageed. 2019. No army, no navy: Bert semi-supervised learning of arabic dialects. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 279-284.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Alexa model architecture", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "The ensemble model architecture", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Examples of wrongly classified tweets",
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Corpus</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>NADI</td><td>21,000</td><td>4,957</td><td>5,000</td></tr></table>",
|
"html": null, |
|
"text": "The distribution of the dataset", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Country</td><td>Number</td><td>Country</td><td>Number</td></tr><tr><td>Bahrain</td><td>210</td><td>Yemen</td><td>851</td></tr><tr><td>Djibouti</td><td>210</td><td>Syria</td><td>1,070</td></tr><tr><td>Sudan</td><td>210</td><td>Morocco</td><td>1,070</td></tr><tr><td>Mauritania</td><td>210</td><td>United Arab Emirates</td><td>1,070</td></tr><tr><td>Somalia</td><td>210</td><td>Libya</td><td>1,070</td></tr><tr><td>Qatar</td><td>234</td><td>Oman</td><td>1,098</td></tr><tr><td>Kuwait</td><td>420</td><td>Algeria</td><td>1,491</td></tr><tr><td>Palestine</td><td>420</td><td>Saudi Arabia</td><td>2,312</td></tr><tr><td>Jordan</td><td/><td>Iraq</td><td>2,556</td></tr><tr><td>Lebanon</td><td>639</td><td>Egypt</td><td>4,473</td></tr><tr><td>Tunisia</td><td>750</td><td>-</td><td>-</td></tr></table>",

"html": null,

"text": "Label Distribution",
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>Total</td><td>21,000</td><td>4,957</td><td>5,000</td></tr><tr><td>Estimated MSA</td><td>4,930</td><td>1,074</td><td>1,074</td></tr></table>",

"html": null,

"text": "Number of MSA Tweets in Each Set",
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>F1-score</td><td>Accuracy</td></tr><tr><td>With MSA</td><td>14.9</td><td>43.837</td></tr><tr><td>Without MSA</td><td>14.9</td><td>35.203</td></tr></table>",

"html": null,

"text": "Results after adding the MSA label",
|
"num": null |
|
} |
|
} |
|
} |
|
} |