{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:58:57.500319Z"
},
"title": "Arabic Dialects Identification for All Arabic countries",
"authors": [
{
"first": "Ahmed",
"middle": [
"Hussein"
],
"last": "Aliwy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Kufa",
"location": {
"country": "Iraq"
}
},
"email": "[email protected]"
},
{
"first": "Hawraa",
"middle": [
"Ali"
],
"last": "Taher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Kufa",
"location": {
"country": "Iraq"
}
},
"email": ""
},
{
"first": "Zena",
"middle": [
"A"
],
"last": "Abutiheen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Kerbala",
"location": {
"country": "Iraq"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Arabic dialects are among of three main variant of Arabic language (Classical Arabic, modern standard Arabic and dialectal Arabic). It has many variants according to the country, city (provinces) or town. In this paper, several techniques with multiple algorithms are applied for Arabic dialects identification starting from removing noise till classification task using all Arabic countries as 21 classes. Three types of classifiers (Na\u00efve Bayes, Logistic Regression, and Decision Tree) are combined using voting with two different methodologies. Also clustering technique is used for decreasing the noise that result from the existing of MSA tweets in the data set for training phase. The results of f-measure were 27.17, 41.34 and 52.38 for first methodology without clustering, second methodology without clustering, and second methodology with clustering, the used data set is NADI shared task data set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Arabic dialects are among of three main variant of Arabic language (Classical Arabic, modern standard Arabic and dialectal Arabic). It has many variants according to the country, city (provinces) or town. In this paper, several techniques with multiple algorithms are applied for Arabic dialects identification starting from removing noise till classification task using all Arabic countries as 21 classes. Three types of classifiers (Na\u00efve Bayes, Logistic Regression, and Decision Tree) are combined using voting with two different methodologies. Also clustering technique is used for decreasing the noise that result from the existing of MSA tweets in the data set for training phase. The results of f-measure were 27.17, 41.34 and 52.38 for first methodology without clustering, second methodology without clustering, and second methodology with clustering, the used data set is NADI shared task data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Arabic Dialects is one of three variations of Arabic language (Classical Arabic CA, Modern Standard Arabic MSA and dialectal Arabic DA). It is a native language of Arabic people used in the communication among them and in the social media (Itani, 2018) . Each country of 21 countries, in the Arab world, has its own dialect, sometimes they are written in same script with different pronunciation. Recently, dialect identification (DI) is interesting field in Natural Language Processing (NLP) and conducted by number of researchers because it is increasing rapidly on the web and social media. There are nine distinct dialectal categories in Arab world: Egyptian, Gulf, Iraqi, Levantine, Maghrebi (El-Haj et al., 2018) , Yemeni, Somali, Sudanese and Mauritania. Each one of them has many varieties according to the city and town. It is clear that there are four levels of Arabic dialectal identification (ADI): (1) identification of dialectal Arabic from MSA and CA, it is very easy task similar to identification of Arabic language among other languages, (2) identification of the main category of dialectal Arabic of( nine or five categories), it is more difficult than previous point, (3) identification of country level out of 21 countries, it more difficult than the previous two points, (4) identification of city level or town level which is subfield of country level, it is the most difficult among all these levels. Table 1 show dialects of the word \u202b-\u0623\u0631\ufbfe\ufedc\ufe94\"\u202c \u00c2ryk\u0127\"-(sofa) 1 in some Arabic countries*.",
"cite_spans": [
{
"start": 239,
"end": 252,
"text": "(Itani, 2018)",
"ref_id": "BIBREF0"
},
{
"start": 697,
"end": 718,
"text": "(El-Haj et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1425,
"end": 1432,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Native word Country Native word Country Native word Table 1 : dialects of the word \u202b-\u0623\u0631\ufbfe\ufedc\ufe94\"\u202c \u00c2ryk\u0127\" (sofa) in 16 Arabic countries.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Country",
"sec_num": null
},
{
"text": "Iraq \u202b-\ufedb\ufeae\u0648\ufbfe\ufe96\u202c krwyt \u202b-\ufed7\ufee8\ufed4\ufe94\u202c qnf\u0127 Kuwait \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh \u202b-\ufed7\ufee8\ufed4\ufbab\u202c qnf\u0127 Saudi Arabia \u202b-\ufe91\ufe8e\u0637\ufeae\ufee3\ufbab\u202c bATrmh \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh Bahrain \u202b-\ufed7\ufee8\ufed4\ufe94\u202c qnf\u0127 Lebanon \u202b-\ufedb\ufee8\ufe92\ufe8e\ufbfe\ufbab\u202c knbAyh Syria \u202b-\ufedb\ufee8\ufe92\ufe8e\ufbfe\ufbab\u202c knbAyh Egypt \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh Oman \u202b-\ufed7\ufee8\ufed4\ufe94\u202c qnf\u0127 Morocco & Tunisia \u202b-\ufed3\ufeee\u0637\ufeee\u064a\u202c fwTwy Algeria \u202b-\ufed3\ufeee\u0637\ufeee\u064a\u202c fwTwy Palestine \u202b-\ufedb\ufee8\ufe92\ufe8e\ufbfe\ufbab\u202c knbAyh Yemen \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh Jordan \u202b-\ufedb\ufee8\ufe92\ufe8e\ufbfe\ufbab\u202c knbAyh Qatar \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh United Arab Emirates \u202b-\u0627\ufee7\ufe98\ufeae\ufbfe\ufbab\u202c Antryh, \u202b-\ufed7\ufee8\ufed4\ufe94\u202c qnf\u0127 \u202b-\ufedb\ufee8\ufe92\ufbab\u202c knbh",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Country",
"sec_num": null
},
{
"text": "There are many challenges in ADI over identification of other types of Arabic language such as: 1. In case of the levels 2,3 and 4, all the countries and cities use MSA for writing in some times, therefore these will be noise in the dialectal identification. 2. Existing of a city in a specific country use dialect that very close to other country more than its country such as Basrah in Iraq use dialect very close to Kuwait dialect. 3. Some countries are much similar in most words and different in little words therefore identification of their dialects are much difficult. this paper is a part of NADI 2020 shared task (subtask1) where the tweets in Arabic dialect has been classified into the country belong it by voting among three classifiers (Na\u00efve Bayes NB, Logistic Regression LR, and Decision Tree DT) in different methodologies to make final decision. Also a preprocessing phase is done, before implementation of the classifier, such as noisy redundant, removing stopwords, feature extraction and feature selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Country",
"sec_num": null
},
{
"text": "There are many works for identification of Arabic dialects in all the levels. We chose some of the close works to our work. Belgacem et al. (2010) worked on nine dialects (Tunisia, Algeria, Syria, Lebanon, Yemen, Egypt, Golf's Countries, Morocco and Iraq) using the platform Alize and Gaussian Mixture Models (GMM). They showed the complexity of the automatic identification of Arabic dialects. Elfardy & Diab (2013) used a supervised approach for identification of Arabic dialects. They got an accuracy of 85.5% on an Arabic online-commentary. Cotterell et al. (2014) presented a multi-dialect, multi-genre, human annotated corpus of dialectal Arabic with data obtained from both online commentary on the newspaper and Twitter. They used five Arabic dialects ( Egyptian, Levantine, Gulf, Maghrebi and Iraqi). Sadat et al.(2014) presented a set of experiments of letter-based (n-gram) Markov language model and NB classifiers on social media. Experimental results showed that NB classifier using character-level bigram model can identify the 18 different Arabic dialects with a considerable accuracy. Malmasi & Zampieri (2016) described a system to identify of four regional Arabic dialects (Egyptian, Levantine, Gulf, North African) and Modern Standard Arabic (MSA) in a transcribed speech corpus as a DSL shared task. They used ensemble classifier of set of linear models as base classifiers and they achieved a score of 0.51 in the closed training track. El-Haj et al. (2018) presented Subtractive Bivalency Profiling (SBP) for identification of four Arabic dialects( Egyptian, Levant , Gulf , and North African) as well as MSA where the accuracy were 76%. Mishra, & Mujadia (2019) explored the use of different features (char, word n-gram, language model probabilities, etc) on different classifiers for Arabic dialects identification. The work is part of Multi Arabic Dialect Applications and Resources (MADAR) Shared Task (Bouamor, et al.,2019) in WANLP 2019 on Arabic Fine-Grained Dialect Identification. They showed that traditional machine learning classifier tends to perform better when compared to neural network models in a low resource setting. Salameh et al. (2018) presented a fine-grained dialect classification task covering 25 specific cities from across the Arab World, in addition to Standard Arabic. They used several classification systems with large space of features. Their results show that the exact city of a speaker can be identified at an accuracy of 67.9%. ADI, in our work, is achieved by (i) identifying Arabic language from other similar languages, (ii) identifying dialects from MSA & CA, and (iii) identifying dialects among 21 Arabic dialects. The final step is achieved by voting among three well-known and very different classifiers (NB, LR and DT) in two different methodologies. Also, because the used data is none golden standard, little steps of noisy removal are done.",
"cite_spans": [
{
"start": 124,
"end": 146,
"text": "Belgacem et al. (2010)",
"ref_id": "BIBREF2"
},
{
"start": 395,
"end": 416,
"text": "Elfardy & Diab (2013)",
"ref_id": "BIBREF4"
},
{
"start": 545,
"end": 568,
"text": "Cotterell et al. (2014)",
"ref_id": "BIBREF5"
},
{
"start": 810,
"end": 828,
"text": "Sadat et al.(2014)",
"ref_id": "BIBREF6"
},
{
"start": 1458,
"end": 1478,
"text": "El-Haj et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 1660,
"end": 1684,
"text": "Mishra, & Mujadia (2019)",
"ref_id": "BIBREF8"
},
{
"start": 1928,
"end": 1950,
"text": "(Bouamor, et al.,2019)",
"ref_id": "BIBREF9"
},
{
"start": 2159,
"end": 2180,
"text": "Salameh et al. (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The used data set is NADI Tweeter data set (Abdul-Mageed et al., 2020) . It consists of three parts training part of 21,000 tweets, development part of 4957 Tweets and Test set of 5,000 tweets. It is not golden standard corpus and has different noise levels such as existing of 405 non-Arabic tweets (Kurdish and Persian). Also, this data set is mixed of DA, MSA and CA. therefore a noisy removal should be taken as preprocessing. The data sets are labeled in two levels; first level (country level) of 21 coun-tries and second level (provinces level) of 100 provinces. Table 2 show the statistics of training data set with /without non-Arabic tweets.",
"cite_spans": [
{
"start": 43,
"end": 70,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 570,
"end": 577,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data set",
"sec_num": "3"
},
{
"text": "Our system consist of five phases: (i) preprocessing, (ii) noisy tweets removal, (iii) formal clitics and stop words removal, (iv) features extraction and selection, and (v) the classification. Figure 1 explained the proposed method. They will be explained in the next few sections. ",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our system",
"sec_num": "4"
},
{
"text": "Preprocessing is a step used in almost all NLP applications. In this work, this stage consists of two main steps noisy preprocessing, and non-Arabic letter removal. The first step is done before noisy tweets removal, it is achieved by deleting English letters, special symbols, numbers, tweeter mark-up, Emoticons, repetition letters etc., and unification of letter variants (normalization). The second step of preprocessing is removing non-Arabic letters from Arabic tweets (only), it is applied after noisy tweets removal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},
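As an illustration of the preprocessing stage described above, the following minimal Python sketch (not the authors' code; the helper name preprocess_tweet and the exact normalization rules are assumptions) removes Latin letters, digits, Twitter mark-up and emoticon-like symbols, collapses repeated letters, and unifies common Arabic letter variants.

import re

def preprocess_tweet(text: str) -> str:
    # Delete Twitter mark-up and URLs, then Latin letters and digits.
    text = re.sub(r"@\w+|#\w+|https?://\S+", " ", text)
    text = re.sub(r"[A-Za-z0-9]+", " ", text)
    # Delete special symbols and emoticons (anything that is not a word
    # character, whitespace, or an Arabic-block character).
    text = re.sub(r"[^\w\s\u0600-\u06FF]", " ", text)
    # Collapse letters repeated three or more times into a single letter.
    text = re.sub(r"(.)\1{2,}", r"\1", text)
    # Unification of letter variants (assumed normalization rules).
    text = re.sub("[\u0623\u0625\u0622]", "\u0627", text)  # alif variants -> bare alif
    text = text.replace("\u0629", "\u0647")                # taa marbuta -> haa
    text = text.replace("\u0649", "\u064a")                # alif maqsura -> yaa
    return re.sub(r"\s+", " ", text).strip()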
{
"text": "As was mentioned previously the data set has levels of errors such as foreign tweets and MSA. Approximately 504 tweets were recorded manually as non-Arabic tweets. These tweets and the others are used for learning the binary classifier in character-level and word-level for identifying the foreign languages such as Persian and Kurdish languages. From many tests, the best was unigram for word-level and bigram for character-level with Na\u00efve Bayes classifier. In the classification step of development and test sets, we should see that all the classified tweets as foreign language will be classified as Iraq class because of the probability of being foreign as Iraq class is the highest (2556-2174)/504 \u2248 0.76, see table 2 for more details. In case of dialects and non-dialects (CA & MSA), 2,000 tweets are classified manually from training set as DA or non-DA. Then these two types of tweets (2,000 tweets) are used as centers for two clusters. Kmean clustering was used for clustering the remaining tweets into the two clusters using Simisupervised clustering where the manually classified tweets will not be changed their clusters in the iterations of Kmean clustering but stay always in the specific cluster. The non-DA cluster is checked manually only because it was small and all the dialects tweets were removed. The final clusters, in our case are two classes, are used for leaning a binary classifier for using it in the identification of dialects from non-dialects. In the classification step of development and test sets, we should see that all the classified tweets as non-DA will be classified as Egypt dialects but it is bad selection therefore it will produce extra errors in the evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noisy tweets removal",
"sec_num": "4.2"
},
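The semi-supervised clustering step above can be sketched as follows. This is an assumed implementation (the function name seeded_kmeans and the use of numeric tweet vectors, e.g. TF-IDF vectors, are our assumptions): the 2,000 manually labeled tweets act as seeds whose DA / non-DA cluster assignment is frozen across the K-means iterations.

import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, n_iter=20):
    # X: (n, d) array of tweet vectors; seed_idx / seed_labels give the indices
    # and fixed 0/1 labels (non-DA / DA) of the manually classified tweets.
    labels = np.zeros(len(X), dtype=int)
    labels[seed_idx] = seed_labels
    centers = np.stack([X[seed_idx][seed_labels == k].mean(axis=0) for k in (0, 1)])
    for _ in range(n_iter):
        # Assign every tweet to its nearest center ...
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... but the seed tweets never change their cluster.
        labels[seed_idx] = seed_labels
        centers = np.stack([X[labels == k].mean(axis=0) for k in (0, 1)])
    return labels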
{
"text": "The third phase is formal clitics removal such as \u202b-\u0627\u0644\"\u202c Al\"-(the), \"\u202b-\u0648\u0627\u0644\u202cwAl\"-(and the) and \"\u202b-\u0648\ufedf\ufede\u202cwll\"-(and for the) but not the letter \" \u202b\u06be\u202c \u202b\u0640\u202c \" \"h\" in word \u202b-\u06be\ufe84\ufedf\ufecc\ufe90\"\u202c h\u00c2l\u03c2b\"-(I will play) because it is dialectal clitics in Egyptian dialects but not formal clitics (not in modern standard Arabic). Also, the stop words will be deleted in this stage but a very simple list of stop words is taken and they are tuned using the training set according to threshold. For example, the word \u202b-\ufecb\ufee8\ufee8\ufbab\"\u202c \u03c2nnh\"-(about us) is dialect therefore it will be not deleted but the word \u202b-\ufecb\ufee8\ufbab\"\u202c \u03c2nh\"-(about us) will be deleted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal clitics removal",
"sec_num": "4.3"
},
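A rough sketch of the clitic and stop-word handling described above; the prefix list, the helper names, and the threshold value are assumptions rather than the authors' exact rules. Formal MSA prefixes are stripped, dialectal clitics are left untouched, and the stop-word list is derived from training-set frequencies against a tuned threshold.

# Formal MSA prefixes, longest first so that "wAl"/"wll" are matched before "Al".
FORMAL_PREFIXES = ("\u0648\u0627\u0644", "\u0648\u0644\u0644", "\u0627\u0644")  # wAl, wll, Al

def strip_formal_clitics(token: str) -> str:
    for prefix in FORMAL_PREFIXES:
        # Keep at least two letters of the stem; dialectal clitics such as the
        # Egyptian future marker "h" are simply not in the prefix list.
        if token.startswith(prefix) and len(token) > len(prefix) + 1:
            return token[len(prefix):]
    return token

def build_stopword_list(doc_freq: dict, n_tweets: int, threshold: float = 0.01) -> set:
    # Treat as stop words only tokens whose relative document frequency in the
    # training set reaches the tuned threshold.
    return {tok for tok, df in doc_freq.items() if df / n_tweets >= threshold}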
{
"text": "The fourth phase is feature selection which started by selecting the effective prefixes and suffixes of size of (1-4) letters according to their threshold and weights. Also, all the words (without formal clitics) are taken as features where their features are TF-IDF according to the equation below. IDF = log ( ) \u2026 11, = TF , * IDF \u2026.(12) Where w represents the word and i represents the class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature selection",
"sec_num": "4.4"
},
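The class-level TF-IDF weighting of equations (1) and (2) could be computed as in the sketch below; this is our reconstruction under the assumption that the "documents" for IDF are the 21 country classes, and class_tfidf is a hypothetical helper, not the authors' code.

import math
from collections import Counter, defaultdict

def class_tfidf(tweets, labels):
    # tweets: list of token lists; labels: country class of each tweet.
    tf = defaultdict(Counter)                  # tf[class][word] = TF_{w,i}
    for tokens, c in zip(tweets, labels):
        tf[c].update(tokens)
    classes = list(tf)
    classes_with_word = Counter()              # N_w: number of classes containing w
    for c in classes:
        classes_with_word.update(tf[c].keys())
    weights = {}
    for c in classes:
        for w, count in tf[c].items():
            idf = math.log(len(classes) / classes_with_word[w])   # equation (1)
            weights[(w, c)] = count * idf                         # equation (2)
    return weights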
{
"text": "The last phase, the classification process, is achieved by voting among three well-known and very different classifiers (Na\u00efve Bayes, Logistic Regression, and Decision Tree). The classification process is done using voting in two methodologies; the first is normal classification using 21 classes. The second methodology is done using binary classifiers where each one is learned from two classes, the first class represent one class of 21 classes and the second class represent the other 20 classes. For each tweet, there are 21 classification processes done for two classes (\"specific country\" or \"others\"). If all classifiers produce \"others\" class except one give country then this class (country) will be selected as the class for this tweets otherwise other classifier is will be used for classifying among the candidate countries only. For example suppose we try to classify tweet t, 18 classifiers gave 2 \"others\" and 3 classifiers gave Iraq, Kuwait and Qatar respectively then this tweet will be feed to other classifier that learned from training tweets of these 3 classes only and the output will be the final class. But if, in our example, 20 classifiers gave \"others\" and one gave \"Iraq\" then the class of the tweet t will be Iraq directly without using extra classifier. If all the twenty one classifiers are classified the tweet t as \"other\", then this tweet is unknown tweet and it will be classified as Egypt class because Egypt class has the highest probability among other classes. We should see that there is not any possibility for getting two appearance of one country from two classifiers because each classifier of 21 classifiers is used for one country.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification",
"sec_num": "4.5"
},
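A compact sketch of the second methodology: 21 one-vs-rest binary classifiers plus a tie-break classifier trained only on the candidate countries. It assumes scikit-learn-style estimators and already-extracted feature matrices; for brevity the tie-break uses a single Naïve Bayes model, whereas the paper votes among NB, LR, and DT at every step.

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def predict_country(x, binary_clfs, X_train, y_train, default="Egypt"):
    # binary_clfs: dict country -> fitted classifier that outputs 1 for "this
    # country" and 0 for "others"; x is a (1, n_features) vector for one tweet.
    candidates = [c for c, clf in binary_clfs.items() if clf.predict(x)[0] == 1]
    if len(candidates) == 1:
        return candidates[0]                 # exactly one country vote
    if not candidates:
        return default                       # unknown tweet -> most probable class
    # Several countries voted: train a tie-break classifier on those classes only.
    mask = np.isin(y_train, candidates)
    tie_break = MultinomialNB().fit(X_train[mask], np.asarray(y_train)[mask])
    return tie_break.predict(x)[0]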
{
"text": "The system was implemented on NADI shared task data set (subtask 1). The results for identification of foreign tweets in development set were 0.985, 1 and 0.992 for precision, recall and f-measure respectively where the total foreign tweets were 132, as part of noisy tweets removal. The classification results (official 3 ) is 27.17 as f-measure for voting without using clustering. For voting with clustering (unofficial), the f-measure is 41.34. For voting (21 binary classifiers) with clustering, the f-measure is 52.38. all these percent's are for development data set. Table 3 shows the results for these three types of tests. Table 3 shows the summary of all tests results. We should know that the foreign tweets are classified as Iraq and the MSA tweets are classified as Egypt for evaluation purpose which it in most cases cause dropdown in the scores. Aslo, unknown tweets in the last test are classified as Egypt class. ",
"cite_spans": [],
"ref_spans": [
{
"start": 575,
"end": 582,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 633,
"end": 640,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this paper, ADI is implemented and applied on NADI shared task dataset of very close 21 classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "The proposed system starts from removing noise till the classification process for 21 classes of all Arabic countries. Three classifiers were combined and used in two methodologies (one classifier for classification of 21 classes or using twenty one binary classifiers). Selecting the classes Iraq, Egypt and Egypt classes for Foreign, MSA and unknown tweets respectively dropped down the macro-average score but really foreign and MSA are noise. The results of classification were very low for many reasons: (i) existing of 504 non-Arabic tweets in training set, (ii) existing of MSA tweets which is used in all the Arabic countries result in high noise in learning phase, (iii) some dialects are close to each other's, (iv) existing of ambiguous tweets where it written as MSA but can be pronounced in many dialects, (v) existing of a city in a country that used dialect close to other country more than its country, (vi) the tweets were classified, in the data set, according to user location but not user dialect which produce errors in the classes of training set and hence the wrong learning. The result of using twenty two binary classifiers with clustering (removing MSA from training data) gave us the best results because the system will focus on the dialect of each country in the training process without noise tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Arabic Dialects are native languages for each country or city in Arab world where sometimes the writing of dialects is same but the pronunciations are different. We can see four levels of Arabic dialectal identification (ADI) from the easiest task to the most difficult task. The first level is identification of dialectal Arabic from the other two Arabic language varieties. The second level is identification of the main category of dialectal Arabic of (nine or five categories). The third level is identification of country level out of 21 countries. The fourth level is identification of city dialects or town dialects. There are many challenges in ADI according to the level number. Simply we can conclude that ADI is a hard task for many reasons as was mentioned in discussion section. The task need to golden standard corpus and a dictionary for almost all words used in each dialect. The learning from low noise data set gives a good results but the task is still need to special learning techniques and huge dataset (golden standard).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future works",
"sec_num": "7"
},
{
"text": "We used Habash-Soudi-Buckwalter transliteration in the form: \"Arabic word-its transliteration\"-(its translation) *This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Each classifier is done using voting among three classifiers (NB, LR and DT) 3 Official means that the results are sent to NADI shared task team before the deadline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiment analysis and resources for informal Arabic text on social media",
"authors": [
{
"first": "",
"middle": [
"M"
],
"last": "Itani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itani. M. 2018 \"Sentiment analysis and resources for informal Arabic text on social media.\" PhD diss., Sheffield Hallam University.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Arabic Dialect Identification in the Context of Bivalency and Code-Switching",
"authors": [
{
"first": "M",
"middle": [],
"last": "El-Haj",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ryson",
"suffix": ""
},
{
"first": "Aboelezz",
"middle": [
"M"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Eleventh international conference on language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "El-Haj M., Ryson P., and Aboelezz M., 2018 \"Arabic Dialect Identification in the Context of Bivalency and Code-Switching,\" Eleventh international conference on language Resources and Evaluation LREC 2018.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Identification of Arabic Dialects",
"authors": [
{
"first": "M",
"middle": [],
"last": "Belgacem",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Antoniadis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Belgacem, M., Antoniadis, G., & Besacier, L. 2010. Automatic Identification of Arabic Dialects. In LREC2010.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Verifiably effective Arabic dialect identification",
"authors": [
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1465--1468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darwish, K., Sajjad, H., & Mubarak, H., 2014. Verifiably effective Arabic dialect identification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 1465-1468).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentence level dialect identification in Arabic",
"authors": [
{
"first": "H",
"middle": [],
"last": "Elfardy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "456--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elfardy, H., & Diab, M. (2013, August). Sentence level dialect identification in Arabic. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 456- 461).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Multi-Dialect, Multi-Genre Corpus of Informal Written Arabic",
"authors": [
{
"first": "R",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "241--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cotterell, R., & Callison-Burch, C. 2014. A Multi-Dialect, Multi-Genre Corpus of Informal Written Arabic. In LREC (pp. 241-245).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic identification of Arabic language varieties and dialects in social media",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sadat",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kazemi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Farzindar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second Workshop on Natural Language Processing for Social Media (So-cialNLP)",
"volume": "",
"issue": "",
"pages": "22--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadat, F., Kazemi, F., & Farzindar, A. 2014. Automatic identification of Arabic language varieties and dialects in social media. In Proceedings of the Second Workshop on Natural Language Processing for Social Media (So- cialNLP) (pp. 22-27).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Arabic dialect identification in speech transcripts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)",
"volume": "",
"issue": "",
"pages": "106--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Malmasi, S., & Zampieri, M. 2016. Arabic dialect identification in speech transcripts. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3) (pp. 106-113).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Arabic Dialect Identification for Travel and Twitter Text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Mujadia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "234--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mishra, P., & Mujadia, V. 2019, Arabic Dialect Identification for Travel and Twitter Text. In Proceedings of the Fourth Arabic Natural Language Processing Workshop (pp. 234-238).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The MADAR shared task on Arabic fine-grained dialect identification",
"authors": [
{
"first": "H",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bouamor, H., Hassan, S., & Habash, N. 2019. \"The MADAR shared task on Arabic fine-grained dialect identifi- cation. In Proceedings of the Fourth Arabic Natural Language Processing Workshop (pp. 199-207).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fine-grained Arabic dialect identification",
"authors": [
{
"first": "M",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1332--1344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salameh, M., Bouamor, H., & Habash, N. 2018. Fine-grained Arabic dialect identification. In Proceedings of the 27th International Conference on Computational Linguistics (pp. 1332-1344).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Shared Task on Nuanced Arabic Dialect Identification (NADI)",
"authors": [
{
"first": "",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Muhammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiyu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Houda",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdul-Mageed, Muhammad, Zhang, Chiyu, Bouamor, Houda and Habash, Nizar 2020 \"The Shared Task on Nu- anced Arabic Dialect Identification (NADI)\", Proceedings of the Fifth Arabic Natural Language Processing Workshop (WANLP2020), Barcelona, Spain",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null
}
}
}
}