{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:13.066520Z"
},
"title": "Multi-Task Sequence Prediction For Tunisian Arabizi Multi-Level Annotation",
"authors": [
{
"first": "Elisa",
"middle": [],
"last": "Gugliotta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Grenoble Alpes -Laboratoire LIG",
"location": {
"addrLine": "Getalp group. 2"
}
},
"email": "[email protected]"
},
{
"first": "Marco",
"middle": [],
"last": "Dinarelli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e9 Grenoble Alpes -Laboratoire LIG",
"location": {
"addrLine": "Getalp group. 2"
}
},
"email": "[email protected]"
},
{
"first": "Olivier",
"middle": [],
"last": "Kraif",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we propose a multi-task sequence prediction system, based on recurrent neural networks and used to annotate on multiple levels an Arabizi Tunisian corpus. The annotation performed are text classification, tokenization, PoS tagging and encoding of Tunisian Arabizi into CODA* Arabic orthography. The system is learned to predict all the annotation levels in cascade, starting from Arabizi input. We evaluate the system on the TIGER German corpus, suitably converting data to have a multi-task problem, in order to show the effectiveness of our neural architecture. We show also how we used the system in order to annotate a Tunisian Arabizi corpus, which has been afterwards manually corrected and used to further evaluate sequence models on Tunisian data. Our system is developed for the Fairseq framework, which allows for a fast and easy use for any other sequence prediction problem.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we propose a multi-task sequence prediction system, based on recurrent neural networks and used to annotate on multiple levels an Arabizi Tunisian corpus. The annotation performed are text classification, tokenization, PoS tagging and encoding of Tunisian Arabizi into CODA* Arabic orthography. The system is learned to predict all the annotation levels in cascade, starting from Arabizi input. We evaluate the system on the TIGER German corpus, suitably converting data to have a multi-task problem, in order to show the effectiveness of our neural architecture. We show also how we used the system in order to annotate a Tunisian Arabizi corpus, which has been afterwards manually corrected and used to further evaluate sequence models on Tunisian data. Our system is developed for the Fairseq framework, which allows for a fast and easy use for any other sequence prediction problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the last decade neural networks became the state-of-the-art models in most NLP problems. Sequenceto-sequence models Vaswani et al., 2017) , built on top of recurrent (Hochreiter and Schmidhuber, 1997; , convolutional (Gehring et al., 2017; Wu et al., 2019) or attentional (Bahdanau et al., 2014; Vaswani et al., 2017) modules, and structured in encoder-decoder architectures, are currently the most effective models for NLP problems. Neural networks have been used also for multi-task learning since early in their diffusion (Collobert and Weston, 2007; Collobert and Weston, 2008; Collobert et al., 2011) .",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF55"
},
{
"start": 169,
"end": 203,
"text": "(Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF26"
},
{
"start": 220,
"end": 242,
"text": "(Gehring et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 243,
"end": 259,
"text": "Wu et al., 2019)",
"ref_id": "BIBREF57"
},
{
"start": 275,
"end": 298,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF0"
},
{
"start": 299,
"end": 320,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF55"
},
{
"start": 528,
"end": 556,
"text": "(Collobert and Weston, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 557,
"end": 584,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF12"
},
{
"start": 585,
"end": 608,
"text": "Collobert et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a semitic language, Arabic has a highly inflectional and derivational morphology, which makes Arabic processing an engaging challenge. This morphological complexity has traditionally been handled through morphological analysers, such as BAMA (Buckwalter, 2004) , which has been used by the Linguistic Data Consortium (LDC) to develop the Penn Arabic Treebank (PATB) (Maamouri et al., 2004) . Recently, the number of NLP contributions to morphological analysis, disambiguation, Part-of-Speech (PoS) tagging and lemmatization has increased substantially, for both Modern Standard and Dialectal Arabic (MSA and DA, respectively). Multitask learning was proved to be an effective way to process Arabic morphology for MSA fine-grained PoS tagging (Inoue et al., 2017) , as well as for DA (Zalmout and Habash, 2019) . Concerning NLP applied to DA, it is possible to observe two main macro-strategies aimed at remedying the lack of data for DA: 1. MSA systems adaptation to DA processing, like (David et al., 2006) who exploited the Penn Arabic Treebank (PATB) (Maamouri et al., 2004) and used explicit knowledge about the relation between MSA and Levantine Arabic. Instead, (Duh and Kirchhoff, 2005) built a PoS tagger for Egyptian through a minimally supervised approach by leveraging the CallHome Egyptian Colloquial Arabic corpus (ECA). 2. The constitution of new resources not based on MSA-DA relations, in particular dialectal corpora, such as the Fisher Levantine Arabic Conversational Telephone Speech (Maamouri et al., 2007) . 1 This second strategy has been followed also collecting more ad-hoc resources. presented the first parallel DA corpus, collecting the dialects of 25 Arab cities, including the Tunisian dialects of Tunis and Sfax. The MADAR corpus has been created by translating selected sentences from the Basic Traveling Expression Corpus (BTEC) (Takezawa et al., 2007) . Regarding Tunisian Dialect (TD), the resource constitution strategy has been instantiated as MSA resource adaptation to the DA, e.g. building lexicons (Boujelbane et al., 2013) , PoS-taggers (Boujelbane et al., 2014; Hamdi et al., 2015) , morphological analysers (Zribi et al., 2013) or morphological systems to disambiguate annotated transcriptions (Zribi et al., 2017) . Considering the lack of freely available resources, we opted for an approach similar to the one used in Curras Palestinian corpus collection (Jarrar et al., 2017) , which exploits MADAMIRA tools (Pasha et al., 2014) , (cf. section 4.1).",
"cite_spans": [
{
"start": 245,
"end": 263,
"text": "(Buckwalter, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 369,
"end": 392,
"text": "(Maamouri et al., 2004)",
"ref_id": "BIBREF34"
},
{
"start": 745,
"end": 765,
"text": "(Inoue et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 786,
"end": 812,
"text": "(Zalmout and Habash, 2019)",
"ref_id": "BIBREF62"
},
{
"start": 990,
"end": 1010,
"text": "(David et al., 2006)",
"ref_id": "BIBREF15"
},
{
"start": 1057,
"end": 1080,
"text": "(Maamouri et al., 2004)",
"ref_id": "BIBREF34"
},
{
"start": 1171,
"end": 1196,
"text": "(Duh and Kirchhoff, 2005)",
"ref_id": "BIBREF17"
},
{
"start": 1506,
"end": 1529,
"text": "(Maamouri et al., 2007)",
"ref_id": "BIBREF35"
},
{
"start": 1532,
"end": 1533,
"text": "1",
"ref_id": null
},
{
"start": 1864,
"end": 1887,
"text": "(Takezawa et al., 2007)",
"ref_id": "BIBREF52"
},
{
"start": 2041,
"end": 2066,
"text": "(Boujelbane et al., 2013)",
"ref_id": "BIBREF5"
},
{
"start": 2081,
"end": 2106,
"text": "(Boujelbane et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 2107,
"end": 2126,
"text": "Hamdi et al., 2015)",
"ref_id": "BIBREF25"
},
{
"start": 2153,
"end": 2173,
"text": "(Zribi et al., 2013)",
"ref_id": "BIBREF63"
},
{
"start": 2240,
"end": 2260,
"text": "(Zribi et al., 2017)",
"ref_id": "BIBREF64"
},
{
"start": 2404,
"end": 2425,
"text": "(Jarrar et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 2458,
"end": 2478,
"text": "(Pasha et al., 2014)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The development of informal online communication provided a solution to most of the data availability problems, making accessible to the scientific community a large amount of texts, both written and oral. Concerning texts written in DA, it is possible to find two main writing systems: Arabic and Latin scripts. With regard to the second one, letters are used together with digits for the encoding of those Arabic letters without correspondence in the Roman alphabet. This system is already well known as Arabizi, or Arabish for non-Arabic speakers. Most of the work developed on Arabish focus on language identification (Darwish, 2014) and sentiment analysis (Duwairi et al., 2016; Fourati et al., 2020) . Several works are focused on the conversion of Arabish into Arabic script, as the Parallel Annotated Egyptian Arabish-Arabic Script SMS/Chat Corpus (Bies et al., 2014) . Transliteration has also been addressed for Tunisian Arabish (Masmoudi et al., 2015; Masmoudi et al., 2019; Younes et al., 2020) .",
"cite_spans": [
{
"start": 622,
"end": 637,
"text": "(Darwish, 2014)",
"ref_id": "BIBREF14"
},
{
"start": 661,
"end": 683,
"text": "(Duwairi et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 684,
"end": 705,
"text": "Fourati et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 856,
"end": 875,
"text": "(Bies et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 939,
"end": 962,
"text": "(Masmoudi et al., 2015;",
"ref_id": "BIBREF37"
},
{
"start": 963,
"end": 985,
"text": "Masmoudi et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 986,
"end": 1006,
"text": "Younes et al., 2020)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose a multi-task sequence prediction system based on recurrent neural networks, that we used to annotate at multiple levels the Tunisian Arabish Corpus (TArC) (Gugliotta and Dinarelli, 2020) . The annotation levels include tokenization, Part-of-Speech (PoS) tagging and Tunisian Arabish encoding into Arabic script. The system is learned to predict all the annotation levels in cascade, starting from Arabish input. We evaluate the system on the TIGER German corpus (Brants et al., 2004) in order to show the effectiveness of our neural architecture. While the purpose of this evaluation is not to improve state-of-the-art on this task, our results are comparable and sometimes better than the best published models. We show also how we used the system in order to annotate TArC, which has been afterwards manually corrected and used to further evaluate sequence models on Tunisian data. Our system is developped for Fairseq 2 (Ott et al., 2019) , it can therefore be used for any problem involving sequence prediction. 3 In the remainder of the paper we describe the TArC corpus, that we annotated with multi-level information, and we used to evaluate our neural system (in section 2). In section 3 we describe our multi-task neural architecture for multi-level annotation, in section 4 we describe the TIGER corpus, the experimental settings, and all the results obtained with our system, on both TIGER and TArC corpora. We conclude the paper in section 5.",
"cite_spans": [
{
"start": 180,
"end": 211,
"text": "(Gugliotta and Dinarelli, 2020)",
"ref_id": "BIBREF22"
},
{
"start": 487,
"end": 508,
"text": "(Brants et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 948,
"end": 966,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 1041,
"end": 1042,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The corpus used in this paper is the Tunisian Arabish Corpus (TArC) (Gugliotta and Dinarelli, 2020) , the result of a multidisciplinary work with a hybrid approach based on: 1. dialectological research questions; 2. corpus linguistics standards and 3. deep learning techniques. TArC has been conceived with the aim to extend the dialectological investigation to the web, not only considering it as a new resource for linguistic analyses, but mainly because the object of TArC is a Computer Mediated Communication (CMC) writing system.",
"cite_spans": [
{
"start": 68,
"end": 99,
"text": "(Gugliotta and Dinarelli, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tunisian Arabish Multi-Level Annotated Corpus",
"sec_num": "2"
},
{
"text": "The gathering of CMC corpora for linguistic study purposes is a long-standing practice: as early as the 1990s, in order to study linguistic and communicational aspects, researchers began to collect corpora from mailing lists, newsgroups, electronic conferences or chat rooms (Yates, 1996; Todla, 1999; Berjaoui, 2001; Feldweg et al., 1995) . Nowadays, the study of CMCs is a research domain it-self, crossing various disciplines such as sociology and linguistics. The linguistic questions related to CMCcorpora may for example concern paraverbal phenomena and the expression of emotions (Riordan and Kreuz, 2010; Tantawi and Rosson, 2019) , politeness formulas and the degree of message formality (Brysbaert and Lahousse, 2019) , the effects of orality in written communication (Soffer, 2010) , the role of code-mixing and code-switching in mediated discourse (Morel and Doehler, 2013; Mave et al., 2018) , their graphic and orthographic characteristics (Sullivan, 2017 ) (concerning Arabic). Lastly, a lot of research deals currently with the automatic processing of such corpora (Lopez et al., 2018; Panckhurst, 2017) .",
"cite_spans": [
{
"start": 275,
"end": 288,
"text": "(Yates, 1996;",
"ref_id": "BIBREF59"
},
{
"start": 289,
"end": 301,
"text": "Todla, 1999;",
"ref_id": "BIBREF54"
},
{
"start": 302,
"end": 317,
"text": "Berjaoui, 2001;",
"ref_id": "BIBREF1"
},
{
"start": 318,
"end": 339,
"text": "Feldweg et al., 1995)",
"ref_id": "BIBREF19"
},
{
"start": 587,
"end": 612,
"text": "(Riordan and Kreuz, 2010;",
"ref_id": "BIBREF46"
},
{
"start": 613,
"end": 638,
"text": "Tantawi and Rosson, 2019)",
"ref_id": "BIBREF53"
},
{
"start": 697,
"end": 727,
"text": "(Brysbaert and Lahousse, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 778,
"end": 792,
"text": "(Soffer, 2010)",
"ref_id": "BIBREF47"
},
{
"start": 860,
"end": 885,
"text": "(Morel and Doehler, 2013;",
"ref_id": "BIBREF41"
},
{
"start": 886,
"end": 904,
"text": "Mave et al., 2018)",
"ref_id": "BIBREF39"
},
{
"start": 954,
"end": 969,
"text": "(Sullivan, 2017",
"ref_id": "BIBREF49"
},
{
"start": 1081,
"end": 1101,
"text": "(Lopez et al., 2018;",
"ref_id": "BIBREF32"
},
{
"start": 1102,
"end": 1119,
"text": "Panckhurst, 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tunisian Arabish Multi-Level Annotated Corpus",
"sec_num": "2"
},
{
"text": "Among the purposes of dialectology there is the dialect collection and description with traditional approaches: fieldwork, oral text collection and transcription, glossary building. We observed that in the case of Arabic varieties the descriptive landscape is made of multiple studies on single phenomena. For this reason, we developed a resource inspired by dialectological investigation, which borrows the principles of corpus linguistics in order to guarantee representativeness, accessibility, balance and authenticity of the linguistic data (Szmrecsanyi and Anderwald, 2018; Wynne, 2005) . The data gathered in TArC, together with various metadata, takes a snapshot of Tunisian Arabish writing and its evolution over the last ten years. TArC is built selecting data with the following criteria: 1. text mode: informal writing; 2. text genres: forum, blog, social networks, rap lyrics; 3. domain: CMC; 4. language: Tunisian; 5. location; 6. publication date. The last two items were registered via metadata extraction (publication date, user's age, gender and provenience).",
"cite_spans": [
{
"start": 546,
"end": 579,
"text": "(Szmrecsanyi and Anderwald, 2018;",
"ref_id": "BIBREF51"
},
{
"start": 580,
"end": 592,
"text": "Wynne, 2005)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tunisian Arabish Multi-Level Annotated Corpus",
"sec_num": "2"
},
{
"text": "The building process automation overcomes the observer's paradox problem (Labov, 1972) , an issue much discussed in dialectology (Boberg et al., 2018) . It also allows the reproducibility of the work, as well as the quantitative extension of an open corpus (such as TArC), which is normally difficult to ensure by dialectological research. TArC collection has therefore been enhanced thanks to the multi-task architecture, used for a semi-automatic annotation (cf. section 3) to get as close as possible to a consistent linguistic annotation (Wynne, 2005) . The automatically generated annotations were post-edited by a linguist qualified in Arabic language and Tunisian variety, whose work was occasionally verified by native speakers. 4 Such annotation work complies with both the applicative and the analytical purposes of a corpus. The former concerns the generation of NLP tools for the Tunisian Arabish processing. The latter is realised through the multi-functional annotation levels of TArC, which allow global and systematic studies of Tunisian variety and its Arabish encoding. This way, TArC usefulness returns to the dialectological area, the field in which the preliminary research questions were addressed.",
"cite_spans": [
{
"start": 73,
"end": 86,
"text": "(Labov, 1972)",
"ref_id": "BIBREF30"
},
{
"start": 129,
"end": 150,
"text": "(Boberg et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 542,
"end": 555,
"text": "(Wynne, 2005)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tunisian Arabish Multi-Level Annotated Corpus",
"sec_num": "2"
},
{
"text": "TArC has been annotated with four information levels. 1) Classification of words in three classes: arabizi, foreign and emotag. The first class is for Tunisian and MSA words, the second one is to classify non-Arabic code-mixing; the third is used for elements as smileys or emoticons. 2) Encoding in Arabic script in Conventional Orthography for Dialectal Arabic (CODA*) . 3) Tokenization, Tunisian words encoded in CODA* have been tokenized following the D3 BWFORM configuration scheme where basically all clitics are tokenized, including the article (Pasha et al., 2014) . 4) Part-of-Speech according to the PATB guidelines (Maamouri et al., 2009) . All levels have been developed following the same incremental and semi-automatic procedure described in (Gugliotta and Dinarelli, 2020) for the CODAfying stage.",
"cite_spans": [
{
"start": 552,
"end": 572,
"text": "(Pasha et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 626,
"end": 649,
"text": "(Maamouri et al., 2009)",
"ref_id": "BIBREF36"
},
{
"start": 756,
"end": 787,
"text": "(Gugliotta and Dinarelli, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tunisian Arabish Multi-Level Annotated Corpus",
"sec_num": "2"
},
{
"text": "There are several works about multi-task learning with neural networks for NLP problems (Wu and Huang, 2015; Luong et al., 2016) , inter alia. Most of the time the neural architecture factorises some parameters for information that can be shared among tasks, and then uses different modules (e.g. decoders) for each task, which are learned independently.",
"cite_spans": [
{
"start": 88,
"end": 108,
"text": "(Wu and Huang, 2015;",
"ref_id": "BIBREF56"
},
{
"start": 109,
"end": 128,
"text": "Luong et al., 2016)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Sequence Prediction System",
"sec_num": "3"
},
{
"text": "As described in section 2, our goal for Tunisian Arabish data is a multi-level annotation scheme, where the different levels are potentially related. From an NLP point of view, this relations imply that some levels of annotation may help disambiguation when annotating other levels. For instance the classification information can disambiguate annotation into CODA*, tokenization and PoS tagging. Intuitively we expected that learning tasks in chain, organised in a cascade manner in a neural network, would benefit to each other, in contrast to learning tasks individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Sequence Prediction System",
"sec_num": "3"
},
{
"text": "181\u00f4 3 Decoder 3 Decoder 2 Decoder 1 Encoder x o 1 o 2 L 1 (o 1 ,\u00f4 1 ) L 2 (o 2 ,\u00f4 2 ) L 3 (o 3 ,\u00f4 3 ) + L h E h E h E h 1 h 1 h 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Sequence Prediction System",
"sec_num": "3"
},
{
"text": "Figure 1: A high-level schema of our multi-task neural system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Sequence Prediction System",
"sec_num": "3"
},
{
"text": "We follow the intuition above and we propose a multi-task neural architecture where the different learned tasks are organised in a cascade. The input is the Arabish text. The outputs, corresponding to the tasks to be learned, are, in this order, the classification information, the conversion into CODA* orthography, the tokenization of the CODAfied tokens and the PoS tags. Outputs from previous tasks are reused by the following tasks, they are thus learned jointly and interdependently. The input is transformed into hidden context-aware representations with an encoder based on recurrent layers. The outputs are processed by different decoders, each of them taking as input the hidden state of the encoder, and the hidden state of each of the previous decoders. The output of each decoder is used to learn each task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Neural Architecture",
"sec_num": "3.1"
},
{
"text": "More formally, let the task i be represented by the model M i (x, H i ), with x the input (Arabish text representations), and H i the list of hidden states from the previous models, plus the current model's hidden state h i . Each model i generates an output\u00f4 i and a hidden state h i .\u00f4 i is the predicted output, which is used to learn the task i by computing a loss L i (o i ,\u00f4 i ) comparing\u00f4 i to the expected output o i . Internally the global model M is made of an Encoder and I decoders Decoder i , with i = 1 . . . I. The list H i includes both the encoder hidden state h E and the decoders hidden states h 1 ...h i . An high-level schema of this architecture, with the flow of information for three tasks (I = 3), is presented in figure 1. All the tasks are learned jointly by minimising the global loss L = i L i (o i ,\u00f4 i ), on top of the circled + in the schema (Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 874,
"end": 883,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-Task Neural Architecture",
"sec_num": "3.1"
},
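{
"text": "A minimal PyTorch sketch of this cascade may help fix ideas (an illustration only, not the authors' Fairseq implementation: module and variable names are hypothetical, and the attention mechanisms detailed below are simplified here to a concatenation of hidden states). Each decoder reads the encoder states together with the states of all previous decoders, and the per-task cross-entropy losses are summed into the global loss L:\nimport torch\nimport torch.nn as nn\n\nclass CascadeMultiTask(nn.Module):\n    # One encoder and I decoders; decoder i consumes the encoder states\n    # concatenated with the states of decoders 1..i-1 (the paper uses i\n    # attention mechanisms instead of plain concatenation).\n    def __init__(self, in_vocab, out_vocabs, dim=256):\n        super().__init__()\n        self.embed = nn.Embedding(in_vocab, dim)\n        self.encoder = nn.LSTM(dim, dim, batch_first=True)\n        self.decoders = nn.ModuleList(\n            nn.LSTM(dim * (i + 1), dim, batch_first=True)\n            for i, _ in enumerate(out_vocabs))\n        self.heads = nn.ModuleList(nn.Linear(dim, v) for v in out_vocabs)\n\n    def forward(self, x):\n        h_enc, _ = self.encoder(self.embed(x))       # (B, T, dim)\n        states, logits = [h_enc], []\n        for dec, head in zip(self.decoders, self.heads):\n            h_i, _ = dec(torch.cat(states, dim=-1))  # reuse previous states\n            states.append(h_i)\n            logits.append(head(h_i))\n        return logits\n\n# Joint learning: the global loss is the sum of the per-task losses.\nmodel = CascadeMultiTask(in_vocab=100, out_vocabs=[4, 50, 200])\nce = nn.CrossEntropyLoss()\nx = torch.randint(0, 100, (8, 20))                   # toy character batch\ngolds = [torch.randint(0, v, (8, 20)) for v in (4, 50, 200)]\nloss = sum(ce(l.transpose(1, 2), g) for l, g in zip(model(x), golds))\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Neural Architecture",
"sec_num": "3.1"
},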
{
"text": "Like in the original sequence-to-sequence model based on an attention mechanism (Bahdanau et al., 2014) , each decoder attends to encoder and decoder's hidden state information with an attention mechanism. The decoder Decoder i has therefore i different attention mechanisms, one for attending encoder's information, and one for each previous decoder's hidden state. In order to evaluate our multi-task system, we used two different corpora. One is TArC, described in section 2, the other is the German TIGER corpus (Brants et al., 2004) . TArC corpus has been initially collected from forums, social media and blogs, for a total of 32 062 words, and recently extended to 43 313 words by adding the text type of rap lyrics. In order to better organise the automatic annotation and the manual-correction stages, we split the initial corpus into blocks of roughly 6 500 tokens. Statistics of TArC are presented in table 2. The initial model, used to bootstrap the corpus annotation, has been trained using 2000 sentences from the Tunisian MADAR corpus. MADAR data are well-formed texts encoded in Arabic script, this avoid any code-switching and spelling inconsistency. We processed MADAR data using the MADAMIRA tool (Pasha et al., 2014) . 6 , producing tokenization and PoS tags. After a manual correction, we obtained the first TArC training block for starting the annotation procedure.",
"cite_spans": [
{
"start": 80,
"end": 103,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 516,
"end": 537,
"text": "(Brants et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 1216,
"end": 1236,
"text": "(Pasha et al., 2014)",
"ref_id": "BIBREF45"
},
{
"start": 1239,
"end": 1240,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Neural Architecture",
"sec_num": "3.1"
},
{
"text": "The German corpus TIGER (Brants et al., 2004) is annotated with rich morpho-syntactic information. These include PoS tags, but also gender, number, cases, and other inflection information, as well as conjugation information for verbs. The combination of all these components constitutes the output labels. We used the same data split used in (Lavergne and Yvon, 2017) . Statistics of this corpus are given in table 1.",
"cite_spans": [
{
"start": 24,
"end": 45,
"text": "(Brants et al., 2004)",
"ref_id": "BIBREF7"
},
{
"start": 342,
"end": 367,
"text": "(Lavergne and Yvon, 2017)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Neural Architecture",
"sec_num": "3.1"
},
{
"text": "We first describe some data pre-processing performed on both corpora, in order to better exploit the small amount of data in TArC, on one side; on the other side, we performed a similar pre-processing on the TIGER corpus, in order to have similar experimental settings and therefore be able to validate the multi-task model with results comparable with the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
{
"text": "The TIGER corpus has been used as a benchmark for our multi-task system, before applying it to TArC. Since TIGER data are not natively multi-task, we re-organised TIGER labels in two parts: the first consisting of the PoS core-tag only, the second consisting of the whole label. For example, given the label ADJA.PoS.Nom.Sg.Masc 7 , we take the PoS tag ADJA as a first level of information, and the whole label as a second level. This simple pre-processing allows to have two tasks to learn with our system: a coarse and a fine-grained morpho-syntactic tagging, where the second task, more complex, can be learned using also the information of the first, which is simpler.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
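{
"text": "As a concrete illustration of this split (a hypothetical helper, not the authors' code), the coarse task keeps only the core PoS tag while the fine-grained task keeps the whole label:\ndef split_tiger_label(label):\n    # e.g. 'ADJA.PoS.Nom.Sg.Masc' -> ('ADJA', 'ADJA.PoS.Nom.Sg.Masc')\n    core = label.split('.')[0]     # first level: core PoS tag only\n    return core, label             # second level: the whole label\n\nassert split_tiger_label('ADJA.PoS.Nom.Sg.Masc') == ('ADJA', 'ADJA.PoS.Nom.Sg.Masc')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},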
{
"text": "In order to reduce data sparsity in TArC, we performed sequence prediction at each annotation level using sub-token units, except for the classification level. Sub-token units are characters for Arabish, CODAfied tokens and tokenization levels. For the PoS tags we performed an ad-hoc split into coarser units. The PoS tags annotated in TArC follow the LDC guidelines described in (Maamouri et al., 2009) . 8 The tags contain rich information, like for the TIGER labels, describing the morphological structure of tokens. For instance the tag PV-PVSUFF SUBJ:3MS+[PREP+PRON 2S]PVSUFF IO:2S, contains information about a verb with inflectional morphology (PV-PVSUFF SUBJ:3MS), plus information on a pre-pronominal enclitic group attached to the verb (PREP+PRON 2S). This group contains also an indirect object in suffix form (PVSUFF IO:2S). Each of these 3 macro components contains person features, 3MS for the verb, 2S for the enclitic pronoun, and 2S for the indirect object suffix.",
"cite_spans": [
{
"start": 381,
"end": 404,
"text": "(Maamouri et al., 2009)",
"ref_id": "BIBREF36"
},
{
"start": 407,
"end": 408,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
{
"text": "Quite intuitively, such complex tags, taken as a whole, are very rare in the data. Indeed more than half of them occur only once in our data. 9 However their components are quite common (e.g. PV, PVSUFF, SUBJ, 3, M, S and so on). For this reason we split the tag above into a sequence of components like: PV, PVSUFF, SUBJ, :3, @M, @S, +, [, PREP, +, PRON, 2, @S, ], PVSUFF, IO, :2, @S. Symbols like @ are added for the post-processing phase to correctly reconstruct the whole tag. For the same reason, each time a tag is split in this way, the components are wrapped with start and end markers \u00a1SOT\u00bf, \u00a1EOT\u00bf (for Start and End Of Token). A whole tag sequence, associated to an input sentence, is made by concatenating the sequences resulting from the split of each tag. The same start and end markers are used also for the other annotation levels, which are split into single characters, so that the model can learn itself that each token in the input sequence corresponds to one token in all the annotation levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
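{
"text": "A short sketch of the marker wrapping just described (hypothetical helper names; the component-splitting rules themselves are richer than shown here): each per-token component sequence is wrapped with <SOT> and <EOT>, and the wrapped sequences are concatenated into the sentence-level target:\nSOT, EOT = '<SOT>', '<EOT>'    # start/end-of-token markers\n\ndef wrap(components):\n    # e.g. ['PV', 'PVSUFF', 'SUBJ', ':3', '@M', '@S', ...]\n    return [SOT] + components + [EOT]\n\ndef sentence_target(per_token_components):\n    # One <SOT>...<EOT> span per input token, concatenated in order,\n    # so the model can learn the token-level alignment by itself.\n    target = []\n    for comps in per_token_components:\n        target.extend(wrap(comps))\n    return target",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},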
{
"text": "In order to have the same settings for TArC and TIGER data, we split the input tokens in the TIGER data into characters, adding the start and end markers. The labels are left unchanged, beyond artificially creating 2 label levels to test our multi-task system (we actually performed experiments also splitting TIGER labels into components, cf. table 3 and 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
{
"text": "The TArC classification level was added first. This was done using a character-level model pre-trained exploiting: i) the Hussem Ben Belgacem's French dictionary, consisting of 336,531 tokens. 10 , and ii) a Tunisian Arabish dictionary of 100,936 tokens, resulting from the merge of the TUNIZI Sentiment Analysis Tunisian Arabic Dataset (Fourati et al., 2020) 11 and the TLD dataset (Younes et al., 2015) .",
"cite_spans": [
{
"start": 337,
"end": 359,
"text": "(Fourati et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 383,
"end": 404,
"text": "(Younes et al., 2015)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
{
"text": "In order to obtain an emotag dictonary, we extracted all the smileys and emoticons from the Arabish dictionary above. Once the model was pre-trained on the above data, it was possible to apply also to this annotation level the semi-automatic and incremental annotation procedure used in (Gugliotta and Dinarelli, 2020) . At the end of the procedure, the model reached 97% of accuracy. All data were manually checked and corrected.",
"cite_spans": [
{
"start": 287,
"end": 318,
"text": "(Gugliotta and Dinarelli, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-processing",
"sec_num": "4.2.1"
},
{
"text": "Concerning model settings, we note that encoder and decoders in our multi-task neural models are all LSTM (Hochreiter and Schmidhuber, 1997) . 12 An optimisation of hyper-parameters like learning rate, dropout ratio (Srivastava et al., 2014) , layer size, etc. has been performed on development data of TArC. For experiments on TIGER the same hyper-parameters have been used. The goal here is not to obtain the best absolute results on this task, it is to show that our system is competitive enough to be used safely on unpublished data. Such hyperparameter optimal values resulted in: 5E \u22124 for learning rate, 0.5 for dropout ratio (at all layers, including embeddings), 5.0 for gradient clipping (Pascanu et al., 2012) , 256 for both embeddings and hidden layer size (for all layers). We share all embeddings, at input and output layers, and in encoder and decoders.",
"cite_spans": [
{
"start": 106,
"end": 140,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF26"
},
{
"start": 143,
"end": 145,
"text": "12",
"ref_id": null
},
{
"start": 216,
"end": 241,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF48"
},
{
"start": 698,
"end": 720,
"text": "(Pascanu et al., 2012)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2.2"
},
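{
"text": "These values translate directly into standard PyTorch calls; the following is a sketch under that assumption, not the authors' Fairseq training script:\nimport torch\nimport torch.nn as nn\n\nmodel = nn.LSTM(input_size=256, hidden_size=256, num_layers=3)  # 256-dim layers\ndropout = nn.Dropout(p=0.5)                        # dropout ratio 0.5\noptimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # learning rate 5e-4\n\nx = torch.randn(7, 2, 256)                         # toy (seq, batch, dim) input\nout, _ = model(dropout(x))\nloss = out.pow(2).mean()                           # toy loss for illustration\nloss.backward()\ntorch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # clip at 5.0\noptimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2.2"
},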
{
"text": "The loss functions used in all our experiments, for all the decoder outputs (see L, L 1 , etc. in section 3.1), are the cross-entropy loss. All models are learned with an ADAM optimiser (Kingma and Ba, 2014) with default parameters. Model's outputs are evaluated with the accuracy, after applying postprocessing to reconstruct original tokens. This means that if a single character or component in a token is wrong, the token is considered wrong in the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2.2"
},
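{
"text": "A sketch of this post-processing and scoring (hypothetical helpers, assuming the <SOT>/<EOT> markers of section 4.2.1): predicted units are grouped back into whole tokens, and a token counts as correct only if every unit in it is correct:\ndef reconstruct(units):\n    # Group a flat sequence like ['<SOT>', 'P', 'V', '<EOT>', ...] back\n    # into whole tokens delimited by the start/end markers.\n    tokens, current = [], None\n    for u in units:\n        if u == '<SOT>':\n            current = []\n        elif u == '<EOT>':\n            tokens.append(''.join(current))\n            current = None\n        elif current is not None:\n            current.append(u)\n    return tokens\n\ndef token_accuracy(pred_units, gold_units):\n    pred, gold = reconstruct(pred_units), reconstruct(gold_units)\n    correct = sum(p == g for p, g in zip(pred, gold))\n    return correct / max(len(gold), 1)   # one wrong unit -> wrong token",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Settings",
"sec_num": "4.2.2"
},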
{
"text": "We present first results obtained on the corpus TIGER. We remind that we artificially performed multitasking on TIGER by isolating the core-tag from its features for each morpho-syntactic tag, and using the core-tag and the whole one as separated output to be predicted (see section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The first set of experiments was performed to choose the optimal number of layers in each decoder of our multi-task system. Results are shown in table 3, the two tasks are PoS, for core-tags only, and MORPHO for core+feature tags. The results of both tasks show that the model performs at best with 3 layers in each decoder, though the gain with respect to the other choices is small. Despite the gain is small, we observed consistently the best results, both in terms of accuracy and loss values, and on both corpora, with 3 layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In the table 3 we show also the comparison of our results with the literature. To the best of our knwoledge the best results on the corpus TIGER have been published in (Dinarelli and Grobol, 2019) , which improved previous state-of-the-art of (Lavergne and Yvon, 2017) . Our results are comparable with the state-of-the-art, even slightly better on morpho-syntactic tagging, Dev data. We would like to insist on the fact that experiments on TIGER have been performed not with the goal to improve the stateof-the-art, but only for validating our multi-task system for performing multi-level annotation of TArC as multi-tasking. In this respect, the model used in (Dinarelli and Grobol, 2019) is quite sophisticated, it performs sequence labelling exploiting both token and character information on the input side, and performing bidirectional decoding on the output side. Our model performs decoding at character-level only, though using several layers over 2 tasks. Beyond this comparison, we consider our results on the TIGER corpus satisfactory for a multi-task setting.",
"cite_spans": [
{
"start": 168,
"end": 196,
"text": "(Dinarelli and Grobol, 2019)",
"ref_id": "BIBREF16"
},
{
"start": 243,
"end": 268,
"text": "(Lavergne and Yvon, 2017)",
"ref_id": "BIBREF31"
},
{
"start": 662,
"end": 690,
"text": "(Dinarelli and Grobol, 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The last 3 lines of table 3 and 4 show results on TIGER Dev and Test data, respectively. In these experiments we compare models learned for decoding label components, instead of whole labels, using character-level input (Char decoding), models learned with whole tokens on input and output side (Token decoding), and models combining both information, but learned from whole-token tag sequences (Token+char decoding). As we can see, Char decoding setting is by far the most effective. Combining token and character level information largely improves the Token decoding setting, but it is still much less effective than the Char decoding setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "It could be interesting to observe which gain can be achieved with a multi-task model, e.g. on PoS tagging, with respect to a mono-task sequence-to-sequence model on the same task. In order to show such gain, we performed an experiment of PoS tagging with our multi-task system in a mono-task setting, with the same experimental settings. We compare this result with the multi-task counter-part in table 3. The two results are shown in table 5. As we can see, a substantial gain can be achieved performing PoS tagging as part of a multi-task setting. Even if, when learned for multi-tasking, PoS tagging is the first task and so it cannot exploit information coming from preceding tasks, the gain is given by the backpropagation of the morpho-syntactic tagging error through the whole network. Once again, results are obtained decoding at character level only for keeping the same experimental settings as for the TArC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Experiments on TArC are divided in two phases, corresponding to two annotation phases: the first concerns the Arabish conversion into Arabic script. The second phase consists in classification of each token in arabizi, foreign or emotag classes, together with tokenization of Arabic-encoded tokens, and PoS tagging. Each phase was performed with a semi-automatic procedure, where a model was trained on a first block of data. Such model was used to annotate another block of data. This was then manually corrected and added to the training data. A new model was trained and used to annotate a new block. This procedure was iterated up to the annotation of the full corpus (32 062 tokens).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
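{
"text": "The iterative procedure can be summarised with the following sketch (all function bodies are hypothetical stand-ins, not the released tooling):\ndef train_multitask(train_set):\n    # Stand-in for training the multi-task model of section 3.\n    return lambda block: [(tok, 'DRAFT_ANNOTATION') for tok in block]\n\ndef manual_correction(draft):\n    # Stand-in for the linguist's post-editing pass.\n    return draft\n\ndef iterative_annotation(blocks, bootstrap_data):\n    train_set = list(bootstrap_data)           # step 0: MADAR-derived data\n    for block in blocks:                       # blocks of ~6,500 tokens\n        model = train_multitask(train_set)     # retrain on all corrected data\n        draft = model(block)                   # automatic annotation\n        train_set += manual_correction(draft)  # corrected block joins training\n    return train_set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},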
{
"text": "For the first phase of the annotation (Arabic script encoding only) we used the mono-task sequence- to-sequence model of (Dinarelli and Grobol, 2019) . Indeed the Arabic script encoding of tokens is the most costly and difficult phase, so we thought it could be easier to have it first, annotating the other levels afterwards. The Arabic script encoding accuracy of the model was below 70% for the first block. This still allowed the annotator to correct the block 3 times faster than if the block was annotated from scratch. For the following data blocks, accuracy of the model increased progressively, up to roughly 76% for the fourth block. At this point we started the second phase, which included the annotation of the fifth and last block with encoding conversion. In the second phase, we repeated the iterative semi-automatic annotation procedure of the first phase for the classification, tokenization and PoS tagging levels. These were performed with the multi-task system. The first model for bootstrapping the annotation procedure was trained on a part of the MADAR data consisting of roughly 12,000 tokens ( 2,000 sentences). These data were annotated with tokenization and PoS information using MADAMIRA as explained in section 4.1, and then manually corrected. The classification information was added manually, which was trivial since all tokens belong to the arabizi class in this data. The model trained on MADAR data has been used to annotate the first block of TArC, which is the step 0 of the iterative procedure. In the following 3 iterations, the MADAR data were used together with the TArC blocks already manually corrected. The input for these 3 steps was thus the CODAfied Tunisian. Exploiting MADAR was only possible up to the 4th block, since the blocks after the fourth were not already provided with CODAfied tokens (see the 1st annotation phase above). However, we planned to add all the annotation levels to the 5th block, including Corpus: TIGER Dev data PoS tagging results Model LSTM",
"cite_spans": [
{
"start": 121,
"end": 149,
"text": "(Dinarelli and Grobol, 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Mono-task 95.66% Multi-task 98.30% Table 5 : Comparison of results of PoS tags decoding from source characters, on the TIGER development data with mono-task and multi-task models. the encoding in Arabic script level, with the multi-task system. The 5th block was thus annotated using only TArC four blocks in Arabish as training data. At each iteration step, the Arabish data were split randomly into train and validation (dev) sets, so that the dev set is representative of the whole data at each iteration. 13 We report the results on the 3 tasks of the first 4 steps, where the input was CODAfied Tunisian, and the results on the 4 tasks of the following steps, where the input was Arabish, in table 6. The tasks are indicated in the table with Class for classification, Arabic for Arabic script encoding, Token for tokenization, and PoS for PoS tagging, respectively. In the column \"Train. tokens\" of the table we report the number of training tokens for each step. Between parenthesis, when this is meaningful, we also report the number of training tokens coming from TArC (the remainder is from the MADAR corpus).",
"cite_spans": [
{
"start": 509,
"end": 511,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In table 6, Step0 is the bootstrapping step, where the model is trained on MADAR data only. Results are on a randomly chosen dev data set consisting of 15% of the whole data set. Starting from Step1, the dev data set is a 15% random split of the TArC data only, as we are interested in the effectiveness of our multi-task system on spontaneous and informal writing data for annotation purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Results in table 6 prove that the multi-task system is effective also on TArC, especially taking into account the small amount of data available for training the models. The classification task (Class) is quite well solved, as at best the model, when evaluated on TArC text, is over 97% of accuracy. Results for tokenization (Token) are also satisfactory, in particular at step 3, where the model is over 91% of accuracy. Results on PoS tagging (PoS) are quite lower with respect to the other tasks, but we note that this task is the most difficult, among the 3 of the first 4 steps. Indeed, classification only consists in associating to each token one of the 3 classes arabizi, foreign or emotag. The tokenization task consists in splitting a CODAfied token into its components with some orthographical transformations, input and output script is thus the same, the model needs to learn the splitting. In contrast, PoS tagging is a conversion from Arabic characters into PoS components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "As we have explained in section 4.2, PoS tags are quite complex, and splitting them into components allows to mitigate the problem of data sparsity. Moreover, accuracy is computed after post-processing, that is after PoS tags have been reconstructed from components. A single mistake on a component results in a wrong tag, affecting the accuracy. Taking all of that into account, we consider the best PoS tagging result of 76.38% of accuracy as an acceptable result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In table 6 we observe a substantial drop of results from step 0 (where the model is evaluated on the MADAR dev set) to step 1 (where the model is evaluated on the Arabish dev set only). 14 This is not surprising, as MADAR is made of morphosyntactically well-formed text, while TArC is made of CMC spontaneous texts. This behaviour is useful to explain the difference of results between step 3 and step 4 and 5. Beyond that, the increased amount of TArC data with respect to MADAR data through steps 1 to 3, allows to improve results obtained on the MADAR data (Step0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Results in table 6 drop again between steps 3 and 4. We remind that at step 3, data blocks from 1 to 3, plus the MADAR data, are used for training the model, a 15% split of the TArC data are used for validation, and the model is used to annotate the fourth data block. At step 4 only TArC data are used for training, again a 15% split is used for validation, and the fifth block is annotated. At this step an Table 6 : Summary of results, in terms of accuracy, obtained on the TArC data at the different steps of the iterative procedure for semi-automatic annotation of the corpus. The tasks are indicated with Class for classification, Arabic for Arabic script encoding, Token for tokenization, and PoS for PoS tagging.",
"cite_spans": [],
"ref_spans": [
{
"start": 409,
"end": 416,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "As we can see in table 6, all results except for classification, substantially dropped. This is due to having an additional task with respect to the previous steps, and thus an additional decoder in the system, and to the use of a smaller training set. We note however this drop is similar to the one between steps 0 and 1. We conclude thus that MADAR well-formed texts have a positive effect on learning spontaneous Arabish text. It is interesting to observe that the drop in PoS tagging results with respect to tokenization, at steps 4 and 5, is much smaller than the drop at steps 1 and 3. This suggests to improve Arabish CODAfication results, which may be achieved by adding Arabish encoding to MADAR. Results on the step 5 are similar to step 4. This is not surprising as well, since data in the block 5 have a different style, coming from a different source (blogs). This balances the increased amount of data for training the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In order to exploit the MADAR data also at steps 4 and 5, we designed an ad-hoc parameter initialisation using the model trained at step 0. Note that such model has a different architecture as MADAR is in Arabic script, it doesn't contain Arabish. 15 Results obtained with this initialisation are reported in the last lines of table 6 marked as smart-init. As we can see, except for the classification task which is biased by the fact that in MADAR all tokens are in the arabizi class, all other task results improved with respect to step 4 and 5 without pre-initialisation.",
"cite_spans": [
{
"start": 248,
"end": 250,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We presented a multi-task sequence labeling system based on recurrent neural networks, developed for the Fairseq framework and used to annotate TArC on multiple levels. The annotation levels provided are: classification, tokenization, PoS tagging and encoding of Tunisian Arabish into Arabic script, according to CODA*. We described the annotation procedure, after showing the effectiveness of our neural architecture with an evaluation on the TIGER German corpus. As a next stage we plan to expand TArC quantitatively to improve the results and its usability in linguistics and NLP fields. Future work includes qualitative extension through the addition of further annotation levels, such as lemmatization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "These resources are not freely available. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/pytorch/fairseq 3 Our system, with data used in this paper, is available at https://gricad-gitlab.univ-grenoble-alpes.fr/dinarelm/tarc-multitask-system.The last updated version of TArC is available at https://github.com/eligugliotta/tarc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to COVID-19 lockdown it was not possible to conduct the field research scheduled for March 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that we have been testing also gating mechanisms to blend the outputs of the attention mechanisms like in(Miculicich et al., 2018), but this always gave worse results than the sum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Version used: MADAMIRA 2.0. D3 BW* schemes(Habash, 2010).7 The different pieces stand for adjective, possessive, nominative, singular and male, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In the concatenation style we used \"-\" and the square brackets, to better manage the information through our model. 9 More precisely, 423 PoS tags out of 776 in the dictionary, that is 54.9%, occur only once. 10 https://github.com/hbenbel/French-Dictionary (last access on 15/09/2020). 11 https://github.com/chaymafourati/TUNIZI-Sentiment-Analysis-Tunisian-Arabizi-Dataset (last access on 15/09/2020).12 The system is however generic, and potentially any kind of encoder and decoder available in Fairseq may be used. We are currently working on adding the use of Transformer encoder and decoders(Vaswani et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this respect, we note that data in different blocks are heterogeneous, as they are not all from the same source. Hence keeping the same dev data set for all the iterations would not be representative.14 All MADAR tokens are classified as arabizi, it is thus normal that the model gets almost perfect result in classifying it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This resulted in a quite task-specific parameter initialisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aspects of the Moroccan Arabic orthography with preliminary insights from the Moroccan computer-mediated communication",
"authors": [
{
"first": "Nasser",
"middle": [],
"last": "Berjaoui",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nasser Berjaoui. 2001. Aspects of the Moroccan Arabic orthography with preliminary insights from the Moroccan computer-mediated communication. na.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Transliteration of arabizi into arabic orthography: Developing a parallel annotated arabizi-arabic script sms/chat corpus",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Zhiyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Grimes",
"suffix": ""
},
{
"first": "Haejoong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Wright",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EMNLP 2014 workshop on Arabic natural language processing (ANLP)",
"volume": "",
"issue": "",
"pages": "93--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Zhiyi Song, Mohamed Maamouri, Stephen Grimes, Haejoong Lee, Jonathan Wright, Stephanie Strassel, Nizar Habash, Ramy Eskander, and Owen Rambow. 2014. Transliteration of arabizi into arabic orthography: Developing a parallel annotated arabizi-arabic script sms/chat corpus. In Proceedings of the EMNLP 2014 workshop on Arabic natural language processing (ANLP), pages 93-103.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The handbook of dialectology",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Boberg",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Dominic James Landon",
"middle": [],
"last": "Nerbonne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Watt",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Boberg, John A Nerbonne, and Dominic James Landon Watt. 2018. The handbook of dialectology. Wiley Online Library.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The madar arabic dialect corpus and lexicon",
"authors": [
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Abdulrahim",
"suffix": ""
},
{
"first": "Ossama",
"middle": [],
"last": "Obeid",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Fadhl",
"middle": [],
"last": "Eryani",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Erdmann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houda Bouamor, Nizar Habash, Mohammad Salameh, Wajdi Zaghouani, Owen Rambow, Dana Abdulrahim, Os- sama Obeid, Salam Khalifa, Fadhl Eryani, Alexander Erdmann, et al. 2018. The madar arabic dialect corpus and lexicon. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building bilingual lexicon to create dialect tunisian corpora and adapt language model",
"authors": [
{
"first": "Rahma",
"middle": [],
"last": "Boujelbane",
"suffix": ""
},
{
"first": "Siwar",
"middle": [],
"last": "Mariem Ellouze Khemekhem",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Benayed",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belguith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Workshop on Hybrid Approaches to Translation",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahma Boujelbane, Mariem Ellouze Khemekhem, Siwar BenAyed, and Lamia Hadrich Belguith. 2013. Building bilingual lexicon to create dialect tunisian corpora and adapt language model. In Proceedings of the Second Workshop on Hybrid Approaches to Translation, pages 88-93.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fine-grained pos tagging of spoken tunisian dialect corpora",
"authors": [
{
"first": "Rahma",
"middle": [],
"last": "Boujelbane",
"suffix": ""
},
{
"first": "Mariem",
"middle": [],
"last": "Mallek",
"suffix": ""
},
{
"first": "Mariem",
"middle": [],
"last": "Ellouze",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Belguith",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Applications of Natural Language to Data Bases/Information Systems",
"volume": "",
"issue": "",
"pages": "59--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahma Boujelbane, Mariem Mallek, Mariem Ellouze, and Lamia Hadrich Belguith. 2014. Fine-grained pos tagging of spoken tunisian dialect corpora. In International Conference on Applications of Natural Language to Data Bases/Information Systems, pages 59-62. Springer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TIGER: Linguistic interpretation of a german corpus",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Stefanie",
"middle": [],
"last": "Dipper",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Eisenberg",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Hansen-Schirra",
"suffix": ""
},
{
"first": "Esther",
"middle": [],
"last": "Konig",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Lezius",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Rohrer",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "2",
"issue": "4",
"pages": "597--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen-Schirra, Esther Konig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic interpretation of a german corpus. Research on Language and Computation, 2(4):597-620, dec.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Computer-mediated versus non-computer-mediated corpora of informal french: Differences in politeness and intensification in the expression of contrast by au contraire",
"authors": [
{
"first": "Jorina",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Lahousse",
"suffix": ""
}
],
"year": 2019,
"venue": "Social Media Corpora for the Humanities (CMC-Corpora2019)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorina Brysbaert and Karen Lahousse. 2019. Computer-mediated versus non-computer-mediated corpora of in- formal french: Differences in politeness and intensification in the expression of contrast by au contraire. Social Media Corpora for the Humanities (CMC-Corpora2019), page 48.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Buckwalter arabic morphological analyzer (bama) version 2.0. Linguistic Data Consortium (LDC)",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Buckwalter. 2004. Buckwalter arabic morphological analyzer (bama) version 2.0. Linguistic Data Consor- tium (LDC), University of Pennsylvania, Philadelphia, PA, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Aglar G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fast Semantic Extraction Using a Novel Neural Network Architecture",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "560--567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2007. Fast Semantic Extraction Using a Novel Neural Network Architecture. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 560-567, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 160-167, New York, NY, USA. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "J. Mach. Learn. Res",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12, November.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Arabizi detection and conversion to arabic",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2014,
"venue": "the Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish. 2014. Arabizi detection and conversion to arabic. In In the Arabic Natural Language Processing Workshop, EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing arabic dialects",
"authors": [
{
"first": "Chiang",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Safiullah",
"middle": [],
"last": "Shareef",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "369--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang David, Mona Diab, Nizar Habash, Owen Rambow, and Safiullah Shareef. 2006. Parsing arabic dialects. In Proceedings of EACL, pages 369-376.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hybrid neural models for sequence modelling: The best of three worlds",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Dinarelli",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Grobol",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Dinarelli and Lo\u00efc Grobol. 2019. Hybrid neural models for sequence modelling: The best of three worlds. CoRR. arXiv preprint 1909.07102.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pos tagging of dialectal arabic: a minimally supervised approach",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on computational approaches to semitic languages",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh and Katrin Kirchhoff. 2005. Pos tagging of dialectal arabic: a minimally supervised approach. In Proceedings of the acl workshop on computational approaches to semitic languages, pages 55-62.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sentiment analysis for arabizi text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rehab",
"suffix": ""
},
{
"first": "Mosab",
"middle": [],
"last": "Duwairi",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Alfaqeh",
"suffix": ""
},
{
"first": "Areen",
"middle": [],
"last": "Wardat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alrabadi",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 7th International Conference on Information and Communication Systems (ICICS)",
"volume": "",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehab M Duwairi, Mosab Alfaqeh, Mohammad Wardat, and Areen Alrabadi. 2016. Sentiment analysis for arabizi text. In 2016 7th International Conference on Information and Communication Systems (ICICS), pages 127- 132. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Zum sprachgebrauch in deutschen newsgruppen",
"authors": [
{
"first": "Helmut",
"middle": [],
"last": "Feldweg",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Kibiger",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Thielen",
"suffix": ""
}
],
"year": 1995,
"venue": "Osnabr\u00fccker Beitr\u00e4ge zur Sprachtheorie",
"volume": "50",
"issue": "",
"pages": "143--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helmut Feldweg, Ralf Kibiger, and Christine Thielen. 1995. Zum sprachgebrauch in deutschen newsgruppen. Osnabr\u00fccker Beitr\u00e4ge zur Sprachtheorie, 50:143-154.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Tunizi: a tunisian arabizi sentiment analysis dataset",
"authors": [
{
"first": "Chayma",
"middle": [],
"last": "Fourati",
"suffix": ""
},
{
"first": "Abir",
"middle": [],
"last": "Messaoudi",
"suffix": ""
},
{
"first": "Hatem",
"middle": [],
"last": "Haddad",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14303"
]
},
"num": null,
"urls": [],
"raw_text": "Chayma Fourati, Abir Messaoudi, and Hatem Haddad. 2020. Tunizi: a tunisian arabizi sentiment analysis dataset. arXiv preprint arXiv:2004.14303.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proc. of ICML.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tarc: Incrementally and semi-automatically collecting a tunisian arabish corpus",
"authors": [
{
"first": "Elisa",
"middle": [],
"last": "Gugliotta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Dinarelli",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "6279--6286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisa Gugliotta and Marco Dinarelli. 2020. Tarc: Incrementally and semi-automatically collecting a tunisian arabish corpus. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 6279-6286.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Unified guidelines and resources for arabic dialect orthography",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Fadhl",
"middle": [],
"last": "Eryani",
"suffix": ""
},
{
"first": "Salam",
"middle": [],
"last": "Khalifa",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Dana",
"middle": [],
"last": "Abdulrahim",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Erdmann",
"suffix": ""
},
{
"first": "Reem",
"middle": [],
"last": "Faraj",
"suffix": ""
},
{
"first": "Wajdi",
"middle": [],
"last": "Zaghouani",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Nasser",
"middle": [],
"last": "Zalmout",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash, Fadhl Eryani, Salam Khalifa, Owen Rambow, Dana Abdulrahim, Alexander Erdmann, Reem Faraj, Wajdi Zaghouani, Houda Bouamor, Nasser Zalmout, et al. 2018. Unified guidelines and resources for arabic dialect orthography. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Introduction to arabic natural language processing",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "3",
"issue": "1",
"pages": "1--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash. 2010. Introduction to arabic natural language processing. Synthesis Lectures on Human Language Technologies, 3(1):1-187.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Pos-tagging of tunisian dialect using standard arabic resources and tools",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Hamdi",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Nasr",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Gala",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Second Workshop on Arabic Natural Language Processing",
"volume": "",
"issue": "",
"pages": "59--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Hamdi, Alexis Nasr, Nizar Habash, and N\u00faria Gala. 2015. Pos-tagging of tunisian dialect using standard arabic resources and tools. In Proceedings of the Second Workshop on Arabic Natural Language Processing, pages 59-68. Association for Computational Linguistics (ACL), July.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735-1780, November.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Joint prediction of morphosyntactic categories for finegrained arabic part-of-speech tagging exploiting tag dictionary information",
"authors": [
{
"first": "Go",
"middle": [],
"last": "Inoue",
"suffix": ""
},
{
"first": "Hiroyuki",
"middle": [],
"last": "Shindo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "421--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Go Inoue, Hiroyuki Shindo, and Yuji Matsumoto. 2017. Joint prediction of morphosyntactic categories for fine- grained arabic part-of-speech tagging exploiting tag dictionary information. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning (CoNLL 2017), pages 421-431.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Curras: an annotated corpus for the palestinian arabic dialect",
"authors": [
{
"first": "Mustafa",
"middle": [],
"last": "Jarrar",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Faeq",
"middle": [],
"last": "Alrimawi",
"suffix": ""
},
{
"first": "Diyam",
"middle": [],
"last": "Akra",
"suffix": ""
},
{
"first": "Nasser",
"middle": [],
"last": "Zalmout",
"suffix": ""
}
],
"year": 2017,
"venue": "Language Resources and Evaluation",
"volume": "51",
"issue": "3",
"pages": "745--775",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mustafa Jarrar, Nizar Habash, Faeq Alrimawi, Diyam Akra, and Nasser Zalmout. 2017. Curras: an annotated corpus for the palestinian arabic dialect. Language Resources and Evaluation, 51(3):745-775.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "the 3rd International Conference for Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. cite arxiv:1412.6980Comment: Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sociolinguistic Patterns. Conduct and Communication",
"authors": [
{
"first": "William",
"middle": [],
"last": "Labov",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Labov. 1972. Sociolinguistic Patterns. Conduct and Communication. University of Pennsylvania Press, Incorporated.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning the structure of variable-order crfs: a finite-state perspective",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fran\u00e3 \u00a7ois Yvon",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "433--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Lavergne and Fran\u00c3 \u00a7ois Yvon. 2017. Learning the structure of variable-order crfs: a finite-state per- spective. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 433-439. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Extracting absolute spatial entities from sms: comparing a supervised and an unsupervised approach. Language and the new (instant) media",
"authors": [
{
"first": "C\u00e9dric",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Zenasni",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Kergosien",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Partalas",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Roche",
"suffix": ""
},
{
"first": "Maguelonne",
"middle": [],
"last": "Teisseire",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Panckhurst",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C\u00e9dric Lopez, Sarah Zenasni, Eric Kergosien, Ioannis Partalas, Mathieu Roche, Maguelonne Teisseire, and Rachel Panckhurst. 2018. Extracting absolute spatial entities from sms: comparing a supervised and an unsupervised approach. Language and the new (instant) media.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Multi-task sequence to sequence learning",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task sequence to sequence learning. In International Conference on Learning Representations.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The penn arabic treebank: Building a large-scale annotated arabic corpus",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Wigdan",
"middle": [],
"last": "Mekki",
"suffix": ""
}
],
"year": 2004,
"venue": "NEMLAR conference on Arabic language resources and tools",
"volume": "27",
"issue": "",
"pages": "466--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The penn arabic treebank: Build- ing a large-scale annotated arabic corpus. In NEMLAR conference on Arabic language resources and tools, volume 27, pages 466-467. Cairo.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Fisher levantine arabic conversational telephone speech. Linguistic Data Consortium",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Buckwalter",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Hubert",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2007,
"venue": "LDC Catalog",
"volume": "",
"issue": "",
"pages": "2007--2009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Tim Buckwalter, Dave Graff, and Hubert Jin. 2007. Fisher levantine arabic conversational telephone speech. Linguistic Data Consortium, University of Pennsylvania, LDC Catalog No.: LDC2007S02.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Penn arabic treebank guidelines",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Maamouri",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Sondos",
"middle": [],
"last": "Krouna",
"suffix": ""
},
{
"first": "Fatma",
"middle": [],
"last": "Gaddeche",
"suffix": ""
},
{
"first": "Basma",
"middle": [],
"last": "Bouziri",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Maamouri, Ann Bies, Sondos Krouna, Fatma Gaddeche, and Basma Bouziri. 2009. Penn arabic tree- bank guidelines. Linguistic Data Consortium.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Arabic transliteration of romanized tunisian dialect text: A preliminary investigation",
"authors": [
{
"first": "Abir",
"middle": [],
"last": "Masmoudi",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Mariem",
"middle": [],
"last": "Ellouze",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Est\u00e8ve",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Belguith",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "608--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abir Masmoudi, Nizar Habash, Mariem Ellouze, Yannick Est\u00e8ve, and Lamia Hadrich Belguith. 2015. Arabic transliteration of romanized tunisian dialect text: A preliminary investigation. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 608-619. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Transliteration of arabizi into arabic script for tunisian dialect",
"authors": [
{
"first": "Abir",
"middle": [],
"last": "Masmoudi",
"suffix": ""
},
{
"first": "Mourad",
"middle": [],
"last": "Mariem Ellouze Khmekhem",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Khrouf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belguith",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)",
"volume": "19",
"issue": "2",
"pages": "1--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abir Masmoudi, Mariem Ellouze Khmekhem, Mourad Khrouf, and Lamia Hadrich Belguith. 2019. Transliteration of arabizi into arabic script for tunisian dialect. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), 19(2):1-21.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Language identification and analysis of code-switched social media text",
"authors": [
{
"first": "Deepthi",
"middle": [],
"last": "Mave",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "51--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepthi Mave, Suraj Maharjan, and Thamar Solorio. 2018. Language identification and analysis of code-switched social media text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching, pages 51-61.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Document-level neural machine translation with hierarchical attention networks",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich",
"suffix": ""
},
{
"first": "Dhananjay",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2947--2954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural ma- chine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947-2954, Brussels, Belgium, October-November. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Les 'textos' plurilingues: l'alternance codique comme ressource d'affiliation\u00e0 une communaut\u00e9 globalis\u00e9e. Revue fran\u00e7aise de linguistique appliqu\u00e9e",
"authors": [
{
"first": "Etienne",
"middle": [],
"last": "Morel",
"suffix": ""
},
{
"first": "Simona",
"middle": [
"Pekarek"
],
"last": "Doehler",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "18",
"issue": "",
"pages": "29--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Etienne Morel and Simona Pekarek Doehler. 2013. Les 'textos' plurilingues: l'alternance codique comme ressource d'affiliation\u00e0 une communaut\u00e9 globalis\u00e9e. Revue fran\u00e7aise de linguistique appliqu\u00e9e, 18(2):29-43.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Entre linguistique et informatique. Des outils de traitement automatique du langage naturel\u00e9crit (TALNE)\u00e0 l'analyse du discours num\u00e9rique m\u00e9di\u00e9 (DNM)",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Panckhurst",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rachel Panckhurst. 2017. Entre linguistique et informatique. Des outils de traitement automatique du langage naturel\u00e9crit (TALNE)\u00e0 l'analyse du discours num\u00e9rique m\u00e9di\u00e9 (DNM). Ph.D. thesis, Universit\u00e9 Paris-Est, Paris, France.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Understanding the exploding gradient problem",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient problem. CoRR, abs/1211.5063.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic",
"authors": [
{
"first": "Arfath",
"middle": [],
"last": "Pasha",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Al-Badrashiny",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"El"
],
"last": "Kholy",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Manoj",
"middle": [],
"last": "Pooleery",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "1094--1101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arfath Pasha, Mohamed Al-Badrashiny, Mona Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC'14), pages 1094-1101, Reykjavik, Iceland, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Emotion encoding and interpretation in computer-mediated communication: Reasons for use",
"authors": [
{
"first": "A",
"middle": [],
"last": "Monica",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"J"
],
"last": "Riordan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kreuz",
"suffix": ""
}
],
"year": 2010,
"venue": "Computers in human behavior",
"volume": "26",
"issue": "6",
"pages": "1667--1673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Monica A Riordan and Roger J Kreuz. 2010. Emotion encoding and interpretation in computer-mediated commu- nication: Reasons for use. Computers in human behavior, 26(6):1667-1673.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "silent orality\": toward a conceptualization of the digital oral features in cmc and sms texts",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Soffer",
"suffix": ""
}
],
"year": 2010,
"venue": "Communication Theory",
"volume": "20",
"issue": "4",
"pages": "387--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Soffer. 2010. \"silent orality\": toward a conceptualization of the digital oral features in cmc and sms texts. Communication Theory, 20(4):387-404.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Writing Arabizi: Orthographic Variation in Romanized Lebanese Arabic on Twitter",
"authors": [
{
"first": "Natalie",
"middle": [],
"last": "Sullivan",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalie Sullivan. 2017. Writing Arabizi: Orthographic Variation in Romanized Lebanese Arabic on Twitter. Ph.D. thesis, The University of Texas at Austin, USA.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of NIPS, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Corpus-Based Approaches to Dialect Study",
"authors": [
{
"first": "Benedikt",
"middle": [],
"last": "Szmrecsanyi",
"suffix": ""
},
{
"first": "Lieselotte",
"middle": [],
"last": "Anderwald",
"suffix": ""
}
],
"year": 2018,
"venue": "The Handbook of Dialectology",
"volume": "",
"issue": "",
"pages": "300--313",
"other_ids": {
"DOI": [
"10.1002/9781118827628.ch17"
]
},
"num": null,
"urls": [],
"raw_text": "Benedikt Szmrecsanyi and Lieselotte Anderwald. 2018. Corpus-Based Approaches to Dialect Study. In The Handbook of Dialectology, pages 300-313. John Wiley & Sons, Ltd. Section: 17 eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1002/9781118827628.ch17.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Multilingual spoken language corpus development for communication research",
"authors": [
{
"first": "Toshiyuki",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Masahide",
"middle": [],
"last": "Mizushima",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "International Journal of Computational Linguistics & Chinese Language Processing",
"volume": "12",
"issue": "",
"pages": "303--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiyuki Takezawa, Genichiro Kikui, Masahide Mizushima, and Eiichiro Sumita. 2007. Multilingual spoken language corpus development for communication research. In International Journal of Computational Linguis- tics & Chinese Language Processing, Volume 12, Number 3, September 2007: Special Issue on Invited Papers from ISCSLP 2006, pages 303-324.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "The paralinguistic function of emojis in twitter communication",
"authors": [
{
"first": "Yasmin",
"middle": [],
"last": "Tantawi",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Beth"
],
"last": "Rosson",
"suffix": ""
}
],
"year": 2019,
"venue": "Social Media Corpora for the Humanities (CMC-Corpora2019)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasmin Tantawi and Mary Beth Rosson. 2019. The paralinguistic function of emojis in twitter communication. Social Media Corpora for the Humanities (CMC-Corpora2019), page 68.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Patterns of communicative behaviour in internet chatrooms. Unpublished master's thesis",
"authors": [
{
"first": "Sunisa",
"middle": [],
"last": "Todla",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunisa Todla. 1999. Patterns of communicative behaviour in internet chatrooms. Unpublished master's thesis, Chulalongkorn University.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Collaborative multi-domain sentiment classification",
"authors": [
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "459--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangzhao Wu and Yongfeng Huang. 2015. Collaborative multi-domain sentiment classification. In 2015 IEEE International Conference on Data Mining, pages 459-468.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Pay less attention with lightweight and dynamic convolutions",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Developing linguistic corpora: A guide to good practice",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Wynne",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Wynne. 2005. Developing linguistic corpora: A guide to good practice. Oxbow Books Limited.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Oral and written linguistic aspects of computer conferencing",
"authors": [
{
"first": "Simeon",
"middle": [
"J"
],
"last": "Yates",
"suffix": ""
}
],
"year": 1996,
"venue": "Pragmatics and beyond New Series",
"volume": "",
"issue": "",
"pages": "29--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simeon J. Yates. 1996. Oral and written linguistic aspects of computer conferencing. Pragmatics and beyond New Series, pages 29-46.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Constructing linguistic resources for the tunisian dialect using textual user-generated contents on the social web",
"authors": [
{
"first": "Jihen",
"middle": [],
"last": "Younes",
"suffix": ""
},
{
"first": "Hadhemi",
"middle": [],
"last": "Achour",
"suffix": ""
},
{
"first": "Emna",
"middle": [],
"last": "Souissi",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Web Engineering",
"volume": "",
"issue": "",
"pages": "3--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihen Younes, Hadhemi Achour, and Emna Souissi. 2015. Constructing linguistic resources for the tunisian dialect using textual user-generated contents on the social web. In International Conference on Web Engineering, pages 3-14. Springer.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Romanized tunisian dialect transliteration using sequence labelling techniques",
"authors": [
{
"first": "Jihene",
"middle": [],
"last": "Younes",
"suffix": ""
},
{
"first": "Hadhemi",
"middle": [],
"last": "Achour",
"suffix": ""
},
{
"first": "Emna",
"middle": [],
"last": "Souissi",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Ferchichi",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of King Saud University-Computer and Information Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihene Younes, Hadhemi Achour, Emna Souissi, and Ahmed Ferchichi. 2020. Romanized tunisian dialect translit- eration using sequence labelling techniques. Journal of King Saud University-Computer and Information Sci- ences.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Joint diacritization, lemmatization, normalization, and fine-grained morphological tagging",
"authors": [
{
"first": "Nasser",
"middle": [],
"last": "Zalmout",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.02267"
]
},
"num": null,
"urls": [],
"raw_text": "Nasser Zalmout and Nizar Habash. 2019. Joint diacritization, lemmatization, normalization, and fine-grained morphological tagging. arXiv preprint arXiv:1910.02267.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Morphological analysis of tunisian dialect",
"authors": [
{
"first": "In\u00e8s",
"middle": [],
"last": "Zribi",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Mariem Ellouze Khemekhem",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belguith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Sixth International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "992--996",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In\u00e8s Zribi, Mariem Ellouze Khemekhem, and Lamia Hadrich Belguith. 2013. Morphological analysis of tunisian dialect. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 992-996.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Morphological disambiguation of tunisian dialect",
"authors": [
{
"first": "In\u00e8s",
"middle": [],
"last": "Zribi",
"suffix": ""
},
{
"first": "Mariem",
"middle": [],
"last": "Ellouze",
"suffix": ""
},
{
"first": "Lamia Hadrich",
"middle": [],
"last": "Belguith",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of king Saud University-computer and information sciences",
"volume": "29",
"issue": "2",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In\u00e8s Zribi, Mariem Ellouze, Lamia Hadrich Belguith, and Philippe Blache. 2017. Morphological disambiguation of tunisian dialect. Journal of king Saud University-computer and information sciences, 29(2):147-155.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table><tr><td/><td>Training</td><td/><td>Dev</td><td/><td>Test</td><td/></tr><tr><td># sentences</td><td>40 472</td><td/><td>5 000</td><td/><td colspan=\"2\">5 000</td></tr><tr><td/><td colspan=\"6\">Words Labels Words Labels Words Labels</td></tr><tr><td># tokens</td><td>719 530</td><td colspan=\"2\">-76 704</td><td colspan=\"2\">-92 004</td><td>-</td></tr><tr><td>dictionary</td><td>77 220</td><td colspan=\"2\">681 15 852</td><td colspan=\"2\">501 20 149</td><td>537</td></tr><tr><td>OOV%</td><td>-</td><td>-</td><td>30,90</td><td>0,01</td><td>37,18</td><td>0,015</td></tr></table>",
"type_str": "table",
"num": null,
"text": "The queries for the attention mechanisms are always the Decoder i 's hidden states, while keys and values are the encoder and previous decoder hidden states. The attention vectors computed by the attention mechanisms are simply summed together to generate the final state, used to predict the next output.5"
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td>Sentences</td><td/><td>Words</td><td/></tr><tr><td>Total</td><td>4121</td><td/><td>32 062</td><td/></tr><tr><td/><td/><td colspan=\"3\">arabizi foreign emotag</td></tr><tr><td>forum</td><td>756</td><td>6039</td><td>5856</td><td>14</td></tr><tr><td>social</td><td colspan=\"2\">3146 11 843</td><td>3614</td><td>587</td></tr><tr><td>blog</td><td>219</td><td>3763</td><td>343</td><td>3</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Statistics of the German corpus TIGER"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>4 Evaluation</td></tr><tr><td>4.1 Data</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Statistics of the already annotated part of TArC"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Corpus: TIGER Test data</td><td/></tr><tr><td colspan=\"2\">Best results</td><td/></tr><tr><td/><td>PoS</td><td>MORPHO</td></tr><tr><td colspan=\"2\">(Dinarelli and Grobol, 2019) 97.74%</td><td>91.86%</td></tr><tr><td colspan=\"2\">Our results</td><td/></tr><tr><td>Model</td><td colspan=\"2\">LSTM</td></tr><tr><td>Task</td><td>PoS</td><td>Morpho</td></tr><tr><td>Char decoding</td><td>97.44%</td><td>91.81%</td></tr><tr><td>Token decoding</td><td>94.44%</td><td>83.37%</td></tr><tr><td>Token+char decoding</td><td>97.25%</td><td>87.87%</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Summary of results, in terms of accuracy, obtained on the TIGER development data set with the Tarc Multi-Task system."
},
"TABREF5": {
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "Summary of results, in terms of accuracy, obtained on the TIGER test data set with the Tarc Multi-Task system."
}
}
}
}
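
The TABREF0 caption text above describes how the final decoder state is formed: the queries are the current decoder's hidden states, the keys and values come from the encoder states and from the previous decoder's states, and the resulting attention vectors are summed. Below is a minimal PyTorch sketch of that combination, assuming standard scaled dot-product attention; it is not the paper's Fairseq implementation, and all names (`SummedAttention`, `attend`, the tensor arguments) are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's code) of summing attention vectors
# computed over two sources: encoder states and previous-decoder states.
import torch
import torch.nn as nn


class SummedAttention(nn.Module):
    """Scaled dot-product attention over two sources, combined by summation."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scale = hidden_size ** -0.5

    def attend(self, queries, keys, values):
        # queries: (batch, tgt_len, hidden); keys/values: (batch, src_len, hidden)
        scores = torch.bmm(queries, keys.transpose(1, 2)) * self.scale
        weights = torch.softmax(scores, dim=-1)
        return torch.bmm(weights, values)

    def forward(self, decoder_states, encoder_states, prev_decoder_states):
        # One attention vector per source, summed into the single final state
        # used to predict the next output.
        ctx_encoder = self.attend(decoder_states, encoder_states, encoder_states)
        ctx_previous = self.attend(decoder_states, prev_decoder_states, prev_decoder_states)
        return ctx_encoder + ctx_previous


if __name__ == "__main__":
    batch, hidden = 2, 8
    attn = SummedAttention(hidden)
    final_state = attn(
        torch.randn(batch, 5, hidden),  # current decoder hidden states (queries)
        torch.randn(batch, 7, hidden),  # encoder hidden states
        torch.randn(batch, 6, hidden),  # previous decoder hidden states
    )
    print(final_state.shape)  # torch.Size([2, 5, 8])
```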