|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:58:42.811304Z" |
|
}, |
|
"title": "On the Importance of Tokenization in Arabic Embedding Models", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Alkaoud", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"addrLine": "Davis One Shields Ave Davis", |
|
"postCode": "95616", |
|
"region": "CA", |
|
"country": "United States" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mairaj", |
|
"middle": [], |
|
"last": "Syed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of California", |
|
"location": { |
|
"addrLine": "Davis One Shields Ave Davis", |
|
"postCode": "95616", |
|
"region": "CA", |
|
"country": "United States" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Arabic, like other highly inflected languages, encodes a large amount of information in its morphology and word structure. In this work, we propose two embedding strategies that modify the tokenization phase of traditional word embedding models (Word2Vec) and contextual word embedding models (BERT) to take into account Arabic's relatively complex morphology. In Word2Vec, we segment words into subwords during training time and then compose wordlevel representations from the subwords during test time. We train our embeddings on Arabic Wikipedia and show that they perform better than a Word2Vec model on multiple Arabic natural language processing datasets while being approximately 60% smaller in size. Moreover, we showcase our embeddings' ability to produce accurate representations of some out-of-vocabulary words that were not encountered before. In BERT, we modify the tokenization layer of Google's pretrained multilingual BERT model by incorporating information on morphology. By doing so, we achieve state of the art performance on two Arabic NLP datasets without pretraining.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Arabic, like other highly inflected languages, encodes a large amount of information in its morphology and word structure. In this work, we propose two embedding strategies that modify the tokenization phase of traditional word embedding models (Word2Vec) and contextual word embedding models (BERT) to take into account Arabic's relatively complex morphology. In Word2Vec, we segment words into subwords during training time and then compose wordlevel representations from the subwords during test time. We train our embeddings on Arabic Wikipedia and show that they perform better than a Word2Vec model on multiple Arabic natural language processing datasets while being approximately 60% smaller in size. Moreover, we showcase our embeddings' ability to produce accurate representations of some out-of-vocabulary words that were not encountered before. In BERT, we modify the tokenization layer of Google's pretrained multilingual BERT model by incorporating information on morphology. By doing so, we achieve state of the art performance on two Arabic NLP datasets without pretraining.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word embeddings are one of the main building blocks of most natural language processing tasks. Many word embedding techniques have been proposed (Mikolov et al., 2013b; Pennington et al., 2014; ) that try to capture better word representations. Although most of the approaches are language agnostic, they were historically designed to be used with English. Nonetheless, it was shown that these techniques work well in other languages (Grave et al., 2018; Soliman et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 168, |
|
"text": "(Mikolov et al., 2013b;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 193, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 454, |
|
"text": "(Grave et al., 2018;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 476, |
|
"text": "Soliman et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One feature that is unique to Arabic, and other highly inflected languages, is the expressiveness of its words. The fact that Arabic encodes a large amount of information in its word structure leads to potential problems in learning embeddings due to the large number of forms for each word, the more likely chances of out-of-vocabulary (OOV) instances, and the increase in model size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we propose two embedding strategies for Arabic that take into consideration its rich morphology by modifying the tokenization phase. The first technique concerns traditional embedding models and the second a contextual one. Our experiments are done on Word2Vec (Mikolov et al., 2013a) and BERT (Devlin et al., 2018 ) since they are the most popular traditional and contextual embedding techniques, respectively. Nonetheless, the approaches we propose are embedding-agnostic and can be applied to other embedding techniques. Figure 1 summarizes our two approaches and illustrates what happens in the training and inferences stages of each approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 298, |
|
"text": "(Mikolov et al., 2013a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 328, |
|
"text": "(Devlin et al., 2018", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 538, |
|
"end": 546, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In traditional embeddings, we analyze the effect of tokenizing words into subwords by splitting their suffixes and prefixes before training an embedding model and then combining these subwords using an algorithm we propose. We show that by doing so we get:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. Better performance: our model outperforms Word2Vec in multiple tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. Smaller size: our model is 59.6% smaller than a Word2Vec model trained on the same corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. 3. Superior out-of-vocabulary handling: our model is able to handle some OOV instances unlike Word2Vec.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In contextual embeddings, we investigate different tokenization schemes when using BERT without needing any pretraining, in contrast to previous approaches such as Antoun et al. (2020) . We simply modify the tokenization part of Google's pretrained multilingual BERT model (Devlin et al., 2018) resulting in models that:", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 184, |
|
"text": "Antoun et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 294, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. Achieve state of the art results on two Arabic NLP datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2. Do not require pretraining and can work on top of existing models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is structured as follows: Section 2 introduces the background and explains some of the fundamental ideas of word embeddings; Section 3 defines the process of gathering and cleaning our data; Section 4 highlights the embedding approaches we are proposing; Section 5 details the experiments and the results; Section 6 discusses related work; and Section 7 concludes by summarizing our findings and pointing to potential directions for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Word embedding techniques rely on the distributional hypothesis (Harris, 1954) which suggests that words that appear in similar contexts tend to have similar meanings. Mikolov et al. (2013a) popularized word embeddings when they introduced Word2Vec and showed that it produces representations that capture not only syntax but also words' semantics. Many related techniques have been produced after that (Mikolov et al., 2013b; Pennington et al., 2014; . The word vectors produced by such techniques capture interesting semantics; one popular example is how the the vectors capture relationships between them. For example, if we subtract the vector for 'man' from the vector of 'king', and then add the vector of 'woman' we get very close to the vector of 'queen'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 78, |
|
"text": "(Harris, 1954)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 190, |
|
"text": "Mikolov et al. (2013a)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 426, |
|
"text": "(Mikolov et al., 2013b;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 451, |
|
"text": "Pennington et al., 2014;", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
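
{

"text": "The analogy arithmetic above can be reproduced in a few lines with Gensim; this is a minimal illustrative sketch, and the model path and the English keys are assumptions made here for illustration, not artifacts of the paper:\n\nfrom gensim.models import KeyedVectors\n\n# Load previously trained word vectors (hypothetical path).\nwv = KeyedVectors.load('embeddings.kv')\n\n# king - man + woman should land close to queen if the relation is captured.\nprint(wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background",

"sec_num": "2"

},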
|
{ |
|
"text": "Contextual word embeddings were proposed (Devlin et al., 2018; Peters et al., 2018; Liu et al., 2019) to tackle the issues that arise from words have multiple senses and meanings. In traditional embeddings such as Word2Vec, each word is encoded in a vector, which is a fixed representation. This may cause problems with homographs and words that have multiple senses depending on the context. For example the wear 'bear' means two very different things in the following sentences: \"The right of the people to keep and bear arms shall not be infringed.\" and \"A wild bear was seen in the city.\" Yet, traditional embeddings will only capture one fixed representation. Contextual word embeddings aim to solve this issue by modeling embeddings where the context of the word will affect its generated representation. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 41, |
|
"end": 62, |
|
"text": "(Devlin et al., 2018;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 63, |
|
"end": 83, |
|
"text": "Peters et al., 2018;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 101, |
|
"text": "Liu et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we describe the process of gathering and preparing the data used for training the embeddings. We do not require any training data for contextual embeddings since we do not require pretraining: we use Google's multilingual BERT model (Devlin et al., 2018) , which supports 104 languages (including Arabic) and was trained on their respective Wikipedia dumps. For traditional embeddings, we use Arabic Wikipedia as a corpus. We downloaded the Wikipedia dump from January 2018 and then cleaned it by using WikiExtractor 1 , which is a utility that generates plain text from an XML formatted Wikipedia dump. We then use a custom set of regexes to filter out all non-Arabic words, such as English words and numbers, and remove all diacritics and kashidas resulting in over 86 million tokens.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 271, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "3" |
|
}, |
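
{

"text": "As an illustration of the cleaning step, the sketch below keeps Arabic-block characters and strips diacritics and kashidas; the exact regular expressions used in the paper are not given, so these character ranges are assumptions:\n\nimport re\n\n# Drop everything outside the Arabic Unicode block (Latin words, digits, punctuation).\nNON_ARABIC = re.compile(r'[^\\u0600-\\u06FF\\s]')\n# Remove diacritics (tashkeel) and the kashida/tatweel character.\nDIACRITICS_AND_KASHIDA = re.compile(r'[\\u064B-\\u0652\\u0640]')\n\ndef clean(text):\n    text = NON_ARABIC.sub(' ', text)\n    text = DIACRITICS_AND_KASHIDA.sub('', text)\n    return ' '.join(text.split())",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data",

"sec_num": "3"

},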
|
{ |
|
"text": "One example of Arabic's morphological complexity is in the number of verb forms it possesses which is much higher than in English as we can see in Table 1 . While traditionally, this aspect of Arabic has been challenging to the natural language processing and computational linguistics communities (Farghaly and Shaalan, 2009; Al-Ayyoub et al., 2018) , we asked whether we may benefit from this characteristic. Can we tokenize text differently for Arabic than we do for English, and would that result in better performance on NLP tasks? We experimented with two approaches; one applied to traditional embedding models (Word2Vec) and the other on contextual models (BERT).", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 326, |
|
"text": "(Farghaly and Shaalan, 2009;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 327, |
|
"end": 350, |
|
"text": "Al-Ayyoub et al., 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 154, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Traditional word embedding models are trained on a large corpus of text. The main difference in our approach is that we preprocess the text before feeding it into the embedding algorithm by splitting every word into subwords, which are its prefix(es), stem, and suffix(es) using Farasa, an Arabic segmenter, Figure 3: The increase in the size of the vocabulary when using words and subwords. The x-axis shows the number of words processed in the Arabic Wikipedia. The y-axis indicates the size of the vocabulary. (Abdelali et al., 2016) . The effect of using the Farasa segmenter can be seen in Figure 2 2 , where each row of squares represents a vector. Then we train the resulting corpus using Word2Vec, though we note that our approach is embedding agnostic and may be used with any embedding model. Notice that our vocabulary, and the resulting vectors, will be completely different now as shown in Figure 2 . It may seem at first glance that our vocabulary is increasing, but in fact it decreases as we keep adding more words as shown in Figure 3 , which depicts the sizes of the vocabularies in the first million words in Arabic Wikipedia.", |
|
"cite_spans": [ |
|
{ |
|
"start": 513, |
|
"end": 536, |
|
"text": "(Abdelali et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 595, |
|
"end": 603, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 903, |
|
"end": 911, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 1043, |
|
"end": 1051, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Traditional Embedding Models", |
|
"sec_num": "4.1" |
|
}, |
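
{

"text": "A minimal sketch of the segment-then-train pipeline described above; the segment() function is a stand-in for the Farasa segmenter (its interface here is an assumption), and the toy corpus is purely illustrative:\n\nfrom gensim.models import Word2Vec\n\ndef segment(word):\n    # Stand-in for Farasa: should return the word's prefix(es), stem, and suffix(es).\n    # Here it returns the word unchanged so the sketch runs end to end.\n    return [word]\n\ncorpus = [['a', 'toy', 'sentence'], ['another', 'toy', 'sentence']]\nsubword_corpus = [[piece for w in sent for piece in segment(w)] for sent in corpus]\n\n# Window doubled from 5 to 10 because each Arabic word yields about two subwords on average.\nmodel = Word2Vec(subword_corpus, vector_size=200, window=10, min_count=1, epochs=10)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Traditional Embedding Models",

"sec_num": "4.1"

},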
|
{ |
|
"text": "One question that comes to mind is how do we generate embeddings for words that were split into subwords because in most cases we want embeddings at the word level and not at the subword level. We propose the following technique for getting the embeddings of all types of words, including those with multiple subwords. For words with only one component, we just return the embeddings learnt by the model. For words with multiple components (subwords), we get the embedding of the longest subword, and the average of the embeddings of the remaining subwords. We then take a weighted average of the two values which results in our embedding as Equations 1 and 2 show.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Traditional Embedding Models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "l = arg max x\u2208S (|x|) (1) v = \u03b1 * (M [l]) + (1 \u2212 \u03b1) * ( x\u2208S\\{l} M [x] n \u2212 1 )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Traditional Embedding Models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "where S is the set containing all the subwords contained in the word, M is the learnt model that contains embeddings for all subwords, and \u03b1 is a parameter that decides the weights between the longest subword and the other subwords. Algorithm 1 illustrates the algorithm used in determining embeddings for all cases. This method not only allows us to generate embeddings for all words in the original corpus, but also increases its capacity to deal with out-of-vocabulary (OOV) words that the model has never seen before as shown in Figure 4 . For example, in Figure 4 , we see how our model can produce a representation of 'and their iPhone' which is one word in Arabic. A traditional model trained on the Arabic Wikipedia will fail to produce a representation of the word 'and their iPhone' because that word never appeared in Wikipedia. In fact, since there are many forms for each word, no matter how large the training corpus is, it is almost impossible for it to have seen occurrences of all possible forms of all words in it. Our model can tackle this problem because it operates on a subword level and has seen all the three components that make up the word: 'and', 'their' and 'iPhone' as shown in Figure 4 . This procedure allows one to expand a model's vocabulary without retraining or requiring numerous examples of a given word. Of Algorithm 1 Generating embeddings of words from subwords OOV: Can't be found.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 533, |
|
"end": 541, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 560, |
|
"end": 568, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1207, |
|
"end": 1215, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Traditional Embedding Models", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "function GETEMBEDDING(word, \u03b1, model) if word \u2208 model then return model[word] else S = get components(word) for s \u2208 S do if s / \u2208 model then return error l = argmax s\u2208S (|x|) S = S \u2212 l return \u03b1 * (model[l]) + (1 \u2212 \u03b1) * (sum([model[s] for s \u2208 S]) \u00f7 |S|)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Traditional Embedding Models", |
|
"sec_num": "4.1" |
|
}, |
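
{

"text": "A runnable Python rendering of Algorithm 1 and Equations 1 and 2 follows; it assumes model behaves like a mapping from subwords to vectors (e.g. Gensim KeyedVectors), that get_components() plays the role of the Farasa segmentation, and it uses \u03b1 = 0.3, the value found to work best in Section 5:\n\nimport numpy as np\n\ndef get_embedding(word, model, get_components, alpha=0.3):\n    # Words kept whole by the segmenter: return the learnt vector directly.\n    if word in model:\n        return model[word]\n    subwords = get_components(word)\n    if any(s not in model for s in subwords):\n        raise KeyError('a subword is out of vocabulary')\n    if len(subwords) == 1:\n        return model[subwords[0]]\n    # Equation 1: pick the longest subword; Equation 2: weighted average with the rest.\n    idx = max(range(len(subwords)), key=lambda i: len(subwords[i]))\n    longest, rest = subwords[idx], subwords[:idx] + subwords[idx + 1:]\n    return alpha * model[longest] + (1 - alpha) * np.mean([model[s] for s in rest], axis=0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Traditional Embedding Models",

"sec_num": "4.1"

},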
|
{ |
|
"text": "Figure 4: Dealing with out-of-vocabulary words in both the traditional models and our proposed model. course, not all out-of-vocabulary words will be found this way. Nonetheless, it's a cheap way to generate representations of new words that is not possible in classical approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Traditional embeddings:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Transformer-based (Vaswani et al., 2017) contextual embedding models, such as BERT (Devlin et al., 2018) , often require a tokenization step that solves problems such as the out-of-vocabulary issue. Bytepair encoding (BPE) (Sennrich et al., 2015; Gage, 1994) , one of the most popular tokenization methods relies on segmenting each word into the most frequent subwords. Shapiro and Duh (2018a) have shown that byte-pair encoding does not perform well for Arabic compared to other languages. One possible explanation of this is that byte-pair encoding does not include information derived from a given language's morphology. We did some experiments using BERT's pretrained tokenizer and found instances where the produced segments generated erroneous meanings. For example, the word mal\\ab in most cases is a noun that refers to a stadium or field. BERT tokenizes mal\\ab by segmenting it to mal (milliliter) and \\ab (gulp or fill up), instead of the correct segmentation: ma (a prefix used to create nouns of place) and l\\ab (play). BERT's segmentation seems to indicate that the word mal\\ab is related to liquids and water (milliliter/gulp/fill up). Keep in mind that both ma and l\\ab are in BERT's subword vocabulary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 40, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 83, |
|
"end": 104, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 246, |
|
"text": "(Sennrich et al., 2015;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 258, |
|
"text": "Gage, 1994)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 393, |
|
"text": "Shapiro and Duh (2018a)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Embedding Models", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We propose two methods that aim to incorporate a language's structure via better segmentations, which we call MorphBERT (Morphology BERT) and CharBERT (Character BERT). In MorphBERT, a custom tokenizer is used to replace the default tokenizer layer as seen in Figure 5 . That custom tokenizer will use a language specific segmenter, Farasa (Abdelali et al., 2016) in our case, to segment each word before processing it. We then pass each word, after segmentation, to the original model's tokenizer to make sure that the produced segments are in the model's vocabulary. Keep in mind that MorphBERT differs from AraBERTv1 (Antoun et al., 2020) , an Arabic BERT model that also utilizes Farasa, in that it does not require pretraining. In addition to that, Antoun et al. (2020) preprocess the training corpus by segmenting it using Farasa before training AraBERTv1 which is not the case with MorphBERT. In CharBERT, we segment everything to characters as shown in Figure 5 . The main idea behind CharBERT is to let the network learn these language structures on its own. Both of these models do not require training and can be used with any pretrained model as we see in Figure 5 . This is important due to the expensive -money-wise, time-wise, and environment-wise -process of training BERT and other state of the art models. We believe that developing simpler, more sustainable, and more efficient NLP models is of an utmost importance due to the many problems that arise from computationally heavy models, (Strubell et al., 2019) which can take weeks to train on many TPUs/GPUs. Moreover, most people, especially in less developed countries, do not have the resources to train these models, which limits accessibility.", |
|
"cite_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 363, |
|
"text": "(Abdelali et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 620, |
|
"end": 641, |
|
"text": "(Antoun et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1506, |
|
"end": 1529, |
|
"text": "(Strubell et al., 2019)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 268, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 969, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1168, |
|
"end": 1176, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Embedding Models", |
|
"sec_num": "4.2" |
|
}, |
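
{

"text": "A minimal sketch of the two tokenization schemes; the transformers BertTokenizer and the public bert-base-multilingual-cased checkpoint are real, while segment() is an assumed stand-in for the Farasa segmenter interface:\n\nfrom transformers import BertTokenizer\n\nbert_tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')\n\ndef morph_tokenize(text, segment):\n    # MorphBERT-style: segment each word morphologically, then let the original\n    # WordPiece tokenizer map each piece into the existing vocabulary.\n    return [tok for word in text.split() for piece in segment(word) for tok in bert_tok.tokenize(piece)]\n\ndef char_tokenize(text):\n    # CharBERT-style: break every word into characters before vocabulary lookup.\n    return [tok for word in text.split() for ch in word for tok in bert_tok.tokenize(ch)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextual Embedding Models",

"sec_num": "4.2"

},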
|
{ |
|
"text": "In this section we detail our experiments and highlight the results produced by the models we proposed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For intrinsic evaluation of traditional embedding models we used the Arabic word analogy benchmark created by Elrazzaz et al. (2017) . This dataset consists of nine relations that each consist of over 100 word pairs. We use the following datasets to evaluate our models extrinsically: 1. APMD: The Arab poem meters dataset (Alyafeai, 2020) which consists of 55,440 poem verses with each verse classified into one of the fourteen Arabic poetry meters. The data is split into training and testing sets. (Elnagar et al., 2018) consists of 93,700 hotel reviews that are classified into positive or negative according to their rating. Reviews with a rating of four or five were assigned positive, and those with a rating of one or two were labeled negative. Reviews with a rating of three were ignored. We split the data into 80% and 20% training and testing sets respectively using the script provided by Antoun et al. (2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 132, |
|
"text": "Elrazzaz et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 523, |
|
"text": "(Elnagar et al., 2018)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 901, |
|
"end": 921, |
|
"text": "Antoun et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Datasets", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The Large-scale Arabic Book Reviews dataset (Aly and Atiya, 2013) consists of 63,000 book reviews rated between one and five. We use the unbalanced two-class dataset, where reviews with a rating of one or two are labeled negative, and those with a rating on four or five are labeled positive. The data is split into training and testing sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 65, |
|
"text": "(Aly and Atiya, 2013)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LABR:", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Word analogies, which consists of sets of pairs that share a common relationship, are often used to evaluate different embedding techniques. For example, let's say that we have the following pairs: (king, queen), and (male, female). Both of these pairs contain a word that indicates masculinity and a second word that indicates the feminine version of the first word. A perfect word embedding representation should be able to capture this relationship, and the way to mathematically measure it is by calculating the vector king\u2212male+female. After that, we check to see whether the resulting vector is the closest to the vector queen or not. We use the Arabic word analogy benchmark created by Elrazzaz et al. (2017) to evaluate our approach. We used Gensim's (\u0158eh\u016f\u0159ek and Sojka, 2010) word analogy evaluate function to evaluate the models and set the 'dummy4unknown' flag on so that all tuples of pairs that contain a word (or more) that are not in our vocabulary will get zero accuracy (instead of being skipped), similar to the procedure adopted by Elrazzaz et al. (2017) . We train two models: a vanilla Word2Vec model (base model) and our proposed model on the Arabic Wikipedia dataset mentioned in Section 3. For the base model, we set the window size to be five. Since our model segments words before learning embeddings, we compute the average number of subwords per word in our corpus and adjust the window size by that number. The average number of components per word is around two (1.97); to account for that we set window size in our model to be be ten instead of five. For both, we set the number of epochs to be equal to ten. We experimented with multiple \u03b1 values for our proposed model and found that setting it to 0.3 achieves the best results. Moreover, to avoid vocabulary size discrepancies, we standardize the vocabulary size before the evaluation by ensuring that our model has the same vocabulary as the base model. To do that, we create a new empty set and go through all vocabulary in the base model and if the word exists in our model (words with only one subword) then we add its representation to the set; otherwise, we decompose it and then add its representation according to equation 2. The results are summarized in Table 2 . Our method performs as well, if not better, than the Word2Vec while being around 60% smaller in size. The difference in size come from the vocabulary size which is 155K for our model compared to 383K for the base model. We also notice that CBOW performs better than Skip-gram which is consistent with previous research findings (Elrazzaz et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 693, |
|
"end": 715, |
|
"text": "Elrazzaz et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1051, |
|
"end": 1073, |
|
"text": "Elrazzaz et al. (2017)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 2586, |
|
"end": 2609, |
|
"text": "(Elrazzaz et al., 2017)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2248, |
|
"end": 2255, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic evaluation", |
|
"sec_num": "5.2.1" |
|
}, |
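
{

"text": "A sketch of this evaluation with Gensim; the file names are placeholders, and the benchmark file is assumed to be in Gensim's questions-words format:\n\nfrom gensim.models import KeyedVectors\n\nwv = KeyedVectors.load('our_model.kv')  # hypothetical path to the trained vectors\n\n# dummy4unknown=True scores analogies containing OOV words as wrong instead of skipping them.\nscore, sections = wv.evaluate_word_analogies('arabic_analogies.txt', dummy4unknown=True)\nprint(f'overall analogy accuracy: {score:.2%}')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Intrinsic evaluation",

"sec_num": "5.2.1"

},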
|
{ |
|
"text": "One important distinction that separates our model from the base model is its ability to accommodate OOV words. To test the quality of these generated OOV vectors, we go through all the OOV words in our analogy benchmark and generate vectors for the ones we can, i.e. the ones for which we have entries Accuracy (without OOV handling) Accuracy (with OOV handling) FastText 8.05% 6.44% Our model 10.10% 13.32% Table 3 : Performance of our model compared to fastText when generating vectors for OOV words.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 416, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OOV Handling", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "APMD LABR HARD Base model 29.95% 86.18% 92.99% Our model 37.75% 86.21% 93.09% Table 4 : Performance (accuracy) of our model compared to the base model on the three datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OOV Handling", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "for all their respective subwords. The total number of OOV words is 127 and we are able to generate representations for 70 of them covering 55.12% of all the OOV words. After that we follow a similar procedure to the one we have done in the previous subsection and check how the accuracy has been affected. We evaluated the performance of our best performing model (CBOW, dim=200) and found that adding the OOV representations increased the accuracy from 10.10% to 13.32%. To compare our OOV representations, we trained a fastText model, a popular embedding approach that can handle OOV, and then calculated the accuracy before adding the OOV entities and after; keep in mind that the vocabulary size in both, fastText and our model, will be the same. FastText achieves 8.05% before adding the OOV entities and 6.44% after adding them as shown in Table 3 . Not only did fastText achieve worse gains than our model, it actually performs worse than before which calls to question the accuracy of fastText's OOV embeddings for Arabic.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 847, |
|
"end": 854, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "OOV Handling", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "We evaluate our approach on the three datasets mentioned above. We feed the embeddings of the words to a bidirectional Gated Recurrent Units (GRU) (Cho et al., 2014) network to train it. After that, we evaluate the network on the test set. Table 4 shows the performance of our model compared to the base model. We can see that our model clearly outperforms the base model in one dataset (APMD) and performs slightly better on the two other datasets (LABR and HARD).", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 165, |
|
"text": "(Cho et al., 2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 247, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Extrinsic Evaluation", |
|
"sec_num": "5.2.3" |
|
}, |
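
{

"text": "A sketch of the downstream classifier described above, in PyTorch; the hidden size and the use of the final hidden states are assumptions, since the paper does not report the exact network configuration:\n\nimport torch\nimport torch.nn as nn\n\nclass BiGRUClassifier(nn.Module):\n    def __init__(self, emb_dim=200, hidden=128, n_classes=2):\n        super().__init__()\n        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)\n        self.out = nn.Linear(2 * hidden, n_classes)\n\n    def forward(self, embedded):  # embedded: (batch, seq_len, emb_dim) pretrained word vectors\n        _, h = self.gru(embedded)  # h: (2, batch, hidden), one state per direction\n        return self.out(torch.cat([h[0], h[1]], dim=-1))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Extrinsic Evaluation",

"sec_num": "5.2.3"

},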
|
{ |
|
"text": "We fine-tune the three models: the base cased multilingual BERT, MorphBERT, and CharBERT; and then compare their performances on the three datasets mentioned above. We follow the recommendations from BERT's paper (Devlin et al., 2018) in setting the fine-tuning hyperparameters. We run them all for four epochs in batches of 32 or 16 depending on the lengths of input sequences to avoid memory issues on the GPU. We optimize using the Adam algorithm with a learning rate of 2e \u22125 , \u03b21 = 0.9, and \u03b22 = 0.999. We also compare our models to AraBERT (AraBERTv0.1 and AraBERTv1) which consists of two monolingual BERT models trained on Arabic that were proposed by Antoun et al. (2020) and use the results reported by them for LABR and HARD. For APMD, we downloaded their models and fine-tuned them on the task. We also evaluate our models on the cleaned ANERcorp (Benajiba and Rosso, 2007; Antoun et al., 2020) NER dataset. For ANERcorp, we used the script provided by Antoun et al. (2020) in fine-tuning our models. Table 5 shows the performance of the five models on the downstream tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 234, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 680, |
|
"text": "Antoun et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 885, |
|
"text": "(Benajiba and Rosso, 2007;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 906, |
|
"text": "Antoun et al., 2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 965, |
|
"end": 985, |
|
"text": "Antoun et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1013, |
|
"end": 1020, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Embedding Models", |
|
"sec_num": "5.3" |
|
}, |
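
{

"text": "The fine-tuning setup stated above can be expressed compactly as follows; this is a sketch that omits the data loading and training loop, and BertForSequenceClassification with the public multilingual checkpoint is used here as an assumed stand-in for the full per-task heads:\n\nimport torch\nfrom transformers import BertForSequenceClassification\n\nmodel = BertForSequenceClassification.from_pretrained('bert-base-multilingual-cased', num_labels=2)\n\n# Adam with the hyperparameters reported above.\noptimizer = torch.optim.Adam(model.parameters(), lr=2e-5, betas=(0.9, 0.999))\nEPOCHS = 4\nBATCH_SIZE = 32  # 16 for longer input sequences to avoid GPU memory issues",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Contextual Embedding Models",

"sec_num": "5.3"

},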
|
{ |
|
"text": "It is interesting that by just changing the tokenization method we can improve BERT's performance in Arabic without retraining. As we can see in Table 5 , MorphBERT and CharBERT achieve state of the art performance on LABR and APMD respectively. The previous state of the art model for LABR is the MCE-CNN model proposed by Dahou et al. (2019) which achieves an accuracy of 87.48%. Our models perform better than AraBERT in two tasks even though AraBERT was: 1) trained specifically for Arabic, and 2) trained on a larger Arabic corpus: 24GB of data for AraBERT compared to 4.3GB for the multilingual BERT. Although MultiBERT was trained on over a hundred languages, simply replacing tokenizations allowed us to add language specific information without requiring any training. While normally byte pair encoding learns representations of subwords without paying attention to their Table 5 : Performance of MorphBERT and CharBERT compared to the multilingual BERT (MultiBERT), AraBERTv0.1, and AraBERTv1 meaning, we can utilize this procedure of breaking words into chunks that make more sense as we saw in the mal\\ab example before. CharBERT in particular is interesting; one would expect that it will require more time to fine-tune since it only uses characters. Nevertheless, it achieves great performance without requiring more epochs than the other methods. One potential issue with CharBERT is that it results in very long sequences due to the character segmentation approach it follows which lead to more frequent truncations than other models. One potential way to mitigate this is by using new models such as Longformer (Beltagy et al., 2020) that allow longer sequences than BERT. Previous research (Virtanen et al., 2019; Antoun et al., 2020; Vries et al., 2019; Martin et al., 2020) has shown that a language specific BERT model performs better than a multilingual one. This is the first work, according to our knowledge, that shows that by tweaking a multilingual BERT model one can beat a BERT model trained on a specific language. Natural language processing entered a new era with the advent of pretrained models that do not need to be trained from scratch for every task but can simply be tweaked/fine-tuned instead. Our results shows that it may be possible to only have one multilingual model that can be tweaked instead of learning a pretrained model for every language.", |
|
"cite_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 343, |
|
"text": "Dahou et al. (2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1628, |
|
"end": 1650, |
|
"text": "(Beltagy et al., 2020)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1708, |
|
"end": 1731, |
|
"text": "(Virtanen et al., 2019;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1732, |
|
"end": 1752, |
|
"text": "Antoun et al., 2020;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1753, |
|
"end": 1772, |
|
"text": "Vries et al., 2019;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1773, |
|
"end": 1793, |
|
"text": "Martin et al., 2020)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 881, |
|
"end": 888, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Embedding Models", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Many works have noted how sensitivity to Arabic's morphological complexity can result in better performance in standard NLP tasks. However, we do not know of any previous work that has specifically focused on the effect of tokenization on different Arabic embedding models. Antoun et al. (2020) trained an Arabic specific BERT model. They also trained another Arabic BERT model in which they segment the text before training the model and showed that it usually improves performance. Taylor and Brychc\u00edn (2018) analyzed morphological relations in Arabic word embeddings. They noted that some morphological features are captured in embeddings representations. Shapiro and Duh (2018b) proposed utilizing subword information in training embeddings to enrich the representations and showed that it improves the performance on word similarity tasks. Salama et al. (2018) investigated morphological-based embeddings and lemma-based embeddings. They utilized part-of-speech information to train their embeddings, similar to Trask et al. (2015) , and then build lemma-based embeddings from them by aggregating on different senses of each word first and them combining words that share the same lemma. El-Kishky et al. (2019) tackled the problem of extracting roots of words and proposed an extension to fastText (Bojanowski et al., 2016) that utilize morphemes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 294, |
|
"text": "Antoun et al. (2020)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 510, |
|
"text": "Taylor and Brychc\u00edn (2018)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 682, |
|
"text": "Shapiro and Duh (2018b)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 845, |
|
"end": 865, |
|
"text": "Salama et al. (2018)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1017, |
|
"end": 1036, |
|
"text": "Trask et al. (2015)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We show that tokenization that pays attention to Arabic's morphology can create better traditional and contextual embedding models. Breaking words into subwords in Word2Vec not only leads to an increase in performance and reduction in the vocabulary size and hence the model size, but also provides a simple way to produce good out-of-vocabulary representations. We also show the importance of tokenization in BERT where we were able to achieve impressive performance without requiring any pretraining. One possible future work would be to investigate tokenization's effect in other morphologically rich languages such as Hebrew and Turkish and see if our results can be generalized to other highly inflected languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://github.com/attardi/wikiextractor", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All transliterations in the paper follow the International Journal of Middle East Studies (IJMES) transliteration system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Farasa: A fast and furious segmenter for Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hamdy", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for Arabic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11-16, San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Deep learning for arabic nlp: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Mahmoud", |
|
"middle": [], |
|
"last": "Al-Ayyoub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aya", |
|
"middle": [], |
|
"last": "Nuseir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kholoud", |
|
"middle": [], |
|
"last": "Alsmearat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaser", |
|
"middle": [], |
|
"last": "Jararweh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brij", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of computational science", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "522--531", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mahmoud Al-Ayyoub, Aya Nuseir, Kholoud Alsmearat, Yaser Jararweh, and Brij Gupta. 2018. Deep learning for arabic nlp: A survey. Journal of computational science, 26:522-531.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "LABR: A large scale Arabic book reviews dataset", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Aly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Atiya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "494--498", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Aly and Amir Atiya. 2013. LABR: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 494-498, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "AraBERT: Transformer-based model for Arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15, Marseille, France, May. European Language Resource Association.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Longformer: The long-document transformer", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.05150" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Anersys 2.0: Conquering the ner task for the arabic language by combining the maximum entropy with pos-tag information", |
|
"authors": [ |
|
{ |
|
"first": "Yassine", |
|
"middle": [], |
|
"last": "Benajiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paolo", |
|
"middle": [], |
|
"last": "Rosso", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "IICAI", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1814--1823", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yassine Benajiba and Paolo Rosso. 2007. Anersys 2.0: Conquering the ner task for the arabic language by combining the maximum entropy with pos-tag information. In IICAI, pages 1814-1823.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.04606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "On the properties of neural machine translation: Encoder-decoder approaches", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "103--111", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111, Doha, Qatar, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Multi-channel embedding convolutional neural network model for arabic sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Abdelghani", |
|
"middle": [], |
|
"last": "Dahou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shengwu", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junwei", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [ |
|
"Abd" |
|
], |
|
"last": "Elaziz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ACM Trans. Asian Low-Resour. Lang. Inf. Process", |
|
"volume": "18", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abdelghani Dahou, Shengwu Xiong, Junwei Zhou, and Mohamed Abd Elaziz. 2019. Multi-channel embedding convolutional neural network model for arabic sentiment classification. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 18(4), May.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Constrained sequence-to-sequence Semitic root extraction for enriching word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "El-Kishky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingyu", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aseel", |
|
"middle": [], |
|
"last": "Addawood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nahil", |
|
"middle": [], |
|
"last": "Sobh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clare", |
|
"middle": [], |
|
"last": "Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "88--96", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed El-Kishky, Xingyu Fu, Aseel Addawood, Nahil Sobh, Clare Voss, and Jiawei Han. 2019. Constrained sequence-to-sequence Semitic root extraction for enriching word embeddings. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 88-96, Florence, Italy, August. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Hotel Arabic-Reviews Dataset Construction for Sentiment Analysis Applications", |
|
"authors": [ |
|
{ |
|
"first": "Ashraf", |
|
"middle": [], |
|
"last": "Elnagar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasmin", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Khalifa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anas", |
|
"middle": [], |
|
"last": "Einea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashraf Elnagar, Yasmin S. Khalifa, and Anas Einea, 2018. Hotel Arabic-Reviews Dataset Construction for Senti- ment Analysis Applications, pages 35-52. Springer International Publishing, Cham.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Methodical evaluation of Arabic word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Mohammed", |
|
"middle": [], |
|
"last": "Elrazzaz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shady", |
|
"middle": [], |
|
"last": "Elbassuoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khaled", |
|
"middle": [], |
|
"last": "Shaban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chadi", |
|
"middle": [], |
|
"last": "Helwe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "454--458", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammed Elrazzaz, Shady Elbassuoni, Khaled Shaban, and Chadi Helwe. 2017. Methodical evaluation of Arabic word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 454-458, Vancouver, Canada, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Arabic natural language processing: Challenges and solutions", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farghaly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khaled", |
|
"middle": [], |
|
"last": "Shaalan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)", |
|
"volume": "8", |
|
"issue": "4", |
|
"pages": "1--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Farghaly and Khaled Shaalan. 2009. Arabic natural language processing: Challenges and solutions. ACM Transactions on Asian Language Information Processing (TALIP), 8(4):1-22.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A new algorithm for data compression", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Gage", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "C Users Journal", |
|
"volume": "12", |
|
"issue": "2", |
|
"pages": "23--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Learning word vectors for 157 languages", |
|
"authors": [ |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prakhar", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vec- tors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{

"first": "Zellig",

"middle": [

"S"

],

"last": "Harris",

"suffix": ""

}
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Bag of tricks for efficient text classification", |
|
"authors": [ |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.01759" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "\u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Muller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoann", |
|
"middle": [], |
|
"last": "Dupont", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u00c9ric", |
|
"middle": [], |
|
"last": "Villemonte de la Clergerie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Ortiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric Villemonte de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. Camembert: a tasty french language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word represen- tation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2227--2237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Software Framework for Topic Modelling with Large Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Radim", |
|
"middle": [], |
|
"last": "\u0158eh\u016f\u0159ek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Sojka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In Proceed- ings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45-50, Valletta, Malta, May. ELRA. http://is.muni.cz/publication/884893/en.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Morphological word embedding for arabic", |
|
"authors": [ |
|
{ |
|
"first": "Rana", |
|
"middle": [ |
|
"Aref" |
|
], |
|
"last": "Salama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdou", |
|
"middle": [], |
|
"last": "Youssef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aly", |
|
"middle": [], |
|
"last": "Fahmy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Procedia computer science", |
|
"volume": "142", |
|
"issue": "", |
|
"pages": "83--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rana Aref Salama, Abdou Youssef, and Aly Fahmy. 2018. Morphological word embedding for arabic. Procedia computer science, 142:83-93.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.07909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Bpe and charcnns for translation of morphology: A cross-lingual comparison and analysis", |
|
"authors": [ |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1809.01301" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pamela Shapiro and Kevin Duh. 2018a. Bpe and charcnns for translation of morphology: A cross-lingual compar- ison and analysis. arXiv preprint arXiv:1809.01301.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Morphological word embeddings for Arabic neural machine translation in low-resource settings", |
|
"authors": [ |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Duh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Second Workshop on Subword/Character LEvel Models", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pamela Shapiro and Kevin Duh. 2018b. Morphological word embeddings for Arabic neural machine translation in low-resource settings. In Proceedings of the Second Workshop on Subword/Character LEvel Models, pages 1-11, New Orleans, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Aravec: A set of arabic word embedding models for use in arabic nlp", |
|
"authors": [ |
|
{ |
|
"first": "Abu Bakr", |
|
"middle": [], |
|
"last": "Soliman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Eissa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samhaa", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "El-Beltagy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Procedia Computer Science", |
|
"volume": "117", |
|
"issue": "", |
|
"pages": "256--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abu Bakr Soliman, Kareem Eissa, and Samhaa R El-Beltagy. 2017. Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Energy and policy considerations for deep learning in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Emma", |
|
"middle": [], |
|
"last": "Strubell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ananya", |
|
"middle": [], |
|
"last": "Ganesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3645--3650", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The representation of some phrases in arabic word semantic vector spaces", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Brychc\u00edn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Open Computer Science", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "182--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Taylor and Tom\u00e1\u0161 Brychc\u00edn. 2018. The representation of some phrases in arabic word semantic vector spaces. Open Computer Science, 8(1):182-193.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "sense2vec-a fast and accurate method for word sense disambiguation in neural word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Trask", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Michalak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.06388" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec-a fast and accurate method for word sense disam- biguation in neural word embeddings. arXiv preprint arXiv:1511.06388.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kaiser", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Multilingual is not enough: Bert for finnish", |
|
"authors": [ |
|
{ |
|
"first": "Antti", |
|
"middle": [], |
|
"last": "Virtanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenna", |
|
"middle": [], |
|
"last": "Kanerva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rami", |
|
"middle": [], |
|
"last": "Ilo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jouni", |
|
"middle": [], |
|
"last": "Luoma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juhani", |
|
"middle": [], |
|
"last": "Luotolahti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tapio", |
|
"middle": [], |
|
"last": "Salakoski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, and Sampo Pyysalo. 2019. Multilingual is not enough: Bert for finnish.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "BERTje: A Dutch BERT Model", |
|
"authors": [ |
|
{ |
|
"first": "Wietse", |
|
"middle": [], |
|
"last": "de Vries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "van Cranenburgh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Bisazza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Caselli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gertjan", |
|
"middle": [], |
|
"last": "van Noord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1912.09582" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT Model. arXiv:1912.09582 [cs], December.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The training and inference stages of our two proposed strategies.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "\u202b\u0629\u202c \u202b\u0631\u0627\u0626\u062d\u202c \u202b\u0627\u0644\u202c \u202b\u0648\u202c \u202b\u0644\u0648\u0646\u202c \u202b\u0627\u0644\u202c \u202b\u0629\u202c \u202b\u0639\u062f\u064a\u0645\u202c \u202b\u0629\u202c \u202b\u0634\u0641\u0627\u0641\u202c \u202b\u0629\u202c \u202b\u0645\u0627\u062f\u202c \u202b\u0647\u0648\u202c \u202b\u0645\u0627\u0621\u202c \u202b\u0627\u0644\u202c al m\u0101\u02be h\u016b m\u0101d at shf\u0101f at al l\u016bn w al r\u0101\u02be\u1e25 at The effect of word segmentation on the resulting vectors. In the top we have representations of words and in the bottom we have representations of subwords. Verb Forms go go, went, going, gone, goes", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "The different segmentation approaches: BERT (default tokenizer), CharBERT, and MorphBERT.", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Verb forms in English and Arabic." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Dimension</td><td>50</td><td>100</td><td>200</td></tr><tr><td>Skipgram</td><td colspan=\"2\">Base model Our model (\u03b1=0.3) 5.36% 8.15% 5.76% 8.33%</td><td>8.88% 9.40%</td></tr><tr><td>CBOW</td><td colspan=\"3\">Base model Our model (\u03b1=0.3) 6.31% 8.92% 10.10% 6.00% 8.72% 10.05%</td></tr></table>", |
|
"num": null, |
|
"text": "Top-1 accuracy in the word analogies test.2. HARD: The Hotel Arabic Reviews Dataset" |
|
} |
|
} |
|
} |
|
} |