|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T05:58:36.790506Z" |
|
}, |
|
"title": "ARAGPT2: Pre-Trained Transformer for Arabic Language Generation", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "American University of Beirut", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "American University of Beirut", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "American University of Beirut", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recently, pre-trained transformer-based architectures have proven to be very efficient at language modeling and understanding, given that they are trained on a large enough corpus. Applications in language generation for Arabic are still lagging in comparison to other NLP advances primarily due to the lack of advanced Arabic language generation models. In this paper, we develop the first advanced Arabic language generation model, AraGPT2, trained from scratch on a large Arabic corpus of internet text and news articles. Our largest model, ARAGPT2-MEGA, has 1.46 billion parameters, which makes it the largest Arabic language model available. The MEGA model was evaluated and showed success on different tasks including synthetic news generation, and zero-shot question answering. For text generation, our best model achieves a perplexity of 29.8 on held-out Wikipedia articles. A study conducted with human evaluators showed the significant success of AraGPT2-mega in generating news articles that are difficult to distinguish from articles written by humans. We thus develop and release an automatic discriminator model with a 98% percent accuracy in detecting model-generated text. The models are also publicly available 1 , hoping to encourage new research directions and applications for Arabic NLP.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recently, pre-trained transformer-based architectures have proven to be very efficient at language modeling and understanding, given that they are trained on a large enough corpus. Applications in language generation for Arabic are still lagging in comparison to other NLP advances primarily due to the lack of advanced Arabic language generation models. In this paper, we develop the first advanced Arabic language generation model, AraGPT2, trained from scratch on a large Arabic corpus of internet text and news articles. Our largest model, ARAGPT2-MEGA, has 1.46 billion parameters, which makes it the largest Arabic language model available. The MEGA model was evaluated and showed success on different tasks including synthetic news generation, and zero-shot question answering. For text generation, our best model achieves a perplexity of 29.8 on held-out Wikipedia articles. A study conducted with human evaluators showed the significant success of AraGPT2-mega in generating news articles that are difficult to distinguish from articles written by humans. We thus develop and release an automatic discriminator model with a 98% percent accuracy in detecting model-generated text. The models are also publicly available 1 , hoping to encourage new research directions and applications for Arabic NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Few years ago, Natural language processing (NLP) was revolutionized with the introduction of multi-head self-attention transformer architecture (Vaswani et al., 2017) . The transformer achieved superior performance compared to recurrent neural networks several NLP tasks including machine translation, sentence classification with BERT (Devlin et al., 2019) , and ELECTRA (Clark et al., 2020b) , and sentence completion with GPT-2 , GROVER (Zellers et al., 2019) , and CTRL (Keskar et al., 2019) . Recent works have shown that larger models pre-trained on larger datasets can further improve performance i.e. RoBERTa (Liu et al., 2019) , and XLM-R (Conneau et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 166, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 336, |
|
"end": 357, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 393, |
|
"text": "(Clark et al., 2020b)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 462, |
|
"text": "(Zellers et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 495, |
|
"text": "(Keskar et al., 2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 635, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 670, |
|
"text": "(Conneau et al., 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "On the other hand, work on Arabic language modeling has mostly targeted natural language understanding (NLU) by pre-training transformerbased models using the Masked Language Modeling (MLM) task i.e. ARABERT (Antoun et al., 2020a) . In contrast, Arabic text generation or causal language modeling hasn't received much attention. Few works such as hULMonA (ElJundi et al., 2019) used next word prediction as a pretraining task in for transfer learning in Arabic text classification. (Khooli, 2020) and (Doiron, 2020) leveraged the existing GPT2 English model and adapted it for Arabic using text from the Arabic Wikipedia dumps, which is sub-optimal for Arabic.", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 230, |
|
"text": "(Antoun et al., 2020a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 377, |
|
"text": "(ElJundi et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 482, |
|
"end": 496, |
|
"text": "(Khooli, 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 515, |
|
"text": "(Doiron, 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, the first advanced language generation models built from the grounds up on Arabic language have been developed. The process of pretraining ARAGPT2, a GPT-2 transformer model for the Arabic language is described. The model comes in 4 size variants: base (135M 2 ), medium (370M), large (792M) and mega (1.46B 3 ), which allows the exploration of ARAGPT2 in multiple applications with different data availability and computational constraints. The perplexity measure is used to automatically evaluate ARAGPT2. Furthermore, a human-based evaluation is provided, which highlights the ability of ARAGPT2 to deceive human evaluators. Finally, an ARAELEC-TRA (Antoun et al., 2020b) based detector is devel-oped and released. It is able to consistently identify news articles written by ARAGPT2. Making such powerful models publicly available to the Arabic research community enables research in rising Arabic NLP fields i.e Conversational Agents (Naous et al., 2020) , Detection of Automatic News Generation Detection (Harrag et al., 2020) ...", |
|
"cite_spans": [ |
|
{ |
|
"start": 667, |
|
"end": 689, |
|
"text": "(Antoun et al., 2020b)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 974, |
|
"text": "(Naous et al., 2020)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1026, |
|
"end": 1047, |
|
"text": "(Harrag et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions can be summarized as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 A methodology to pre-train a billion-size GPT2 model on a large-scale Arabic corpus. \u2022 An automatic discriminator that achieves a 98% accuracy in detecting model-generated synthetic text. \u2022 The four variants of ARAGPT2 are released on popular NLP libraries, along with the automatic ARAGPT2 discriminator. The rest of the paper is structured as follows. Section 2 provides a concise review of previous literature on Arabic language modeling. Section 3 details the methodology used in developing ARAGPT2. Section 4 describes the experimental setup, evaluation procedures and results. In addition, the approach to build a machine-generated text discriminator is presented in Section 5. Finally, a conclusion of the work and implications are mentioned in Section 6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "GPT-1 (Radford et al., 2018) showed that Causal Language Modeling 4 is an effective pre-training technique that improves a model's generalization capabilities. GPT-2 then showed that using a larger model trained on a larger dataset surpasses the state-of-the-art of many tasks in a zero-shot setting, where a model solves a task without receiving any training on that task. Taking the scaling approach to the extreme led to the creation of GPT-3 (Brown et al., 2020) , with 175 billion parameter model, also trained with CLM using terabytes of internet text. GPT-3 explored the idea of few-shot learning, where a model is given examples from a new task as a text prompt, which unlocks new capabilities at test time. It was later shown that a carefully designed GPT-3 prompt allows the model to generate website designs, scramble/unscramble words...", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 28, |
|
"text": "(Radford et al., 2018)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 466, |
|
"text": "GPT-3 (Brown et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English and Non-Arabic Language modeling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The advantage of scaling model sizes and training datasets comes with drawbacks, particularly the high computational cost, in addition to the huge corpora required for pre-training. It was estimated that training GPT-2 and GPT-3 costs $43K and $4.6M respectively, without any hyper-parameter tuning. These drawbacks restricted the availability of large pre-trained models to English mainly and a handful of other languages i.e. ruGPT3 5 for Russian, and Chinese 1.5B GPT2 (Zhang, 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 485, |
|
"text": "(Zhang, 2019)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "English and Non-Arabic Language modeling", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Work on Arabic causal language modeling has been mostly limited to automatic speech recognition (ASR) systems. Since the language modeling component in ASR systems is a key module that ensures that the output text adheres with the statistical structure of language. Work on Arabic language models in ASR systems has mostly relied on Ngrams language models. (Ali et al., 2014) built an N-grams language model (LM) using GALE training data transcripts of 1.4M words. More recent work in Arabic ASR implemented a recurrent neural network as an LM, using 130M tokens, and achieved a perplexity of 481 compared to 436 for a 4-gram LM (Khurana et al., 2019) . Hamed et al. (2017) developed a code-switched Arabic-English language model using tri-gram LM and provided performance superior compared to two separate monolingual LMs. The code-switched LM was trained on 2.3M sentences or 13M words and achieved a perplexity of 275. With the rising popularity of transfer learning in NLP, Arabic CLM was used as a pre-training task for an Arabic universal LM, hULMonA (ElJundi et al., 2019) . The model was then fine-tuned on different downstream text classification tasks. hUL-MonA is a 3 stack of AWD-LSTM 6 layers (Howard and Ruder, 2018) , trained on 600K Wikipedia article pre-segmented using the MADAMIRA Arabic morphological analyzer and disambiguator (Pasha et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 375, |
|
"text": "(Ali et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 651, |
|
"text": "(Khurana et al., 2019)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1057, |
|
"end": 1079, |
|
"text": "(ElJundi et al., 2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1206, |
|
"end": 1230, |
|
"text": "(Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1348, |
|
"end": 1368, |
|
"text": "(Pasha et al., 2014)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Language modeling", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Masked Language Modeling (MLM) has been useful as a pre-training task for several Arabic NLU models. Masked Language Modeling (MLM) is a slightly different objective than CLM that requires a system to predict a masked word within a sequence compared to CLM which predicts the missing word at the end of a sequence. MLM was used in models such as ARABERT (Antoun et al., 2020a) , Arabic-BERT (Safaya et al., 2020) , Arabic-ALBERT 7 , GigaBERT (Lan et al., 2020) , MarBERT (Abdul-Mageed et al., 2020) , and QARiB (Chowdhury et al., 2020) . Only two works have attempted to create an Arabic transformer causal language model. Khooli (2020) and Doiron (2020) finetuned the OpenAI GPT2-base model on Arabic Wikipedia, which was mainly trained on English text. Doiron (2020) also continued training on a collection of dialectal Arabic datasets, in order to create a dialectal Arabic GPT2. While this approach has shown the capability to generate Arabic text, it is sub-optimal for Arabic and is useful in cases where the training data is scarce.", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 376, |
|
"text": "(Antoun et al., 2020a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 412, |
|
"text": "(Safaya et al., 2020)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 460, |
|
"text": "(Lan et al., 2020)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 498, |
|
"text": "MarBERT (Abdul-Mageed et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 535, |
|
"text": "(Chowdhury et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 636, |
|
"text": "Khooli (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 654, |
|
"text": "Doiron (2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Language modeling", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Our proposed model is hence, the first Arabic transformer-based causal language model trained from scratch on the largest Arabic corpora available at the time of writing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Arabic Language modeling", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "ARAGPT2 is a stacked transformer-decoder model trained using the causal language modeling objective. The model is trained on 77GB of Arabic text. ARAGPT2 comes in four variants as detailed in Table 1 , with the smallest model, base, having the same size as ARABERT-base which makes it accessible for the larger part of researchers. Larger model variants (medium, large, xlarge) offer improved performance but are harder to fine-tune and computationally more expensive. The ARAGPT2detector is based on the pre-trained ARAELEC-TRA model fine-tuned on the synthetically generated dataset. More details on the training procedure and dataset are provided in the following sections.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 199, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "ARAGPT2: Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ARAGPT2 closely follows GPT2's variant architectures and training procedure. Table 1 shows the different model sizes, number of heads, number of layers, parameter count, and optimizer used for each model variant. All models are trained with context sizes of 1024 tokens. The LAMB (You et al., 2019) optimizer is used in the base and medium models only, since it allows using large batch sizes without worrying about training divergence. Using LAMB and Adam (Kingma and Ba, 2014) to train the large and mega variants isn't possible on TPUv3 due to the optimizer's high memory requirements, since memory cost scales 7 https://github.com/KUIS-AI-Lab/Arabic-ALBERT/ linearly with the number of parameters. The limitations were overcome by following the training procedure of the GROVER model (Zellers et al., 2019) by using the Adafactor optimizer (Shazeer and Stern, 2018) , which reduces memory requirements by factoring the second-order momentum parameters into a tensor product of two vectors. The GROVER architecture was also used instead of GPT2's, in which the layer normalization order in the transformer block is changed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 298, |
|
"text": "(You et al., 2019)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 810, |
|
"text": "(Zellers et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 844, |
|
"end": 869, |
|
"text": "(Shazeer and Stern, 2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 84, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "3.1" |
|
}, |
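The following is a minimal sketch, not the authors' released training code, of the optimizer swap described above: replacing Adam/LAMB with Adafactor when the optimizer's second-moment state is too memory-heavy. The stand-in model, learning rate, and dummy batch are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's training code): Adafactor as a
# memory-light replacement for Adam/LAMB when optimizer state must stay small.
import torch
from transformers.optimization import Adafactor

# Stand-in module for a GPT-2/GROVER-style decoder stack (illustrative only).
model = torch.nn.Linear(1024, 1024)

# Adafactor factors the second-moment statistics of each weight matrix into a
# row vector and a column vector, so optimizer state grows roughly with
# O(rows + cols) per matrix instead of O(rows * cols) as in Adam.
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,                # explicit LR, hence relative_step=False
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)

x = torch.randn(8, 1024)          # dummy batch
loss = model(x).pow(2).mean()     # dummy loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```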
|
{ |
|
"text": "The training dataset is a collection of the publicly available Arabic corpora listed below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 The unshuffled OSCAR corpus (Ortiz Su\u00e1rez et al., 2020 (Zeroual et al., 2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 56, |
|
"text": "(Ortiz Su\u00e1rez et al., 2020", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 57, |
|
"end": 79, |
|
"text": "(Zeroual et al., 2019)", |
|
"ref_id": "BIBREF40" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 News articles provided by As-safir newspaper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Preprocessing First, the corpus was filtered by removing short documents with less than 3 sentences, and documents with more than 20% repeated sentences. URLs, emails, and user mentions were also replaced with special tokens. All diacritics, and elongations were removed as well, while punctuation and non-alphabetic characters were padded with white-spaces. Moreover, the '<|endoftext|>' token is appended at the end of each document. The total dataset size is 77GB with 8.8B words 8 . The majority of the training data is comprised of Arabic news article, which is mostly written in MSA. The corpus also contains a small set of English words i.e. named entities, which are kept without lower-casing. Subsequently, a Byte-level byte-pair-encoding (BPE) tokenizer is trained with 64000 vocabulary size on all of our preprocessed dataset, using the optimized BPE implementation from the HuggingFace library (Wolf et al., 2020) . Finally, the BPE encoding is applied on the preprocessed dataset, which results in a total of 9.7M training examples with 1024 sub-word tokens each. All models were trained on a TPUv3-128 slice 9 with different batch sizes and the total number of steps as shown in Table 2 . Base and mega were trained for approximately 20 epochs, while medium and large were trained for 10 and 6 epochs respectively, due to TPU access limitations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 906, |
|
"end": 925, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1193, |
|
"end": 1200, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3.2" |
|
}, |
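Below is a minimal sketch of training the byte-level BPE tokenizer described above with the HuggingFace tokenizers library. The corpus file paths and output directory are hypothetical placeholders; the 64000 vocabulary size and the '<|endoftext|>' special token follow the paper.

```python
# Minimal sketch (assumed file layout): train a byte-level BPE tokenizer on the
# preprocessed corpus, as described in the paper.
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus_part1.txt", "corpus_part2.txt"],  # hypothetical corpus shards
    vocab_size=64000,                                # vocabulary size from the paper
    min_frequency=2,
    special_tokens=["<|endoftext|>"],                # document separator token
)
tokenizer.save_model("aragpt2-tokenizer")            # writes vocab.json and merges.txt

# Encoding a document yields the sub-word ids that are then packed into
# fixed-length (1024-token) training examples.
ids = tokenizer.encode("example preprocessed document").ids
```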
|
{ |
|
"text": "For the validation dataset, the Arabic Wikipedia articles that were published after August 2020 were used, since older articles were included in the September Wikipedia dump. The perplexity score was selected as a numerical evaluation metric since it measures the degree of 'uncertainty' a model has assigning probabilities to the test text. Table 2 shows that, unsurprisingly, validation perplexity keeps improving with larger model sizes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 349, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Numerical Evaluation", |
|
"sec_num": "4.2" |
|
}, |
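The following is a minimal sketch of the perplexity computation referred to above: the exponential of the mean next-token cross-entropy on held-out text. The checkpoint path is a placeholder, not necessarily the name of a released model.

```python
# Minimal sketch (placeholder checkpoint path): perplexity of a causal LM on a
# held-out passage, computed as exp(mean next-token cross-entropy).
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("path/to/aragpt2")   # hypothetical path
model = AutoModelForCausalLM.from_pretrained("path/to/aragpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean cross-entropy
        # over shifted next-token predictions.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

print(perplexity("held-out Wikipedia passage goes here"))
```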
|
{ |
|
"text": "In fact, the model is still under-fitting the validation set from Wikipedia. The generation capabilities of the different variants of ARAGPT2 is illustrated through the selected examples in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numerical Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "During zero-shot task evaluation, the model is only given a natural language instruction to motivate and ground the task, without any back-propagation happening. The task of searching and finding the best input prompt, also known as \"prompt engineering\", is hard. Since the search space is practically infinite, and the performance is highly sensitive to changes in the prompt. The zero-shot performance of ARAGPT2-Mega is evaluated on two tasks, 9 TPUv3-128 has a total of 2TB of HBM memory with 16GB per core. TPUs were freely provided by the TFRC program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Zero-Shot Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "question answering, and translation. ARAGPT2-MEGA correctly answers 25% of the trivia questions but fails in English-to-Arabic translation. Details on the datasets, prompts, and evaluation are presented in Appendix B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Zero-Shot Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Machine-Generated Text", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating the Human Ability to Detect", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "The gold standard for evaluating a model's language generation capability is human evaluation. We presented 74 Arabic-speaking subjects from various social media with a survey designed to test the average-human ability to distinguish between machine-generated and human-written text and thus testing the model's ability to deceive a human subject. The survey had a total of 8 news articles, 4 machine-generated using ARAGPT2-Mega and 4 written by humans. Each category was split into long and short text, which allows us to test the long-term generation coherency. In addition, the human evaluators are allowed to add justification for each answer. The survey results, Figure 1 , show that ARAGPT2-Mega successfully fooled approx. 60% of the respondents, with longer passages having a higher error rate than short passages. In the provided explanations, some subjects relied on punctuation mistakes, coherence, and repetition issues, while others spotted factual inaccuracies. However, the results also show that humans were misclassifying human-written 50% the time (chance level performance), while also citing factual inconsistencies, grammatical errors, and unusual writing styles 10 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 669, |
|
"end": 677, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating the Human Ability to Detect", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "These surprising results show that ARAGPT2 can accurately generate human-like text while maintaining grammatical correctness that can fool the average reader. It should be noted that there exist some tools, i.e. the Giant Language model Test Room (GLTR) (Gehrmann et al., 2019) , that allows humans to study the statistical distributional differences in text generated by GPT2-based models and human-written text. Figure 5 in Appendix C displays a visualization of token-level information created by GLTR with text generated by ARAGPT2 and on human-written articles.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 277, |
|
"text": "(Gehrmann et al., 2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 422, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating the Human Ability to Detect", |
|
"sec_num": "4.4" |
|
}, |
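Below is a minimal sketch of the GLTR-style statistic mentioned above: the rank of each observed token in a causal language model's predicted distribution, where machine-generated text tends to concentrate in low ranks (GLTR's green/yellow bands). The checkpoint path is a placeholder.

```python
# Minimal sketch (placeholder checkpoint path): per-token rank under a causal LM,
# the statistic that GLTR visualizes with colored bands.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("path/to/aragpt2-base")  # hypothetical path
model = AutoModelForCausalLM.from_pretrained("path/to/aragpt2-base")
model.eval()

def token_ranks(text: str) -> list:
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits[0, :-1]           # predictions for positions 1..n-1
    targets = ids[0, 1:]                             # tokens that actually follow
    order = logits.argsort(dim=-1, descending=True)  # vocabulary sorted by score
    # Rank 0 means the observed token was the model's top prediction.
    return [(order[i] == targets[i]).nonzero().item() for i in range(targets.size(0))]
```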
|
{ |
|
"text": "Large language models could have a significant societal impact if used for malicious purposes, such as automating the generation of misleading news articles, fake reviews, or high-quality phishing messages. The survey in Section 4.4, showcases the failure of the average-human to consistently detect machine-generated text, which motivates the problem of automatic detection of ARAGPT2generated text. Related work on the detection of machine-generated text by Jawahar et al. (2020) indicates that automatic detectors like the GROVERdetector (Zellers et al., 2019) and the RoBERTAdetector (Solaiman et al., 2019) have better success than human evaluators. In addition, previous work on detecting Arabic GPT2 (Khooli, 2020) auto-generated tweets, achieved 98.7% accuracy, by fine-tuning an ARABERTv0.1 (Antoun et al., 2020a) based classifier (Harrag et al., 2020) . Our detector is based on the pre-trained ARA-ELECTRA (Antoun et al., 2020b) model, which we fine-tuned on a dataset created by combining 1500 human-written news articles, with 1500 ar-ticles generated by ARAGPT2-Mega. For article generation, we only provided the model with a short prompt of 25 words. We created two versions of the dataset, one with short texts (150 tokens) and one with long texts (500 tokens), in order to evaluate the impact of the text's length.", |
|
"cite_spans": [ |
|
{ |
|
"start": 460, |
|
"end": 481, |
|
"text": "Jawahar et al. (2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 541, |
|
"end": 563, |
|
"text": "(Zellers et al., 2019)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 611, |
|
"text": "(Solaiman et al., 2019)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 721, |
|
"text": "(Khooli, 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 800, |
|
"end": 822, |
|
"text": "(Antoun et al., 2020a)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 861, |
|
"text": "(Harrag et al., 2020)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 939, |
|
"text": "(Antoun et al., 2020b)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Automatic Detection of Machine Generated Text", |
|
"sec_num": "5" |
|
}, |
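The following is a minimal fine-tuning sketch for such a detector, framed as binary sequence classification with the HuggingFace Trainer. The checkpoint id, hyper-parameters, and the tiny in-memory dataset are illustrative assumptions rather than the exact released setup.

```python
# Minimal sketch (assumed checkpoint id and hyper-parameters): fine-tune an
# ELECTRA-style discriminator to classify human vs. machine-generated articles.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "aubmindlab/araelectra-base-discriminator"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Placeholder pairs; label 0 = human-written, 1 = machine-generated.
pairs = [("human-written news article ...", 0), ("machine-generated article ...", 1)]
dataset = Dataset.from_dict(
    {"text": [t for t, _ in pairs], "label": [l for _, l in pairs]}
).map(lambda batch: tokenizer(batch["text"], truncation=True,
                              padding="max_length", max_length=512),
      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aragpt2-detector",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset,
)
trainer.train()
```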
|
{ |
|
"text": "Fine-tuned ARAELECTRA achieves 98.7% and 94.9% F1-score on long and short text respectively 11 , which indicates that longer text is easier to detect than short text. The high scores achieved by ARAELECTRA can be explained by the fact that machine-generated text tends to be more predictable compared to human-written text (see Appendix C, Fig. 5 ). The difference in text predictability can be easily exploited by a language model to detect machine-generated text. Another contributing factor is that ARAELECTRA was pre-trained on the exact same dataset as ARAGPT2.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 346, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Automatic Detection of Machine Generated Text", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "ARAGPT2 is the first advanced Arabic language generation model based on the transformer architecture. The model was trained on the largest publicly available collection of filtered Arabic corpora. The model was evaluated using the perplexity measure which measures how well a probability model predicts a sample. Results show that ARAGPT2 is able to produce high quality Arabic text that is coherent, grammatically correct and syntactically sound.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "It is important to note that ARAGPT2, like many ML models, has ethical implications and can be used maliciously i.e. automatic fake news generation, modeling the dataset inherent biases... To help detect misuse of the model, a detector model that is tasked to detect output generated by ARAGPT2 is also released. More importantly, our hopes that publicly releasing ARAGPT2 will open up doors for new research possibilities for the Arabic NLP community. In zero-shot factoid question answering, the information contained within the language model can be queried. The model is tested on the Arabic examples from the TyDiQA (Clark et al., 2020a) validation dataset (921 examples), and on the test set of ARCD (Mozannar et al., 2019) (702 examples) . Hence, the model os provided with the following prompt: \"Answer the following question:\" -\" \" , followed by the question, then the phrase \"The answer is\" -\" \". It is also possible to append the phrase \"in the year\" -\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 621, |
|
"end": 642, |
|
"text": "(Clark et al., 2020a)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 744, |
|
"text": "(Mozannar et al., 2019) (702 examples)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "\", if the expected answer is a year, as shown in Table 3 . The answer length is set to be the same as the gold answer length, and a repetition penalty is applied as in CTRL (Keskar et al., 2019) , which penalizes the probability scores of previously generated tokens. A 'no repeat tri-gram' strategy that inhibits the model from generating the same tri-gram more than once has also been employed. Note that the context passage is not provided, which forces the model to rely only on the information gained during pretraining.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 194, |
|
"text": "CTRL (Keskar et al., 2019)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
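Below is a minimal sketch of the constrained decoding described above (answer length fixed to the gold length, a CTRL-style repetition penalty, and a no-repeat tri-gram constraint) using the generic HuggingFace generate() API. The checkpoint path, prompt string, and penalty value are placeholders; the actual prompts are in Arabic.

```python
# Minimal sketch (placeholder checkpoint and prompt): greedy decoding with a
# repetition penalty and a no-repeat tri-gram constraint, as described above.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("path/to/aragpt2-mega")  # hypothetical path
model = AutoModelForCausalLM.from_pretrained("path/to/aragpt2-mega")

prompt = "Answer the following question: ... The answer is"  # Arabic in practice
inputs = tokenizer(prompt, return_tensors="pt")
gold_answer_len = 5  # answer length fixed to the gold answer length

output = model.generate(
    **inputs,
    max_new_tokens=gold_answer_len,
    repetition_penalty=1.3,   # CTRL-style penalty on previously generated tokens
    no_repeat_ngram_size=3,   # forbid repeating any tri-gram
    do_sample=False,          # greedy decoding
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```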
|
{ |
|
"text": "The model achieves a 3.93% exact-match score and an F1-score of 14.51% on TyDiQA, and 4.07% exact-match score and 13.88% F1-score on ARCD. Since exact-match and F1-score misses answers that are correct but are worded differently (as shown in Table 4 ). A subset of 500 answers from the best TyDiQA run is selected, and scored manually. Manual scoring shows that ARAGPT2 correctly answered 24.6% of the questions. The model was particularly good in countries and capitals question, year of birth and death, and some geography. Yet it was failing mostly on questions about quantities i.e. population counts, area, age... The predefined answer length negatively affected the generated answers in some cases, which is a limitation of the current approach. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 249, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A experiments has also been conducted to test the translation capability of ARAGPT2 by appending the prompt \"What is the translation of this sentence ?:\" -\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B.2 Translation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\" to the sentence from the source language, in order to induce the translation behavior of the model. We then apply greedy decoding to get the generated target sentence. Evaluation is performed on 5000 randomly selected pairs from the English-Arabic Tatoeba (Tiedemann, 2012) dataset. The model achieved only 1.32 BLEU score 12 . The low score is due to the scarce representation of English words in the vocabulary, since most words were split into single characters. Additionally, given that the prompt design greatly affects the model's zero-shot performance, our prompt design might have been suboptimal. Nevertheless, this negative result encourages research into prompt engineering for Arabic language models, which we leave as future work. Figure 5 : It is clear that the machine generated text in (a) is mostly green and yellow highlighted, while in the human-written text, (b), an increase in red and purple highlighted words can be noticed. P.S.: We use ARAGPT2base as the backend model in GLTR", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 275, |
|
"text": "(Tiedemann, 2012)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 746, |
|
"end": 754, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B.2 Translation", |
|
"sec_num": null |
|
}, |
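The following is a minimal sketch of the BLEU evaluation with sacrebleu (Post, 2018). The hypothesis and reference strings are placeholders standing in for the 5000 Tatoeba pairs.

```python
# Minimal sketch (placeholder sentences): corpus-level BLEU with sacrebleu.
import sacrebleu

hypotheses = ["model translation of sentence 1", "model translation of sentence 2"]
references = [["reference translation of sentence 1",
               "reference translation of sentence 2"]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```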
|
{ |
|
"text": "Pretrained variants of ARAGPT2 (base, medium, large, mega) and discriminator are publicly available on github.com/aub-mind/arabert/tree/master/aragpt2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Million Parameters 3 Billion Parameters", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is the regular Language Modeling objective where the model learns the probability of a word given the previous context. The CLM acronym is used to distinguish from masked language modeling (MLM).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/sberbank-ai/ru-gpts/ 6 ASGD Weight-Dropped LSTM", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Word count was done after preprocessing, where white space is inserted before and after punctuations, brackets, numbers... which increased the total word count", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Survey results are available on our GitHub repository.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The trained model will be publicly available in our repository", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using the sacrebleu scorer(Post, 2018)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported by the University Research Board (URB) at the American University of Beirut (AUB), and by the TFRC program for providing free access to cloud TPUs. Many thanks to As-Safir newspaper for the data access, and also thanks to Nick Doiron for the insightful discussions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A Generated Samples from ARAGPT2 : Random unseen context about coronavirus vaccine(top). Followed by two generated samples bu ARAGPT2-mega. Generated text 1 (top p = 0.95), Generated text 2 (top p = 1) Figure 3 : Random unseen contexts about children stories. Followed by a generated sample by ARAGPT2-mega with top p = 0.95", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 210, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Toward micro-dialect identification in diaglossic and code-switched environments", |
|
"authors": [ |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chiyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdel-Rahim", |
|
"middle": [], |
|
"last": "Elmadany", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lyle", |
|
"middle": [], |
|
"last": "Ungar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.04900" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Abdul-Mageed, Chiyu Zhang, Abdel- Rahim Elmadany, and Lyle Ungar. 2020. To- ward micro-dialect identification in diaglossic and code-switched environments. arXiv preprint arXiv:2010.04900.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A complete kaldi recipe for building arabic speech recognition systems", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Ali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Cardinal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Najim", |
|
"middle": [], |
|
"last": "Dahak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "2014 IEEE spoken language technology workshop (SLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "525--529", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ahmed Ali, Yifan Zhang, Patrick Cardinal, Najim Da- hak, Stephan Vogel, and James Glass. 2014. A com- plete kaldi recipe for building arabic speech recogni- tion systems. In 2014 IEEE spoken language tech- nology workshop (SLT), pages 525-529. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Arabert: Transformer-based model for arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "LREC 2020 Workshop Language Resources and Evaluation Conference 11-16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020a. Arabert: Transformer-based model for arabic lan- guage understanding. In LREC 2020 Workshop Lan- guage Resources and Evaluation Conference 11-16 May 2020, page 9.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Araelectra: Pre-training text discriminators for arabic language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fady", |
|
"middle": [], |
|
"last": "Baly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wissam Antoun, Fady Baly, and Hazem Hajj. 2020b. Araelectra: Pre-training text discriminators for ara- bic language understanding.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Language models are few-shot learners", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Tom B Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Ryder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Subbiah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prafulla", |
|
"middle": [], |
|
"last": "Kaplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Dhariwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Shyam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Sastry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.14165" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Improving Arabic text categorization using transformer training diversification", |
|
"authors": [ |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Shammur Absar Chowdhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jung", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joni", |
|
"middle": [], |
|
"last": "Soon-Gyo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernard", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Salminen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "226--236", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shammur Absar Chowdhury, Ahmed Abdelali, Ka- reem Darwish, Jung Soon-Gyo, Joni Salminen, and Bernard J. Jansen. 2020. Improving Arabic text cate- gorization using transformer training diversification. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 226-236, Barcelona, Spain (Online). Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaly", |
|
"middle": [], |
|
"last": "Nikolaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennimaria", |
|
"middle": [], |
|
"last": "Palomaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. Tydi qa: A benchmark for information-seeking question answering in typo- logically diverse languages. Transactions of the As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Electra: Pretraining text encoders as discriminators rather than generators", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2003.10555" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020b. Electra: Pre- training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised cross-lingual representation learning at scale", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartikay", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vishrav", |
|
"middle": [], |
|
"last": "Chaudhary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Wenzek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francisco", |
|
"middle": [], |
|
"last": "Guzm\u00e1n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1911.02116" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Making a mini gpt-2 with dialect prompts", |
|
"authors": [ |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "Doiron", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nick Doiron. 2020. Making a mini gpt-2 with dialect prompts.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "1.5 billion words arabic corpus", |
|
"authors": [ |
|
{ |
|
"first": "Ibrahim Abu", |
|
"middle": [], |
|
"last": "El-Khair", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.04033" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ibrahim Abu El-Khair. 2016. 1.5 billion words arabic corpus. arXiv preprint arXiv:1611.04033.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Wassim El-Hajj, and Khaled Shaban. 2019. hulmona: The universal language model in arabic", |
|
"authors": [ |
|
{ |
|
"first": "Obeida", |
|
"middle": [], |
|
"last": "Eljundi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wissam", |
|
"middle": [], |
|
"last": "Antoun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nour", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Droubi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "68--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Obeida ElJundi, Wissam Antoun, Nour El Droubi, Hazem Hajj, Wassim El-Hajj, and Khaled Shaban. 2019. hulmona: The universal language model in arabic. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 68-77.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Gltr: Statistical detection and visualization of generated text", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Gehrmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendrik", |
|
"middle": [], |
|
"last": "Strobelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander M", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1906.04043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Gehrmann, Hendrik Strobelt, and Alexan- der M Rush. 2019. Gltr: Statistical detection and visualization of generated text. arXiv preprint arXiv:1906.04043.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Building a first language model for codeswitch arabic-english", |
|
"authors": [], |
|
"year": 2017, |
|
"venue": "Procedia Computer Science", |
|
"volume": "117", |
|
"issue": "", |
|
"pages": "208--216", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Injy Hamed, Mohamed Elmahdy, and Slim Abdennad- her. 2017. Building a first language model for code- switch arabic-english. Procedia Computer Science, 117:208-216.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Bert transformer model for detecting Arabic GPT2 auto-generated tweets", |
|
"authors": [ |
|
{ |
|
"first": "Fouzi", |
|
"middle": [], |
|
"last": "Harrag", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Dabbah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kareem", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Abdelali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "207--214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fouzi Harrag, Maria Dabbah, Kareem Darwish, and Ahmed Abdelali. 2020. Bert transformer model for detecting Arabic GPT2 auto-generated tweets. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 207-214, Barcelona, Spain (Online). Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Universal language model fine-tuning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1801.06146" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Automatic detection of machine generated text: A critical survey", |
|
"authors": [ |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Jawahar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muhammad", |
|
"middle": [], |
|
"last": "Abdul-Mageed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laks", |
|
"middle": [], |
|
"last": "Lakshmanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2296--2309", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganesh Jawahar, Muhammad Abdul-Mageed, and Laks Lakshmanan, V.S. 2020. Automatic detection of machine generated text: A critical survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2296-2309, Barcelona, Spain (Online). International Committee on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Ctrl: A conditional transformer language model for controllable generation", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Nitish Shirish Keskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mccann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Lav", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Varshney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.05858" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "2020. gpt2-small-arabic", |
|
"authors": [ |
|
{ |
|
"first": "Abed", |
|
"middle": [], |
|
"last": "Khooli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abed Khooli. 2020. gpt2-small-arabic.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Darts: Dialectal arabic transcription system", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Khurana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Ali", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.12163" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Khurana, Ahmed Ali, and James Glass. 2019. Darts: Dialectal arabic transcription system. arXiv preprint arXiv:1909.12163.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |

"first": "Diederik", |

"middle": [ |

"P" |

], |

"last": "Kingma", |

"suffix": "" |

}, |

{ |

"first": "Jimmy", |

"middle": [], |

"last": "Ba", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Gigabert: Zero-shot transfer learning from english to arabic", |
|
"authors": [ |
|
{ |
|
"first": "Wuwei", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wuwei Lan, Yang Chen, Wei Xu, and Alan Ritter. 2020. Gigabert: Zero-shot transfer learning from english to arabic. In Proceedings of The 2020 Conference on Empirical Methods on Natural Language Process- ing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1907.11692" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Neural Arabic question answering", |
|
"authors": [ |
|
{ |
|
"first": "Hussein", |
|
"middle": [], |
|
"last": "Mozannar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elie", |
|
"middle": [], |
|
"last": "Maamary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Hajal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--118", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussein Mozannar, Elie Maamary, Karl El Hajal, and Hazem Hajj. 2019. Neural Arabic question answer- ing. In Proceedings of the Fourth Arabic Natu- ral Language Processing Workshop, pages 108-118, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Empathy-driven Arabic conversational chatbot", |
|
"authors": [ |
|
{ |
|
"first": "Tarek", |
|
"middle": [], |
|
"last": "Naous", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hokayem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tarek Naous, Christian Hokayem, and Hazem Hajj. 2020. Empathy-driven Arabic conversational chat- bot. In Proceedings of the Fifth Arabic Natu- ral Language Processing Workshop, pages 58-68, Barcelona, Spain (Online). Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A monolingual approach to contextualized word embeddings for mid-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1703--1714", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020. A monolingual approach to contextual- ized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic", |
|
"authors": [ |
|
{ |
|
"first": "Arfath", |
|
"middle": [], |
|
"last": "Pasha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Al-Badrashiny", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Kholy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramy", |
|
"middle": [], |
|
"last": "Eskander", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manoj", |
|
"middle": [], |
|
"last": "Pooleery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1094--1101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arfath Pasha, Mohamed Al-Badrashiny, Mona Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. MADAMIRA: A fast, comprehensive tool for morphological analysis and disambiguation of Arabic. In Proceedings of the Ninth International Conference on Language Resources and Evalua- tion (LREC'14), pages 1094-1101, Reykjavik, Ice- land. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A call for clarity in reporting BLEU scores", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "186--191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Improving language understanding by generative pre-training", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Karthik", |

"middle": [], |

"last": "Narasimhan", |

"suffix": "" |

}, |

{ |

"first": "Tim", |

"middle": [], |

"last": "Salimans", |

"suffix": "" |

}, |

{ |

"first": "Ilya", |

"middle": [], |

"last": "Sutskever", |

"suffix": "" |

} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Language models are unsupervised multitask learners", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rewon", |
|
"middle": [], |
|
"last": "Child", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dario", |
|
"middle": [], |
|
"last": "Amodei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Ope- nAI.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "KUISAIL at SemEval-2020 task 12: BERT-CNN for offensive speech identification in social media", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Safaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moutasem", |
|
"middle": [], |
|
"last": "Abdullatif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deniz", |
|
"middle": [], |
|
"last": "Yuret", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Fourteenth Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2054--2059", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. KUISAIL at SemEval-2020 task 12: BERT- CNN for offensive speech identification in social me- dia. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2054-2059, Barcelona (online). International Committee for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Adafactor: Adaptive learning rates with sublinear memory cost", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Stern", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.04235" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Release strategies and the social impacts of language models", |
|
"authors": [ |
|
{ |
|
"first": "Irene", |
|
"middle": [], |
|
"last": "Solaiman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miles", |
|
"middle": [], |
|
"last": "Brundage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jack", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Askell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ariel", |
|
"middle": [], |
|
"last": "Herbert-Voss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gretchen", |
|
"middle": [], |
|
"last": "Krueger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jong", |
|
"middle": [ |
|
"Wook" |
|
], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Kreps", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1908.09203" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Rad- ford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, et al. 2019. Release strategies and the so- cial impacts of language models. arXiv preprint arXiv:1908.09203.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Parallel data, tools and interfaces in opus", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rgrg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rgrg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Attention is all you need. Advances in neural information processing systems", |
|
"authors": [ |
|
{ |

"first": "\u0141ukasz", |

"middle": [], |

"last": "Kaiser", |

"suffix": "" |

}, |

{ |

"first": "Illia", |

"middle": [], |

"last": "Polosukhin", |

"suffix": "" |

} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30:5998-6008.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Patrick", |

"middle": [], |

"last": "von Platen", |

"suffix": "" |

}, |

{ |

"first": "Clara", |

"middle": [], |

"last": "Ma", |

"suffix": "" |

}, |

{ |

"first": "Yacine", |

"middle": [], |

"last": "Jernite", |

"suffix": "" |

}, |

{ |

"first": "Julien", |

"middle": [], |

"last": "Plu", |

"suffix": "" |

}, |

{ |

"first": "Canwen", |

"middle": [], |

"last": "Xu", |

"suffix": "" |

}, |

{ |

"first": "Teven", |

"middle": [ |

"Le" |

], |

"last": "Scao", |

"suffix": "" |

}, |

{ |

"first": "Sylvain", |

"middle": [], |

"last": "Gugger", |

"suffix": "" |

}, |

{ |

"first": "Mariama", |

"middle": [], |

"last": "Drame", |

"suffix": "" |

}, |

{ |

"first": "Quentin", |

"middle": [], |

"last": "Lhoest", |

"suffix": "" |

}, |

{ |

"first": "Alexander", |

"middle": [ |

"M" |

], |

"last": "Rush", |

"suffix": "" |

} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Large batch optimization for deep learning: Training bert in 76 minutes", |
|
"authors": [ |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "You", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sashank", |
|
"middle": [], |
|
"last": "Reddi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Hseu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjiv", |
|
"middle": [], |
|
"last": "Kumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srinadh", |
|
"middle": [], |
|
"last": "Bhojanapalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Demmel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kurt", |
|
"middle": [], |
|
"last": "Keutzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cho-Jui", |
|
"middle": [], |
|
"last": "Hsieh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1904.00962" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learn- ing: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Defending against neural fake news", |
|
"authors": [ |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Holtzman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Rashkin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franziska", |
|
"middle": [], |
|
"last": "Roesner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9054--9065", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in neural information processing systems, pages 9054-9065.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Osian: Open source international arabic news corpus-preparation and integration into the clarin-infrastructure", |
|
"authors": [ |
|
{ |
|
"first": "Imad", |
|
"middle": [], |
|
"last": "Zeroual", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Goldhahn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Eckart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdelhak", |
|
"middle": [], |
|
"last": "Lakhouaja", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "175--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Imad Zeroual, Dirk Goldhahn, Thomas Eckart, and Ab- delhak Lakhouaja. 2019. Osian: Open source inter- national arabic news corpus-preparation and integra- tion into the clarin-infrastructure. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 175-182.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Gpt2-ml: Gpt-2 for multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Zhibo", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhibo Zhang. 2019. Gpt2-ml: Gpt-2 for multiple languages.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Survey results showing human error rates on machine generated (left) and human written text (right)" |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Random unseen contexts on miscellaneous topics. Followed by a generated sample by ARAGPT2-mega with top p = 0.95" |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "(a) Text generated by ARAGPT2-Mega. The first sentence is the human-written prompt top k" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"4\">\u202b\ufeb3\ufe8e\ufea7\ufee8\ufe94\u202c \u202b\u0648\ufee7\ufed8\ufeaa\ufee3\ufbad\ufe8e\u202c</td></tr><tr><td colspan=\"2\">Prompt 2 (Short Conversation -Greeting)</td><td/><td/><td/></tr><tr><td/><td colspan=\"4\">\u202b\u061f\u202c \u202b\ufea3\ufe8e\ufedf\ufeda\u202c \u202b\ufedb\ufbff\ufed2\u202c \u202b\ufee3\ufeae\ufea3\ufe92\ufe8e\u202c</td></tr><tr><td colspan=\"2\">Generated Text</td><td/><td/><td/></tr><tr><td>\u202b\ufec2\u202c</td><td colspan=\"4\">\u202b\ufe91\ufea8\ufbff\ufeae\u202c \u202b\ufdf2\u202c \u202b\ufeb7\u0640\ufe8e\u0621\u202c \u202b\u0627\u0646\u202c</td></tr><tr><td>\u202b\u0623\ufea7\ufeae\u0649\u202c \u202b\ufee3\ufeae\u0629\u202c \u202b\ufecb\ufee0\ufbff\ufbad\ufe8e\u202c \u202b\u0627\ufebf\ufed0\ufec2\u202c</td><td colspan=\"4\">\u202b\ufe9b\ufee2\u202c \u202b\ufedf\ufbad\ufe8e\u202c \u202b\u0627\ufedf\ufee4\ufea8\ufebc\ufeba\u202c \u202b\u0627\ufedf\ufee4\ufedc\ufe8e\u0646\u202c \u202b\ufed3\ufef2\u202c \u202b\ufe97\ufeae\ufee3\ufbff\ufbad\ufe8e\u202c \u202b\u0623\u0646\u202c \u202b\ufe97\ufeae\ufbfe\ufeaa\u202c \u202b\u0627\ufedf\ufe98\ufef2\u202c \u202b\u0627\ufedf\ufedc\ufeae\u0629\u202c \u202b\ufecb\ufee0\ufef0\u202c</td></tr><tr><td colspan=\"2\">Prompt 3 (Wikipedia-style)</td><td/><td/><td/></tr><tr><td>\u202b\u0648\ufe97\ufecc\ufeaa\u202c \u060c 1926 \u202b\u0645\u202c \u202b\ufecb\ufe8e\u0645\u202c \u202b\u0648\ufed3\ufef2\u202c \u060c 1946 \u202b\u0627\ufefb\ufee7\ufe98\ufeaa\u0627\u0628\u202c \u202b\ufe91\ufecc\ufeaa\u202c \u202b\u0627\ufeb3\ufe98\ufed8\ufefc\ufedf\ufbad\ufe8e\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\ufedf\ufe92\ufee8\ufe8e\u0646\u202c \u202b\ufea3\ufebc\ufee0\ufe96\u202c \u202b\u0645\u202c</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"4\">. \u202b\u0627\ufedf\ufed4\ufeae\ufee7\ufeb4\ufef2\u202c</td></tr><tr><td colspan=\"2\">Generated Text</td><td/><td/><td/></tr><tr><td>\u202b\u0627\ufedf\ufeaa\ufbfe\ufe8e\ufee7\ufe94\u202c \u202b\u0623\ufee3\ufe8e\u202c \u060c</td><td>\u0662</td><td>][</td><td>\u0661</td><td>]</td></tr></table>", |
|
"text": "\u202b\ufee7\ufe8e\u0631\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0648\ufee7\ufeb4\ufee0\ufed8\ufbab\u202c \u202b\u0627\ufef7\ufeb3\ufeee\u062f\u202c \u202b\u0648\u0627\ufedf\ufed4\ufee0\ufed4\ufede\u202c \u202b\u0648\u0627\ufedf\ufee4\ufee0\ufea2\u202c \u202b\u0627\ufedf\ufed0\ufe8e\u0631\u202c \u202b\u0648\u0648\u0631\u0642\u202c \u202b\u0648\u0627\ufedf\ufbad\ufbff\ufede\u202c \u202b\u0627\ufedf\ufe92\ufebc\ufede\u202c \u202b\ufee3\ufeca\u202c \u202b\u0627\ufedf\ufee4\ufe8e\u0621\u202c \u202b\ufee3\ufee6\u202c \u202b\ufed7\ufeaa\u0631\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufeaa\ufe9f\ufe8e\u062c\u202c \u202b\ufee7\ufec0\ufeca\u202c \u202b\u0627\ufedf\ufe98\ufea4\ufec0\ufbff\ufeae\u202c \u202b\u0637\ufeae\ufbfe\ufed8\ufe94\u202c \u202b\u0627\ufedf\ufee0\ufe92\ufee8\ufe8e\ufee7\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufeaa\ufe9f\ufe8e\u062c\u202c \u202b\ufed3\ufe98\ufe94\u202c \u202b\ufee7\ufea4\ufee4\ufef2\u202c . \u202b\u0648\ufecb\ufe8e\u0621\u202c \u202b\ufed3\ufef2\u202c \u202b\u0648\ufee7\ufec0\ufecc\ufbab\u202c \u202b\u0627\ufedf\ufee4\ufeae\u0642\u202c \u202b\ufee7\ufebc\ufed4\ufef2\u202c \u202b\ufe9b\ufee2\u202c \u202b\ufe9f\ufe8e\ufee7\ufe92\ufe8e\u202c \u202b\u0648\ufee7\ufec0\ufecc\ufbab\u202c \u202b\u0648\ufee7\ufed4\ufe98\ufe98\ufbab\u202c \u202b\u0627\ufedf\ufee4\ufeae\u0642\u202c \u202b\ufee3\ufee6\u202c \u202b\u0627\ufedf\ufeaa\ufe9f\ufe8e\u062c\u202c \u202b\ufebb\ufed4\ufef2\u202c . \u202b\ufe97\ufee4\ufe8e\ufee3\ufe8e\u202c \u202b\ufbfe\ufee8\ufec0\ufe9e\u202c \u202b\ufea3\ufe98\ufef0\u202c \u202b\ufee3\ufe98\ufeee\ufeb3\ufec4\ufe94\u202c \u202b\u0627\ufedf\ufea8\ufe92\ufeb0\u202c \u202b\ufed3\ufbff\ufbab\u202c \u202b\u0648\ufee7\ufed8\ufee0\ufef2\u202c \u202b\u0627\ufedf\ufeb0\ufbfe\ufe96\u202c . \u202b\u0648\ufee3\ufed8\ufeae\ufee3\ufeb8\ufe8e\u202c \u202b\u0630\u06be\ufe92\ufbff\ufe8e\u202c \u202b\ufbfe\ufebc\ufe92\ufea2\u202c \u202b\ufea3\ufe98\ufef0\u202c Generated Text \u202b\u0627\ufedf\ufee4\ufed4\ufeae\u0648\u0645\u202c \u202b\u0627\ufedf\ufe92\ufed8\ufeaa\u0648\ufee7\ufeb2\u202c \u202b\ufee3\ufee6\u202c \u202b\u0627\ufedf\ufed8\ufee0\ufbff\ufede\u202c \u202b\ufed3\ufeee\ufed7\ufbad\ufe8e\u202c \u202b\ufee7\ufeae\u0634\u202c \u202b\ufe9b\ufee2\u202c \u060c \u202b\u0627\ufedf\ufea8\ufe92\ufeb0\u202c \u202b\ufed3\ufeee\u0642\u202c \u202b\u0648\u0627\ufedf\ufee4\ufed4\ufeae\u0648\ufee3\ufe94\u202c \u202b\u0627\ufedf\ufee4\ufed8\ufeb8\ufeae\u0629\u202c \u202b\u0648\u0627\ufedf\ufe92\ufee8\ufeaa\u0648\u0631\u0629\u202c \u202b\u0648\u0627\ufedf\ufea8\ufec0\ufeae\u0627\u0621\u202c \u202b\u0648\u0627\ufedf\ufea4\ufee4\ufeae\u0627\u0621\u202c \u202b\u0627\ufedf\ufea8\ufec0\ufeae\u0627\u0621\u202c \u202b\u0627\ufedf\ufed4\ufee0\ufbff\ufed4\ufee0\ufe94\u202c \u202b\ufeb7\ufeae\u0627\ufe8b\ufea2\u202c \u202b\ufee7\ufeee\u0632\u0639\u202c \u202b\ufee3\u202c \u202b\ufe91\ufe8e\ufedf\ufed8\ufee0\ufbff\ufede\u202c \u202b\u0640\ufbad\ufe8e\u202c \u202b\u0648\ufee7\ufeae\ufeb7\u202c \u202b\u0627\ufedf\ufeb0\ufbfe\ufe96\u202c \u202b\ufee3\ufee6\u202c \u202b\ufe91\ufe8e\ufedf\ufed8\ufee0\ufbff\ufede\u202c \u202b\ufed3\ufeae\u0646\u202c \u202b\u0640\ufbff\ufee8\ufbff\ufe94\u202c \u202b\ufebb\u202c \u202b\ufee7\ufeaa\u06be\ufee6\u202c . 
\u202b\u0627\ufedf\ufee4\ufea4\ufee4\ufeba\u202c \u202b\u0640\ufee8\ufeee\ufe91\ufeae\u202c \u202b\u0648\u0627\ufedf\ufebc\u202c \u202b\ufea3\ufeae\u0627\u0631\u0629\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0640\ufe92\ufed8\ufe8e\u202c \u202b\ufee3\ufeb4\u202c \u202b\ufee3\ufea4\ufee4\ufef0\u202c \u202b\ufed3\ufeae\u0646\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0640\ufbff\ufee8\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufebc\u202c \u202b\ufee7\ufeaa\ufea7\ufede\u202c \u202b\ufe9b\ufee2\u202c \u202b\u0627\ufedf\ufec4\ufea4\ufbff\ufee6\u202c \u202b\ufee6\u202c 180 \u202b\ufe91\ufbff\ufee6\u202c \u202b\ufe97\ufe98\ufeae\u0627\u0648\u062d\u202c \u202b\ufedf\ufee4\ufeaa\u0629\u202c \u202b\ufee3\ufe8c\ufeee\ufbfe\ufe94\u202c \u202b\u062f\u0631\ufe9f\ufe94\u202c 15 \u202b\u0648\u202c 20 \u202b\ufee7\ufed8\ufec4\ufecc\ufbad\ufe8e\u202c \u202b\ufe9b\ufee2\u202c \u202b\ufed7\ufee0\ufbff\ufefc\u202c \u202b\ufedf\ufe98\ufe92\ufeae\u062f\u202c \u202b\u0648\ufee7\ufe98\ufeae\ufedb\ufbad\ufe8e\u202c \u202b\u0627\ufedf\ufed4\ufeae\u0646\u202c \u202b\ufee3\ufee6\u202c \u202b\u0627\ufedf\ufebc\u0640\u0640\ufbff\ufee8\ufbff\ufe94\u202c \u202b\ufee7\ufea8\ufeae\u062c\u202c . \u202b\u0627\ufedf\ufebc\u0640\u0640\ufbff\ufee8\ufbff\ufe94\u202c \u202b\u0648\ufe9f\ufbab\u202c \u202b\ufbfe\ufea4\ufee4\ufeae\u202c \u202b\ufea3\ufe98\ufef0\u202c \u202b\u0623\u0648\u202c \u202b\u062f\ufed7\ufbff\ufed8\ufe94\u202c . \u202b\u0640\ufed0\u202c \u202b\u0627\ufedf\ufec0\u202c \u202b\u06be\ufeee\u202c \u202b\ufed3\ufecc\ufee0\ufbab\u202c \u202b\ufecb\ufee0\ufbff\ufeda\u202c \u202b\ufee3\ufe8e\u202c \u202b\ufedb\ufede\u202c , \u202b\ufee3\ufea8\ufe98\ufee0\ufed4\ufe94\u202c \u202b\ufe91\ufe84\ufedf\ufeee\u0627\u0646\u202c \u202b\u0627\ufedf\ufee4\ufee0\ufeee\ufee7\ufe94\u202c \u202b\u0627\ufedf\ufedc\ufeae\u0627\u062a\u202c \u202b\ufee3\ufee6\u202c \u202b\ufee3\ufea0\ufee4\ufeee\ufecb\ufe94\u202c \u202b\ufecb\ufee6\u202c \u202b\ufecb\ufe92\ufe8e\u0631\u0629\u202c \u202b\u06be\ufef2\u202c \u202b\u0648\u202c \u202b\ufedb\ufe98\ufbff\ufeae\u202c \u202b\ufea3\ufee0\ufeee\u0629\u202c \u202b\ufedf\ufecc\ufe92\ufe94\u202c \u202b\ufe9f\ufe92\ufe98\ufee0\ufedc\ufee2\u202c \u202b\u0627\ufedf\ufbff\ufeee\u0645\u202c . 
\u202b\u0640\ufec2\u202c \u202b\u0627\ufedf\ufee4\ufe98\ufeee\ufeb3\u202c \u202b\u0627\ufef7\ufe91\ufbff\ufebe\u202c \u202b\ufedf\ufee0\ufe92\ufea4\ufeae\u202c \u202b\u0640\ufeae\ufed7\ufef2\u202c \u202b\u0627\ufedf\ufeb8\u202c \u202b\u0627\ufedf\ufea0\ufeb0\u0621\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0627\ufedf\ufee4\ufec4\ufede\u202c \u202b\u0640\ufe8e\u0637\ufe8a\u202c \u202b\u0627\ufedf\ufeb8\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0648\ufe97\ufed8\ufeca\u202c \u060c \u202b\ufe91\ufbff\ufeae\u0648\u062a\u202c \u202b\u0640\ufee4\ufe98\ufbad\ufe8e\u202c \u202b\ufecb\ufe8e\ufebb\u202c \u202b\ufecb\ufeae\ufe91\ufbff\ufe94\u202c \u202b\u062f\u0648\ufedf\ufe94\u202c \u202b\u0627\ufedf\ufee0\ufe92\ufee8\ufe8e\ufee7\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufea0\ufee4\ufbad\ufeee\u0631\ufbfe\ufe94\u202c \u202b\ufe97\ufecc\ufeaa\u202c \u202b\ufedf\ufe92\ufee8\ufe8e\u0646\u202c \u202b\ufed3\ufbad\ufef2\u202c \u202b\ufe97\ufe8e\u0631\ufbfe\ufea8\ufbad\ufe8e\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0648\ufe91\ufe8e\ufedf\ufecc\ufeee\u062f\u0629\u202c \u060c \u202b\ufe91\ufe8e\ufedf\ufeb4\u0640\u0640\ufedc\ufe8e\u0646\u202c \u202b\u0627\ufedf\ufee4\ufedc\ufe98\ufec8\ufe94\u202c \u202b\u0627\ufedf\ufe92\ufee0\ufeaa\u0627\u0646\u202c \u202b\u0623\ufedb\ufe9c\ufeae\u202c \u202b\ufee3\ufee6\u202c \u202b\u0648\u0627\ufea3\ufeaa\u0629\u202c \u202b\u0627\ufedf\ufee4\ufbff\ufefc\u062f\u202c \u202b\ufed7\ufe92\ufede\u202c \u202b\u0627\ufedf\ufe9c\ufe8e\ufedf\ufe9c\ufe94\u202c \u202b\u0627\ufef7\ufedf\ufed4\ufbff\ufe94\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufe92\ufeb8\u0640\u0640\ufeae\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufea4\ufec0\u0640\u0640\ufe8e\u0631\u0627\u062a\u202c \u202b\ufef7\ufed7\ufeaa\u0645\u202c \u202b\ufee3\ufeee\u0637\ufee8\ufe8e\u202c \u202b\ufe97\ufecc\ufe98\ufe92\ufeae\u202c \u202b\ufee3\ufeae\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\ufee3\ufbad\ufee4\ufe8e\u202c \u202b\u062f\u0648\u0631\u0627\u202c \u202b\u0627\ufedf\ufeee\ufecb\ufeae\u0629\u202c \u202b\u0640\ufbad\ufe8e\u202c \u202b\u0640\ufe8e\u0631\ufbfe\ufeb4\u0640\u202c \u202b\ufedf\ufe98\ufec0\u0640\u202c \u202b\u0648\ufedb\ufe8e\u0646\u202c \u060c \u202b\u0627\ufedf\ufeee\ufed7\ufe96\u202c \u202b\u0630\ufedf\ufeda\u202c \u202b\ufed3\ufef2\u202c \u202b\u0648\u0627\ufedf\ufe98\ufea0\ufe8e\u0631\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufe9c\ufed8\ufe8e\ufed3\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufee4\ufeae\u0627\ufedb\ufeb0\u202c \u202b\u0623\u06be\ufee2\u202c \u202b\ufee3\ufee6\u202c \u202b\ufed3\ufedc\ufe8e\ufee7\ufe96\u202c \u060c \u202b\ufee3\ufee8\ufbad\ufe8e\u202c \u202b\u0640\ufe8e\ufea3\ufee0\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufeb4\u0640\u202c \u202b\u0627\ufedf\ufee4\ufee8\ufec4\ufed8\ufe94\u202c \u202b\u0640\ufe94\u202c \u202b\ufea7\ufe8e\ufebb\u0640\u202c \u202b\ufecb\ufe8e\u0645\u202c \u202b\ufe9f\ufee4\ufbad\ufeee\u0631\ufbfe\ufe94\u202c \u202b\ufedb\ufeaa\u0648\ufedf\ufe94\u202c \u202b\u0648\u0623\ufecb\ufee0\ufee8\ufe96\u202c \u060c \u202b\u0625\ufedf\ufbff\ufbad\ufe8e\u202c \u202b\ufe9f\ufee4\ufe8e\ufecb\ufe8e\u062a\u202c \u202b\ufecb\ufeaa\u0629\u202c \u202b\ufedf\ufea0\ufeee\u0621\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufe98\ufe8e\u0631\ufbfe\ufea6\u202c \u202b\u0640\ufbff\ufe94\u202c \u202b\u0648\u0627\ufedf\ufed4\ufeae\ufee7\ufeb4\u0640\u0640\u202c \u202b\u0627\ufef9\ufee7\ufea0\ufee0\ufbff\ufeb0\ufbfe\ufe94\u202c \u202b\u06be\ufee4\ufe8e\u202c \u202b\u0623\ufea7\ufeae\ufbfe\ufbff\ufee6\u202c \u202b\u0640\ufee4\ufbff\ufe98\ufbff\ufee6\u202c 
\u202b\u0631\ufeb3\u0640\u0640\u202c \u202b\ufedf\ufed0\ufe98\ufbff\ufee6\u202c \u202b\u0648\ufe9f\ufeee\u062f\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0640\ufe8e\ufed3\ufe94\u202c \u202b\ufe91\ufe8e\ufef9\ufebf\u0640\u0640\u202c \u060c \u202b\ufedf\ufee0\ufe92\ufefc\u062f\u202c \u202b\u0640\ufee4\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufeae\ufeb3\u0640\u0640\u202c \u202b\u0627\ufedf\ufee0\ufed0\ufe94\u202c \u202b\u06be\ufef2\u202c \u202b\u0627\ufedf\ufecc\ufeae\ufe91\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufee0\ufed0\ufe94\u202c \u202b[\ufe97\ufecc\ufeaa\u202c \u202b\u0627\ufedf\ufee4\ufeb4\u0640\ufee0\u202c \u202b\ufbfe\ufeb8\u0640\ufedc\ufede\u202c \u202b\ufea3\ufbff\ufe9a\u202c \u060c \u202b\u0627\ufef9\ufeb3\u0640\ufefc\u0645\u202c \u202b\ufed3\ufbad\ufef2\u202c \u202b\u0627\ufedf\ufe92\ufefc\u062f\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufeb4\u0640\ufe8e\ufe8b\ufeaa\u0629\u202c \u202b\ufea3\ufeee\u0627\ufedf\ufef2\u202c \u202b\ufee4\ufeee\u0646\u202c 96 \u202b\u0627\ufedf\ufee4\ufeb4\u0640\ufbff\ufea4\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufec4\ufeee\u0627\ufe8b\ufed2\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0627\ufedf\ufe92\ufe8e\ufed7\ufeee\u0646\u202c \u202b\u0648\ufbfe\ufe98\ufeee\u0632\u0639\u202c \u060c \u202b\u0627\ufedf\ufeb4\u0640\ufedc\ufe8e\u0646\u202c \u202b\ufecb\ufeaa\u062f\u202c \u202b\u0625\ufe9f\ufee4\ufe8e\ufedf\ufef2\u202c \u202b\ufee3\ufee6\u202c % \u202b\ufbfe\ufeee\ufe9f\ufeaa\u202c \u202b\ufedb\ufee4\ufe8e\u202c \u060c \u202b\u0640\ufe8e\u062f\u064a\u202c \u202b\u0648\u0627\ufefb\ufed7\ufe98\ufebc\u0640\u202c \u202b\u0627\ufedf\ufe9c\ufed8\ufe8e\ufed3\ufef2\u202c \u202b\u0648\ufee3\ufeae\ufedb\ufeb0\u06be\ufe8e\u202c \u202b\u0627\ufedf\ufe92\ufefc\u062f\u202c \u202b\u0640\ufee4\ufe94\u202c \u202b\ufecb\ufe8e\ufebb\u0640\u202c \u202b\ufe97\ufecc\ufeaa\u202c \u202b\u0627\ufedf\ufe98\ufef2\u202c \u202b\ufe91\ufbff\ufeae\u0648\u062a\u202c \u202b\ufee3\ufeaa\ufbfe\ufee8\ufe94\u202c \u202b\ufedf\ufe92\ufee8\ufe8e\u0646\u202c \u202b\ufed3\ufef2\u202c \u202b\u0640\ufbff\ufe8e\ufea3\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufeb4\u0640\u202c \u202b\u0627\ufedf\ufee4\ufecc\ufe8e\ufedf\ufee2\u202c \u202b\u0623\ufe91\ufeae\u0632\u202c \u202b\u0648\ufee3\ufee6\u202c \u060c \u202b\u0627\ufef7\ufea7\ufeae\u0649\u202c \u202b\u0648\u0627\ufedf\ufeaa\ufbfe\ufe8e\ufee7\ufe8e\u062a\u202c \u202b\u0648\u202c \u060c \u202b\ufe9f\ufecc\ufbff\ufe98\ufe8e\u202c \u202b\ufee3\ufed0\ufe8e\u0631\u0629\u202c \u202b\ufee3\ufe9c\ufede\u202c \u202b\u0627\ufedf\ufbad\ufe8e\ufee3\ufe94\u202c \u202b\u0627\ufef7\ufe9b\ufeae\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufee4\ufeee\u0627\ufed7\ufeca\u202c \u202b\ufee3\ufee6\u202c \u202b\u0627\ufedf\ufecc\ufeaa\ufbfe\ufeaa\u202c \u202b\ufe91\ufbad\ufe8e\u202c \u202b\ufee3\ufee6\u202c \u202b\u0648\u0627\ufea3\ufeaa\u0627\u202c \u202b\ufbfe\ufecc\ufeaa\u202c \u202b\u0627\ufedf\ufeac\u064a\u202c \u202b\u0627\ufedf\ufeaa\ufbfe\ufee6\u202c \u202b\ufe91\ufbff\ufe96\u202c \u202b\u0648\ufed7\ufebc\ufeae\u202c \u060c \u202b\u0640\ufed6\u202c \u202b\u0640\ufeae\ufeb3\u202c \u202b\ufeb3\u202c \u202b\u0648\ufee3\ufe98\ufea4\ufed2\u202c \u060c \u202b\u0625\ufbfe\ufed4\ufede\u202c \u202b\u0648\ufe91\ufeae\u062c\u202c \u060c \u202b\ufe91\ufecc\ufee0\ufe92\ufeda\u202c \u202b\ufed7\ufee0\ufecc\ufe94\u202c ] . 
\u202b\u0627\ufedf\ufee4\ufee4\ufbff\ufeb0\u0629\u202c \u202b\u0627\ufedf\ufee4\ufecc\ufe8e\ufedf\ufee2\u202c \u202b\ufee3\ufee6\u202c \u202b\u0627\ufedf\ufedc\ufe9c\ufbff\ufeae\u202c \u202b\u0648\ufecf\ufbff\ufeae\u06be\ufe8e\u202c \u060c \u202b\u0627\ufedf\ufecc\ufe8e\ufedf\ufee2\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufe98\ufe8e\u0631\ufbfe\ufea8\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufed8\ufebc\u0640\u0640\ufeee\u0631\u202c \u202b\u0623\ufed7\ufeaa\u0645\u202c \u0663 \u202b\u0627\ufedf\ufea8\ufefc\ufe91\ufe94\u202c \u202b\u0627\ufedf\ufec4\ufe92\ufbff\ufecc\ufe94\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\u0623\ufeb3\u0640\u0640\ufe8e\ufeb3\u0640\u0640\ufef2\u202c \u202b\ufe91\ufeb8\u0640\u0640\ufedc\ufede\u202c \u202b\ufedf\ufe92\ufee8\ufe8e\u0646\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufedf\ufeb4\u0640\u0640\ufbff\ufe8e\ufea3\ufe94\u202c \u202b[\ufe97\ufecc\ufe98\ufee4\ufeaa\u202c \u202b\u0640\u202c \u202b\ufed7\ufec0\u202c \u202b\u0640\ufee0\ufeee\u0646\u202c \u202b\ufbfe\ufed4\ufec0\u202c \u202b\u0640\ufbff\ufe8e\u062d\u202c \u202b\u0627\ufedf\ufeb4\u202c \u202b\u0623\ufecf\ufee0\ufe90\u202c \u202b\ufed3\ufe88\u0646\u202c \u202b\ufedf\ufeac\ufedf\ufeda\u202c \u060c \u202b\u0640\ufe98\ufe8e\u0621\u202c \u202b\u0648\ufeb7\u202c \u202b\u0640\ufbff\ufed4\ufe8e\u202c \u202b\ufebb\u202c \u202b\u0627\ufedf\ufee4\ufecc\ufe98\ufeaa\u0644\u202c \u202b\u0648\u0627\ufedf\ufee4\ufee8\ufe8e\u062e\u202c \u202b\ufe91\ufe8e\ufedf\ufee4\ufee8\ufe8e\u0638\ufeae\u202c \u202b\u0640\ufe98\ufee4\ufe98\ufe8e\u0639\u202c \u202b\ufedf\ufefc\ufeb3\u202c \u202b\u0640\ufe8e\ufea3\ufee0\ufbff\ufe94\u202c \u202b\u0648\u0627\ufedf\ufeb4\u202c \u202b\u0627\ufedf\ufea0\ufe92\ufee0\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufee4\ufee8\ufe8e\u0637\ufed6\u202c \u202b\ufed3\ufef2\u202c \u202b\u0640\ufbff\ufed4\ufbff\ufe94\u202c \u202b\u0627\ufedf\ufebc\u202c \u202b\ufecb\ufec4\ufee0\ufe98\ufbad\ufee2\u202c \u202b\ufe8e\u0621\u202c \u202b\u0627\ufedf\ufee4\ufecc\ufe98\u0640\ufeaa\u0644\u202c \u202b\ufe91\ufee4\ufee8\u0640\ufe8e\ufea7\ufbad\u0640\ufe8e\u202c \u202b\ufedf\ufee0\ufe98\ufee4\ufe98\ufeca\u202c \u202b\u0640\ufe94\u202c \u202b\u0627\ufedf\u0640\ufeaa\u0627\ufea7\ufee0\ufbff\u202c \u202b\u0648\u0627\ufedf\ufee4\ufee8\u0640\ufe8e\u0637\ufed6\u202c \u202b\u0627\ufedf\ufee4\u0640\ufeaa\u0646\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0640\ufbab\u202c \u202b\u0627\ufedf\ufe98\ufeee\ufe9f\u202c \u202b\u0627\ufef5\ufea7\ufeae\u202c \u202b\u0627\ufedf\ufe92\ufecc\ufebe\u202c \u202b\u0640\ufede\u202c \u202b\ufbfe\ufed4\ufec0\u0640\u0640\u0640\u0640\u202c \u202b\ufe91\ufbff\ufee8\ufee4\u0640\ufe8e\u202c \u060c \u202b\u0640\ufe94\u202c \u202b\u0627\ufedf\ufeae\u0627\ufe8b\ufecc\u202c \u202b\u0640\ufe94\u202c \u202b\u0627\ufedf\ufee4\u0640\ufe8e\ufe8b\ufbff\u202c \u202b\u0648\u0627\ufedf\ufee4\ufee8\u0640\ufe8e\u0638\ufeae\u202c \u202b\u0640\ufe94\u202c \u202b\u0627\ufedf\ufea8\ufefc\ufe91\u202c \u202b\u0640\ufe94\u202c \u202b\u0627\ufedf\ufec4\ufe92\ufbff\ufecc\ufbff\u202c \u202b\u0627\ufedf\ufeb4\ufe8e\ufea3\ufeae\u0629\u202c \u202b\u0627\ufedf\ufec4\ufe92\ufbff\ufecc\ufe94\u202c \u202b\u0623\ufea3\ufec0\ufe8e\u0646\u202c \u202b\ufed3\ufef2\u202c \u202b\u0648\u0627\ufefb\ufeb3\ufe98\ufea0\ufee4\ufe8e\u0645\u202c" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Answer the following question: When was the first</td></tr><tr><td>episode of the series Buffy the Vampire Slayer shown?</td></tr><tr><td>The answer is in the year</td></tr></table>", |
|
"text": "The input prompt for question answering" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Question</td></tr><tr><td>Who is Alfred Nobel?</td></tr><tr><td>Predicted Answer</td></tr><tr><td>Inventor of the dynamite, and the inventor of</td></tr><tr><td>Ground Truth</td></tr><tr><td>An engineer and an inventor and a Swedish chemist</td></tr><tr><td>Question</td></tr><tr><td>When was the FIFA founded?</td></tr><tr><td>Predicted Answer</td></tr><tr><td>1904 AD</td></tr><tr><td>Ground Truth</td></tr><tr><td>21 May of the year 1904</td></tr><tr><td>Question</td></tr><tr><td>Who is Edgar Degas?</td></tr><tr><td>Predicted Answer</td></tr><tr><td>He is a French visual artist, born in</td></tr><tr><td>Ground Truth</td></tr><tr><td>Visual artist and painter and sculptor</td></tr></table>", |
|
"text": "Examples of correct answers that have zero exact match score." |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Top K Frac P</td><td>Colors (top k):</td><td>10</td><td>100</td><td>1000</td></tr></table>", |
|
"text": "\u202b\ufeb7\u0631\u0637\ufef2\u202c \u202b\ufed3\ufbfe\ufbad\ufe8e\u202c \u202b\u0623\ufebb\ufbfe\u0628\u202c \u060c \u202b\u0627\ufedf\ufee3\ufea3\ufe97\ufee0\ufe94\u202c \u202b\u0627\ufedf\ufed8\u062f\u0633\u202c \u202b\ufed3\ufef2\u202c \u202b\u062f\u06be\u0633\u202c \u202b\ufecb\ufee3\ufee0\ufbfe\ufe94\u202c \u060c \u202b\u0623\ufee3\u0633\u202c \u060c \u202b\u0627\ufedf\ufecc\ufee3\u0648\u062f\u202c \u202b\u0631\u0623\u0633\u202c \u202b\ufea3\ufef2\u202c \u202b\ufee3\u0646\u202c ( \u202b\ufecb\ufe8e\ufee3\ufe8e\u202c 23 ) \u202b\u0627\ufedf\u0633\ufefb\ufbfe\ufee3\ufe94\u202c \u202b\ufee3\ufea3\ufee3\u0648\u062f\u202c \u202b\ufee3\ufea3\ufee3\u062f\u202c \u202b\u0627\ufedf\ufee3\ufed8\u062f\ufeb3\ufef2\u202c \u202b\u0627\ufedf\ufeb7\ufe8e\u0628\u202c \u202b\ufee7\ufed4\u0630\u202c \u202b\ufea3\u0631\u0633\u202c \u202b\ufeb7\u0631\u0637\ufe94\u202c \u202b\ufecb\ufee7\ufe8e\ufebb\u0631\u202c \u202b\ufee3\u0646\u202c \u202b\ufecb\u062f\u062f\u0627\u202c \u202b\ufe91\ufeb3\ufbfe\ufe8e\u0631\ufe97\ufbab\u202c \u202b\ufebb\u062f\u0645\u202c \u202b\ufeb3\ufbfe\ufe8e\u0631\u0629\u202c \u202b\ufeb3\ufe8e\ufe8b\ufed6\u202c \u202b\u0625\u0646\u202c \u202b\u0631\u0648\u0632\u0646\ufed3\ufbfe\ufee0\u062f\u202c \u202b\ufee3\ufbfe\ufedb\ufef2\u202c \u202b\u0627\ufef9\ufeb3\u0631\u0627\ufe8b\ufbfe\ufee0\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufeb7\u0631\u0637\ufe94\u202c \u202b\ufe91\ufe8e\ufeb3\u0645\u202c \u202b\u0627\ufedf\ufee3\ufe97\ufea3\u062f\u062b\u202c \u202b\u0648\ufed7\ufe8e\u0644\u202c . \u202b\u0637\ufed4\ufbfe\ufed4\ufe94\u202c \u202b\ufe91\ufe9f\u0631\u0648\u062d\u202c \u202b\u0625\ufeb3\u0631\u0627\ufe8b\ufbfe\ufee0\ufef2\u202c \u202b\u06be\u062f\u0627\ufeb3\ufe8e\u202c \u202b\ufee3\ufeb3\ufe97\ufeb7\ufed4\ufef0\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0623\ufe9b\u0631\u06be\ufe8e\u202c \u202b\ufecb\ufee0\ufef0\u202c \u202b\ufee7\ufed8\u0644\u202c \u202b\u0637\ufed4\ufbfe\ufed4\ufe94\u202c \u202b\ufe91\ufe9f\u0631\u0648\u062d\u202c \u202b\u0623\ufea3\u062f\u06be\u0645\u202c \u202b\u0625\ufebb\ufe8e\ufe91\ufe94\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0623\u062f\u0649\u202c \u202b\ufee3\ufe8e\u202c \u202b\u0627\ufedf\ufed0\u0631\ufe91\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufed8\u062f\u0633\u202c \u202b\ufed3\ufef2\u202c \u202b\ufbfe\ufe8e\ufed3\ufe8e\u202c \u202b\ufeb7\ufe8e\u0631\u0639\u202c \u202b\ufed3\ufef2\u202c \u202b\ufe91\u062f\u0648\u0631\ufbfe\ufe94\u202c \u202b\ufbfe\ufed8\u0648\ufee3\u0648\u0646\u202c \u202b\ufedb\ufe8e\ufee7\u0648\u0627\u202c \u202b\u0627\ufedf\u0630\ufbfe\u0646\u202c \u202b\u0627\ufedf\ufea3\u062f\u0648\u062f\u202c . \u202b\u0627\ufef7\ufed7\ufebb\ufef0\u202c \u202b\u0627\ufedf\ufee3\ufeb3\ufe9f\u062f\u202c \u202b\ufe9f\ufee7\u0648\u0628\u202c \u202b\ufeb3\ufee0\u0648\u0627\u0646\u202c \u202b\ufe91\ufee0\u062f\u0629\u202c \u202b\ufeb3\ufedb\ufe8e\u0646\u202c \u202b\ufee3\u0646\u202c \u202b\ufed3\ufee0\ufeb3\u0637\ufbfe\ufee7\ufef2\u202c \u202b\ufeb7\ufe8e\u0628\u202c \u202b\u06be\u0648\u202c \u202b\u0627\ufedf\ufecc\ufee3\ufee0\ufbfe\ufe94\u202c \u202b\ufee3\ufee7\ufed4\u0630\u202c \u202b\u0623\u0646\u202c \u202b\u0627\ufef9\ufeb3\u0631\u0627\ufe8b\ufbfe\ufee0\ufbfe\ufe94\u202c \u202b\u0627\ufef9\ufecb\ufefc\u0645\u202c \u202b\u0648\ufeb3\ufe8e\ufe8b\u0644\u202c \u202b\u0648\u0630\ufedb\u0631\u062a\u202c . \u202b\u0627\ufedf\ufecc\ufefc\u062c\u202c \u202b\ufedf\ufe97\ufee0\ufed8\ufef2\u202c \u202b\ufedb\ufe8e\u0631\u0645\u202c \u202b\ufecb\ufbfe\u0646\u202c" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Top K Frac P</td><td>Colors (top k):</td><td>10</td><td>100</td><td>1000</td></tr></table>", |
|
"text": "\u202b\ufe9f\u062f\ufbfe\ufe94\u202c \u202b\u0627\ufef7\ufedb\ufe9b\u0631\u202c \u202b\u0627\ufefb\ufea7\ufe97\ufe91\ufe8e\u0631\u202c \u202b\u0623\u0646\u202c \u202b\ufecf\ufbfe\u0631\u202c \u060c \u202b\ufecb\ufeb3\ufedb\u0631\ufbfe\ufe94\u202c \u202b\u0623\u0648\u202c \u202b\ufeb3\ufbfe\ufe8e\ufeb3\ufbfe\ufe94\u202c \u060c \u202b\u0627\ufedf\ufee3\ufecc\ufe8e\u0631\ufebf\ufe94\u202c \u202b\u0623\u0648\u202c \u202b\u0627\ufedf\u062f\u0648\ufedf\ufe94\u202c \u202b\ufee3\ufeca\u202c \u202b\ufeb3\u0648\u0627\u0621\u202c \u060c \u202b\u0627\ufedf\ufee3\ufe8e\ufebf\ufbfe\ufe94\u202c \u202b\u0627\ufedf\ufed4\ufe97\u0631\u0629\u202c \u202b\ufed3\ufef2\u202c \u202b\u0648\u0627\ufea7\ufe97\ufe91\ufe8e\u0631\u0627\u062a\u202c \u202b\ufe97\ufea3\u062f\ufbfe\ufe8e\u062a\u202c \u202b\ufeb3\ufee0\ufeb3\ufee0\ufe94\u202c \u202b\u0627\ufef7\ufedb\u0631\u0627\u062f\u202c \u202b\u0648\u0627\ufe9f\ufbab\u202c \u202b\u0627\ufedf\ufeb7\ufecc\u0628\u202c \u202b\ufea3\ufee3\ufe8e\ufbfe\ufe94\u202c \u202b\u0648\ufea3\u062f\u0627\u062a\u202c \u202b\ufee3\ufed8\ufe8e\ufe97\ufee0\u0648\u202c \u202b\ufe97\ufee3\ufedb\u0646\u202c \u202b\ufea3\ufbfe\u062b\u202c \u060c ( \u202b\ufedb\u0648\ufe91\ufe8e\ufee7\ufef2\u202c ) \u202b\u0627\ufedf\ufecc\u0631\u0628\u202c \u202b\ufecb\ufbfe\u0646\u202c \u202b\ufed3\ufef2\u202c \u202b\u062f\u0627\ufecb\u0634\u202c -\u202b\u0648\u0627\ufedf\ufeb7\ufe8e\u0645\u202c \u202b\u0627\ufedf\ufecc\u0631\u0627\u0642\u202c \u202b\ufed3\ufef2\u202c \u202b\u0627\ufef9\ufeb3\ufefc\ufee3\ufbfe\ufe94\u202c \u202b\u0627\ufedf\u062f\u0648\ufedf\ufe94\u202c \u202b\ufe97\ufee7\u0638\ufbfe\u0645\u202c \u202b\ufee3\u0648\u0627\ufe9f\ufbab\ufe97\ufbad\u0645\u202c \u202b\ufed3\ufef2\u202c \u202b\ufedb\ufe8e\u0646\u202c \u060c \u202b\u0627\ufedf\ufed8\u0631\ufbfe\ufe91\ufe94\u202c \u202b\ufee3\u0648\u0627\ufed7\ufecc\ufbad\u0645\u202c \u202b\ufe97\ufea3\ufebb\ufbfe\u0646\u202c \u202b\ufee3\ufeca\u202c \u202b\ufe91\ufe8e\ufedf\ufe97\u0632\u0627\ufee3\u0646\u202c \u060c \u202b\u0627\ufedf\ufee3\ufee7\u0637\ufed8\ufe94\u202c \u202b\ufecb\u0646\u202c \u202b\u0648\u0625\ufe91\ufecc\ufe8e\u062f\u0647\u202c \u202b\u062f\u0627\ufecb\u0634\u202c \u202b\ufe97\ufed8\u062f\u0645\u202c \u202b\ufedb\ufeb3\u0631\u202c \u202b\ufee3\u0646\u202c \u060c \u202b\u0627\ufedf\u062f\u0648\ufedf\ufef2\u202c \u202b\u0627\ufedf\ufe97\ufea3\ufe8e\ufedf\u0641\u202c \u202b\u0648\u0637\ufbfe\u0631\u0627\u0646\u202c \u202b\u0627\ufedf\ufe91\u0634\ufee3\u0631\ufedb\ufe94\u202c \u202b\ufee3\ufeca\u202c \u202b\ufe91\ufe8e\ufedf\ufe97\ufecc\ufe8e\u0648\u0646\u202c \u060c \u202b\u0627\ufedf\ufedb\u0631\u062f\ufbfe\ufe94\u202c \u202b\ufe9f\ufe91\ufbad\ufe94\u202c \ufffd\ufffd \u202b\ufedf\ufee0\ufe97\ufebb\u062f\u064a\u202c \u202b\u0627\ufef7\ufedb\u0631\u0627\u062f\u202c \u202b\u0627\ufedf\ufee3\ufed8\ufe8e\ufe97\ufee0\ufbfe\u0646\u202c \u202b\ufee3\u0646\u202c \u202b\u0627\ufedf\ufecc\u062f\ufbfe\u062f\u202c \u202b\u0627\ufee7\u0637\ufefc\u0642\u202c \u202b\ufee3\ufea3\u0637\ufe94\u202c \u202b\ufed3\ufedb\ufe8e\ufee7\u062a\u202c \u202b\ufecb\ufed4\u0631\ufbfe\u0646\u202c \u202b\u0623\ufee3\ufe8e\u202c . \u202b\u0627\ufedf\ufecc\ufbfe\u0646\u202c \u202b\u0631\u0623\u0633\u202c \u202b\u0625\ufedf\ufef0\u202c \u202b\u0648\ufebb\u0648\ufefb\u202c \u202b\ufecb\ufe8e\u0645\u0648\u062f\u0627\u202c \u202b\u0623\u0648\u202c \u202b\u0627\ufedf\ufed8\ufe8e\ufee3\ufeb7\ufee0\ufef2\u202c \u202b\ufed3\ufef2\u202c \u202b\ufeb3\u0648\u0627\u0621\u202c . 
\u202b\u0648\u0627\ufedf\u0632\u06be\u0631\u0627\u0621\u202c \u202b\u0646\ufe91\u0644\u202c \u202b\ufecb\u0646\u202c \u202b\u0627\ufedf\ufea3\ufebb\ufe8e\u0631\u202c \u202b\ufe97\ufea7\ufed4\ufbfe\u0641\u202c \u202b\ufed3\ufef2\u202c \u202b\ufee3\ufbad\ufee3\ufe8e\u202c \u202b\u062f\u0648\u0631\u0627\u202c \u202b\u0648\ufedf\ufecc\u0628\u0648\u0627\u202c \u060c \u202b\u0648\u0627\ufef7\u0634\u0631\ufed3\ufbfe\ufe94\u202c \u202b\ufee3\ufed8\ufebb\u0648\u062f\u202c \u202b\u0627\ufedf\ufeb7\ufbfe\ufea6\u202c \u202b\ufee3\ufe9b\u0644\u202c \u060c \u202b\ufea3\ufee0\u0628\u202c \u202b\ufed3\ufef2\u202c \u202b\u0623\ufea3\ufbfe\ufe8e\u0621\u202c \u202b\ufea3\ufe97\ufef0\u202c \u060c \u202b\u0627\ufedf\ufee7\ufebb\u0631\u0629\u202c" |
|
} |
|
} |
|
} |
|
} |