|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:13:44.809315Z" |
|
}, |
|
"title": "PARASHOOT: A Hebrew Question Answering Dataset", |
|
"authors": [ |
|
{ |
|
"first": "Keren", |
|
"middle": [ |
|
"Omer" |
|
], |
|
"last": "Omri", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tel Aviv University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tel Aviv University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "NLP research in Hebrew has largely focused on morphology and syntax, where rich annotated datasets in the spirit of Universal Dependencies are available. Semantic datasets, however, are in short supply, hindering crucial advances in the development of NLP technology in Hebrew. In this work, we present PARASHOOT, the first question answering dataset in modern Hebrew. The dataset follows the format and crowdsourcing methodology of SQuAD, and contains approximately 3000 annotated examples, similar to other questionanswering datasets in low-resource languages. We provide the first baseline results using recently-released BERT-style models for Hebrew, showing that there is significant room for improvement on this task.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "NLP research in Hebrew has largely focused on morphology and syntax, where rich annotated datasets in the spirit of Universal Dependencies are available. Semantic datasets, however, are in short supply, hindering crucial advances in the development of NLP technology in Hebrew. In this work, we present PARASHOOT, the first question answering dataset in modern Hebrew. The dataset follows the format and crowdsourcing methodology of SQuAD, and contains approximately 3000 annotated examples, similar to other questionanswering datasets in low-resource languages. We provide the first baseline results using recently-released BERT-style models for Hebrew, showing that there is significant room for improvement on this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language processing has seen a surge in the pretraining paradigm in recent years with the appearance of pretrained models in a plethora of languages, including Hebrew (Chriqui and Yahav, 2021; Seker et al., 2021) . While such models have shown to perform remarkably well on a variety of tasks, most of the evaluation of the Hebrew models, however, has been focused on morphology and syntax tasks in the spirit of universal dependencies (Nivre et al., 2017) , while end-user-focused evaluation has been limited to sentiment analysis (Chriqui and Yahav, 2021) and named entity recognition .", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 200, |
|
"text": "(Chriqui and Yahav, 2021;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 220, |
|
"text": "Seker et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 464, |
|
"text": "(Nivre et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 565, |
|
"text": "(Chriqui and Yahav, 2021)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we try to remedy the scarcity of semantic datasets by presenting PARASHOOT, 1 the first question answering dataset in Hebrew, in the style of SQuAD (Rajpurkar et al., 2016) . We follow similar work in constructing non-English question answering datasets (d'Hoffschmidt et al., 2020; Mozannar et al., 2019; Lim et al., 2019, inter alia), and turn to Hebrew-speaking crowdsource workers, asking them to write questions given paragraphs sampled at random from Hebrew Wikipedia. Through this process, we collect approximately 3000 annotated (paragraph, question, answer) triplets, in a setting that may be suitable for few-shot learning, simulating the amount of data a startup or academic group can quickly collect with a limited annotation budget or a short deadline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 187, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 297, |
|
"text": "(d'Hoffschmidt et al., 2020;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 320, |
|
"text": "Mozannar et al., 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 338, |
|
"text": "Lim et al., 2019,", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistical analysis of PARASHOOT shows that the dataset is diverse in question types and complexity, and that the annotations are of decent quality. We provide baseline results based on two recentlyreleased BERT-style models in Hebrew, showing that there is much potential in devising better pretraining and fine-tuning schemes to improve the performance of Hebrew language models on this dataset. We hope that this new dataset will pave the way for practitioners and researchers to advance natural language understanding in Hebrew. 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present PARASHOOT, a question answering dataset in Hebrew, in a format that closely follows that of SQuAD (Rajpurkar et al., 2016) . Each example in the dataset is a triplet consisting of a paragraph, a question, and a span from the paragraph text constituting the answer to the question. We scrape paragraphs from random Hebrew Wikipedia articles, and crowdsource questions and answers for each one, resulting in 3038 annotated examples. While larger datasets may facilitate betterperforming models, recent work has advocated for research on smaller labeled datasets (Ram et al., 2021) , which more accurately reflect the amount of data a startup or academic lab can collect in a short amount of time and resources. Dragging the mouse over a span in the paragraph automatically fills the question slot, allowing for quick and accurate annotation of answer spans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 133, |
|
"text": "(Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 589, |
|
"text": "(Ram et al., 2021)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "2" |
|
}, |
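
{

"text": "To make the format concrete, the following minimal sketch shows how a single (paragraph, question, answer) triplet can be represented in SQuAD-style JSON and checked for consistency. The field names mirror SQuAD's conventions and the English placeholder context is ours; this is an illustration of the format, not a dump of the released files.\n\n# Illustrative SQuAD-style representation of one annotated triplet.\nexample = {\n    'context': 'Tel Aviv was founded in 1909 on the outskirts of Jaffa.',\n    'question': 'When was Tel Aviv founded?',\n    'answers': {'text': ['1909'], 'answer_start': [24]},\n}\n\n# The answer is stored as a character span into the paragraph,\n# so it can always be recovered as a substring of the context.\nstart = example['answers']['answer_start'][0]\nanswer = example['answers']['text'][0]\nassert example['context'][start:start + len(answer)] == answer",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset",

"sec_num": "2"

},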
|
{ |
|
"text": "We collect random articles from Hebrew Wikipedia, covering a wide range of domains and topics. We only sample articles containing at least two paragraphs and 500 characters. 3 Finally, for each such article, two candidate paragraphs are randomly sampled and added to the annotation corpus. These paragraphs will eventually become the passages in the question answering dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 175, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "2.1" |
|
}, |
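
{

"text": "The article filtering and paragraph sampling described above can be summarized by the following sketch. The function and constant names are ours; the thresholds (two paragraphs, 500 characters) follow the description in the text, and non-textual elements are assumed to have been stripped beforehand.\n\nimport random\n\nMIN_PARAGRAPHS = 2\nMIN_CHARS = 500\n\ndef sample_candidate_paragraphs(article_paragraphs, k=2):\n    # Keep only articles with at least two paragraphs and 500 characters of text.\n    if len(article_paragraphs) < MIN_PARAGRAPHS:\n        return []\n    if sum(len(p) for p in article_paragraphs) < MIN_CHARS:\n        return []\n    # Randomly pick two candidate paragraphs to serve as passages.\n    return random.sample(article_paragraphs, k)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Corpus",

"sec_num": "2.1"

},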
|
{ |
|
"text": "We recruit annotators by using the Prolific crowdsourcing platform. 4 Being a native Hebrew speaker is the only required qualification, allowing the participation of a few dozen annotators in the campaign. Annotators are presented with random paragraphs from the annotation set, and tasked to write 3-5 questions that are explicitly answered by the given text, for each paragraph. As in the original SQuAD annotation campaign, annotators are instructed to phrase the questions in their own words, and highlight the minimal span of characters from the paragraph that contains the answer to each ques- 3 We filter out images, tables, etc. 4 tion. Our implementation also provides automatic data validation heuristics that alert the annotators if, for instance, the answer span is too long or not a substring of the paragraph. Figure 1 shows a screenshot from the annotation web page. 5 We acknowledge the fact that this data collection technique is known to encourage annotation artifacts (Gururangan et al., 2018; Kaushik and Lipton, 2018) , and several newer annotation methods, such as TyDi QA (Clark et al., 2020) , have been introduced to alleviate them. Nevertheless, we follow SQuAD's annotation methodology, as it necessitates considerably fewer resources. Maintaining an hourly wage of over $10, 6 we were able to collect our entire dataset, including discarded data from development runs, for under $800.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 69, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
|
{ |
|
"start": 882, |
|
"end": 883, |
|
"text": "5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 987, |
|
"end": 1012, |
|
"text": "(Gururangan et al., 2018;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 1013, |
|
"end": 1038, |
|
"text": "Kaushik and Lipton, 2018)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1095, |
|
"end": 1115, |
|
"text": "(Clark et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1303, |
|
"end": 1304, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 824, |
|
"end": 832, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation", |
|
"sec_num": "2.2" |
|
}, |
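
{

"text": "A minimal sketch of the kind of validation heuristics mentioned above; the actual annotation tool is based on cdQA-annotator (see footnote 5), and the function name and length threshold here are illustrative assumptions rather than the tool's exact values.\n\nMAX_ANSWER_CHARS = 250  # illustrative threshold, not the tool's actual value\n\ndef validate_annotation(paragraph, answer):\n    # Return warnings shown to the annotator for a single answer span.\n    warnings = []\n    if answer not in paragraph:\n        warnings.append('the answer span is not a substring of the paragraph')\n    if len(answer) > MAX_ANSWER_CHARS:\n        warnings.append('the answer span is too long')\n    return warnings",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotation",

"sec_num": "2.2"

},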
|
{ |
|
"text": "In total, we amass 3106 question-answer examples. Of those, we discard 68 examples (2.2%) that contained yes/no questions or extremely short/long answers. The resulting dataset contains 3038 examples, which we divide to training, validation, and test by article, preventing content overlap. Table 1 details the amount of unique articles, paragraphs, and questions of each split.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 298, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Processing", |
|
"sec_num": "2.3" |
|
}, |
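
{

"text": "The split construction can be sketched as follows; the field names (article_id, is_yes_no) are ours, and the filters shown are illustrative stand-ins for the manual criteria described above. The key property is that examples are partitioned by source article, so no paragraph is shared between splits.\n\ndef keep_example(example, max_answer_words=30):\n    # Illustrative filter: drop yes/no questions and extremely short or long answers.\n    n_words = len(example['answer'].split())\n    return not example.get('is_yes_no', False) and 1 <= n_words <= max_answer_words\n\ndef split_by_article(examples, train_articles, dev_articles):\n    # Partition by article so that no article appears in more than one split.\n    splits = {'train': [], 'validation': [], 'test': []}\n    for ex in filter(keep_example, examples):\n        if ex['article_id'] in train_articles:\n            splits['train'].append(ex)\n        elif ex['article_id'] in dev_articles:\n            splits['validation'].append(ex)\n        else:\n            splits['test'].append(ex)\n    return splits",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Post-Processing",

"sec_num": "2.3"

},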
|
{ |
|
"text": "We analyze the dataset in various ways to assess its quality and limitations as a benchmark.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To measure the quality of the annotated data, we randomly select 50 examples from the validation set, and manually analyze them ourselves. 7 Specifically, we check whether the annotated answer span is correct (answers the question) and minimal (contains only the answer). majority of the annotations are indeed valid, answering the questions with a minimal span. Yet, a significant minority contains additional supporting information, which makes the answer span longer than the desired minimal span by 2.5 times on average. We can thus expect an upper bound of 57% token F1 on those examples, setting the performance ceiling at around 84% F1 for the entire dataset. Finally, we present examples from the validation set that illustrate the annotation quality (Figure 3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 759, |
|
"end": 768, |
|
"text": "(Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Quality", |
|
"sec_num": "3.1" |
|
}, |
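
{

"text": "The 57% figure follows from a simple token-F1 calculation, which we reconstruct here: if a model predicts exactly the minimal span while the annotated span is on average 2.5 times longer, then (treating the annotation as the reference) precision is 1 and recall is 1/2.5. Combining this per-example bound with the share of already-minimal annotations reported in Table 2 yields the approximate 84% ceiling.\n\nP = 1, \\qquad R = \\frac{1}{2.5} = 0.4, \\qquad F_1 = \\frac{2PR}{P+R} = \\frac{2 \\cdot 1 \\cdot 0.4}{1.4} \\approx 0.57",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotation Quality",

"sec_num": "3.1"

},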
|
{ |
|
"text": "To measure the dataset's diversity, we cluster questions by their question word (typically the first word in the question). Table 3 shows that what ( \u202b)\u05de\u05d4\u202c and which ( \u202b)\u05d0\u05d9\u05d6\u05d4\u202c questions account for a third of the sample, with other answer types being distributed in a rather balanced distribution. We also observe that about 11% of the data contains how ( \u202b)\u05d0\u05d9\u202c and why ( \u202b)\u05dc\u05de\u05d4\u202c questions, which may reflect more complex instances.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 131, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Question Diversity", |
|
"sec_num": "3.2" |
|
}, |
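
{

"text": "The clustering can be reproduced with a few lines of code; the sketch below uses whitespace tokenization and a hand-written, partial mapping from Hebrew question words to the English glosses of Table 3 (the mapping shown is illustrative, not exhaustive).\n\nfrom collections import Counter\n\n# Partial, illustrative mapping from Hebrew question words to English glosses.\nQUESTION_WORDS = {\n    'מה': 'What', 'מהו': 'What', 'איזה': 'Which', 'איזו': 'Which',\n    'מי': 'Who', 'מתי': 'When', 'איפה': 'Where', 'היכן': 'Where',\n    'איך': 'How', 'כיצד': 'How', 'כמה': 'How much/many',\n    'למה': 'Why', 'מדוע': 'Why',\n}\n\ndef question_type_distribution(questions):\n    # Cluster questions by their question word (typically the first token).\n    counts = Counter()\n    for q in questions:\n        tokens = q.split()\n        first = tokens[0] if tokens else ''\n        counts[QUESTION_WORDS.get(first, 'Other')] += 1\n    total = sum(counts.values()) or 1\n    return {label: count / total for label, count in counts.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Question Diversity",

"sec_num": "3.2"

},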
|
{ |
|
"text": "We measure the length in words (using whitespace tokenization) of each question and each answer. Figure 2 shows the distributions of annotated questions and answers. We observe that most questions use between 4-7 words, which is typical of simple questions in Hebrew. More complicated questions constitute 27.6% of the data, for example: \u202b\u05d2\u05d9\u05dc\u05d1\u05e8\u05d8?\u202c \u202b\u05e9\u05db\u05ea\u05d1\u05d5\u202c \u202b\u05d4\u05d0\u05d7\u05e8\u05d5\u05e0\u05d4\u202c \u202b\u05d4\u05d0\u05d5\u05e4\u05e8\u05d4\u202c \u202b\u05e0\u05e7\u05e8\u05d0\u05ea\u202c \u202b\u05d0\u05d9\u202c \u202b\u05d9\u05d7\u05d3\u05d9\u05d5\u202c \u202b\u05d5\u05e1\u05d0\u05dc\u05d9\u05d1\u202c (What is the last opera written jointly by Gilbert and Sullivan called?) There are even questions with only 2 words; due to Hebrew's rich morphology, these questions are usually translated to 3-4 words in English, e.g. ? \u202b\u05d4\u05de\u05e0\u05d9\u05db\u05d0\u05d9\u05d6\u202c \u202b\u05de\u05d4\u05d5\u202c (What is Manichaeism?) Answer lengths, however, can vary greatly, depending on whether the annotators wrote minimal spans (typically 1-4 words) or included supporting information in the answer spans (see Section 3.1).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 105, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sequence Length", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "As a morphologically-rich language (Tsarfaty et al., 2010; Seddah et al., 2013) , modern Hebrew exhibits a variety of non-trivial phenomena that are uncommon in English and could be challenging for NLP models . We can identify some of these phenomena in our dataset. Consider for example the following question-answer pair from the validation set: This example illustrates a morphological variation between the question and the answer: the same entity appears as a morpheme in a compound word in the question's text: \u202b\u05e9\u05d8\u05d7\u05d5\u202c (its area), \u202b\u05db\u05e9\u05d4\u05d5\u05e7\u202c (when it was established), but as a standalone word (i.e. without inflection) in the answer: \u202b\u05e9\u05d8\u05d7\u202c (area), \u202b\u05d4\u05d5\u05e7\u202c (was established). These phenomena make exact match-optimized predictions more difficult for models aimed to solve this task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 58, |
|
"text": "(Tsarfaty et al., 2010;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 59, |
|
"end": 79, |
|
"text": "Seddah et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linguistic Phenomena", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We establish baseline results for PARASHOOT using BERT-style models. Results indicate the task is challenging, leaving much room for future work in Hebrew NLP to advance the state of the art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We fine-tune three adaptations of BERT (Devlin et al., 2019) : mBERT, trained by the original authors on a corpus consisting of the entire Wikipedia dumps of 100 languages; HeBERT (Chriqui and Yahav, 2021) , trained on the OSCAR corpus (Ortiz Su\u00e1rez et al., 2020) and Hebrew Wikipedia;", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 60, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 180, |
|
"end": 205, |
|
"text": "(Chriqui and Yahav, 2021)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 263, |
|
"text": "Su\u00e1rez et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "AlephBERT (Seker et al., 2021) , also trained on the OSCAR corpus, with an additional 71.5 million tweets in Hebrew. All models are equivalent in size to BERT-base, i.e. 12 layers, 768 model dimensions, and 110M parameters in total.", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 30, |
|
"text": "(Seker et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "4.1" |
|
}, |
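
{

"text": "All three baselines can be loaded through the same interface; a minimal sketch (the checkpoint name shown is the public mBERT checkpoint, and HeBERT and AlephBERT are loaded analogously from their respective model-hub identifiers):\n\nfrom transformers import AutoModelForQuestionAnswering, AutoTokenizer\n\n# mBERT; HeBERT and AlephBERT are loaded the same way from their hub identifiers.\ncheckpoint = 'bert-base-multilingual-cased'\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\nmodel = AutoModelForQuestionAnswering.from_pretrained(checkpoint)  # adds a span-prediction head",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment Setup",

"sec_num": "4.1"

},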
|
{ |
|
"text": "We fine-tune the models using the default implementation of HuggingFace Transformers (Wolf et al., 2020) . We select the best model by validation set performance over the following hyperparameter grid: learning rate \u2208 {3e\u22125, 5e\u22125, 1e\u22124}, batch size \u2208 {16, 32, 64}, and update steps \u2208 {512, 800, 1024}. We compare the models' predictions to the annotated answer using token-wise F1 score and exact match (EM), as defined by Rajpurkar et al. (2016) . Table 4 shows the performance of each model on PARASHOOT, with mBERT achieving the highest performance (56.1 F1). We also observe significant variance across the models, with mBERT and AlephBERT performing significantly better than HeBERT. It is not immediately clear where this discrepancy stems from; one possibility is that the introduction of noisy data via multilinguality (mBERT) or tweets (AlephBERT) makes that model more robust to potential noise in the annotated questions (e.g. typos). Comparing these results to the estimated ceiling performance of 84 F1 (see Section 3.1), we can infer that PARASHOOT poses a genuine challenge to future Hebrew models and encourages further analysis of the semantic capabilities of the current models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 104, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 446, |
|
"text": "Rajpurkar et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 456, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "4.1" |
|
}, |
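
{

"text": "The evaluation metrics follow Rajpurkar et al. (2016); a minimal sketch of token-wise F1 and exact match over whitespace tokens (SQuAD's official script additionally normalizes punctuation and English articles; we omit that here for brevity):\n\nfrom collections import Counter\n\ndef exact_match(prediction, gold):\n    return int(prediction.strip() == gold.strip())\n\ndef token_f1(prediction, gold):\n    # Token-wise F1 over whitespace tokens, as in Rajpurkar et al. (2016).\n    pred_tokens, gold_tokens = prediction.split(), gold.split()\n    common = Counter(pred_tokens) & Counter(gold_tokens)\n    num_same = sum(common.values())\n    if num_same == 0:\n        return 0.0\n    precision = num_same / len(pred_tokens)\n    recall = num_same / len(gold_tokens)\n    return 2 * precision * recall / (precision + recall)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiment Setup",

"sec_num": "4.1"

},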
|
{ |
|
"text": "We analyze the error distribution by sampling 50 examples from the validation set and comparing AlephBERT's predictions to the annotated answers. Table 5 shows how the examples are distributed into five categories, accounting for every type of overlap between the model's prediction and the annotated answer. Putting aside exact matches (which account for about a quarter of examples), nearly half of the errors stem from zero overlap between the annotated answer and the model's prediction. We observe that a significant part of the sample (22%) contains cases where the annotated answer is a substring of the model's prediction, which might be, to a large extent, an artifact of the long answer annotations we observe in Section 3.1. For examples of erroneous predictions see Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 153, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Error Analysis", |
|
"sec_num": "4.3" |
|
}, |
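
{

"text": "The five categories in Table 5 can be assigned mechanically; the sketch below shows one way to do so (the category names and the token-overlap test for the middle cases are our own reconstruction).\n\ndef overlap_category(prediction, annotation):\n    # Classify a prediction against the annotated answer into one of five\n    # overlap categories (names are ours, mirroring Table 5).\n    pred, gold = prediction.strip(), annotation.strip()\n    if pred == gold:\n        return 'exact match'\n    if gold in pred:\n        return 'annotation is a substring of the prediction'\n    if pred in gold:\n        return 'prediction is a substring of the annotation'\n    if set(pred.split()) & set(gold.split()):\n        return 'partial overlap'\n    return 'no overlap'",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Error Analysis",

"sec_num": "4.3"

},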
|
{ |
|
"text": "In this paper, we present PARASHOOT, the first question answering dataset in modern Hebrew, in a style and data collection methodology similar to that of SQuAD. Figure 3 : Examples from the validation set. The text in bold shows crowd-annotated answers. The underlined text represents the (expert-annotated) minimal answer span. The first example demonstrates a non-minimal span that has some overlap with the question's text. The second example demonstrates a valid minimal span selection. Model \u2229 Annotation = \u2205 34% 46% Table 5 : An error analysis of 50 random examples from the validation set, based on AlephBERT's predictions. The first reflects exact matches, and the last case accounts for zero overlap between model prediction and annotated answer. The three categories in the middle refer to partially correct answers, where the model's prediction has some overlap with the annotated answer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 169, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 529, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "potential of this dataset for researchers and practitioners alike to develop better models and datasets for natural language understanding in Hebrew.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A portmanteau of paragraph and \u202b\"\u05ea\u202c \u202b\u05e9\u05d5\u202c (shoot), the Hebrew abbreviation of Q&A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The dataset is publicly available at https://github. com/omrikeren/ParaShoot", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The platform's code is based upon https://github. com/cdqa-suite/cdQA-annotator.6 7.50 GBP \u2248 10.50 USD, at the time of writing. 7 The authors are native speakers of modern Hebrew.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the Tel Aviv University Data Science Center, Len Blavatnik and the Blavatnik Family foundation, the Alon Scholarship, Intel Corporation, and the Yandex Initiative for Machine Learning. We thank Reut Tsarfaty for her valuable feedback.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": " Figure A .1: Predictions made by fine-tuned Aleph-BERT vs. annotated answers. In the first example, the prediction produced by the model is clearly an error. In the second example, the annotated answer span is excessively long, and the model predicts a more accurate substring of this span. In the third example, the model predicts a full sentence, while the annotated answer span is shorter.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1, |
|
"end": 9, |
|
"text": "Figure A", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "annex", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural modeling for named entities and morphology (nemo\u02c62)", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Bareket", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2007.15620" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Bareket and Reut Tsarfaty. 2020. Neural modeling for named entities and morphology (nemo\u02c62). arXiv preprint arXiv:2007.15620.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Hebert & hebemo: a hebrew bert model and a tool for polarity analysis and emotion recognition", |
|
"authors": [ |
|
{ |
|
"first": "Avihay", |
|
"middle": [], |
|
"last": "Chriqui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inbal", |
|
"middle": [], |
|
"last": "Yahav", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2102.01909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Avihay Chriqui and Inbal Yahav. 2021. Hebert & hebemo: a hebrew bert model and a tool for polar- ity analysis and emotion recognition. arXiv preprint arXiv:2102.01909.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vitaly", |
|
"middle": [], |
|
"last": "Nikolaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennimaria", |
|
"middle": [], |
|
"last": "Palomaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "454--470", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00317" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A bench- mark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454- 470.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "FQuAD: French question answering dataset", |
|
"authors": [ |
|
{ |
|
"first": "Wacim", |
|
"middle": [], |
|
"last": "Martin D'hoffschmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quentin", |
|
"middle": [], |
|
"last": "Belblidia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Heinrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxime", |
|
"middle": [], |
|
"last": "Brendl\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vidal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1193--1208", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.findings-emnlp.107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martin d'Hoffschmidt, Wacim Belblidia, Quentin Hein- rich, Tom Brendl\u00e9, and Maxime Vidal. 2020. FQuAD: French question answering dataset. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 1193-1208, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Annotation artifacts in natural language inference data", |
|
"authors": [ |
|
{ |
|
"first": "Swabha", |
|
"middle": [], |
|
"last": "Suchin Gururangan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Swayamdipta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "107--112", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2017" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "How much reading does reading comprehension require? a critical investigation of popular benchmarks", |
|
"authors": [ |
|
{ |
|
"first": "Divyansh", |
|
"middle": [], |
|
"last": "Kaushik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Lipton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5010--5015", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1546" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010-5015, Brussels, Belgium. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Korquad1. 0: Korean qa dataset for machine reading comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Seungyoung", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myungji", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jooyoul", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.07005" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seungyoung Lim, Myungji Kim, and Jooyoul Lee. 2019. Korquad1. 0: Korean qa dataset for ma- chine reading comprehension. arXiv preprint arXiv:1909.07005.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Neural Arabic question answering", |
|
"authors": [ |
|
{ |
|
"first": "Hussein", |
|
"middle": [], |
|
"last": "Mozannar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elie", |
|
"middle": [], |
|
"last": "Maamary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"El" |
|
], |
|
"last": "Hajal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hazem", |
|
"middle": [], |
|
"last": "Hajj", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--118", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4612" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussein Mozannar, Elie Maamary, Karl El Hajal, and Hazem Hajj. 2019. Neural Arabic question answer- ing. In Proceedings of the Fourth Arabic Natu- ral Language Processing Workshop, pages 108-118, Florence, Italy. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Universal Dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Daniel Zeman, Filip Ginter, and Francis Tyers. 2017. Universal Dependencies. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Tutorial Abstracts, Valencia, Spain. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A monolingual approach to contextualized word embeddings for mid-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Pedro Javier Ortiz", |
|
"middle": [], |
|
"last": "Su\u00e1rez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Romary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beno\u00eet", |
|
"middle": [], |
|
"last": "Sagot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1703--1714", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.156" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020. A monolingual approach to contextual- ized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "SQuAD: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2392", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Few-shot question answering by pretraining span selection", |
|
"authors": [ |
|
{ |
|
"first": "Ori", |
|
"middle": [], |
|
"last": "Ram", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuval", |
|
"middle": [], |
|
"last": "Kirstain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amir", |
|
"middle": [], |
|
"last": "Globerson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "3066--3079", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2021.acl-long.239" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, and Omer Levy. 2021. Few-shot ques- tion answering by pretraining span selection. In Pro- ceedings of the 59th Annual Meeting of the Associa- tion for Computational Linguistics and the 11th In- ternational Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3066-3079. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages", |
|
"authors": [ |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "K\u00fcbler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinho", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iakes", |
|
"middle": [], |
|
"last": "Goenaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Koldo Gojenola Galletebeitia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Spence", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Green", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Kuhlmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Maier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Przepi\u00f3rkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Seeker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alina", |
|
"middle": [], |
|
"last": "Woli\u0144ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wr\u00f3blewska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clergerie", |
|
"middle": [], |
|
"last": "Villemonte De La", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "146--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Djam\u00e9 Seddah, Reut Tsarfaty, Sandra K\u00fcbler, Marie Candito, Jinho D. Choi, Rich\u00e1rd Farkas, Jen- nifer Foster, Iakes Goenaga, Koldo Gojenola Gal- letebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceed- ings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146-182, Seattle, Washington, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Alephbert: A hebrew large pretrained language model to start-off your hebrew nlp application with", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Seker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elron", |
|
"middle": [], |
|
"last": "Bandel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Bareket", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Brusilovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2104.04052" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Shaked Greenfeld, and Reut Tsarfaty. 2021. Alephbert: A hebrew large pre- trained language model to start-off your hebrew nlp application with. arXiv preprint arXiv:2104.04052.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)?", |
|
"authors": [ |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Bareket", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stav", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Seker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7396--7408", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.660" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reut Tsarfaty, Dan Bareket, Stav Klein, and Amit Seker. 2020. From SPMRL to NMRL: What did we learn (and unlearn) in a decade of parsing morphologically-rich languages (MRLs)? In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7396- 7408, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Statistical parsing of morphologically rich languages (SPMRL) what, how and whither", |
|
"authors": [ |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Djam\u00e9", |
|
"middle": [], |
|
"last": "Seddah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Kuebler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Candito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jennifer", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Rehbein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lamia", |
|
"middle": [], |
|
"last": "Tounsi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Reut Tsarfaty, Djam\u00e9 Seddah, Yoav Goldberg, Sandra Kuebler, Yannick Versley, Marie Candito, Jennifer Foster, Ines Rehbein, and Lamia Tounsi. 2010. Sta- tistical parsing of morphologically rich languages (SPMRL) what, how and whither. In Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 1-12, Los Angeles, CA, USA. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Patrick Von Platen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yacine", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Jernite", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Canwen", |
|
"middle": [], |
|
"last": "Plu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Teven", |
|
"middle": [ |
|
"Le" |
|
], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sylvain", |
|
"middle": [], |
|
"last": "Scao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariama", |
|
"middle": [], |
|
"last": "Gugger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Drame", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "38--45", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.emnlp-demos.6" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "The annotation user interface, containing the article's title, the paragraph, a slot for entering a question, and an additional slot for entering the answer.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td/><td colspan=\"3\">#Articles #Paragraphs #Questions</td></tr><tr><td>Train</td><td>295</td><td>565</td><td>1792</td></tr><tr><td>Validation</td><td>33</td><td>63</td><td>221</td></tr><tr><td>Test</td><td>165</td><td>319</td><td>1025</td></tr><tr><td>Total</td><td>493</td><td>947</td><td>3038</td></tr></table>", |
|
"text": "www.prolific.co", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"text": "The number of unique articles, paragraphs, and questions in each split of PARASHOOT. The dataset is partitioned by articles.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>shows that the</td></tr></table>", |
|
"text": "Figure 2: The length distribution of questions (left) and answers (right) in the entire dataset.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>Question Word</td><td/><td>Frequency</td></tr><tr><td>What</td><td>. . . / \u202b/\u05de\u05d4\u05d5\u202c \u202b\u05de\u05d4\u202c</td><td>16.29%</td></tr><tr><td>Which</td><td>. . . / \u202b/\u05d0\u05d9\u05d6\u05d5\u202c \u202b\u05d0\u05d9\u05d6\u05d4\u202c</td><td>15.84%</td></tr><tr><td>Who</td><td>. . . / \u202b/\u05de\u05d9\u05d4\u05d5\u202c \u202b\u05de\u05d9\u202c</td><td>14.03%</td></tr><tr><td>When</td><td>\u202b/\u05de\u05de\u05ea\u05d9\u202c \u202b\u05de\u05ea\u05d9\u202c</td><td>13.57%</td></tr><tr><td>Where</td><td>. . . / \u202b/\u05d4\u05d9\u05db\u202c \u202b\u05d0\u05d9\u05e4\u05d4\u202c</td><td>10.86%</td></tr><tr><td>How</td><td>\u202b/\u05db\u05d9\u05e6\u05d3\u202c \u202b\u05d0\u05d9\u202c</td><td>6.79%</td></tr><tr><td colspan=\"2\">How much/many . . . / \u202b/\u05d1\u05db\u05de\u05d4\u202c \u202b\u05db\u05de\u05d4\u202c</td><td>5.43%</td></tr><tr><td>Why</td><td>\u202b/\u05de\u05d3\u05d5\u05e2\u202c \u202b\u05dc\u05de\u05d4\u202c</td><td>4.52%</td></tr></table>", |
|
"text": "Distribution of annotated answer span quality, based on manual analysis of 50 examples from the validation set.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td colspan=\"2\">haya</td><td>shitkho</td><td>shel</td><td colspan=\"2\">kfar</td><td>shmaryahu</td></tr><tr><td colspan=\"2\">was</td><td>area-of-it</td><td>of</td><td colspan=\"2\">Kfar</td><td>Shmaryahu</td></tr><tr><td colspan=\"3\">kshe-hukam</td><td/><td/></tr><tr><td colspan=\"4\">when-was.established</td><td/></tr><tr><td colspan=\"6\">'What was Kfar Shmaryahu's area when it</td></tr><tr><td colspan=\"3\">was established?'</td><td/><td/></tr><tr><td colspan=\"4\">A: ... \u202b\u05e9\u05dc\u202c \u202b\u05e9\u05d8\u05d7\u202c \u202b\u05e2\u05dc\u202c \u202b\u05d4\u05d5\u05e7\u202c \u202b\u05d4\u05d9\u05e9\u05d5\u05d1\u202c</td><td/></tr><tr><td colspan=\"2\">ha-yeshuv</td><td>hukam</td><td/><td/><td>al</td><td>shetakh</td></tr><tr><td colspan=\"2\">the-village</td><td colspan=\"3\">was.established</td><td>on</td><td>area</td></tr><tr><td>shel</td><td>...</td><td/><td/><td/></tr><tr><td>of</td><td>...</td><td/><td/><td/></tr><tr><td colspan=\"6\">'The village was established on an area of ...'</td></tr></table>", |
|
"text": "Q: ? \u202b\u05db\u05e9\u05d4\u05d5\u05e7\u202c \u202b\u05e9\u05de\u05e8\u05d9\u05d4\u05d5\u202c \u202b\u05db\u05e4\u05e8\u202c \u202b\u05e9\u05dc\u202c \u202b\u05e9\u05d8\u05d7\u05d5\u202c \u202b\u05d4\u05d9\u05d4\u202c \u202b\u05de\u05d4\u202c ma what", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"content": "<table/>", |
|
"text": "Baseline performance on the test set.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td/><td/><td/><td>...</td><td colspan=\"3\">\u202b\u05de\u05d5\u05e8\u05dc\u05d9)\u202c \u202b(\u05e7\u05d0\u05e8\u05df\u202c</td><td colspan=\"2\">\u202b\u05dc\u05d5\u05e7\u05d0\u05e1\u202c \u202b\u05e9\u05d0\u05e8\u05dc\u05d5\u05d8\u202c</td><td colspan=\"2\">\u202b\u05d4\u05d8\u05d5\u05d1\u05d4\u202c \u202b\u05d7\u05d1\u05e8\u05ea\u05d4\u202c \u202b\u05e2\u05dd\u202c \u202b\u05de\u05ea\u05d0\u05e8\u05e1\u202c \u202b\u05d4\u05d5\u05d0\u202c</td><td>\u202b\u05de\u05db\u05df\u202c \u202b\u05dc\u05d0\u05d7\u05e8\u202c</td><td>...</td><td>Context :</td></tr><tr><td/><td/><td/><td/><td>\u2026</td><td>He l</td><td colspan=\"2\">ater becomes</td><td colspan=\"2\">engaged</td><td>to</td><td>her best friend</td><td>Charlotte Lucas</td><td>(Karen</td><td>Morely)</td><td>...</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>?</td><td>\u202b\u05de\u05ea\u05d0\u05e8\u05e1\u202c \u202b\u05e7\u05d5\u05dc\u05d9\u05e0\u05e1\u202c \u202b\u05de\u05e8\u202c \u202b\u05dc\u05de\u05d9\u202c</td><td>Question :</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>To whom does Mr. Collins get engaged?</td></tr><tr><td/><td>...</td><td colspan=\"9\">\u202b\u05d4\u05e4\u05e1\u05e7\u05d4\u202c \u202b\u05dc\u05dc\u05d0\u202c \u202b\u05e9\u05e0\u05de\u05e9\u05db\u05ea\u202c \u202b\u05e7\u05de\u05d5\u05e0\u05d9\u05e7\u05d4\u202c \u202b\u05d1\u05d5\u05d5\u05d0\u05dc\u202c \u202b\u05d0\u05e8\u05db\u05d0\u05d5\u05dc\u05d5\u05d2\u05d9\u05d5\u05ea\u202c \u202b\u05d7\u05e4\u05d9\u05e8\u05d5\u05ea\u202c \u202b\u05e9\u05dc\u202c \u202b\u05ea\u05e7\u05d5\u05e4\u05d4\u202c \u202b\u05d4\u05ea\u05d7\u05d9\u05dc\u05d4\u202c</td><td>20</td><td>-</td><td>\u202b\u05d4\u202c \u202b\u05d4\u05de\u05d0\u05d4\u202c \u202b\u05e9\u05dc\u202c</td><td>60</td><td>\u202b\u05d4\u202c \u202b\u05e9\u05e0\u05d5\u05ea\u202c -</td><td>\u202b\u05de\u202c</td><td>...</td><td>Context :</td></tr><tr><td>\u2026</td><td>From</td><td>the 60s of the 20th century</td><td colspan=\"8\">began a period of archeological excavations in Val</td><td>camonica</td><td>that continues unabated</td><td>\u2026</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td colspan=\"4\">\u202b\u05e7\u05de\u05d5\u05e0\u05d9\u05e7\u05d4?\u202c \u202b\u05d1\u05d5\u05d5\u05d0\u05dc\u202c \u202b\u05d0\u05e8\u05db\u05d0\u05d5\u05dc\u05d5\u05d2\u05d9\u05d5\u05ea\u202c \u202b\u05d7\u05e4\u05d9\u05e8\u05d5\u05ea\u202c \u202b\u05e9\u05dc\u202c \u202b\u05ea\u05e7\u05d5\u05e4\u05d4\u202c \u202b\u05d4\u05ea\u05d7\u05d9\u05dc\u05d4\u202c \u202b\u05de\u05ea\u05d9\u202c</td><td>Question :</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"5\">When did a period of archeological excavations begin in Valcamonica?</td></tr></table>", |
|
"text": "Baseline results demonstrate the", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |