{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:47:22.042541Z"
},
"title": "A New Dataset and Efficient Baselines for Document-level Text Simplification in German",
"authors": [
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Nicolas",
"middle": [],
"last": "Spring",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tannon",
"middle": [],
"last": "Kew",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Marek",
"middle": [],
"last": "Kostrzewa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": ""
},
{
"first": "Andreas",
"middle": [],
"last": "S\u00e4uberli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sarah",
"middle": [],
"last": "Ebling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Zurich",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of document-level text simplification is very similar to summarization with the additional difficulty of reducing complexity. We introduce a newly collected data set of German texts, collected from the Swiss news magazine 20 Minuten ('20 Minutes') that consists of full articles paired with simplified summaries. Furthermore, we present experiments on ATS with the pretrained multilingual mBART and a modified version thereof that is more memoryfriendly, using both our new data set and existing simplification corpora. Our modifications of mBART let us train at a lower memory cost without much loss in performance, in fact, the smaller mBART even improves over the standard model in a setting with multiple simplification levels.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of document-level text simplification is very similar to summarization with the additional difficulty of reducing complexity. We introduce a newly collected data set of German texts, collected from the Swiss news magazine 20 Minuten ('20 Minutes') that consists of full articles paired with simplified summaries. Furthermore, we present experiments on ATS with the pretrained multilingual mBART and a modified version thereof that is more memoryfriendly, using both our new data set and existing simplification corpora. Our modifications of mBART let us train at a lower memory cost without much loss in performance, in fact, the smaller mBART even improves over the standard model in a setting with multiple simplification levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text simplification is the process of reducing the complexity of a text to make it more easily understandable and improve its accessibility for a wider audience. Depending on the use case, target groups of simplified texts may include low-proficiency readers such as persons with intellectual disabilities, prelingually deaf persons, or non-native readers. Automatic text simplification (ATS) employs natural language processing methods for generating a simplified version of a given text in standard language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In general, simplification often results in a reduction of content similar to summarization, but with additional syntactic and lexical changes. Considering only a compression ratio in terms of sentence length or word count can be somewhat misleading since the simplified documents often elaborate on concepts and split complex sentences into smaller units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Research on text simplification for German is still sparse but has gained momentum in recent years due to a number of legal and political developments in German-speaking countries, such as the introduction of a set of regulations for accessible information technology (Barrierefreie-Informationstechnik-Verordnung, BITV 2.0) in Germany, the approval of rules for accessible information and communication (Barrierefreie Information und Kommunikation, BIK) in Austria, and the ratification of the United Nations Convention on the Rights of Persons with Disabilities (UN CRPD) in Germany, Austria, and Switzerland.",
"cite_spans": [
{
"start": 404,
"end": 454,
"text": "(Barrierefreie Information und Kommunikation, BIK)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we report on two contributions regarding ATS for German:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We introduce a new data set of simplified news articles from the Swiss daily magazine 20 Minuten ('20 Minutes'). The source side of the corpus contains the full, standard German news, whereas the target side consists of a shortened and simplified version that is meant to give readers an easy and fast-to-read overview. 1 2. We apply an adapted version of the mBART model (Liu et al., 2020) to the task of document-level ATS. The model needs to learn to reduce the content of the original document to the most salient parts, just as in summarization tasks. However, on top of that, the model also needs to account for linguistic changes that correspond to the targeted simplification level. 2",
"cite_spans": [
{
"start": 375,
"end": 393,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition to the new 20 Minuten data set, we evaluate our adapted mBART model with pre-existing corpora for German ATS (see Section 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, ATS has relied on rule-based approaches in separate steps, e.g. lexical substitutions followed by syntactic modifications. In the case of lexical simplification (i.e. the identification of difficult words and the substitution with simpler synonyms), most modern approaches include features based on semantics, context, and language models (Glava\u0161 and \u0160tajner, 2015; Qiang et al., 2020) . Syntactic simplification (the identification and simplification of difficult syntactic structures) is mostly done using manually written rules applied to a syntax tree (Siddharthan, 2006; Scarton et al., 2017) . Such systems are still among the most successful for languages with little simplification data such as Basque (Aranzabe et al., 2012), Bulgarian (Lozanova et al., 2013) , or French (Brouwers et al., 2014) .",
"cite_spans": [
{
"start": 354,
"end": 380,
"text": "(Glava\u0161 and \u0160tajner, 2015;",
"ref_id": "BIBREF7"
},
{
"start": 381,
"end": 400,
"text": "Qiang et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 571,
"end": 590,
"text": "(Siddharthan, 2006;",
"ref_id": "BIBREF23"
},
{
"start": 591,
"end": 612,
"text": "Scarton et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 760,
"end": 783,
"text": "(Lozanova et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 796,
"end": 819,
"text": "(Brouwers et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For languages with enough parallel data (i.e. mainly English), data-driven approaches that rely on machine learning have emerged, where ATS is most often framed as a monolingual machine translation task. Statistical machine translation has been applied to learn complex-simple phrase correspondences from parallel sentence-aligned corpora (Wubben et al., 2012) , sometimes in conjunction with rule-based simplification (Narayan and Gardent, 2014) or via integration of syntactic information through syntax-based SMT (Xu et al., 2016a) .",
"cite_spans": [
{
"start": 339,
"end": 360,
"text": "(Wubben et al., 2012)",
"ref_id": "BIBREF27"
},
{
"start": 419,
"end": 446,
"text": "(Narayan and Gardent, 2014)",
"ref_id": "BIBREF17"
},
{
"start": 516,
"end": 534,
"text": "(Xu et al., 2016a)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, neural machine translation (NMT) has been used to train models to directly map complex to simple sentences. Supervised learning with recurrent or transformer architectures dominate current state-of-the-art research, some with additional simplification-specific adaptations such as lexical constraints, rule-based preprocessing, or parametrization mechanisms (Nisioi et al., 2017; Zhang and Lapata, 2017; Sulem et al., 2018; Mallinson and Lapata, 2019; Kriz et al., 2019; Martin et al., 2020a) . Some unsupervised or semi-supervised neural models, which reduce the need for parallel data, have reached similar performances (Surya et al., 2019; Kumar et al., 2020; Zhao et al., 2020; Martin et al., 2020b) . Finally, experiments with multi-task learning have shown promising results (Guo et al., 2018; Dmitrieva and Tiedemann, 2021) , with the possibility of zero-shot translations for languages without any parallel data (Mallinson et al., 2020) . These approaches represent the current state of the art, but are largely limited to English (Al-Thanyyan and Azmi, 2021) due to a lack of training data in other languages. Initial experiments with German are ongoing (Battisti et al., 2020).",
"cite_spans": [
{
"start": 373,
"end": 394,
"text": "(Nisioi et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 395,
"end": 418,
"text": "Zhang and Lapata, 2017;",
"ref_id": "BIBREF30"
},
{
"start": 419,
"end": 438,
"text": "Sulem et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 439,
"end": 466,
"text": "Mallinson and Lapata, 2019;",
"ref_id": "BIBREF13"
},
{
"start": 467,
"end": 485,
"text": "Kriz et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 486,
"end": 507,
"text": "Martin et al., 2020a)",
"ref_id": "BIBREF15"
},
{
"start": 637,
"end": 657,
"text": "(Surya et al., 2019;",
"ref_id": "BIBREF25"
},
{
"start": 658,
"end": 677,
"text": "Kumar et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 678,
"end": 696,
"text": "Zhao et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 697,
"end": 718,
"text": "Martin et al., 2020b)",
"ref_id": null
},
{
"start": 796,
"end": 814,
"text": "(Guo et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 815,
"end": 845,
"text": "Dmitrieva and Tiedemann, 2021)",
"ref_id": "BIBREF6"
},
{
"start": 935,
"end": 959,
"text": "(Mallinson et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When simplifying text, operations often occur across sentence borders, affecting the structure of a text as a whole. This complicates the use of sentence alignment and limits the effectiveness of sentence-level simplification models. Initial experiments exist that use document-level data to avoid these problems (Zhong et al., 2020; Dmitrieva and Tiedemann, 2021) .",
"cite_spans": [
{
"start": 313,
"end": 333,
"text": "(Zhong et al., 2020;",
"ref_id": "BIBREF32"
},
{
"start": 334,
"end": 364,
"text": "Dmitrieva and Tiedemann, 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we treat text simplification as a document-level task similar to summarization: the model needs to identify the most relevant information from the original text and generate a condensed version thereof. On top of that, the model should ideally learn to modify syntactic structures (e.g. split long sentences) and replace complex words (e.g. compound nouns) with simpler alternatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We introduce a new data set collected from the Swiss news magazine 20 Minuten that consists of full articles paired with shortened, simplified summaries that serve as a quick \"tl;dr\" for the reader. In contrast to other data used in our work, this data set does not distinguish different simplification levels. The corpus contains a total of 18,305 articles published since 2020. For each article we collect the title, the lead, the full news text, and the summary. We also keep track of paragraph formatting, even though this information is not used in the models presented in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
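{
"text": "To make the structure of the corpus concrete, here is a minimal sketch of one 20 Minuten record in Python (the field names are illustrative assumptions, not necessarily the keys of the released data):\n\n# One record: full article on the source side, simplified summary on the target side\narticle = {\n    'title': 'headline of the article',\n    'lead': 'short teaser paragraph',\n    'text': ['paragraph 1', 'paragraph 2'],  # paragraph boundaries are preserved\n    'summary': 'shortened, simplified overview',\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},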
{
"text": "Additionally, we use a combination of two existing corpora for German ATS that explicitly label the difficulty level of the target documents according to the Common European Framework of Reference for Languages (CEFR) (Council of Europe, 2009) . For some documents, we have multiple levels of simplification available. 3 The levels available to us are A1, A2 and B1 (from most simplified to close to standard German). The three corpora we use for our experiments have the following characteristics:",
"cite_spans": [
{
"start": 230,
"end": 243,
"text": "Europe, 2009)",
"ref_id": null
},
{
"start": 319,
"end": 320,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "APA is an extended version of the Austrian Press Agency corpus described in . This data set contains news articles professionally simplified to levels A2 and B1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "20m is a newly collected corpus from the Swiss news portal 20 Minuten. Similar to the APA data, these are news articles paired with condensed, simplified summaries. The target side in this corpus does not distinguish between simplification levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "capito is a corpus of documents from capito, the largest provider of human simplification services for German. This data set covers a wide range of topics and domains, from official information (e.g. what to do in case of a suspected covid infection) to local news, technical guidelines and instruction manuals. The capito documents are much more varied than the other data sets, both in content and length. The simplified target texts in this corpus cover levels A1, A2 and B1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "For both the APA and the 20m data set, the compression ratio is comparable to summaries, as the simplified documents are generally much shorter than the original text. For the capito data, this is not always the case, at least in terms of word count; the simplified texts often elaborate on concepts or processes, which leads to a similar word count between the standard and the simplified documents. However, regarding content, the simplified texts usually do condense the original information to the most salient facts. For this reason, we argue that even on this data set, the task is very similar to summarization. Table 1 illustrates the size of the different data sets and compression ratios according to simplification levels. See Appendix A.2 for samples from all three data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
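{
"text": "As a worked example of the compression ratios in Table 1, here is a minimal sketch, under the assumption that the ratio is the word count of the simplified document divided by that of the source (our reading; the paper gives no explicit formula):\n\ndef compression_ratio(source: str, simplified: str) -> float:\n    # ratio of simplified to original document length in words\n    return len(simplified.split()) / len(source.split())\n\n# a 1000-word article with a 110-word simplified summary\nsource = ' '.join(['w'] * 1000)\nsimplified = ' '.join(['w'] * 110)\nprint(round(compression_ratio(source, simplified), 2))  # 0.11, i.e. 11% as for 20m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},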
{
"text": "Initial experiments showed that fine-tuning the standard pretrained mBART model (Liu et al., 2020) from Huggingface (Wolf et al., 2020) , performs relatively well with our data, however, training is very memory-intensive, requiring a 32GB GPU even with a small batch size. For this reason, we modify the original model to allow us to train on devices with less memory. 4 Our modifications are based on the code for BART with Longformer attention by the Allen Institute for AI (Beltagy et al., 2020) . 5 As in the BART model with Longformer attention, we swap the standard attention in the mBART encoder for Longformer's windowed attention. 6 This allows for increasing the maximum input positions and avoids having to truncate long source documents to a predefined length. We use a maximum input length of 4096 to cover most of the documents in our data. The new positional embeddings are initialized with a copy of the original pretrained embeddings of size 1024, as described in Beltagy et al. (2020) . The decoder remains unchanged with a maximum sequence length of 1024.",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 116,
"end": 135,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 369,
"end": 370,
"text": "4",
"ref_id": null
},
{
"start": 476,
"end": 498,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 501,
"end": 502,
"text": "5",
"ref_id": null
},
{
"start": 640,
"end": 641,
"text": "6",
"ref_id": null
},
{
"start": 981,
"end": 1002,
"text": "Beltagy et al. (2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},
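{
"text": "A minimal sketch of the positional-embedding initialization described above (illustrative code, not the authors' implementation; the random tensor stands in for mBART's pretrained positional weights):\n\nimport torch\n\ndef extend_positions(pretrained: torch.Tensor, new_len: int) -> torch.Tensor:\n    # Tile copies of the pretrained 1024-position matrix until the extended\n    # length (e.g. 4096) is filled, as described in Beltagy et al. (2020).\n    old_len, d_model = pretrained.shape\n    out = pretrained.new_empty(new_len, d_model)\n    for start in range(0, new_len, old_len):\n        n = min(old_len, new_len - start)\n        out[start:start + n] = pretrained[:n]\n    return out\n\npretrained = torch.randn(1024, 1024)  # stand-in for the real weights\nextended = extend_positions(pretrained, 4096)\nassert extended.shape == (4096, 1024)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},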
{
"text": "Furthermore, we reduce the original mBART vocabulary from 250k to 20k, keeping only those subwords and their embeddings that are most relevant for German. 7 We apply the pretrained multilingual sentencepiece model to \u223c4.5 million German sentences 8 and use the most frequent 20k subwords to filter the original mBART vocabulary.",
"cite_spans": [
{
"start": 155,
"end": 156,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},
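{
"text": "A minimal sketch of the vocabulary trimming (illustrative only; the file paths are assumptions, and a real run would also keep special and language tokens):\n\nfrom collections import Counter\nimport sentencepiece as spm\nimport torch\n\nsp = spm.SentencePieceProcessor(model_file='sentencepiece.bpe.model')\n\n# count subword frequencies over the German corpus\ncounts = Counter()\nwith open('german_sentences.txt', encoding='utf-8') as f:\n    for line in f:\n        counts.update(sp.encode(line.strip(), out_type=str))\n\n# keep the 20k most frequent pieces and trim the embedding matrix to those rows\nkeep_ids = sorted(sp.piece_to_id(p) for p, _ in counts.most_common(20000))\nfull_embeddings = torch.randn(250027, 1024)  # stand-in for mBART's matrix\ntrimmed = full_embeddings[torch.tensor(keep_ids)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},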
{
"text": "We then extend the special language tokens with tags for the different simplification levels (e.g. \"de_A1\"). These are initialized with the pretrained embedding for the German language tag (\"de_DE\") and updated during fine-tuning. And lastly, we add the option to train and translate mixed batches with multiple target language labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},
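{
"text": "A minimal sketch of the level-tag initialization (illustrative; the checkpoint name is an assumption, and our actual code operates on the modified model):\n\nimport torch\nfrom transformers import MBartForConditionalGeneration, MBartTokenizer\n\nname = 'facebook/mbart-large-cc25'\ntokenizer = MBartTokenizer.from_pretrained(name)\nmodel = MBartForConditionalGeneration.from_pretrained(name)\n\n# add tags for the simplification levels and copy the 'de_DE' embedding\nlevel_tags = ['de_A1', 'de_A2', 'de_B1']\ntokenizer.add_special_tokens({'additional_special_tokens': level_tags})\nmodel.resize_token_embeddings(len(tokenizer))\n\nemb = model.get_input_embeddings().weight\nde_id = tokenizer.convert_tokens_to_ids('de_DE')\nwith torch.no_grad():\n    for tag in level_tags:\n        emb[tokenizer.convert_tokens_to_ids(tag)] = emb[de_id]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},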
{
"text": "We train our models with early stopping according to rougeL on a held-out validation set. The models converge after training for 2 to 5 days, the exact configuration and hyperparameters can be found in Appendix A.1. All models are trained on a single V100 GPU with the same accumulated batch size (60), but note that the standard mBART can only fit a batch size of 1 on the GPU, whereas our modified version can fit 4 samples in a batch and thus needs fewer accumulation steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},
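{
"text": "The accumulated batch size works out as follows (a simple illustration of the arithmetic, not the training code):\n\n# both setups see the same effective batch of 60 samples per update\neffective_batch = 60\nfor model_name, per_device in [('standard mBART', 1), ('small mBART', 4)]:\n    steps = effective_batch // per_device\n    print(model_name, ':', per_device, 'x', steps, '=', per_device * steps)\n# standard mBART needs 60 accumulation steps, small mBART only 15",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model and Training",
"sec_num": "4"
},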
{
"text": "The results in Table 2 for the CEFR-labeled APA+capito data clearly show that with higher simplification levels, the task becomes harder: scores for both the standard mBART and our modified 6 We use a more recent version of both pytorch lightning and huggingface libraries and therefore have to make some changes, not only to the Longformer code, but also to the mBART model in huggingface itself. All code will be released upon publication. 7 The step of trimming the embedding matrix is the most effective in reducing the size of the model and allowing it to be fine-tuned on smaller devices.",
"cite_spans": [
{
"start": 190,
"end": 191,
"text": "6",
"ref_id": null
},
{
"start": 442,
"end": 443,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "8 Parts of the Common Crawl corpus 2019, News Commentary v15, Europarl v10 (all available from http://www. statmt.org/wmt20/translation-task.html) and our own data, see Section 3. train dev test compression ratio capito APA 20m capito APA 20m capito APA 20m capito APA 20m A1 652 --50 --50 --54% --A2 1708 2250 -87 113 -91 109 -97% 23% -B1 1074 2302 -56 144 -65 135 -98% 25% simple --17905 --200 --200 --11% Table 2 : Results of automatic simplification with fine-tuned standard mBART and our modified, smaller version with longformer attention (small mBART). Since standard mBART does not have labels for simplification levels, target language is set to 'de_DE' for fine-tuning and evaluation. Decoding for all models is done with beam size=6. version ('small mBART') generally decrease with increasing distance to standard German. 9 The mBART modifications to reduce memory-usage come at a small loss in performance according to rougeL and BLEU on the 20m data set. However, this smaller model with the additional language level tags outperfoms standard mBART on the APA+capito data set. Overall, the 20m articles are harder to simplify, since the compression ratio is relatively high (11%, see Table 1 ).",
"cite_spans": [
{
"start": 864,
"end": 865,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 169,
"end": 432,
"text": "Section 3. train dev test compression ratio capito APA 20m capito APA 20m capito APA 20m capito APA 20m A1 652 --50 --50 --54% --A2 1708 2250 -87 113 -91 109 -97% 23% -B1 1074 2302 -56 144 -65 135 -98% 25% simple --17905 --200 --200",
"ref_id": "TABREF0"
},
{
"start": 439,
"end": 446,
"text": "Table 2",
"ref_id": null
},
{
"start": 1228,
"end": 1235,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this paper, we have introduced a data set of simplified news articles from the Swiss magazine 20 Minuten, aligned on document level. The task of document-level simplification resembles that of summarization, as models need to identify the salient parts and produce a condensed version of the original text. For simplification, models should also learn to simplify syntactic structures and lexical items. Experiments based on fine-tuning the pretrained 9 Apart from BLEU and rougeL, we evaluate with SARI (Xu et al., 2016b) , a metric introduced specifically for ATS.",
"cite_spans": [
{
"start": 455,
"end": 456,
"text": "9",
"ref_id": null
},
{
"start": 507,
"end": 525,
"text": "(Xu et al., 2016b)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "mBART model from huggingface show that the model can learn to produce not just condensed, but also simpler output. Our added modifications make mBART fine-tuning significantly more memoryfriendly. Since the new 20m data set does not distinguish between simplification levels, we use an existing data set annotated with CEFR levels to evaluate our models according to specific simplification levels. Results show that our modified mBART, while using considerably less memory, can simplify documents without much loss in performance on the 20m data and even improves over standard mBART on documents labeled with CEFR tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In future work, we will conduct ablation studies to measure the effect of our modifications individually, specifically, seeing whether using windowed attention to give the model access to the full source document instead of a clipped version is beneficial. Lastly, automatic evaluation with metrics such as rougeL, BLEU, and SARI do not provide sufficient insights. To get more accurate feedback and better understand issues specific to simplification, we plan to conduct an evaluation with professional translators. Then please sign this piece of paper. Table 4 : capito simplification example for levels A2 and A1 with elaborations. Document length for capito varies considerably, from documents with one sentence to documents with several thousand sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 562,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Original (standard German) | English:\n\"Lonely Planet\" k\u00fcrt Salzburg f\u00fcr 2020 zur besten Stadt | \"Lonely Planet\" selects Salzburg as the best city to visit in 2020\nSalzburg ist im kommenden Jahr f\u00fcr den Reisebuchverlag \"Lonely Planet\" die beste Stadt zum Bereisen. | Salzburg is the best city to travel to next year according to the travel book publisher \"Lonely Planet\".\nIm neuen \"Lonely Planets Best in Travel 2020\" f\u00fchrt die Mozartstadt das Ranking in der Kategorie der St\u00e4dte nicht zuletzt wegen des 100-Jahr-Jubil\u00e4ums der Festspiele an. | In the new \"Lonely Planet's Best in Travel 2020\", the city of Mozart leads the ranking in the category of cities, not least because of the 100th anniversary of the festival.\nDer Reisef\u00fchrer \"Best in Travel\" k\u00fcrt jedes Jahr zehn Top-St\u00e4dte, -L\u00e4nder und -Regionen. | The \"Best in Travel\" publication selects the top ten cities, countries and regions each year.\n\"Trommelwirbel, bitte\", hei\u00dft es auf der Homepage des Verlages. | \"Drum roll, please,\" reads the publisher's homepage.\n\"Der Herzensbrecher einer Alpenstadt besingt das Jubil\u00e4um in vollen T\u00f6nen.\" | \"The heartbreaker of an Alpine city celebrates the anniversary in full tones.\"\nSalzburg f\u00fchrt das Ranking 2020 vor den St\u00e4dten Washington DC, Kairo, dem irischen Galway und der Beethoven-Stadt Bonn an. | Salzburg leads the 2020 ranking ahead of Washington DC, Cairo, Galway, Ireland, and Bonn, the city of Beethoven.\nIn der L\u00e4nderkategorie liegt Buthan voran, als Top-Region wurde die Seidenstra\u00dfe in Zentralasien angegeben. | Buthan leads in the country category, with the Silk Road in Central Asia given as the top region.\n\u00d6sterreich kommt im Ranking 2020 kein zweites Mal vor. | Austria does not appear a second time in the 2020 ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
},
{
"text": "B1 simplification, German | English:\nReisef\u00fchrer erkl\u00e4rt Salzburg zur besten Stadt der Welt | Travel guide declares Salzburg the best city in the world\nDie \u00f6sterreichische Stadt Salzburg ist weltweit die beste Stadt zum Bereisen im kommenden Jahr. | The Austrian city of Salzburg is the best city in the world to travel in the coming year.\nDas sagt die Rangliste des britischen Reisef\u00fchrers \"Lonely Planet\". | The ranking of the British travel guide \"Lonely Planet\" says so.\n\"Lonely Planet\" erstellt jedes Jahr eine Rangliste der besten 10 St\u00e4dte, L\u00e4nder und Regionen auf der ganzen Welt. | \"Lonely Planet\" ranks the best 10 cities, countries and regions around the world each year.\nF\u00fcr das Jahr 2020 liegt Salzburg auf Platz 1. | For the year 2020, Salzburg made it to first place.\nSalzburg f\u00fchrt vor den St\u00e4dten Washington in den USA, Kairo in \u00c4gypten, Galway in Irland und Bonn in Deutschland. | Salzburg leads, ahead of the following cities: Washington in the USA, Cairo in Egypt, Galway in Ireland and Bonn in Germany.\nIn der Rangliste der besten L\u00e4nder zum Bereisen 2020 gewann das Land Buthan in S\u00fcd-Asien. | In the ranking of the best countries to travel in 2020, the country of Buthan in South Asia won.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B1",
"sec_num": null
},
{
"text": "A2 simplification, German | English:\nSalzburg ist 2020 die beste Stadt zum Bereisen | Salzburg is the best city to travel in 2020\nDie Stadt Salzburg ist im Jahr 2020 die beste Stadt zum Bereisen. | The city of Salzburg is the best city to travel in 2020.\nDas sagt der Verlag von den Reise-B\u00fcchern namens Lonely Planet. | The publisher of travel books called Lonely Planet says so.\nSalzburg gewann vor den St\u00e4dten Washington in den USA, Kairo in \u00c4gypten und Galway in Irland. | Salzburg won ahead of the following cities: Washington in the USA, Cairo in Egypt and Galway in Ireland.\nDer Verlag sucht jedes Jahr die besten 10 St\u00e4dte, L\u00e4nder und Regionen zum Bereisen. | Every year, the publisher looks for the best 10 cities, countries and regions to travel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A2",
"sec_num": null
},
{
"text": "The data set is available from: https://github. com/ZurichNLP/20Minuten2 Code is available from: https://github.com/ a-rios/longmbart",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that our train/dev/test split is based on document IDs: if a document has multiple versions in different levels, we assign all of those to the same split, in order to avoid a scenario where we would train on a document de\u2192A2 and then test on the same document with de\u2192B1, as this would give the model an unfair advantage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
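{
"text": "A minimal sketch of this document-ID-based split (helper and variable names are ours, not from the released code):\n\nimport random\n\ndef split_by_doc_id(doc_ids, seed=42, dev_frac=0.1, test_frac=0.1):\n    # shuffle unique document IDs so that all simplification levels of a\n    # document end up in the same split\n    ids = sorted(set(doc_ids))\n    random.Random(seed).shuffle(ids)\n    n_dev = int(len(ids) * dev_frac)\n    n_test = int(len(ids) * test_frac)\n    return {'dev': set(ids[:n_dev]),\n            'test': set(ids[n_dev:n_dev + n_test]),\n            'train': set(ids[n_dev + n_test:])}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},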
{
"text": "All models in this paper are trained on 32GB V100 GPUs for comparability to the baseline standard mBART, but with our modifications, we can load and fine-tune mBART on smaller GPUs (tested on a single 12GB Titan X).5 https://github.com/allenai/longformer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "At that moment, a 22-year-old motorcyclist was driving behind her. Wie die Kantonspolizei St. Gallen mitteilt, war das Handy mittlerweile zu Boden gefallen und der T\u00f6fffahrer richtete seinen Blick auf den Gegenstand am Boden.According to the cantonal police of St. Gallen, the cell phone had fallen to the ground and the motorcyclist turned his gaze to the object on the ground.Dabei bemerkte der Mann nicht, dass das Auto vor ihm abbremst.In doing so, the man did not notice that the car in front of him was slowing down. Er prallte mit dem T\u00f6ff in das Auto der 58-J\u00e4hrigen. He crashed his motorcycle into the car of the 58-yearold. Dabei erlitt der T\u00f6fffahrer unbestimmte Verletzungen.The driver of the motorcycle suffered unspecified injuries. Mit einem Rettungswagen wurde er ins Spital gebracht.He was taken to hospital in an ambulance.Laut der Polizei entstand ein Sachschaden von mehr als 20\"000 Franken.According to the police, the damage to property amounted to more than 20,000 Swiss francs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "Eine Autofahrerin hat ihr Handy auf dem Dach vergessen.A car driver forgot her cell phone on the roof.Als sie das bemerkte, bremste sie w\u00e4hrend der Fahrt ab.When she realized, she braked abruptly while driving.Ein T\u00f6fffahrer hinter ihr war durch das heruntergefallene Handy abgelenkt und prallte darauf in das Auto.A motorcyclist behind her got distracted by the dropped cell phone and crashed into the car.Der 22-j\u00e4hrige T\u00f6fffahrer erlitt unbestimmte Verletzungen.The 22-year-old motorcyclist suffered unspecified injuries. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplified German English",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automated text simplification: A survey",
"authors": [
{
"first": "Suha S",
"middle": [],
"last": "Al-Thanyyan",
"suffix": ""
},
{
"first": "Aqil M",
"middle": [],
"last": "Azmi",
"suffix": ""
}
],
"year": 2021,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "54",
"issue": "2",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suha S Al-Thanyyan and Aqil M Azmi. 2021. Auto- mated text simplification: A survey. ACM Comput- ing Surveys (CSUR), 54(2):1-36.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "First approach to automatic text simplification in basque",
"authors": [
{
"first": "Mar\u00eda Jes\u00fas",
"middle": [],
"last": "Aranzabe",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz De Ilarraza",
"suffix": ""
},
{
"first": "Itziar",
"middle": [],
"last": "Gonzalez-Dios",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Natural Language Processing for Improving Textual Accessibility (NLP4ITA) workshop (LREC 2012)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mar\u0131a Jes\u00fas Aranzabe, Arantza D\u0131az De Ilarraza, and Itziar Gonzalez-Dios. 2012. First approach to auto- matic text simplification in basque. In Proceedings of the Natural Language Processing for Improving Textual Accessibility (NLP4ITA) workshop (LREC 2012), pages 1-8, Istanbul, Turkey.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A corpus for automatic readability assessment and text simplification of German",
"authors": [
{
"first": "Alessia",
"middle": [],
"last": "Battisti",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Pf\u00fctze",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "S\u00e4uberli",
"suffix": ""
},
{
"first": "Marek",
"middle": [],
"last": "Kostrzewa",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Ebling",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "3302--3311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessia Battisti, Dominik Pf\u00fctze, Andreas S\u00e4uberli, Marek Kostrzewa, and Sarah Ebling. 2020. A cor- pus for automatic readability assessment and text simplification of German. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 3302-3311, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Syntactic sentence simplification for French",
"authors": [
{
"first": "Laetitia",
"middle": [],
"last": "Brouwers",
"suffix": ""
},
{
"first": "Delphine",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "Anne-Laure",
"middle": [],
"last": "Ligozat",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Fran\u00e7ois",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)",
"volume": "",
"issue": "",
"pages": "47--56",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1206"
]
},
"num": null,
"urls": [],
"raw_text": "Laetitia Brouwers, Delphine Bernhard, Anne-Laure Ligozat, and Thomas Fran\u00e7ois. 2014. Syntactic sen- tence simplification for French. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 47-56, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Common European Framework of Reference for Languages: Learning, teaching, assessment",
"authors": [],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Council of Europe. 2009. Common European Frame- work of Reference for Languages: Learning, teach- ing, assessment. Cambridge University Press, Cam- bridge.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A multitask learning approach to text simplification. Recent Trends in Analysis of Images",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Dmitrieva",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2021,
"venue": "Social Networks and Texts",
"volume": "1357",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Dmitrieva and J\u00f6rg Tiedemann. 2021. A multi- task learning approach to text simplification. Recent Trends in Analysis of Images, Social Networks and Texts, 1357:78.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simplifying lexical simplification: Do we need simplified corpora?",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "63--68",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2011"
]
},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161 and Sanja \u0160tajner. 2015. Simplifying lexical simplification: Do we need simplified cor- pora? In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 63-68, Beijing, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dynamic multi-level multi-task learning for sentence simplification",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ramakanth",
"middle": [],
"last": "Pasunuru",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "462--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Dynamic multi-level multi-task learning for sentence simplification. In Proceedings of the 27th International Conference on Computational Linguis- tics, pages 462-476, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Complexity-weighted loss and diverse reranking for sentence simplification",
"authors": [
{
"first": "Reno",
"middle": [],
"last": "Kriz",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Sedoc",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3137--3147",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1317"
]
},
"num": null,
"urls": [],
"raw_text": "Reno Kriz, Jo\u00e3o Sedoc, Marianna Apidianaki, Car- olina Zheng, Gaurav Kumar, Eleni Miltsakaki, and Chris Callison-Burch. 2019. Complexity-weighted loss and diverse reranking for sentence simplifica- tion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 3137-3147, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Iterative edit-based unsupervised sentence simplification",
"authors": [
{
"first": "Dhruv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Golab",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7918--7928",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.707"
]
},
"num": null,
"urls": [],
"raw_text": "Dhruv Kumar, Lili Mou, Lukasz Golab, and Olga Vech- tomova. 2020. Iterative edit-based unsupervised sen- tence simplification. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7918-7928, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8(0):726-742.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text modification for Bulgarian Sign Language users",
"authors": [
{
"first": "Slavina",
"middle": [],
"last": "Lozanova",
"suffix": ""
},
{
"first": "Ivelina",
"middle": [],
"last": "Stoyanova",
"suffix": ""
},
{
"first": "Svetlozara",
"middle": [],
"last": "Leseva",
"suffix": ""
},
{
"first": "Svetla",
"middle": [],
"last": "Koeva",
"suffix": ""
},
{
"first": "Boian",
"middle": [],
"last": "Savtchev",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slavina Lozanova, Ivelina Stoyanova, Svetlozara Le- seva, Svetla Koeva, and Boian Savtchev. 2013. Text modification for Bulgarian Sign Language users. In Proceedings of the Second Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 39-48, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Controllable sentence simplification: Employing syntactic and lexical constraints",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.04387"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson and Mirella Lapata. 2019. Con- trollable sentence simplification: Employing syn- tactic and lexical constraints. arXiv preprint arXiv:1910.04387.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Zero-shot crosslingual sentence simplification",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5109--5126",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.415"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2020. Zero-shot crosslingual sentence simplifi- cation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 5109-5126, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Controllable sentence simplification",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "De La Clergerie",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4689--4698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis Martin, \u00c9ric de la Clergerie, Beno\u00eet Sagot, and Antoine Bordes. 2020a. Controllable sentence sim- plification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4689- 4698, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "\u00c9ric de la Clergerie, Antoine Bordes, and Beno\u00eet Sagot. 2020b. MUSS: Multilingual unsupervised sentence simplification by mining paraphrases",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00352"
]
},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Angela Fan, \u00c9ric de la Clerg- erie, Antoine Bordes, and Beno\u00eet Sagot. 2020b. MUSS: Multilingual unsupervised sentence simpli- fication by mining paraphrases. arXiv preprint arXiv:2005.00352.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hybrid simplification using deep semantics and machine translation",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "435--445",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1041"
]
},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan and Claire Gardent. 2014. Hybrid sim- plification using deep semantics and machine trans- lation. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 435-445, Balti- more, Maryland. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploring neural text simplification models",
"authors": [
{
"first": "Sergiu",
"middle": [],
"last": "Nisioi",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "\u0160tajner",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Liviu",
"middle": [
"P"
],
"last": "Dinu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "85--91",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2014"
]
},
"num": null,
"urls": [],
"raw_text": "Sergiu Nisioi, Sanja \u0160tajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. Exploring neural text sim- plification models. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91,",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Lexical simplification with pretrained encoders",
"authors": [
{
"first": "Jipeng",
"middle": [],
"last": "Qiang",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yunhao",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Xindong",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "8649--8656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jipeng Qiang, Yun Li, Yi Zhu, Yunhao Yuan, and Xin- dong Wu. 2020. Lexical simplification with pre- trained encoders. In Proceedings of the Thirty- Fourth AAAI Conference on Artificial Intelligence, volume 34, pages 8649-8656. Association for the Advancement of Artificial Intelligence.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Benchmarking data-driven automatic text simplification for German",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "S\u00e4uberli",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Ebling",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Volk",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI)",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas S\u00e4uberli, Sarah Ebling, and Martin Volk. 2020. Benchmarking data-driven automatic text simplifica- tion for German. In Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI), pages 41-48, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MUSST: A multilingual syntactic simplification tool",
"authors": [
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Alessio",
"middle": [],
"last": "Palmero Aprosio",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Tonelli",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Mart\u00edn Wanton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carolina Scarton, Alessio Palmero Aprosio, Sara Tonelli, Tamara Mart\u00edn Wanton, and Lucia Specia. 2017. MUSST: A multilingual syntactic simplifica- tion tool. In Proceedings of the IJCNLP 2017, Sys- tem Demonstrations, pages 25-28, Tapei, Taiwan. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Syntactic simplification and text cohesion",
"authors": [
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": ""
}
],
"year": 2006,
"venue": "Research on Language and Computation",
"volume": "4",
"issue": "1",
"pages": "77--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Com- putation, 4(1):77-109.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Simple and effective text simplification using semantic and neural methods",
"authors": [
{
"first": "Elior",
"middle": [],
"last": "Sulem",
"suffix": ""
},
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "162--173",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1016"
]
},
"num": null,
"urls": [],
"raw_text": "Elior Sulem, Omri Abend, and Ari Rappoport. 2018. Simple and effective text simplification using se- mantic and neural methods. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 162-173, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised neural text simplification",
"authors": [
{
"first": "Sai",
"middle": [],
"last": "Surya",
"suffix": ""
},
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Anirban",
"middle": [],
"last": "Laha",
"suffix": ""
},
{
"first": "Parag",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2058--2068",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1198"
]
},
"num": null,
"urls": [],
"raw_text": "Sai Surya, Abhijit Mishra, Anirban Laha, Parag Jain, and Karthik Sankaranarayanan. 2019. Unsupervised neural text simplification. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2058-2068, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentence simplification by monolingual machine translation",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1015--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krah- mer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1015- 1024, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00107"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016a. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Optimizing Statistical Machine Translation for Text Simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016b. Optimiz- ing Statistical Machine Translation for Text Simpli- fication. Transactions of the Association for Compu- tational Linguistics, 4(401-415).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sentence simplification with deep reinforcement learning",
"authors": [
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "584--594",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1062"
]
},
"num": null,
"urls": [],
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584-594, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Semi-supervised bilingual lexicon induction with two-way interaction",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zihao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2973--2984",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.238"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Zhao, Zihao Wang, Hao Wu, and Yong Zhang. 2020. Semi-supervised bilingual lexicon induction with two-way interaction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2973-2984, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Discourse level factors for sentence deletion in text simplification",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Junyi Jessy",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9709--9716",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Zhong, Chao Jiang, Wei Xu, and Junyi Jessy Li. 2020. Discourse level factors for sentence deletion in text simplification. In Proceedings of the Thirty- Fourth AAAI Conference on Artificial Intelligence, volume 34, pages 9709-9716. Association for the Advancement of Artificial Intelligence.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>See Ap-</td></tr></table>",
"text": "Number of documents with compression ratio. APA and capito use simplification levels A2/B1 and A1/A2/B1, respectively. 20m does not distinguish between simplification levels (labeled as 'simple')."
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>A.2 Examples</td></tr></table>",
"text": "Training configurations for standard mBART fine-tuning and modified version. Differences highlighted in bold."
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>",
"text": "APA example for levels A2 and B1. APA news articles are generally relatively short with up to \u223c100 sentences."
}
}
}
}