|
{ |
|
"paper_id": "Q15-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:07:22.599716Z" |
|
}, |
|
"title": "Problems in Current Text Simplification Research: New Data Can Help", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Simple Wikipedia has dominated simplification research in the past 5 years. In this opinion paper, we argue that focusing on Wikipedia limits simplification research. We back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. We introduce a new simplification dataset that is a significant improvement over Simple Wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources.", |
|
"pdf_parse": { |
|
"paper_id": "Q15-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Simple Wikipedia has dominated simplification research in the past 5 years. In this opinion paper, we argue that focusing on Wikipedia limits simplification research. We back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. We introduce a new simplification dataset that is a significant improvement over Simple Wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The goal of text simplification is to rewrite complex text into simpler language that is easier to understand. Research into this topic has many potential practical applications. For instance, it can provide reading aids for people with disabilities (Carroll et al., 1999; Canning et al., 2000; Inui et al., 2003) , low-literacy (Watanabe et al., 2009; De Belder and Moens, 2010) , non-native backgrounds (Petersen and Ostendorf, 2007; Allen, 2009) or non-expert knowledge (Elhadad and Sutaria, 2007; Siddharthan and Katsos, 2010) . Text simplification may also help improve the performance of many natural language processing (NLP) tasks, such as parsing (Chandrasekar et al., 1996) , summarization (Siddharthan et al., 2004; Klebanov et al., 2004; Vanderwende et al., 2007; Xu and Grishman, 2009) , semantic role labeling (Vickrey and Koller, 2008) , information extraction (Miwa et al., 2010) and machine translation (Gerber and Hovy, 1998; Chen et al., 2012) , by transforming long, complex sentences into ones that are more easily processed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 272, |
|
"text": "(Carroll et al., 1999;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 294, |
|
"text": "Canning et al., 2000;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 313, |
|
"text": "Inui et al., 2003)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 352, |
|
"text": "(Watanabe et al., 2009;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 353, |
|
"end": 379, |
|
"text": "De Belder and Moens, 2010)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 435, |
|
"text": "(Petersen and Ostendorf, 2007;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 448, |
|
"text": "Allen, 2009)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 500, |
|
"text": "(Elhadad and Sutaria, 2007;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 530, |
|
"text": "Siddharthan and Katsos, 2010)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 683, |
|
"text": "(Chandrasekar et al., 1996)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 700, |
|
"end": 726, |
|
"text": "(Siddharthan et al., 2004;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 749, |
|
"text": "Klebanov et al., 2004;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 775, |
|
"text": "Vanderwende et al., 2007;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 776, |
|
"end": 798, |
|
"text": "Xu and Grishman, 2009)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 824, |
|
"end": 850, |
|
"text": "(Vickrey and Koller, 2008)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 895, |
|
"text": "(Miwa et al., 2010)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 920, |
|
"end": 943, |
|
"text": "(Gerber and Hovy, 1998;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 962, |
|
"text": "Chen et al., 2012)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Parallel Wikipedia Simplification (PWKP) corpus prepared by Zhu et al. (2010) , has become the benchmark dataset for training and evaluating automatic text simplification systems. An associated test set of 100 sentences from Wikipedia has been used for comparing the state-of-the-art approaches. The collection of simple-complex parallel sentences sparked a major advance for machine translationbased approaches to simplification. However, we will show that this dataset is deficient and should be considered obsolete.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 81, |
|
"text": "Zhu et al. (2010)", |
|
"ref_id": "BIBREF59" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this opinion paper, we argue that Wikipedia as a simplification data resource is suboptimal for several reasons: 1) It is prone to automatic sentence alignment errors; 2) It contains a large proportion of inadequate simplifications; 3) It generalizes poorly to other text genres. These problems are largely due to the fact that Simple Wikipedia is an encyclopedia spontaneously and collaboratively created for \"children and adults who are learning English language\" without more specific guidelines. We quantitatively illustrate the seriousness of these problems through manual inspection and statistical analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our manual inspection reveals that about 50% of the sentence pairs in the PWKP corpus are not simplifications. We also introduce a new comparative approach to simplification corpus analysis. In particular, we assemble a new simplification corpus of news articles, 1 re-written by professional editors to meet the readability standards for children at multi-Not Aligned (17%)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{

"text": "[Table 1 examples] Not Aligned (17%): [NORM] The soprano ranges are also written from middle C to A an octave higher, but sound one octave higher than written. [SIMP] The xylophone is usually played so that the music sounds an octave higher than written. Not Simpler (33%): [NORM] Chile is the longest north-south country in the world, and also claims of Antarctica as part of its territory. [SIMP] Chile, which claims a part of the Antarctic continent, is the longest country on earth.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},
|
{ |
|
"text": "[ ple grade levels. This parallel corpus is higher quality and its size is comparable to the PWKP dataset. It helps us to showcase the limitations of Wikipedia data in comparison and it provides potential remedies that may improve simplification research. We are not the only researchers to notice problems with Simple Wikipedia. There are many hints in past publications that reflect the inadequacy of this resource, which we piece together in this paper to support our arguments. Several different simplification datasets have been proposed (Bach et al., 2011; Woodsend and Lapata, 2011a; Coster and Kauchak, 2011; Woodsend and Lapata, 2011b) , but most of these are derived from Wikipedia and not thoroughly analyzed. Siddharthan (2014)'s excellent survey of text simplification research states that one of the most important questions that needs to be addressed is \"how good is the quality of Simple English Wikipedia\". To the best of our knowledge, we are the first to systematically quantify the quality of Simple English Wikipedia and directly answer this question.", |
|
"cite_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 562, |
|
"text": "(Bach et al., 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 590, |
|
"text": "Woodsend and Lapata, 2011a;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 616, |
|
"text": "Coster and Kauchak, 2011;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 617, |
|
"end": 644, |
|
"text": "Woodsend and Lapata, 2011b)", |
|
"ref_id": "BIBREF54" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Not Simpler", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make our argument not as a criticism of others or ourselves, but as an effort to refocus research directions in the future (Eisenstein, 2013) . We hope to inspire the creation of higher quality simplification datasets, and to encourage researchers to think critically about existing resources and evaluation methods. We believe this will lead to breakthroughs in text simplification research.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 144, |
|
"text": "(Eisenstein, 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Not Simpler", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The Parallel Wikipedia Simplification (PWKP) corpus (Zhu et al., 2010) contains approximately 108,000 automatically aligned sentence pairs from cross-linked articles between Simple and Normal English Wikipedia. It has become a benchmark dataset for simplification largely because of its size and availability, and because follow-up papers (Woodsend and Lapata, 2011a; Coster and Kauchak, 2011; Wubben et al., 2012; Narayan and Gardent, 2014; Siddharthan and Angrosh, 2014; Angrosh et al., 2014) often compare with Zhu et al.'s system outputs to demonstrate further improvements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 70, |
|
"text": "(Zhu et al., 2010)", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 339, |
|
"end": 367, |
|
"text": "(Woodsend and Lapata, 2011a;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 393, |
|
"text": "Coster and Kauchak, 2011;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 414, |
|
"text": "Wubben et al., 2012;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 441, |
|
"text": "Narayan and Gardent, 2014;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 442, |
|
"end": 472, |
|
"text": "Siddharthan and Angrosh, 2014;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 494, |
|
"text": "Angrosh et al., 2014)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simple Wikipedia is not that simple", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The large quantity of parallel text from Wikipedia made it possible to build simplification systems using statistical machine translation (SMT) technology. But after the initial success of these firstgeneration systems, we started to suffer from the inadequacy of the parallel Wikipedia simplification datasets. There is scattered evidence in the literature. Bach et al. (2011) mentioned they have attempted to use parallel Wikipedia data, but opted to construct their own corpus of 854 sentences (25% from New York Times and 75% are from Wikipedia) with one manual simplification per sentence. Woodsend and Lapata (2011a) showed that rewriting rules learned from Simple Wikipedia revision histories produce better output compared to the \"unavoidably noisy\" aligned sentences from Simple-Normal Wikipedia. The Woodsend and Lapata (2011b) model, that used quasi-synchronous grammars learned from Wikipedia revision history, left 22% sentences unchanged in the test set. Wubben et al. (2012) found that a phrase-based machine translation model trained on the PWKP dataset often left the input unchanged, since \"much of training data consists of partially equal input and output strings\". Coster and Kauchak (2011) constructed another parallel Wikipedia dataset using a more sophisticated sentence alignment algorithm with an additional step that first aligns paragraphs. They noticed that 27% aligned sentences are identical between simple and normal, and retained them in the dataset \"since not all sentences need to be simplified and it is important for any simplification algorithm to be able to handle this case\". However, we will show that many sentences that need to be simplified are not simplified in the Simple Wikipedia.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 377, |
|
"text": "Bach et al. (2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 837, |
|
"text": "Woodsend and Lapata (2011b)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 989, |
|
"text": "Wubben et al. (2012)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 1186, |
|
"end": 1211, |
|
"text": "Coster and Kauchak (2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simple Wikipedia is not that simple", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We manually examined the Parallel Wikipedia Simplification (PWKP) corpus and found that it is noisy and half of its sentence pairs are not simplifications (Table 1) . We randomly sampled 200 one-toone sentence pairs from the PWKP dataset (one-tomany sentence splitting cases consist of only 6.1% of the dataset), and classify each sentence pair into one of the three categories:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 164, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Simple Wikipedia is not that simple", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Not Aligned (17%) -Two sentences have different meanings, or only have partial content overlap.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simple Wikipedia is not that simple", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The SIMP sentence has the same meaning as the NORM sentence, but is not simpler.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Not Simpler (33%)-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The SIMP sentence has the same meaning as the NORM sentence, and is simpler. We fur-ther breakdown into whether the simplification is due to deletion or paraphrasing. Table 1 shows a detailed breakdown and representative examples for each category. Although Zhu et al. (2010) and Coster and Kauchak (2011) have provided a simple analysis on the accuracy of sentence alignment, there are some important facts that cannot be revealed without in-depth manual inspection. The \"non-simplification\" noise in the parallel Simple-Normal Wikipedia data is a much more serious problem than we all thought. The quality of \"real simplifications\" also varies: some sentences are simpler by only one word while the rest of sentence is still complex.", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 275, |
|
"text": "Zhu et al. (2010)", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 305, |
|
"text": "Coster and Kauchak (2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 174, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Real Simplification (50%)-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The main causes of non-simplifications and partial-simplifications in the parallel Wikipedia corpus include: 1) The Simple Wikipedia was created by volunteer contributors with no specific objective; 2) Very rarely are the simple articles complete re-writes of the regular articles in Wikipedia (Coster and Kauchak, 2011) , which makes automatic sentence alignment errors worse; 3) As an encyclopedia, Wikipedia contains many difficult sentences with complex terminology. The difficulty of sentence alignment between Normal-Simple Wikipedia is highlighted by a recent study by Hwang et al. (2015) that achieves state-of-the-art performance of 0.712 maximum F1 score (over the precisionrecall curve) by combining Wiktionary-based and dependency-parse-based sentence similarities. And in fact, even the simple side of the PWKP corpus contains an extensive English vocabulary of 78,009 unique words. 6,669 of these words do not exist in the normal side (Table 2) . Below is a sentence from an article entitled \"Photolithography\" in Simple Wikipedia:", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 320, |
|
"text": "(Coster and Kauchak, 2011)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 595, |
|
"text": "Hwang et al. (2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 949, |
|
"end": 958, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Real Simplification (50%)-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Microphototolithography is the use of photolithography to transfer geometric shapes on a photomask to the surface of a semiconductor wafer for making integrated circuits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Real Simplification (50%)-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We should use the PWKP corpus with caution and consider other alternative parallel simplification corpora. Alternatives could come from Wikipedia (but better aligned and selected) or from manual simplification of other domains, such as newswire. In the The vocabulary size of the Parallel Wikipedia Simplification (PWKP) corpus and the vocabulary difference between its normal and simple sides (as a 2\u00d72 matrix). Only words consisting of the 26 English letters are counted. next section, we will present a corpus of news articles simplified by professional editors, called the Newsela corpus. We perform a comparative corpus analysis of the Newsela corpus versus the PWKP corpus to further illustrate concerns about PWKP's quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Real Simplification (50%)-", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To study how professional editors conduct text simplification, we have assembled a new simplification dataset that consists of 1,130 news articles. Each article has been re-written 4 times for children at different grade levels by editors at Newsela 2 , a company that produces reading materials for pre-college classroom use. We use Simp-4 to denote the most simplified level and Simp-1 to denote the least simplified level. This data forms a parallel corpus, where we can align sentences at different reading levels, as shown in Table 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 538, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "What the Newsela corpus teaches us", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Unlike Simple Wikipedia, which was created without a well-defined objective, Newsela is meant to help teachers prepare curricula that match the English language skills required at each grade level. It is motivated by the Common Core Standards (Porter et al., 2011) in the United States. All the Newsela articles are grounded in the Lexile 3 readability score, which is widely used to measure text complexity and assess students' reading ability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 243, |
|
"end": 264, |
|
"text": "(Porter et al., 2011)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "What the Newsela corpus teaches us", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We conducted a manual examination of the Newsela data similar to the one for Wikipedia data in Table 1 . The breakdown of aligned sentence pairs between different versions in Newsela is shown in Figure 1 . It is based on 50 randomly selected sentence pairs and shows much more reliable simplification than the Wikipedia data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 102, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual examination of Newsela corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We designed a sentence alignment algorithm for the Newsela corpus based on Jaccard similarity (Jaccard, 1912). We first align each sentence in the simpler version (e.g. s1 in Simp-3) to the sentence in the immediate more complex version (e.g. s2 in Simp-2) of the highest similarity score. We compute the similarity based on overlapping word lemmas: 4", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual examination of Newsela corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Sim(s1, s2) = |Lemmas(s1) \u2229 Lemmas(s2)| |Lemmas(s1) \u222a Lemmas(s2)| (1) We then align sentences into groups across all 5 versions for each article. For cases where no sentence splitting is involved, we discard any sentence pairs with a similarity smaller than 0.40. If splitting occurs, we set the similarity threshold to 0.20 instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manual examination of Newsela corpus", |
|
"sec_num": "3.1" |
|
}, |
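
{

"text": "The alignment procedure above is straightforward to re-implement. Below is a minimal Python sketch (a hypothetical re-implementation for illustration, not the authors' released code); it assumes NLTK with its tokenizer and WordNet data installed, and uses the thresholds stated above:\n\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.tokenize import word_tokenize\n\n_lemmatizer = WordNetLemmatizer()\n\ndef lemmas(sentence):\n    # Lowercase, tokenize, and lemmatize each token.\n    return {_lemmatizer.lemmatize(tok) for tok in word_tokenize(sentence.lower())}\n\ndef sim(s1, s2):\n    # Jaccard similarity over lemma sets (Equation 1).\n    l1, l2 = lemmas(s1), lemmas(s2)\n    return len(l1 & l2) / len(l1 | l2) if (l1 | l2) else 0.0\n\ndef align(simpler_sents, complex_sents, splitting=False):\n    # Align each simpler sentence to the most similar sentence in the\n    # immediately more complex version; discard pairs below the threshold\n    # (0.40 for one-to-one alignment, 0.20 when sentence splitting occurs).\n    threshold = 0.20 if splitting else 0.40\n    pairs = []\n    for s1 in simpler_sents:\n        best = max(complex_sents, key=lambda s2: sim(s1, s2))\n        if sim(s1, best) >= threshold:\n            pairs.append((s1, best))\n    return pairs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Manual examination of Newsela corpus",

"sec_num": "3.1"

},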
|
{ |
|
"text": "Newsela's professional editors produce simplifications with noticeably higher quality than Wikipedia's simplifications. Compared to sentence alignment for Normal-Simple Wikipedia, automatically aligning Newsela is more straightforward and reliable. The better correspondence between the simplified and complex articles and the availability of multiple simplified versions in the Newsela data also contribute to the accuracy of sentence alignment. Text 12 1400L Slightly more fourth-graders nationwide are reading proficiently compared with a decade ago, but only a third of them are now reading well, according to a new report. 7", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 455, |
|
"text": "Text 12", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Manual examination of Newsela corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "[Table 3: the same passage at five reading levels, with grade level and Lexile score. Grade 12 (1400L): Slightly more fourth-graders nationwide are reading proficiently compared with a decade ago, but only a third of them are now reading well, according to a new report. Grade 7 (1070L): Fourth-graders in most states are better readers than they were a decade ago. But only a third of them actually are able to read well, according to a new report. Grade 6 (930L): Fourth-graders in most states are better readers than they were a decade ago. But only a third of them actually are able to read well, according to a new report. Grade 4 (720L): Most fourth-graders are better readers than they were 10 years ago. But few of them can actually read well. Grade 3 (510L): Fourth-graders are better readers than 10 years ago. But few of them read well.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "What the Newsela corpus teaches us",

"sec_num": "3"

},
|
{ |
|
"text": "Most fourth-graders are better readers than they were 10 years ago. But few of them can actually read well. 3 510L Fourth-graders are better readers than 10 years ago. But few of them read well. . Each cell shows the number of unique word types that appear in the corpus listed in the column but do not appear in the corpus listed in the row. We also list the average frequency of those vocabulary items. For example, in the cell marked *, the Simp-4 version contains 583 unique words that do not appear in the Original version. By comparing the cells marked **, we see about half of the words (19,197 out of 39,046) in the Original version are not in the Simp-4 version. Most of the vocabulary that is removed consists of low-frequency words (with an average frequency of 2.6 in the Original). Table 4 shows the basic statistics of the Newsela corpus and the PWKP corpus. They are clearly different. Compared to the Newsela data, the Wikipedia corpus contains remarkably longer (more complex) words and the difference of sentence length before and after simplification is much smaller. We use the Penn Treebank tokenizer in the Moses package. 5 Tables 2 and 5 show the vocabulary statistics and the vocabulary difference matrix of the PWKP and Newsela corpus. While the vocabulary size of the PWKP corpus drops only 18% from 95,111 unique words to 78,009, the vocabulary size of the Newsela corpus is reduced dramatically by 50.8% from 39,046 to 19,197 words at its most simplified level (Simp-4). Moreover, in the Newsela data, only several hundred words that occur in the simpler version do not occur in the more complex version. The words introduced are often abbreviations (\"National Hurricane Center\" \u2192 \"NHC\"), less formal words (\"unscrupulous\" \u2192 \"crooked\") and shortened words (\"chimpanzee\" \u2192 \"chimp\"). This implies a more complete and precise degree of simplification in the Newsela than the PWKP dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 594, |
|
"end": 616, |
|
"text": "(19,197 out of 39,046)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1144, |
|
"end": 1145, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 795, |
|
"end": 802, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "720L", |
|
"sec_num": "4" |
|
}, |
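
{

"text": "The vocabulary difference matrices in Tables 2 and 5 can be reproduced with simple set operations over token counts. A minimal sketch (hypothetical code for illustration; corpora is an assumed dict mapping a version name to its list of tokens):\n\nfrom collections import Counter\n\ndef vocab_diff_matrix(corpora):\n    counts = {name: Counter(tokens) for name, tokens in corpora.items()}\n    matrix = {}\n    for row, row_counts in counts.items():\n        for col, col_counts in counts.items():\n            # Word types in the column corpus that are absent from the row corpus.\n            missing = [w for w in col_counts if w not in row_counts]\n            # Average frequency of those missing types, measured in the column corpus.\n            avg_freq = (sum(col_counts[w] for w in missing) / len(missing)\n                        if missing else 0.0)\n            matrix[(row, col)] = (len(missing), avg_freq)\n    return matrix",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "What the Newsela corpus teaches us",

"sec_num": "3"

},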
|
{ |
|
"text": "In this section, we visualize the differences in the topics and degree of simplification between the Simple Wikipedia and the Newsela corpus. To do this, we employ the log-odds-ratio informative Dirichlet prior method of Monroe et al. (2008) to find words and punctuation marks that are statistically overrepresented in the simplified text compared to the original text. The method measures each token by the z-score of its log-odds-ratio as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 221, |
|
"end": 241, |
|
"text": "Monroe et al. (2008)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u03b4 (i\u2212j) t \u03c3 2 (\u03b4 (i\u2212j) t ) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "It uses a background corpus when calculating the log-odds-ratio \u03b4 t for token t, and controls for its variance \u03c3 2 . Therefore it is capable of detecting differences even in very frequent tokens. Other methods used to discover word associations, such as mu-tual information, log likelihood ratio, t-test and chisquare, often have problems with frequent words (Jurafsky et al., 2014) . We choose the Monroe et al. (2008) method because many function words and punctuations are very frequent and play important roles in text simplification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 382, |
|
"text": "(Jurafsky et al., 2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 419, |
|
"text": "Monroe et al. (2008)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "The log-odds-ratio \u03b4_t^{(i-j)} for token t estimates the difference of the frequency of token t between two text sets i and j as:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Log-odds-ratio analysis of words",

"sec_num": "3.3"

},
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b4 (i\u2212j) t = log( y i t + \u03b1 t n i + \u03b1 0 \u2212 (y i t + \u03b1 t ) ) \u2212 log( y j t + \u03b1 t n j + \u03b1 0 \u2212 (y j t + \u03b1 t ) )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where n i is the size of corpus i, n j is the size of corpus j, y i t is the count of token t in corpus i, y j t is the count of token t in corpus j, \u03b1 0 is the size of the background corpus, and \u03b1 t is the count of token t in the background corpus. We use the combination of both simple and complex sides in the corpus as the background.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "And the variance of the log-odds-ratio is estimated by: Table 6 lists the top 50 words and punctuation marks that are the most strongly associated with the complex text. Both corpora significantly reduce function words and punctuation. The content words show the differences of the topics and subject matters between the two corpora. Table 7 lists the top 50 words that are the most strongly associated with the simplified text. The two corpora are more agreeable on what the simple words are than what complex words need to be simplified. Table 8 shows the frequency and odds ratio of example words from the top 50 complex words. The odds ratio of token t between two texts sets i and j is defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 63, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 341, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 540, |
|
"end": 547, |
|
"text": "Table 8", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c3 2 (\u03b4 (i\u2212j) t ) \u2248 1 y i t + \u03b1 t + 1 y j t + \u03b1 t", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r (i\u2212j) t = y i t /y j t n i /n j", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
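
{

"text": "Equations (2)-(5) translate directly into code. A minimal Python sketch (hypothetical, for illustration only), assuming token counts are given as Counter objects and, as in the paper, the background counts are the combination of both sides so that every token has a nonzero prior:\n\nimport math\nfrom collections import Counter\n\ndef log_odds_z_scores(counts_i, counts_j, prior):\n    n_i, n_j = sum(counts_i.values()), sum(counts_j.values())\n    a0 = sum(prior.values())\n    z = {}\n    for t in set(counts_i) | set(counts_j):\n        y_i, y_j, a_t = counts_i[t], counts_j[t], prior[t]\n        # Equation (3): difference of smoothed log-odds between the two sets.\n        delta = (math.log((y_i + a_t) / (n_i + a0 - (y_i + a_t)))\n                 - math.log((y_j + a_t) / (n_j + a0 - (y_j + a_t))))\n        # Equation (4): approximate variance of the log-odds-ratio.\n        var = 1.0 / (y_i + a_t) + 1.0 / (y_j + a_t)\n        # Equation (2): z-score of the log-odds-ratio.\n        z[t] = delta / math.sqrt(var)\n    return z\n\ndef odds_ratio(y_i, y_j, n_i, n_j):\n    # Equation (5): frequency-normalized odds ratio of a token.\n    return (y_i / y_j) / (n_i / n_j)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Log-odds-ratio analysis of words",

"sec_num": "3.3"

},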
|
{ |
|
"text": "It reflects the difference of topics and degree of simplification between the Wikipedia and the Newsela data. The high proportion of clause-related function words, such as \"which\" and \"where\", that are retained in Simple Wikipedia indicates the incompleteness of simplification in the Simple Wikipedia. The dramatic frequency decrease of words like \"which\" and \"advocates\" in Newsela shows the consistent quality from professional simplifications. Wikipedia has good coverage on certain words, such as \"approximately\", because of its large volume.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of words", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We can also reveal the syntax patterns that are most strongly associated with simple text versus complex text using the log-odds-ratio technique. Table 9 shows syntax patterns that represent \"parent node (head word) \u2192 children node(s)\" structures from a constituency parse tree. To extract theses patterns we parsed our corpus with the Stanford Parser (Klein and Manning, 2002) and applied its built-in head word identifier from Collins (2003) . Both the Newsela and Wikipedia corpora exhibit syntactic differences that are intuitive and interesting. However, as with word frequency (Table 8) , complex syntactic patterns are retained more often in Wikipedia's simplifications than in Newsela's. In order to show interesting syntax patterns in the Wikipedia parallel data for Table 9 , we first had to discard 3613 sentences in PWKP that contain both \"is a commune\" and \"France\". As the word-level analysis in Tables 6 and 7 hints, there is an exceeding number of sentences about communes in France in the PWKP corpus, such as the sentence pair below:", |
|
"cite_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 377, |
|
"text": "(Klein and Manning, 2002)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 443, |
|
"text": "Collins (2003)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 583, |
|
"end": 592, |
|
"text": "(Table 8)", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 776, |
|
"end": 783, |
|
"text": "Table 9", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of syntax patterns", |
|
"sec_num": "3.4" |
|
}, |
|
{

"text": "[NORM] La Couture is a commune in the Pas-de-Calais department in the Nord-Pas-de-Calais region of France.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Log-odds-ratio analysis of syntax patterns",

"sec_num": "3.4"

},
|
{ |
|
"text": "[SIMP] La Couture, Pas-de-Calais is a commune. It is found in the region Nord-Pas-de-Calais in the Pas-de-Calais department in the north of France.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of syntax patterns", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "This is a template sentence from a stub geographic article and its deterministic simplification. The influence of this template sentence is more over-whelming in the syntax-level analysis than in the word-level analysis --about 1/3 of the top 30 syntax patterns would be related to these sentence pairs if they were not discarded.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Log-odds-ratio analysis of syntax patterns", |
|
"sec_num": "3.4" |
|
}, |
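
{

"text": "The pattern extraction described above can be approximated with NLTK (a hypothetical sketch for illustration; it omits the head-word annotation that the paper attaches via the Collins (2003) head rules, and simple_parses / complex_parses are assumed lists of bracketed parse strings):\n\nfrom collections import Counter\nfrom nltk import Tree\n\ndef productions(parse_str):\n    # 'parent -> children' patterns from one constituency parse.\n    tree = Tree.fromstring(parse_str)\n    # Keep only non-lexical productions (internal tree structure).\n    return [str(p) for p in tree.productions() if p.is_nonlexical()]\n\n# Count patterns on each side, then feed the two Counters to the same\n# log-odds-ratio machinery as in Section 3.3.\nsimple_counts = Counter(p for s in simple_parses for p in productions(s))\ncomplex_counts = Counter(p for s in complex_parses for p in productions(s))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Log-odds-ratio analysis of syntax patterns",

"sec_num": "3.4"

},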
|
{ |
|
"text": "There are few publicly accessible document-level parallel simplification corpora (Barzilay and Lapata, 2008 ). The Newsela corpus will enable more research on document-level simplification, such as anaphora choice (Siddharthan and Copestake, 2002) , content selection (Woodsend and Lapata, 2011b) , and discourse relation preservation (Siddharthan, 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 107, |
|
"text": "(Barzilay and Lapata, 2008", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 247, |
|
"text": "(Siddharthan and Copestake, 2002)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 268, |
|
"end": 296, |
|
"text": "(Woodsend and Lapata, 2011b)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 354, |
|
"text": "(Siddharthan, 2003)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document-level compression", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "Simple Wikipedia is rarely used to study document-level simplification. Woodsend and Lapata (2011b) developed a model that simplifies Wikipedia articles while selecting their most important content. However, they could only use Simple Wikipedia in very limited ways. They noted that Simple Wikipedia is \"less mature\" with many articles that are just \"stubs, comprising a single paragraph of just one or two sentences\". We quantify their observation in Figure 2 , plotting the documentlevel compression ratio of Simple vs. Normal Wikipedia articles. The compression ratio is the ratio of the number of characters between each simple-complex article pair. In the plot, we use all 60 thousand article pairs from the Simple-Normal Wikipedia collected by Kauchak (2013) in May 2011. The overall compression ratio is skewed towards almost 0. For comparison, we also plot the ratio between the simplest version (Simp-4) and the original version (Original) of the news articles in the Newsela corpus. The Newsela corpus has a much more reasonable compression ratio and is therefore likely to be more suitable for studying documentlevel simplification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 750, |
|
"end": 764, |
|
"text": "Kauchak (2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 452, |
|
"end": 460, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Document-level compression", |
|
"sec_num": "3.5" |
|
}, |
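
{

"text": "The compression ratio and the smoothed histogram in Figure 2 are simple to compute. A minimal sketch (hypothetical code; article_pairs stands in for the real corpus data):\n\nimport numpy as np\nfrom scipy.stats import gaussian_kde\n\ndef compression_ratio(simple_text, complex_text):\n    # Ratio of character counts between a simple-complex article pair.\n    return len(simple_text) / len(complex_text)\n\n# Stand-in example; real pairs come from the aligned article collections.\narticle_pairs = [\n    ('A short simple article.', 'A considerably longer and more complex original article.'),\n]\nratios = np.array([compression_ratio(s, c) for s, c in article_pairs])\n# Kernel density estimate used to smooth the histogram of ratios.\ndensity = gaussian_kde(ratios)\nxs = np.linspace(0.0, 1.5, 200)\nsmoothed = density(xs)  # density values to plot against xs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Document-level compression",

"sec_num": "3.5"

},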
|
{ |
|
"text": "Although discourse is known to affect readability, the relation between discourse and text simplification is still under-studied with the use of statistical methods (Williams et al., 2003; Siddharthan, 2006; Siddharthan and Katsos, 2010) . Text simplification often involves splitting one sentence into multiple sentences, which is likely to require discourse-level changes such as introducing explicit rhetorical rela-tions. However, previous research that uses Simple-Normal Wikipedia largely focuses on sentence-level transformation, without taking large discourse structure into account. Figure 3 : A radar chart that visualizes the odds ratio (radius axis) of discourse connectives in simple side vs. complex side. An odds ratio larger than 1 indicates the word is more likely to occur in the simplified text than in the complex text, and vice versa. Simple cue words (in the shaded region), except \"hence\", are more likely to be added during Newsela's simplification process than in Wikipedia's. Complex conjunction connectives (in the unshaded region) are more likely to be retained in Wikipedia's simplifications than in Newsela's.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 188, |
|
"text": "(Williams et al., 2003;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 207, |
|
"text": "Siddharthan, 2006;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 237, |
|
"text": "Siddharthan and Katsos, 2010)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 592, |
|
"end": 600, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of discourse connectives", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "To preserve the rhetorical structure, Siddharthan (2003 Siddharthan ( , 2006 proposed to introduce cue words when simplifying various conjoined clauses. We perform an analysis on discourse connectives that are relevant to readability as suggested by Siddharthan (2003) . Figure 3 presents the odds ratios of simple cue words and complex conjunction connectives. The odds radios are computed for Newsela between the Original and Simp-4 versions, and for Wikipedia between Normal and Simple documents collected by Kauchak (2013) . It suggests that Newsela exhibits a more complete degree of simplification than Wikipedia, and that it may be able to enable more computational studies of the role of discourse in text simplification in the future. Figure 2: Distribution of document-level compression ratio, displayed as a histogram smoothed by kernel density estimation. The Newsela corpus is more normally distributed, suggesting more consistent quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 55, |
|
"text": "Siddharthan (2003", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 56, |
|
"end": 76, |
|
"text": "Siddharthan ( , 2006", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 268, |
|
"text": "Siddharthan (2003)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 526, |
|
"text": "Kauchak (2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 279, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of discourse connectives", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "Overall, we have shown that the professional simplification of Newsela is more rigorous and more consistent than Simple English Wikipedia. The language and content also differ between the encyclopedia and news domains. They are not exchangeable in developing nor in evaluating simplification systems. In the next section, we will review the evaluation methodology used in recent research, discuss its shortcomings and propose alternative evaluations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Newsela's quality is better than Wikipedia", |
|
"sec_num": "3.7" |
|
}, |
|
{ |
|
"text": "With the popularity of parallel Wikipedia data in simplification research, most state-of-the-art systems evaluate on simplifying sentences from Wikipedia. All simplification systems published in the ACL, NAACL, EACL, COLING and EMNLP main conferences since Zhu's 2010 work compared solely on the same test set that consists of only 100 sentences from Wikipedia, except one paper that additionally experimented with 5 short news summaries. The most widely practiced evaluation methodology is to have human judges rate on grammaticality (or fluency), simplicity, and adequacy (or meaning preservation) on a 5-point Likert scale. Such evaluation is insufficient to measure 1) the practical value of a system to a specific target reader population and 2) the performance of individual simplification components: sentence splitting, dele-tion and paraphrasing. Although the inadequacy of text simplification evaluations has been discussed before (Siddharthan, 2014) , we focus on these two common deficiencies and suggest two future directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 941, |
|
"end": 960, |
|
"text": "(Siddharthan, 2014)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of simplification systems", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Simplification has many subtleties, since what constitutes simplification for one type of user may not be appropriate for another. Many researchers have studied simplification in the context of different audiences. However, most recent automatic simplification systems are developed and evaluated with little consideration of target reader population. There is one attempt by Angrosh et al. (2014) who evaluate their system by asking non-native speakers comprehension questions. They conducted an English vocabulary size test to categorize the users into different levels of language skills.", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 397, |
|
"text": "Angrosh et al. (2014)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Targeting specific audiences", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The Newsela corpus allows us to target children at different grade levels. From the application point of view, making knowledge accessible to all children is an important yet challenging part of education (Scarton et al., 2010; Moraes et al., 2014) . From the technical point of view, reading grade level is a clearly defined objective for both simplification systems and human annotators. Once there is a well-defined objective, with constraints such as vocabulary size and sentence length, it is easier to fairly compare different systems. Newsela provides human simplification at different grade levels and reading comprehension quizzes alongside each article.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 227, |
|
"text": "(Scarton et al., 2010;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 248, |
|
"text": "Moraes et al., 2014)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Targeting specific audiences", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In addition, readability is widely studied and can be automatically estimated (Kincaid et al., 1975; Pitler and Nenkova, 2008; Petersen and Ostendorf, 2009) . Although existing readability metrics assume text is well-formed, they can potentially be used in combination with text quality metrics (Post, 2011; Louis and Nenkova, 2013) to evaluate simplifications. They can also be used to aid humans in the creation of reference simplifications.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 100, |
|
"text": "(Kincaid et al., 1975;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 126, |
|
"text": "Pitler and Nenkova, 2008;", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 156, |
|
"text": "Petersen and Ostendorf, 2009)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 307, |
|
"text": "(Post, 2011;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 332, |
|
"text": "Louis and Nenkova, 2013)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Targeting specific audiences", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "It is widely accepted that sentence simplification involves three different elements: splitting, deletion and paraphrasing (Feng, 2008; Narayan and Gardent, 2014) . Splitting breaks a long sentence into a few short sentences to achieve better readability. Deletion reduces the complexity by removing unimportant parts of a sentence. Paraphrasing rewrites text into a simpler version via reordering, substitution and occasionally expansion.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 135, |
|
"text": "(Feng, 2008;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 162, |
|
"text": "Narayan and Gardent, 2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating sub-tasks separately", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Most state-of-the-art systems consist of all or a subset of these three components. However, the popular human evaluation criteria (grammaticality, simplicity and adequacy) do not explain which components in a system are good or bad. More importantly, deletion may be unfairly penalized since shorter output tends to result in lower adequacy judgements (Napoles et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 375, |
|
"text": "(Napoles et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating sub-tasks separately", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We therefore advocate for a more informative evaluation that separates out each sub-task. We believe this will lead to more easily quantifiable metrics and possibly the development of automatic metrics. For example, early work shows potential use of precision and recall to evaluate splitting (Siddharthan, 2006; Gasperin et al., 2009) and deletion (Riezler et al., 2003; Filippova and Strube, 2008) . Several studies also have investigated various metrics for evaluating sentence paraphrasing (Callison-Burch et al., 2008; Chen and Dolan, 2011; Ganitkevitch et al., 2011; Xu et al., 2012 Xu et al., , 2013 Weese et al., 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 312, |
|
"text": "(Siddharthan, 2006;", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 335, |
|
"text": "Gasperin et al., 2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 371, |
|
"text": "(Riezler et al., 2003;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 399, |
|
"text": "Filippova and Strube, 2008)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 523, |
|
"text": "(Callison-Burch et al., 2008;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 545, |
|
"text": "Chen and Dolan, 2011;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 572, |
|
"text": "Ganitkevitch et al., 2011;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 573, |
|
"end": 588, |
|
"text": "Xu et al., 2012", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 606, |
|
"text": "Xu et al., , 2013", |
|
"ref_id": "BIBREF58" |
|
}, |
|
{ |
|
"start": 607, |
|
"end": 626, |
|
"text": "Weese et al., 2014)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating sub-tasks separately", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this paper, we presented the first systematic analysis of the quality of Simple Wikipedia as a simpli-fication data resource. We conducted a qualitative manual examination and several statistical analyses (including vocabulary change matrices, compression ratio histograms, log-odds-ratio calculations, etc.). We introduced a new, high-quality corpus of professionally simplified news articles, Newsela, as an alternative resource, that allowed us to demonstrate Simple Wikipedia's inadequacies in comparison. We further discussed problems with current simplification evaluation methodology and proposed potential improvements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our goal for this opinion paper is to stimulate progress in text simplification research. Simple English Wikipedia played a vital role in inspiring simplification approaches based on statistical machine translation. However, it has so many drawbacks that we recommend the community to drop it as the standard benchmark set for simplification. Other resources like the Newsela corpus are superior, since they provide a more consistent level of quality, target a particular audience, and approach the size of parallel Simple-Normal English Wikipedia. We believe that simplification is an important area of research that has the potential for broader impact beyond NLP research. But we must first adopt appropriate data sets and research methodologies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Researchers can request the Newsela data following the instructions at: https://newsela. com/data/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary and recommendations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://newsela.com/ 3 http://en.wikipedia.org/wiki/Lexile", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the WordNet lemmatization in the NLTK package: http://www.nltk.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/ tokenizer/tokenizer.perl", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank Dan Cogan-Drew, Jennifer Coogan, and Kieran Sobel from Newsela for creating their data and generously sharing it with us. We also thank action editor Rada Mihalcea and three anonymous reviewers for their thoughtful comments, and Ani Nenkova, Alan Ritter and Maxine Eskenazi for valuable discussions.This material is based on research sponsored by the NSF under grant IIS-1430651. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of the NSF or the U.S. Government.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A study of the role of relative clauses in the simplification of news texts for learners of English", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "System", |
|
"volume": "37", |
|
"issue": "4", |
|
"pages": "585--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allen, D. (2009). A study of the role of relative clauses in the simplification of news texts for learners of English. System, 37(4):585-599.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lexico-syntactic text simplification and compression with typed dependencies", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Angrosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Nomoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angrosh, M., Nomoto, T., and Siddharthan, A. (2014). Lexico-syntactic text simplification and compression with typed dependencies. In Pro- ceedings of the 14th Conference of the Euro- pean Chapter of the Association for Computa- tional Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Tris: A statistical sentence simplifier with loglinear models and margin-based discriminative training", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Bach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Vogel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Waibel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing (IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bach, N., Gao, Q., Vogel, S., and Waibel, A. (2011). Tris: A statistical sentence simplifier with log- linear models and margin-based discriminative training. In Proceedings of 5th International Joint Conference on Natural Language Process- ing (IJCNLP).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Modeling local coherence: An entity-based approach", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "1", |
|
"pages": "1--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barzilay, R. and Lapata, M. (2008). Modeling local coherence: An entity-based approach. Computa- tional Linguistics, 34(1):1-34.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "ParaMetric: An automatic evaluation metric for paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Callison-Burch, C., Cohn, T., and Lapata, M. (2008). ParaMetric: An automatic evaluation metric for paraphrasing. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Cohesive generation of syntactically simplified newspaper text", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Canning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tait", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Archibald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Crawley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Third International Workshop on Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Canning, Y., Tait, J., Archibald, J., and Crawley, R. (2000). Cohesive generation of syntactically simplified newspaper text. In Proceedings of the Third International Workshop on Text, Speech and Dialogue (TSD).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Simplifying text for language-impaired readers", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Minnen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Pearce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Canning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tait", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 14th Conference of the 9th European Conference for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carroll, J., Minnen, G., Pearce, D., Canning, Y., De- vlin, S., and Tait, J. (1999). Simplifying text for language-impaired readers. In Proceedings of the 14th Conference of the 9th European Conference for Computational Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Motivations and methods for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Chandrasekar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Doran", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "B", |

"middle": [], |

"last": "Srinivas", |

"suffix": "" |

} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th Conference on Computational linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chandrasekar, R., Doran, C., and Srinivas, B. (1996). Motivations and methods for text simpli- fication. In Proceedings of the 16th Conference on Computational linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Collecting highly parallel data for paraphrase evaluation", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, D. L. and Dolan, W. B. (2011). Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A simplification-translationrestoration framework for cross-domain smt applications", |
|
"authors": [ |
|
{ |
|
"first": "H.-B", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H.-H", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H.-H", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-T", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, H.-B., Huang, H.-H., Chen, H.-H., and Tan, C.-T. (2012). A simplification-translation- restoration framework for cross-domain smt ap- plications. In Proceedings of the 24th Interna- tional Conference on Computational Linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Head-driven statistical models for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational linguistics", |
|
"volume": "29", |
|
"issue": "4", |
|
"pages": "589--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Collins, M. (2003). Head-driven statistical models for natural language parsing. Computational lin- guistics, 29(4):589-637.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Simple English Wikipedia: A new text simplification task", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Coster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Coster, W. and Kauchak, D. (2011). Simple English Wikipedia: A new text simplification task. In Pro- ceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Language Technologies (ACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Text simplification for children", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "De Belder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-F", |
|
"middle": [], |
|
"last": "Moens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Prroceedings of the SIGIR Workshop on Accessible Search Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "De Belder, J. and Moens, M.-F. (2010). Text simpli- fication for children. In Prroceedings of the SIGIR Workshop on Accessible Search Systems.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "What to do about bad language on the Internet", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eisenstein, J. (2013). What to do about bad language on the Internet. In Proceedings of the 2013 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies (NAACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Mining a lexicon of technical terms and lay equivalents", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Sutaria", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elhadad, N. and Sutaria, K. (2007). Mining a lex- icon of technical terms and lay equivalents. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Text simplification: A survey", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Feng, L. (2008). Text simplification: A survey. Technical report, The City University of New York.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Dependency tree based sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 5th International Natural Language Generation Conference (INLG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Filippova, K. and Strube, M. (2008). Dependency tree based sentence compression. In Proceedings of the 5th International Natural Language Gener- ation Conference (INLG).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning sentential paraphrases from bilingual parallel corpora for text-to-text generation", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ganitkevitch, J., Callison-Burch, C., Napoles, C., and Van Durme, B. (2011). Learning senten- tial paraphrases from bilingual parallel corpora for text-to-text generation. In Proceedings of the 2011 Conference on Empirical Methods in Natu- ral Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Natural language processing for social inclusion: A text simplification architecture for different literacy levels", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gasperin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Maziero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "S", |

"middle": [ |

"M" |

], |

"last": "Aluisio", |

"suffix": "" |

} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of SEMISH-XXXVI Semin\u00e1rio", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gasperin, C., Maziero, E., Specia, L., Pardo, T., and Aluisio, S. M. (2009). Natural language process- ing for social inclusion: A text simplification ar- chitecture for different literacy levels. In Proceed- ings of SEMISH-XXXVI Semin\u00e1rio Integrado de Software e Hardware.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Improving translation quality by manipulating sentence length", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Gerber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Machine Translation and the Information Soup", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "448--460", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gerber, L. and Hovy, E. (1998). Improving transla- tion quality by manipulating sentence length. In Machine Translation and the Information Soup, pages 448-460. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Aligning sentences from Standard Wikipedia to Simple Wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwang, W., Hajishirzi, H., Ostendorf, M., and Wu, W. (2015). Aligning sentences from Standard Wikipedia to Simple Wikipedia. In Proceed- ings of the 2015 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics (NAACL).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Text simplification for reading assistance: A project note", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Iida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Iwakura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2nd International Workshop on Paraphrasing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Inui, K., Fujita, A., Takahashi, T., Iida, R., and Iwakura, T. (2003). Text simplification for read- ing assistance: A project note. In Proceedings of the 2nd International Workshop on Paraphrasing.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The distribution of the flora in the alpine zone", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Jaccard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1912, |
|
"venue": "New Phytologist", |
|
"volume": "11", |
|
"issue": "2", |
|
"pages": "37--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaccard, P. (1912). The distribution of the flora in the alpine zone. New Phytologist, 11(2):37-50.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Narrative framing of consumer sentiment in online restaurant reviews", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Chahuneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Routledge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "First Monday", |
|
"volume": "19", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jurafsky, D., Chahuneau, V., Routledge, B. R., and Smith, N. A. (2014). Narrative framing of consumer sentiment in online restaurant reviews. First Monday, 19(4).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Improving text simplification language modeling using unsimplified text data", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kauchak, D. (2013). Improving text simplification language modeling using unsimplified text data. In Proceedings of the 2013 Conference of the As- sociation for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kincaid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Fishburne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Chissom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "DTIC Doc-- ument", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kincaid, J. P., Fishburne Jr, R. P., Rogers, R. L., and Chissom, B. S. (1975). Derivation of new read- ability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, DTIC Doc- ument.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Text simplification for information-seeking applications", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Klebanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "735--747", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klebanov, B. B., Knight, K., and Marcu, D. (2004). Text simplification for information-seeking appli- cations. In On the Move to Meaningful Inter- net Systems 2004: CoopIS, DOA, and ODBASE, pages 735-747. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Fast exact inference with a factored model for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klein, D. and Manning, C. D. (2002). Fast exact inference with a factored model for natural lan- guage parsing. In Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "What makes writing great? First experiments on article quality prediction in the science journalism domain", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of the Association for Computational Linguistics (TACL)", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "341--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis, A. and Nenkova, A. (2013). What makes writing great? First experiments on article qual- ity prediction in the science journalism domain. Transactions of the Association for Computa- tional Linguistics (TACL), 1:341-352.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Entity-focused sentence simplification for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Saetre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miwa, M., Saetre, R., Miyao, Y., and Tsujii, J. (2010). Entity-focused sentence simplification for relation extraction. In Proceedings of the 24th International Conference on Computational Lin- guistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Fightin'words: Lexical feature selection and evaluation for identifying the content of political conflict", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Monroe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Colaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Quinn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Political Analysis", |
|
"volume": "16", |
|
"issue": "4", |
|
"pages": "372--403", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Monroe, B. L., Colaresi, M. P., and Quinn, K. M. (2008). Fightin'words: Lexical feature selection and evaluation for identifying the content of po- litical conflict. Political Analysis, 16(4):372-403.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Adapting graph summaries to the users' reading levels", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Moraes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Carberry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Natural Language Generation Conference (INLG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Moraes, P., McCoy, K., and Carberry, S. (2014). Adapting graph summaries to the users' read- ing levels. In Proceedings of the 8th Interna- tional Natural Language Generation Conference (INLG).", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Evaluating sentence compression: Pitfalls and suggested remedies", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Napoles, C., Callison-Burch, C., and Van Durme, B. (2011). Evaluating sentence compression: Pit- falls and suggested remedies. In Proceedings of the Workshop on Monolingual Text-To-Text Gen- eration.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Hybrid simplification using deep semantics and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Narayan, S. and Gardent, C. (2014). Hybrid simpli- fication using deep semantics and machine trans- lation. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguis- tics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Text simplification for language learners: A corpus analysis", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Petersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Workshop on Speech and Language Technology for Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petersen, S. and Ostendorf, M. (2007). Text simpli- fication for language learners: A corpus analysis. In Proceedings of the Workshop on Speech and Language Technology for Education.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "A machine learning approach to reading level assessment", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Petersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computer Speech & Language", |
|
"volume": "23", |
|
"issue": "1", |
|
"pages": "89--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petersen, S. E. and Ostendorf, M. (2009). A ma- chine learning approach to reading level assess- ment. Computer Speech & Language, 23(1):89- 106.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Revisiting readability: A unified framework for predicting text quality", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pitler, E. and Nenkova, A. (2008). Revisiting read- ability: A unified framework for predicting text quality. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Common Core Standards the new US intended curriculum", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Porter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mcmaken", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "R", |

"middle": [], |

"last": "Yang", |

"suffix": "" |

} |
|
], |
|
"year": 2011, |
|
"venue": "Educational Researcher", |
|
"volume": "40", |
|
"issue": "3", |
|
"pages": "103--116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Porter, A., McMaken, J., Hwang, J., and Yang, R. (2011). Common Core Standards the new US intended curriculum. Educational Researcher, 40(3):103-116.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Judging grammaticality with tree substitution grammar derivations", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Post", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Post, M. (2011). Judging grammaticality with tree substitution grammar derivations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "King", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Crouch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Zaenen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technology (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Riezler, S., King, T. H., Crouch, R., and Zaenen, A. (2003). Statistical sentence condensation us- ing ambiguity packing and stochastic disambigua- tion methods for lexical-functional grammar. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technol- ogy (NAACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Simplifica: A tool for authoring simplified texts in brazilian portuguese guided by readability assessments", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "De Oliveira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Candido", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gasperin", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "S", |

"middle": [ |

"M" |

], |

"last": "Alu\u00edsio", |

"suffix": "" |

} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Scarton, C., De Oliveira, M., Candido Jr, A., Gasperin, C., and Alu\u00edsio, S. M. (2010). Sim- plifica: A tool for authoring simplified texts in brazilian portuguese guided by readability assess- ments. In Proceedings of the 2010 Annual Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies (NAACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Preserving discourse structure when simplifying text", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of European Workshop on Natural Language Generation (ENLG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. (2003). Preserving discourse struc- ture when simplifying text. In Proceedings of Eu- ropean Workshop on Natural Language Genera- tion (ENLG).", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Syntactic simplification and text cohesion", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Research on Language and Computation", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "77--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. (2006). Syntactic simplification and text cohesion. Research on Language and Com- putation, 4(1):77-109.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "A survey of research on text simplification", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Special issue of International Journal of Applied Linguistics", |
|
"volume": "165", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. (2014). A survey of research on text simplification. Special issue of International Journal of Applied Linguistics, 165(2).", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Angrosh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. and Angrosh, M. (2014). Hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules. In Proceedings of the 25th Inter- national Conference on Computational Linguis- tics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Generating anaphora for simplifying text", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Copestake", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 4th Discourse Anaphora and Anaphor Resolution Colloquium (DAARC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. and Copestake, A. (2002). Generat- ing anaphora for simplifying text. In Proceedings of the 4th Discourse Anaphora and Anaphor Res- olution Colloquium (DAARC).", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Reformulating discourse connectives for non-expert readers", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Katsos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A. and Katsos, N. (2010). Reformulat- ing discourse connectives for non-expert readers. In Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Syntactic simplification for improving content selection in multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COL-ING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddharthan, A., Nenkova, A., and McKeown, K. (2004). Syntactic simplification for improving content selection in multi-document summariza- tion. In Proceedings of the 20th International Conference on Computational Linguistics (COL- ING).", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Beyond SumBasic: Taskfocused summarization with sentence simplification and lexical expansion", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Suzuki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Information Processing & Management", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "1606--1618", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanderwende, L., Suzuki, H., Brockett, C., and Nenkova, A. (2007). Beyond SumBasic: Task- focused summarization with sentence simplifica- tion and lexical expansion. Information Process- ing & Management, 43(6):1606-1618.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Sentence simplification for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Vickrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vickrey, D. and Koller, D. (2008). Sentence simpli- fication for semantic role labeling. In Proceed- ings of the 46th Annual Meeting of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies (ACL-HLT).", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Facilita: reading assistance for lowliteracy readers", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Junior", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Uz\u00eada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"P D M" |
|
], |
|
"last": "Fortes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"A S" |
|
], |
|
"last": "Pardo", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "S", |

"middle": [ |

"M" |

], |

"last": "Alu\u00edsio", |

"suffix": "" |

} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 27th ACM International Conference on Design of Communication", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Watanabe, W. M., Junior, A. C., Uz\u00eada, V. R., Fortes, R. P. d. M., Pardo, T. A. S., and Alu\u00edsio, S. M. (2009). Facilita: reading assistance for low- literacy readers. In Proceedings of the 27th ACM International Conference on Design of Communi- cation.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "PARADIGM: Paraphrase diagnostics through grammar matching", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weese, J., Ganitkevitch, J., and Callison-Burch, C. (2014). PARADIGM: Paraphrase diagnos- tics through grammar matching. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Experiments with discourse-level choices and readability", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Osman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the European Natural Language Generation Workshop (ENLG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Williams, S., Reiter, E., and Osman, L. (2003). Ex- periments with discourse-level choices and read- ability. In Proceedings of the European Natural Language Generation Workshop (ENLG).", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Woodsend, K. and Lapata, M. (2011a). Learning to simplify sentences with quasi-synchronous gram- mar and integer programming. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "WikiSimple: Automatic simplification of Wikipedia articles", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 25th Conference on Artificial Intelligence (AAAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Woodsend, K. and Lapata, M. (2011b). WikiSimple: Automatic simplification of Wikipedia articles. In Proceedings of the 25th Conference on Artificial Intelligence (AAAI).", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Van Den Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wubben, S., van den Bosch, A., and Krahmer, E. (2012). Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computa- tional Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "A parse-and-trim approach with information significance for chinese sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on Language Generation and Summarisation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu, W. and Grishman, R. (2009). A parse-and-trim approach with information significance for chi- nese sentence compression. In Proceedings of the 2009 Workshop on Language Generation and Summarisation.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Paraphrasing for style", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu, W., Ritter, A., Dolan, B., Grishman, R., and Cherry, C. (2012). Paraphrasing for style. In Pro- ceedings of the 24th International Conference on Computational Linguistics (COLING).", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Gathering and generating paraphrases from twitter with application to normalization", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Sixth Workshop on Building and Using Comparable Corpora (BUCC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xu, W., Ritter, A., and Grishman, R. (2013). Gather- ing and generating paraphrases from twitter with application to normalization. In Proceedings of the Sixth Workshop on Building and Using Com- parable Corpora (BUCC).", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "A monolingual tree-based translation model for sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bernhard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhu, Z., Bernhard, D., and Gurevych, I. (2010). A monolingual tree-based translation model for sen- tence simplification. In Proceedings of the 23rd International Conference on Computational Lin- guistics (COLING).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Manual classification of aligned sentence pairs from the Newsela corpus. We categorize randomly sampled 50 sentence pairs drawn from the Original-Simp2 and 50 sentences from the Original-Simp4.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "NORM] Death On 1 October 1988, Strauss collapsed while hunting with the Prince of Thurn and Taxis in the Thurn and Taxis forests, east of Regensburg. [SIMP] Death On October 1, 1988, Strau\u00df collapsed while hunting with the Prince of Thurn and Taxis in the Thurn and Taxis forests, east of Regensburg. This article is a list of the 50 U.S. states and the District of Columbia ordered by population density. [SIMP] This is a list of the 50 U.S. states, ordered by population density.", |
|
"num": null, |
|
"content": "<table><tr><td>Deletion Only (21%) [NORM] Real Simpli-fication Paraphrase Only (17%) [NORM] In 2002, both Russia and China also had prison populations in excess of 1 million. [SIMP] In 2002, both Russia and China also had over 1 million people in prison. (50%) Deleltion + (12%) Paraphrase</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"text": "Example of sentences written at multiple levels of text complexity from the Newsela data set. The Lexile readability score and grade level apply to the whole article rather than individual sentences, so the same sentences may receive different scores, e.g. the above sentences for the 6th and 7th grades. The bold font highlights the parts of sentence that are different from the adjacent version(s).", |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td/><td>Newsela</td><td/><td/><td>PWKP</td><td/></tr><tr><td/><td>Original</td><td>Simp-1</td><td>Simp-2</td><td>Simp-3</td><td>Simp-4</td><td>Normal</td><td>Simple</td></tr><tr><td>Total #sents Total #tokens</td><td colspan=\"7\">56,037 1,301,767 1,126,148 1,052,915 903,417 764,103 2,645,771 2,175,240 57,940 63,419 64,035 64,162 108,016 114,924</td></tr><tr><td>Avg #sents per doc Avg #words per doc Avg #words per sent Avg #chars per word</td><td>49.59 1,152.01 23.23 4.32</td><td>51.27 996.59 19.44 4.28</td><td>56.12 931.78 16.6 4.21</td><td>56.67 799.48 14.11 4.11</td><td>56.78 676.2 11.91 4.02</td><td>--*24.49 5.06</td><td>--*18.93 4.89</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Basic statistics of the Newsela Simplification corpus vs. the Parallel Wikipedia Simplification (PWKP) corpus. The Newsela corpus consists of 1130 articles with original and 4 simplified versions each. Simp-1 is of the least simplified level, while Simp-4 is the most simplified. The numbers marked by * are slightly different from previously reported, because of the use of different tokenizers.", |
|
"num": null, |
|
"content": "<table><tr><td>Newsela</td><td>Original</td><td>Simp-1</td><td>Simp-2</td><td>Simp-3</td><td>Simp-4</td></tr><tr><td colspan=\"6\">#words (avg. freq) **39,046 (28.31) 33,272 (28.64) 29,569 (30.09) 24,468 (31.17) 20,432 (31.45)</td></tr><tr><td>Original Simp-1 Simp-2 Simp-3 Simp-4</td><td>0 6,498 (1.38) 10,292 (1.67) 15,298 (2.14) **19,197 (2.60)</td><td>724 (1.19) 0 4,321 (1.32) 9,408 (1.79) 13,361 (2.24)</td><td>815 (1.25) 618 (1.08) 0 5,637 (1.46) 9,612 (1.87)</td><td>720 (1.32) 604 (1.15) 536 (1.13) 0 4,569 (1.40)</td><td>*583 (1.33) 521 (1.21) 475 (1.16) 533 (1.14) 0</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"text": "This table shows the vocabulary changes between different levels of simplification in the Newsela corpus (as a 5\u00d75 matrix)", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF8": { |
|
"text": "Top 50 tokens associated with the complex text, computed using theMonroe et al. (2008) method. Bold words are shared by the complex version of Newsela and the complex version of Wikipedia. Verb is are can will make get were wants was called help hurt be made like stop want works do live found is made called started pays said was got are like get can means says has went comes make put used", |
|
"num": null, |
|
"content": "<table><tr><td>Linguistic class</td><td>Newsela -Simp4</td><td>Wikipedia (PWKP) -Simple</td></tr><tr><td colspan=\"2\">Punctuation Determiner/Pronoun they it he she them lot . Conjunction Adverb also not there too about very now then how Noun people money scientists government things countries rules problems group Adjective many important big new used</td><td>. it he they lot this she because about very there movie people northwest north region loire player websites southwest movies football things big biggest famous different important many</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"text": "Top 50 tokens associated with the simplified text.", |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td>Newsela</td><td/><td/><td>PWKP</td><td/></tr><tr><td/><td colspan=\"6\">Original Simp-4 odds-ratio Normal Simple odds-ratio</td></tr><tr><td>which where advocates approximately thus</td><td>2259 1472 136 21 35</td><td>249 546 0 0 9</td><td>0.188 0.632 0 0 0.438</td><td>7261 1972 6 480 385</td><td>4608 1470 3 140 138</td><td>0.774 0.909 0.610 0.356 0.437</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF10": { |
|
"text": "Frequency of example words fromTable 6. These complex words are reduced at a much greater rate in the simplified Newsela than they are in the Simple English Wikipedia. A smaller odds ratio indicates greater reduction.", |
|
"num": null, |
|
"content": "<table><tr><td/><td>Wikipedia (PWKP) -Normal</td><td/><td>Newsela -Simp4</td><td>Wikipedia (PWKP) -Simple</td></tr><tr><td>PP(of) \u2192 IN NP WHNP(which) \u2192 WDT SBAR(which) \u2192 WHNP S PP(to) \u2192 TO NP NP(percent) \u2192 CD NN WHNP(that) \u2192 WDT SBAR(that) \u2192 WHNP S PP(with) \u2192 IN NP PP(according) \u2192 VBG PP NP(percent) \u2192 NP PP NP(we) \u2192 PRP PP(including) \u2192 VBG NP SBAR(who) \u2192 WHNP S SBAR(as) \u2192 IN S WHNP(who) \u2192 WP NP(i) \u2192 FW PP(as) \u2192 IN NP NP(director) \u2192 NP PP PP(by) \u2192 IN NP S(has) \u2192 VP PP(in) \u2192 IN NP SBAR(while) \u2192 IN S PP(as) \u2192 JJ IN NP PRN(-) \u2192 : NP : S('s) \u2192 NP VP S(said) \u2192 \" S , \" NP VP . PP(at) \u2192 IN NP PP(among) \u2192 IN NP SBAR(although) \u2192 IN S VP(said) \u2192 VBD NP</td><td>PP(as) \u2192 IN NP PP(of) \u2192 IN NP VP(born) \u2192 VBN NP NP PP WHNP(which) \u2192 WDT PP(to) \u2192 TO NP NP(municipality) \u2192 DT JJ NN FRAG(-) \u2192 ADJP : FRAG(-) \u2192 FRAG : FRAG NP()) \u2192 NNP NNP NNP NP(film) \u2192 DT NN NP(footballer) \u2192 DT JJ JJ NN NP(footballer) \u2192 NP SBAR ADVP(currently) \u2192 RB VP(born) \u2192 VBN NP NP ADVP(initially) \u2192 RB PP(with) \u2192 IN NP WHPP(of) \u2192 IN WHNP SBAR(although) \u2192 IN S ADVP(primarily) \u2192 RB S(links) \u2192 NP VP . VP(links) \u2192 VBZ NP PP(following) \u2192 VBG NP ADVP(subsequently) \u2192 RB SBAR(which) \u2192 WHNP S SBAR(while) \u2192 IN S S(plays) \u2192 ADVP VP PP(within) \u2192 IN NP PP(by) \u2192 IN NP SBAR(of) \u2192 WHNP S S(is) \u2192 S : S .</td><td>1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30</td><td>S(is) \u2192 NP VP . NP(they) \u2192 PRP S(are) \u2192 NP VP . S(was) \u2192 NP VP . NP(people) \u2192 NNS VP(is) \u2192 VBZ NP NP(he) \u2192 PRP S(were) \u2192 NP VP . NP(it) \u2192 PRP S(can) \u2192 NP VP . S(will) \u2192 NP VP . ADVP(also) \u2192 RB S(have) \u2192 NP VP . S(could) \u2192 NP VP . S(said) \u2192 NP VP . S(has) \u2192 NP VP . NP(people) \u2192 JJ NNS NP(money) \u2192 NN NP(government) \u2192 DT NN S(do) \u2192 NP VP . NP(scientists) \u2192 NNS VP(called) \u2192 VBN NP S(had) \u2192 NP VP . S(says) \u2192 NP VP . S(would) \u2192 NP VP . S(say) \u2192 NP VP . S(works) \u2192 NP VP . S(may) \u2192 NP VP . S(did) \u2192 NP VP . S(think) \u2192 NP VP .</td><td>NP(it) \u2192 PRP S(is) \u2192 NP VP . S(was) \u2192 NP VP . NP(he) \u2192 PRP NP(they) \u2192 PRP NP(player) \u2192 DT JJ JJ NN NN S(are) \u2192 NP VP . NP(movie) \u2192 DT NN S(has) \u2192 NP VP . VP(called) \u2192 VBN NP VP(is) \u2192 VBZ PP VP(made) \u2192 VBN PP VP(said) \u2192 VBD SBAR VP(has) \u2192 VBZ NP VP(is) \u2192 VBZ NP NP(this) \u2192 DT VP(was) \u2192 VBD NP NP(people) \u2192 NNS NP(lot) \u2192 DT NN NP(season) \u2192 NN CD S(can) \u2192 NP VP . VP(is) \u2192 VBZ VP SBAR(because) \u2192 IN S VP(are) \u2192 VBP NP NP(player) \u2192 DT JJ NN NN NP(there) \u2192 EX NP(lot) \u2192 NP PP NP(websites) \u2192 JJ NNS PP(like) \u2192 IN NP</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"text": "Top 30 syntax patterns associated with the complex text (left) and simplified text (right). Bold patterns are the top patterns shared by Newsela and Wikipedia.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |