|
{ |
|
"paper_id": "C18-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:09:18.229928Z" |
|
}, |
|
"title": "Dynamic Multi-Level Multi-Task Learning for Sentence Simplification", |
|
"authors": [ |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UNC Chapel Hill", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Pasunuru", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UNC Chapel Hill", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "UNC Chapel Hill", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Sentence simplification aims to improve readability and understandability, based on several operations such as splitting, deletion, and paraphrasing. However, a valid simplified sentence should also be logically entailed by its input sentence. In this work, we first present a strong pointercopy mechanism based sequence-to-sequence sentence simplification model, and then improve its entailment and paraphrasing capabilities via multi-task learning with related auxiliary tasks of entailment and paraphrase generation. Moreover, we propose a novel 'multi-level' layered soft sharing approach where each auxiliary task shares different (higher versus lower) level layers of the sentence simplification model, depending on the task's semantic versus lexico-syntactic nature. We also introduce a novel multi-armed bandit based training approach that dynamically learns how to effectively switch across tasks during multi-task learning. Experiments on multiple popular datasets demonstrate that our model outperforms competitive simplification systems in SARI and FKGL automatic metrics, and human evaluation. Further, we present several ablation analyses on alternative layer sharing methods, soft versus hard sharing, dynamic multi-armed bandit sampling approaches, and our model's learned entailment and paraphrasing skills.", |
|
"pdf_parse": { |
|
"paper_id": "C18-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Sentence simplification aims to improve readability and understandability, based on several operations such as splitting, deletion, and paraphrasing. However, a valid simplified sentence should also be logically entailed by its input sentence. In this work, we first present a strong pointercopy mechanism based sequence-to-sequence sentence simplification model, and then improve its entailment and paraphrasing capabilities via multi-task learning with related auxiliary tasks of entailment and paraphrase generation. Moreover, we propose a novel 'multi-level' layered soft sharing approach where each auxiliary task shares different (higher versus lower) level layers of the sentence simplification model, depending on the task's semantic versus lexico-syntactic nature. We also introduce a novel multi-armed bandit based training approach that dynamically learns how to effectively switch across tasks during multi-task learning. Experiments on multiple popular datasets demonstrate that our model outperforms competitive simplification systems in SARI and FKGL automatic metrics, and human evaluation. Further, we present several ablation analyses on alternative layer sharing methods, soft versus hard sharing, dynamic multi-armed bandit sampling approaches, and our model's learned entailment and paraphrasing skills.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentence simplification is the task of improving the readability and understandability of an input text. This challenging task has been the subject of research interest because it can address automatic ways of improving reading aids for people with limited language skills, or language impairments such as dyslexia (Rello et al., 2013) , autism (Evans et al., 2014) , and aphasia (Carroll et al., 1999) . It also has wide applications in NLP tasks as a preprocessing step, for example, to improve the performance of parsers (Chandrasekar et al., 1996 ), summarizers (Klebanov et al., 2004 , and semantic role labelers (Vickrey and Koller, 2008; Woodsend and Lapata, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 335, |
|
"text": "(Rello et al., 2013)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 365, |
|
"text": "(Evans et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 402, |
|
"text": "(Carroll et al., 1999)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 524, |
|
"end": 550, |
|
"text": "(Chandrasekar et al., 1996", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 588, |
|
"text": "), summarizers (Klebanov et al., 2004", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 644, |
|
"text": "(Vickrey and Koller, 2008;", |
|
"ref_id": "BIBREF65" |
|
}, |
|
{ |
|
"start": 645, |
|
"end": 671, |
|
"text": "Woodsend and Lapata, 2014)", |
|
"ref_id": "BIBREF70" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several sentence simplification systems focus on operations such as splitting a long sentence into shorter sentences (Siddharthan, 2006; Petersen and Ostendorf, 2007) , deletion of less important words/phrases (Knight and Marcu, 2002; Clarke and Lapata, 2006; Filippova and Strube, 2008) , and paraphrasing (Devlin, 1999; Inui et al., 2003; Kaji et al., 2002) . Inspired from machine translation based neural models, recent work has built end-to-end sentence simplification models along with attention mechanism, and further improved it with reinforcement-based policy gradient approaches (Zhang and Lapata, 2017) . Our baseline is a novel application of the pointer-copy mechanism (See et al., 2017) for the sentence simplification task, which allows the model to directly copy words and phrases from the input to the output. We further improve this strong baseline by bringing in auxiliary entailment and paraphrasing knowledge via soft and dynamic multi-level, multi-task learning. 1 Apart from the three simplification operations discussed above, we also ensure that the simplified output is a directed logical entailment w.r.t. the input text, i.e., does not generate any contradictory or unrelated information. We incorporate this entailment skill via multi-task learning (Luong et al., 2015) with an auxiliary entailment generation task. Further, we also induce word/phrase-level paraphrasing knowledge via a paraphrase generation task, enabling parallel learning of these three tasks in a threeway multi-task learning setup. We employ a novel 'multi-level' layered, soft sharing approach, where the parameters between the tasks are loosely coupled at different levels of layers; we share higherlevel semantic layers between the sentence simplification and entailment generation tasks (which teaches the model to generate outputs that are entailed by the full input), while sharing the lower-level lexicosyntactic layers between the sentence simplification and paraphrase generation tasks (which teaches the model to paraphrase only the smaller sub-sentence pieces).", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 136, |
|
"text": "(Siddharthan, 2006;", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 166, |
|
"text": "Petersen and Ostendorf, 2007)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 234, |
|
"text": "(Knight and Marcu, 2002;", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 259, |
|
"text": "Clarke and Lapata, 2006;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 287, |
|
"text": "Filippova and Strube, 2008)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 321, |
|
"text": "(Devlin, 1999;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 340, |
|
"text": "Inui et al., 2003;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 359, |
|
"text": "Kaji et al., 2002)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 613, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 700, |
|
"text": "(See et al., 2017)", |
|
"ref_id": "BIBREF57" |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 986, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1278, |
|
"end": 1298, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, we also propose a multi-armed bandit approach that dynamically learns an effective schedule (curriculum) of switching between tasks for optimization during multi-task learning, instead of the traditional approach with a manually-tuned static (fixed) mixing ratio (Luong et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 292, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Empirically, we evaluate our system on three standard datasets: Newsela, WikiSmall, and WikiLarge. First, we show that our pointer-copy baseline is significantly better than sequence-to-sequence models, and competitive w.r.t. the state-of-the-art. Next, we show that our multi-level, multi-task framework performs significantly better than our strong pointer baseline and other competitive sentence simplification models on both automatic evaluation as well as on human study simplicity criterion. Further, we show that the dynamic multi-armed bandit based switching of tasks during training improves over the traditional manually-tuned static mixing ratio. Lastly, we show several ablation studies based on different layer-sharing approaches (higher versus lower) with auxiliary tasks, hard versus soft sharing, dynamic mixing ratio sampling, as well as our model's learned entailment and paraphrasing skills.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Previous approaches to sentence simplification systems range from hand-designed rules (Siddharthan, 2006) , to syntactic and lexical simplification via synonyms and paraphrases (Siddharthan, 2014; Kaji et al., 2002; Horn et al., 2014; Glava\u0161 and\u0160tajner, 2015) , as well as treating simplification as a monolingual MT task, where operations are learned from examples of complex-simple sentence pairs (Specia, 2010; Koehn et al., 2007; Coster and Kauchak, 2011; Zhu et al., 2010; Wubben et al., 2012; Narayan and Gardent, 2014) . Recently, Xu et al. (2016) trained a syntax-based MT model using the newly proposed SARI as a simplification-specific objective. Further, Zhang and Lapata (2017) used reinforcement learning in a sequence-to-sequence approach to directly optimize simplification metrics. In this work, we first introduce the pointer-copy mechanism (See et al., 2017) as a novel application to sentence simplification, and then use multi-task learning to bring in auxiliary entailment and paraphrasing skills.", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 105, |
|
"text": "(Siddharthan, 2006)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 196, |
|
"text": "(Siddharthan, 2014;", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 215, |
|
"text": "Kaji et al., 2002;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 216, |
|
"end": 234, |
|
"text": "Horn et al., 2014;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 259, |
|
"text": "Glava\u0161 and\u0160tajner, 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 399, |
|
"end": 413, |
|
"text": "(Specia, 2010;", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 433, |
|
"text": "Koehn et al., 2007;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 459, |
|
"text": "Coster and Kauchak, 2011;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 477, |
|
"text": "Zhu et al., 2010;", |
|
"ref_id": "BIBREF77" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 498, |
|
"text": "Wubben et al., 2012;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 499, |
|
"end": 525, |
|
"text": "Narayan and Gardent, 2014)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 538, |
|
"end": 554, |
|
"text": "Xu et al. (2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 666, |
|
"end": 689, |
|
"text": "Zhang and Lapata (2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 858, |
|
"end": 876, |
|
"text": "(See et al., 2017)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Multi-task learning, known for improving the generalization performance of a task with related tasks, has successful application to many domains of machine learning (Caruana, 1998; Collobert and Weston, 2008; Girshick, 2015; Luong et al., 2015; . Although there are many variants of multi-task learning (Ruder et al., 2017; Hashimoto et al., 2017; Luong et al., 2015) , our approach is similar to Luong et al. (2015) , where different tasks share some common model parameters with alternating mini-batches optimization. In this work, we explore a multi-level (i.e., taskspecific higher-level semantic versus lower-level lexico-syntactic layer sharing) and soft-sharing mechanism for improving sentence simplification via related tasks of entailment and paraphrase generation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 180, |
|
"text": "(Caruana, 1998;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 208, |
|
"text": "Collobert and Weston, 2008;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 224, |
|
"text": "Girshick, 2015;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 244, |
|
"text": "Luong et al., 2015;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 323, |
|
"text": "(Ruder et al., 2017;", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 347, |
|
"text": "Hashimoto et al., 2017;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 367, |
|
"text": "Luong et al., 2015)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 397, |
|
"end": 416, |
|
"text": "Luong et al. (2015)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Recognizing Textual Entailment (RTE) is the task of predicting entailment, contradiction, or neutral relationships, and is useful for many downstream tasks like Q&A, summarization, and information retrieval (Harabagiu and Hickl, 2006; Dagan et al., 2006; Lai and Hockenmaier, 2014; Jimenez et al., 2014) . Neural network models (Bowman et al., 2015; Parikh et al., 2016) and large datasets (Bowman et al., 2015; Williams et al., 2017) enabled recent strong progress. Recently, and Guo et al. (2018) presented results using entailment generation as an auxiliary task for abstractive summarization; however, we use entailment as well as paraphrasing knowledge in a soft and multi-level layer sharing setup to improve sentence simplification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 234, |
|
"text": "(Harabagiu and Hickl, 2006;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 254, |
|
"text": "Dagan et al., 2006;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 281, |
|
"text": "Lai and Hockenmaier, 2014;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 282, |
|
"end": 303, |
|
"text": "Jimenez et al., 2014)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 349, |
|
"text": "(Bowman et al., 2015;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 370, |
|
"text": "Parikh et al., 2016)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 411, |
|
"text": "(Bowman et al., 2015;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 434, |
|
"text": "Williams et al., 2017)", |
|
"ref_id": "BIBREF68" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 498, |
|
"text": "Guo et al. (2018)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Previous work (Barzilay and McKeown, 2001; Ganitkevitch et al., 2013; Wieting and Gimpel, 2017a) has developed methods and datasets for generating paraphrase pairs which can be useful for downstream applications such as question answering, semantic parsing, and information extraction (Fader et al., 2013; Berant and Liang, 2014; Zhang et al., 2015) . Wieting and Gimpel (2017a) recently introduced a large sentential paraphrase dataset via back-translation, and showed promising results when applied to learning sentence embeddings. In this work, we use this paraphrase dataset as an auxiliary generation task to improve our sentence simplification model by teaching it about paraphrasing in a multi-task setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 42, |
|
"text": "(Barzilay and McKeown, 2001;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 43, |
|
"end": 69, |
|
"text": "Ganitkevitch et al., 2013;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 70, |
|
"end": 96, |
|
"text": "Wieting and Gimpel, 2017a)", |
|
"ref_id": "BIBREF66" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 305, |
|
"text": "(Fader et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 329, |
|
"text": "Berant and Liang, 2014;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 330, |
|
"end": 349, |
|
"text": "Zhang et al., 2015)", |
|
"ref_id": "BIBREF76" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 378, |
|
"text": "Wieting and Gimpel (2017a)", |
|
"ref_id": "BIBREF66" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many control problems can be cast as a multi-armed bandits algorithm, where the goal of the agent is to select the arm/action from one of the M choices that gives the maximum expected future reward (Bubeck et al., 2012) . Optimal control and reinforcement learning have been used to find the trade-off between exploitation and exploration, and yield theoretically-sound regret bounds, e.g., Boltzmann exploration (Kaelbling et al., 1996) , UCB (Auer et al., 2002a) , Thompson sampling (Chapelle and Li, 2011) , adversarial bandits (Auer et al., 2002b) , and information gain using variational approaches (Houthooft et al., 2016) . Recently, Graves et al. (2017) use a non-stationary multi-armed bandit to automatically select the curriculum or syllabus that a neural network follows so as to maximize learning efficiency. Sharma and Ravindran (2017) use multi-armed bandit sampling to choose which domain data (harder vs. easier) to feed as input to a single model (using different Atari games), whereas we use multi-armed bandit sampling to decide the optimization curriculum (mixing ratio) among our three models for sentence simplification, entailment generation, and paraphrase generation (with different softly-shared layers).", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 219, |
|
"text": "(Bubeck et al., 2012)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 437, |
|
"text": "(Kaelbling et al., 1996)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 464, |
|
"text": "(Auer et al., 2002a)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 508, |
|
"text": "(Chapelle and Li, 2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 551, |
|
"text": "(Auer et al., 2002b)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 604, |
|
"end": 628, |
|
"text": "(Houthooft et al., 2016)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 661, |
|
"text": "Graves et al. (2017)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section, we first describe our sentence simplification baseline model with attention mechanism, which is further improved by pointer-copy mechanism. Later, we introduce our two auxiliary tasks (entailment and paraphrase generation) and discuss how they can share specific lower/higher-level layers/parameters to improve the sentence simplification task in a multi-task learning setting. Finally, we discuss our new multi-armed bandit based dynamic multi-task learning approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our baseline is a 2-layer sequence-to-sequence model with both attention (Bahdanau et al., 2015) and pointer-copy mechanism (See et al., 2017) . Given the sequence of input/source tokens x = {x 1 , ..., x Ts }, the model learns an auto-regressive distribution over output/target tokens y = {y 1 , ..., y To }, which is defined as P vocab (y|x; \u03b8) = t p(y t |y 1:t\u22121 , x; \u03b8), where \u03b8 represents model parameters and p(y t |y 1:t\u22121 , x; \u03b8) is probability of generating token y t at decoder time step t given the previous generated tokens y 1:t\u22121 and input x. Given encoder hidden states {h i }, and decoder's t th time step hidden state (of last layer) s t , the context vector c t = i \u03b1 t,i h i , where the attention weights \u03b1 t,i define an attention distribution over encoder hidden states: \u03b1 t,i = exp(e t,i )/ k exp(e t,k ), where", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 96, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 142, |
|
"text": "(See et al., 2017)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "e t,i = v T a tanh(W a s t + U a h i + b a ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Finally, the conditional distribution at each time step t of the decoder is defined as p(y t |y 1:t\u22121 , x; \u03b8) = softmax(W s s t ), where the final hidden state s t is a combination of context vector c t and last layer hidden state s t and is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "s t = tanh(W c [c t , s t ])", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", where W s and W c are trained parameters. Pointer-Copy Mechanism: This helps in directly copying the words from the source inputs to the target outputs via merging the generative distribution and attention distribution (as a proxy of copy distribution). The goal of sentence simplification is to rewrite sentences more simply, while preserving important information; hence, it also involves significant amount of copying from the source. Our pointer mechanism approach is similar to See et al. (2017) . At each time step of the decoder, the model makes a (soft) choice between words from the vocabulary distribution P vocab and attention distribution P att (based on words in the input) using the word generation probability", |
|
"cite_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 502, |
|
"text": "See et al. (2017)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p g = \u03c3(W g c t + U g s t + V g d t + b g ), where \u03c3(\u2022)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is sigmoid, W g , U g , V g and b g are trainable parameters, and d t is decoder input. The final vocabulary distribution is defined as the weighted combination of vocabulary and attention distributions:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P f (y) = p g P vocab (y) + (1 \u2212 p g )P att (y)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
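
{

"text": "To make the above decoder step concrete, the following is a minimal editorial sketch (an illustration under assumed shapes, not the authors' released code) of the attention distribution, the generation probability p_g, and the final mixture of Eq. (1), for a single unbatched example in PyTorch:\n\nimport torch\nimport torch.nn.functional as F\n\ndef pointer_copy_step(h, s_t, d_t, params, src_ids, vocab_size):\n    # h: encoder states (T_s, d); s_t: top-layer decoder state (d,); d_t: decoder input (d,)\n    # W_a, U_a: (d, d); v_a, b_a, W_g, U_g, V_g: (d,); W_c: (d, 2d); W_s: (V, d); b_g: scalar\n    W_a, U_a, v_a, b_a, W_c, W_s, W_g, U_g, V_g, b_g = params\n    e = torch.tanh(h @ U_a.T + s_t @ W_a.T + b_a) @ v_a             # scores e_{t,i}, shape (T_s,)\n    alpha = F.softmax(e, dim=0)                                      # attention distribution\n    c_t = alpha @ h                                                  # context vector c_t\n    s_hat = torch.tanh(W_c @ torch.cat([c_t, s_t]))                  # combined state\n    p_vocab = F.softmax(W_s @ s_hat, dim=0)                          # generation distribution\n    p_g = torch.sigmoid(W_g @ c_t + U_g @ s_t + V_g @ d_t + b_g)     # generation probability\n    p_att = torch.zeros(vocab_size).scatter_add(0, src_ids, alpha)   # copy mass onto source token ids\n    return p_g * p_vocab + (1 - p_g) * p_att                         # Eq. (1)\n\nIn the full model this runs per batch element and decoding step; batching and out-of-vocabulary handling are omitted for brevity.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Pointer-Copy Baseline Sentence Simplification Model",

"sec_num": "3.1"

},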
|
{ |
|
"text": "3.2 Auxiliary Tasks Entailment Generation The task of entailment generation is to generate a hypothesis which is entailed by the given input premise. A good simplified sentence should be entailed by (follow from) the source Figure 2 : Overview of our multi-armed bandits algorithm for dynamic mixing ratio learning. It consists of a controller with 3 arms/tasks. sentence, and hence we incorporate such knowledge through an entailment generation task into our sentence simplification task. We share the higher-level semantic layers between the two tasks (see reasoning in Sec. 3.3 below). We use entailment pairs from SNLI (Bowman et al., 2015) and Multi-NLI (Williams et al., 2017) datasets for training our entailment generation model, where we use the same architecture as our sentence simplification model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 659, |
|
"end": 682, |
|
"text": "(Williams et al., 2017)", |
|
"ref_id": "BIBREF68" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 232, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Paraphrase Generation Paraphrase generation is the task of generating similar meaning phrases or sentences by reordering and modifying the syntax and/or lexicon. Paraphrasing is one of the common operations used in sentence simplification, i.e, by substituting complex words and phrases with their simpler paraphrase forms. Hence, we add this knowledge to the sentence simplification task via multitask learning, by sharing the lower-level lexico-syntactic layers between the two tasks (see reasoning in Sec. 3.3 below). For this, we use the paraphrase pairs from ParaNMT (Wieting and Gimpel, 2017a) . Here, again, we use the same architecture as our sentence simplification model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 572, |
|
"end": 599, |
|
"text": "(Wieting and Gimpel, 2017a)", |
|
"ref_id": "BIBREF66" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pointer-Copy Baseline Sentence Simplification Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In this subsection, we discuss our multi-task, multi-level soft sharing strategy with parallel training of sentence simplification and related auxiliary tasks (entailment and paraphrase generation).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Task Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The predominant approach for multi-task learning in sequence-to-sequence models is to directly hardshare all encoder/decoder layers/parameters (Luong et al., 2015; Johnson et al., 2016; Kaiser et al., 2017) . However, this approach places very strong constraints/priors on the primary model to compress knowledge from diverse tasks. We believe that while the auxiliary tasks considered in this work share many similarities with the primary sentence simplification task, they are still different in either lower-level or higher-level representations (e.g., entailment will deal with higher-level, full-sentence logical inference, while paraphrasing will handle the lower-level intermediate word/phrase simplifications). In this section, we propose to relax the priors in two ways: (1) we share the model parameters in a finer-grained scale, i.e. layer-specific sharing, by keeping some of their parameters private, while sharing related representations; and (2) we encourage shared parameters to be close in certain distance metrics with a penalty term instead of hard-parameter-tying (Luong et al., 2015) . Fig. 1 shows our multi-task model with parallel training of three tasks: sentence simplification (primary task), entailment generation (auxiliary task), and paraphrase generation (auxiliary task). Recently, Belinkov et al. (2017) observed that different layers in a sequence-tosequence model (trained on translation) exhibit different functionalities: lower-layers (closer to inputs) of the encoder learn to represent word structure while higher layers (farther from inputs) are more focused on semantics and meanings (Zeiler and Fergus (2014) observed similar findings for convolutional image features). Based on these findings, we share the higher-level layers 2 between the entailment generation and sentence simplification tasks, since they share higher semantic-level language inference skills (for full sentence-to-sentence logical directedness). On the other hand, we share the lower-level lexico-syntactic layers 3 between the paraphrase generation and sentence simplification tasks, since they share more word/phrase and syntactic level paraphrasing knowledge to simplify the smaller, intermediate sentence pieces. Sec. 6 present empirical ablations to support our intuitive layer sharing. 4 Soft Sharing In multi-task learning, we can do either hard sharing or soft sharing of parameters. Hard sharing directly ties the parameters to be shared, and receives gradient information from multiple tasks. On the other hand, soft sharing only loosely couples the parameters, and encourages them to be close in representation space. Hence the soft sharing approach gives more flexibility for parameter sharing, hence allowing different tasks to choose what parts of their parameters space to share. We minimize the l 2 distance between shared parameters as a regularization along with the cross entropy loss. Hence, the final loss function of the primary task with a related auxiliary task is defined as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 163, |
|
"text": "(Luong et al., 2015;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 185, |
|
"text": "Johnson et al., 2016;", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 206, |
|
"text": "Kaiser et al., 2017)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1084, |
|
"end": 1104, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 1314, |
|
"end": 1336, |
|
"text": "Belinkov et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 2306, |
|
"end": 2307, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1107, |
|
"end": 1113, |
|
"text": "Fig. 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Task Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "L(\u03b8) = \u2212 log P f (y|x; \u03b8) + \u03bb||\u03b8 s \u2212 \u03c6 s || (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Level Sharing Mechanism", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b8 represents the full parameters of the primary task (sentence simplification), \u03b8 s and \u03c6 s are the subsets of shared parameters between the primary and auxiliary task resp., and \u03bb is a hyperparameter.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Level Sharing Mechanism", |
|
"sec_num": null |
|
}, |
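
{

"text": "As an editorial illustration (not the authors' code), Eq. (2) simply adds an \u2113_2 penalty between corresponding shared tensors to the usual negative log-likelihood; a minimal PyTorch-style sketch, where the parameter lists and the \u03bb value are assumptions:\n\nimport torch\n\ndef soft_sharing_loss(neg_log_p_f, shared_primary, shared_aux, lam=1e-4):\n    # neg_log_p_f: -log P_f(y|x; theta) for the current batch\n    # shared_primary / shared_aux: corresponding lists of shared tensors (theta_s, phi_s)\n    reg = sum(torch.norm(p - a) for p, a in zip(shared_primary, shared_aux))\n    return neg_log_p_f + lam * reg  # Eq. (2)\n\nBecause the coupling is a penalty rather than hard tying, each task keeps its own copy of the shared layers, and the gradients pull the copies toward each other.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Level Sharing Mechanism",

"sec_num": null

},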
|
{ |
|
"text": "Multi-Task Training We employ multi-task learning with parallel training of related tasks in alternate mini-batches based on a mixing ratio \u03b1 ss :\u03b1 eg :\u03b1 pp , where we alternatively optimize \u03b1 ss , \u03b1 eg , \u03b1 pp minibatches of sentence simplification, entailment generation, and paraphrase generation, respectively, until all models converge. In the next section, we discuss a new approach to replace this static mixing ratio with dynamically-learned task switching.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Level Sharing Mechanism", |
|
"sec_num": null |
|
}, |
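
{

"text": "For concreteness, a toy sketch of the static alternating mini-batch schedule described above (an editorial illustration: the task names, the 4:1:1 ratio, and the stubs are all hypothetical):\n\nfrom itertools import cycle\n\nratio = {'simplify': 4, 'entail': 1, 'paraphrase': 1}  # alpha_ss : alpha_eg : alpha_pp\nloaders = {t: cycle([t + '_batch']) for t in ratio}    # stand-in data loaders\n\ndef train_step(task, batch):\n    pass  # one forward/backward/update on that task's model (shared layers softly tied)\n\nfor epoch in range(10):  # in practice: until all models converge\n    for task, n in ratio.items():\n        for _ in range(n):\n            train_step(task, next(loaders[task]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Level Sharing Mechanism",

"sec_num": null

},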
|
{ |
|
"text": "Current multi-task models are trained via alternate mini-batch optimization based on a task 'mixing ratio' (Luong et al., 2015; , i.e., how many iterations on each task relative to other tasks (see end of Sec. 3.3). This is usually treated as a very important hyperparameter to be tuned, and the search space scales exponentially with the number of tasks. Hence, we importantly replace this manually-tuned and static mixing ratio with a 'dynamic' mixing ratio learning approach, where a controller automatically switches between the tasks during training, based on the current state of the multi-task model. Specifically, we use a multi-armed bandits based controller with Boltzmann exploration (Kaelbling et al., 1996) with an exponential moving average update rule. We view the problem of learning the right mixing of tasks as a sequential control problem, where the controller's goal is to decide the next task/action after every n s training steps in each task-sampling round t b . 5 Let {a 1 , ..., a M } represent the set of 3 tasks in our multi-task setting, i.e., sentence simplification, entailment generation, and paraphrase generation. We model the controller as a M -armed bandits, where it selects a sequence of actions/arms over the current training trajectory to maximize the expected future payoffs (see Fig. 2 ). At each round t b , the controller selects an arm based on noisy value estimates and observes rewards r t b for the selected arm (we use the negative validation loss of the primary task as the reward in our setup). One problem in bandits learning is the trade-off between exploration and exploitation, where the agent needs to make a decision between taking the action that yields the best payoff on current estimates, or explore new actions whose payoffs are not yet certain. For this, we use the Boltzmann exploration (Kaelbling et al., 1996) with exponentially moving action value estimates. Let \u03c0 t b be the policy of the bandit controller at round t b , we define this to be:", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 127, |
|
"text": "(Luong et al., 2015;", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 719, |
|
"text": "(Kaelbling et al., 1996)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 1850, |
|
"end": 1874, |
|
"text": "(Kaelbling et al., 1996)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1320, |
|
"end": 1326, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Learning", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c0 t b (a i ) = exp(Q t b ,i /\u03c4 ) M j=1 exp(Q t b ,j /\u03c4 )", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Dynamic Mixing Ratio Learning", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "where Q t b ,i is the estimated action value of each arm i at round t b , and \u03c4 is the temperature. 6 If Q 0,i is the initial value estimate of arm i, then Q t b ,i is the exponentially weighted mean with the decay rate \u03b1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Learning", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Q t b ,i = (1 \u2212 \u03b1) t b Q 0,i + t b k=1 \u03b1(1 \u2212 \u03b1) t b \u2212k r k (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Learning", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "To further help the exploration process, we follow the principle of optimism under uncertainty (Sutton and Barto, 1998) and set Q 0,i to be above the maximum empirical rewards. Empirically, we show that this approach of 'dynamic mixing ratio' is equal or better than the traditional static mixing ratio (see Table 3 ). Also, we further show ablation study in Sec. 6 to show that this switching approach is better than the alternative approach of first using multi-armed bandits for finding an optimal 'final' mixing ratio and then re-training the model based on this bandits-selected mixing ratio.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 315, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Learning", |
|
"sec_num": "3.4" |
|
}, |
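
{

"text": "The controller defined by Eqs. (3) and (4) can be sketched as follows (an editorial illustration, not the authors' code): a Boltzmann policy over exponentially-moving action-value estimates with optimistic initial values. The constants (q0, decay rate \u03b1, temperature \u03c4) and the reward placeholder are illustrative, and updating only the selected arm's estimate is one plausible reading of Eq. (4):\n\nimport math\nimport random\n\nclass BanditController:\n    def __init__(self, n_arms, q0=1.0, alpha=0.1, tau=1.0):\n        self.q = [q0] * n_arms  # optimistic initialization (optimism under uncertainty)\n        self.alpha = alpha      # decay rate\n        self.tau = tau          # temperature\n\n    def policy(self):\n        z = [math.exp(q / self.tau) for q in self.q]\n        return [p / sum(z) for p in z]  # pi_{t_b}(a_i), Eq. (3)\n\n    def select(self):\n        return random.choices(range(len(self.q)), weights=self.policy())[0]\n\n    def update(self, arm, reward):\n        # recursive form of the exponentially-weighted mean in Eq. (4)\n        self.q[arm] = (1 - self.alpha) * self.q[arm] + self.alpha * reward\n\n# Usage: one arm per task; reward = negative validation loss of the primary task.\ntasks = ['simplification', 'entailment generation', 'paraphrase generation']\nctrl = BanditController(n_arms=len(tasks))\nfor t_b in range(100):         # task-sampling rounds\n    arm = ctrl.select()\n    # ... run n_s training steps on tasks[arm], then evaluate ...\n    reward = -random.random()  # placeholder for -validation_loss\n    ctrl.update(arm, reward)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dynamic Mixing Ratio Learning",

"sec_num": "3.4"

},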
|
{ |
|
"text": "Datasets We first describe the three standard sentence simplification datasets we evaluate on: Newsela, WikiSmall, and WikiLarge; next, we describe datasets for our auxiliary entailment and paraphrase generation tasks. Newsela (Xu et al., 2015) is acknowledged as a higher-quality dataset for studying sentence simplifications, as opposed to Wikipedia-based datasets which automatically align complex-simple sentence pairs and have generalization issues (Zhang and Lapata, 2017; Xu et al., 2015; Amancio and Specia, 2014; Hwang et al., 2015; \u0160tajner et al., 2015) . Newsela consists of 1, 130 news articles, and we follow previous work (Zhang and Lapata, 2017) to use the first 1, 070 documents for training, and 30 documents each for development and test. WikiSmall (Zhu et al., 2010) contains automatically-aligned complexsimple sentences from the ordinary-simple English Wikipedias. The data has 89, 042 pairs for training and 100 for test. We use the 205-pairs validation set from Zhang and Lapata (2017) . WikiLarge (Zhang and Lapata, 2017) is a larger Wikipedia corpus aggregating pairs from Kauchak (2013) , Woodsend and Lapata (2011) , and WikiSmall. We use the exact training/evaluation sets provided by Zhang and Lapata (2017) . SNLI and MultiNLI: For the task of entailment generation, we use the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) . We use their entailment labeled pairs for our entailment generation task, following previous work . The combined SNLI and MultiNLI dataset has 302, 879 entailment pairs, out of which we use 276, 720 pairs for training, and the rest are divided into validation and test sets. ParaNMT: For the task of paraphrase generation, we use the back-translated paraphrase dataset provided by Wieting and Gimpel (2017a) . The filtered version of the dataset has 5.3 million pairs of paraphrases. 7 We use 99% for training, and the rest are evenly divided into validation and test sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 244, |
|
"text": "(Xu et al., 2015)", |
|
"ref_id": "BIBREF72" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 478, |
|
"text": "(Zhang and Lapata, 2017;", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 495, |
|
"text": "Xu et al., 2015;", |
|
"ref_id": "BIBREF72" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 521, |
|
"text": "Amancio and Specia, 2014;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 541, |
|
"text": "Hwang et al., 2015;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 563, |
|
"text": "\u0160tajner et al., 2015)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 636, |
|
"end": 660, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 767, |
|
"end": 785, |
|
"text": "(Zhu et al., 2010)", |
|
"ref_id": "BIBREF77" |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 1008, |
|
"text": "Zhang and Lapata (2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 1098, |
|
"end": 1112, |
|
"text": "Kauchak (2013)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1115, |
|
"end": 1141, |
|
"text": "Woodsend and Lapata (2011)", |
|
"ref_id": "BIBREF69" |
|
}, |
|
{ |
|
"start": 1213, |
|
"end": 1236, |
|
"text": "Zhang and Lapata (2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 1393, |
|
"end": 1416, |
|
"text": "(Williams et al., 2017)", |
|
"ref_id": "BIBREF68" |
|
}, |
|
{ |
|
"start": 1800, |
|
"end": 1826, |
|
"text": "Wieting and Gimpel (2017a)", |
|
"ref_id": "BIBREF66" |
|
}, |
|
{ |
|
"start": 1903, |
|
"end": 1904, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Evaluation Metrics Following previous work (Zhang and Lapata, 2017) , we report all the standard evaluation metrics: SARI (Xu et al., 2016) , FKGL (Kincaid et al., 1975) , and BLEU (Papineni et al., 2002) . However, several studies have shown that BLEU is poorly correlated w.r.t. simplicity (Zhu et al., 2010; \u0160tajner et al., 2015; Xu et al., 2016) . Moreover, Shardlow (2014) argues that FKGL (Kincaid et al., 1975) , which measures readability of simpler output (lower is better), favors very short sentences even though longer/less coarse counterparts can be simpler. Further, Xu et al. (2016) argues that BLEU tends to favor conservative systems that do not make many changes, and proposes SARI metric which explicitly measures the quality of words that are added and deleted. SARI is shown to correlate well with human judgment in simplicity (Xu et al., 2016) , and hence we primarily focus on this metric in our models' performance analysis. 8 Further, we also do human evaluation based on: Fluency ('is the output grammatical and well formed?'), Adequacy ('to what extent is the meaning expressed in the original sentence preserved in the output?') and Simplicity ('is the output simpler than the original sentence?'), following guidelines suggested by Xu et al. (2016) and Zhang and Lapata (2017) . Training Details All our soft/hard and layer-specific sharing decisions (Sec. 6) were made on the validation/dev set. Our model selection (tuning) criteria is based on the average of our 3 metrics (SARI, BLEU, 1/FKGL) on the validation set. Please refer to the appendix for full training details (vocabulary overlap, mixing ratios and bandit sampler decay rates and reward, WikiLarge pre-training, etc.).", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 67, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 139, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 169, |
|
"text": "FKGL (Kincaid et al., 1975)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 204, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 310, |
|
"text": "(Zhu et al., 2010;", |
|
"ref_id": "BIBREF77" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 332, |
|
"text": "\u0160tajner et al., 2015;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 349, |
|
"text": "Xu et al., 2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 377, |
|
"text": "Shardlow (2014)", |
|
"ref_id": "BIBREF58" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 417, |
|
"text": "(Kincaid et al., 1975)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 597, |
|
"text": "Xu et al. (2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 865, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 1261, |
|
"end": 1277, |
|
"text": "Xu et al. (2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 1282, |
|
"end": 1305, |
|
"text": "Zhang and Lapata (2017)", |
|
"ref_id": "BIBREF75" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Setup", |
|
"sec_num": "4" |
|
}, |
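
{

"text": "For reference, a minimal sketch of the FKGL formula (Kincaid et al., 1975) discussed above; the formula's constants are standard, but the syllable counter is a crude vowel-group heuristic added here for illustration (real evaluations typically use established implementations), and SARI is omitted since it requires reference sets and is best computed with the released scripts:\n\ndef count_syllables(word):\n    # crude heuristic: count maximal vowel groups, minimum one syllable\n    groups, prev = 0, False\n    for ch in word.lower():\n        is_vowel = ch in 'aeiouy'\n        if is_vowel and not prev:\n            groups += 1\n        prev = is_vowel\n    return max(groups, 1)\n\ndef fkgl(n_words, n_sentences, n_syllables):\n    # Flesch-Kincaid Grade Level: lower = simpler/more readable output\n    return 0.39 * (n_words / n_sentences) + 11.8 * (n_syllables / n_words) - 15.59",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Setup",

"sec_num": "4"

},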
|
{ |
|
"text": "We evaluate our models on three datasets and via several automatic metrics plus human evaluation. 9", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Pointer Baseline First, we compare our pointer baseline with various previous works: PBMT-R (Wubben et al., 2012) , Hybrid (Narayan and Gardent, 2014) , SBMT-SARI (Xu et al., 2016) 10 , and EncDecA, DRESS, and DRESS-LS (Zhang and Lapata, 2017) . As shown in Table 1 , our pointer baseline already achieves the best score in FKGL and the second-best score in SARI on Newsela, and also achieves overall comparable results on both WikiSmall and WikiLarge (see Table 2 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "(Wubben et al., 2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 150, |
|
"text": "(Narayan and Gardent, 2014)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 180, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF73" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 243, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 265, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 464, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We further improve our strong pointer-based sentence simplification baseline model by multi-task learning it with entailment and paraphrase generation. First, we show that our 2-way multi-task models with auxiliary tasks (entailment and paraphrase generation) are statistically significantly better than our pointer baseline and previous works in both SARI and FKGL on Newsela (see Table 1 ). 11 Next, Table 1 and Table 2 summarize the performance of our final 3way multi-level, multi-task models with entailment generation and paraphrase generation on all three datasets. Here, our 3-way multi-task models are statistically significantly better than our pointer baselines in both SARI and FKGL (with p < 0.01) on Newsela and WikiSmall, and in SARI (p < 0.01) on WikiLarge. Also, our 3-way multi-task model is statistically significantly better than the 2-way multi-task models in SARI and FKGL with p < 0.01 (see Table 1 ). In Sec. 6, we further provide a set of detailed ablation experiments investigating the effects of different (higher-level versus lower-level) layer sharing methods and soft-vs. hard-sharing in our multi-level, multi-task models; and we show the superiority of our final choice of higher-level semantic sharing for entailment generation and lower-level lexico-syntactic sharing for paraphrase generation.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 382, |
|
"end": 389, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 402, |
|
"end": 421, |
|
"text": "Table 1 and Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 914, |
|
"end": 921, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Task Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we present results on our 3-way multi-task model with the new approach of using 'dynamic' mixing ratios based on multi-armed bandits sampling (see Sec. 3.4). 9 As described in Sec. 4, Newsela is considered as a higher quality dataset for text simplification, and thus we report ablationstyle results (e.g., 2-way multi-task models and different layer-sharing ablations) and human evaluation on Newsela (since Wikipedia datasets are automatically-aligned). Moreover, we report SARI, FKGL, and BLEU for completeness, but as described in Sec. 4, SARI is the primary human-correlated metric for sentence simplification. 10 We borrow the SBMT-SARI results for WikiLarge from Zhang and Lapata (2017).", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 168, |
|
"text": "9", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 625, |
|
"end": 627, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "11 Stat. significance is computed via bootstrap test (Noreen, 1989; Efron and Tibshirani, 1994) . Both our 2-way multi-task models are statistically significantly better in SARI and FKGL with p < 0.01 w.r.t. our pointer baseline and previous works. Note the discussion in Sec. 4 about why BLEU is not a good sentence simplification metric. Table 4 : Human evaluation results (on left) and closeness-to-input source results (on right), for Newsela.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 67, |
|
"text": "(Noreen, 1989;", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 68, |
|
"end": 95, |
|
"text": "Efron and Tibshirani, 1994)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 347, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As shown in Table 3 , this dynamic multi-task approach achieves a stat. significant improvement in SARI as compared to the traditional fixed and manually-tuned mixing ratio based 3-way multi-task model: 33.22 vs. 32.98 (p < 0.05) on Newsela, and 29.58 vs. 28.24 (p < 0.001) on WikiSmall. Hence, this allows us to achieve better results while also avoiding the hassle of tuning on the large space of mixing ratios over several different tasks. In Sec. 6, we further provide ablation analysis to study whether the improvements come from the bandit learning this dynamic curriculum or from the bandit finding the final optimal mixing-ratio at the end of the sampling procedure (and also compare it to a random curriculum).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dynamic Mixing Ratio Models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also perform an anonymized human study comparing our pointer baseline, our multi-task model, some previous works (Hybrid (Narayan and Gardent, 2014) and state-of-the-art DRESS-LS (Zhang and Lapata, 2017)), and ground-truth references (see left part of Table 4 ), based on fluency, adequacy, and simplicity (see Sec. 4 for more details about these criteria) using 5-point Likert scale. We asked annotators to evaluate the models (randomly shuffled to anonymize model identity) based on 200 samples from the representative and cleaner Newsela test set, and their scores are reported in Table 4 . Our 3-way multi-task model achieves a significantly higher (p < 0.001) simplicity score compared to DRESS-LS, Hybrid, and our pointer baseline models. However, we next observe that our 3-way multi-task model has lower adequacy score as compared to DRESS-LS and the pointer model, but this is because our 3-way multi-task model focuses more strongly on simplification, which is the goal of the given task. Moreover, based on the overall average score of the three human evaluation criteria, our 3-way multi-task model is also significantly better (p < 0.03) than the state-of-the-art DRESS-LS model (and p < 0.001 w.r.t. Hybrid model). 12 Also, on further investigation, we found that a problem with the adequacy metric is that it gets artificially high scores for output sentences which are exact match (or a very close match) with the input source sentence, i.e., they have very little simplification and hence almost fully retain the exact meaning. In the right part of Table 4 , we analyzed the matching scores of the outputs from different models w.r.t. the source input text, based on BLEU, ROUGE (Lin, 2004) and exact match. First, this shows that the ground-truth sentence-simplification references are in fact (as expected) very different from the input source (0% exact match, 18% BLEU, 44% ROUGE). Next, we find that our multi-task model also has low match-with-input scores (2% exact match, 9% BLEU, 38% ROUGE), similar to the behavior of the ground-truth references. On the other hand, DRESS-LS (and pointer baseline) model is generating output sentences which are substantially closer to the input and hence is not making enough changes (14% exact match, 43% BLEU, 68% ROUGE) as compared to the references (which explains their higher adequacy but lower simplicity scores).", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 151, |
|
"text": "(Narayan and Gardent, 2014)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 1699, |
|
"end": 1710, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF46" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 255, |
|
"end": 262, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 594, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1569, |
|
"end": 1576, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we conduct several ablation analyses to study the different layer-sharing mechanisms (higher semantic vs. lower lexico-syntactic), soft-vs. hard-sharing, two dynamic multi-armed bandit approaches, and our model's learned entailment and paraphrasing skills. We also present and analyze some output examples from several models. 13 Note that all our soft and layer sharing decisions were strictly made on the dev/validation set (see Sec. 4). Table 5 : Multi-task layer ablation results on Newsela.", |
|
"cite_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 346, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 464, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablations and Analysis", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We empirically show that our final multi-level layer sharing method (i.e., higher-level semantic layer sharing with entailment generation, while lower-level lexico-syntactic layer sharing with paraphrase generation) performs better than the following alternative layer sharing methods: (1) both auxiliary tasks with high-level layer sharing, (2) both with low-level layer sharing, and (3) reverse/swapped sharing (i.e. lower-level layer sharing for entailment, and higher-level layer sharing for paraphrasing). Results in Table 5 show that our approach of high-level sharing for entailment generation and low-level sharing for paraphrase generation is statistically significantly better than all other alternative approaches in SARI (p < 0.01) (and statistically better or equal in FKGL).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 522, |
|
"end": 529, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Different Layer Sharing Approaches", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this work, we use soft-sharing instead of hard-sharing approach (benefits discussed in Sec. 3.3) in all of our models. Table 5 also presents empirical results comparing softvs. hard-sharing on our final 3-way multi-task model, and we observe that soft-sharing is statistically significantly better than hard-sharing in SARI with p < 0.01.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 129, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Quantitative Improvements in Entailment We employ a state-of-the-art entailment classifier (Chen et al., 2017) to calculate the entailment probabilities of output sentence being entailed by the groundtruth. 14 Table 6 summaries the average entailment scores for the Hybrid, DRESS-LS, Pointer baseline, and 2-way multi-task model (with entailment generation auxiliary task), showing that the 2-way multitask model improves in the aspect of logical entailment (p < 0.001), demonstrating the inference skill acquired by the simplification model via the auxiliary knowledge from the entailment generation task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 110, |
|
"text": "(Chen et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 209, |
|
"text": "14", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 210, |
|
"end": 217, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
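{
"text": "A minimal sketch of this entailment analysis is given below (the classifier interface entail_prob is hypothetical; any NLI model returning P(entailment | premise, hypothesis) can be plugged in; the ground-truth is used as premise, see footnote 14):\n\ndef avg_entailment_score(outputs, references, entail_prob):\n    # entail_prob(premise, hypothesis) -> probability that hypothesis\n    # is entailed by premise; we average this over the test set.\n    scores = [entail_prob(ref, out) for out, ref in zip(outputs, references)]\n    return sum(scores) / len(scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-vs. Hard-Sharing",
"sec_num": null
},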
|
{ |
|
"text": "Quantitative Improvements in Paraphrasing We use the paraphrase classifier from Wieting and Gimpel (2017b) to compute the paraphrase probability score between the generated output and the input source. The results in Table 6 show that our 2-way multi-task model (with paraphrasing generation auxiliary task) is closer to the ground-truth in terms of the amount of paraphrasing (w.r.t. input) required by the sentence-simplification task, while the pointer baseline and previous models have higher scores due to higher amount of copying from input source (see 'Match-with-Input' discussion in Sec. 5, Table 4 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 106, |
|
"text": "Wieting and Gimpel (2017b)", |
|
"ref_id": "BIBREF67" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 224, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 607, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Addition/Deletion Operations We also measured the performance of the various models in terms of the addition and deletion operations using SARI's sub-operation scores computed w.r.t. both the groundtruth and source (Xu et al., 2016) . Table 7 shows that our multi-task model is equal or better in terms of both operations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 232, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF73" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
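{
"text": "As a rough illustration of these sub-operation scores, the following is a simplified single-reference, unigram sketch of SARI's addition and deletion components (the official implementation we use, via the JOSHUA package, averages over n-gram orders and multiple references):\n\ndef sari_add_del_unigram(source, output, reference):\n    s, o, r = set(source.split()), set(output.split()), set(reference.split())\n    # Addition: words the system added (absent from the source) that the\n    # reference also added; scored with F1.\n    added_good = (o - s) & r\n    add_p = len(added_good) / max(len(o - s), 1)\n    add_r = len(added_good) / max(len(r - s), 1)\n    add_f1 = 2 * add_p * add_r / max(add_p + add_r, 1e-9)\n    # Deletion: words the system deleted (in the source, absent from the\n    # output) that the reference also deleted; SARI scores deletion by precision.\n    deleted_good = (s - o) & (s - r)\n    del_p = len(deleted_good) / max(len(s - o), 1)\n    return add_f1, del_p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-vs. Hard-Sharing",
"sec_num": null
},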
|
{ |
|
"text": "Two Multi-Armed-Bandit Approaches As described in Sec. 3.4, our multi-armed bandit approach with dynamic mixing ratio during multi-task training learns a sufficiently good curriculum to improve the sentence simplification task (see Sec. 5). Here, we further show an ablation study on another alternative approach of using multi-armed bandits, where we record the last 10% of the actions from the bandit controller 15 , then calculate the corresponding mixing ra-tio based on this 10%, and run another independent model from scratch with this fixed mixing ratio. We found that the curriculum-style dynamic switching of tasks is in fact very effective as compared to this other 2-stage approach (33.22 versus 32.58 in SARI with p < 0.01). This is intuitive because the dynamic switching of tasks during multi-task training allows the model Figure 3 : Task selection probability over training trajectory, predicted by bandit controller.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 838, |
|
"end": 846, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
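{
"text": "A minimal sketch of the dynamic controller loop is shown below, assuming an Exp3-style update (Auer et al., 2002b); the reward normalization and the exact role of the decay rate \u03b1 in our controller follow the released code rather than this simplified form:\n\nimport math, random\n\ndef run_bandit_controller(tasks, train_and_get_reward, num_rounds, gamma=0.3):\n    # One arm per task; sample a task, train on it for n_s mini-batches,\n    # then update the sampled arm with an importance-weighted reward.\n    K = len(tasks)\n    weights = [1.0] * K\n    for _ in range(num_rounds):\n        total = sum(weights)\n        probs = [(1 - gamma) * w / total + gamma / K for w in weights]\n        i = random.choices(range(K), weights=probs)[0]\n        reward = train_and_get_reward(tasks[i])  # e.g., negative validation loss\n        weights[i] *= math.exp(gamma * (reward / probs[i]) / K)\n    return weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft-vs. Hard-Sharing",
"sec_num": null
},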
|
{ |
|
"text": "to choose the best next task to run based on the current state (as well as the previous curriculum path) of the model, as opposed to a fixed/static single mixing ratio for the full training period. In Fig. 3 , we visualize the (moving averages of) probabilities of selecting each task, which shows that in the 0-1000 #rounds range, the bandit initially gives higher weight to the main task, but gradually redistributes the probabilities to the auxiliary tasks; and beyond 1000 #rounds, it then alternates switching among the three different tasks periodically. We also experimented with replacing the bandit controller with random task choices, and our bandit-controller achieves statistically significantly better results than this approach in both SARI and FKGL with p < 0.01, which shows that the path learned by the bandit controller is meaningful.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 207, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Multi-Task Learning vs. Data Augmentation To verify that our improvements come indeed from the auxiliary tasks' specific character/capabilities and not just due to adding more data, we separately trained word embeddings on each auxiliary dataset (i.e., SNLI+MultiNLI and ParaNMT) and incorporated them into the primary simplification model. We found that both our 2-way multi-task models perform stat. significantly better than these models (which use the auxiliary word-embeddings), suggesting that merely adding more data is not enough. Moreover, Table 5 shows that only specific intuitive (syntactic vs. semantic) layer sharing between the primary and auxiliary tasks helps results and not just adding data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 549, |
|
"end": 556, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Input: he put henson in charge of escorting his slaves to his brother 's kentucky plantation . Reference: he sent henson to take his slaves to kentucky . DRESS-LS: he put henson in charge of escorting his slaves to his brother 's kentucky plantation . Baseline: he put his slaves to his brother 's kentucky plantation . Multi-Task: he put henson in charge of escorting . Input: northern states did not allow slavery , but escaped slaves were returned to their owners as property , so henson would have to flee to canada to be free . Reference: states in the north did not allow slavery . DRESS-LS: southern states did not allow slavery , but the guatemalans were returned to their owners as property . Baseline: he slaves were returned to their owners as property . Multi-Task: northern states did not allow slavery . Output Examples Fig. 4 shows two output examples comparing DRESS-LS, pointer baseline, and multi-task models (and reference). We see that our multi-task model simplifies the input appropriately (similar extent to reference) while also keeping reasonably important information from the source. The pointer baseline and the DRESS-LS models simplify to a lesser extent and keep much more of the original input (as also suggested by our match-with-input investigation in Table 4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 834, |
|
"end": 840, |
|
"text": "Fig. 4", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 1285, |
|
"end": 1292, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Soft-vs. Hard-Sharing", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We presented a multi-level, multi-task learning approach to incorporate natural language inference and paraphrasing knowledge into sentence simplification models, via soft sharing at higher-level semantic and lower-level lexico-syntactic levels. We also introduced a multi-armed bandits approach for learning a dynamic mixing ratio of tasks. We demonstrated strong simplification improvements on three standard datasets via automatic and human evaluation, and also discussed several ablation and analysis studies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "convergence, and use these models to initialize the multi-task models, and set the learning rate to 1/10 of its original default value (0.001). We set the decay rate \u03b1 in the bandit controller to be 0.3. We use the negative validation loss as the reward at each sampling step to the bandit algorithm. The validation loss is divided by two as a smoothing technique. 16 All our soft/hard and layer-specific sharing decisions (Sec. 6) were made on the validation/dev set. We follow previous work (Zhang and Lapata, 2017) in their pre-processing and post-processing of named entities. We capped vocabulary size to be 50K and replaced less frequent words with UNK token. 17 Unlike previous work (Zhang and Lapata, 2017) , we do not use UNK-replacement at test time, but instead rely on our pointer-copy mechanism. We use beam search with beam size of 5. All other details provided in our released code.", |
|
"cite_spans": [ |
|
{ |
|
"start": 365, |
|
"end": 367, |
|
"text": "16", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 517, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
}, |
|
{ |
|
"start": 690, |
|
"end": 714, |
|
"text": "(Zhang and Lapata, 2017)", |
|
"ref_id": "BIBREF75" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
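{
"text": "A minimal sketch of the reward computation described above (function and attribute names are hypothetical):\n\ndef bandit_reward(model, dev_batches, smoothing=2.0):\n    # Reward = negative validation loss; dividing by a constant (here 2)\n    # smooths the reward, analogous to a softmax temperature (footnote 16).\n    val_loss = sum(model.loss(batch) for batch in dev_batches) / len(dev_batches)\n    return -val_loss / smoothing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},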
|
{ |
|
"text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.1 All code and pretrained models available at: https://github.com/HanGuo97/MultitaskSimplification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We found that sharing higher-level semantic layers (farther from input/output), i.e., encoder layer 2, attention, and decoder layer 1 (inFig. 1), to work well. See Sec. 6 for ablations on alternative layer sharing methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We found that sharing lower-level lexico-syntactic layers (closer to input/output), i.e., encoder layer 1 and decoder layer 2 (inFig. 1), to work well. See Sec. 6 for ablations on alternative layer sharing methods.4 Note that even though entailment just tries to generate shorter, logical-subset sub-sentences, the overall saliency and quality of the simplified output is still balanced because the entailment task is flexibly (softly) shared with the paraphrasing and sentence simplification tasks, and the final model mixture is chosen based on simplification task metrics (see output examples inFig. 4where our multi-task model generates entailed sentences with important information).5 We set n s to 10 to reduce variance of estimates, i.e., the bandit controller's task/action will be trained for 10 mini-batches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We tried decaying the temperature variable, but we didn't find this to very beneficial, so we instead fix this to 1.0.7 We chose ParaNMT over other paraphrase datasets (e.g. the phrase-to-phrase PPDB dataset(Ganitkevitch et al., 2013)), because ParaNMT is a sentence-to-sentence dataset and hence is a more natural fit for sentence-level multi-task RNN-layer sharing with our sentence-to-sentence simplification task.8 We use the JOSHUA package for calculating SARI and BLEU score followingZhang and Lapata (2017) andXu et al. (2016). Our FKGL implementation is based on https://github.com/mmautner/readability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that our multi-task model is stat. equal to our pointer baseline on the overall-average score, showing the available trade-off between systems that simplify conservatively vs. strongly, based on one's desired downstream task application. Also refer to the high 'match-with-input' issue with the adequacy metric discussed next.13 Since Newsela is considered as the more representative dataset for sentence simplification with lesser noise and human quality(Xu et al., 2015;Zhang and Lapata, 2017), we conduct our ablation studies on this dataset, but we observed similar patterns on the other two datasets as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this entailment analysis, we use ground-truth output as premise instead of input source, because: (1) entailment w.r.t. input source can give artificially high scores even when the output doesn't simplify enough and just copies the source (see the discussion in Sec. 5 and Table 4); (2) By transitivity, if output is entailed by ground-truth, which in turn is entailed by source, then output should also be entailed by source (plus, we want the output to be closer to ground-truth than to input source).15 We choose the last 10% to avoid the noisy action-value estimates at the start of the training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This constant serves the same purpose as the temperature variable in the softmax function.17 We measured the vocabulary overlap between the main and auxiliary tasks, and found that \"word-form-overlap\" (percentage of unique word types in auxiliary task that also appear in the main task) to be 40.7% (entailment) and 41.0% (paraphrase), and \"word-count-overlap\" (percentage of words in auxiliary task that also appear in the main task, based on token frequency counts) to be 95.2% (entailment) and 94.9% (paraphrase). Hence, this suggests that only rare words (which make up for very few counts) aren't considered in training process, and our pointer mechanism handles these extra UNK words by copying the actual word-form from the source to the output.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their helpful comments (and Xingxing Zhang for providing preprocessed datasets). This work was supported by DARPA (YFA17-D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, and NVidia GPU awards. The views contained in this article are those of the authors and not of the funding agency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "All LSTMs use hidden state size of 256. We train word vectors with embedding size of 128 with random initialization. We use gradient clipped norm of 2.0. Our model selection (tuning) criteria is based on the average of our 3 metrics (SARI, BLEU, 1/FKGL) on the validation set. The mixing ratios are \u03b1 ss :\u03b1 eg :\u03b1 pp = 6:1:3 for Newsela, 6:1:3 for WikiSmall, and 7:2:1 for WikiLarge. The soft-sharing coefficient \u03bb is set such that we balance the cross-entropy and regularization losses (at convergence), which is 5 \u00d7 10 \u22126 for Newsela, 1 \u00d7 10 \u22126 WikiSmall, and 1 \u00d7 10 \u22125 for WikiLarge. We train models from scratch for Newsela and WikiSmall (using Adam (Kingma and Ba, 2014) optimizer with learning rate of 0.002 and 0.0015, respectively). However, because of the large size and computation overhead for WikiLarge, we first pre-train both main and auxiliary models on their own domain until they reach 90%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Training Details", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An analysis of crowdsourced text simplifications", |
|
"authors": [ |
|
{ |
|
"first": "Marcelo", |
|
"middle": [], |
|
"last": "Amancio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marcelo Amancio and Lucia Specia. 2014. An analysis of crowdsourced text simplifications. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 123-130.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Finite-time analysis of the multiarmed bandit problem", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolo", |
|
"middle": [], |
|
"last": "Cesa-Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Fischer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Machine learning", |
|
"volume": "47", |
|
"issue": "2-3", |
|
"pages": "235--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. 2002a. Finite-time analysis of the multiarmed bandit problem. Machine learning, 47(2-3):235-256.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The nonstochastic multiarmed bandit problem", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolo", |
|
"middle": [], |
|
"last": "Cesa-Bianchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Freund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Schapire", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "SIAM journal on computing", |
|
"volume": "32", |
|
"issue": "1", |
|
"pages": "48--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. 2002b. The nonstochastic multiarmed bandit problem. SIAM journal on computing, 32(1):48-77.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Extracting paraphrases from a parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kathleen R Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th annual meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Kathleen R McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceed- ings of the 39th annual meeting on Association for Computational Linguistics, pages 50-57. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "What do neural machine translation models learn about morphology?", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nadir", |
|
"middle": [], |
|
"last": "Durrani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fahim", |
|
"middle": [], |
|
"last": "Dalvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Sajjad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Glass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.03471" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? arXiv preprint arXiv:1704.03471.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semantic parsing via paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL (1)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1415--1425", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In ACL (1), pages 1415-1425.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A large annotated corpus for learning natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Samuel R Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends R in Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Bubeck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolo", |
|
"middle": [], |
|
"last": "Cesa-Bianchi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1--122", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00e9bastien Bubeck, Nicolo Cesa-Bianchi, et al. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends R in Machine Learning, 5(1):1-122.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Simplifying text for language-impaired readers", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "John", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guido", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darren", |
|
"middle": [], |
|
"last": "Minnen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Pearce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siobhan", |
|
"middle": [], |
|
"last": "Canning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tait", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "269--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John A Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, and John Tait. 1999. Simplify- ing text for language-impaired readers. In EACL, pages 269-270.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Learning to learn", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95-133. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Motivations and methods for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Chandrasekar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Doran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bangalore", |
|
"middle": [], |
|
"last": "Srinivas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the 16th conference on Computational linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1041--1044", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raman Chandrasekar, Christine Doran, and Bangalore Srinivas. 1996. Motivations and methods for text sim- plification. In Proceedings of the 16th conference on Computational linguistics-Volume 2, pages 1041-1044. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An empirical evaluation of thompson sampling", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihong", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2249--2257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Chapelle and Lihong Li. 2011. An empirical evaluation of thompson sampling. In Advances in neural information processing systems, pages 2249-2257.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Enhanced lstm for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Qian", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhen-Hua", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Si", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Inkpen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1657--1668", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1657-1668.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Models for sentence compression: A comparison across domains, training requirements and evaluation measures", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "377--384", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Clarke and Mirella Lapata. 2006. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 377-384. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th international conference on Machine learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160-167. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning to simplify sentences using wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Coster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the workshop on monolingual text-to-text generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Coster and David Kauchak. 2011. Learning to simplify sentences using wikipedia. In Proceedings of the workshop on monolingual text-to-text generation, pages 1-9. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The pascal recognising textual entailment challenge", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Ido Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Glickman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177-190. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Simplifying natural language for aphasic readers", |
|
"authors": [ |
|
{ |
|
"first": "Siobhan", |
|
"middle": [ |
|
"Lucy" |
|
], |
|
"last": "Devlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siobhan Lucy Devlin. 1999. Simplifying natural language for aphasic readers. Ph.D. thesis, University of Sunderland.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "An introduction to the bootstrap", |
|
"authors": [ |
|
{ |
|
"first": "Bradley", |
|
"middle": [], |
|
"last": "Efron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An evaluation of syntactic simplification rules for people with autism", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Orasan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iustin", |
|
"middle": [], |
|
"last": "Dornescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "131--140", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 131-140.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Paraphrase-driven learning for open question answering", |
|
"authors": [ |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Fader", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1608--1618", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answer- ing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1608-1618.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Dependency tree based sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Fifth International Natural Language Generation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Filippova and Michael Strube. 2008. Dependency tree based sentence compression. In Proceedings of the Fifth International Natural Language Generation Conference, pages 25-32. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Ppdb: The paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In HLT-NAACL, pages 758-764.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Fast r-cnn", |
|
"authors": [ |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1440--1448", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440-1448.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Simplifying lexical simplification: Do we need simplified corpora", |
|
"authors": [ |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glava\u0161", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sanja\u0161tajner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "63--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goran Glava\u0161 and Sanja\u0160tajner. 2015. Simplifying lexical simplification: Do we need simplified corpora. In Pro- ceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 63-68.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Automated curriculum learning for neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Graves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Marc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Bellemare", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Menick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Munos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.03003" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. 2017. Automated cur- riculum learning for neural networks. arXiv preprint arXiv:1704.03003.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Soft, layer-specific multi-task summarization with entailment and question generation", |
|
"authors": [ |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Pasunuru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018. Soft, layer-specific multi-task summarization with entailment and question generation. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Methods for using textual entailment in open-domain question answering", |
|
"authors": [ |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "905--912", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanda Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain question an- swering. In ACL, pages 905-912.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A joint many-task model: Growing a neural network for multiple nlp tasks", |
|
"authors": [ |
|
{ |
|
"first": "Kazuma", |
|
"middle": [], |
|
"last": "Hashimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple nlp tasks. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Learning a lexical simplifier using wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Colby", |
|
"middle": [], |
|
"last": "Horn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cathryn", |
|
"middle": [], |
|
"last": "Manduca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL (2)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "458--463", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colby Horn, Cathryn Manduca, and David Kauchak. 2014. Learning a lexical simplifier using wikipedia. In ACL (2), pages 458-463.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Vime: Variational information maximizing exploration", |
|
"authors": [ |
|
{ |
|
"first": "Rein", |
|
"middle": [], |
|
"last": "Houthooft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Schulman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Turck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pieter", |
|
"middle": [], |
|
"last": "Abbeel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1109--1117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. 2016. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pages 1109-1117.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Aligning sentences from standard wikipedia to simple wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Hwang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "211--217", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning sentences from standard wikipedia to simple wikipedia. In NAACL-HLT, pages 211-217.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Text simplification for reading assistance: a project note", |
|
"authors": [ |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tetsuro", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryu", |
|
"middle": [], |
|
"last": "Iida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoya", |
|
"middle": [], |
|
"last": "Iwakura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the second international workshop on Paraphrasing", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "9--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kentaro Inui, Atsushi Fujita, Tetsuro Takahashi, Ryu Iida, and Tomoya Iwakura. 2003. Text simplification for reading assistance: a project note. In Proceedings of the second international workshop on Paraphrasing- Volume 16, pages 9-16. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment", |
|
"authors": [ |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Jimenez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Duenas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Baquero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Gelbukh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "732--742", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergio Jimenez, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios B\u00e1tiz, and Av Mendiz\u00e1bal. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entail- ment. In In SemEval, pages 732-742.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Google's multilingual neural machine translation system: enabling zero-shot translation", |
|
"authors": [ |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Quoc", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Thorat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Vi\u00e9gas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.04558" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fer- nanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2016. Google's multilingual neural machine translation system: enabling zero-shot translation. arXiv preprint arXiv:1611.04558.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Reinforcement learning: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Leslie", |
|
"middle": [], |
|
"last": "Pack Kaelbling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew W", |
|
"middle": [], |
|
"last": "Michael L Littman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Journal of artificial intelligence research", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "237--285", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. 1996. Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237-285.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "One model to learn them all", |
|
"authors": [ |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.05137" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lukasz Kaiser, Aidan N Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. arXiv preprint arXiv:1706.05137.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Verb paraphrase based on case frame alignment", |
|
"authors": [ |
|
{ |
|
"first": "Nobuhiro", |
|
"middle": [], |
|
"last": "Kaji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohash", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "215--222", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nobuhiro Kaji, Daisuke Kawahara, Sadao Kurohash, and Satoshi Sato. 2002. Verb paraphrase based on case frame alignment. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 215-222. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Improving text simplification language modeling using unsimplified text data", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL (1)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1537--1546", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In ACL (1), pages 1537-1546.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel", |
|
"authors": [ |
|
{ |
|
"first": "Robert P Fishburne", |
|
"middle": [], |
|
"last": "Peter Kincaid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Jr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brad", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Rogers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chissom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new read- ability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Technical Training Command Millington TN Research Branch.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Text simplification for information-seeking applications", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Beata Beigman Klebanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Lecture Notes in Computer Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "735--747", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beata Beigman Klebanov, Kevin Knight, and Daniel Marcu. 2004. Text simplification for information-seeking applications. Lecture Notes in Computer Science, pages 735-747.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Summarization beyond sentence extraction: A probabilistic approach to sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Artificial Intelligence", |
|
"volume": "139", |
|
"issue": "1", |
|
"pages": "91--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139(1):91-107.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical ma- chine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Illinois-lh: A denotational and distributional approach to semantics", |
|
"authors": [ |
|
{ |
|
"first": "Alice", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proc. SemEval", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alice Lai and Julia Hockenmaier. 2014. Illinois-lh: A denotational and distributional approach to semantics. Proc. SemEval, 2:5.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. Text Summarization Branches Out.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Multi-task sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.06114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Hybrid simplification using deep semantics and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shashi", |
|
"middle": [], |
|
"last": "Narayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Gardent", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Computer-intensive methods for testing hypotheses", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Eric W Noreen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Bleu: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "A decomposable attention model for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Ankur", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oscar", |
|
"middle": [], |
|
"last": "T\u00e4ckstr\u00f6m", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.01933" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankur P Parikh, Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, and Jakob Uszkoreit. 2016. A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Multi-task video captioning with video and entailment generation", |
|
"authors": [ |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Pasunuru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramakanth Pasunuru and Mohit Bansal. 2017. Multi-task video captioning with video and entailment generation. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Towards improving abstractive summarization via entailment generation", |
|
"authors": [ |
|
{ |
|
"first": "Ramakanth", |
|
"middle": [], |
|
"last": "Pasunuru", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramakanth Pasunuru, Han Guo, and Mohit Bansal. 2017. Towards improving abstractive summarization via entailment generation. In NFiS@EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Text simplification for language learners: a corpus analysis", |
|
"authors": [ |
|
{ |
|
"first": "Sarah", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Petersen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Workshop on Speech and Language Technology in Education", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sarah E Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Workshop on Speech and Language Technology in Education.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "The impact of lexical simplification by verbal paraphrases for people with and without dyslexia", |
|
"authors": [ |
|
{ |
|
"first": "Luz", |
|
"middle": [], |
|
"last": "Rello", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [], |
|
"last": "Baeza-Yates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Horacio", |
|
"middle": [], |
|
"last": "Saggion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "501--512", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luz Rello, Ricardo Baeza-Yates, and Horacio Saggion. 2013. The impact of lexical simplification by verbal paraphrases for people with and without dyslexia. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 501-512. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Sluice networks: Learning what to share between loosely related tasks", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joachim", |
|
"middle": [], |
|
"last": "Bingel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Augenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1705.08142" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Ruder, Joachim Bingel, Isabelle Augenstein, and Anders S\u00f8gaard. 2017. Sluice networks: Learning what to share between loosely related tasks. arXiv preprint arXiv:1705.08142.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Get to the point: Summarization with pointergenerator networks", |
|
"authors": [ |
|
{ |
|
"first": "Abigail", |
|
"middle": [], |
|
"last": "See", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.04368" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. arXiv preprint arXiv:1704.04368.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "A survey of automated text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Shardlow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "International Journal of Advanced Computer Science and Applications", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "58--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Shardlow. 2014. A survey of automated text simplification. International Journal of Advanced Computer Science and Applications, 4(1):58-70.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Online multi-task learning using active sampling", |
|
"authors": [ |
|
{ |
|
"first": "Sahil", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Balaraman", |
|
"middle": [], |
|
"last": "Ravindran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sahil Sharma and Balaraman Ravindran. 2017. Online multi-task learning using active sampling. CoRR, abs/1702.06053.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Syntactic simplification and text cohesion", |
|
"authors": [ |
|
{ |
|
"first": "Advaith", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Research on Language and Computation", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "77--109", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Advaith Siddharthan. 2006. Syntactic simplification and text cohesion. Research on Language and Computation, 4(1):77-109.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "A survey of research on text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Advaith", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "ITL-International Journal of Applied Linguistics", |
|
"volume": "165", |
|
"issue": "2", |
|
"pages": "259--298", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Journal of Applied Linguistics, 165(2):259-298.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Translating from complex to simplified sentences. Computational Processing of the Portuguese Language", |
|
"authors": [ |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucia Specia. 2010. Translating from complex to simplified sentences. Computational Processing of the Por- tuguese Language, pages 30-39.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "A deeper exploration of the standard pb-smt approach to text simplification and its evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "\u0160tajner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "B\u00e9chara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Horacio", |
|
"middle": [], |
|
"last": "Saggion", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sanja\u0160tajner, Hannah B\u00e9chara, and Horacio Saggion. 2015. A deeper exploration of the standard pb-smt approach to text simplification and its evaluation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF64": { |
|
"ref_id": "b64", |
|
"title": "Reinforcement learning: An introduction", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Barto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard S Sutton and Andrew G Barto. 1998. Reinforcement learning: An introduction, volume 1. MIT press Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF65": { |
|
"ref_id": "b65", |
|
"title": "Sentence simplification for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vickrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "344--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In ACL, pages 344-352.", |
|
"links": null |
|
}, |
|
"BIBREF66": { |
|
"ref_id": "b66", |
|
"title": "Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting and Kevin Gimpel. 2017a. Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. CoRR, abs/1711.05732.", |
|
"links": null |
|
}, |
|
"BIBREF67": { |
|
"ref_id": "b67", |
|
"title": "Revisiting recurrent networks for paraphrastic sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting and Kevin Gimpel. 2017b. Revisiting recurrent networks for paraphrastic sentence embeddings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF68": { |
|
"ref_id": "b68", |
|
"title": "A broad-coverage challenge corpus for sentence understanding through inference", |
|
"authors": [ |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Nangia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel R", |
|
"middle": [], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.05426" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.", |
|
"links": null |
|
}, |
|
"BIBREF69": { |
|
"ref_id": "b69", |
|
"title": "Learning to simplify sentences with quasi-synchronous grammar and integer programming", |
|
"authors": [ |
|
{ |
|
"first": "Kristian", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "409--420", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristian Woodsend and Mirella Lapata. 2011. Learning to simplify sentences with quasi-synchronous grammar and integer programming. In Proceedings of the conference on empirical methods in natural language process- ing, pages 409-420. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF70": { |
|
"ref_id": "b70", |
|
"title": "Text rewriting improves semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Kristian", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "133--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristian Woodsend and Mirella Lapata. 2014. Text rewriting improves semantic role labeling. Journal of Artificial Intelligence Research, 51:133-164.", |
|
"links": null |
|
}, |
|
"BIBREF71": { |
|
"ref_id": "b71", |
|
"title": "Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Sander", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual ma- chine translation. In ACL.", |
|
"links": null |
|
}, |
|
"BIBREF72": { |
|
"ref_id": "b72", |
|
"title": "Problems in current text simplification research: New data can help", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association of Computational Linguistics", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "283--297", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association of Computational Linguistics, 3(1):283-297.", |
|
"links": null |
|
}, |
|
"BIBREF73": { |
|
"ref_id": "b73", |
|
"title": "Optimizing statistical machine translation for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanze", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "401--415", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.", |
|
"links": null |
|
}, |
|
"BIBREF74": { |
|
"ref_id": "b74", |
|
"title": "Visualizing and understanding convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "European conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "818--833", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818-833. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF75": { |
|
"ref_id": "b75", |
|
"title": "Sentence simplification with deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Xingxing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1703.10931" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. arXiv preprint arXiv:1703.10931.", |
|
"links": null |
|
}, |
|
"BIBREF76": { |
|
"ref_id": "b76", |
|
"title": "Exploiting parallel news streams for unsupervised event extraction", |
|
"authors": [ |
|
{ |
|
"first": "Congle", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "117--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Congle Zhang, Stephen Soderland, and Daniel S Weld. 2015. Exploiting parallel news streams for unsupervised event extraction. Transactions of the Association for Computational Linguistics, 3:117-129.", |
|
"links": null |
|
}, |
|
"BIBREF77": { |
|
"ref_id": "b77", |
|
"title": "A monolingual tree-based translation model for sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Zhemin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Delphine", |
|
"middle": [], |
|
"last": "Bernhard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd international conference on computational linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1353--1361", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd international conference on computational linguistics, pages 1353-1361. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Output examples comparing DRESS-LS, our pointer baseline, and multi-task model.", |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"text": "Newsela (FKGL: lower is better).", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>WIKISMALL</td><td/><td/><td>WIKILARGE</td><td/></tr><tr><td>Models</td><td colspan=\"6\">BLEU FKGL SARI BLEU FKGL SARI</td></tr><tr><td/><td/><td colspan=\"2\">PREVIOUS WORK</td><td/><td/><td/></tr><tr><td>PBMT-R</td><td>46.31</td><td>11.42</td><td>15.97</td><td>81.11</td><td>8.33</td><td>38.56</td></tr><tr><td>Hybrid</td><td>53.94</td><td>9.21</td><td>30.46</td><td>48.97</td><td>4.56</td><td>31.40</td></tr><tr><td>SBMT-SARI</td><td>-</td><td>-</td><td>-</td><td>73.08</td><td>7.29</td><td>39.96</td></tr><tr><td>EncDecA</td><td>47.93</td><td>11.35</td><td>13.61</td><td>88.85</td><td>8.41</td><td>35.66</td></tr><tr><td>DRESS</td><td>34.53</td><td>7.48</td><td>27.48</td><td>77.18</td><td>6.58</td><td>37.08</td></tr><tr><td>DRESS-LS</td><td>36.32</td><td>7.55</td><td>27.24</td><td>80.12</td><td>6.62</td><td>37.27</td></tr><tr><td/><td/><td colspan=\"2\">OUR MODELS</td><td/><td/><td/></tr><tr><td>Baseline \u2297</td><td>36.18</td><td>7.69</td><td>25.67</td><td>82.37</td><td>7.84</td><td>36.68</td></tr><tr><td>\u2297+Ent+Par</td><td>29.70</td><td>6.93</td><td>28.24</td><td>81.49</td><td>7.41</td><td>37.45</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"text": "WikiSmall/Large results (FKGL: lower is better).", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF5": { |
|
"text": "Results on dynamic vs. static mixing ratio (FKGL: lower is better).", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF8": { |
|
"text": "Analysis: Entailment and paraphrase classification results (avg. probability scores as %) on Newsela.", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Models</td><td colspan=\"2\">Deletions Additions</td></tr><tr><td>Hybrid</td><td>95.18</td><td>0.000</td></tr><tr><td>DRESS-LS</td><td>85.37</td><td>0.047</td></tr><tr><td>Pointer Baseline</td><td>88.91</td><td>0.026</td></tr><tr><td>3-way Multi-Task</td><td>97.54</td><td>0.049</td></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"text": "Analysis: SARI's sub-operation scores on Newsela dataset.", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Models</td><td colspan=\"3\">BLEU FKGL SARI</td></tr><tr><td>Final (High Ent + Low PP)</td><td>11.86</td><td>1.38</td><td>32.98</td></tr><tr><td>Both lower-layer</td><td>11.94</td><td>1.47</td><td>31.92</td></tr><tr><td>Both higher-layer</td><td>12.26</td><td>1.38</td><td>32.02</td></tr><tr><td>Swapped (Low Ent + High PP)</td><td>21.64</td><td>2.97</td><td>29.07</td></tr><tr><td>Hard-sharing</td><td>13.01</td><td>1.38</td><td>32.36</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |