|
{ |
|
"paper_id": "R19-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:03:10.976664Z" |
|
}, |
|
"title": "Sentence Simplification for Semantic Role Labelling and Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Research Institute in Information and Language Processing University of Wolverhampton United Kingdom", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Or\u0203san", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Research Institute in Information and Language Processing University of Wolverhampton United Kingdom", |
|
"institution": "", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we report on the extrinsic evaluation of an automatic sentence simplification method with respect to two NLP tasks: semantic role labelling (SRL) and information extraction (IE). The paper begins with our observation of challenges in the intrinsic evaluation of sentence simplification systems, which motivates the use of extrinsic evaluation of these systems with respect to other NLP tasks. We describe the two NLP systems and the test data used in the extrinsic evaluation, and present arguments and evidence motivating the integration of a sentence simplification step as a means of improving the accuracy of these systems. Our evaluation reveals that their performance is improved by the simplification step: the SRL system is better able to assign semantic roles to the majority of the arguments of verbs and the IE system is better able to identify fillers for all IE template slots.", |
|
"pdf_parse": { |
|
"paper_id": "R19-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we report on the extrinsic evaluation of an automatic sentence simplification method with respect to two NLP tasks: semantic role labelling (SRL) and information extraction (IE). The paper begins with our observation of challenges in the intrinsic evaluation of sentence simplification systems, which motivates the use of extrinsic evaluation of these systems with respect to other NLP tasks. We describe the two NLP systems and the test data used in the extrinsic evaluation, and present arguments and evidence motivating the integration of a sentence simplification step as a means of improving the accuracy of these systems. Our evaluation reveals that their performance is improved by the simplification step: the SRL system is better able to assign semantic roles to the majority of the arguments of verbs and the IE system is better able to identify fillers for all IE template slots.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentence simplification is one aspect of text simplification, which is concerned with the conversion of texts into a more accessible form. In many cases, text simplification is performed to facilitate subsequent human or machine text processing. This may include processing for human reading comprehension (Canning, 2002; Scarton et al., 2017; Or\u0203san et al., 2018) or for NLP tasks such as dependency parsing (Jel\u00ednek, 2014) , information extraction (Jonnalagadda et al., 2009; Evans, 2011; Peng et al., 2012) , semantic role labelling (Vickrey and Koller, 2008) , and multidocument summarisation (Blake et al., 2007; Siddharthan et al., 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 306, |
|
"end": 321, |
|
"text": "(Canning, 2002;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 322, |
|
"end": 343, |
|
"text": "Scarton et al., 2017;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 364, |
|
"text": "Or\u0203san et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 424, |
|
"text": "(Jel\u00ednek, 2014)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 477, |
|
"text": "(Jonnalagadda et al., 2009;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 490, |
|
"text": "Evans, 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 509, |
|
"text": "Peng et al., 2012)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 562, |
|
"text": "(Vickrey and Koller, 2008)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 617, |
|
"text": "(Blake et al., 2007;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 618, |
|
"end": 643, |
|
"text": "Siddharthan et al., 2004)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In previous research, Caplan and Waters (1999) noted a correlation between sentence comprehension difficulty for human readers and the numbers of propositions expressed in the sentences being read. 1 Evans and Or\u0203san (2019) presented an iterative rule-based approach to sentence simplification which is intended to reduce the per sentence propositional density of input texts by converting sentences which contain compound clauses and complex NPs 2 into sequences of simpler sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 46, |
|
"text": "Caplan and Waters (1999)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 198, |
|
"end": 199, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 223, |
|
"text": "Evans and Or\u0203san (2019)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Evaluation of text simplification systems is difficult, especially when such evaluations need to be conducted repeatedly for development purposes and cost is a critical factor. In general, the choice of evaluation method depends on the purpose of the simplification task. Various types of evaluation are currently used, but these are problematic. In previous work, evaluation of sentence simplification systems (including Evans and Or\u0203san's (2019) system, which is extrinsically evaluated in our current paper) has relied on one or more of three main approaches: the use of overlap metrics such as Levenshtein distance (Levenshtein, 1966) , BLEU score (Papineni et al., 2002) and SARI (Xu et al., 2016) to compare system output with human simplified texts (e.g. Wubben et al., 2012; Glavas and Stajner, 2013; Vu et al., 2014) ; automated assessments of the readability of system output (Wubben et al., 2012; Glavas and Stajner, 2013; Vu et al., 2014) ; and surveys of human opinions about the grammaticality, readability, and meanings of system output (Angrosh et al., 2014; Wubben et al., 2012; Feblowitz and Kauchak, 2013) . In previous work, researchers have also used methods such as 1 Propositions are atomic statements that express simple factual claims (Jay, 2003) . They are considered the basic units involved in the understanding and retention of text (Kintsch and Welsch, 1991) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 422, |
|
"end": 447, |
|
"text": "Evans and Or\u0203san's (2019)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 638, |
|
"text": "(Levenshtein, 1966)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 675, |
|
"text": "(Papineni et al., 2002)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 702, |
|
"text": "(Xu et al., 2016)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 782, |
|
"text": "Wubben et al., 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 783, |
|
"end": 808, |
|
"text": "Glavas and Stajner, 2013;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 809, |
|
"end": 825, |
|
"text": "Vu et al., 2014)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 886, |
|
"end": 907, |
|
"text": "(Wubben et al., 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 908, |
|
"end": 933, |
|
"text": "Glavas and Stajner, 2013;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 934, |
|
"end": 950, |
|
"text": "Vu et al., 2014)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 1052, |
|
"end": 1074, |
|
"text": "(Angrosh et al., 2014;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1075, |
|
"end": 1095, |
|
"text": "Wubben et al., 2012;", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1096, |
|
"end": 1124, |
|
"text": "Feblowitz and Kauchak, 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1188, |
|
"end": 1189, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1260, |
|
"end": 1271, |
|
"text": "(Jay, 2003)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1362, |
|
"end": 1388, |
|
"text": "(Kintsch and Welsch, 1991)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 NPs which contain finite nominally bound relative clauses. eye tracking (Klerke et al., 2015; Timm, 2018) , and reading comprehension testing (Or\u0203san et al., 2018) to evaluate text simplification systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 95, |
|
"text": "(Klerke et al., 2015;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 96, |
|
"end": 107, |
|
"text": "Timm, 2018)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 165, |
|
"text": "(Or\u0203san et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There are several challenges in these approaches to evaluation. The development of gold standards in text simplification is problematic because they are difficult to produce and numerous variant simplifications are acceptable. As a result, existing metrics may not accurately reflect the usefulness of the simplification system being evaluated. Even when there are detailed guidelines for the simplification task, there is still likely to be a variety of means by which a human might simplify a text to produce a reference simplification. Further, due to the difficulty of the human simplification task, it may be that evaluation measures such as BLEU and SARI are unable to exploit a sufficiently large set of reference simplifications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Evaluation of text simplification methods using automatic readability metrics is problematic because the extent to which all but a handful of readability metrics correlate with human reading comprehension is uncertain. Evaluation via opinion surveys of readers is difficult because participants may have varying expectations about the upper and lower limits of sentence complexity, making responses to Likert items unreliable. Participants also vary in terms of linguistic ability and personal background knowledge. These variables, which affect reading behaviour and may affect responses to opinion surveys, are difficult to control.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "When using methods such as eye tracking to evaluate text simplification, previous work has shown that differences in reading behaviour depend on participants' reading goals (Yeari et al., 2015) . This variable is usually controlled by asking participants to respond to text-related opinion surveys or multiple choice reading comprehension questions. One adverse effect of this is that these evaluations may be of limited validity when considering the usefulness of system output for other purposes. While we may learn whether a sentence simplification method improves participants' performance in answering short reading comprehension questions, it is not clear whether similar benefits would be obtained in terms of readers' abilities to be entertained by the text or to understand it well enough to be able to summarise it for friends.", |
|
"cite_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 193, |
|
"text": "(Yeari et al., 2015)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Given that text simplification is usually made for a particular purpose, the evaluation method should offer insights into the suitability of the text simplification system for this purpose. Extrinsic evaluation offers the possibility of meeting this requirement. Text simplification has also been claimed to improve automatic text processing (e.g. Vickrey and Koller, 2008; Evans, 2011; Hasler et al., 2017) , though the evidence for this has been fairly limited. In this paper, we explore whether syntactic simplification can facilitate two NLP tasks: semantic role labelling (SRL) and information extraction (IE).", |
|
"cite_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 373, |
|
"text": "Vickrey and Koller, 2008;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 386, |
|
"text": "Evans, 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 407, |
|
"text": "Hasler et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Section 2 of this paper, we present an overview of previous related work. In Section 3, we present an overview of Evans and Or\u0203san's (2019) method for sentence simplification, which is the simplification method used in our current paper. In Section 4, we present each of the extrinsic evaluation experiments based on SRL (Section 4.1) and IE (Section 4.2). Each of these sections describes the task, the test data used, the NLP system whose output is used for extrinsic evaluation of the sentence simplification system, our motivation for considering that accuracy of the NLP system may be improved via a preprocessing step in which sentence simplification is performed, the evaluation method, our results, and a discussion of the results. In Section 5, we draw conclusions and consider directions for future work. Chandrasekar and Srinivas (1997) hypothesised that approaches to sentence simplification may evoke improvements in subsequent text processing tasks. In previous work, researchers have sought to determine whether or not a preprocessing step based on text simplification can facilitate subsequent natural language processing. In the current paper, our concern is to investigate the impact of a system simplifying sentences which contain compound clauses. Hogan (2007) and Collins (1999) observed that, for dependency parsers, dependencies involving coordination are identified with by far the worst accuracy of any dependency type (F 1 -score \u2248 61%). This is one factor motivating our research in this direction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 818, |
|
"end": 850, |
|
"text": "Chandrasekar and Srinivas (1997)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1271, |
|
"end": 1283, |
|
"text": "Hogan (2007)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1288, |
|
"end": 1302, |
|
"text": "Collins (1999)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Sentence simplification has also been applied as a preprocessing step in neural machine translation and hierarchical machine translation (Hasler et al., 2017) . In their approach, the approach to sentence simplification was sentence compression. One contribution of our current paper is an investigation of the use of an information preserving ap-proach to sentence simplification as a preprocessing step in the NLP applications. Vickrey and Koller (2008) applied their sentence simplification method to improve performance on the CoNLL-2005 shared task on SRL. 3 For sentence simplification, their method exploits full syntactic parsing with a set of 154 parse tree transformations and a machine learning component to determine which transformation operations to apply to an input sentence. They find that a SRL system based on a syntactic analysis of automatically simplified versions of input sentences outperforms a strong baseline. In their evaluation, Vickrey and Koller (2008) focus on the overall performance of their SRL system rather than on the particular contribution made by the sentence simplification method. As noted earlier, in our current paper, we isolate sentence simplification as a preprocessing step and investigate its impact on subsequent NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 158, |
|
"text": "(Hasler et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 455, |
|
"text": "Vickrey and Koller (2008)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 563, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Evans and Or\u0203san 2019presented an iterative rule-based method for sentence simplification based on a shallow syntactic analysis step. Their system transforms input sentences containing compound clauses and complex NPs into sequences of simpler sentences that do not contain these types of syntactic complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The first stage of sentence simplification is a shallow syntactic analysis step which tags textual markers of syntactic complexity, referred to as signs, with information about the syntactic constituents that they coordinate or of which they are boundaries. The signs of syntactic complexity are a set of conjunctions, complementisers, wh-words, punctuation marks, and bigrams consisting of a punctuation mark followed by a lexical sign. In the analysis step, syntactic constituents are not identified. It is only the signs which are tagged. The automatic sign tagger was developed by Dornescu et al. (2013) . In their scheme, clause coordinators are tagged CEV 4 while the left boundaries of subordinate clauses are tagged SSEV. 5 After shallow syntactic analysis of the sentence, an iterative algorithm is applied to sentences containing compound clauses and complex NPs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 585, |
|
"end": 607, |
|
"text": "Dornescu et al. (2013)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 731, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The algorithm (Algorithm 1) integrates a sentence transformation function which implements the transformation schemes listed in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 135, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Input: Sentence s 0 , containing at least one sign of syntactic complexity of class c, where c \u2208 {CEV, SSEV}. Output: The set of sentences A derived from s 0 , that have reduced propositional density", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": ". 1 The empty stack W ; 2 O \u2190 \u2205; 3 push(s 0 , W ); 4 while isEmpty(W ) is f alse do 5 pop(s i , W ); 6 if s i contains a sign of syntactic complexity of class c (specified in Input) then 7 s i 1 , s i 2 \u2190 transf orm c (s i ); 8 push(s i 1 , W ); 9 push(s i 2 , W ); 10 else 11 O \u2190 O \u222a {s i } 12 end", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "13 end Algorithm 1: Sentence simplification algorithm", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In its original implementation, the transform function (line 7 of Algorithm 1) included 28 sentence simplification rules to implement one transformation scheme simplifying compound clauses and 125 rules implementing three transformation schemes simplifying sentences which contain complex NPs. Evaluation of the method revealed that simplification of sentences containing complex NPs was significantly less reliable than simplification of sentences containing compound clauses. For this reason, in the extrinsic evaluations presented in this paper, we deactivated the rules simplifying sentences that contain complex NPs. Each of the remaining implemented rules includes a rule activation pattern which, when detected in the input sentence, triggers an associated transformation operation. Table 1 presents the transformation scheme used to simplify compound clauses and an example of the sentence transformation that it makes. Input sentences are transformed if they match any of the rule activation patterns, which are expressed in terms of particular words, parts of speech, and tagged signs of syntactic complexity. Each application of a rule transforms a single input sentence into two sim- Table 1 : Sentence transformation scheme used to simplify sentences containing compound clauses pler sentences which are added to the working set (stack W in Algorithm 1). The iterative nature of the algorithm enables it to convert complex sentences containing multiple signs of syntactic complexity such as (1) into the sequence of simple sentences (2).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 790, |
|
"end": 797, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1196, |
|
"end": 1203, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "( 1)Kattab, of Eccles, Greater Manchester, was required to use diluted chloroform water in the remedy, but the pharmacy only kept concentrated chloroform, which is 20 times stronger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "(2) a. Kattab, of Eccles, Greater Manchester, was required to use diluted chloroform water in the remedy. b.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The pharmacy only kept concentrated chloroform. c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Concentrated chloroform is 20 times stronger.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Simplification System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We evaluated the sentence simplification method extrinsically via two NLP applications. In each case, the application was treated as a black box. We compared performance of the system when processing input in its original form and in an automatically simplified form generated by the simplification method. As noted in Section 3, our approach to sentence simplification is syntactic rather than lexical. As they are based to some extent on exact string matching, the experiments described in this paper would be unsuitable for evaluation of lexical simplification systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Semantic role labelling (SRL) is the task of automatically detecting the different arguments of predicates expressed in input sentences. We evaluated a system performing SRL in accordance with the Propbank formalism. In this scheme, an \"individual verb's semantic arguments are numbered, beginning with zero. For a particular verb, [A0] is generally the argument exhibiting features of a Prototypical Agent (Dowty, 1991) For extrinsic evaluation of the sentence simplification method, we focused on verbal predicates 8 , their arguments, and the nine listed adjunct-like argument types. Table 2 provides an example of SRL to analyse sentence (3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 407, |
|
"end": 420, |
|
"text": "(Dowty, 1991)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 594, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Role Labelling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "( 3)When Disney offered to pay Mr. Steinberg a premium for his shares, the New York investor didn't demand the company also pay a premium to other shareholders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Role Labelling", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The table contains a row of information about the semantic roles associated with each of the four main verbs occurring in the sentence. For example, it encodes information about the agent (the New York investor), patient or theme (the company also pay a premium to other shareholders), time (When Disney offered to pay Mr. Steinberg a premium for his shares), and negation (n't) of the verb demand. Test Data. No suitable test data exist to evaluate a SRL system as a means of extrinsically evaluating the sentence simplification method. Although annotated data from the CONLL-2004/5 9 shared tasks on SRL are available, this test data is available only for the original versions of input sentences and not for simplified versions which may be generated using sentence simplification systems. Given that it is difficult to map verbs, their arguments, and the semantic labels of these arguments from sentences in their original form to groups of sentences in their automatically generated simplifications, we developed a new set of test data for this purpose. We used a 7270-token collection of news articles from the METER corpus (Gaizauskas et al., 2001) to derive a new manually annotated data set. The original version of this dataset contains 265 sentences while the automatically simplified one contains 470 sentences. NLP System. We made our extrinsic evaluation of the sentence simplification method using Senna (Collobert et al., 2011), a SRL system which tags predicates and their arguments in accordance with the formalism used in Propbank Motivation. In our previous work (Evans and Or\u0203san, 2019), we used six metrics to assess the readability of the original and simplified versions of texts which include those that we use as test data for the SRL task. We found that the automatically simplified news texts have a lower propositional density (0.483 vs. 0.505) and reading grade level (5.4 vs. 10.3) and greater syntactic simplicity (89.07 vs. 
46.81) and temporal consistency, assessed in terms of tense and aspect (30.15 vs. 27.76) than the original news texts. We determined the scores for these readability metrics using the CPIDR tool (Covington, 2012) 10 and the Coh-Metrix Web Tool (McNamara et al., 2014) . As a task dependent on accurate syntactic parsing, we would expect that automatic SRL would be more accurate when processing the simplified versions of the input texts. Evaluation Method. We applied Senna to the original and automatically simplified versions of the test data. Table 3 contains an example of the semantic roles labelled in one of the test sentences that we used. In this table, arguments identified more accurately in simplified sentences are underlined. For cases in which the SRL performed by Senna differed when processing the original and automatically simplified versions of input sentences, we manually inspected the two analyses, and recorded the number of cases for which SRL of the original sentence was superior to that of the simplified sentence, and vice versa. The inspection was made by a single annotator. In future work, we will seek to employ additional annotators for this task. Results. Our manual evaluation of output from Senna revealed that 86.39% (1707) of the arguments identified in the two versions of the texts were identical. Of the remaining arguments, 5.31% (105) of those correctly identified in the original versions of the texts were not identified in the simplified versions, while 8.29% (164) of the arguments correctly identified in the simplified versions of the texts were not identified in the original versions. Of the 269 arguments identified in only one of the versions of the texts, 60.97% were arguments identified more accurately in the simplified version, while 39.03% were arguments identified more accurately in the original versions of the texts. 
Table 4 shows the number of semantic roles labelled more accurately, by type, when Senna processes the original (Orig) and the automatically simplified (Simp) versions of news articles. To illustrate, when processing the original versions of the news texts, Senna correctly identifies the agents (arguments with semantic role label A0) of 14 verbs that it did not identify when processing the automatically simplified versions of those texts. Conversely, when processing the automatically simplified versions, Senna correctly identified the agents of 23 verbs that it did not identify when processing the original versions. Discussion. Overall, while there are advantages to performing SRL on each version of input texts, the greatest improvement in performance arises from processing the automatically simplified versions. A larger-scale evaluation is necessary but this observation constitutes some evidence that the sentence simplification method facilitates the NLP task of SRL.",
|
"cite_spans": [ |
|
{ |
|
"start": 1130, |
|
"end": 1155, |
|
"text": "(Gaizauskas et al., 2001)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 2201, |
|
"end": 2224, |
|
"text": "(McNamara et al., 2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2504, |
|
"end": 2511, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 3839, |
|
"end": 3846, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Role Labelling", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "Information extraction (IE) is the automatic identification of selected types of entities, relations, or events in free text (Grishman, 2005) . In this paper, we are concerned with IE from vignettes which provide brief clinical descriptions of hypothetical patients. The discourse structure of these vignettes consists of six elements: basic information (patient's gender, profession, ethnicity, and health status); chief complaint (the main concern motivating the patient to seek medical intervention); history (a narrative description of the patient's social, family, and medical history); vital signs (a description of the patient's pulse and respiration rates, blood pressure, and temperature); physical examination (a narrative description of clinical findings observed in the patient); and diagnostic study and laboratory study (the results of several different types of clinical test carried out on the patient).", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 141, |
|
"text": "(Grishman, 2005)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Information Extraction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Each element in the discourse structure is represented by a template encoding related information. For example, the template for physical examinations holds information on each clinical finding/symptom (FINDING) observed in the examination, information on the technique used to elicit that finding (TECHNIQUE), the bodily location to which the technique was applied (LOCATION), the body system that the finding pertains to (SYSTEM), and any qualifying information about the finding (QUALIFIER). In this article, we focus on automatic extraction of information pertaining to physical examinations. The goal of the IE system is to identify the phrases used in the clinical vignette that denote findings and related concepts and add them to its database entry for the vignette. Test Data. Our test data comprises a set of 286 clinical vignettes and completed IE templates, encoding information about TECHNIQUEs, LOCATIONs, SYSTEMs, and QUALIFIERs, associated with the 719 FINDINGs that they contain. This test data was developed in the context of an earlier project and is based on clinical vignettes owned by the National Board of Medical Examiners. 11 NLP System. For the experiments described in this paper, we used a simple IE system in which input texts are tokenised and part-of-speech tagged, domain-specific gazetteers are used to identify references to medical concepts, and a simple set of finite state transducers (FSTs) is used to group adjacent references to concepts into multiword terms. The gazetteers and FSTs were developed in previous work presented by Evans (2011).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1569, |
|
"end": 1581, |
|
"text": "Evans (2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Information Extraction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "After tagging references to clinical concepts in the vignettes, IE is performed using a small number of simple rules. To summarise, vignettes are processed by considering each sentence in turn. Every mention of a clinical FINDING or SYMPTOM is taken as the basis for a new IE template. The first tagged TECHNIQUE, SYSTEM, and LOCATION within the sentence containing the focal SYMPTOM or FINDING is considered to be related to it. 12 QUALIFIERs (e.g. bilateral or peripheral) are extracted in the same way, except in sentences containing the word no. In these cases, the QUALIFIER related to the FINDING is identified as none.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Information Extraction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The sentences in the test data were simplified using the method presented in Section 3. We then ran the IE system in two settings. In the first (IE ORIG), it processed the original collection of vignettes. In the second (IE SIMP), it processed the automatically simplified vignettes, which contain a reduced number of compound clauses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Information Extraction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Motivation. An analysis of the readability of the original and simplified versions of the clinical vignettes did not provide a strong indication that the automatic sentence simplification method would improve the accuracy of the IE system. The 286 original clinical vignettes in the test data have a mean propositional density of 0.4826 ideas per word and 5.499 ideas per sentence. The values of these metrics for the simplified versions of the vignettes are 0.4803 ideas per word and 5.269 ideas per sentence, respectively. Although they are of the correct polarity, these differences are not statistically significant (p = 0.5327 and p = 0.1407, respectively). However, previous work in sentence simplification for IE (Jonnalagadda et al., 2009; Evans, 2011; Peng et al., 2012; Niklaus et al., 2016) has demonstrated that automatic sentence simplification can improve the accuracy of IE systems. This motivated us to evaluate the impact of the automatic sentence simplification method in this task. Evaluation Method. For the IE task, our evaluation metric is based on F1-score averaged over all slots in the IE templates and all templates in the test data. Identification of true positives is based on exact matching of system-identified slot fillers with those in the manually completed IE templates in our test data. Results. The accuracy scores obtained by each variant of the IE system are presented in Table 5. Inspection of this table reveals that FINDINGs and all related concepts are identified more accurately in the simplified versions of the input texts. Sentence (4) and its automatically simplified variant (5) provide an example of the difference in performance obtained by the two systems. In these examples, identified FINDINGs are italicised and associated concepts are underlined. Multiword terms appear in square brackets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 720, |
|
"end": 747, |
|
"text": "(Jonnalagadda et al., 2009;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 748, |
|
"end": 760, |
|
"text": "Evans, 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 779, |
|
"text": "Peng et al., 2012;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 801, |
|
"text": "Niklaus et al., 2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1412, |
|
"end": 1419, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Information Extraction", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "She has truncal LOC obesity and pigmented QUAL abdominal LOC striae. We applied a bootstrapping method to obtain confidence intervals for accuracy of extraction of each of the IE template slots. For this purpose, 50% of the output of each system was randomly sampled in each of 100 000 evaluations. The confidence intervals are presented in the 95% CI columns of Table 5 . The figures in the Best Performer column of this table indicate the proportion of evaluations for which the IE SIMP system was more accurate than the IE ORIG system. Differences in the accuracy of IE were found to be statistically significant in all cases, using McNemar's test (p < 0.00078), with the exception of differences when extracting FINDINGs (p = 0.6766).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 375, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "(4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Discussion. Chinchor (1992) notes that assessment of the statistical significance of differences in accuracy between different IE systems is challenging. In our evaluation experiment, dos Santos et al. (2018) framed the comparison between IE ORIG and IE SIMP using a binomial regression model. Given that such models apply only when the variables being considered are independent, dos Santos et al. (2018) included a latent variable in the analysis to represent the effect of the text on the performance of the two systems (the two evaluations are not independent because both systems process the same text). They showed that the odds ratio of agreement between IE SIMP and the gold standard is 1.5 times greater than that between IE ORIG and the gold standard. For all slots in the IE template, the probability of agreement between IE ORIG and the gold standard is 0.937. The probability of agreement between IE SIMP and the gold standard is 0.957. This difference is statistically significant. They conclude that IE ORIG and IE SIMP differ in their performance on the information extraction task. The probability of agreement with our gold standard is greater for IE SIMP than for IE ORIG, although the probability of agreement is already large for IE ORIG. This evaluation indicates that the automatic sentence simplification method facilitates IE.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 27, |
|
"text": "Chinchor (1992)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "(4)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a result of various difficulties identified in current approaches to intrinsic evaluation of sentence simplification methods, we performed an extrinsic evaluation of one information-preserving sentence simplification method via three NLP tasks. We found that the sentence simplification step brings improvements to the performance of IE and SRL systems. In a third experiment, not described here due to space restrictions, we evaluated the sentence simplification method extrinsically with respect to a multidocument summarisation task using MEAD (Radev et al., 2006) to summarise clusters of documents developed for Task 2 of DUC-2004. 13 We found that the simplification step had no impact on this task. Consequently, although the findings reported in our current paper seem promising, it is difficult to know the extent to which they are applicable to other NLP tasks or to tasks which differ only with respect to the test data used. This is one issue that we are interested in exploring in future work. Another is a test of whether extrinsic evaluation methods sensitive to information about the types of changes made in the simplification step would perform better than the black-box methods used in the current paper. 13 Information about the DUC conferences is accessible from https://www-nlpir.nist.gov/projects/duc/index.html (last accessed 22nd August 2018). Guidelines about the tasks presented in DUC-2004 are available at https://www-nlpir.nist.gov/projects/duc/guidelines/2004.html (last accessed 22nd August 2018).", |
|
"cite_spans": [ |
|
{ |
|
"start": 550, |
|
"end": 570, |
|
"text": "(Radev et al., 2006)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 630, |
|
"end": 642, |
|
"text": "DUC-2004. 13", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1226, |
|
"end": 1228, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://www.lsi.upc.edu/~srlconll/spec.html. Last accessed 14th May 2019. 4 Coordinator of Extended projections of a Verb. 5 Start of Subordinate Extended projection of a Verb.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Such as [A2], etc. 7 Applicable to verbs. 8 As opposed to prepositional, adjectival, or other types of predicate. 9 http://www.lsi.upc.edu/~srlconll/home.html. Last accessed 23rd May 2019.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ai1.ai.uga.edu/caspr/CPIDR-3.2.zip. Last accessed 31st May 2019.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.nbme.org/. Last accessed 31st May 2019. 12 Versions of the system in which the closest tagged concept was extracted in each case, rather than the first, were significantly less accurate in both cases (overall accuracy of 0.6542 for IE from the original vignettes, and 0.6567 for IE from vignettes automatically simplified using the system described in Section 3). See Table 5 for results obtained using the superior IE system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Lexico-syntactic text simplification and compression with typed dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Mandya", |
|
"middle": [], |
|
"last": "Angrosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tadashi", |
|
"middle": [], |
|
"last": "Nomoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Advaith", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1996--2006", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mandya Angrosh, Tadashi Nomoto, and Advaith Siddharthan. 2014. Lexico-syntactic text simplification and compression with typed dependencies. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Association for Computational Linguistics, Dublin, Ireland, pages 1996-2006.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Uncch at duc 2007: Query expansion, lexical simplification and sentence selection strategies for multidocument summarization", |
|
"authors": [ |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Blake", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Kampov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Orphanides", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "West", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cory", |
|
"middle": [], |
|
"last": "Lown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Document Understanding Conference (DUC-2007). National Institute of Standards and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Catherine Blake, Julia Kampov, Andreas K. Orphanides, David West, and Cory Lown. 2007. Uncch at duc 2007: Query expansion, lexical simplification and sentence selection strategies for multidocument summarization. In Proceedings of the Document Understanding Conference (DUC-2007). National Institute of Standards and Technology.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Syntactic Simplification of Text", |
|
"authors": [ |
|
{ |
|
"first": "Yvonne", |
|
"middle": [], |
|
"last": "Canning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yvonne Canning. 2002. Syntactic Simplification of Text. Ph.D. thesis, University of Sunderland.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Verbal working memory and sentence comprehension", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Caplan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gloria", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Waters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Behavioural and Brain Sciences", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "77--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Caplan and Gloria S. Waters. 1999. Verbal working memory and sentence comprehension. Behavioural and Brain Sciences 22:77-126.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Automatic induction of rules for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Chandrasekar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bangalore", |
|
"middle": [], |
|
"last": "Srinivas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Knowledge-Based Systems", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "183--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raman Chandrasekar and Bangalore Srinivas. 1997. Automatic induction of rules for text simplification. Knowledge-Based Systems 10:183-190.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The statistical significance of the muc-4 results", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Chinchor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourth Message Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "30--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Chinchor. 1992. The statistical significance of the muc-4 results. In Proceedings of the Fourth Message Understanding Conference. McLean, Virginia, pages 30-50.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Head-Driven Statistical Models for Natural Language Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "CPIDR 5.1 user manual", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Covington", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael A. Covington. 2012. CPIDR 5.1 user manual. Technical report, Institute for Artificial Intelligence, University of Georgia, Athens, Georgia, U.S.A.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A Tagging Approach to Identify Complex Constituents for Text Simplification", |
|
"authors": [ |
|
{ |
|
"first": "Iustin", |
|
"middle": [], |
|
"last": "Dornescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Orasan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "221--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iustin Dornescu, Richard Evans, and Constantin Orasan. 2013. A Tagging Approach to Identify Complex Constituents for Text Simplification. In Proceedings of Recent Advances in Natural Language Processing. Hissar, Bulgaria, pages 221-229.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Assessing if an automated method for identifying features in texts is better than another: discussions and results", |
|
"authors": [ |
|
{ |
|
"first": "Larissa", |
|
"middle": [], |
|
"last": "Sayuri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Futino", |
|
"middle": [], |
|
"last": "Castro Dos Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcos", |
|
"middle": [ |
|
"Oliveira" |
|
], |
|
"last": "Prates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gisele", |
|
"middle": [], |
|
"last": "De Oliveira Maia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guilherme", |
|
"middle": [], |
|
"last": "Lucas Moreira Dias Almeida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daysemara", |
|
"middle": [], |
|
"last": "Maria Cotta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [ |
|
"Cunha" |
|
], |
|
"last": "Pedroso", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aur\u00e9lio", |
|
"middle": [], |
|
"last": "De Aquino Ara\u00fajo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Larissa Sayuri Futino Castro dos Santos, Marcos Oliveira Prates, Gisele de Oliveira Maia, Guilherme Lucas Moreira Dias Almeida, Daysemara Maria Cotta, Ricardo Cunha Pedroso, and Aur\u00e9lio de Aquino Ara\u00fajo. 2018. Assessing if an automated method for identifying features in texts is better than another: discussions and results. Technical report, Department of Statistics, Universidade Federal de Minas Gerais. https://bit.ly/2xUD2BI.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Thematic proto-roles and argument selection", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Dowty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Language", |
|
"volume": "67", |
|
"issue": "", |
|
"pages": "547--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Dowty. 1991. Thematic proto-roles and argument selection. Language 67:547-619.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Comparing methods for the syntactic simplification of sentences in information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Literary and Linguistic Computing", |
|
"volume": "26", |
|
"issue": "4", |
|
"pages": "371--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Evans. 2011. Comparing methods for the syntactic simplification of sentences in information extraction. Literary and Linguistic Computing 26 (4):371-388.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Identifying signs of syntactic complexity for rule-based sentence simplification", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Or\u0203san", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Natural Language Engineering", |
|
"volume": "25", |
|
"issue": "1", |
|
"pages": "69--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Evans and Constantin Or\u0203san. 2019. Identifying signs of syntactic complexity for rule-based sentence simplification. Natural Language Engineering 25 (1):69-119.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Sentence simplification as tree transduction", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Feblowitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Kauchak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2nd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Feblowitz and David Kauchak. 2013. Sentence simplification as tree transduction. In Proceedings of the 2nd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR). Association for Computational Linguistics, Sofia, Bulgaria, pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The meter corpus: A corpus for analysing journalistic text reuse", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Gaizauskas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Foster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yorick", |
|
"middle": [], |
|
"last": "Wilks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Arundel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Clough", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Piao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of Corpus Linguistics 2001 Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "214--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Gaizauskas, Jonathan Foster, Yorick Wilks, John Arundel, Paul Clough, and Scott Piao. 2001. The meter corpus: A corpus for analysing journalistic text reuse. In Proceedings of Corpus Linguistics 2001 Conference. Lancaster University Centre for Computer Corpus Research on Language, pages 214-223.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Event-centered simplification of news stories", |
|
"authors": [ |
|
{ |
|
"first": "Goran", |
|
"middle": [], |
|
"last": "Glavas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Stajner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Student Workshop held in conjunction with RANLP-2013. RANLP, Hissar", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Goran Glavas and Sanja Stajner. 2013. Event-centered simplification of news stories. In Proceedings of the Student Workshop held in conjunction with RANLP-2013. RANLP, Hissar, Bulgaria, pages 71-78.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The Oxford Handbook of Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "545--559", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Grishman. 2005. The Oxford Handbook of Computational Linguistics. Oxford University Press, chapter Information Extraction, pages 545- 559.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Source sentence simplification for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Hasler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adri\u00e0", |
|
"middle": [], |
|
"last": "De Gispert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Stahlberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aurelien", |
|
"middle": [], |
|
"last": "Waite", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Computer Speech & Language", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "221--235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Hasler, Adri\u00e0 de Gispert, Felix Stahlberg, and Aurelien Waite. 2017. Source sentence simplification for statistical machine translation. Computer Speech & Language 45:221-235.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Coordinate noun phrase disambiguation in a generative parsing model", |
|
"authors": [ |
|
{ |
|
"first": "Deirdre", |
|
"middle": [], |
|
"last": "Hogan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "680--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Deirdre Hogan. 2007. Coordinate noun phrase disambiguation in a generative parsing model. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics, Prague, Czech Republic, pages 680-687.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The psychology of language", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Jay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy B. Jay. 2003. The psychology of language. Pearson, Upper Saddle River, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improvements to dependency parsing using automatic simplification of data", |
|
"authors": [ |
|
{ |
|
"first": "Tom\u00e1s", |
|
"middle": [], |
|
"last": "Jel\u00ednek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC-2014", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom\u00e1s Jel\u00ednek. 2014. Improvements to dependency parsing using automatic simplification of data. In Proceedings of LREC-2014. European Language Resources Association, Reykjavik, Iceland, pages 73-77.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Towards effective sentence simplification for automatic processing of biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "Siddhartha", |
|
"middle": [], |
|
"last": "Jonnalagadda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Tari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jorg", |
|
"middle": [], |
|
"last": "Hakenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chitta", |
|
"middle": [], |
|
"last": "Baral", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graciela", |
|
"middle": [], |
|
"last": "Gonzalez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of NAACL HLT 2009: Short Papers. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Siddhartha Jonnalagadda, Luis Tari, Jorg Hakenberg, Chitta Baral, and Graciela Gonzalez. 2009. Towards effective sentence simplification for automatic processing of biomedical text. In Proceedings of NAACL HLT 2009: Short Papers. Association for Computational Linguistics, Boulder, Colorado, pages 177-180.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The construction-integration model: A framework for studying memory for text", |
|
"authors": [ |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Kintsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Welsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Relating theory and data: Essays on human memory", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "367--385", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter Kintsch and David M. Welsch. 1991. The construction-integration model: A framework for studying memory for text. In W. E. Hockley and S. Lewandowsky, editors, Relating theory and data: Essays on human memory, Hillsdale, NJ: Erlbaum, pages 367-385.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Looking hard: Eye tracking for detecting grammaticality of automatically compressed sentences", |
|
"authors": [ |
|
{ |
|
"first": "Sigrid", |
|
"middle": [], |
|
"last": "Klerke", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "H\u00e9ctor", |

"middle": [], |

"last": "Mart\u00ednez Alonso", |

"suffix": "" |

}, |

{ |

"first": "Anders", |

"middle": [], |

"last": "S\u00f8gaard", |

"suffix": "" |

} |
|
], |
|
"year": 2015, |
|
"venue": "NODALIDA. Link\u00f6ping University Electronic Press / ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sigrid Klerke, H\u00e9ctor Mart\u00ednez Alonso, and Anders S\u00f8gaard. 2015. Looking hard: Eye tracking for de- tecting grammaticality of automatically compressed sentences. In NODALIDA. Link\u00f6ping University Electronic Press / ACL, pages 97-105.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Binary Codes Capable of Correcting Deletions and Insertions and Reversals", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vladimir Iosifovich Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "Soviet Physics Doklady", |
|
"volume": "10", |
|
"issue": "8", |
|
"pages": "707--710", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir Iosifovich Levenshtein. 1966. Binary Codes Capable of Correcting Deletions and Insertions and Reversals. Soviet Physics Doklady 10 (8):707-710.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Automated Evaluation of Text and Discourse with Coh-Metrix", |
|
"authors": [ |
|
{ |
|
"first": "Danielle", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Graesser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiqiang", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danielle S. McNamara, Arthur C. Graesser, Philip M. McCarthy, and Zhiqiang Cai. 2014. Automated Evaluation of Text and Discourse with Coh-Metrix. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A sentence simplification system for improving relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Niklaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Bermeitinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siegfried", |
|
"middle": [], |
|
"last": "Handschuh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [], |
|
"last": "Freitas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "170--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christina Niklaus, Bernhard Bermeitinger, Siegfried Handschuh, and Andr\u00e9 Freitas. 2016. A sentence simplification system for improving relation extrac- tion. In Proceedings of COLING 2016, the 26th In- ternational Conference on Computational Linguis- tics: System Demonstrations. Association for Com- putational Linguistics, Osaka, Japan, pages 170- 174. https://www.aclweb.org/anthology/C16-2036.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Intelligent text processing to help readers with autism", |
|
"authors": [ |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Or\u0203san", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Evans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Mitkov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Intelligent Natural Language Processing: Trends and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "713--740", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Constantin Or\u0203san, Richard Evans, and Ruslan Mitkov. 2018. Intelligent text processing to help readers with autism. In Khaled Shaalan, Aboul Ella Hassanien, and Mohammed F. Tolba, editors, Intelligent Natu- ral Language Processing: Trends and Applications, Springer, pages 713-740.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "The proposition bank: An annotated corpus of semantic roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational Linguistics", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "71--106", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/0891201053630264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics 31(1):71-106.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th annual meeting for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "311--318", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting for Computational Lin- guistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, pages 311-318.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "iSimp: A sentence simplification system for biomedical text", |
|
"authors": [ |
|
{ |
|
"first": "Yifan", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catalina", |
|
"middle": [ |
|
"O" |
|
], |
|
"last": "Tudor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manabu", |
|
"middle": [], |
|
"last": "Torii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cathy", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Vijay-Shanker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 IEEE International Conference on Bioinformatics and Biomedicine. IEEE, Philadelphia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yifan Peng, Catalina O. Tudor, Manabu Torii, Cathy H. Wu, and K. Vijay-Shanker. 2012. iSimp: A sentence simplification system for biomedical text. In Pro- ceedings of the 2012 IEEE International Conference on Bioinformatics and Biomedicine. IEEE, Philadel- phia, PA, pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "MUSST: A Multilingual Syntactic Simplification Tool", |
|
"authors": [ |
|
{ |
|
"first": "Carolina", |
|
"middle": [], |
|
"last": "Scarton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessio", |
|
"middle": [], |
|
"last": "Palmero Aprosio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Tonelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [], |
|
"last": "Mart\u00edn Wanton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The Companion Volume of the IJC-NLP 2017 Proceedings: System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolina Scarton, Alessio Palmero Aprosio, Sara Tonelli, Tamara Mart\u00edn Wanton, and Lucia Specia. 2017. MUSST: A Multilingual Syntactic Simplifi- cation Tool. In The Companion Volume of the IJC- NLP 2017 Proceedings: System Demonstrations. Taipei, Taiwan, pages 25-28.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Syntactic simplification for improving content selection in multidocument summarization", |
|
"authors": [ |
|
{ |
|
"first": "Advaith", |
|
"middle": [], |
|
"last": "Siddharthan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1220355.1220484" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Advaith Siddharthan, Ani Nenkova, and Kath- leen McKeown. 2004. Syntactic simplifica- tion for improving content selection in multi- document summarization. In Proceedings of the 20th International Conference on Computational Linguistics. Association for Computational Lin- guistics, Stroudsburg, PA, USA, COLING '04. https://doi.org/10.3115/1220355.1220484.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Looking at text simplification -Using eye tracking to evaluate the readability of automatically simplified sentences. Bachelor thesis", |
|
"authors": [ |
|
{ |
|
"first": "Linnea", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Timm", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linnea B. Timm. 2018. Looking at text simplification -Using eye tracking to evaluate the readability of automatically simplified sentences. Bachelor the- sis, Institutionen fr datavetenskap, Linkpings univer- sitet.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Sentence simplification for semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Vickrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "344--352", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Vickrey and Daphne Koller. 2008. Sentence simplification for semantic role labeling. In Pro- ceedings of ACL-08: HLT. Columbus, Ohio, pages 344-352.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Learning to Simplify Children Stories with Limited Data", |
|
"authors": [ |
|
{ |

"first": "Tu", |

"middle": [ |

"Thanh" |

], |

"last": "Vu", |

"suffix": "" |

}, |

{ |

"first": "Giang", |

"middle": [ |

"Binh" |

], |

"last": "Tran", |

"suffix": "" |

}, |

{ |

"first": "Son", |

"middle": [ |

"Bao" |

], |

"last": "Pham", |

"suffix": "" |

} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tu Thanh Vu, Giang Binh Tran, and Son Bao Pham. 2014. Learning to Simplify Children Stories with Limited Data, Springer, Bangkook, Thailand, pages 31-41.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |

"first": "Sander", |

"middle": [], |

"last": "Wubben", |

"suffix": "" |

}, |

{ |

"first": "Antal", |

"middle": [], |

"last": "van den Bosch", |

"suffix": "" |

}, |

{ |

"first": "Emiel", |

"middle": [], |

"last": "Krahmer", |

"suffix": "" |

} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL-12). Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1015--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by mono- lingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics (ACL-12). Association for Com- putational Linguistics, Jeju, Republic of South Ko- rea, pages 1015-1024.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Optimizing statistical machine translation for text simplification", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quanze", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "TACL", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "401--415", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. TACL 4:401-415. https://bit.ly/2Sj5mag.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Processing and memory of central versus peripheral information as a function of reading goals: evidence from eye-movements", |
|
"authors": [ |
|
{ |
|
"first": "Menahem", |
|
"middle": [], |
|
"last": "Yeari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Van Den Broek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marja", |
|
"middle": [], |
|
"last": "Oudega", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Reading and Writing", |
|
"volume": "28", |
|
"issue": "8", |
|
"pages": "1071--1097", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Menahem Yeari, Paul van den Broek, and Marja Oudega. 2015. Processing and memory of central versus peripheral information as a function of read- ing goals: evidence from eye-movements. Reading and Writing 28 (8):1071-1097.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"num": null, |
|
"text": "{They were formally found not guilty by the recorder Michael Gibbon QC after}A [a witness, who cannot be identified, withdrew from giving evidenceB and CEV prosecutor Susan Ferrier offered no further evidenceC ]{}D.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>Scheme</td><td>Input Sentence</td><td>Output Sentence 1</td><td>Output Sentence 2</td></tr><tr><td/><td/><td>{They were formally found</td><td>{They were formally found</td></tr><tr><td>A [B CEV C] D. \u2192 A B D. A C D.</td><td/><td>not guilty by the recorder Michael Gibbon QC after}A a witness, who cannot be identified, withdrew from</td><td>not guilty by the recorder Michael Gibbon QC after}A prosecutor Susan Ferrier offered no further evidenceC</td></tr><tr><td/><td/><td>giving evidenceB {}D.</td><td>{}D</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "Example of semantic role labelling of Sentence (3)", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"text": "Original sentence: But Smith had already been arrested -her clothing had been found near his home and DNA tests linked him to it.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>A0</td><td/><td>V</td><td>A1</td><td>A2 AMDIS AMLOC</td><td>AMTMP</td></tr><tr><td/><td/><td colspan=\"2\">arrested Smith</td><td>But</td><td>already</td></tr><tr><td/><td/><td/><td/><td>near his home and</td></tr><tr><td/><td/><td>found</td><td>her clothing</td><td>DNA tests linked</td></tr><tr><td/><td/><td/><td/><td>him to it</td></tr><tr><td>his</td><td>home</td><td/><td/><td/></tr><tr><td>and</td><td>DNA</td><td>linked</td><td>him</td><td>to it</td></tr><tr><td>tests</td><td/><td/><td/><td/></tr><tr><td colspan=\"6\">Simplified sentence: But Smith has already been arrested -her clothing had been found</td></tr><tr><td colspan=\"5\">near his home. DNA tests linked him to it.</td></tr><tr><td>A0</td><td/><td>V</td><td>A1</td><td>A2 AMDIS AMLOC</td><td>AMTMP</td></tr><tr><td/><td/><td colspan=\"2\">arrested Smith</td><td>But</td><td>already</td></tr><tr><td/><td/><td>found</td><td>her clothing</td><td>near his home</td></tr><tr><td colspan=\"2\">DNA tests</td><td>linked</td><td>him</td><td>to it</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"text": "Example of more accurate semantic role labelling in automatically simplified text.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">Orig vs. Simp vs.</td></tr><tr><td>Role</td><td>Simp</td><td>Orig</td></tr><tr><td>A0 (agent)</td><td>14</td><td>23</td></tr><tr><td>A1 (patient/theme)</td><td>45</td><td>77</td></tr><tr><td>A2 (less prominent than A1)</td><td>14</td><td>13</td></tr><tr><td>AMCAU (cause)</td><td>0</td><td>1</td></tr><tr><td>AMDIR (direction)</td><td>4</td><td>0</td></tr><tr><td>AMDIS (discourse relation)</td><td>0</td><td>3</td></tr><tr><td>AMLOC (location)</td><td>3</td><td>13</td></tr><tr><td>AMMNR (manner)</td><td>4</td><td>6</td></tr><tr><td>AMNEG (negation)</td><td>0</td><td>1</td></tr><tr><td>AMPNC (purpose)</td><td>1</td><td>6</td></tr><tr><td>AMTMP (time)</td><td>12</td><td/></tr><tr><td>V (verb)</td><td>2</td><td>3</td></tr><tr><td>Total</td><td>99</td><td>173</td></tr></table>" |
|
}, |
|
"TABREF6": { |
|
"num": null, |
|
"text": "Positive differences in numbers of true positives obtained for semantic role labelling of original and simplified versions of input texts", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"text": "a. She has truncal LOC [obesity striae]. b. She has pigmented QU AL abdominal LOC striae.In (5-a), the FINDING obesity is not tagged correctly because the SYMPTOM striae is erroneously grouped with obesity to form a new FINDING, obesity striae which does not match the FIND-ING listed in the gold standard. By contrast, LO-CATIONS in (5) are identified with greater accu-", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>IE ORIG</td><td/><td>IE SIM P</td><td/></tr><tr><td>Template</td><td/><td/><td/><td/><td>Best</td></tr><tr><td>slot</td><td>Acc</td><td>95% CI</td><td>Acc</td><td>95% CI</td><td>Performer</td></tr><tr><td>FINDING</td><td colspan=\"4\">0.8819 [0.847, 0.914] 0.8861 [0.853, 0.917]</td><td>0.5486</td></tr><tr><td colspan=\"5\">TECHNIQUE 0.8514 [0.814, 0.886] 0.8903 [0.858, 0.922]</td><td>0.9344</td></tr><tr><td>SYSTEM</td><td colspan=\"4\">0.8097 [0.769, 0.850] 0.8431 [0.806, 0.881]</td><td>0.873</td></tr><tr><td>QUALIFIER</td><td colspan=\"4\">0.7431 [0.697, 0.786] 0.7708 [0.728, 0.814]</td><td>0.794</td></tr><tr><td>LOCATION</td><td colspan=\"4\">0.8431 [0.806, 0.881] 0.8611 [0.825, 0.894]</td><td>0.735</td></tr><tr><td>All</td><td colspan=\"4\">0.8258 [0.808, 0.843] 0.8503 [0.834, 0.867]</td><td>0.976</td></tr></table>" |
|
}, |
|
"TABREF8": { |
|
"num": null, |
|
"text": "Performance of the IE systems processing our test data.", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>racy than those in (4) because IE ORIG erroneously</td></tr><tr><td>extracts the same LOCATION (truncal) for both</td></tr><tr><td>FINDINGs in (4).</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |