|
{ |
|
"paper_id": "R19-1047", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:02:42.794271Z" |
|
}, |
|
"title": "Divide and Extract -Disentangling Clause Splitting and Proposition Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Darina", |
|
"middle": [], |
|
"last": "Gold", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Lab University of Duisburg", |
|
"location": { |
|
"settlement": "Essen", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Lab University of Duisburg", |
|
"location": { |
|
"settlement": "Essen", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Proposition extraction from sentences is an important task for information extraction systems. Evaluation of such systems usually conflates two aspects: splitting complex sentences into clauses and the extraction of propositions. It is thus difficult to independently determine the quality of the proposition extraction step. We create a manually annotated proposition dataset from sentences taken from restaurant reviews that distinguishes between clauses that need to be split and those that do not. The resulting proposition evaluation dataset allows us to independently compare the performance of proposition extraction systems on simple and complex clauses. Although performance drastically drops on more complex sentences, we show that the same systems perform best on both simple and complex clauses. Furthermore, we show that specific kinds of subordinate clauses pose difficulties to most systems.", |
|
"pdf_parse": { |
|
"paper_id": "R19-1047", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Proposition extraction from sentences is an important task for information extraction systems. Evaluation of such systems usually conflates two aspects: splitting complex sentences into clauses and the extraction of propositions. It is thus difficult to independently determine the quality of the proposition extraction step. We create a manually annotated proposition dataset from sentences taken from restaurant reviews that distinguishes between clauses that need to be split and those that do not. The resulting proposition evaluation dataset allows us to independently compare the performance of proposition extraction systems on simple and complex clauses. Although performance drastically drops on more complex sentences, we show that the same systems perform best on both simple and complex clauses. Furthermore, we show that specific kinds of subordinate clauses pose difficulties to most systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Propositions are predicate-centered tuples consisting of the verb, the subject, and other arguments such as objects and modifiers. For example in Figure 1 , \"smiled\" is the predicate and the other elements are arguments. The first argument is reserved for the role of the subject, in this case \"The waitress\", while \"at her friend\" and \"now\" are arguments, without further sub-specification. Propositions are used in language understanding tasks such as relation extraction (Riedel et al., 2013; Petroni et al., 2015) , information retrieval (L\u00f6ser et al., 2011; Giri et al., 2017) , question answering (Khot et al., 2017) , word analogy detection (Stanovsky et al., 2015) , knowledge base construction (Dong et al., 2014; , summarization (Melli et al., 2006) , or other tasks that need comparative operations, such as equality, entailment, or contradiction, on phrases or sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 495, |
|
"text": "(Riedel et al., 2013;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 496, |
|
"end": 517, |
|
"text": "Petroni et al., 2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 562, |
|
"text": "(L\u00f6ser et al., 2011;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 581, |
|
"text": "Giri et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 603, |
|
"end": 622, |
|
"text": "(Khot et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 648, |
|
"end": 672, |
|
"text": "(Stanovsky et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 722, |
|
"text": "(Dong et al., 2014;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 739, |
|
"end": 759, |
|
"text": "(Melli et al., 2006)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 154, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
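
{

"text": "Editor's sketch (not part of the original paper): under the tuple view described above, the proposition from Figure 1 can be represented as a plain Python tuple whose first slot is the subject, the second slot the predicate, and the remaining slots the other arguments.\n\nproposition = ('The waitress', 'smiled', 'at her friend', 'now')\nsubject, predicate, *other_arguments = proposition  # 'The waitress', 'smiled', ['at her friend', 'now']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},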
|
{ |
|
"text": "The main goal of this paper is to empirically measure the influence of sentence complexity on the performance of proposition extraction systems. Complexity worsens the extraction of dependencies, on which propositions are built. Hence, proposition extraction performance should decrease with increasing sentence complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The contribution of this work is threefold a) a gold standard corpus for propositions 1 , b) an analysis of proposition extraction systems without the influence of complex sentences, and c) an analysis of proposition extraction systems with the influence of complex sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The knowledge of how proposition extraction systems perform on complex sentences will 1) help to identify the system that deals with them best 2) by showing the difficulty with complexity, give a direction towards which proposition extraction systems can be improved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "If different systems perform well on simple or complex sentences, the complexity distinction could help to identify the complexity of a sentence. The complexity of a sentence would then give a direction towards which system would be better to use.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Proposition are relational tupels extracted from sentences in the form of predicate-argument struc-tures (Marcus et al., 1994) . There are proposition models that further distinguish between the type of arguments. They do not only identify the subject, but more complex roles such as temporal and locational objects or causal clauses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 126, |
|
"text": "(Marcus et al., 1994)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Besides the theory and formalization of proposition, proposition extraction systems have performance issues on real data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Although there have been comparative studies of proposition extraction systems, there has been no extensive study on the impact of sentence complexity on proposition extraction system performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Proposition Systems", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Comparative Studies Niklaus et al. (2018) presented an overview of proposition extraction systems and classified them into the classic categories of learning-based, rule-based, and clause-based approaches, as well as approaches capturing interpropositional relationships. They described the specific problems each system tackles as well as gaps on the overall evolution of proposition extraction systems. Schneider et al. (2017) present a benchmark for analyzing errors in proposition extraction systems. Their classes are wrong boundaries, redundant extraction, wrong extraction, uninformative extraction, missing extraction, and out of scope. Their pre-defined classes do not map directly to sentence complexity, although wrong boundaries and out of scope would also be of some interest in an even more detailed error analysis.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 41, |
|
"text": "Studies Niklaus et al. (2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 428, |
|
"text": "Schneider et al. (2017)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Proposition Systems", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Furthermore, according to and Niklaus et al. (2018) there are no common guidelines and followingly no gold standard defining a valid extraction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 51, |
|
"text": "Niklaus et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Proposition Systems", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Systems Table 1 shows the outputs from different systems, our baselines, and our gold standard.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison of Proposition Systems", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In their study, Gashteovski et al. (2017) aim at finding a system with minimal attributes, meaning that hedging 2 and attributes expressed e.g. through relative clauses or adjectives, can be optionally removed. Thus, they use recall and two kinds of precision in the evaluation in order to account for the feature of minimality. To explain this in more detail does not lie within the scope of this paper. Gashteovski et al. (2017) evaluates OLLIE (Mausam et al., 2012) , ClausIE (Del Corro and", |
|
"cite_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 41, |
|
"text": "Gashteovski et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 430, |
|
"text": "Gashteovski et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 468, |
|
"text": "(Mausam et al., 2012)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparison of Proposition Systems", |
|
"sec_num": "2.1" |
|
}, |
|
{

"text": "[Table 1: example extractions for the sentence \"The waitress smiled at her friend now\", listed per system with the columns Subject, Predicate, and Other Elements, covering Allen, ClausIE, ReVerb, Stanford, OLLIE, OpenIE, the baselines BL1 and BL2, and our gold standard (Us).]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Comparison of Proposition Systems",

"sec_num": "2.1"

},
|
{ |
|
"text": "The waitress smiled at her friend | now , and Open IE-4 against their new system, that we will call Allen herein, using precisionrecall, area under the curve, and F1-score. They compare the individual proposition elements. For a proposition to be judged as correct, the predicate and the syntactic heads of the arguments need to be the same as the gold standard. Saha et al. (2018) evaluate ClausIE, OpenIE-4, and CALMIE (a part of OpenIE) using precision. With the findings of this comparison, they introduce a new version of their system, OpenIE-5 3 ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 381, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Us", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all described comparisons, the system of the respective authors is the best, which makes sense as it addresses the issue shown by the authors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Us", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "According to Saha et al. (2018) conjunctive sentences are one of the issues in proposition extraction, as conjunctions are a challenge to dependency parsers which proposition extraction systems are mostly built upon. Hence, Saha et al. (2018) built a system that automatically creates simple sentences from sentences with several conjunctions that are used for proposition extraction. For the proposition extraction of the simple sentences they used ClausIE and OpenIE. They evaluated their data using three different proposition datasets. The correctness of the extracted proposition from the original sentence were evaluated manually. In their study, simple sentences were sentences without conjunctions. Quirk (1985) defines a simple sentence as a sentence consisting of exactly one independent clause that does not contain any further clause as one of its elements. Hence, a complex sentence consists of more than one clause. This is also the definition that we use in our study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 31, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 707, |
|
"end": 719, |
|
"text": "Quirk (1985)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Propositions from Simple Sentences", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Recent work used crowdsourcing for creating and evaluating proposition extraction FitzGerald et al., 2018) in the setting of question answering. In short, they asked their crowdworkers to produce questions and answers in a way that resulted in the extraction of their predicates and arguments, without directly asking for predicate-argument structures.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 106, |
|
"text": "FitzGerald et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing Gold Standard Propositions", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We create a corpus to evaluate the performance of proposition extraction systems entangled with and disentangled from the task of clause splitting. Our source corpus is the portion of the Aspect Based Sentiment Analysis (ABSA) task (Pontiki et al., 2014) concerned with restaurant reviews within one aspect -service. We use all 423 sentences that were annotated with this aspect. In a preliminary step, we produce a corpus of reduced sentences. To examine the influence of sentence complexity, we classify the reduced sentences as either 1) simple sentences, meaning sentences with potentially just one proposition, and 2) complex sentences, meaning sentences with potentially multiple propositions. Then, we produce propositions from the reduced sentences using expert annotation and evaluate it by calculating the inter-annotator agreement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 254, |
|
"text": "(Pontiki et al., 2014)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus Creation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our corpus contains 2,181 sentences (class distribution in Table 2 ) and 2,526 propositions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 66, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus Creation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a preliminary step, we created a gold corpus of reduced sentences formed from originally more complex sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To do so, we use 423 sentences from review texts 4 . As these are quite difficult for producing propositions, even for humans, we included a preliminary step of creating reduced sentences. A reduced sentence is a sentence that contains only a portion of the original sentence, e.g. the original sentence \"The server was cool and served food and drinks\" could be reduced to \"The server was cool\" or \"The server served food\". The intention behind this step was to create sentences with one proposition only. Hence, the guidelines contained rules such as decomposing conjunctive sentences or creating independent sentences from relative clauses. 5 We perform this preliminary step via crowdsourcing and evaluate it qualitatively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 643, |
|
"end": 644, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Definition of Reduced Sentences We instructed our workers to produce reduced sentences from the original sentence. To prevent nested structures, a reduced sentence was not allowed to be split in further reduced sentences, at least within the output of one worker. 6 Ideally, the crowdworkers could have created sentences that contain exactly one proposition. However, this might even be a difficult task for experts, as there are non-trivial sentence constructions that would need long guidelines to create sentences with exactly one proposition. However, our guidelines insured that sentences were reduced in comparison to the original version, if possible. In this way, we are able to create a sufficiently big set of both simple and more complex sentences, as shown in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 772, |
|
"end": 779, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Crowdsourcing We used Amazon Turk for crowdsourcing our data. crowdsourced gold data for evaluating propositions. The sentence reduction performed here and also in Saha et al. (2018) is very similar to syntactic sentence simplification as performed by Lee and Don (2017) . We paid 0.04 $ per HIT and 0.01 $ for each further reduced sentence. Each sentence was reduced by 3 workers. In this process, 2181 unique reduced sentences, which are all used in the following corpus creation process, were created from 423 original sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 182, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 252, |
|
"end": 270, |
|
"text": "Lee and Don (2017)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Evaluation of Reduced Sentences To measure the quality of the crowdsourced reduced sentences, we chose 100 random reduced sentences together with their original sentence and evalu- Table 3b ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 189, |
|
"text": "Table 3b", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In Table 3a , we provide an exemplary sentence for each category, except for ORIGINALSIMPLE, as it means that the original is already a simple sentence, containing only one proposition which cannot be further reduced. 20 sentences in the random sample were categorized as being ORIGINALSIM-PLE. However, some workers still tried to reduce some of these sentences -2 of them were grammatically incorrect (GRAMMAR) and 3 fell into the class INFERENCE. This means that their content was not explicitly mentioned in the original sentence, but was lexically inferred. There were 66 REDUCED sentences, meaning that the sentences have been successfully reduced. 60 of the REDUCED resulted in SIMPLE sentences, which means that they contained only one proposition after the reduction, and 6 were simpler than the original sentence, but contained more than one proposition. We believe that the results are usable as is, as the error rate is quite low -only 17 of the reduced sentences in the random sample were incorrect (GRAMMAR and INFERENCE), as many of the GRAMMAR errors stem from the original sentence. Furthermore, we show that our reduction step was necessary to produce enough simple sentences for our experiment, as 80% of the random sample were originally complex.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Table 3a", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Preliminary Step: Creating Reduced Sentences", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To evaluate the performance of proposition extraction systems, we created a gold standard corpus for propositions from the reduced sentences. In this paper, we follow the most simple possible annotation, similar to .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Propositions from Simple Sentences", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We want to extract English propositions with one main verb and all arguments that are linked to it. In our notation, the first position of the proposition is the subject, the second is the predicate and the order of the other elements is irrelevant. 7 The arguments may also contain further propositions, e.g. here, the sentence \"I think their food is great\" is split in two propositions -\"I | think | their food is great\" and \"their food | is | great \". This definition is restrictive in that it asks for exactly two propositions in the given example. Additionally, it is not bound to a clearly defined theory (as there is no clearly defined theory on propositions). However, it is the representation that is needed to extract information from reviews, as it would help to reduce redundancies, e.g. by clustering sentences such as \"Their food is great\" and \"I think their food is great\". Furthermore, we are not interested in inferred information, e.g. \"They | have | food\" from the previously discussed sentence. This choice will also be reflected in the performance of systems that do not adhere to our understanding of propositions. However, this does not necessarily cloud the performance comparison of simple and complex sentences, as we will still measure the influence of sentence complexity. Each sentence is processed by two annotators and the disagreements are curated in a subsequent step.", |
|
"cite_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 251, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Propositions from Simple Sentences", |
|
"sec_num": "3.2" |
|
}, |
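
{

"text": "Editor's sketch (an assumption, not the authors' code): the notation described above can be modelled with tuples whose argument slots may themselves correspond to further propositions. For the example sentence this yields two propositions, the outer one embedding the content of the inner one as an argument.\n\nsentence = 'I think their food is great'\ninner = ('their food', 'is', 'great')          # nested proposition\nouter = ('I', 'think', 'their food is great')  # its third slot corresponds to the inner proposition\npropositions = [outer, inner]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Creating Propositions from Simple Sentences",

"sec_num": "3.2"

},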
|
{ |
|
"text": "Creation As the creation of propositions is not a trivial task, due to many different cases that need to be explained in the guidelines 8 , this task should be performed by people who were trained longer than a crowdsourcing platform allows for. Thus, we produced proposition annotations in a doubleannotation process by three graduate students 9 . The disagreements were curated by the first author of the paper. The result of the curation builds the gold standard. The gold standard, all annotations, and the guidelines are available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating Propositions from Simple Sentences", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "To evaluate our dataset, we report inter-annotator agreement as well as agreement with the curator", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Proposition Creation", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "[Table 3a: exemplary sentences per category. Original sentence: \"The server was cool and served food and drinks.\"; REDUCED: \"The server was cool and served food.\"; SIMPLE: \"The server was cool.\"; GRAMMAR: \"The server was.\"; INFERENCE: \"The server is good.\"]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preliminary Step: Creating Reduced Sentences",

"sec_num": "3.1"

},
|
{ |
|
"text": "Evaluation Metric In order to see differences in the annotation, we performed inter-annotator agreement using %-agreement (accuracy). We use the same measure for system performance, which enables a direct comparison. Although we are aware that agreement is ignorant of chance agreement, we believe that it is the best measure for this problem, as chance agreement is quite low in the case of this complex annotation problem. Furthermore, it is difficult to interpret these results in comparison to other works. As previously described, there are no clear guidelines for propositions and also no manual gold datasets created explicitly for this purpose. We could compare the results of our inter-annotator agreement to similar tasks, where sentences are split into components, as e.g. answers prepared for question answering, paraphrase alignment, translation alignment etc. However, they also have different setups and evaluation metrics and it is out of the scope of this work to discuss these differences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INFERENCE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Levels of Evaluation We perform the evaluation on two levels -proposition level and proposition element level. On the proposition level, we calculate the agreement of whole propositions. On the proposition element level we calculate the agreement of individual elements of the propositions whilst taking their label (subject, predicate, or other element) into account.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INFERENCE", |
|
"sec_num": null |
|
}, |
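
{

"text": "Editor's sketch (one possible operationalisation, not the authors' code): %-agreement on the two levels described above, counting a proposition as matched only if the whole tuple is identical (proposition level), or comparing the labelled elements subject, predicate, and other individually (proposition element level).\n\ndef proposition_agreement(gold, predicted):\n    # share of gold propositions reproduced exactly as whole tuples\n    return sum(1 for p in gold if p in predicted) / len(gold) if gold else 0.0\n\ndef element_agreement(gold, predicted):\n    # compare labelled elements (subject, predicate, other) individually\n    def elements(props):\n        out = []\n        for subj, verb, *others in props:\n            out += [('subject', subj), ('predicate', verb)] + [('other', o) for o in others]\n        return out\n    g, p = elements(gold), elements(predicted)\n    return sum(1 for e in g if e in p) / len(g) if g else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "INFERENCE",

"sec_num": null

},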
|
{ |
|
"text": "Inter-annotator Agreement Table 4a shows that the inter-annotator agreement on the proposition level is .39 and .53 on complex sentences and .61 and .71 on simple sentences. These agreement differences show that clause splitting is also difficult for humans.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 34, |
|
"text": "Table 4a", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "INFERENCE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Agreement with Curator The agreement with the curator is .05 to .19 higher than the interannotator agreement. The agreement on the proposition element level is .67 and .7 on complex sentences and .83 and .85 for simple sentencesnearly double of the whole proposition agreement. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INFERENCE", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To identify the system that performs best when disentangled from the task of clause splitting, we use the herein produced corpus to analyze and evaluate the performance of various proposition extraction systems as used in evaluations by , Gashteovski et al. (2017) , Saha et al. (2018) , and . Hence, we will analyze proposition extraction performance using AllenNLP, ClausIE, ReVerb, Stanford Open Information Extraction, OLLIE, and OpenIE-5. 10 Furthermore, we provide two baseline systems. We use %-agreement to measure the performance of systems. We want full agreement, not just matching phrase heads, as performed by . Furthermore, we evaluate only agreement, as in our setup the argument or the predicate matching is what we are interested in, meaning we do not need precision and recall in our setting. In this way, our evaluation setup is similar to Saha et al. (2018) , who also identified specific issues in proposition extraction systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 264, |
|
"text": "Gashteovski et al. (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 285, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 444, |
|
"end": 446, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 859, |
|
"end": 877, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As in inter-annotator agreement, we calculate agreement on two levels: proposition and proposition element level. The results of the performance comparison is shown in Table 5 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 175, |
|
"text": "Table 5", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We provide two baselines in order to better compare the systems. Both baselines create propositions with three elements at most: subject, predicate, and one other element. The first baseline (BL1) takes the first word as subject, the second word as predicate and the rest as one other element. The second baseline (BL2) is a little more engineered and uses POS-tags. It makes a proposition for each verb. All words before the verb are the subject and all words after the verb are one other element. Examples for the baselines are shown in the Table 1. The baselines are kept simple on purpose to show how simple algorithms can solve the given problem. A baseline that appears intuitive is using a dependency parser and filtering for the root and its dependants. However, deciding which parts are its dependents and especially the span of arguments is ambiguous. This would not be a baseline, it would be a rule-based system that is not out of the box. Hence, we decided not to do it. Table 5a shows that performance of proposition extraction on whole propositions is equally bad for both simple and complex sentences. Table 5b shows that performance on proposition elements is much better than on proposition level. Furthermore, the table shows that for all systems but Re-Verb, the performance is much better on the simple sentences, which was expected.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 984, |
|
"end": 992, |
|
"text": "Table 5a", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 1118, |
|
"end": 1126, |
|
"text": "Table 5b", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baselines", |
|
"sec_num": "4.2" |
|
}, |
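
{

"text": "Editor's sketch (an assumption, not the authors' code): a minimal Python rendering of the two baselines described above, assuming whitespace tokenisation and using NLTK's pos_tag as a stand-in for whichever POS tagger was actually used.\n\nfrom nltk import pos_tag  # assumed tagger; the paper does not specify one\n\ndef bl1(tokens):\n    # BL1: first token = subject, second token = predicate, remainder = one other element\n    return (tokens[0], tokens[1], ' '.join(tokens[2:])) if len(tokens) >= 2 else None\n\ndef bl2(tokens):\n    # BL2: one proposition per verb; all words before the verb form the subject,\n    # all words after it form one other element\n    return [(' '.join(tokens[:i]), tok, ' '.join(tokens[i + 1:]))\n            for i, (tok, tag) in enumerate(pos_tag(tokens))\n            if tag.startswith('VB')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baselines",

"sec_num": "4.2"

},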
|
{ |
|
"text": "It is also interesting that although the performance of both baselines on whole propositions is 0, the performance of the second baseline on proposition elements is competitive. This shows, that the task of proposition extraction can, to a big part, be solved by correct verb extraction. It outperforms ReVerb, Stanford, and on simple and complex sentences also OLLIE. The second baseline performs a little worse on all sentences, as these also include sentences without a verb and this baseline is verb-based. This shows that either the automatic systems have problems with the extraction of verbs or they have deeper issues, e.g. they do not extract from a lot of sentences, as is discussed in Section 4.4.1. The second baseline performs almost equally on both simple and complex sentences. This may show correct verb extraction alone solves only a particular portion of proposition extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Performance", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Other systems, especially the two best ones, perform about two times better on the simple sentences but then have a much bigger drop on the complex sentences. This may show that clause splitting has a bigger impact on better or probably more intelligent systems than on more simple systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Performance", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "On both levels, OpenIE is the best system, very closely followed by Allen, whereas the other systems are well-beaten.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System Performance", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Identifying further problems except clause splitting could improve current proposition extraction systems. On the one hand there are sub-issues in clause splitting. On the other hand, there are issues besides clause splitting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of System Performance", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In the case of ClausIE and ReVerb, many further clauses and also arguments are cut, as these consist of a maximum of three elements, which makes the comparison difficult.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of System Performance", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We first manually examined some potential issues in the proposition extraction from simple sentences. After the manual analysis of potential issues, we calculated the system performance if the issue would be eliminated. One big issue we found is missing propositions, meaning that systems do not always extract propositions. Except for the missing propositions, there was no big difference in the system performance with or without the issue. Also, some systems have different models of propositions, which may also affect their performance. On the one hand, there are issues with previous steps, e.g. negations or quantifiers are ignored. On the other hand, there are issues with formatting, e.g. a different treatment of prepositions or conditionals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Issues", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Missing Propositions One big issue is that proposition extraction systems often do not produce any extraction from a sentence. Unsurprisingly, this issue is bigger among the systems that do not perform well -namely ReVerb (58% of sentences do not have an extraction), Stanford (39%), and OLLIE (33%), whereas the better performing systems have much lower rates -Allen (3%), ClausIE (4%), and OpenIE (10%). In ReVerb, Stanford, and OLLIE we could not find a clear reason why there are no extractions. In the case of Allen, there are only no extractions from sentences without verbs. 11 ClausIE and OpenIE have no extractions from sentences that are missing a verb or a subject. Additionally, OpenIE has no extractions from existential clauses. In Table 6a , where we show the performances of systems on full propositions without the discussed issues, it is shown that systems perform slightly better when eliminating missing propositions from simple sentences. However, the improvement is clearer in Table 6b on the element level. Especially for the systems that had more missing propositions, namely Stanford, ReVerb, and OLLIE, the change is between .06 -.17. Conjunctions As already stated by Saha et al. (2018) , conjunctive sentences pose an issue to proposition extraction systems. In our case, we wanted to separate all conjunctive sentences in individual propositions, e.g. the sentence \"The waitress smiled at her friend and at me.\" contains the propositions \"The waitress | smiled | at her friend\" and \"The waitress | smiled | at me.\". OpenIE and Stanford have the same guidelines on conjunctions, whereas Allen, ClausIE, and ReVerb keep the conjuncted elements together -from the previous sentence they would create one proposition -\"The waitress | smiled | at her friend and me.\". Negations Stanford does not extract from negated sentences and Allen has problems with negated sentences missing a verb. The rest can deal with negations. These specific problems are difficult to show in numbers, as they are rare -only about 7% of the sentences contained negations. Prepositions OLLIE, ReVerb, and Stanford place the prepositions with the predicate, whereas all other systems as well as our gold standard place it with the associated argument, as is shown in the example in Table 1 . For these cases we would need adjusted evaluations that ignore this difference. Quantifiers Stanford ignores \"every\" in propositions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1195, |
|
"end": 1213, |
|
"text": "Saha et al. (2018)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 746, |
|
"end": 754, |
|
"text": "Table 6a", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 999, |
|
"end": 1007, |
|
"text": "Table 6b", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 2283, |
|
"end": 2290, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "General Issues", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "We looked at issues within complex clauses, namely conditional and temporal clauses. Conditional Clauses In some cases, Allen, ClausIE, OLLIE, and OpenIE extract the if-clause for the argument, but delete the \"if\", which leads to disagreements on both full proposition and proposition element level. Comparing the performance on all complex clauses as shown in Table 5a to complex clauses without conditional clauses, as shown in Table 6a , all systems, except for Re-Verb and Stanford, clearly perform better. Allen is better by .04 and OpenIE by .05, which shows that they have the biggest issues with conditional clauses. On proposition element level this becomes even clearer. Here, the three better systems, ClausIE, Allen, and OpenIE perform .04 -.17 better without conditional clauses. Temporal Clauses Conceptually, Allen, OLLIE, and OpenIE extract temporal clauses correctly, but have some problems if the sentence is too long. Stanford cuts out the \"when\". For temporal clauses, the performance is similar to conditional clauses. The three better systems perform .06 -.11 better on full proposition level, and .02-.09 better on proposition element level. Stanford and OLLIE perform worse without the temporal clauses.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 361, |
|
"end": 369, |
|
"text": "Table 5a", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 438, |
|
"text": "Table 6a", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Issues with Complex Sentences", |
|
"sec_num": "4.4.2" |
|
}, |
|
{ |
|
"text": "In this work, we described a method on how to create a dataset of reduced sentences from originally complex ones. We created an English dataset according to this method and further classified this dataset as simple and complex. It can be used for further evaluation of proposition extraction systems. The dataset enabled us to research the performance of proposition extraction detached from the task of clause splitting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "On the one hand, we showed that sentence complexity has a measurable impact on proposition extraction performance of both humans and machines. Hence, one step towards improving the performance of such systems, is the improvement of clause splitting. Furthermore, we believe that the performance of the original complex sentences, without the preliminary reduction step, would pose an even bigger problem to proposition systems, which implies that using these systems on real data could be problematic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "On the other hand, our study also showed that the ranking of systems is similar among simple and complex sentences. This means, that the best performing systems among simple sentences that are disentangled from the task of clause splitting, are also the best in complex sentences, where clause splitting also needs to be performed. This may mean that to find the overall best system, one does not need to classify between simple and complex sentences. However, it is necessary to find that sentence complexity is one problem of proposition extraction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Also, our intelligent baseline system, that was able to extract verbs, outperformed three of the systems. However, the better systems did not only perform much better, but they were also more affected by sentence complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Additionally, we looked into further problems of proposition extraction systems. The main issues in complex sentences that we could identify were conditional and temporal clauses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Summary", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In future work, we plan to enlarge the corpus in order to use it for studies on user-specific recommendations. We plan to display proposition-like information to the user to provide more specific information than is given by a long sentence. This work may help in clause splitting, as we not only provide a gold standard for it, but also describe a method on how to create it. Furthermore, we plan to built a proposition extraction system based on the findings from this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/MeDarina/review_ propositions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In pragmatics, hedging is a textual construction that lessens the impact of an utterance. It is often expressed through modal verbs, adjectives, or adverbs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://knowitall.github.io/openie/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Online users' restaurant reviews are a fruitful domain for proposition extraction, as propositions extracted from reviews would be useful for several user-centered tasks, as they would allow to display only information pieces of interest.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, this step turned out to be more difficult than expected, as some sentences contained several factors that could be reduced. However, this did not influence our goal of determining the influence of sentence complexity.6 The annotation instructions are also available on our Github page.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We are not interested in different types of objects and modifiers, similar to Stanford, OpenIE, and AllenNLP, and thus we do not discuss this information. For a better overview, we asked the annotators to present the other elements in their order of occurrence.8 The guidelines include explanations of what predicates, arguments, and nested propositions are. This in itself is not difficult. However, such instructions consume more time and need more training, as simple mistakes are made by untrained annotators. We saw this in a training set for this task, that is not included or discussed here due to space restrictions.9 The result is shown inTable 4. A1 annotated the whole set, while A2 and A3 annotated parts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We will not use MinIE(Gashteovski et al., 2017), as it is an extension of ClausIE providing additional information such as modality and whether an argument is necessary or unnecessary, which is disregarded in this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These sentences are classified as neither simple nor complex, but are included in all.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Venelin Kovatchev and Marie Bexte for their annotations. This work has been funded by Deutsche Forschungsgemeinschaft within the project ASSURE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Leveraging Linguistic Structure For Open Domain Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin Jose Johnson", |
|
"middle": [], |
|
"last": "Premkumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "344--354", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging Linguis- tic Structure For Open Domain Information Extrac- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 344-354.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "ClausIE: Clause-Based Open Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Luciano", |
|
"middle": [], |
|
"last": "Del Corro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rainer", |
|
"middle": [], |
|
"last": "Gemulla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "355--366", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: Clause-Based Open Information Extrac- tion. In Proceedings of the 22nd international con- ference on World Wide Web, pages 355-366. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Knowledge vault: A web-scale approach to probabilistic knowledge fusion", |
|
"authors": [ |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evgeniy", |
|
"middle": [], |
|
"last": "Gabrilovich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geremy", |
|
"middle": [], |
|
"last": "Heitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wilko", |
|
"middle": [], |
|
"last": "Horn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ni", |
|
"middle": [], |
|
"last": "Lao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Murphy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Strohmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaohua", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "601--610", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowl- edge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601-610. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Neural Network for Coordination Boundary Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Ficler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jessica Ficler and Yoav Goldberg. 2016. A Neural Net- work for Coordination Boundary Prediction. In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 23-32.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Minie: minimizing facts in open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Kiril", |
|
"middle": [], |
|
"last": "Gashteovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rainer", |
|
"middle": [], |
|
"last": "Gemulla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luciano", |
|
"middle": [ |
|
"Del" |
|
], |
|
"last": "Corro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2630--2640", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kiril Gashteovski, Rainer Gemulla, and Luciano Del Corro. 2017. Minie: minimizing facts in open information extraction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2630-2640.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Approaches for information retrieval in legal documents", |
|
"authors": [ |
|
{ |
|
"first": "Rachayita", |
|
"middle": [], |
|
"last": "Giri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yosha", |
|
"middle": [], |
|
"last": "Porwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vaibhavi", |
|
"middle": [], |
|
"last": "Shukla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Palak", |
|
"middle": [], |
|
"last": "Chadha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Kaushal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Contemporary Computing (IC3), 2017 Tenth International Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rachayita Giri, Yosha Porwal, Vaibhavi Shukla, Palak Chadha, and Rishabh Kaushal. 2017. Approaches for information retrieval in legal documents. In Con- temporary Computing (IC3), 2017 Tenth Interna- tional Conference on, pages 1-6. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Answering Complex Questions Using Open Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Tushar", |
|
"middle": [], |
|
"last": "Khot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Sabharwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "311--316", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering Complex Questions Using Open Infor- mation Extraction. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics, volume 2, pages 311-316.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Splitting Complex English Sentences", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Buddhika K Pathirage Don", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th International Conference on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Lee and J Buddhika K Pathirage Don. 2017. Splitting Complex English Sentences. In Proceed- ings of the 15th International Conference on Parsing Technologies, pages 50-55.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The GoOlap Fact Retrieval Framework", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "L\u00f6ser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tillmann", |
|
"middle": [], |
|
"last": "Fiehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "European Business Intelligence Summer School", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--97", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander L\u00f6ser, Sebastian Arnold, and Tillmann Fiehn. 2011. The GoOlap Fact Retrieval Frame- work. In European Business Intelligence Summer School, pages 84-97. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The Penn Treebank: annotating predicate argument structure", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"Ann" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Macintyre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Bies", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Ferguson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Britta", |
|
"middle": [], |
|
"last": "Schasberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "114--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schas- berger. 1994. The Penn Treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology, pages 114-119. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Open language learning for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Mausam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Schmitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Bart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "523--534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523-534. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Description of SQUASH, the SFU Question Answering Summary Handler for the DUC-2006 Summarization Task", |
|
"authors": [ |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Melli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongmin", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yudong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fred", |
|
"middle": [], |
|
"last": "Popowich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 6th Document Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabor Melli, Zhongmin Shi, Yang Wang, Yudong Liu, Anoop Sarkar, and Fred Popowich. 2006. Descrip- tion of SQUASH, the SFU Question Answering Summary Handler for the DUC-2006 Summariza- tion Task. In Proceedings of the 6th Document Un- derstanding Conference (DUC 2006).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Crowdsourcing Question-Answer Meaning Representations", |
|
"authors": [ |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "560--568", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer. 2018. Crowdsourc- ing Question-Answer Meaning Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, volume 2, pages 560-568.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A survey on open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Christina", |
|
"middle": [], |
|
"last": "Niklaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Cetto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9", |
|
"middle": [], |
|
"last": "Freitas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siegfried", |
|
"middle": [], |
|
"last": "Handschuh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3866--3878", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christina Niklaus, Matthias Cetto, Andr\u00e9 Freitas, and Siegfried Handschuh. 2018. A survey on open infor- mation extraction. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 3866-3878.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "CORE: Context-Aware Open Relation Extraction with Factorization Machines", |
|
"authors": [ |
|
{ |
|
"first": "Fabio", |
|
"middle": [], |
|
"last": "Petroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luciano", |
|
"middle": [ |
|
"Del" |
|
], |
|
"last": "Corro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rainer", |
|
"middle": [], |
|
"last": "Gemulla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1763--1773", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabio Petroni, Luciano Del Corro, and Rainer Gemulla. 2015. CORE: Context-Aware Open Relation Ex- traction with Factorization Machines. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1763-1773.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Semeval-2014 task 4: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Galanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ion", |
|
"middle": [], |
|
"last": "Androutsopoulos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suresh", |
|
"middle": [], |
|
"last": "Manandhar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 10th international workshop on semantic evaluation (SemEval-2014)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment anal- ysis. In Proceedings of the 10th international work- shop on semantic evaluation (SemEval-2014), pages 27-35.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A grammar of contemporary English", |
|
"authors": [ |
|
{ |
|
"first": "Randolph", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1985, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Randolph Quirk. 1985. A grammar of contemporary English, 11. impression edition. Longman, London.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Relation extraction with matrix factorization and universal schemas", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin M", |
|
"middle": [], |
|
"last": "Marlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74-84.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Open Information Extraction from Conjunctive Sentences", |
|
"authors": [ |
|
{ |
|
"first": "Swarnadeep", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2288--2299", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Swarnadeep Saha et al. 2018. Open Information Ex- traction from Conjunctive Sentences. In Proceed- ings of the 27th International Conference on Com- putational Linguistics, pages 2288-2299.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Analysing errors of open information extraction systems", |
|
"authors": [ |
|
{ |
|
"first": "Rudolf", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Oberhauser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Klatt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Gers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "L\u00f6ser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Building Linguistically Generalizable NLP Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "11--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rudolf Schneider, Tom Oberhauser, Tobias Klatt, Fe- lix A Gers, and Alexander L\u00f6ser. 2017. Analysing errors of open information extraction systems. In Proceedings of the First Workshop on Building Lin- guistically Generalizable NLP Systems, pages 11- 18.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Creating a Large Benchmark for Ipen Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2300--2305", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Stanovsky and Ido Dagan. 2016. Creating a Large Benchmark for Ipen Information Extraction. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2300-2305.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Open IE as an intermediate structure for semantic tasks", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ido Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "303--308", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Stanovsky, Ido Dagan, et al. 2015. Open IE as an intermediate structure for semantic tasks. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing, volume 2, pages 303-308.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Getting More Out Of Syntax with PROPS", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Ficler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.01648" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting More Out Of Syntax with PROPS. arXiv preprint arXiv:1603.01648.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Supervised open information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Stanovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "885--895", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 885-895.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Example Sentence and Extracted Proposition" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Output of Proposition Extraction Systems</td></tr><tr><td>and Our Two Baselines for the Sentence The waitress</td></tr><tr><td>smiled at her friend now</td></tr><tr><td>Gemulla, 2013), and Stanford OIE (Angeli et al.,</td></tr><tr><td>2015) against their own system.</td></tr><tr><td>Stanovsky et al. (2018) evaluates ClausIE,</td></tr><tr><td>PropS</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Distribution of Sentence Complexity Classes</td></tr><tr><td>in Our Reduced Sentence Set</td></tr><tr><td>ated their correctness using the following non-</td></tr><tr><td>exclusive categories: ORIGINALSIMPLE, RE-</td></tr><tr><td>DUCED, SIMPLE, GRAMMAR, and INFERENCE</td></tr><tr><td>(see</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Classification of Reduced Sentences", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Inter-Annotator Agreement in Accuracy</td></tr><tr><td>4 Evaluation of Proposition Extraction</td></tr><tr><td>Systems</td></tr><tr><td>Similar to Saha et al. (2018); Schneider et al.</td></tr><tr><td>(2017) and Niklaus et al. (2018), we evaluate</td></tr><tr><td>proposition system performance. They do not,</td></tr><tr><td>however, regard the task of proposition extraction</td></tr><tr><td>disentangled from the intrinsic subtask of clause</td></tr><tr><td>splitting. By showing the performance of both</td></tr><tr><td>simple and complex sentences, we are furthermore</td></tr><tr><td>able to show the impact of clause splitting.</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "System Performance Measured in Accuracy", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"4\">Systems Missing Conditional Temporal</td></tr><tr><td>Allen</td><td>.08</td><td>.13</td><td>.19</td></tr><tr><td>ClausIE</td><td>.06</td><td>.11</td><td>.13</td></tr><tr><td>ReVerb</td><td>.03</td><td>.00</td><td>.03</td></tr><tr><td>Stanford</td><td>.02</td><td>.00</td><td>.00</td></tr><tr><td>OLLIE</td><td>.04</td><td>.06</td><td>.02</td></tr><tr><td>OpenIE</td><td>.10</td><td>.19</td><td>.17</td></tr><tr><td colspan=\"4\">(a) System Performance on Propositions Excluding Specific Is-</td></tr><tr><td>sues</td><td/><td/><td/></tr><tr><td colspan=\"4\">Systems Missing Conditional Temporal</td></tr><tr><td>Allen</td><td>.50</td><td>.57</td><td>.55</td></tr><tr><td>ClausIE</td><td>.38</td><td>.40</td><td>.38</td></tr><tr><td>Stanford</td><td>.26</td><td>.03</td><td>.14</td></tr><tr><td>ReVerb</td><td>.32</td><td>.00</td><td>.21</td></tr><tr><td>OLLIE</td><td>.31</td><td>.00</td><td>.20</td></tr><tr><td>OpenIE</td><td>.54</td><td>.53</td><td>.50</td></tr><tr><td colspan=\"4\">(b) System Performance on Proposition Elements Excluding</td></tr><tr><td>Specific Issues</td><td/><td/><td/></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |