|
{ |
|
"paper_id": "S18-1010", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:44:37.296590Z" |
|
}, |
|
"title": "KOI at SemEval-2018 Task 5: Building Knowledge Graph of Incidents", |
|
"authors": [ |
|
{ |
|
"first": "Paramita", |
|
"middle": [], |
|
"last": "Mirza", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Fariz", |
|
"middle": [], |
|
"last": "Darari", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universitas Indonesia", |
|
"location": { |
|
"country": "Indonesia" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Rahmad", |
|
"middle": [], |
|
"last": "Mahendra", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universitas Indonesia", |
|
"location": { |
|
"country": "Indonesia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present KOI (Knowledge of Incidents), a system that given news articles as input, builds a knowledge graph (KOI-KG) of incidental events. KOI-KG can then be used to efficiently answer questions such as \"How many killing incidents happened in 2017 that involve Sean?\" The required steps in building the KG include: (i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling; (ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population.", |
|
"pdf_parse": { |
|
"paper_id": "S18-1010", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present KOI (Knowledge of Incidents), a system that given news articles as input, builds a knowledge graph (KOI-KG) of incidental events. KOI-KG can then be used to efficiently answer questions such as \"How many killing incidents happened in 2017 that involve Sean?\" The required steps in building the KG include: (i) document preprocessing involving word sense disambiguation, named-entity recognition, temporal expression recognition and normalization, and semantic role labeling; (ii) incidental event extraction and coreference resolution via document clustering; and (iii) KG construction and population.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "SemEval-2018 1 Task 5: Counting Events and Participants in the Long Tail (Postma et al., 2018) addresses the problem of referential quantification that requires a system to answer numerical questions about events such as (i) \"How many killing incidents happened in June 2016 in San Antonio, Texas?\" or (ii) \"How many people were killed in June 2016 in San Antonio, Texas?\" Subtasks S1 and S2 For questions of type (i), which are asked by the first two subtasks, participating systems must be able to identify the type (e.g., killing, injuring), time, location and participants of each event occurring in a given news article, and establish within-and cross-document event coreference links. Subtask S1 focuses on evaluating systems' performances on identifying answer incidents, i.e., events whose properties fit the constraints of the questions, by making sure that there is only one answer incident per question.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 94, |
|
"text": "(Postma et al., 2018)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to answer questions of type (ii), participating systems are also required to identify participant roles in each identified answer incident (e.g., victim, subject-suspect), and use such information along with victim-related numerals (\"three people were killed\") mentioned in the corresponding answer documents, i.e., documents that report on the answer incident, to determine the total number of victims.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask S3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Datasets The organizers released two datasets: (i) test data, stemming from three domains of gun violence, fire disasters and business, and (ii) trial data, covering only the gun violence domain. Each dataset contains (i) an input document (in CoNLL format) that comprises news articles, and (ii) a set of questions (in JSON format) to evaluate the participating systems. 2 This paper describes the KOI (Knowledge of Incidents) system submitted to SemEval-2018 Task 5, which constructs and populates a knowledge graph of incidental events mentioned in news articles, to be used to retrieve answer incidents and answer documents given numerical questions about events. We propose a fully unsupervised approach to identify events and their properties in news texts, and to resolve within-and crossdocument event coreference, which will be detailed in the following section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 373, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask S3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given an input document in CoNLL format (one token per line), for each news article, we first split the sentences following the annotation of: (i) whether a token is part of the article title or content; (ii) sentence identifier; and (iii) whether a to-ken is a newline character. We then ran several tools on the tokenized sentences to obtain the following NLP annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Preprocessing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Word sense disambiguation (WSD) We ran Babelfy 3 (Moro et al., 2014) to get disambiguated concepts (excluding stop-words), which can be multi-word expressions, e.g., gunshot wound. Each concept is linked to a sense in Babel-Net 4 (Navigli and Ponzetto, 2012) , which subsequently is also linked to a WordNet sense and a DBpedia entity (if any).", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 68, |
|
"text": "(Moro et al., 2014)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 258, |
|
"text": "(Navigli and Ponzetto, 2012)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Preprocessing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Named-entity recognition (NER) We relied on spaCy 5 for a statistical entity recognition, specifically for identifying persons and geopolitical entities (countries, cities, and states).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Preprocessing", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We used HeidelTime 6 (Str\u00f6tgen and Gertz, 2013) for recognizing textual spans that indicate time, e.g., this Monday, and normalizing the time expressions according to a given document creation time, e.g., 2018-03-05.", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 47, |
|
"text": "(Str\u00f6tgen and Gertz, 2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Time expression recognition and normalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Semantic role labeling (SRL) Senna 7 (Collobert et al., 2011) was used to run semantic parsing on the input text, for identifying sentence-level events (i.e., predicates) and their participants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Time expression recognition and normalization", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Identifying document-level events Sentencelevel events, i.e., predicates recognized by the SRL tool, were considered as the candidates for the document-level events. Note that predicates containing other predicates as the patient argument, e.g., 'says' with arguments 'police' as its agent and 'one man was shot to death' as its patient, were not considered as candidate events. Given a predicate, we simultaneously determined whether it is part of document-level events and also identified its type, based on the occurrence of BabelNet concepts that are related to four event types of interest stated in the task guidelines: killing, injuring, fire burning and job firing. A predicate is automatically labeled as a sentencelevel event with one of the four types if such re-lated concepts occur either in the predicate itself or in one of its arguments. For example, a predicate 'shot', with arguments 'one man' as its patient and 'to death' as its manner, will be considered as a killing event because of the occurrence of 'death' concept. 8 Concept relatedness was computed via pathbased WordNet similarity (Hirst et al., 1998) of a given BabelNet concept, which is linked to a WordNet sense, with a predefined set of related WordNet senses for each event type (e.g., wn30:killing.n.02 and wn30:kill.v.01 for the killing event), setting 5.0 as the threshold. Related concepts were also annotated with the corresponding event types, to be used for the mention-level event coreference evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1041, |
|
"end": 1042, |
|
"text": "8", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1109, |
|
"end": 1129, |
|
"text": "(Hirst et al., 1998)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
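
{

"text": "To make the concept-relatedness step concrete, the following minimal Python sketch (an editorial illustration, not the authors' code) labels a predicate with an event type from the WordNet senses linked to its concepts. It assumes NLTK with the WordNet data installed, and it substitutes NLTK's path_similarity with an illustrative threshold of 0.2 for the Hirst and St-Onge measure and the 5.0 threshold used in the paper:\nfrom nltk.corpus import wordnet as wn\n\n# Seed WordNet senses per event type; the killing senses are the ones named in\n# the paper, and the other three event types would have analogous seed sets.\nSEED_SENSES = {'killing': ['killing.n.02', 'kill.v.01']}\nTHRESHOLD = 0.2  # illustrative only; not the paper's 5.0 (different measure scale)\n\ndef event_type_of(concept_senses):\n    # Return the first event type whose seed senses are sufficiently related\n    # to any WordNet sense linked to the predicate or its arguments.\n    for etype, seeds in SEED_SENSES.items():\n        for seed in seeds:\n            seed_syn = wn.synset(seed)\n            for sense in concept_senses:\n                sim = seed_syn.path_similarity(wn.synset(sense))\n                if sim is not None and sim >= THRESHOLD:\n                    return etype\n    return None\n\n# e.g., senses linked to the concepts of a 'shot ... to death' predicate\nprint(event_type_of(['death.n.01', 'shoot.v.01']))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction and Coreference Resolution",

"sec_num": "2.2"

},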
|
{ |
|
"text": "We then assumed all identified sentence-level events in a news article belonging to the same event type to be automatically regarded as one document-level event, meaning that each article may contain at most four document-level events (i.e., at most one event per event type).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Identifying document-level event participants Given a predicate as an identified event, its participants were simply extracted from the occurrence of named entities of type person, according to both Senna and spaCy, in the agent and patient arguments of the predicate. Furthermore, we determined the role of each participant as victim, perpetrator or other, based on its mention in the predicate. For example, if 'Randall' is mentioned as the agent argument of the predicate 'shot', then he is a perpetrator. Note that a participant can have multiple roles, as is the case for a person who kills himself.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Taking into account all participants of a set of identified events (per event type) in a news article, we extracted document-level event participants by resolving name coreference. For instance, 'Randall', 'Randall R. Coffland', and 'Randall Coffland' all refer to the same person.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
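
{

"text": "As an editorial illustration of the name-coreference step above (a simplification, not the authors' implementation), the Python sketch below merges person mentions whose token sets are compatible, so that 'Randall', 'Randall R. Coffland' and 'Randall Coffland' end up in the same group:\ndef merge_person_mentions(mentions):\n    # Greedily group mentions; a mention joins a group if its tokens are a\n    # subset of the group's longest name (ignoring periods and case).\n    groups = []  # list of (canonical_token_set, members)\n    for m in sorted(mentions, key=len, reverse=True):\n        tokens = {t.strip('.').lower() for t in m.split()}\n        for canon, members in groups:\n            if tokens <= canon:\n                members.append(m)\n                break\n        else:\n            groups.append((tokens, [m]))\n    return [members for _, members in groups]\n\nprint(merge_person_mentions(['Randall', 'Randall R. Coffland', 'Randall Coffland']))\n# [['Randall R. Coffland', 'Randall Coffland', 'Randall']]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction and Coreference Resolution",

"sec_num": "2.2"

},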
|
{ |
|
"text": "Identifying document-level number of victims For each identified predicate in a given document, we extracted the first existing numeral in the patient argument of the predicate, e.g., one in 'one man'. The normalized value of the numeral was then taken as the number of victims, as long as the predicate is not suspect-related predicates such as suspected or charged. The number of victims of document-level events is simply the maximum value of identified number of victims per predicate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
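
{

"text": "The following Python sketch is a hedged illustration (not the authors' implementation) of the per-predicate victim counting just described: take the first numeral in the patient argument, skip suspect-related predicates, and use the maximum over all predicates of a document:\nSMALL_NUMBERS = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5,\n                 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9, 'ten': 10}\nSUSPECT_PREDICATES = {'suspected', 'charged'}  # the examples given in the paper\n\ndef victims_in(predicate, patient_arg):\n    # Number of victims indicated by a single predicate and its patient argument\n    if predicate.lower() in SUSPECT_PREDICATES:\n        return 0\n    for token in patient_arg.lower().split():\n        if token.isdigit():\n            return int(token)\n        if token in SMALL_NUMBERS:\n            return SMALL_NUMBERS[token]\n    return 0\n\ndef document_victims(predicate_patient_pairs):\n    # Document-level count is the maximum of the per-predicate counts\n    return max((victims_in(p, arg) for p, arg in predicate_patient_pairs), default=0)\n\nprint(document_victims([('shot', 'one man'), ('charged', 'two men')]))  # 1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction and Coreference Resolution",

"sec_num": "2.2"

},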
|
{ |
|
"text": "Identifying document-level event locations To retrieve candidate event locations given a document, we relied on disambiguated DBpedia entities as a result of Babelfy annotation. We utilized SPARQL queries over the DBpedia SPARQL endpoint 9 to identify whether a DBpedia entity is a city or a state, and whether it is part of or located in a city or a state. Specifically, an entity is considered to be a city whenever it is of type dbo:City or its equivalent types (e.g., schema:City). Similarly, it is considered to be a state whenever it is either of type yago:WikicatStatesOfTheUnitedStates, has a senator (via the property dbp:senators), or has dbc:States of the United States as a subject.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
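
{

"text": "A minimal sketch of such a type check against the public DBpedia endpoint is given below, using the SPARQLWrapper library; the paper does not state which SPARQL client KOI uses, and restricting the check to dbo:City alone is an editorial simplification:\nfrom SPARQLWrapper import SPARQLWrapper, JSON\n\nENDPOINT = SPARQLWrapper('https://dbpedia.org/sparql')\n\ndef is_city(entity_uri):\n    # ASK whether the disambiguated DBpedia entity is typed as a city\n    ENDPOINT.setQuery('ASK { <%s> a <http://dbpedia.org/ontology/City> }' % entity_uri)\n    ENDPOINT.setReturnFormat(JSON)\n    return ENDPOINT.query().convert()['boolean']\n\nprint(is_city('http://dbpedia.org/resource/San_Antonio'))  # expected: True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction and Coreference Resolution",

"sec_num": "2.2"

},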
|
{ |
|
"text": "Assuming that document-level events identified in a given news article happen at one certain location, we simply ranked the candidate event locations, i.e., pairs of city and state, based on their frequencies, and took the one with the highest frequency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Identifying document-level event times Given a document D, suppose we have dct as the document creation time and T as a list of normalized time expressions returned by HeidelTime, whose types are either date or time. We considered a time expression t i \u2208 T as one of candidate event times T \u2286 T , if dct \u2212 t i is a non-negative integer less than n days. 10 We hypothesize that the event reported in a news article may have happened several days before the news is published.", |
|
"cite_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 356, |
|
"text": "10", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Assuming that document-level events identified in a given news article happen at one certain time, we determine which one is the document-level event time from the set of candidates T by applying two heuristics: A time expression t j \u2208 T is considered as the event time, if (i) t j is mentioned in sentences containing event-related concepts, and (ii) t j is the earliest time expression in the candidate set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Event Extraction and Coreference Resolution", |
|
"sec_num": "2.2" |
|
}, |
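
{

"text": "Putting the two heuristics together, the following Python sketch (an editorial paraphrase, with datetime objects standing in for normalized HeidelTime output) picks the document-level event time: keep candidates at most n = 7 days before the document creation time, prefer those mentioned in sentences with event-related concepts, and take the earliest:\nfrom datetime import date\n\ndef document_event_time(dct, time_mentions, n=7):\n    # time_mentions: list of (normalized_date, sentence_has_event_concept) pairs\n    candidates = [(t, flagged) for t, flagged in time_mentions\n                  if 0 <= (dct - t).days < n]\n    if not candidates:\n        return None\n    preferred = [t for t, flagged in candidates if flagged] or [t for t, _ in candidates]\n    return min(preferred)  # heuristic (ii): take the earliest remaining candidate\n\ndct = date(2016, 6, 20)\nprint(document_event_time(dct, [(date(2016, 6, 19), True), (date(2016, 6, 20), False)]))\n# 2016-06-19",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Event Extraction and Coreference Resolution",

"sec_num": "2.2"

},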
|
{ |
|
"text": "We approached cross-document event coreference by clustering similar document-level events that 9 https://dbpedia.org/sparql 10 Based on our empirical observations on the trial data we found n = 7 to be the best parameter. are of the same type, via their provenance, i.e., news articles where they were mentioned. From each news article we derived TF-IDF-based vectors of (i) BabelNet senses and (ii) spaCy's persons and geopolitical entities, which are then used to compute cosine similarities among the articles. Two news articles will be clustered together if (i) the computed similarity is above a certain threshold, which was optimized using the trial data, and (ii) the event time distance of documentlevel events found in the articles does not exceed a certain threshold, i.e., 3 days. All document-level events belonging to the same document cluster are assumed to be coreferring events and to have properties resulting from the aggregation of locations, times and participants of contributing events, with the exception of number of victims where the maximum value was taken instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cross-document event coreference resolution", |
|
"sec_num": null |
|
}, |
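
{

"text": "A minimal sketch of this document-similarity step with scikit-learn is given below; it is an editorial illustration rather than KOI's code, the pseudo-documents here are space-joined concept and entity identifiers, and the 0.5 threshold is a placeholder rather than the value tuned on the trial data:\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics.pairwise import cosine_similarity\n\ndef similar_article_pairs(pseudo_docs, threshold=0.5):\n    # pseudo_docs: one space-joined string of concept/entity identifiers per article\n    tfidf = TfidfVectorizer().fit_transform(pseudo_docs)\n    sims = cosine_similarity(tfidf)\n    pairs = []\n    for i in range(len(pseudo_docs)):\n        for j in range(i + 1, len(pseudo_docs)):\n            if sims[i, j] >= threshold:\n                pairs.append((i, j))  # candidate coreferring documents\n    return pairs\n\ndocs = ['bn:shoot bn:death San_Antonio Randall',\n        'bn:shoot San_Antonio Randall',\n        'bn:fire Houston']\nprint(similar_article_pairs(docs))  # [(0, 1)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cross-document event coreference resolution",

"sec_num": null

},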
|
{ |
|
"text": "We first built an OWL ontology 11 to capture the knowledge model of incidental events and documents. We rely on reification (Noy and Rector, 2006) for modeling entities, that is, incident events, documents, locations, participants and dates are all resources of their own. Each resource is described through its corresponding properties, as shown in Table 1 . An incident event can be of type injuring, killing, fire burning, and job firing. Documents are linked to incident events through the property event, and different documents may refer to the same corresponding incident event. We borrow URIs from DBpedia for values of the properties city and state. Participant roles can be either victim, perpetrator or other. A date has a unified literal value of the format \"yyyy-mm-dd\", as well as separated values for the day, month, and year.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 357, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "To build the KOI knowledge graph (KOI-KG) Figure 1 : A SPARQL query over KOI-KG for \"Which killing events happened in 2017 that involve persons with Sean as first name?\" we relied on Apache Jena, 12 a Java-based Semantic Web framework. The output of the previously explained event extraction and coreference resolution steps was imported into the Jena TDB triple store as RDF triples. This facilitates SPARQL querying, which can be done using the Jena ARQ module. The whole dump of KOI-KG is available for download at https://koi.cs.ui.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 50, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
|
|
{ |
|
"text": "Given a question in JSON format, we applied mapping rules to transform it into a SPARQL query, which was then used to retrieve corresponding answer incidents and answer documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Constraints of questions such as event type, participant, date, and location were mapped into SPARQL join conditions (that is, triple patterns). Figure 1 shows a SPARQL representation for the question \"Which killing events happened in 2017 that involve persons with Sean as first name?\". The prefix koi is for the KOI ontology namespace (https://koi.cs.ui.ac.id/ns#). In the SPARQL query, the join conditions are over the event type killing, the date '2017' (as year) and the participant 'Sean' (as firstname).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 153, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
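
{

"text": "Since Figure 1 is not reproduced in this text, the query below is an editorial reconstruction of what such a mapping could produce for the example question; it assumes the KOI ontology exposes properties named eventType, date, year, participant and firstname, as the surrounding description suggests, and the actual property names in KOI-KG may differ:\nQUERY = '''\nPREFIX koi: <https://koi.cs.ui.ac.id/ns#>\nSELECT DISTINCT ?incident WHERE {\n  ?incident koi:eventType koi:killing .\n  ?incident koi:date ?d .          # reified date resource\n  ?d koi:year 2017 .\n  ?incident koi:participant ?p .\n  ?p koi:firstname 'Sean' .\n}\n'''\nprint(QUERY)  # the query string can then be run over KOI-KG, e.g., via the Jena ARQ module",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Constructing, Populating and Querying the Knowledge Graph",

"sec_num": "2.3"

},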
|
{ |
|
"text": "For Subtask S2, we extended the SPARQL query with counting feature to retrieve the total number of unique events. Analogously, for Subtask S3, we retrieve number of victims by counting event participants having victim as their roles, and by getting the value of the numOfVictims property (if any). The value of the numOfVictims property was preferred as the final value for an incident if it exists, otherwise, KOI relied on counting event participants.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
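
{

"text": "As a small editorial sketch of this preference (variable names are ours, mirroring the description above), the Subtask S3 answer could be assembled per incident as follows:\ndef victims_for_incident(num_of_victims_value, participant_roles):\n    # Prefer the numOfVictims property when present; otherwise count the\n    # participants whose role set contains 'victim'.\n    if num_of_victims_value is not None:\n        return num_of_victims_value\n    return sum(1 for roles in participant_roles if 'victim' in roles)\n\nprint(victims_for_incident(None, [{'victim'}, {'perpetrator', 'victim'}, {'other'}]))  # 2\nprint(victims_for_incident(3, [{'victim'}]))  # 3",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Constructing, Populating and Querying the Knowledge Graph",

"sec_num": "2.3"

},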
|
{ |
|
"text": "We also provide a SPARQL query interface for KOI-KG at https://koi.cs.ui.ac.id/ dataset.html?tab=query&ds=/incidents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "12 http://jena.apache.org/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing, Populating and Querying the Knowledge Graph", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Evaluation results Participating systems were evaluated according to three evaluation schemes: (i) mention-level evaluation, for resolving crossdocument coreference of event mentions, (ii) document-level evaluation (doc-f1), for identifying events and their properties given a document, and (iii) incident-level evaluation, for combining event extraction and within-/cross-document event coreference resolution to answer numerical questions in terms of exact matching (inc-acc) and Root Mean Square Error (inc-rmse). Furthermore, the percentage of questions in each subtask that can be answered by the systems (%ans) also contributes to the final ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Regarding the mention-level evaluation, KOI achieves an average F1-score of 42.8% (36.3 percentage point increase over the baseline) from several established metrics for evaluating coreference resolution systems. For document-level and incident-level evaluation schemes, we report in Table 2 the performance of three different system runs of KOI:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 291, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "v1 Submitted version of KOI during the evaluation period.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "v2 Similar as v1, however, instead of giving no answers when we found no matching answer incidents, KOI simply returns zero as the numerical answer with an empty list of answer documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "v3 Submitted version of KOI during the postevaluation period, which incorporates improvement on document-level event time identification leading to enhanced crossdocument event coreference. 13 Compared to the baseline provided by the task organizers, the performance of KOI is considerably better, specifically of KOI v3 for subtask S2 with doc-f1 and inc-acc around twice as much as of the baseline. Hereafter, our quantitative and qualitative analyses are based on KOI v3, and mentions of the KOI system refer to this system run.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 192, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Subtask S1 We detail in in terms of micro-averaged and macro-averaged scores. Note that the official doc-f1 scores reported in Table 2 correspond to macro-averaged F1-scores. We first analyzed the system performance only on answered questions, i.e., for which KOI returns the relevant answer documents (55.1% of all questions), yielding 79.8% and 85.7% micro-averaged and macro-averaged F1-scores, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 134, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In order to have a fair comparison with systems that are able to answer all questions, we also report the performance of KOI that returns empty sets of answer documents for unanswered questions. In this evaluation scheme, the macro-averaged precision is significantly lower than the micro-averaged one (51.7% vs 86.6%), because systems are heavily penalized for not retrieving relevant answer documents per question, i.e., given zero precision score, which brings the average over all questions down. Meanwhile, the micro-averaged precision measures the systems' ability in returning relevant documents for all questions regardless of whether the questions were answered or not. KOI focuses on yielding high quality answer documents, which is reflected by high micro-averaged precision of above 80% in general. The following result analy- ses are based on the all questions scheme. By analyzing the document retrieval per event type, we found that KOI can identify fire burning events in documents quite well, yielding the highest recall among all event types, but the contrary for job firing events. With respect to event constraints, answering questions with location constraint results in the worst performance, meaning that our method is still lacking in identifying and/or disambiguating event locations from news documents. Specifically, questions with city constraint are more difficult to answer compared to the ones with state constraint (49.6% vs 61.5% microaveraged F1-scores, respectively).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The key differences between Subtask S1 and S2 are: (i) questions with zero as an answer are included, and (ii) there can be more than one answer incidents per question, hence, systems must be able to cluster answer documents into the correct number of clusters, i.e., incidents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask S2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As shown in Table 4 , KOI is able to answer questions with zero as the true answer with 96.3% accuracy. Meanwhile, for questions with non-zero number of incidents as the answers, KOI gives numerical answers with 18.9% accuracy, resulting in overall accuracy (inc-acc) of 27.4% and RMSE inc-rmse) of 5.3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Subtask S2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also analyzed questions (with non-zero answer incidents) for which KOI yields perfect sets of answer documents with 100% F1-score, i.e., 7.7% of all questions. For 61.8% of such answered questions, KOI returns the perfect number of inci-Event ID: 22409", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask S2", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Man playing with gun while riding in a car fatally shoots, kills driver A man was fatally shot early Sunday morning after the passenger in the car he was driving accidentally discharged the gun, according to the San Antonio Police Department. The shooting occurred about 3 a.m. when group of four men were driving out of the Iron Horse Apartments at 8800 Village Square on the Northeast Side. The passenger in the front seat was playing with a gun and allegedly shot himself in the hand, according to officers at the scene. The bullet went through his hand and struck the driver in the abdomen. The men then drove to Northeast Baptist Hospital, which was nearby, but the driver was pronounced dead at the hospital, according to investigators. Police believe the driver and passenger to be related and are still investigating the incident. The other two men in the vehicle were detained. No charges have been filed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "41-year -old man killed in overnight shooting SAN ANTONIO -A 41-year-old man is dead after a shooting police say may have been accidental. The victim died after another man drove him to Northeast Baptist Hospital for treatment of that gunshot wound. Police say they got a call at around 2:45 a.m. for the shooting in the 8800 block of Village Drive. The man told them he and the victim were in a pickup when he fired the shot, but police say it's not known why the men were in the truck. Investigators say the man told them he fired the shot accidentally and struck the victim. Police say the shooter took the victim to the emergency room at Northeast Baptist, where hospital personnel pronounced him dead. Police are questioning the man who did the shooting. Table 5 : An identified 'killing' event by KOI for \"Which killing incidents happened in June 2016 in San Antonio, Texas?\" with two supporting documents. dents. For the rest, KOI tends to overestimate the number of incidents, i.e., for 30.9% of the cases, KOI fails to establish cross-document event coreference links with the current document clustering method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 760, |
|
"end": 767, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Subtask S3 We also show in Table 4 , the KOI performance on answering numerical questions about number of victims. KOI is able to answer correctly 55.2% of questions with zero answers, and 11.9% of the ones with non-zero answers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 34, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Analyzing the questions with zero as the true answer, for which KOI is able to answer correctly, in 41.1% of the cases KOI is able to identify the non-existence of victims when the set of answer documents is not empty. In 40.0% of the cases, the correctly predicted zero answers are actually by chance, i.e., because KOI fails to identify relevant answer documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Meanwhile, for questions with gold numerical answers greater than zero, KOI returns wrong answers in 88.1% of the cases. Among these answers, 66.9% of the answers are lower than the true number of victims, and 33.1% are higher. This means that KOI tends to underestimate the number of victims with 6.6 RMSE.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For 22.5% of all questions, KOI is able to identify the perfect sets of answer documents with 100% F1-score. Among these questions, 34.3% were answered correctly with the exact number of victims, for which: 52.7% of correct answers result from solely counting participants (as victims), 35.3% were inferred only from numeral mentions, and the rest of 12.0% were answered by combining both victim counting and numeral mentions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Qualitative Analysis Recalling the example questions mentioned in the beginning of Section 1, for the first question, KOI is able to perfectly identify 2 killing incidents with 5 supporting documents pertaining to the event-time and -location constraints. One of the identified answer incidents with two supporting documents is shown in Table 5, which shows how well the system is able to establish cross-document event coreference, given overlapping concepts and entities. However, in answering the second question, KOI returns one less number of victims since it cannot identify the killed victim in the answer incident shown in Table 5, due to the lack of numeral mentions and named event participants as victims.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2016-06-19", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have introduced a system called KOI (Knowledge of Incidents), that is able to build a knowledge graph (KG) of incidental events by extracting relevant event information from news articles. The resulting KG can then be used to efficiently answer numerical questions about events such as \"How many people were killed in June 2016 in San Antonio, Texas?\" We have submitted KOI as a participating system at SemEval-2018 Task 5, which achieved competitive results. A live demo of our system is available at https://koi.cs.ui. ac.id/. Future directions of this work include the incorporation of supervised (or semi-supervised) approaches for specific steps of KOI such as the extraction of numeral information (Mirza et al., 2017) , as well as the investigation of applying our approach to other domains such as disease outbreaks and natural disasters.", |
|
"cite_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 727, |
|
"text": "(Mirza et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "https://competitions.codalab.org/ competitions/17285", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://babelfy.org/ 4 http://babelnet.org/ 5 https://spacy.io/ 6 https://github.com/HeidelTime/ heideltime 7 https://ronan.collobert.com/senna/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We assume that a predicate that is labeled as a killing event cannot be labeled as an injuring event even though an injuring-related concept such as 'shot' occurs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |

{

"text": "https://dbpedia.org/sparql 10 Based on our empirical observations on the trial data we found n = 7 to be the best parameter.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "",

"sec_num": null

},
|
{ |
|
"text": "Available at https://koi.cs.ui.ac.id/ns", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Submission v1 and v2 did not consider heuristic (i) that we have discussed in Section 2.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L\u00e9on", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lexical chains as representations of context for the detection and correction of malapropisms. WordNet: An electronic lexical database", |
|
"authors": [ |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "St-Onge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "305", |
|
"issue": "", |
|
"pages": "305--332", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Graeme Hirst, David St-Onge, et al. 1998. Lexical chains as representations of context for the detec- tion and correction of malapropisms. WordNet: An electronic lexical database, 305:305-332.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Cardinal virtues: Extracting relation cardinalities from text", |
|
"authors": [ |
|
{ |
|
"first": "Paramita", |
|
"middle": [], |
|
"last": "Mirza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Razniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fariz", |
|
"middle": [], |
|
"last": "Darari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Weikum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "347--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paramita Mirza, Simon Razniewski, Fariz Darari, and Gerhard Weikum. 2017. Cardinal virtues: Extract- ing relation cardinalities from text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 2: Short Papers, pages 347-351.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL)", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Moro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Raganato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "231--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Nav- igli. 2014. Entity Linking meets Word Sense Disam- biguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2:231-244.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simone", |
|
"middle": [ |
|
"Paolo" |
|
], |
|
"last": "Ponzetto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Artificial Intelligence", |
|
"volume": "193", |
|
"issue": "", |
|
"pages": "217--250", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual se- mantic network. Artificial Intelligence, 193:217- 250.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Defining N-ary Relations on the Semantic Web. W3C Working Group Note", |
|
"authors": [], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Natasha Noy and Alan Rector, editors. 2006. Defin- ing N-ary Relations on the Semantic Web. W3C Working Group Note. Retrieved Jan 10, 2017 from https://www.w3.org/TR/2006/NOTE-swbp-n- aryRelations-20060412/.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Semeval-2018 task 5: Counting events and participants in the long tail", |
|
"authors": [ |
|
{ |
|
"first": "Marten", |
|
"middle": [], |
|
"last": "Postma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ilievski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018). Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marten Postma, Filip Ilievski, and Piek Vossen. 2018. Semeval-2018 task 5: Counting events and par- ticipants in the long tail. In Proceedings of the 12th International Workshop on Semantic Evalu- ation (SemEval-2018). Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Multilingual and cross-domain temporal tagging. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Jannik", |
|
"middle": [], |
|
"last": "Str\u00f6tgen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Gertz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "269--298", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jannik Str\u00f6tgen and Michael Gertz. 2013. Multilingual and cross-domain temporal tagging. Language Re- sources and Evaluation, 47(2):269-298.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"text": "KOI-KG ontology", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>, the perfor-</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "KOI performance results at SemEval-2018 Task 5 (in percentages) for three subtasks, baseline was provided by the task organizers, *) denotes the system run that we submitted during the evaluation period.", |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">micro-averaged</td><td colspan=\"3\">macro-averaged</td></tr><tr><td/><td>p</td><td>r</td><td>f1</td><td>p</td><td>r</td><td>f1</td></tr><tr><td>Overall answered questions</td><td colspan=\"6\">86.6 74.0 79.8 94.2 83.6 85.7</td></tr><tr><td>all questions</td><td colspan=\"6\">86.6 41.6 56.2 51.7 45.9 47.1</td></tr><tr><td>Event type killing</td><td colspan=\"6\">88.5 43.2 58.1 56.8 48.6 50.3</td></tr><tr><td>injuring</td><td colspan=\"6\">82.8 37.4 51.5 46.4 40.1 41.4</td></tr><tr><td>job firing</td><td>100.0</td><td colspan=\"5\">8.7 16.0 15.4 15.4 15.4</td></tr><tr><td>fire burning</td><td colspan=\"6\">96.9 66.2 78.7 65.5 66.2 65.7</td></tr><tr><td>Event constraint participant</td><td colspan=\"6\">84.8 43.0 57.0 61.1 51.1 53.2</td></tr><tr><td>location</td><td colspan=\"6\">89.1 39.4 54.6 46.7 42.8 43.6</td></tr><tr><td>time</td><td colspan=\"6\">86.0 42.4 56.8 51.7 46.3 47.4</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "KOI performance results for subtask S1, on answer document retrieval (p for precision, r for recall and f1 for F1-score).", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>: KOI performance results for subtasks S2 and S3, on answering numerical questions, i.e., number of incidents and number of victims.</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |