|
{ |
|
"paper_id": "S12-1035", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:24:08.541344Z" |
|
}, |
|
"title": "*SEM 2012 Shared Task: Resolving the Scope and Focus of Negation", |
|
"authors": [ |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Morante", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "CLiPS -University of Antwerp", |
|
"location": { |
|
"addrLine": "Prinsstraat 13", |
|
"postCode": "B-2000", |
|
"settlement": "Antwerp", |
|
"country": "Belgium" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Eduardo", |
|
"middle": [], |
|
"last": "Blanco", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Lymba Corporation Richardson", |
|
"location": { |
|
"postCode": "75080", |
|
"region": "TX", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.", |
|
"pdf_parse": { |
|
"paper_id": "S12-1035", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Semantic representation of text has received considerable attention these past years. While early shallow approaches have been proven useful for several natural language processing applications (Wu and Fung, 2009; Surdeanu et al., 2003; Shen and Lapata, 2007) , the field is moving towards analyzing and processing complex linguistic phenomena, such as metaphor (Shutova, 2010) or modality and negation (Morante and Sporleder, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 213, |
|
"text": "(Wu and Fung, 2009;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 236, |
|
"text": "Surdeanu et al., 2003;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 259, |
|
"text": "Shen and Lapata, 2007)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 377, |
|
"text": "(Shutova, 2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 432, |
|
"text": "(Morante and Sporleder, 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The *SEM 2012 Shared Task is devoted to negation, specifically, to resolving its scope and focus. Negation is a grammatical category that comprises devices used to reverse the truth value of propositions. Broadly speaking, scope is the part of the meaning that is negated and focus the part of the scope that is most prominently or explicitly negated (Huddleston and Pullum, 2002) . Although negation is a very relevant and complex semantic aspect of language, current proposals to annotate meaning either dismiss negation or only treat it in a partial manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 380, |
|
"text": "(Huddleston and Pullum, 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The interest in automatically processing negation originated in the medical domain (Chapman et al., 2001) , since clinical reports and discharge summaries must be reliably interpreted and indexed. The annotation of negation and hedge cues and their scope in the BioScope corpus (Vincze et al., 2008) represented a pioneering effort. This corpus boosted research on scope resolution, especially since it was used in the CoNLL 2010 Shared Task (CoNLL ST 2010) on hedge detection (Farkas et al., 2010) . Negation has also been studied in sentiment analysis (Wiegand et al., 2010) as a means to determine the polarity of sentiments and opinions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 105, |
|
"text": "(Chapman et al., 2001)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 299, |
|
"text": "(Vincze et al., 2008)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 498, |
|
"text": "(Farkas et al., 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 576, |
|
"text": "(Wiegand et al., 2010)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Whereas several scope detectors have been developed using BioScope (Morante and Daelemans, 2009; Velldal et al., 2012) , there is a lack of corpora and tools to process negation in general domain texts. This is why we have prepared new corpora for scope and focus detection. Scope is annotated in Conan Doyle stories (CD-SCO corpus). For each negation, the cue, its scope and the negated event, if any, are marked as shown in example (1a). Focus is annotated on top of PropBank, which uses the WSJ section of the Penn TreeBank (PB-FOC corpus). Focus annotation is restricted to verbal negations annotated with MNEG in PropBank, and all the words belonging to a semantic role are selected as focus. An annotated example is shown in (1b) 1 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 96, |
|
"text": "(Morante and Daelemans, 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 118, |
|
"text": "Velldal et al., 2012)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) a. [John had] never [said as much before] b. John had never said {as much} before", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this paper is organized as follows. The two proposed tasks are described in Section 2, and the corpora in Section 3. Participating systems and their results are summarized in Section 4. The approaches used by participating systems are described in Section 5, as well as the analysis of results. Finally, Section 6 concludes the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The *SEM 2012 Shared Task 2 was dedicated to resolving the scope and focus of negation (Task 1 and 2 respectively). Participants were allowed to engage in any combination of tasks and submit at most two runs per task. A pilot task combining scope and focus detection was initially planned, but was cancelled due to lack of participation. We received a total of 14 runs, 12 for scope detection (7 closed, 5 open) and 2 for focus detection (0 closed, 2 open).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Submissions fall into two tracks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "\u2022 Closed track. Systems are built using exclusively the annotations provided in the training set and are tuned with the development set. Systems that do not use external tools to process the input text or that modify the annotations provided (e.g., simplify parse tree, concatenate lists of POS tags, ) fall under this track. \u2022 Open track. Systems can make use of any external resource or tool. For example, if a team uses an external semantic parser, named entity recognizer or obtains the lemma for each token by querying external resources, it falls under the open track. The tools used cannot have been developed or tuned using the annotations of the test set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Regardless of the track, teams were allowed to submit their final results on the test set using a system trained on both the training and development sets. The data format is the same as in several previous CoNLL Shared Tasks (Surdeanu et al., 2008) . Sentences are separated by a blank line. Each sentence consists of a sequence of tokens, and a new line is used for each token.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 249, |
|
"text": "(Surdeanu et al., 2008)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task description", |
|
"sec_num": "2" |
|
}, |
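
{

"text": "As an illustration, the following Python sketch (illustrative only, not part of the official task release; it assumes tab-separated columns) reads such a file into a list of sentences, each a list of token rows:\n\ndef read_sentences(path):\n    # Each token is one line of tab-separated columns;\n    # sentences are separated by a blank line.\n    sentences, current = [], []\n    with open(path, encoding='utf-8') as f:\n        for line in f:\n            line = line.rstrip('\\n')\n            if not line:\n                if current:\n                    sentences.append(current)\n                    current = []\n            else:\n                current.append(line.split('\\t'))\n    if current:\n        sentences.append(current)\n    return sentences",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task description",

"sec_num": null

},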
|
{ |
|
"text": "Task 1 aimed at resolving the scope of negation cues and detecting negated events. The task is divided into 3 subtasks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Scope Resolution", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "1. Identifying negation cues, i.e., words that express negation. Cues can be single words (e.g., never), multiwords (e.g., no longer, by no means), or affixes (e.g.l im-, -less). Note that negation cues can be discontinuous, e.g., neither [. . . ] nor. 2. Resolving the scope of negation. This subtask addresses the problem of determining which tokens within a sentence are affected by the negation cue. A scope is a sequence of tokens that can be discontinuous.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Scope Resolution", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "3. Identifying the negated event or property, if any. The negated event or property is always within the scope of a cue. Only factual events can be negated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Scope Resolution", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "For the sentence in (2), systems have to identify no and nothing as negation cues, after his habit he said and after mine I asked questions as scopes, and said and asked as negated events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Scope Resolution", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "(2) [After his habit he said] nothing, and after mine I asked no questions. After his habit he said nothing, and [after mine I asked] no [questions].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Scope Resolution", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Previously, scope resolvers have been evaluated at either the token or scope level. The token level evaluation checks whether each token is correctly labeled (inside or outside the scope), while the scope level evaluation checks whether the full scope is correctly labeled. The CoNLL 2010 ST introduced precision and recall at scope level as performance measures and established the following requirements: A true positive (TP) requires an exact match for both the negation cue and the scope. False positives (FP) occur when a system predicts a non-existing scope in gold, or when it incorrectly predicts a scope existing in gold because: (1) the negation cue is correct but the scope is incorrect;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "(2) the cue is incorrect but the scope is correct;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "(3) both cue and scope are incorrect. These three scenarios also trigger a false negative (FN). Finally, FN also occur when the gold annotations specify a scope but the system makes no such prediction (Farkas et al., 2010 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 221, |
|
"text": "(Farkas et al., 2010", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "As we see it, the CONLL 2010 ST evaluation requirements were somewhat strict because for a scope to be counted as TP, the negation cue had to be correctly identified (strict match) as well as the punctuation tokens within the scope. Additionally, this evaluation penalizes partially correct scopes more than fully missed scopes, since partially correct scopes count as FP and FN, whereas missed scopes count only as FN. This is a standard problem when applying the F measures to the evaluation of sequences. For this shared task we have adopted a slightly different approach based on the following criteria:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "\u2022 Punctuation tokens are ignored. A second version of the measures (Cue/Scope CM/Scope NCM/Negated/Global-B) was calculated and provided to participants, but was not used to rank the systems, because it was introduced in the last period of the development phase following the request of a participant team. In the B version of the measures, precision is not counted as (TP/(TP+FP)), but as (TP / total of system predictions), counting in this way the percentage of perfect matches among all the system predictions. Providing this version of the measures also allowed us to compare the results of the two versions and to check if systems would be ranked in a different position depending on the version.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
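
{

"text": "To make the difference between the two precision variants concrete, the following sketch (illustrative only, not the official scorer) computes both under one possible reading of the description above, in which a partially matched gold scope counts as FN but not FP:\n\ndef scope_precision(gold, system):\n    # gold, system: lists of (cue, scope) pairs, where scope is a\n    # frozenset of token indices with punctuation tokens removed.\n    gold_set = set(gold)\n    tp = sum(1 for p in system if p in gold_set)\n    # Assumption: a prediction is FP only when its cue matches no\n    # gold negation; partial matches are penalized via recall only.\n    gold_cues = {cue for cue, _ in gold}\n    fp = sum(1 for cue, _ in system if cue not in gold_cues)\n    precision = tp / (tp + fp) if tp + fp else 0.0\n    precision_b = tp / len(system) if system else 0.0  # B version\n    return precision, precision_b",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation measures",

"sec_num": null

},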
|
{ |
|
"text": "Even though we believe that relaxing scope evaluation by ignoring punctuation marks and relaxing the strict cue match requirement is a positive feature of our evaluation, we need to explore further in order to define a scope evaluation measure that captures the impact of partial matches in the scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "This task tackles focus of negation detection. Both scope and focus are tightly connected. Scope is the part of the meaning that is negated and focus is that part of the scope that is most prominently or explicitly negated (Huddleston and Pullum, 2002) . Focus can also be defined as the element of the scope that is intended to be interpreted as false to make the overall negative true.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 252, |
|
"text": "(Huddleston and Pullum, 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Detecting focus of negation is useful for retrieving the numerous words that contribute to implicit positive meanings within a negation. Consider the statement The government didn't release the UFO files {until 2008}. The focus is until 2008, yielding the interpretation The government released the UFO files, but not until 1998. Once the focus is resolved, the verb release, its AGENT The government and its THEME the UFO files are positive; only the TEMPO-RAL information until 2008 remains negated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We only target verbal negations and focus is always the full text of a semantic role. Some examples of annotation and their interpretation (Int) using focus detection are provided in (3-5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(3) Even if that deal isn't {revived}, NBC hopes to find another. Int: Even if that deal is suppressed, NBC hopes to find another.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(4) A decision isn't expected {until some time next year}. Int: A decision is expected at some time next year.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(5) . . . it told the SEC it couldn't provide financial statements by the end of its first extension \"{without unreasonable burden or expense}\". Int: It could provide them by that time with a huge overhead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Focus Detection", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Task 2 is evaluated using precision, recall and F 1 . Submissions are ranked by F 1 . For each negation, the predicted focus is considered correct if it is a perfect match with the gold annotations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation measures", |
|
"sec_num": "2.2.1" |
|
}, |
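
{

"text": "A minimal sketch of this exact-match evaluation (illustrative only, not the official scorer; the dictionary-based representation is an assumption):\n\ndef focus_scores(gold, predicted):\n    # gold, predicted: dicts mapping a negation identifier to its\n    # focus span, e.g., a tuple of token indices; a prediction is\n    # correct only if it equals the gold span exactly.\n    correct = sum(1 for k, span in predicted.items()\n                  if gold.get(k) == span)\n    p = correct / len(predicted) if predicted else 0.0\n    r = correct / len(gold) if gold else 0.0\n    f1 = 2 * p * r / (p + r) if p + r else 0.0\n    return p, r, f1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation measures",

"sec_num": null

},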
|
{ |
|
"text": "We have released two datasets, which will be available from the web site of the task: CD-SCO for scope detection and PB-FOC for focus detection. The next two sections introduce the datasets. Figure 1: Example sentence from CD-SCO.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Sets", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The corpus for Task 1 is CD-SCO, a corpus of Conan Doyle stories. The training corpus contains The Hound of the Baskervilles, the development corpus, The Adventure of Wisteria Lodge, and the test corpus The Adventure of the Red Circle and The Adventure of the Cardboard Box. The original texts are freely available from the Gutenberg Project. 3 CD-SCO is annotated with negation cues and their scope, as well as the event or property that is negated. The cues are the words that express negation and the scope is the part of a sentence that is affected by the negation cues. The negated event or property is the main event or property actually negated by the negation cue. An event can be a process, an action, or a state. Figure 1 shows an example sentence. Column 1 contains the name of the file, column 2 the sentence #, column 3 the token #, column 4 the word, column 5 the lemma, column 6 the PoS, column 7 the parse tree information and columns 8 to end the negation information. If a sentence does not contain a negation, column 8 contains \"***\" and there are no more columns. If it does contain negations, the information for each one is encoded in three columns: negation cue, scope, and negated event respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 723, |
|
"end": 731, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CD-SCO: Scope Annotation", |
|
"sec_num": "3.1" |
|
}, |
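
{

"text": "The column layout can be decoded with a few lines of Python. The sketch below is illustrative only, not part of the release, and assumes that, as in other CoNLL-style formats, '_' marks an empty cell in the negation columns:\n\ndef parse_negations(token_rows):\n    # token_rows: one sentence, one list of columns per token\n    # (file, sentence #, token #, word, lemma, PoS, parse, ...).\n    if token_rows[0][7] == '***':\n        return []  # sentence without negation\n    n = (len(token_rows[0]) - 7) // 3  # three columns per negation\n    negations = []\n    for i in range(n):\n        base = 7 + 3 * i\n        neg = {}\n        for j, key in enumerate(('cue', 'scope', 'event')):\n            neg[key] = [(row[2], row[base + j]) for row in token_rows\n                        if row[base + j] != '_']\n        negations.append(neg)\n    return negations",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CD-SCO: Scope Annotation",

"sec_num": null

},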
|
{ |
|
"text": "The annotation of cues and scopes is inspired by the BioScope corpus, but there are several differences. First and foremost, BioScope does not annotate the negated event or property. Another im- portant difference concerns the scope model itself: in CD-SCO, the cue is not considered to be part of the scope. Furthermore, scopes can be discontinuous and all arguments of the negated event are considered to be part of the scope, including the subject, which is kept out of the scope in BioScope. A final difference is that affixal negation is annotated in CD-SCO, as in (6). Statistics for the corpus is presented in Table 1 . More information about the annotation guidelines is provided by Morante et al. (2011) and Morante and Daelemans (2012) , including inter-annotator agreement.", |
|
"cite_spans": [ |
|
{ |
|
"start": 691, |
|
"end": 712, |
|
"text": "Morante et al. (2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 717, |
|
"end": 745, |
|
"text": "Morante and Daelemans (2012)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 617, |
|
"end": 624, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CD-SCO: Scope Annotation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The corpus was preprocessed at the University of Oslo. Tokenization was obtained by the PTBcompliant tokenizer that is part of the LinGO English Resource Grammar. 4 Apart from the gold annotations, the corpus was provided to participants with additional annotations:", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 164, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CD-SCO: Scope Annotation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Lemmatization using the GENIA tagger (Tsuruoka and Tsujii, 2005) , version 3.0.1, with the '-nt' command line option. GENIA PoS tags are complemented with TnT PoS tags for increased compatibility with the original PTB. \u2022 Parsing with the Charniak and Johnson (2005) reranking parser. 5 For compatibility with PTB conventions, the top-level nodes in parse trees ('S1'), were removed. The conversion of PTB-style syntax trees into CoNLL-style format was performed using the CoNLL 2005 Shared Task software. 6", |
|
"cite_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 66, |
|
"text": "(Tsuruoka and Tsujii, 2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 240, |
|
"end": 267, |
|
"text": "Charniak and Johnson (2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 287, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CD-SCO: Scope Annotation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We have adapted the only previous annotation effort targeting focus of negation for PB-FOC (Blanco and Moldovan, 2011) . This corpus provides focus annotation on top of PropBank. It targets exclusively verbal negations marked with MNEG in PropBank and selects as focus the semantic role containing the most likely focus. The motivation behind their approach, annotation guidelines and examples can be found in the aforementioned paper. We gathered all negations from sections 02-21, 23 and 24 and discarded negations for which the focus or PropBank annotations were not sound, leaving 3,544 instances. 7 For each verbal negation, PB-FOC provides the current sentence, and the previous and next sentences as context. For each sentence, along with the gold focus annotations, PB-FOC contains the following additional annotations:", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 118, |
|
"text": "(Blanco and Moldovan, 2011)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PB-FOC: Focus Annotation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Token number;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PB-FOC: Focus Annotation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 POS tags using the Brill tagger (Brill, 1992) ; \u2022 Named Entities using the Stanford named entity recognizer recognizer (Finkel et al., 2005) ; \u2022 Chunks using the chunker by Phan (2006) \u2022 Semantic roles using the labeler described by (Punyakanok et al., 2008) ; and \u2022 Verbal negation, indicates with 'N' if that token correspond to a verbal negation for which focus must be predicted. Figure 2 provides a sample of PB-FOC. Knowing that the original focus annotations were done on top of PropBank and that focus corresponds to a single role, semantic role information is key to predict the focus. In Table 2 , we show some basic numeric analysis regarding focus annotation and the automatically obtained semantic role labels. Most instances of focus belong to a single role in the three splits and the most common role focus belongs to is A1, followed by AM-NEG, M-TMP and M-MNR. Note that some instances have at least one word that does not belong to any role (88 in training, 19 in development and 35 in test).", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 47, |
|
"text": "(Brill, 1992)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 121, |
|
"end": 142, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 186, |
|
"text": "Phan (2006)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 260, |
|
"text": "(Punyakanok et al., 2008)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 394, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 607, |
|
"text": "Table 2", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "PB-FOC: Focus Annotation", |
|
"sec_num": "3.2" |
|
}, |
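
{

"text": "Since the focus is always the full text of a single semantic role, and A1 is by far the most frequent choice, a simple illustrative baseline (ours, not a participating system) predicts the A1 role when present:\n\ndef baseline_focus(roles):\n    # roles: dict mapping a role label (e.g., 'A1', 'M-TMP') to its\n    # token span for one verbal negation. A1 first; the fallback\n    # order is our own choice.\n    for label in ('A1', 'M-TMP', 'M-MNR'):\n        if label in roles:\n            return roles[label]\n    return None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "PB-FOC: Focus Annotation",

"sec_num": null

},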
|
{ |
|
"text": "A total of 14 runs were submitted: 12 for scope detection and 2 for focus detection. The unbalanced number of submissions might be due to the fact that both tasks are relatively new and the tight timeline (six weeks) under which systems were developed. Some participants showed interest in the second task and expressed that they did not participate because of lack of time. In this section, we present the results for each task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Submissions and results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Six teams (UiO1, UiO2, FBK, UWashington, UMichigan, UABCoRAL) submitted results for the closed track with a total of seven runs, and four teams (UiO2, UGroningen, UCM-1, UCM-2) submitted results for the open track with a total of five runs. The evaluation results are provided in Table 4 , which contains the official results, and Table 5 , which contains the results for evaluation measures B. The best Global score in the closed track was obtained by UiO1 (57.63 F 1 ). The best score for Cues was obtained by FBK (92.34 F 1 ), for Scopes CM by UiO2 (73.39 F 1 ), for Scopes NCM by UWashington (72.40 F 1 ), and for Negated by UiO1 (67.02 F 1 ). The best Global score in the open track was obtained by UiO2 (54.82 F 1 ), as well as the best scores for Cues (91.31 F 1 ), Scopes CM (72.39 F 1 ), Scopes NCM (72.39 F 1 ), and Negated (61.79 F 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 280, |
|
"end": 287, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 338, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Only one team participated in Task 2, UConcordia from CLaC Lab at Concordia University. They submitted two runs and the official results are summarized in Table 3 . Their best run scored 58.40 F 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 162, |
|
"text": "Table 3", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In this section we summarize the methodologies applied by participants to solve the tasks and we analyze the results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approaches and analysis", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To solve Task 1 most teams develop a three module pipeline with a module per subtask. Scope resolution and negated event detection are independent of each other and both depend on cue detection. An exception is the UiO1 system, which incorporates a module for factuality detection. Most systems apply machine learning algorithms, either Conditional Random Fields (CRFs) or Support Vector Machines (SVMs), while less systems implement a rule-based approach. Syntax information is widely employed, either in the form of rules or incorporated in the learning model. Multi-word and affixal negation cues receive a special treatment in most cases, and scopes are generally postprocessed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The systems that participate in the closed track are machine learning based. The UiO1 system is an adaptation of another system (Velldal et al., 2012) , which combines SVM cue classification with SVMbased ranking of syntactic constituents for scope resolution. The approach is extended to identify negated events by first classifying negations as factual or non-factual, and then applying an SVM ranker over candidate events. The original treatment of factuality in this system results in the highest score for both the negated event subtask and the global task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 150, |
|
"text": "(Velldal et al., 2012)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The UiO2 system combines SVM cue classification with CRF-based sequence labeling. An original aspect of the UiO2 approach is the model represen- Table 4 : Official results. \"r1\" stands for run 1 nd \"r2\" for run 2. CNS stands for Correct Negation Sentences. \"CM\" stands for Cue Match and \"NCM\" stands for No Cue Match. Table 5 : Results withevaluation measures B. Precision is calculated as: true positives / total of system predictions. \"r1\" stands for run 1 nd \"r2\" for run 2. \"CM\" stands for Cue Match and \"NCM\" stands for No Cue Match. tation for scopes and negated events, where tokens are assigned a set of labels that attempts to describe their behavior within the mechanics of negation. After unseen sequences are labeled, in-scope and negated tokens are assigned to their respective cues using simple post-processing heuristics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 152, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 325, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The FBK system consists of three different CRF classifiers, as well as the UMichigan. A characteristic of the cue model of the UMichigan system is that tokens are assigned five labels in order to represent the different types of negation. Similarly, the UWashington system has a CRF sequence tagger for scope and negated event detection, while the cue detector learns regular expression matching rules from the training set. The UABCoRAL system follows the same strategy, but instead of CRFs it employs SVM Light.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The resources utilized by participants in the open track are diverse. UiO2 reparsed the data with Malt-Parser in order to obtain dependency graphs. For the rest, the system is the same as in the closed track. The global results obtained by this system in the closed track are higher than the results obtained in the open track, which is mostly due to a higher performance of the scope resolution module. This is the only machine learning system in the open track and the highest performing one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The UGroningen system is based on tools that produce complex semantic representations. The system employs the C&C tools 8 for parsing and Boxer 9 to produce semantic representations in the form of Discourse Representation Structures (DRSs). For cue detection, the DRSs are converted to flat, nonrecursive structures, called Discourse Representation Graphs (DRGs). These DRGs allow for cue detection by means of labelled tuples. Scope detection is done by gathering the tokens that occur within the scope of the negated DRSs. For negated event detection, a basic algorithm takes the detected scope and returns the negated event based on information from the syntax tree within the scope.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "UCM-1 and UCM-2 are rule-based systems that rely heavily on information from the syntax tree. The UCM-1 system was initially designed for pro-cessing opinionated texts. It applies a dictionary approach to cue detection, with the detection of affixal cues being performed using WordNet. Non-affixal cue detection is performed by consulting a predefined list of cues. It then uses information from the syntax tree in order to get a first approximation to the scope, which is later refined using a set of postprocessing rules. In the case of the UCM-2 system an algorithm detects negation cues and their scope by traversing Minipar dependency structures. Finally, the scope is refined with post-processing rules that take into account the information provided by the first algorithm and linguistic clause boundaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "If we compare tracks, the Global best results obtained in the closed track (57.63 F 1 ) are higher than the Global best results obtained in the open track (54.82 F 1 ). If we compare approaches, the best results in the two tracks are obtained with machine learning-based systems. The rule-based systems participating in the open track clearly score lower (39.56 F 1 the best) than the machine learning-based system (54.82 F 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Regarding subtasks, systems achieve higher results in the cue detection task (92.34 F 1 the best) and lower results in the scope resolution (72.40 F 1 the best) and negated event detection (67.02 F 1 the best) tasks. This is not surprising, not only because of the error propagation effect, but also because the set of negation cues is closed and comprises mostly single tokens, whereas scope sequences are longer. The best results in cue detection are obtained by the FBK system that uses CRFs and applies a special procedure to detect the negation cues that are subtokens. The best scores for scope resolution (72.40, 72.39 F 1 ) are obtained by two machine learning components. UWashington uses CRFs with features derived from the syntax tree. UiO2 uses CRFs models with syntactic and lexical features for scopes, together with a set of labels aimed at capturing the behavior of certain tokens within the mechanics of negation. The best scores for negated events (67.02 F 1 ) are obtained by the UiO1 system that first classifies negations as factual or non-factual, and then applies an SVM ranker over candidate events.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Finally, we would like to draw the attention to the different scores obtained depending on the evaluation measure used. When scope resolution is evaluated with the Scope (NCM, CM) measure, results are much lower than when using the Scope Tokens measure, which does not reflect the ability of systems to deal with sequences. Another observation is related to the difference in precision scores between the two versions of the evaluation measures. Whereas for Cues and Negated the differences are not so big because most cues and negated events span over a single token, for Scopes they are. The best Scope NCM precision score is 90.00 %, whereas the best Scope NCM B precision score is 59.54 %. This shows that the scores can change considerably depending on how partial matches are counted (as FP and FN, or only as FN). As a final remark it is worth noting that the ranking of systems does not change when using the B measures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "UConcordia submitted two runs in the open track. Both of them follow the same three component approach. First, negation cues are detected. Second, the scope of negation is extracted based on dependency relations and heuristics defined by Kilicoglu and Bergler (2011) . Third, the focus of negation is determined within the elements belonging to the scope following three heuristics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 266, |
|
"text": "Kilicoglu and Bergler (2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this paper we presented the description of the first *SEM Shared Task on Resolving the Scope and Focus of Negation, which consisted of two different tasks related to different aspects of negation: Task 1 on resolving the scope of negation, and Task 2 on detecting the focus of negation. Task 1 was divided into three subtasks: identifying negation cues, resolving their scope, and identifying the negated event. Two new datasets have been produced for this Shared Task: the CD-SCO corpus of Conan Doyle stories annotated with scopes, and the PB-FOC corpus, which provides focus annotation on top of Prop-Bank. New evaluation software was also developed for this task. The datasets and the evaluation software will be available on the web site of the Shared Task. As far as we know, this is the first task that focuses on resolving the focus and scope of negation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "A total of 14 runs were submitted, 12 for scope detection and 2 for focus detection. Of these, four runs are from systems that take a rule-based ap-proach, two runs from hybrid systems, and the rest from systems that take a machine learning approach using SVMs or CRFs. Most participants designed a three component architecture.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For a future edition of the shared task we would like to unify the annotation schemes of the two corpora, namely the annotation of focus in PB-FOC and negated events in CD-SCO. The annotation of more data with both scope and focus would allow us to study the two aspects jointly. We would also like to provide better evaluation measures for scope resolution. Currently, scopes are evaluated in terms of F 1 , which demands a division of errors into the categories TP/FP/TN/FN borrowed from the evaluation of information retrieval systems. These categories are not completely appropriate to be assigned to sequence tasks, such as scope resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Throughout this paper, negation cues are marked in bold letters, scopes are enclosed in square brackets and negated events are underlined; focus is enclosed in curly brackets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "www.clips.ua.ac.be/sem2012-st-neg/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://moin.delph-in.net/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://svn.ask.it.usyd.edu.au/trac/ candc/wiki/Documentation 9 http://svn.ask.it.usyd.edu.au/trac/ candc/wiki/boxer", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are very grateful to Vivek Srikumar for preprocessing the PB-FOC corpus with the Illinois semantic role labeler, and to Stephan Oepen for preprocessing the CD-SCO corpus. We also thank the *SEM organisers and the ST participants. Roser Morante's research was funded by the University of Antwerp (GOA project BIOGRAPH).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Semantic Representation of Negation Using Focus Detection", |
|
"authors": [ |
|
{ |
|
"first": "Eduardo", |
|
"middle": [], |
|
"last": "Blanco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for C omputational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "581--589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduardo Blanco and Dan Moldovan. 2011. Semantic Representation of Negation Using Focus Detection. In Proceedings of the 49th Annual Meeting of the Asso- ciation for C omputational Linguistics: Human Lan- guage Technologies, pages 581-589, Portland, Ore- gon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A simple rule-based part of speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the third conference on Applied natural language processing, ANLC '92", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill. 1992. A simple rule-based part of speech tag- ger. In Proceedings of the third conference on Applied natural language processing, ANLC '92, pages 152- 155, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A simple algorithm for identifying negated findings and diseases in discharge summaries", |
|
"authors": [ |
|
{ |
|
"first": "Wendy", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Chapman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Bridewell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hanbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Cooper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Buchanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "J Biomed Inform", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "301--310", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wendy W. Chapman, Will Bridewell, Paul Hanbury, Gre- gory F. Cooper, and Bruce G. Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. J Biomed Inform, 34:301-310.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Coarse-tofine n-best parsing and maxent discriminative reranking", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to- fine n-best parsing and maxent discriminative rerank- ing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 173-180, Ann Arbor.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American chapter of the Association for Computational Lin- guistics conference, NAACL 2000, pages 132-139, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Generating Typed Dependency Parses from Phrase Structure Parses", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the IEEE / ACL 2006 Workshop on Spoken Language Technology. The Stanford Natural Language Processing Group", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the IEEE / ACL 2006 Workshop on Spoken Language Technology. The Stanford Natural Language Processing Group.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text", |
|
"authors": [ |
|
{ |
|
"first": "Rich\u00e1rd", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "M\u00f3ra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e1nos", |
|
"middle": [], |
|
"last": "Csirik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gy\u00f6rgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich\u00e1rd Farkas, Veronika Vincze, Gy\u00f6rgy M\u00f3ra, J\u00e1nos Csirik, and Gy\u00f6rgy Szarvas. 2010. The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 1-12, Uppsala, Sweden. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Incorporating non-local information into information extraction systems by gibbs sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sam- pling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL '05, pages 363-370, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The Cambridge Grammar of the English Language", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Rodney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Huddleston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pullum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rodney D. Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Effective bioevent extraction using trigger words and syntactic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Halil", |
|
"middle": [], |
|
"last": "Kilicoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Bergler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computational Intelligence", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "583--609", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Halil Kilicoglu and Sabine Bergler. 2011. Effective bio- event extraction using trigger words and syntactic de- pendencies. Computational Intelligence, 27(4):583- 609.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A metalearning approach to processing the scope of negation", |
|
"authors": [ |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Morante", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Conference on Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "21--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roser Morante and Walter Daelemans. 2009. A met- alearning approach to processing the scope of nega- tion. In Proceedings of the 13th Conference on Natu- ral Language Learning, pages 21-29, Boulder, CO. Roser Morante and Walter Daelemans. 2012.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Annotation of negation cues and their scope in Conan Doyle stories", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Conandoyle-Neg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Special issue on modality and negation: An introduction. Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "ConanDoyle-neg: Annotation of negation cues and their scope in Conan Doyle stories. In Proceedings of LREC 2012, Istambul. Roser Morante and Caroline Sporleder. 2012. Special is- sue on modality and negation: An introduction. Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Annotation of negation cues and their scope. guidelines v1.0", |
|
"authors": [ |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Morante", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Schrauwen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roser Morante, Sarah Schrauwen, and Walter Daele- mans. 2011. Annotation of negation cues and their scope. guidelines v1.0. Technical Report Series CTR- 003, CLiPS, University of Antwerp, Antwerp, April.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Crfchunker: Crf english phrase chunker", |
|
"authors": [ |
|
{ |
|
"first": "Xuan-Hieu", |
|
"middle": [], |
|
"last": "Phan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuan-Hieu Phan. 2006. Crfchunker: Crf english phrase chunker.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The importance of syntactic parsing and inference in semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Vasin", |
|
"middle": [], |
|
"last": "Punyakanok", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "2", |
|
"pages": "257--287", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287, June.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Using Semantic Roles to Improve Question Answering", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EM NLP-CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--21", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Shen and Mirella Lapata. 2007. Using Semantic Roles to Improve Question Answering. In Proceed- ings of the 2007 Joint Conference on Empirical Meth- ods in Natural Language Processing and Computa- tional Natural Language Learning (EM NLP-CoNLL), pages 12-21.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Models of Metaphor in NLP", |
|
"authors": [ |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Shutova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "688--697", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ekaterina Shutova. 2010. Models of Metaphor in NLP. In Proceedings of the 48th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 688-697, Uppsala, Sweden. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Using Predicate-Argument Structures for Information Extraction", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Aarseth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using Predicate-Argument Struc- tures for Information Extraction. In Proceedings of the 41st Annual Meeting of the Association for Computa- tional Linguistics, pages 8-15, Sapporo, Japan. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The conll-2008 shared task on joint parsing of syntactic and semantic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "CoNLL 2008: Proceedings of the 12th Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The conll- 2008 shared task on joint parsing of syntactic and se- mantic dependencies. In CoNLL 2008: Proceedings of the 12th Conference on Computational Natural Lan- guage Learning, page 159177, Manchester.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Bidirectional inference with the easiest-first strategy for tagging sequence data", |
|
"authors": [ |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "467--474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshimasa Tsuruoka and Jun'ichi Tsujii. 2005. Bidi- rectional inference with the easiest-first strategy for tagging sequence data. In Proceedings of of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 467-474, Vancouver.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Speculation and negation: Rules, rankers, and the role of syntax", |
|
"authors": [ |
|
{ |
|
"first": "Erik", |
|
"middle": [], |
|
"last": "Velldal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lilja", |
|
"middle": [], |
|
"last": "\u00d8vrelid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathon", |
|
"middle": [], |
|
"last": "Read", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Oepen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erik Velldal, Lilja \u00d8vrelid, Jonathon Read, and Stephan Oepen. 2012. Speculation and negation: Rules, rankers, and the role of syntax. Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The Bio-Scope corpus: biomedical texts annotated for uncertainty, negation and their scopes", |
|
"authors": [ |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Vincze", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gyorgy", |
|
"middle": [], |
|
"last": "Szarvas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Farkas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gyorgy", |
|
"middle": [], |
|
"last": "Mora", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janos", |
|
"middle": [], |
|
"last": "Csirik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "BMC Bioinformatics", |
|
"volume": "9", |
|
"issue": "11", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Veronika Vincze, Gyorgy Szarvas, Richard Farkas, Gy- orgy Mora, and Janos Csirik. 2008. The Bio- Scope corpus: biomedical texts annotated for uncer- tainty, negation and their scopes. BMC Bioinformat- ics, 9(Suppl 11):S9+.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A survey on the role of negation in sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Balahur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andr\u00e9s", |
|
"middle": [], |
|
"last": "Montoyo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Workshop on Negation and Speculation in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "60--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Wiegand, Alexandra Balahur, Benjamin Roth, Dietrich Klakow, and Andr\u00e9s Montoyo. 2010. A sur- vey on the role of negation in sentiment analysis. In Proceedings of the Workshop on Negation and Specu- lation in Natural Language Processing, pages 60-68, Uppsala, Sweden. University of Antwerp.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Semantic Roles for SMT: A Hybrid Two-Pass Model", |
|
"authors": [ |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekai Wu and Pascale Fung. 2009. Semantic Roles for SMT: A Hybrid Two-Pass Model. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, Companion Volume: Short Papers, pages 13-16, Boulder, Col- orado. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF3": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |