{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:51.416456Z"
},
"title": "Learning Negation Scope from Syntactic Structure",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "McKenna",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a semi-supervised model which learns the semantics of negation purely through analysis of syntactic structure. Linguistic theory posits that the semantics of negation can be understood purely syntactically, though recent research relies on combining a variety of features including part-of-speech tags, word embeddings, and semantic representations to achieve high task performance. Our simplified model returns to syntactic theory and achieves state-of-the-art performance on the task of Negation Scope Detection while demonstrating the tight relationship between the syntax and semantics of negation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a semi-supervised model which learns the semantics of negation purely through analysis of syntactic structure. Linguistic theory posits that the semantics of negation can be understood purely syntactically, though recent research relies on combining a variety of features including part-of-speech tags, word embeddings, and semantic representations to achieve high task performance. Our simplified model returns to syntactic theory and achieves state-of-the-art performance on the task of Negation Scope Detection while demonstrating the tight relationship between the syntax and semantics of negation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Negation is a semantic phenomenon in natural language which varies significantly. For example, \"Sherlock did not solve the case\" contains a simple negation cued by \"not.\" However, there are many cue words such as \"without\" and \"nothing,\" and word affixes like \"un-\" which instantiate negation, and their effect on meaning can be different and dependent on context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach the meaning of negation using a logical semantics, in which natural language negation is expressed with the negation operator on logical expressions (Horn, 1989; Horn and Wansing, 2017) , such as in \u00acSOLVE(Sherlock, case). Further, in truth-theoretic logic the meaning of negation is simply the inverted truth value of this expression. Capturing the meaning of a negation cue in language can thus be understood as simply identifying the negated expression. This is the task of Negation Scope Detection (NSD): given a negation cue in a sentence, identify the sentence tokens which make up the negated expression.",
"cite_spans": [
{
"start": 161,
"end": 173,
"text": "(Horn, 1989;",
"ref_id": "BIBREF6"
},
{
"start": 174,
"end": 197,
"text": "Horn and Wansing, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task of NSD has been approached in several formulations using different annotation schemes (Kim et al., 2008; Szarvas et al., 2008) , but these older tasks do not define negation semantically, in contrast to the commonly accepted *SEM2012 Shared Task competition (*SEM, 2012). Under the *SEM definition cues scope over events and participants, either directly or via predicate arguments and complements. Linguistic theory explains negation semantics in terms of events, which are built from syntactic phrase structures (Huddleston and Pullum, 2005) , i.e. that negation doesn't just scope over individual words, but rather whole phrases and clauses which make up events. The following examples drawn from the *SEM2012 dataset of Conan Doyle's writing illustrate how scope is built from phrase structures. Cues are in bold* with underlined scope.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Kim et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 114,
"end": 135,
"text": "Szarvas et al., 2008)",
"ref_id": "BIBREF21"
},
{
"start": 523,
"end": 552,
"text": "(Huddleston and Pullum, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Detection Task",
"sec_num": "1.1"
},
{
"text": "(1) Well, sir, I thought no* good could come of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Negation Scope Detection Task",
"sec_num": "1.1"
},
{
"text": "Example (1) contains the main verb \"thought\" which takes a complement clause. This clause contains a simple verb \"come\" negated by its negative subject \"no good.\" The correct negation scope covers this verb, its arguments, and its modifier \"could\" (leaving out the cue itself, following *SEM convention). Figure 1 (a) shows that scope corresponds to clause boundaries in the syntactic tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 313,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Negation Scope Detection Task",
"sec_num": "1.1"
},
{
"text": "The scope in example (2) has a discontinuous span. In this sentence \"he\" is a subject shared by two clauses in coordination, but only one of these is negated. Figure 1 (b) illustrates this. As cases add complexity, it is clear that NSD requires reasoning about the underlying structure of the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 159,
"end": 167,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Negation Scope Detection Task",
"sec_num": "1.1"
},
{
"text": "In this work we build on the theory of negation scope as a syntactic phenomenon and reframe the task of NSD as a tree tagging problem over syntactic constituents. We develop a new Structural Tree Recursive Neural Network for this task which labels scope for constituents conditioned on just the syntactic structure of the sentence, with no other features. Our model achieves the highest score to date on the *SEM2012 dataset. We further show that adding word embedding features does not improve results, demonstrating that efficient use of syntax is all you need to perform well on this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": "1.2"
},
{
"text": "There have been a variety of approaches to NSD. The *SEM shared task dataset is the only resource available with semantic annotations (other NSD datasets are annotated for different goals), but there are many models which use *SEM to compare with (Morante and Blanco, 2012) . Fancellu et al. (2016) show the most recent, best-performing model on *SEM. It is a BiLSTM sequence tagging model which produces in-scope/out-of-scope classifications for each word, and jointly learns word embeddings with additional part-of-speech features for each token. However, further analysis (Fancellu et al., 2017) shows the model is over-reliant on punctuation like commas (\",\") which often mark scope boundaries in English and especially in Conan Doyle's older style of writing (the content of the *SEM dataset). The model does not learn a very robust semantics without these markers, yet also does not make use of the full syntactic structure of sentences. Fancellu et al. (2018) address this with a Dependency-LSTM model which processes dependency trees using encodings both for words and dependency relations. On a modified *SEM dataset this model slightly improves over the BiLSTM when they are ensembled together, showing again that the BiLSTM method lacks some structural understanding.",
"cite_spans": [
{
"start": 247,
"end": 273,
"text": "(Morante and Blanco, 2012)",
"ref_id": "BIBREF12"
},
{
"start": 276,
"end": 298,
"text": "Fancellu et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 575,
"end": 598,
"text": "(Fancellu et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 944,
"end": 966,
"text": "Fancellu et al. (2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The original winner of *SEM, UiO1 (Read et al., 2012) , uses an SVM classifier on constituents that contain the cue. Constituents are shown to be useful, but this is not enough to capture discontinuous scopes (see example (2)) which do not align to a single constituent. Read adds extra heuristics to help with this, and Rosenberg (2013) continues from this with a comprehensive set of syntactic heuristics deployed on dependency graphs. This model performs well and demonstrates the strength of syntax by itself for capturing negation semantics. Packard et al. (2014) and Li et al. (2010) approach NSD \"in the semantic domain.\" Packard first induces a Minimal Recursion Semantics parse of a sentence (Copestake et al., 2005) , then crawls it to identify negated predicates and arguments. Though intuitive, inducing a full parse may overshoot the problem; by itself, the crawler doesn't compete because it loses much information during parsing and backtracking to sentence words. Ensembling with UiO1 provides a boost over UiO1 itself, which indicates that MRS parsing provides complementary information, possibly due to induction of additional structure.",
"cite_spans": [
{
"start": 36,
"end": 55,
"text": "(Read et al., 2012)",
"ref_id": "BIBREF15"
},
{
"start": 323,
"end": 339,
"text": "Rosenberg (2013)",
"ref_id": "BIBREF16"
},
{
"start": 549,
"end": 570,
"text": "Packard et al. (2014)",
"ref_id": "BIBREF13"
},
{
"start": 575,
"end": 591,
"text": "Li et al. (2010)",
"ref_id": "BIBREF10"
},
{
"start": 703,
"end": 727,
"text": "(Copestake et al., 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "We argue for a solution returning to syntax which makes per-word scope judgements conditioned on the full syntactic structure of the sentence. Section 1.1 discusses the theoretical basis in syntax for negation semantics and section 2 details successful models on the task. While syntax is used by several models to a degree, full trees are rarely used and recent models rely on additional features like word embeddings to achieve high performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Structural Approach",
"sec_num": "3"
},
{
"text": "Combinatory Categorial Grammar (CCG) is a nearly context-free constituency grammar capable of describing complex phenomena like coordination (Steedman, 2000) . CCG also has transparency between syntax and semantics: a CCG syntactic parse may be transformed into a logical form in terms of events, similar to the Sherlock example in section 1. Steedman (2011) separately formalizes a theory for computing the polarity of lexical items using a CCG-based calculus, which encourages an automated approach to learning negation scope by example. Figure 1(a) shows the CCG parse tree for example (1). Commonly, simple independent clauses in English form one continuous scope span, which is distinguishable in syntax. The scope of negation in (1) is simply the complement clause.",
"cite_spans": [
{
"start": 141,
"end": 157,
"text": "(Steedman, 2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 540,
"end": 551,
"text": "Figure 1(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Structural Approach",
"sec_num": "3"
},
{
"text": "However, it is not always the case that negation scope aligns cleanly to subtrees, as in Figure 1(b), the parse tree for sentence (2). CCG provides structural insight which alleviates this problem. In this example CCG's explicit modeling of coordination makes it straightforward to identify the subject to the left of the coordinated dependent clauses. Figure 1 : (a) (left) The cue \"no\" negates the subject \"good\" and thus scopes over the related event. This corresponds cleanly with the constituent subtree (headed by S[dcl]) which spans that text. (b) (right) The cue \"nothing\" lies in a dependent clause, lacking a subject. This can be found on the left using the CCG parse structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Structural Approach",
"sec_num": "3"
},
{
"text": "Understanding how semantics is built from phrase structures is key to our method. Analyses from additional sentences support this (see appendix for parse trees).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Structural Approach",
"sec_num": "3"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Structural Approach",
"sec_num": "3"
},
{
"text": "The Global Belief Tree Recursive Neural Network of Paulus et al. (2014) is adapted to solve this task. The GB-TRNN takes as input a binarized syntax tree (satisfied by CCG) and starting state vectors for each word in the sentence, referred to now as tree leaves. It first recursively combines constituent states from leaves to root in an upward pass, building up to a single, global state vector. Conditioned on this state, the downward pass recursively unfolds the global vector following the parse tree back down from the root to the leaves. The output for each constituent in the downward pass can be used to produce classifications conditioned on the entire tree.",
"cite_spans": [
{
"start": 51,
"end": 71,
"text": "Paulus et al. (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "We add additional syntactic target inputs in both passes and refer to this new architecture as a Structural Tree Recursive Neural Network (STRNN). These additional inputs represent the current kind of CCG combination. In the upward step the parent tag (composed category) is additionally passed in with the two child states, while in the downward step the two child tags (decomposed categories) are passed in addition to the parent state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "The STRNN learns one embedding matrix for CCG syntactic categories E_Cat \u2208 R^{|V_CCG| \u00d7 s}, where V_CCG is the CCG category vocabulary (\u223c400 tags) and the embedding size is s = 50. Figure 2 shows a diagram of the STRNN.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Upward Pass. The initial state u_i for a leaf constituent i is the result of the transformation matrix H \u2208 R^{(s+c)\u00d7h}. This consumes a category embedding and the cue feature c to produce an initial upward state u_i of size h = 200. c is a binary indicator expressing whether the word is a cue. Note that only a word's CCG category and cue status are shown to the model. The STRNN first makes an upward pass through the syntax tree. This pass recursively combines two child constituent states u_left and u_right as well as the target parent category embedding s_parent \u2208 E_Cat to produce an upward state for the parent constituent u_parent. A weight matrix W\u2191 \u2208 R^{(2h+s)\u00d7h} is learned for this operation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "x = [u_left; u_right; s_parent]; u_parent = tanh(xW\u2191 + b\u2191)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Downward Pass. The top-level hidden state of size h, GLOBAL\u2191, is then transformed with the matrix G \u2208 R^{h\u00d7h} to make the GLOBAL\u2193 downward state following Paulus et al. (2014). This is recursively unfolded down the tree to the leaves using the learned matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "W\u2193 \u2208 R^{(2h+2s)\u00d72h}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "The recursive cell consumes two recurrent inputs following Paulus et al. (2014): the downward parent state d_parent and the upward state u_parent. We supply additional inputs which instruct the cell how to decompose the state vector: the target child category embeddings s_left, s_right \u2208 E_Cat. The cell produces a double-wide state vector which is split into left- and right-child state vectors d_left and d_right. Completing the downward pass results in a globally-informed information state d_j for each constituent j in the syntax tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "x = [u_parent; d_parent; s_left; s_right]; [d_left; d_right] = tanh(xW\u2193 + b\u2193)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Classifications are produced using the matrix C \u2208 R^{2h\u00d72} and a softmax activation. C takes as input, for any tree constituent j, the concatenation of the u_j and d_j vectors. This can be applied in the same way to tree leaves (words) and to any higher tree constituent to produce a scope judgement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Optimization. The model is semi-supervised. The *SEM dataset provides supervision signal only for which words are in scope, not general constituents. However, like the GB-TRNN, the STRNN produces classifications for all tree constituents, learning the general pattern from the supervision of words. During training the Adam optimizer was used with cross-entropy loss and 0.001 initial learning rate. Regularization is important with a small training set, and we used dropout following Gal and Ghahramani (2016) with recurrent connections set to 0.2 and others to 0.5. \"ST\" denotes self-trained embeddings learned jointly with the task; \"BERT\" denotes pretrained BERT embeddings. Table 1 shows performance results for the STRNN and key comparison models on the *SEM2012 test corpus. The conventional *SEM test metric is the F1 measure of individual sentence tokens predicted in-scope of the cue. Not shown here is Fancellu's Dependency-LSTM, which published results on a modified version of *SEM. The corpus consists of sentences where each word is annotated with a gold label of in-scope or out-of-scope, and the negation cue. We augment these with CCG parses (Stanojevi\u0107 and Steedman, 2019) . Adapted from Fancellu et al. (2017) is a na\u00efve baseline which predicts scope within the nearest punctuation marks left and right of the cue. It performs fairly well because punctuation indicates and delimits grammatical structure. Compared to the STRNN, this model overpredicts things like subordinate clauses (e.g. sentence (3)) and underpredicts distant arguments separated by e.g. an appositive.",
"cite_spans": [
{
"start": 485,
"end": 510,
"text": "Gal and Ghahramani (2016)",
"ref_id": "BIBREF5"
},
{
"start": 1147,
"end": 1178,
"text": "(Stanojevi\u0107 and Steedman, 2019)",
"ref_id": "BIBREF18"
},
{
"start": 1194,
"end": 1216,
"text": "Fancellu et al. (2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 665,
"end": 672,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4"
},
{
"text": "Our base syntactic model outperforms the comparison models on the task, achieving a new state of the art on this dataset. Notably, it performs at least as well as Fancellu's BiLSTM, the word embedding-based model which was previously the highest-scoring. Figure 1 actually shows correct model output on the example sentences, demonstrating the model on a simple case and a complex coordination. Given this, it might be interesting to ablate Fancellu's BiLSTM to see whether it still performs well using only embeddings for POS tags. We note that POS tags include punctuation markers, and as discussed in section 2 this model has been found to lean heavily on punctuation. The BiLSTM would be able to take advantage of these markers in the same way as before.",
"cite_spans": [],
"ref_spans": [
{
"start": 254,
"end": 262,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Word embeddings frequently boost performance in language disambiguation tasks by incorporating many kinds of information (Mikolov et al., 2013) . We tested augmented STRNN models by adding word embeddings to leaves in addition to syntax embeddings. We obtain an embedding for each word in the sentence using BERT (Devlin et al., 2019) and concatenate this word embedding with our basic syntactic vector, which includes the CCG category embedding and cue status for the given word. We then proceed as before with learning and classification. We also tested a variant with randomly initialized word vectors which are jointly learned with the task. We found that both self-trained word embeddings and pretrained BERT embeddings provided no noticeable benefit to F1 score on the task.",
"cite_spans": [
{
"start": 121,
"end": 143,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 313,
"end": 334,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The model classifies all constituents in the syntax tree. We examined local labeling decisions within trees in the development set and found that in 89.9% of cases where a parent constituent has both children classified in-scope, the model also classifies the parent in-scope. On average, a constituent has a 24.1% chance of in-scope classification. This shows the model's preference for representing scope in larger phrase structures, aligning with the syntactic theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Development set results for the base STRNN model were analyzed and 18 difficult sentences were found with accuracy below random-chance guessing. The largest group of errors (11 sentences) were likely caused by CCG parsing errors. Some have speech-like text with stuttering, parentheticals, etc., and four have improper attachments within the parse tree affecting the scoped text. A few other sentences have fine CCG parses but are very complex, including one with a 19-word noun phrase. Many of these errors are related to the style of the genre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "To analyze model robustness, Pearson correlations of accuracy vs. several factors were calculated. Negation cues do not always take a scope (such as with interjections), so we measure with accuracy instead of F1. No correlation with accuracy was found for sentence length, maximum tree depth, cue depth in the tree, or tree balance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "5.1"
},
{
"text": "We show the STRNN model effectively predicts negation scope using syntactic parse trees. State-of-the-art performance is achieved on the *SEM2012 Shared Task without identifying individual words or extracting features beyond syntax.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "This result supports earlier theories about the relationship between syntax and negation semantics. Both word embeddings and semantic reasoning incur large resource overhead in model parameters and processing, but efficient use of syntax is all that is needed for high performance on this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This work was supported in part by ERC H2020 Advanced Fellowship GA 742137 SEMANTAX, and an Edinburgh and Huawei Technologies Research Centre award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Minimal recursion semantics: An introduction",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on language and computation",
"volume": "3",
"issue": "2-3",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan A Sag. 2005. Minimal recursion semantics: An introduction. Research on language and computation, 3(2-3):281-332.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural networks for negation scope detection",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "495--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, and Bonnie Webber. 2016. Neural networks for negation scope detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 495-504.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Detecting negation scope is easy, except when it isn't",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Hangfeng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "58--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, Bonnie Webber, and Hangfeng He. 2017. Detecting negation scope is easy, except when it isn't. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 58-63.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural networks for cross-lingual negation scope detection",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Fancellu",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Fancellu, Adam Lopez, and Bonnie L. Webber. 2018. Neural networks for cross-lingual negation scope detection. CoRR, abs/1810.02156.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "1019--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019-1027.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Natural History of Negation",
"authors": [
{
"first": "Laurence",
"middle": [],
"last": "Horn",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurence Horn. 1989. A Natural History of Negation. University of Chicago Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Negation",
"authors": [
{
"first": "Laurence",
"middle": [
"R"
],
"last": "Horn",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "Wansing",
"suffix": ""
}
],
"year": 2017,
"venue": "The Stanford Encyclopedia of Philosophy, spring 2017 edition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurence R. Horn and Heinrich Wansing. 2017. Negation. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy, spring 2017 edition. Metaphysics Research Lab, Stanford University.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Student's Introduction to English Grammar",
"authors": [
{
"first": "Rodney",
"middle": [],
"last": "Huddleston",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"K"
],
"last": "Pullum",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1017/CBO9780511815515"
]
},
"num": null,
"urls": [],
"raw_text": "Rodney Huddleston and Geoffrey K. Pullum. 2005. A Student's Introduction to English Grammar. Cambridge University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Corpus annotation for mining biomedical events from literature",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "BMC bioinformatics",
"volume": "9",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim, Tomoko Ohta, and Jun'ichi Tsujii. 2008. Corpus annotation for mining biomedi- cal events from literature. BMC bioinformatics, 9(1):10.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning the scope of negation via shallow semantic parsing",
"authors": [
{
"first": "Junhui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hongling",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "671--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junhui Li, Guodong Zhou, Hongling Wang, and Qiaoming Zhu. 2010. Learning the scope of nega- tion via shallow semantic parsing. In Proceedings of the 23rd International Conference on Computa- tional Linguistics, COLING '10, pages 671-679, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Linguistic regularities in continuous space word representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"Wen-Tau"
],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-2013)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Scott Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continu- ous space word representations. In Proceedings of the 2013 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies (NAACL-HLT- 2013). Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "*SEM2012 shared task: Resolving the scope and focus of negation",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "Blanco",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "265--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante and Eduardo Blanco. 2012. *SEM2012 shared task: Resolving the scope and focus of nega- tion. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth Inter- national Workshop on Semantic Evaluation, pages 265-274. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simple negation scope resolution through deep parsing: A semantic solution to a semantic problem",
"authors": [
{
"first": "Woodley",
"middle": [],
"last": "Packard",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Dridan",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Woodley Packard, Emily M Bender, Jonathon Read, Stephan Oepen, and Rebecca Dridan. 2014. Sim- ple negation scope resolution through deep parsing: A semantic solution to a semantic problem. In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 69-78.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Global belief recursive neural networks",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "27",
"issue": "",
"pages": "2888--2896",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Richard Socher, and Christopher D Manning. 2014. Global belief recursive neural net- works. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 27, pages 2888-2896. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Uio1: Constituent-based discriminative ranking for negation resolution",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12",
"volume": "1",
"issue": "",
"pages": "310--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathon Read, Erik Velldal, Lilja , and Stephan Oepen. 2012. Uio1: Constituent-based discriminative rank- ing for negation resolution. In Proceedings of the First Joint Conference on Lexical and Computa- tional Semantics -Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, SemEval '12, pages 310- 318, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Negation triggers and their scope",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sabine Rosenberg. 2013. Negation triggers and their scope. Master's thesis, Concordia University.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "*SEM2012: First joint conference on lexical and computational semantics",
"authors": [
{
"first": "",
"middle": [],
"last": "*SEM",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "*SEM. 2012. *SEM2012: First joint conference on lexical and computational semantics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "CCG parsing algorithm with incremental tree rotation",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "228--239",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1020"
]
},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Mark Steedman. 2019. CCG parsing algorithm with incremental tree rotation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 228-239, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Negation and polarity",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2011,
"venue": "Taking Scope",
"volume": "",
"issue": "",
"pages": "175--208",
"other_ids": {
"DOI": [
"10.7551/mitpress/9780262017077.003.0011"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2011. Negation and polarity. In Tak- ing Scope, pages 175-208. The MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The bioscope corpus: Annotation for negation, uncertainty and their scope in biomedical texts",
"authors": [
{
"first": "Gy\u00f6rgy",
"middle": [],
"last": "Szarvas",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Vincze",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "J\u00e1nos",
"middle": [],
"last": "Csirik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, BioNLP '08",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gy\u00f6rgy Szarvas, Veronika Vincze, Rich\u00e1rd Farkas, and J\u00e1nos Csirik. 2008. The bioscope corpus: Anno- tation for negation, uncertainty and their scope in biomedical texts. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, BioNLP '08, pages 38-45, Stroudsburg, PA, USA. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(Left) Upward pass: recursive composition of constituent states from leaves to root. (Right) Downward pass: recursive decomposition of constituent states from root to leaves. NB: The model sees only the CCG parse and cue. Word identities are hidden.",
"type_str": "figure",
"uris": null,
"num": null
}
}
}
}