|
{ |
|
"paper_id": "K15-2006", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:08:31.146541Z" |
|
}, |
|
"title": "A Minimalist Approach to Shallow Discourse Parsing and Implicit Relation Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Chiarcos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Applied Computational Linguistics Lab Goethe University", |
|
"institution": "", |
|
"location": { |
|
"settlement": "Frankfurt am Main" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Schenk", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Applied Computational Linguistics Lab Goethe University", |
|
"institution": "", |
|
"location": { |
|
"settlement": "Frankfurt am Main" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe a minimalist approach to shallow discourse parsing in the context of the CoNLL 2015 Shared Task. 1 Our parser integrates a rule-based component for argument identification and datadriven models for the classification of explicit and implicit relations. We place special emphasis on the evaluation of implicit sense labeling, we present different feature sets and show that (i) word embeddings are competitive with traditional word-level features, and (ii) that they can be used to considerably reduce the total number of features. Despite its simplicity, our parser is competitive with other systems in terms of sense recognition and thus provides a solid ground for further refinement.", |
|
"pdf_parse": { |
|
"paper_id": "K15-2006", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe a minimalist approach to shallow discourse parsing in the context of the CoNLL 2015 Shared Task. 1 Our parser integrates a rule-based component for argument identification and datadriven models for the classification of explicit and implicit relations. We place special emphasis on the evaluation of implicit sense labeling, we present different feature sets and show that (i) word embeddings are competitive with traditional word-level features, and (ii) that they can be used to considerably reduce the total number of features. Despite its simplicity, our parser is competitive with other systems in terms of sense recognition and thus provides a solid ground for further refinement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Comprehending sentences and other textual units requires capabilities beyond capturing the lexical semantics of their components. Contextual information is needed, i.e., a semantically coherent representation of the logical structure of a text-be it written or spoken discourse, unidirectional or bidirectional communication, etc. Different formalisms have been proposed to model these assumptions in frameworks of coherence relations and discourse structure (Mann and Thompson, 1988; Lascarides and Asher, 1993; Webber, 2004) . In a more applied NLP context, the goal of shallow discourse parsing (SDP) is to automatically detect relevant discourse units and to label the relations that hold between them. Unlike deep discourse parsing, a stringent logical formalization or the establishment of a global data structure, say, a tree, is not required.", |
|
"cite_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 484, |
|
"text": "(Mann and Thompson, 1988;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 485, |
|
"end": 512, |
|
"text": "Lascarides and Asher, 1993;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 513, |
|
"end": 526, |
|
"text": "Webber, 2004)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "With the release of the Penn Discourse Treebank (Prasad et al., 2008, PDTB) , annotated training data for SDP has become available and, as a consequence, the field has considerably attracted researchers from the NLP and IR community. Informally, the PDTB annotation scheme describes a discourse unit as a syntactically motivated character span in the text, and augments with relations pointing from argument 2 (Arg2, prototypically, a discourse unit associated with an explicit discourse marker) to its antecedent, i.e., the discourse unit Arg1. Relations are labeled with a relation type (its sense) and the associated discourse marker (either as found in the text or as inferred by the annotator). PDTB distinguishes explicit and implicit relations depending on whether such a connector or cue phrase (e.g., because) is present, or not. 2 As an illustration, consider the following example from the PDTB:", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 75, |
|
"text": "(Prasad et al., 2008, PDTB)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 839, |
|
"end": 840, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Arg1: Solo woodwind players have to be creative if they want to work a lot Connector: because Arg2: their repertoire and audience appeal are limited", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this explicit relation, Arg1 and Arg2 are directly connected by the cue word; the relation type is Contingency.Cause.Reason-one out of roughly 20 three-level senses marking the relation sense between any given argument pair in the PDTB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We participate in the CoNLL 2015 Shared Task (Xue et al., 2015 ) with a minimalist end-to-end shallow discourse parser developed from scratch. It was, however, originally not specifically developed for this purpose, but created in preparation of more elaborate experiments on implicit intersentential relations in discourse, an aspect not explicitly addressed by the evaluation of the Shared Task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 62, |
|
"text": "(Xue et al., 2015", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remainder of the paper describes the architecture and functionality of our system: A rule-based component identifies explicit and implicit argument-pairs and two statistical, datadriven models classify senses. Our system suffers from the surface-based definition of argument spans and their evaluation as string ranges, but with respect to sense disambiguation (in particular, in terms of precision), it is competitive with other systems in the task. Inspired by the diversity of different approaches to handle the more challenging-and more interesting-non-explicit relations, our description focuses on inferring implicit senses and benefits from abstracting from traditional surface-based features in favor of distributional representations of the argument spans.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "At the moment, few full-fledged end-to-end discourse parsers exist, but they use different theories of discourse, e.g., PDTB (Lin et al., 2010) , or RST (duVerle and Prendinger, 2009; Feng and Hirst, 2012) . Most of the literature on automated discourse analysis has focused on specialized subtasks:", |
|
"cite_spans": [ |
|
{ |
|
"start": 125, |
|
"end": 143, |
|
"text": "(Lin et al., 2010)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 149, |
|
"end": 183, |
|
"text": "RST (duVerle and Prendinger, 2009;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 205, |
|
"text": "Feng and Hirst, 2012)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Argument identification is approached by, e.g., Ghosh et al. (2012) on the word and intersentential level, using a CRF-based approach including local and global features. Kong et al. (2014) tackle argument span detection on the constituent-level with features for subtrees and special constraints.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 67, |
|
"text": "Ghosh et al. (2012)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 171, |
|
"end": 189, |
|
"text": "Kong et al. (2014)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Explicit relation classification Classifying the senses of explicit relations is rather straightforward, given the cue phrase. introduce a refinement using syntactic features to disambiguate explicit connectives which increases performance close to a human baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Implicit relation classification In the early attempt by Marcu and Echihabi (2002) , implicit relation classification was grounded on synthetic training data (relation patterns with explicit cue phrases removed) and a Naive Bayes model trained on word-pair features. Aggregation over such word-pairs was described by Biran and McKeown (2013) , while Park and Cardie (2012) optimized feature sets through feature selection, preprocessing and special binning techniques.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 82, |
|
"text": "Marcu and Echihabi (2002)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 341, |
|
"text": "Biran and McKeown (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Out of these, implicit relation classification remains the most problematic subtask, and attracted 2009present an extensive evaluation of mostly linguistically motivated features for implicit sense labeling in a 4-way classification experiment. Useful indicators, among others, are verb information, polarity labels and the first and last three words of an argument. Lin et al. (2009) refine their work by introducing contextual and dependency information from the argument pairs and show that syntactic phrase-structure features help in level-2 relation type classifications. Moreover, Zhou et al. (2010) use a language model to \"predict\" explicit connectives from implicit relations. Our approach is most similar to the one by Rutherford and Xue (2014) , who successfully integrate distributional representations to substitute word-pair features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 384, |
|
"text": "Lin et al. (2009)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 605, |
|
"text": "Zhou et al. (2010)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 754, |
|
"text": "Rutherford and Xue (2014)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our SDP system participates in the closed track of the Shared Task. 3 Its components are illustrated in Figure 1 . Input is tokenized text in the provided JSON format including meta information about parts-of-speech and sentence boundaries.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 112, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The SDP pipeline processes the documents sentence by sentence. Due to the strict time constraints of the Shared Task, we have set up a rulebased detector for both Arg1 and Arg2 spans as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Extract an explicit Arg1-Arg2 pair, where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Arg2 is a complete sentence starting with an explicit connective. 4 The previous sentence serves as Arg1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 67, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Refining step 1, we extract sentenceinternal explicit Arg1-Arg2 pairs by applying the pattern BOS-Arg1-cue word-punctuation-Arg2-EOS. 5 Note that we require a punctuation symbol between both arguments to prevent the template from extracting, e.g., coordinated NPs such as chairman and chief executive.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 137, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 We take special care of explicit temporal Arg1-Arg2 relations and extract patterns of the form BOS-cue word-Arg2-comma-Arg1-EOS. Cue words are, e.g., while, although, unless.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 More complicated explicit patterns split the second argument into two parts by the cue word as with however in: Argument identification is tough. Writing patterns, however, is easy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Finally, we extract all relations between adjacent, complete sentences as Arg1 and Arg2 spans as implicit, iff Arg1-Arg2 is not already an explicit relation and Arg1-Arg2 does not cross a paragraph boundary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 EntRel and AltLex relations are beyond the scope of our current parser as both taken together make up only 14.3% of all relations in the training section of the PDTB.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Post processing A rule-based post-processor is applied on top of the previous component. Its purpose is to fix token lists for argument spans according to the guidelines of the Shared Task as no partial credit is given for non-exact matches. For example, a leading or trailing punctuation, quote or attribution spans must not be part of any of the arguments. This rule-based model had specifically to be developed for the Shared Task; it replaced a more elaborate argument identifier based on structured representations rather than character spans to represent the arguments of discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Given two argument spans and an explicit connective, we aim to predict the correct relation type (sense). To this end, we trained a simple statistical model 6 in a supervised setting on all explicit relations whose only feature is the cue word itself. An exhaustive list of cue words (features) was obtained from the training section of the PDTB data. Moreover, we restricted the set of labels to those eight senses that appear only frequently enough, i.e. we excluded those whose proportion is less than 5% of all explicit senses in the training section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Explicit Senses", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "A third component handles the classification of implicit senses for any implicit Arg1-Arg2 pair. Similar to the previous subtask, we restrict the label set (here to six senses). We trained various models only on implicit relations. Inspired by the previous literature on implicit sense classification, we experimented with different surfacebased word-pair feature sets for Arg1 and Arg2, as well as more abstract representations for the word forms, such as embeddings and word vectors: 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "1. Word-pair (WP) token features of Arg1 and Arg2: (i) normal-case (N ) as encountered in the text and (ii) after lower-case normalization (l), both with frequency thresholds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "2. Similar to (1.) but using word stems (Porter, 1980) instead.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 54, |
|
"text": "(Porter, 1980)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "3. Similar to (1.) but using a Brown cluster 3200 representation (Turian et al., 2010) for each word form if it exists. Otherwise, we use the word form as feature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 86, |
|
"text": "(Turian et al., 2010)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A subsequent experiment is concerned with finding a more compact representation of both Arg1 and Arg2 spans: For each argument pair, we computed two real-valued vectors (600 features in total), in which each argument is represented by a 300-dimensional feature vector. These were obtained by summing over all skip-gram neural word embeddings (Mikolov et al., 2013) present in each argument weighted by the respective number of elements (embeddings) found in each argument. The normalization is necessary to handle sentences of different lengths.", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 364, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Testing the effect of both Brown clusters and neural word embeddings, a final experiment combines them into one feature set for each implicit argument pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeling Implicit Senses", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In the overall task (based on the blind test set), our system is ranked at position 13 -rather poorly compared to 17 submitted systems in total (including a baseline). This is due to the imperfect argument identification, and in particular due to the erroneous recognition of explicit cue phrases. The system suffers from low overall recall of the identified explicit argument spans, including the connective. 8 A simple error analysis reveals that patterns in which cue phrases do not directly start the second argument are hard to identify by our rule-based system. Moreover, punctuation symbols pose problems to the system as well (cf. our discussion in Section 4.3). A separate evaluation shows that post-processing argument pairs improves F-score by 2%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Despite these obvious drawbacks, we would like to draw special attention to our statistical components for sense classification: for the argument pairs which were correctly recognized, our system is ranked at position 4 for sense precision, even outperforming the best three systems. We will elaborate more on these models in the next subsection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Identification", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The classification of explicit senses with only the connector word as single feature reaches an accuracy of 80.48% using the PDTB trainingdevelopment split. This is still below state-of-the art (94% in 9 -yet satisfying for our lightweight system with its original emphasis on implicit relations. Table 1 shows the results for implicit sense classification (472 instances in total) using different feature sets. 10 First, models trained on any of the feature sets significantly outperform the majority 8 Ranks for expl. Arg1-Arg2 prec., recall, F1: 12, 10, 11. Ranks for expl. connective prec., recall, F1: 15, 16, 15.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 304, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Explicit and Implicit Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "9 Note, however, that this is 4-way sense classification. 10 We also tested a broad band-width of sentiment and phrase-structure features, but with the resulting accuracies not outperforming the current experiments, these are omitted for reasons of brevity. class baseline (25.4%, Expansion.Conjunction). 11 Applying lower-case normalization to the input tends to improve classifier performance, but using a frequency threshold on the minimum number of occurrences of a feature does not: This is an interesting observation and not in line with the previous literature on implicit sense classification; Lin et al. (2009) , for example, use a frequency cutoff of 5 for feature selection. Also, stemming as another type of normalization seems not to be useful either and yields slightly lower accuracies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 60, |
|
"text": "10", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 602, |
|
"end": 619, |
|
"text": "Lin et al. (2009)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit and Implicit Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Noticeably, substituting surface-level word-pair features by the Brown Cluster 3200 embeddings yields a better performance. The difference is, however, not statistically significant. 12 More important, however, may be the positive side effect of a smaller feature space (\u22481.4 million) which is reduced by 23%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Explicit and Implicit Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We expect the skip-gram neural word embeddings (word vectors) to perform even better than Brown clusters: They are comparable in their contextual features but preserve the topology of the original feature space. Indeed, these are competitive with the low-frequency word-pair features and even significantly better than the configurations l 3 , l 4 , l 5 . Their greatest benefit can be seen in the overall number of real-valued features per instance (which is only 600 in our setting). Finally, a combination of Brown clusters and skip-gram embeddings yields the best results for the classification of implicit senses. This gain over using the embeddings alone may possibly be attributed to nonlinearities in the feature space which may be partially captured in the Brown clusters, but not with embeddings in a SVM. 13 We report detailed scores for this best-performing classifier in Table 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 816, |
|
"end": 818, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 884, |
|
"end": 891, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Explicit and Implicit Senses", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Exact argument identification is a crucial preprocessing step for any SDP pipeline. Our shallow discourse parser suffers from low overall recall of the correctly recognized (explicit) spans, which we see as the main source of poor performance in the task evaluation. Even though a system description may not be the right place for a general discussion about the appropriate representation of how arguments of discourse relations are to be defined and represented, we would like to point out that we see a potential issue in the rather strict evaluation of exact matches within the Shared Task (which does not allow for partial matches). Likewise problematic is an arguable definition of gold spans for Arg1 and Arg2 in the provided training data. As an illustration consider the following example: 14 Gold: Arg1: At any rate India needs the sugar Arg2: it will be in sooner or later to buy it Our System Output: Arg1: At any rate, she added, \"India needs the sugar Arg2: it will be in sooner or later to buy it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "At least on a general basis, both argument spans are correctly identified by our system. The only 14 Document ID: wsj 2265, Relation ID: 36896. difference is that punctuation symbols and attribution spans (she added) are not present in the gold data. Note, however, that a rule-based removal of such patterns is far from trivial, as syntactic patterns are complex and the PDTB gold data reveals many inconsistencies, especially regarding leading and trailing punctuation symbols. In this particular example, our system is capable of (i) identifying the correct explicit connective (so), and (ii) classifying its correct sense (Contingency.Cause.Result).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Nevertheless, it is not given any credit, as the system's token lists do not match the gold data. Very much related to the span identification problem sketched above is the detection of discontinuous argument spans and cases in which our system adds a subordinate clause to the argument, which is not present in the gold data. We believe that-in line with the annotation guidelines of the PDTBthese are relevant factors to consider when implementing a SDP, but that it should not affect the overall evaluation in such a strict and rigid manner. We would therefore encourage future evaluations to", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 either employ additional metrics permitting partial matches, e.g., using sliding-window metrics such as Pevzner and Hearst (2002) ,", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 131, |
|
"text": "Pevzner and Hearst (2002)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 or to ground argument definitions in psycholinguistically more plausible models of propositions, cf. Lascarides and Asher (1993) or Kintsch (1998) , resp.-their more operationalizable approximation in terms of, say, frame semantics as previously annotated for the PDTB data in the context of PropBank and NomBank Meyers et al., 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 130, |
|
"text": "Lascarides and Asher (1993)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 148, |
|
"text": "Kintsch (1998)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 335, |
|
"text": "Meyers et al., 2004)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "The latter idea may be challenging, as it involves efficient handling of multi-layer annotations for different major annotation projects, yet, experiments in this direction have successfully been conducted (Pustejovsky et al., 2005) . This integrative direction of research has been the original focus of our system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 232, |
|
"text": "(Pustejovsky et al., 2005)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Argument Span Identification", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "Our experiments indicate that frequency cutoffs to select word-pair features for implicit relation recognition do not seem to improve classifier performance. While some previous approaches (most notably Lin et al., 2009) incorporate cutoffs in their experiments, others do not. But if a frequency filter is applied, the specific value for the threshold is usually not motivated. We see a possible explanation for the negative impact of cutoffs in the extremely sparse feature space: Many word-pair features which are present in the training section of the PDTB are not found in the development set and vice versa, and with frequency cutoffs applied, sparsity even grows further. Closely related to our observation are earlier findings that using even a small stop word list has adverse effects on performance, which seems implausible at first sight (Blair-Goldensohn et al., 2007) . Biran and McKeown (2013) address this issue in closer detail by replacing the sparse lexical word-pair features by more dense, aggregated score features. Based on their experiments, the authors argue that the most powerful features are mainly function words. Yet, their lack of semantic content whatsoever still calls for an explanation why they are useful in distinguishing the different types of implicit relations-except through overfitting the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 220, |
|
"text": "Lin et al., 2009)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 880, |
|
"text": "(Blair-Goldensohn et al., 2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 883, |
|
"end": 907, |
|
"text": "Biran and McKeown (2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frequency Cutoffs for Word-Pair Feature Selection", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "As a side experiment, we performed 10-fold cross validation on the PDTB, and again trained implicit relations by varying the cutoff. The results are in line with our experiments reported in Table 1 showing the same trend, which reinforces the aforementioned sparsity issue.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 197, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Frequency Cutoffs for Word-Pair Feature Selection", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "Overall, we believe that more aggregated types of features have advantages over sparse features and that they are better in representing the underlying semantic relationship between argument pairs. We elaborate on this in our final subsection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frequency Cutoffs for Word-Pair Feature Selection", |
|
"sec_num": "4.3.2" |
|
}, |
|
{ |
|
"text": "Features Our experiments for implicit relation classification have shown that is is beneficial to abstract from surface-level (token) features for two reasons: (i) word embeddings seem to express a more general, semantic representation of the underlying relationship between two arguments in the discourse and (ii) the number of features involved in a classification can be significantly reduced which has a positive effect on the computational side.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstracting from Surface-Level", |
|
"sec_num": "4.3.3" |
|
}, |
|
{ |
|
"text": "Future research should be concerned with a closer inspection of how combinations of word embeddings can be used to increase classification results, especially when no explicit connectives are available. Instead of vector addition, as applied in our setting, we think that traditional vector-based similarity measures comparing both arguments spans seem to be highly promising in approaching their underlying semantic relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstracting from Surface-Level", |
|
"sec_num": "4.3.3" |
|
}, |
|
{ |
|
"text": "In the context of the CoNLL 2015 Shared Task, we have described a minimalist approach to shallow discourse parsing with an emphasis on implicit relation recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our system combines task-specific adaptations, i.e., rule-based discourse unit identification via templates, with data-driven models to infer senses of (esp. implicit) discourse relations. We described the system architecture and experiments conducted on implicit sense labeling. In this context, we motivated the need to model the relationship between arguments in a more abstract way using distributional representations instead of surface-based features. Our experiments are in line with previous work (most notably by Rutherford and Xue, 2014), while having shown that more abstract representations are at least equally powerful in predicting the correct senses and, also, that sparsity issues can be overcome. A slight improvement in performance has yielded a combination of distributional profiles for argument spans (Brown clusters and skip-gram neural word embeddings) which is promising and should be addressed in closer detail in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://www.cs.brandeis.edu/\u02dcclp/ conll15st/index.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The set of relation types is completed by alternative lexicalization (AltLex, discourse marker rephrased), entity relation (EntRel, i.e., anaphoric coherence), resp. the absence of any relation (NoRel).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.cs.brandeis.edu/\u02dcclp/ conll15st/dataset.html 4 An exhaustive list of explicit cue words was obtained from the training section of the PDTB, ranging from unigrams to 7-grams.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "BOS and EOS mark the beginning and the end of sentence, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all our experiments, we made use of the JAVA implementation of libsvm(Chang and Lin, 2011) with linear kernel and default parameters.7 A word-pair is defined as the cross product of any combination of words in both Arg1 and Arg2. Punctuation symbols were removed before processing. All features are treated as boolean if present (true) or absent (false).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In all experiments, we applied the \u03c7 2 test statistic to assess significance.12 We have tested the other Brown cluster representations provided, as well, but 100, 320 and 1000 cluster sets yielded lower accuracies.13 All results reported above were obtained with linear kernels. These experiments have also been conducted with RBF and polynomial kernels, whose performance was not reported here, as it did not yield an improvement. However, truly nonlinear models would be possible with multi-layered neural networks. While this may yield better results for word embeddings as features, such an experiment is left for future research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Or", |
|
"middle": [], |
|
"last": "Biran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "69--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Or Biran and Kathleen McKeown. 2013. Aggregated Word Pair Features for Implicit Discourse Relation Disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics, ACL 2013, 4-9 August 2013, Sofia, Bul- garia, Volume 2: Short Papers, pages 69-73.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Building and Refining Rhetorical-Semantic Relation Models", |
|
"authors": [ |
|
{ |
|
"first": "Sasha", |
|
"middle": [], |
|
"last": "Blair-Goldensohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Owen", |
|
"middle": [], |
|
"last": "Rambow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "428--435", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sasha Blair-Goldensohn, Kathleen McKeown, and Owen Rambow. 2007. Building and Refining Rhetorical-Semantic Relation Models. In Can- dace L. Sidner, Tanja Schultz, Matthew Stone, and ChengXiang Zhai, editors, HLT-NAACL, pages 428- 435. The Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "LIB-SVM: A library for support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "Chih-Chung", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Jen", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ACM Transactions on Intelligent Systems and Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIB- SVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technol- ogy, 2:27:1-27:27. Software available at http:// www.csie.ntu.edu.tw/\u02dccjlin/libsvm.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A Novel Discourse Parser Based on Support Vector Machine Classification", |
|
"authors": [ |
|
{

"first": "David",

"middle": [

"A"

],

"last": "duVerle",

"suffix": ""

},

{

"first": "Helmut",

"middle": [],

"last": "Prendinger",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "665--673", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A. duVerle and Helmut Prendinger. 2009. A Novel Discourse Parser Based on Support Vector Machine Classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Nat- ural Language Processing of the AFNLP: Volume 2 -Volume 2, ACL '09, pages 665-673, Stroudsburg, PA, USA. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Text-level Discourse Parsing with Rich Linguistic Features", |
|
"authors": [ |
|
{ |
|
"first": "Vanessa", |
|
"middle": [], |
|
"last": "Wei Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "60--68", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Text-level Discourse Parsing with Rich Linguistic Features. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics: Long Papers -Volume 1, ACL '12, pages 60-68, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Global Features for Shallow Discourse Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Sucheta", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Riccardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "150--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sucheta Ghosh, Giuseppe Riccardi, and Richard Jo- hansson. 2012. Global Features for Shallow Dis- course Parsing. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 150-159.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Comprehension: A Paradigm for Cognition", |
|
"authors": [ |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Kintsch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter Kintsch. 1998. Comprehension: A Paradigm for Cognition. Cambridge University Press, Cam- bridge.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Constituent-Based Approach to Argument Labeling with Joint Inference in Discourse Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Fang", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{

"first": "Hwee Tou",

"middle": [],

"last": "Ng",

"suffix": ""

},

{

"first": "Guodong",

"middle": [],

"last": "Zhou",

"suffix": ""

}
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "68--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fang Kong, Tou Hwee Ng, and Guodong Zhou. 2014. A Constituent-Based Approach to Argument La- beling with Joint Inference in Discourse Parsing. In Proceedings of the 2014 Conference on Em- pirical Methods in Natural Language Processing (EMNLP), pages 68-77. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Temporal Interpretation, Discourse Relations and Commonsense entailment", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Linguistics and Philosophy", |
|
"volume": "16", |
|
"issue": "5", |
|
"pages": "437--493", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Lascarides and Nicholas Asher. 1993. Tem- poral Interpretation, Discourse Relations and Com- monsense entailment. Linguistics and Philosophy, 16(5):437-493.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Recognizing Implicit Discourse Relations in the Penn Discourse Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Ziheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "343--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Nat- ural Language Processing: Volume 1 -Volume 1, EMNLP '09, pages 343-351, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A PDTB-Styled End-to-End Discourse Parser", |
|
"authors": [ |
|
{ |
|
"first": "Ziheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{

"first": "Hwee Tou",

"middle": [],

"last": "Ng",

"suffix": ""

},

{

"first": "Min-Yen",

"middle": [],

"last": "Kan",

"suffix": ""

}
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2010. A PDTB-Styled End-to-End Discourse Parser. CoRR, abs/1011.0835.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Rhetorical structure theory: Toward a functional theory of text organization", |
|
"authors": [ |
|
{

"first": "William",

"middle": [

"C"

],

"last": "Mann",

"suffix": ""

},

{

"first": "Sandra",

"middle": [

"A"

],

"last": "Thompson",

"suffix": ""

}
|
], |
|
"year": 1988, |
|
"venue": "Text", |
|
"volume": "8", |
|
"issue": "3", |
|
"pages": "243--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text, 8(3):243-281.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An Unsupervised Approach to Recognizing Discourse Relations", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdessamad", |
|
"middle": [], |
|
"last": "Echihabi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "368--375", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of the 40th Annual Meet- ing on Association for Computational Linguistics, ACL '02, pages 368-375, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Annotating Noun Argument Structure for NomBank", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruth", |
|
"middle": [], |
|
"last": "Reeves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Macleod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Szekely", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veronika", |
|
"middle": [], |
|
"last": "Zielinska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04). European Language Resources Association (ELRA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating Noun Argument Structure for NomBank. In Proceed- ings of the Fourth International Conference on Lan- guage Resources and Evaluation (LREC'04). Euro- pean Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Efficient Estimation of Word Representations in Vector Space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CoRR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Repre- sentations in Vector Space. CoRR, abs/1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Proposition Bank: An Annotated Corpus of Semantic Roles", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Comput. Linguist", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "71--106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Comput. Linguist., 31(1):71- 106, March.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Improving Implicit Discourse Relation Recognition Through Feature Set Optimization", |
|
"authors": [ |
|
{ |
|
"first": "Joonsuk", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joonsuk Park and Claire Cardie. 2012. Improving Implicit Discourse Relation Recognition Through Feature Set Optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 108-112, Seoul, South Korea, July. Association for Computational Linguistics, Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A critique and improvement of an evaluation metric for text segmentation", |
|
"authors": [ |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Pevzner", |
|
"suffix": "" |
|
}, |
|
{

"first": "Marti",

"middle": [

"A"

],

"last": "Hearst",

"suffix": ""

}
|
], |
|
"year": 2002, |
|
"venue": "Computational Linguistics", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "19--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lev Pevzner and Marti A Hearst. 2002. A critique and improvement of an evaluation metric for text seg- mentation. Computational Linguistics, 28(1):19- 36.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Using Syntax to Disambiguate Explicit Discourse Connectives in Text", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "13--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler and Ani Nenkova. 2009. Using Syntax to Disambiguate Explicit Discourse Connectives in Text. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Lin- guistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, Short Papers, pages 13-16.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatic Sense Prediction for Implicit Discourse Relations in Text", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "683--691", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic Sense Prediction for Implicit Discourse Relations in Text. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 -Vol- ume 2, ACL '09, pages 683-691, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "An algorithm for suffix stripping", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Porter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "130--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The Penn Discourse TreeBank 2.0", |
|
"authors": [ |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Dinesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eleni", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Livio", |
|
"middle": [], |
|
"last": "Robaldo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In In Proceedings of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky, chapter Merging PropBank", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Pustejovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Pustejovsky, Adam Meyers, Martha Palmer, and Massimo Poesio, 2005. Proceedings of the Work- shop on Frontiers in Corpus Annotations II: Pie in the Sky, chapter Merging PropBank, NomBank, TimeBank, Penn Discourse Treebank and Corefer- ence, pages 5-12. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Discovering Implicit Discourse Relations Through Brown Cluster Pair Representation and Coreference Patterns", |
|
"authors": [ |
|
{ |
|
"first": "Attapol", |
|
"middle": [], |
|
"last": "Rutherford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "645--654", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Attapol Rutherford and Nianwen Xue. 2014. Discov- ering Implicit Discourse Relations Through Brown Cluster Pair Representation and Coreference Pat- terns. In Proceedings of the 14th Conference of the European Chapter of the Association for Computa- tional Linguistics, pages 645-654. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Word Representations: A Simple and General Method for Semi-Supervised Learning", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lev-Arie", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-Supervised Learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Up- psala, Sweden, July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "D-LTAG: extending lexicalized TAG to discourse", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Cognitive Science", |
|
"volume": "28", |
|
"issue": "5", |
|
"pages": "751--779", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie L. Webber. 2004. D-LTAG: extending lex- icalized TAG to discourse. Cognitive Science, 28(5):751-779.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "The CoNLL-2015 Shared Task on Shallow Discourse Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{

"first": "Hwee Tou",

"middle": [],

"last": "Ng",

"suffix": ""

},

{

"first": "Sameer",

"middle": [],

"last": "Pradhan",

"suffix": ""

},

{

"first": "Rashmi",

"middle": [],

"last": "Prasad",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "Bryant",

"suffix": ""

},

{

"first": "Attapol",

"middle": [],

"last": "Rutherford",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning: Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 Shared Task on Shallow Discourse Parsing. In Proceedings of the Nine- teenth Conference on Computational Natural Lan- guage Learning: Shared Task, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Predicting Discourse Connectives for Implicit Discourse Relation Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Zhi-Min", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng-Yu", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chew Lim", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1507--1514", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting Discourse Connectives for Implicit Discourse Relation Recog- nition. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 1507-1514, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Our three-component SDP pipeline. considerable interest: Pitler et al.", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>N0/l0</td><td>N1/l1</td><td>N2/l2</td><td>N3/l3</td><td>N4/l4</td><td>N5/l5</td></tr><tr><td>WP / Tokens 36.65WP / Stems -/36.23</td><td>-/33.89</td><td>-/32.84</td><td>-/31.99</td><td>-/33.05</td><td>-/30.72</td></tr><tr><td>WP / Brown Cluster 3200 36.86Word Vectors 36.23/37.28</td><td/><td/><td/><td/><td/></tr><tr><td>WP / Brown Cluster + Word Vectors 37.28/39.41</td><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "/38.14 36.23/34.53 33.68/32.84 32.84/33.05 31.57/32.63 30.08/32.63 /38.77 35.38/35.17 33.90/36.07 35.38/34.11 34.96/33.47 32.63/33.89", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Prec</td><td>Rec</td><td>F1</td></tr><tr><td>Expansion.Conjunction</td><td colspan=\"3\">43.09 67.50 52.59</td></tr><tr><td>Expansion.Restatement</td><td colspan=\"3\">32.68 49.50 39.37</td></tr><tr><td>Comparison.Contrast</td><td colspan=\"3\">42.85 18.29 25.64</td></tr><tr><td colspan=\"4\">Contingency.Cause.Reason 41.26 35.61 38.23</td></tr><tr><td>Contingency.Cause.Result</td><td colspan=\"3\">40.00 16.32 23.18</td></tr><tr><td>Expansion.Instantiation</td><td colspan=\"3\">46.15 12.76 20.00</td></tr></table>", |
|
"text": "Accuracies for 6-way implicit sense labeling and different feature sets when tokens are treated in normal-case (N ) or after lower-case preprocessing (l). Subscripts indicate frequency thresholds for feature selection (0 means no threshold applied). Majority class baseline: 25.4%.", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Detailed classification scores for the bestperforming classifier combining Brown Cluster 3200 and skip-gram embeddings.", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |