|
{ |
|
"paper_id": "N03-1030", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:07:20.355470Z" |
|
}, |
|
"title": "Sentence Level Discourse Parsing using Syntactic and Lexical Information", |
|
"authors": [ |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Soricut", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Southern California", |
|
"location": { |
|
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", |
|
"postCode": "90292", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Southern California", |
|
"location": { |
|
"addrLine": "4676 Admiralty Way, Suite 1001 Marina del Rey", |
|
"postCode": "90292", |
|
"region": "CA" |
|
} |
|
}, |
|
"email": "marcu\[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We introduce two probabilistic models that can be used to identify elementary discourse units and build sentence-level discourse parse trees. The models use syntactic and lexical features. A discourse parsing algorithm that implements these models derives discourse parse trees with an error reduction of 18.8% over a state-ofthe-art decision-based discourse parser. A set of empirical evaluations shows that our discourse parsing model is sophisticated enough to yield discourse trees at an accuracy level that matches near-human levels of performance. 2 The Corpus For the experiments described in this paper, we use a publicly available corpus (RST-DT, 2002) that contains 385", |
|
"pdf_parse": { |
|
"paper_id": "N03-1030", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We introduce two probabilistic models that can be used to identify elementary discourse units and build sentence-level discourse parse trees. The models use syntactic and lexical features. A discourse parsing algorithm that implements these models derives discourse parse trees with an error reduction of 18.8% over a state-ofthe-art decision-based discourse parser. A set of empirical evaluations shows that our discourse parsing model is sophisticated enough to yield discourse trees at an accuracy level that matches near-human levels of performance. 2 The Corpus For the experiments described in this paper, we use a publicly available corpus (RST-DT, 2002) that contains 385", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "By exploiting information encoded in human-produced syntactic trees (Marcus et al., 1993) , research on probabilistic models of syntax has driven the performance of syntactic parsers to about 90% accuracy (Charniak, 2000; Collins, 2000) . The absence of semantic and discourse annotated corpora prevented similar developments in semantic/discourse parsing. Fortunately, recent annotation projects have taken significant steps towards developing semantic (Fillmore et al., 2002; Kingsbury and Palmer, 2002) and discourse (Carlson et al., 2003) annotated corpora. Some of these annotation efforts have already had a computational impact. For example, Gildea and Jurafsky (2002) developed statistical models for automatically inducing semantic roles. In this paper, we describe probabilistic models and algorithms that exploit the discourseannotated corpus produced by Carlson et al. (2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 89, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 221, |
|
"text": "(Charniak, 2000;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 236, |
|
"text": "Collins, 2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 477, |
|
"text": "(Fillmore et al., 2002;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 505, |
|
"text": "Kingsbury and Palmer, 2002)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 520, |
|
"end": 542, |
|
"text": "(Carlson et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 675, |
|
"text": "Gildea and Jurafsky (2002)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 866, |
|
"end": 887, |
|
"text": "Carlson et al. (2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A discourse structure is a tree whose leaves correspond to elementary discourse units (edu)s, and whose internal nodes correspond to contiguous text spans (called discourse spans). An example of a discourse structure is the tree given in Figure 1 . Each internal node in a discourse tree is characterized by a rhetorical relation, such as ATTRIBUTION and ENABLEMENT. Within a rhetorical relation a discourse span is also labeled as either NUCLEUS or SATELLITE. The distinction between nuclei and satellites comes from the empirical observation that a nucleus expresses what is more essential to the writer's purpose than a satellite. Discourse trees can be represented graphically in the style shown in Figure 1 . The arrows link the satellite to the nucleus of a rhetorical relation. Arrows are labeled with the name of the rhetorical relation that holds between the linked units. Horizontal lines correspond to text spans, and vertical lines identify text spans which are nuclei.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 246, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 711, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we introduce two probabilistic models that can be used to identify elementary discourse units and build sentence-level discourse parse trees. We show how syntactic and lexical information can be exploited in the process of identifying elementary units of discourse and building sentence-level discourse trees. Our evaluation indicates that the discourse parsing model we propose is sophisticated enough to achieve near-human levels of performance on the task of deriving sentence-level discourse trees, when working with human-produced syntactic trees and discourse segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Wall Street Journal articles from the Penn Treebank. The corpus comes conveniently partitioned into a Training set of 347 articles (6132 sentences) and a Test set of 38 articles (991 sentences). Each document in the corpus is paired with a discourse structure (tree) that was manually built in the style of Rhetorical Structure Theory (Mann and Thompson, 1988) . (See (Carlson et al., 2003) for details concerning the corpus and the annotation process.) Out of the 385 articles in the corpus, 53 have been independently annotated by two human annotators. We used this doubly-annotated subset to compute human agreement on the task of discourse structure derivation. In our experiments we used as discourse structures only the discourse sub-trees spanning over individual sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 360, |
|
"text": "(Mann and Thompson, 1988)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 368, |
|
"end": 390, |
|
"text": "(Carlson et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Because the discourse structures had been built on top of sentences already associated with syntactic trees from the Penn Treebank, we were able to create a composite corpus which allowed us to perform an empirically driven syntax-discourse relationship study. This composite corpus was created by associating each sentence \u00a2 in the discourse corpus with its corresponding Penn Treebank syntactic parse tree", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u00a2 \u00a4 \u00a3 \u00a6 \u00a5 \u00a7 \u00a9 \u00a6 \u00a7 \" ! # \u00a2 $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "and its corresponding sentence-level discourse tree", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "% & \u00a2 ' ) ( 0 1 2 \u00a2 ' ' \" ! # \u00a2 $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": ". Although human annotators were free to build their discourse structures without enforcing the existence of wellformed discourse sub-trees for each sentence, in about 95% of the cases in the (RST- DT, 2002) corpus, there exists a discourse sub-tree", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 207, |
|
"text": "DT, 2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "% 3 \u00a2 ' ) ( 0 4 \u00a2 ' ' \u00a6 ! 5 \u00a2 $ associated with each sentence \u00a2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The remaining 5% of the sentences cannot be used in our approach, as no well-formed discourse tree can be associated with these sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Therefore, our Training section consists of a set of 5809 triples of the form", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "6 \u00a2 3 7 8 \u00a2 \u00a4 \u00a3 \" \u00a5 \u00a7 \u00a9 \u00a6 \u00a7 9 \u00a6 ! 5 \u00a2 $ ) 7 @ % & \u00a2 ' ) ( 0 4 2 \u00a2 ' \u00a4 9 \u00a6 ! 5 \u00a2 $ @ A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "which are used to train the parameters of the statistical models. Our Test section consists of a set of 946 triples of a similar form, which are used to evaluate the performance of our discourse parser. The (RST-DT, 2002) corpus uses 110 different rhetorical relations. We found it useful to also compact these relations into classes, as described by Carlson et al. (2003) , and operate with the resulting 18 labels as well (seen as coarser granularity rhetorical relations). Operating with different levels of granularity allows one to get deeper insight into the difficulties of assigning the appropriate rhetorical relation, if any, to two adjacent text spans.", |
|
"cite_spans": [ |
|
{ |
|
"start": 351, |
|
"end": 372, |
|
"text": "Carlson et al. (2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We break down the problem of building sentence-level discourse trees into two sub-problems: discourse segmentation and discourse parsing. Discourse segmentation is covered by this section, while discourse parsing is covered by Section 4. Discourse segmentation is the process in which a given text is broken into non-overlapping segments called elementary discourse units (edus). In the present work, elementary discourse units are taken to be clauses or clauselike units that are unequivocally the NUCLEUS or SATEL-LITE of a rhetorical relation that holds between two adjacent spans of text (see (Carlson et al., 2003) for details). Our approach to discourse segmentation breaks the problem further into two sub-problems: sentence segmentation and sentence-level discourse segmentation. The problem of sentence segmentation has been studied extensively, and tools such as those described by Palmer and Hearst (1997) and Ratnaparkhi (1998) can handle it well. In this section, we present a discourse segmentation algorithm that deals with segmenting sentences into elementary discourse units.", |
|
"cite_spans": [ |
|
{ |
|
"start": 597, |
|
"end": 619, |
|
"text": "(Carlson et al., 2003)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 916, |
|
"text": "Palmer and Hearst (1997)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 921, |
|
"end": 939, |
|
"text": "Ratnaparkhi (1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmenter", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The discourse segmenter proposed here takes as input a sentence and outputs its elementary discourse unit boundaries. Our statistical approach to sentence segmentation uses two components: a statistical model which assigns a probability to the insertion of a discourse boundary after each word in a sentence, and a segmenter, which uses the probabilities computed by the model for inserting discourse boundaries. We first focus on the statistical model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A good model of discourse segmentation needs to account both for local interactions at the word level and for global interactions at more abstract levels. Consider, for example, the syntactic tree in Figure 2 . According to our hypothesis, the discourse boundary inserted between the words says and it is best explained not by the words alone, but by the lexicalized syntactic structure [VP(says) [VBZ(says) B SBAR(will)]], signaled by the boxed nodes in Figure 2 . Hence, we hypothesize that the discourse boundary in our example is best explained by the global interaction between the verb (the act of saying) and its clausal complement (what is being said). Given a sentence", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 200, |
|
"end": 208, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 463, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u00a2 D C F E 9 G E I H Q P R P \u00a4 P E I S T P \u00a4 P R P U E I V", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", we first find the syntactic parse tree \u00a7 of \u00a2 . We used in our experiments both syntactic parse trees obtained using Charniak's parser (2000) and syntactic parse trees from the PennTree bank. Our statistical model assigns a segmenting probability", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 143, |
|
"text": "(2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W X ! # Y S @E S 7 U \u00a7 U $ for each word E S", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Y S b a b c", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "boundary, no-boundaryd . Because our model is concerned with discourse segmentation at sentence level, we define", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W X ! boundary`E V 7 @ \u00a7 U $ e C g f", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", i.e., the sentence boundary is always a discourse boundary as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our model uses both lexical and syntactic features for determining the probability of inserting discourse boundaries. We apply canonical lexical head projection rules (Magerman, 1995) in order to lexicalize syntactic trees. For each word E , the upper-most node with lexical head E which has a right sibling node determines the features on the basis of which we decide whether to insert a discourse boundary. We denote such node . In the example in Figure 2 , we determine whether to insert a discourse boundary after the word says using as features node", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 183, |
|
"text": "(Magerman, 1995)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 457, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h u r e C w v \u00a6 x y ! 3 3 4 3 $ and its children h i C v \" 4 ! 3 & 4 2 $ and h t C 3 \u00a6 y ! 2 \u00a6 \" $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". We use our corpus to estimate the likelihood of inserting a discourse boundary between word E and the next word using formula (1),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W X ! 5 \u1ef2E u 7 @ \u00a7 U $ e d \u00a5 \u00a7 ) ! 5 h s r u f g P R P R P @ h h i t B ' h P R P R P 8 $ d \u00a5 \u00a7 ) ! 5 h r f i P R P \u00a4 P U h i h h b P \u00a4 P R P 8 $ (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where the numerator represents all the counts of the rule", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h r f i P R P \u00a4 P @ h i h h Q P R P R P", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "for which a discourse boundary has been inserted after word E , and the denominator represents all the counts of the rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Because we want to account for boundaries that are motivated lexically as well, the counts used in formula (1) are defined over lexicalized rules. Without lexicalization, the syntactic context alone is too general and fails to distinguish genuine cases of discourse boundaries from incorrect ones. As can be seen in Figure 3 , the same syntactic context may indicate a discourse boundary when the lexical heads passed and without are present, but it may not indicate a boundary when the lexical heads priced and at are present.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 324, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The discourse segmentation model uses the corpus presented in Section 2 in order to estimate probabilities for inserting discourse boundaries using equation 1. We also use a simple interpolation method for smoothing lexicalized rules to accommodate data sparseness.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Once we have the segmenting probabilities given by the statistical model, a straightforward algorithm is used to implement the segmenter. Given a syntactic tree \u00a7 , the algorithm inserts a boundary after each word E for which", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W X ! boundary`E u 7 U \u00a7 U $ k j m l T P o n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Segmentation Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the setting presented here, the input to the discourse parser is a Discourse Segmented Lexicalized Syntactic Tree (i.e., a lexicalized syntactic parse tree in which the discourse boundaries have been identified), henceforth called a DS-LST. An example of a DS-LST in the tree in Figure 2 . The output of the discourse parser is a discourse parse tree, such as the one presented in Figure 1 . As in other statistical approaches, we identify two components that perform the discourse parsing task. The first component is the parsing model, which assigns a probability to every potential candidate parse tree. Formally, given a discourse tree p q and a set of parameters r , the parsing model estimates the conditional probability", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 290, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 392, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "W X ! s p q r $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ". The most likely parse is then given by formula (2).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p X u t w v x w y Q C F \u00a9 \" ' z \" { | \u00a9 \" } 1 Q W X ! s p X r $", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The second component is called the discourse parser, and it is an algorithm for finding", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "p X t w v x w y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ". We first focus on the parsing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A discourse parse tree can be formally represented as a set of tuples. The discourse tree in Figure 1 , for example, can be formally written as the set of tuples estimates the goodness of the structure of . We expect these probabilities to prefer the hierarchical structure (1, (2, 3)) over ((1,2) , 3) for the discourse tree in Figure 1 . For each tuple a p q", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 297, |
|
"text": "((1,2)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 101, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 337, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "c ATTRIBUTION-SN[1,1,3]7 ENABLEMENT-NS[2,2,3]d . A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W X ! s p q r $ C @ Q W b x ! s % \u00a6 \u00a2 \" ! s R $ r $ I W 3 ! s @ ! s R $ r $", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ", the probability W estimates the goodness of the discourse relation of . We expect these probabilities to prefer the rhetorical relation ATTRIBUTION-NS over CONTRAST-NN for the relation between spans 1 and T 7 @ in the discourse tree in Figure 1 . The overall probability of a discourse tree is obtained multiplying the structural probabilities without running into a severe sparseness problem. To overcome this, we map the input DS-LST into a more abstract representation that contains only the salient features of the DS-LST. This mapping leads to the notion of a dominance set over a discourse segmented lexicalized syntactic tree. In what follows, we define this notion and show that it provides adequate parameterization for the discourse parsing problem.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 246, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The dominance set of a DS-LST contains feature representations of a discourse segmented lexicalized syntactic tree. Each feature is a representation of the syntactic and lexical information that is found at the point where two edus are joined together in a DS-LST. Our hypothesis is that such \"attachment\" points in the structure of a DS-LST (the boxed nodes in the tree in Figure 4 ) carry the most indicative information with respect to the potential discourse tree we want to build. A set representation of the \"attachment\" points of a DS-LST is called the dominance set of a DS-LST. . The edu which has as head node the root of the DS-LST is called the exception edu. In our example, the head word for edu 2 is C m 3 \"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 382, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ", and its head node is , and its lexical head says belongs to edu 1; the attachment node of edu 3 is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "h t \u00a1 C \u00a2 v \u00a6 x ! \u00a3 y 3 \u00a4 \" $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ", and its lexical head use belongs to edu 2. We write formally that two edus ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "h as ! s X 7 \u00a5 h $ k \u00a6 ! 5 \u00a7 7 @ h $ .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The dominance set of a DS-LST is given by all the edu pairs linked through a head node and an attachment node in the DS-LST. Each element in the dominance set represents a dominance relationship between the edus involved. Figure 4 shows the dominance set p for our example DS-LST. We say that edu 2 is dominated by edu 1 (shortly written X \u00a6 f ), and edu 3 is dominated by edu 2 (", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 230, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Dominance Set of a DS-LST", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": ").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "t \u00a6 \u00a9", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our discourse parsing model uses the dominance set p of a DS-LST as the conditioning parameter r in equation (3). The discourse parsing model we propose uses the dominance set p to compute the probability of a discourse parse tree p X according to formula (4). ), we filter out the lexical heads and keep only the syntactic labels; also, we filter out all the elements of p which do not have at least one edu inside the span of . In our running example, for instance, for 9 C , would most likely influence the structure probability of . In the case of W (the probability of the relation ), we keep both the lexical heads and the syntactic labels, but filter out the edu identifiers (clearly, the relation between two spans does not depend on the positions of the spans involved); also, we filter out all the elements of p whose dominance relationship does not hold across the two sub-spans of . In our running example, for D C", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W X ! s p X p \u00aa $ \u00ab C @ Q W x ! s % \u00a6 \u00a2 \" ! s R $\u00ac s \u00a7 \u00a4 x ! s 7 \u00a5 p \u00aa $ U $ I W & ! U ! 5 R $\u00ac \u00a7 ' \u00a4 2 ! 5 7 @ p e $ U $ U $", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "The Discourse Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ENABLEMENT-NS 7 \u00a5 T 7 @ 2 , \u00ac \u00a7 ' x ! s 7 \u00a5 p \u00aa $ C c ! # 7 8 \u00ae h D s $ t \u00a6 ! f & 7 \u00a5\u00b0 \u00b1 W u $ ) 7 \u00a4 ! s 7 \u00a5 Q $ q \u00a6 \u00b2 ! 5 T 7 \u00a5\u00b0 u W u $ d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "ENABLEMENT-NS T 7 \u00a5 7 \u00a5 , \u00ac \u00a7 ' ' 3 ! s 7 \u00a5 p \u00aa $ C c ! \u00a7 ( 2 $ t \u00a6 \u00b0 u W X ! s 0 y \u00a2 ' $ 8 d", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": ". The conditional probabilities involved in equation 4are estimated from the training corpus using maximum likelihood estimation. A simple interpolation method is used for smoothing to accommodate data sparseness. The counts for the dependency sets are also smoothed using symbolic names for the edu identifiers and accounting only for the distance between them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discourse Model", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Our discourse parser implements a classical bottom-up algorithm. The parser searches through the space of all legal discourse parse trees and uses a dynamic programming algorithm. If two constituents are derived for the same discourse span, then the constituent for which the model assigns a lower probability can be safely discarded. Figure 5 shows a discourse structure created in a bottom-up manner for the DS-LST in Figure 2 . Tuple 2, 3 ] has a score of 0.40, obtained as the product between the structure probability W x of 0.47 and the relation probability W of 0.88. Tuple ATTRIBUTION-SN[1,1,3] has a score of 0.37 for the structure, and a score of 0.009 for the relation. The final score for the entire discourse structure is 0.001. All probabilities used were estimated from our training corpus. According to our discourse model, the discourse structure in Figure 5 is the most likely among all the legal discourse structures for our example sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 437, |
|
"end": 439, |
|
"text": "2,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 440, |
|
"end": 441, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 335, |
|
"end": 343, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 428, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 603, |
|
"text": "Tuple ATTRIBUTION-SN[1,1,3]", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 876, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Discourse Parser", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this section we present the evaluations carried out for both the discourse segmentation task and the discourse parsing task. For this evaluation, we re-trained Charniak's parser (2000) such that the test sentences from the discourse corpus were not seen by the syntactic parser during training.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 187, |
|
"text": "(2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We train our discourse segmenter on the Training section of the corpus described in Section 2, and test it on the Test section. The training regime uses syntactic trees from the Penn Treebank. The metric we use to evaluate the discourse segmenter records the accuracy of the discourse segmenter with respect to its ability to insert inside-sentence discourse boundaries. That is, if a sentence has 3 edus, which correspond to 2 inside-sentence discourse boundaries, we measure the ability of our algorithm to correctly identify these 2 boundaries. We report our evaluation results using recall, precision, and Fscore figures. This metric is harsher than the metric previously used by Marcu (2000) , who assesses the performance of a discourse segmentation algorithm by counting how often the algorithm makes boundary and noboundary decisions for every word in a sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 684, |
|
"end": 696, |
|
"text": "Marcu (2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We compare the performance of our probabilistic discourse segmenter with the performance of the decisionbased segmenter proposed by (Marcu, 2000) and the performance of two baseline algorithms. The first baseline (\u00ae \u00aa f \u00a4 p \u00aa ) uses punctuation to determine when to insert a boundary; because commas are often used to indicate breaks inside long sentences, inserts discourse boundaries after each text span whose corresponding syntactic subtree is labeled S, SBAR, or SINV. We also compute the agreement between human annotators on the discourse segmentation task ( p \u00aa ), using the doubly-annotated discourse corpus mentioned in Section 2. Table 1 shows the results obtained by the algorithm described in this paper (", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 145, |
|
"text": "(Marcu, 2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 641, |
|
"end": 648, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "b \u00a3 \u00a6 \u00a5 u p \u00aa I ! h \u2022 \u00b9 $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ") using syntactic trees produced by Charniak's parser (2000) , in comparison with the results obtained by the algorithm described in (Marcu, 2000) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 60, |
|
"text": "Charniak's parser (2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 133, |
|
"end": 146, |
|
"text": "(Marcu, 2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "(p q ) p \u00aa )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ", and baseline algorithms", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "\u00ae q f ' p \u00aa and \u00ae t 3 p \u00aa", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ", on the same test set. Crucial to the performance of the discourse segmenter is the recall figure, because we want to find as many discourse boundaries as possible. The baseline algorithms are too simplistic to yield good results (recall figures of 28.2% and 25.4%). The algorithm presented in this paper gives an error reduction in missed discourse boundaries of 24.5% (recall accuracy improvement from 77.1% to 82.7%) over (Marcu, 2000) . The overall error reduction is of 15.1% (improvement in F-score from 80.1% to 83.1%).", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 439, |
|
"text": "(Marcu, 2000)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to asses the impact on the performance of the discourse segmenter due to incorrect syntactic parse trees, we also carry an evaluation using syntactic trees from the Penn Treebank. The results are shown in row b \u00a3 \u00a6 \u00a5 u p \u00aa I ! s 9 b $", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": ". Perfect syntactic trees lead to a further error reduction of 9.5% (F-score improvement from 83.1% to 84.7%). The performance ceiling for discourse segmentation is given by the human annotation agreement F-score of 98.3%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Segmenter", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We train our discourse parsing model on the Training section of the corpus described in Section 2, and test it on the Test section. The training regime uses syntactic trees from the Penn Treebank. The performance is assessed using labeled recall and labeled precision as defined by the standard Parseval metric (Black et al., 1991) . As mentioned in Section 2, we use both 18 labels and 110 labels for the discourse relations. The recall and precision figures are combined into an F-score figure in the usual manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 331, |
|
"text": "(Black et al., 1991)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The discourse parsing model uses syntactic trees produced by Charniak's parser (2000) and discourse segments produced by the algorithm described in Section 3. We compare the performance of our model ( \u00a3 \" \u00a5 u p \u00aa W ) with the performance of the decision-based discourse parsing model (p q ) p q W", |
|
"cite_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 85, |
|
"text": "Charniak's parser (2000)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": ") proposed by (Marcu, 2000) , and ). The baseline algorithm builds right-branching discourse trees labeled with the most frequent relation encountered in the training set (i.e., ELABORATION-NS) . We also compute the agreement between human annotators on the discourse parsing task ( p q W ), using the doubly-annotated discourse corpus mentioned in Section 2. The results are shown in Table 2 . The baseline algorithm has a performance of 23.4% and 20.7% F-score, when using 18 labels and 110 labels, respectively. Our algorithm has a performance of 49.0% and 45.6% F-score, when using 18 labels and 110 labels, respectively. These results represent an error reduction of 18.8% (F-score improvement from 37.2% to 49.0%) over a state-of-the-art discourse parser (Marcu, 2000) when using 18 labels, and an error reduction of 15.7% (F-score improvement from 35.5% to 45.6%) when using 110 labels. The performance ceiling for sentence-level discourse structure derivation is given by the human annotation agreement F-score of 77.0% and 71.9%, when using 18 labels and 110 labels, respectively. The performance gap between the results of b \u00a3 \u00a6 \u00a5 u p q W and human agreement is still large, and it can be attributed to three possible causes: errors made by the syntactic parser, errors made by the discourse segmenter, and the weakness of our discourse model. In order to quantitatively asses the impact in performance of each possible cause of error, we perform further experiments. We replace the syntactic parse trees produced by Charniak's parser at 90% accuracy ( \u2022 ) with the corresponding Penn Treebank syntactic parse trees produced by human annotators ( o ). We also replace the discourse boundaries produced by our discourse segmenter at 83% accuracy ( \u00a7 \u2022 ) with the discourse boundaries taken from (RST- DT, 2002) , which are produced by the human annotators ( Q ). The results are shown in Table 3 . The results in column 9 k \u2022", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 27, |
|
"text": "(Marcu, 2000)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 178, |
|
"end": 193, |
|
"text": "ELABORATION-NS)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 761, |
|
"end": 774, |
|
"text": "(Marcu, 2000)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1810, |
|
"end": 1819, |
|
"text": "DT, 2002)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 385, |
|
"end": 392, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1897, |
|
"end": 1904, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "show that using perfect syntactic trees leads to an error reduction of 14.5% (F-score improvement from 49.0% to 56.4%) when using 18 labels, and an error reduction of 12.9% (F-score improvement from 45.6% to 52.6%) when using 110 labels. The results in column \u2022", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "show that the impact of perfect discourse segmentation is double the impact of perfect syntactic trees. Human-level performance on discourse segmentation leads to an error reduction of 29.0% (F-score improvement from 49.0% to 63.8%) when using 18 labels, and an error reduction of 25.6% (F-score improvement from 45.6% to 59.5%) when using 110 labels. Together, perfect syntactic trees and perfect discourse segmentation lead to an error reduction of 52.0% (F-score improvement from 49.0% to 75.5%) when using 18 labels, and an error reduction of 45.5% (F-score improvement from 45.6% to 70.3%) when using 110 labels. The results in column 9 in Table 3 compare extremely favorable with the results in column p q W", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 645, |
|
"end": 652, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "in Table 2 . The discourse parsing model produces unlabeled discourse structure at a performance level similar to human annotators (F-score of 96.2%). When using 18 labels, the distance between our discourse parsing model performance level and human annotators performance level is of absolute 1.5% (75.5% versus 77%). When using 110 labels, the distance is of absolute 1.6% (70.3% versus 71.9%). Our evaluation shows that our discourse model is sophisticated enough to match near-human levels of performance.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of the Discourse Parser", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this paper, we have introduced a discourse parsing model that uses syntactic and lexical features to estimate the adequacy of sentence-level discourse structures. Our model defines and exploits a set of syntactically motivated lexico-grammatical dominance relations that fall naturally from a syntactic representation of sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The most interesting finding is that these dominance relations encode sufficient information to enable the derivation of discourse structures that are almost indistinguishable from those built by human annotators. Our experiments empirically show that, at the sentence level, there is an extremely strong correlation between syntax and discourse. This is even more remarkable given that the discourse corpus (RST-DT, 2002) was built with no syntactic theory in mind. The annotators used by Carlson et al. (2003) were not instructed to build discourse trees that were consistent with the syntax of the sentences. Yet, they built discourse structures at sentence level that are not only consistent with the syntactic structures of sentences, but also derivable from them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 490, |
|
"end": 511, |
|
"text": "Carlson et al. (2003)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Recent work on Tree Adjoining Grammar-based lexicalized models of discourse (Forbes et al., 2001 ) has already shown how to exploit within a single framework lexical, syntactic, and discourse cues. Various linguistics studies have also shown how intertwined syntax and discourse are (Maynard, 1998) . However, to our knowledge, this is the first paper that empirically shows that the connection between syntax and discourse can be computationally exploited at high levels of accuracy on open domain, newspaper text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 96, |
|
"text": "(Forbes et al., 2001", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 298, |
|
"text": "(Maynard, 1998)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Another interesting finding is that the performance of current state-of-the-art syntactic parsers (Charniak, 2000) is not a bottleneck for coming up with a good solution to the sentence-level discourse parsing problem. Little improvement comes from using manually built syntactic parse trees instead of automatically derived trees. However, experiments show that there is much to be gained if better discourse segmentation algorithms are found; 83% accuracy on this task is not sufficient for building highly accurate discourse trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 114, |
|
"text": "(Charniak, 2000)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We believe that semantic/discourse segmentation is a notoriously under-researched problem. For example, Gildea and Jurafsky (2002) present a semantic parser that optimistically assumes that has access to perfect semantic segments. Our results suggest that more effort needs to be put on semantic/discourse-based segmentation. Improvements in this area will have a significant impact on both semantic and discourse parsing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 130, |
|
"text": "Gildea and Jurafsky (2002)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A procedure for quantitatively comparing the syntactic coverage of English grammars", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Gdaniec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Harrison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hindle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ingria", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Klavans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Liberman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Strzalkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "306--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Black, S. Abney, D. Flickinger, C. Gdaniec, R. Gr- ishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A proce- dure for quantitatively comparing the syntactic cover- age of English grammars. In Proceedings of Speech and Natural Language Workshop, pages 306-311, Pa- cific Groove, CA. DARPA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Carlson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Okurowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Current Directions in Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Carlson, D. Marcu, and M. E. Okurowski. 2003. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. In Jan van Kuppevelt and Ronnie Smith, editors, Current Directions in Dis- course and Dialogue. Kluwer Academic Publishers. To appear.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A maximum-entropy-inspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the NAACL 2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the NAACL 2000, pages 132- 139, Seattle, Washington, April 29 -May 3.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Discriminative reranking for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of ICML 2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2000. Discriminative reranking for nat- ural language parsing. In Proceedings of ICML 2000, Stanford University, Palo Alto, CA, June 29-July 2.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The framenet database and software tools", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Fillmore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Baker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hiroaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the LREC 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1157--1160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. J. Fillmore, C. F. Baker, and S. Hiroaki. 2002. The framenet database and software tools. In Proceedings of the LREC 2002, pages 1157-1160, LREC.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "D-LTAG System: Discourse parsing with a lexicalized tree-adjoining grammar", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Forbes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "ESSLLI'2001 Workshop on Information Structure", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Forbes, E. Miltsakaki, R. Prasad, A. Sarkar, A. Joshi, and B. Webber. 2001. D-LTAG System: Discourse parsing with a lexicalized tree-adjoining grammar. In ESSLLI'2001 Workshop on Information Structure, Dis- course Structure and Discourse Semantics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automatic labeling of semantic role", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computational Linguistics", |
|
"volume": "28", |
|
"issue": "3", |
|
"pages": "245--288", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic role. Computational Linguistics, 28(3):245-288.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "From Treebank to Propbank", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Kingsbury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the LREC 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Kingsbury and Martha Palmer. 2002. From Tree- bank to Propbank. In Proceedings of the LREC 2002, Las Palmas, Canary Islands, Spain, May 28-June 3.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Statistical decision-tree models for parsing", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Magerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the ACL 1995", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--283", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the ACL 1995, pages 276-283, Cambridge, Massachusetts, June 26- 30.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Rhetorical Structure Theory: Toward a functional theory of text organization", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Text", |
|
"volume": "8", |
|
"issue": "3", |
|
"pages": "243--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional the- ory of text organization. Text, 8(3):243-281.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The Theory and Practice of Discourse Parsing and Summarization", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu. 2000. The Theory and Practice of Dis- course Parsing and Summarization. The MIT Press, Cambridge, Massachusetts.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Building a large annotated corpus of English: the Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Principles of Japanese Discourse: A Handbook", |
|
"authors": [ |
|
{ |
|
"first": "Senko", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Maynard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Senko K. Maynard. 1998. Principles of Japanese Dis- course: A Handbook. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Adaptive multilingual sentence boundary disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marti", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computational Linguistics", |
|
"volume": "23", |
|
"issue": "2", |
|
"pages": "241--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David D. Palmer and Marti A. Hearst. 1997. Adaptive multilingual sentence boundary disambiguation. Com- putational Linguistics, 23(2):241-269, June.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Maximum Entropy Models for Natural Language Ambiguity Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "RST Discourse Treebank", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rst-Dt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "RST-DT. 2002. RST Discourse Tree- bank. Linguistic Data Consortium.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Discourse structure of a sentence.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "Discourse segmentation using lexicalized syntactic trees.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "The same syntactic information indicates discourse boundaries depending on the lexical heads involved.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"text": "Figure 4: Dominance set extracted from a DS-LST.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"text": "in the input DS-LST. However, given such a tree b as input, one cannot estimate probabilities such as", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"text": "the word with the highest occurrence as a lexical head in the lexicalized tree among all the words in . The node in which occurs highest is called the head node of edu and is denoted h t", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF8": { |
|
"num": null, |
|
"text": "this is because a different dominance relationship between edus 1 and 2, namelyf \u00a6 e", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>tu-, and denotes a discourse rela-that holds between the discourse span that contains ple is of the form p 8 7 @ { 7 w 2 tion edus through { , and the discourse span that contains edus { X f through . Each relation also signals explic-itly the nuclearity In what follows we make use of two functions: func-tion applied to a tuple p 8 7 U { 7 # 3 yields the discourse relation ; function % \u00a6 \u00a2 applied to a tuple p 8 7 U { 7 # 3 yields the structure 8 7 U { 7 # 3 . Given a set of adequate parameters r , our discourse model estimates the goodness of a dis-course parse tree p X using formula (3).</td></tr></table>", |
|
"text": "assignment, which can be NUCLEUS-SATELLITE (NS), SATELLITE-NUCLEUS (SN), or NUCLEUS-NUCLEUS (NN). This notation assumes that all relations are binary relations. The assumption is justified empirically: 99% of the nodes of the discourse trees in our corpus are binary nodes. Using only binary relations makes our discourse model easier to build and reason with.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td colspan=\"2\">: racy for syntactic trees and discourse boundaries. b \u00a3 \u00a6 \u00a5 u p q W performance with human-level accu-</td></tr><tr><td>with the performance of a baseline algorithm (\u00ae</td><td>h p q W</td></tr></table>", |
|
"text": "", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |