{
"paper_id": "D10-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:51:58.785933Z"
},
"title": "A Semi-Supervised Approach to Improve Classification of Infrequent Discourse Relations using Feature Vector Extension",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan Mitsuru Ishizuka"
}
},
"email": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan Mitsuru Ishizuka"
}
},
"email": "[email protected]"
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"postCode": "113-8656",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Several recent discourse parsers have employed fully-supervised machine learning approaches. These methods require human annotators to beforehand create an extensive training corpus, which is a time-consuming and costly process. On the other hand, unlabeled data is abundant and cheap to collect. In this paper, we propose a novel semi-supervised method for discourse relation classification based on the analysis of cooccurring features in unlabeled data, which is then taken into account for extending the feature vectors given to a classifier. Our experimental results on the RST Discourse Treebank corpus and Penn Discourse Treebank indicate that the proposed method brings a significant improvement in classification accuracy and macro-average F-score when small training datasets are used. For instance, with training sets of c.a. 1000 labeled instances, the proposed method brings improvements in accuracy and macro-average F-score up to 50% compared to a baseline classifier. We believe that the proposed method is a first step towards detecting low-occurrence relations, which is useful for domains with a lack of annotated data.",
"pdf_parse": {
"paper_id": "D10-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "Several recent discourse parsers have employed fully-supervised machine learning approaches. These methods require human annotators to beforehand create an extensive training corpus, which is a time-consuming and costly process. On the other hand, unlabeled data is abundant and cheap to collect. In this paper, we propose a novel semi-supervised method for discourse relation classification based on the analysis of cooccurring features in unlabeled data, which is then taken into account for extending the feature vectors given to a classifier. Our experimental results on the RST Discourse Treebank corpus and Penn Discourse Treebank indicate that the proposed method brings a significant improvement in classification accuracy and macro-average F-score when small training datasets are used. For instance, with training sets of c.a. 1000 labeled instances, the proposed method brings improvements in accuracy and macro-average F-score up to 50% compared to a baseline classifier. We believe that the proposed method is a first step towards detecting low-occurrence relations, which is useful for domains with a lack of annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic detection of discourse relations in natural language text is important for numerous tasks in NLP, such as sentiment analysis (Somasundaran et al., 2009 ), text summarization (Marcu, 2000) and dialogue generation (Piwek et al., 2007) . However, most of the recent work employing discourse relation classifiers are based on fully-supervised machine learning approaches (duVerle and Prendinger, 2009; Pitler et al., 2009; Lin et al., 2009) . Two of the main corpora with discourse annotations are the RST Discourse Treebank (RSTDT) (Carlson et al., 2001 ) and the Penn Discourse Treebank (PDTB) (Prasad et al., 2008a) , which are both based on the Wall Street Journal (WSJ) corpus.",
"cite_spans": [
{
"start": 135,
"end": 161,
"text": "(Somasundaran et al., 2009",
"ref_id": "BIBREF28"
},
{
"start": 184,
"end": 197,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF16"
},
{
"start": 222,
"end": 242,
"text": "(Piwek et al., 2007)",
"ref_id": "BIBREF22"
},
{
"start": 377,
"end": 407,
"text": "(duVerle and Prendinger, 2009;",
"ref_id": "BIBREF5"
},
{
"start": 408,
"end": 428,
"text": "Pitler et al., 2009;",
"ref_id": "BIBREF21"
},
{
"start": 429,
"end": 446,
"text": "Lin et al., 2009)",
"ref_id": "BIBREF10"
},
{
"start": 539,
"end": 560,
"text": "(Carlson et al., 2001",
"ref_id": "BIBREF2"
},
{
"start": 602,
"end": 624,
"text": "(Prasad et al., 2008a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the RSTDT, annotation is done using 78 fine-grained discourse relations, which are usually grouped into 18 coarser-grained relations. Each of these relations has furthermore several possible configurations for its arguments-its 'nuclearity' (Mann and Thompson, 1988) . In practice, a classifier trained on these coarse-grained relations must solve a 41-class classification problem. Some of the relations corresponding to these classes are relatively more frequent in the corpus, such as the ELAB- ORATION[N] [S] relation (4441 instances), or the ATTRIBUTION[S] [N] relation (1612 instances). 1 However, other relation types occur very rarely, such as TOPIC-COMMENT [S] [N] (2 instances), or EVALUATION [N] [N] (3 instances). A similar phenomenon can be observed in PDTB, in which 15 level-two relations are employed: Some, such as EXPANSION.CONJUNCTION, occur as often as 8759 times throughout the corpus, whereas the remainder of the relations, such as EXPANSION.EXCEPTION and COMPARISON.PRAGMATIC CONCESSION, can appear as rarely as 17 and 12 times respectively. Although supervised approaches to discourse relation learning achieve good results on frequent relations, performance is poor on rare relation types (duVerle and Prendinger, 2009) .",
"cite_spans": [
{
"start": 244,
"end": 269,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF13"
},
{
"start": 1218,
"end": 1248,
"text": "(duVerle and Prendinger, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nonetheless, certain infrequent relation types might be important for specific tasks. For instance, capturing the RST TOPIC-COMMENT [S] [N] and EVALUATION[N] [N] relations can be useful for sentiment analysis (Pang and Lee, 2008) .",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "(Pang and Lee, 2008)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another situation where detection of lowoccurring relations is desirable is the case where we have only a small training set at our disposal, for instance when there is not enough annotated data for all the relation types described in a discourse theory. In this case, all the dataset's relations can be considered rare, and being able to build an efficient classifier depends on the capacity to deal with this lack of annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this paper are summarized as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a semi-supervised method that exploits the abundant, freely-available unlabeled data, which is harvested for feature cooccurrence information, and used as a basis to extend feature vectors to help classification for cases where unknown features are found in test vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The proposed method is evaluated on the RSTDT and PDTB corpus, where it significantly improves accuracy and macro-average F-score when small training sets are used. For instance, when trained on moderately small datasets with ca. 1000 instances, the proposed method increases the macro-average F-score and accuracy up to 50%, compared to a baseline classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the release in 2001 of the RSTDT corpus, several fully-supervised discourse parsers have been built in the RST framework. In the recent work of duVerle and Prendinger (2009), a discourse parser based on Support Vector Machines (SVM) (Vapnik, 1995) is proposed. SVMs are employed to train two classifiers: One, binary, for determining the presence of a relation, and another, multi-class, for determining the relation label between related text spans. For the discourse relation classifier, shallow lexical, syntactic and structural features, including 'dominance sets' (Soricut and Marcu, 2003) are used. For relation classification, they report an accuracy of 0.668, and an F-score of 0.509 for the creation of the full discourse tree.",
"cite_spans": [
{
"start": 239,
"end": 253,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF30"
},
{
"start": 575,
"end": 600,
"text": "(Soricut and Marcu, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The unsupervised method of Marcu and Echihabi (2002) was the first that tried to detect implicit relations (i.e. relations not accompanied by a cue phrase, such as 'however', 'but'), using word pairs extracted from two spans of text. Their method attempts to capture the difference of polarity in words. For example, the word pair (sell, hold) indicates a CON-TRAST relation.",
"cite_spans": [
{
"start": 27,
"end": 52,
"text": "Marcu and Echihabi (2002)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Discourse relation classifiers have also been trained using PDTB. Pitler et al. (2008) performed a corpus study of the PDTB, and found that 'explicit' relations can be most of the times distinguished by their discourse connectives. Their discourse relation classifier reported an accuracy of 0.93 for explicit relations and in overall an accuracy of 0.744 for all relations in PDTB. Lin et al. (2009) studied the problem of detecting implicit relations in PDTB. Their relational classifier is trained using features extracted from dependency paths, contextual information, word pairs and production rules in parse trees. They reported for their classifier an accuracy of 0.402, which is an improvement of 14.1% over the previous state-of-theart for implicit relation classification in PDTB. For the same task, Pitler et al. (2009) also used word pairs, as well as several other types of features such as verb classes, modality, context, and lexical features.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "Pitler et al. (2008)",
"ref_id": "BIBREF20"
},
{
"start": 383,
"end": 400,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 810,
"end": 830,
"text": "Pitler et al. (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In text classification, similarity measures have been employed in kernel methods, where they have been shown to improve accuracy over 'bag-ofwords' approaches. In Siolas and d'Alch\u00e9-Buc (2000) , a semantic proximity measure based on WordNet (Fellbaum, 1998) is defined, as a basis to create a proximity matrix for all terms of the problem. This matrix is then used to smooth the vectorial data, and the resulting 'semantic' metric is incorporated into a SVM kernel, resulting in a significant increase of accuracy and F-score over a baseline. Cristianini et al. (2002) have used a lexical similarity measure derived from Latent Semantic Indexing (Deerwester et al., 1990) , where the semantic similarity between two terms is inferred from the analysis of their co-occurrence patterns: Terms that co-occur often in the same documents are considered as related. In this work, the statistical cooccurrence information is extracted by the means of singular value decomposition. The authors observe substantial improvements in performance for some datasets, while little effect is obtained for others.",
"cite_spans": [
{
"start": 163,
"end": 192,
"text": "Siolas and d'Alch\u00e9-Buc (2000)",
"ref_id": "BIBREF27"
},
{
"start": 543,
"end": 568,
"text": "Cristianini et al. (2002)",
"ref_id": "BIBREF3"
},
{
"start": 646,
"end": 671,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Semantic kernels have also been shown to be efficient for text classification tasks, in the case in of unbalanced and sparse datasets. In , a 'conceptual density' metric based on WordNet is introduced, and employed in a SVM kernel. Using this metric results in improved accuracy of 10% for text classification in poor training conditions. However, the authors observe that when the number of training documents is increased, the improvement produced by the semantic kernel is lower. Bloehdorn et al. (2006) compare the performance of different semantic kernels, based on several measures of semantic relatedness in WordNet. For each measure, the authors note a performance increase when little training data is available, or when the feature representations are very sparse. However, for our task, classification of discourse relations, we employ not only words but also other types of features such as parse tree production rules, and thus cannot compute semantic kernels using WordNet.",
"cite_spans": [
{
"start": 483,
"end": 506,
"text": "Bloehdorn et al. (2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we are not aiming at defining novel features for improving performance in RST or PDTB relation classification. Instead we incorporate numerous features that have been shown to be useful for discourse relation learning and explore the possibilities of using unlabeled data for this task. One of our goals is to improve classification accuracy for rare discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Given a set of unlabeled instances U and labeled instances L, our objective is to learn an n-class relation classifier H such that for a given test instance x return its correct relation type H(x). In the case of discourse relation learning we are interested in the situation where |U | >> |L|. Here, we use the notation |A| to denote the number of elements in a set A. A fundamental problem that one encounters when trying to learn a classifier for a large number of relations with small training dataset is that most of the features that appear in the test instances either never occur in training instances or appear a small number of times. Therefore, the classification algorithm does not have sufficient information to correctly predict the relation type of the given test instance. We propose a method that first computes the co-occurrence between features using unlabeled data and use that information to extend the feature vectors during training and testing, thereby reducing the sparseness in test feature vectors. In Section 3.1, we introduce the concept of feature co-occurrence matrix and describe how it is computed using unlabeled data. A method to extend feature vectors during training and testing is presented in Section 3.2. We defer the details on exact features used in the method to Section 3.3. It is noteworthy that the proposed method does not depend or assume a particular multi-class classification algorithm. Consequently, it can be used with any multi-class classification algorithm to learn a discourse relation classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We represent an instance using a d dimensional feature vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "f = [f 1 , . . . , f d ] T , where f i \u2208 R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "We define a feature co-occurrence matrix, C such that the (i, j)-th element of C, C (i,j) \u2208 [0, 1] denotes the degree of co-occurrence between the two features f i and f j . If both f i and f j appear in a feature vector then we define them to be co-occurring. The number of different feature vectors in which f i and f j co-occur is denoted by the function h(f i , f j ). From our definition of co-occurrence it follows that h(f i , f j ) = h(f j , f i ). Importantly, feature cooccurrences can be calculated only using unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "Feature co-occurrence matrices can be computed using any co-occurrence measure. For the current task we use the \u03c7 2 -measure (Plackett, 1983) as the preferred co-occurrence measure because of its simplicity. \u03c7 2 -measure between two features f i and f j is defined as follows,",
"cite_spans": [
{
"start": 125,
"end": 141,
"text": "(Plackett, 1983)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "\u03c7 2 i,j = 2 k=1 2 l=1 (O i,j k,l \u2212 E i,j k,l ) 2 E i,j k,l .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "Therein, O i,j and E i,j are the 2\u00d72 matrices containing respectively observed frequencies and expected frequencies, which are respectively computed using C as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O i,j = h(f i , f j ) Z i \u2212 h(f i , f j ) Z j \u2212 h(f i , f j ) Z s \u2212 Z i \u2212 Z j ,",
"eq_num": "(2)"
}
],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "E i,j = Z i \u2022Z j Zs Z i \u2022(Zs\u2212Z j ) Zs Z j \u2022(Zs\u2212Z i ) Zs (Zs\u2212Z i )\u2022(Zs\u2212Z j ) Zs .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "( 3)Here,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "Z i = k =i h(f i , f k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": ", and Z s = n i=1 Z i . Finally, we create the feature co-occurrence matrix C, such that, for all pairs of features (f i , f j ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C (i,j) = \u03c7 2 i,j if \u03c7 2 i,j > c 0 otherwise .",
"eq_num": "(4)"
}
],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "Here\u03c7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "2 i,j = \u03c7 2 i,j \u2212\u03c7 2 min \u03c7 2 max \u2212\u03c7 2 min \u2208 [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": ", and c is the critical value, which, for a confidence level of 0.05 and one degree of freedom, can be set to 3.84. Keeping C (i,j) in the range [0, 1] makes it convenient to filter out low-relevance co-occurrences at the feature vector extension step of Section 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
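{
"text": "To make Equations (1)-(4) concrete, here is a minimal sketch (our illustration, not the authors' code; NumPy assumed) that computes the thresholded and rescaled \u03c7\u00b2 co-occurrence matrix from a binary instance-feature matrix of unlabeled data:\n\nimport numpy as np\n\ndef chi2_cooccurrence(X, critical=3.84):\n    # X: (instances, features) binary matrix built from unlabeled data\n    H = (X.T @ X).astype(float)  # h(f_i, f_j): co-occurrence counts\n    np.fill_diagonal(H, 0.0)\n    Z = H.sum(axis=1)            # Z_i = sum over k != i of h(f_i, f_k)\n    Zs = Z.sum()                 # Z_s\n    d = H.shape[0]\n    chi2 = np.zeros((d, d))\n    for i in range(d):\n        for j in range(i + 1, d):\n            O = np.array([[H[i, j], Z[i] - H[i, j]],\n                          [Z[j] - H[i, j], Zs - Z[i] - Z[j]]])  # Eq. (2)\n            E = np.array([[Z[i] * Z[j], Z[i] * (Zs - Z[j])],\n                          [Z[j] * (Zs - Z[i]), (Zs - Z[i]) * (Zs - Z[j])]]) / Zs  # Eq. (3)\n            chi2[i, j] = chi2[j, i] = ((O - E) ** 2 / np.maximum(E, 1e-12)).sum()  # Eq. (1)\n    rescaled = (chi2 - chi2.min()) / max(chi2.max() - chi2.min(), 1e-12)\n    return np.where(chi2 > critical, rescaled, 0.0)  # Eq. (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": null
},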
{
"text": "In discourse relation learning, the feature space can be extremely large. For example, with word pair features (discussed later in Section 3.3), any two words that appear in two adjoining discourse units can form a feature. Because the number of elements in the feature co-occurrence matrix is proportional to the square of the feature space's dimension, computing co-occurrences for all pairs of features can be computationally costly. Moreover, storing a large matrix in memory for further computations can be problematic. To reduce the dimensionality and improve the sparseness in the feature cooccurrence matrix, we use entropy-based feature selection (Manning and Sch\u00fctze, 1999) . The negative entropy, E(f i ), of a feature f i is defined as follows,",
"cite_spans": [
{
"start": 656,
"end": 683,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E(f i ) = \u2212 j =i p(i, j) \u2022 log (p (i, j)) .",
"eq_num": "(5)"
}
],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "Here, p(i, j) is the probability that feature f i cooccurs with feature f j , and is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "p(i, j) = h(f i , f j )/Z i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
{
"text": "If a particular feature f i co-occurs with many other features, then its negative entropy E(f i ) decreases. Because we are interested in identifying salient co-occurrences between features, we can ignore the features that tend to co-occur with many other features. Consequently, we sort the features in the descending order of their entropy, and select the top ranked N number of features to build the feature co-occurrence matrix. This feature selection procedure can efficiently reduce the dimensions of the feature co-occurrence matrix to N \u00d7 N . Because the feature co-occurrence matrix is symmetric, we must only store the elements for the upper (or lower) triangular portion of it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": "3.1"
},
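{
"text": "As a rough illustration of the selection step (ours, not the authors' code; NumPy assumed), the sketch below scores each feature with Equation (5) from the count matrix and keeps the top-N features, following the paper's convention of sorting by E(f_i) in descending order:\n\nimport numpy as np\n\ndef select_top_features(H, N):\n    # H: (d, d) co-occurrence counts h(f_i, f_j) with a zero diagonal\n    Z = np.maximum(H.sum(axis=1, keepdims=True), 1e-12)\n    P = H / Z                                 # p(i, j) = h(f_i, f_j) / Z_i\n    terms = np.where(P > 0, P * np.log(np.maximum(P, 1e-12)), 0.0)\n    E = -terms.sum(axis=1)                    # E(f_i), Eq. (5)\n    return np.argsort(-E)[:N]                 # indices of the N kept features\n\nThe reduced N \u00d7 N matrix is then the only part that must be stored, and only its upper triangle at that, since it is symmetric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Co-occurrence Matrix",
"sec_num": null
},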
{
"text": "Once the feature co-occurrence matrix is computed using unlabeled data as described in Section 3.1, we can use it to extend a feature vector during training and testing. The proposed feature vector extension method is inspired by query expansion in the field of Information Retrieval (Salton and Buckley, 1983; Fang, 2008) . One of the reasons that a classifier might perform poorly on a test instance is that there are features in the test instance that were not observed during training. We call F U = {f i } the set of features that were not observed by the classifier during training (i.e. occurring in test data but not in training data). For each of those features, we use the feature co-occurrence matrix to find the set of co-occurring features, F c (f i ).",
"cite_spans": [
{
"start": 284,
"end": 310,
"text": "(Salton and Buckley, 1983;",
"ref_id": "BIBREF26"
},
{
"start": 311,
"end": 322,
"text": "Fang, 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "Let us denote the feature vector corresponding to a training or test instance x by f x . We use the superscript notation, f i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "x to denote the i-th feature in f x . Moreover, the total number of features of f x is indicated by d(x). For a feature f i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "x in f x , we define n(i) number of expansion features, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "(i,1) x , . . . , f (i,n(i)) x as follows. First, we require that each expansion fea- ture f (i,j) x belongs to F c (f i ). Second, the value of f (i,j) x is set to f i x \u2022 C (i,j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": ". The expansion features for each feature f i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "x are then appended to the original feature vector f x to create an extended feature vector, f x , where,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "f x = (f 1 x , . . . , f d(x) x , (6) f (i,1) x , . . . , f (i,n(i)) x , . . . , f (d(x),1) x , . . . , f (d(x),n(d(x)) x ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "In total, doing so augments the original vector's size by f i \u2208U |F c (f i )|. All training and test instances are extended in this fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
{
"text": "Note that because this process can potentially increase the dimension too much, it is possible to retain only candidate co-occurring features of F c (f i ) possessing a co-occurrence value C (i,j) above a certain threshold. In the experiments of Section 4 how-ever, we experienced dimension increase of 10000 at most, which did not require us to use thresholding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": "3.2"
},
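{
"text": "As a minimal sketch of the extension step (our reading of Section 3.2, not the authors' code), each feature of a sparse vector that was unseen during training contributes expansion features weighted by the co-occurrence matrix:\n\ndef extend_vector(fx, C, unseen):\n    # fx: dict mapping feature index to value; C: co-occurrence matrix\n    # computed from unlabeled data; unseen: the set F_U of features\n    # absent from the training data\n    extended = dict(fx)\n    for i, value in fx.items():\n        if i not in unseen:\n            continue\n        for j, c_ij in enumerate(C[i]):\n            if c_ij > 0:  # f_j belongs to F_c(f_i)\n                extended[('exp', i, j)] = value * c_ij  # f^i_x * C_(i,j)\n    return extended\n\nApplying this to every training and test vector reproduces the extension described above; testing c_ij against a threshold instead of 0 gives the optional dimension control mentioned in the previous paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Vector Extension",
"sec_num": null
},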
{
"text": "We use three types of features: Word pairs, production rules from the parse tree, as well as features encoding the lexico-syntactic context at the border between two units of text (Soricut and Marcu, 2003) . Our word pairs are lemmatized using the Wordnetbased lemmatizer of NLTK (Loper and Bird, 2002) . Figure 1 shows the parse tree for a sentence composed of two discourse units, which serve as arguments of a discourse relation we want to generate a feature vector from. Lexical heads have been calculated using the projection rules of Magerman (1995) , and annotated between brackets. Surrounded by dots is, for each argument, the minimal set of subparse trees containing strictly all the words of the argument.",
"cite_spans": [
{
"start": 180,
"end": 205,
"text": "(Soricut and Marcu, 2003)",
"ref_id": "BIBREF29"
},
{
"start": 280,
"end": 302,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 540,
"end": 555,
"text": "Magerman (1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 305,
"end": 313,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "We first extract all possible lemmatized wordpairs from the two arguments, such as (Mr., when), (decline, ask) or (comment, sale). Next, we extract from left and right argument separately, all production rules from the sub-parse trees, such as NP \u2192 NNP NNP, NNP \u2192 \"Sherry\" or TO \u2192 \"to\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
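{
"text": "For illustration only (not the authors' implementation; NLTK with the WordNet data installed is assumed), word-pair and production-rule features of the kind described above can be extracted as follows:\n\nfrom itertools import product\nfrom nltk.stem import WordNetLemmatizer\nfrom nltk.tree import Tree\n\nlemmatize = WordNetLemmatizer().lemmatize\n\ndef word_pair_features(arg1_words, arg2_words):\n    # all lemmatized word pairs across the two arguments\n    return {(lemmatize(a), lemmatize(b)) for a, b in product(arg1_words, arg2_words)}\n\ndef production_rule_features(subtree):\n    # production rules such as NP -> NNP NNP from one argument's sub-parse trees\n    return {str(rule) for rule in subtree.productions()}\n\ntree = Tree.fromstring('(NP (NNP Mr.) (NNP Sherry))')\nprint(production_rule_features(tree))\nprint(word_pair_features(['declined', 'comment'], ['when', 'asked']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},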
{
"text": "Finally, we encode in our features three nodes of the parse tree, which capture the local context at the connection point between the two arguments: The first node, which we call N w , is the highest ancestor of the first argument's last word w, and is such that N w 's right-sibling is the ancestor of the second argument's first word. N w 's right-sibling node is called N r . Finally, we call N p the parent of N w and N r . For each node, we encode in the feature vector its part-of-speech (POS) and lexical head. For instance, in Figure 1 , we have N w = S(comment), N r = SBAR(when), and N p = VP(declined). In the PDTB, certain discourse relations have disjoint arguments. In this case, as well as in the case where the two arguments belong to different sentences, the nodes N w , N r , N p cannot be defined, and their corresponding features are given the value zero.",
"cite_spans": [],
"ref_spans": [
{
"start": 535,
"end": 543,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "The proposed method is independent of any particular classification algorithm. Because our goal is strictly to evaluate the relative benefit of employing the proposed method, and not the absolute performance when used with a specific classification algorithm, we select a logistic regression classifier, for its simplicity. We use the multi-class logistic regression (maximum entropy model) implemented in the Classias toolkit (Okazaki, 2009) . Regularization parameters are set to their default value of one and are fixed throughout the experiments described in the paper.",
"cite_spans": [
{
"start": 427,
"end": 442,
"text": "(Okazaki, 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
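{
"text": "For illustration (the paper uses the Classias toolkit; scikit-learn is substituted here as an assumption), a multi-class logistic regression baseline with the regularization parameter fixed at its default of one looks like:\n\nfrom sklearn.linear_model import LogisticRegression\n\n# Multi-class (maximum entropy) model; C=1.0 mirrors the fixed default\n# regularization parameter described above.\nclf = LogisticRegression(C=1.0, max_iter=1000)\nX_train = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # toy extended feature vectors\ny_train = ['ELABORATION', 'ATTRIBUTION', 'NONE']  # toy relation labels\nclf.fit(X_train, y_train)\nprint(clf.predict([[1.0, 0.5]]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},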
{
"text": "To create our unlabeled dataset, we use sentences extracted from the English Wikipedia 2 , as they are freely available and relatively easy to collect. For further extraction of syntactic features, these sentences are automatically parsed using the Stanford parser (Klein and Manning, 2003) . Then, they are segmented into elementary discourse units (EDUs) using our sequential discourse segmenter (Hernault et al., 2010) . The relatively high performance of this RST segmenter, which has an F-score of 0.95 compared to that of 0.98 between human annotators (Soricut and Marcu, 2003) , is acceptable for this task. We collect and parse 100000 sentences from random Wikipedia articles. As there is no segmentation tool for the PDTB framework, we assume that co-occurrence information taken from EDUs created using a RST segmenter is also useful for extending feature vectors of PDTB relations. Unless otherwise noted, the experiments presented in the rest of this paper are done using those 100000 unlabeled instances.",
"cite_spans": [
{
"start": 265,
"end": 290,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF9"
},
{
"start": 398,
"end": 421,
"text": "(Hernault et al., 2010)",
"ref_id": "BIBREF8"
},
{
"start": 558,
"end": 583,
"text": "(Soricut and Marcu, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In the unlabeled data, any two consecutive discourse units might not always be connected by a discourse relation. Therefore, we introduce an artificial NONE relation in the training set, in order to facilitate this. Instances of the NONE relation are generated randomly by pairing consecutive discourse units which are not connected by a discourse relation in the training data. NONE is also learnt as a separate discourse relation class by the multi-class classification algorithm. This enables us to detect discourse units between which there exist no discourse relation, thereby improving the classification accuracy for other relation types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
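{
"text": "A minimal sketch of this sampling step (ours, not the authors' code): consecutive discourse units that are not linked by an annotated relation are paired at random to form NONE instances:\n\nimport random\n\ndef sample_none_instances(edu_ids, related_pairs, k):\n    # edu_ids: unit identifiers in document order; related_pairs: set of\n    # (left, right) pairs connected by an annotated discourse relation\n    candidates = [(a, b) for a, b in zip(edu_ids, edu_ids[1:])\n                  if (a, b) not in related_pairs]\n    return random.sample(candidates, min(k, len(candidates)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},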
{
"text": "We follow the common practice in discourse research for partitioning the discourse corpora into training and test set. For the RST classifier, the dedicated training and test sets of the RSTDT are employed. For the PDTB classifier, we conform to the guidelines of Prasad et al. (2008b, 5) : The portion of the corpus corresponding to sections 2-21 of the WSJ is used for training the classifier, while the portion corresponding to WSJ section 23 is used for testing. In order to extract syntactic features, all training and test data are furthermore aligned with their corresponding parse trees in the Penn Treebank (Marcus et al., 1993) .",
"cite_spans": [
{
"start": 264,
"end": 288,
"text": "Prasad et al. (2008b, 5)",
"ref_id": null
},
{
"start": 616,
"end": 637,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Because in the PDTB an instance can be annotated with several discourse relations simultaneously-called 'senses' in Prasad et al. (2008b) -for each instance with n senses in the corpus, we create n identical feature vectors, each being labeled by one of the instance's senses. However, in the RST framework, only one relation is allowed to hold between two EDUs. Consequently, each instance from the RSTDT is labeled with a single discourse relation, from which a single feature vector is created. For RSTDT, we extract 25078 training vectors and 1633 test vectors. For PDTB we extract 49748 training vectors and 1688 test vectors. There are 41 classes (relation types) in the RSTDT relation classification task, and 29 classes in the PDTB task. For the PDTB, we selected level-two relations, because they have better expressivity and are not too fine-grained. We experimentally set the entropy-based feature selection parameter to N = 5000. With large N values, we must store and process large feature co-occurrence matrices. For example, doubling the number of selected features, N to 10000 did not improve the classification accuracy, although it required 4GB of memory to store the feature co-occurrence matrix. Figure 2 shows the number of features that occur in test data but not in labeled training data, against the number of training instances. It can be seen from Figure 2 that, with less training data available to the classifier, we can potentially obtain more information regarding features by looking at unlabeled data. However, when the training dataset's size increases, the number of features that only appear in test data decreases rapidly. This inverse relation between the training dataset size and the number of features that only appear in test data can be observed in both RSTDT and PDTB datasets. For a training set of 100 instances, there are 23580 unseen features in the case of RSTDT, and 27757 in the case of PDTB. The number of unseen features is halved for a training set of 1800 instances in the case of RSTDT, and for a training set of 1300 instances in the case of PDTB. Finally, when selecting all available training data, we count only 1365 unseen test features in the case of RSTDT, and 87 in the case of PDTB.",
"cite_spans": [
{
"start": 116,
"end": 137,
"text": "Prasad et al. (2008b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1216,
"end": 1224,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1374,
"end": 1382,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In the following experiments, we use macroaveraged F-scores to evaluate the performance of the proposed discourse relation classifier on test data. Macro-averaged F-score is not influenced by the number of instances that exist in each relation type. It equally weights the performance on both frequent relation types and infrequent relation types. Because we are interested in measuring the overall performance of a discourse relation classifier across all re- lation types we use macro-averaged F-score as the preferred evaluation metric for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
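{
"text": "For reference, the macro-averaged F-score is the unweighted mean of per-class F-scores, so a rare relation counts as much as a frequent one; a quick check with scikit-learn (an assumption, not a tool used in the paper):\n\nfrom sklearn.metrics import f1_score\n\ny_true = ['ELABORATION', 'ATTRIBUTION', 'CONTRAST', 'ELABORATION']\ny_pred = ['ELABORATION', 'ELABORATION', 'CONTRAST', 'ELABORATION']\nprint(f1_score(y_true, y_pred, average='macro'))  # unweighted mean over classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": null
},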
{
"text": "We train a multi-class logistic regression model without extending the feature vectors as a baseline method. This baseline is expected to show the effect of using the proposed feature vector extension approach for the task of discourse relation learning. Experimental results on RSTDT and PDTB datasets are depicted in Figures 3 and 4 . From these figures, we see that the proposed feature extension method outperforms the baseline for both RSTDT and PDTB datasets for the full range of training dataset sizes. However, whereas the difference of scores between the two methods is obvious for small amounts of training data, this difference progressively decreases as we increase the amount of training data. Specifically, with 100 training instances, the difference between baseline and proposed method is the largest: For RSTDT, the baseline has a macro-averaged F-score of 0.084, whereas the the proposed method has a macro-averaged Fscore of 0.189 (ca. 119% increase in F-score). For PDTB, the baseline has an F-score of 0.016, while the proposed method has an F-score of 0.089 (459% increase). The difference of scores between the two methods then progressively diminishes as the number of training instances is increased, and fades beyond 10000 training instances. The reason for this behavior is given by Figure 2 : For a small number of training instances, the number of unseen features in training data is large. In this case, the feature vec-tor extension process is comprehensive, and score can be increased by the use of unlabeled data. When more training data is progressively used, the number of unseen test features sharply diminishes, which means feature vector extension becomes more limited, and the performance of the proposed method gets progressively closer to the baseline. Note that we plotted PDTB performance up to 25000 training instances, as the number of unseen test features becomes so small past this point that the performances of the proposed method and baseline are identical. Using all PDTB training data (49748 instances), both baseline and proposed method reach a macro-average F-score of 0.308. not bring any change in F-score or accuracy. Indeed, as the number of unknown features is low, feature vector extension is very limited, and does not improve the performance compared to the baseline. Then, a progressive increase of both accuracy and macro-average F-score is observed, as the number of unseen test features is incremented. For instance, for 8500 unseen test features, the macroaverage F-score increase (resp. accuracy increase) is 25% (resp. 2.5%), while it is 20% (resp. 1%) for 11000 unseen test instances. These values reach a maximum of 119% macro-average F-score increase, and 66% accuracy increase, when 23500 features unseen during training are present in test data. This situation corresponds in Figures 3 and 4 to the case of very small training sets. The bottom subfigure of Figure 2 , for the case of PDTB, reveals a similar tendency. The macro-average F-score increase (resp. accuracy increase) is negligible for 1000 unseen test features, while this increase is 21% for both macro-average F-score and accuracy in the case of 9700 unseen test features, and 459% (resp. 630% for accuracy) when 28000 unseen features are found in test data. This shows that the proposed method is useful when large numbers of features are missing from the training set, which corresponds in practice to small training sets, with few training instances for each relation type. 
For large training sets, most fea-tures are encountered by the classifier during training, and feature vector extension does not bring useful information. We empirically evaluate the effect of using different amounts of unlabeled data on the performance of the proposed method. We use respectively 100 and 10000 labeled training instances, create feature cooccurrence matrices with different amounts of unlabeled data, and evaluate the performance in relation classification. Experimental results for RSTDT are illustrated in Figure 6 (top). From Figure 6 it appears clearly that macro-average F-scores improve with increased number of unlabeled instances. However, the benefit of using larger amounts of unlabeled data is more pronounced when only a small number of labeled training instances are employed (ca. 100). In fact, with 100 labeled training instances, the maximum improvement in F-score is 119% (corresponds to using all our 100000 unlabeled instances). However, the maximum improvement in F-score with 10000 labeled training instances is small, only 2.5% (corresponds to 10000 unlabeled instances).",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 334,
"text": "Figures 3 and 4",
"ref_id": null
},
{
"start": 1311,
"end": 1319,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 2851,
"end": 2866,
"text": "Figures 3 and 4",
"ref_id": null
},
{
"start": 2932,
"end": 2940,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 4042,
"end": 4050,
"text": "Figure 6",
"ref_id": null
},
{
"start": 4063,
"end": 4071,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The effect of using unlabeled data on PDTB relation classification is illustrated in Figure 6 (bottom) . Similarly, we consecutively set the labeled training dataset size to 100 and 10000 instances, and plot the macro-average F-score against the unlabeled dataset size. As in the RSTDT experiment, the benefit of us- ing unlabeled data is more obvious when the number of labeled training instances is small. In particular, with 100 training instances, the maximum improvement in F-score is 459% (corresponds to 100000 unlabeled instances). However, with 10000 labeled training instances the maximum improvement in F-score is 15% (corresponds to 100 unlabeled instances). These results confirm that, on the one hand performance improvement is more prominent for smaller training sets, and that on the other hand, performance is increased when using larger amounts of unlabeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 102,
"text": "Figure 6 (bottom)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We presented a semi-supervised method which exploits the co-occurrence of features in unlabeled data, to extend feature vectors during training and testing in a discourse relation classifier. Despite the Macro-average F-score 10 1 10 2 10 3 10 4 10 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Number of unlabeled instances PDTB (100) Baseline PDTB (100) PDTB (10000) Baseline PDTB (10000) Figure 6 : Macro-average F-score for RSTDT (top) and PDTB (bottom), for 100 and 10000 training instances, against the number of unlabeled instances. simplicity of the proposed method, it significantly improved the macro-average F-score in discourse relation classification for small training datasets, containing low-occurrence relations. We performed an evaluation on two popular datasets, the RSTDT and PDTB. We empirically evaluated the benefit of using a variable amount of unlabeled data for the proposed method. Although the macro-average F-scores of the classifiers described are too low to be used directly as discourse analyzers, the gain in F-score and accuracy for small labeled datasets are a promising perspective for improving classification accuracy for infrequent relation types. In particular, the proposed method can be employed in existing discourse classifiers that work well on popular relations, and be expected to improve the overall accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 104,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We use the notation [N] and [S] respectively to denote the nucleus and satellite in a RST discourse relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://en.wikipedia.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "213 Macro-average F-score 0.000 0.060 0.000 0.069 0.008 0.101 0.038 0.118 0.107 0.134 Table 1 : F-scores for RSTDT relations, using a training set containing #Tr instances of each relation. B. indicates F-score for baseline, P.M. for the proposed method. A boldface indicates the best classifier for each relation.Although the distribution of discourse relations in RSTDT and PDTB is not uniform, it is possible to study the performance of the proposed method when all relations are made equally rare. We evaluate performance on artificially-created training sets containing an equal amount of each discourse relation. Table 1 contains the F-score for each RSTDT relation, using training sets containing respectively one, two, three, five and seven instances of each relation. For space considerations, only relations with significant results are shown. We observe that, when using respectively one and two instances of each relation, the baseline classifier is unable to detect any relation, and has a macro-average F-score of zero. Contrastingly, the classifier built with feature vector extension reaches in those cases an Fscore of 0.06. Furthermore, when employing the proposed method, certain relations have relatively high F-scores even with very little labeled data: Still, in this case, the extended classifier's accuracy is higher than the baseline (0.213 versus 0.122). Table 2 summarizes the outcome of the same experiments performed on the PDTB dataset. The results exhibit a similar trend, despite the baseline classifier having a relatively high accuracy for each case.Using the data from Figures 2, 3 and 4 , it is possible to calculate the relative score change occurring when using the proposed method, as a function of the number of unseen features found in test data. This graph is plotted in Figure 5 . Besides macro-average F-score, we additionally plot accuracy change. In the top subfigure, representing the case of RSTDT, we see that, for the lowest amount of unseen test features, the proposed method does",
"cite_spans": [],
"ref_spans": [
{
"start": 86,
"end": 93,
"text": "Table 1",
"ref_id": null
},
{
"start": 619,
"end": 626,
"text": "Table 1",
"ref_id": null
},
{
"start": 1381,
"end": 1388,
"text": "Table 2",
"ref_id": null
},
{
"start": 1604,
"end": 1622,
"text": "Figures 2, 3 and 4",
"ref_id": null
},
{
"start": 1813,
"end": 1821,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A semantic kernel to classify texts with very few training examples",
"authors": [
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cammisa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Informatica (Slovenia)",
"volume": "30",
"issue": "2",
"pages": "163--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Basili, M. Cammisa, and A. Moschitti. 2006. A se- mantic kernel to classify texts with very few training examples. Informatica (Slovenia), 30(2):163-172.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic kernels for text classification based on topological measures of feature similarity",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bloehdorn",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cammisa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ICDM'06",
"volume": "",
"issue": "",
"pages": "808--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bloehdorn, R. Basili, M. Cammisa, and A. Moschitti. 2006. Semantic kernels for text classification based on topological measures of feature similarity. In Proc. of ICDM'06, pages 808-812.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory",
"authors": [
{
"first": "L",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "M",
"middle": [
"E"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of Second SIGdial Workshop on Discourse",
"volume": "16",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Carlson, D. Marcu, and M. E. Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. Proc. of Second SIG- dial Workshop on Discourse and Dialogue-Volume 16, pages 1-10.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent semantic kernels",
"authors": [
{
"first": "N",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lodhi",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Intelligent Information Systems",
"volume": "18",
"issue": "",
"pages": "127--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Cristianini, J. Shawe-Taylor, and H. Lodhi. 2002. La- tent semantic kernels. Journal of Intelligent Informa- tion Systems, 18:127-152.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society of Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391-407.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A novel discourse parser based on Support Vector Machine classification",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Duverle",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Prendinger",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL'09",
"volume": "",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. A. duVerle and H. Prendinger. 2009. A novel dis- course parser based on Support Vector Machine clas- sification. In Proc. of ACL'09, pages 665-673.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A re-examination of query expansion using lexical resources",
"authors": [
{
"first": "H",
"middle": [],
"last": "Fang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL'08",
"volume": "",
"issue": "",
"pages": "139--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Fang. 2008. A re-examination of query expansion us- ing lexical resources. In Proc. of ACL'08, pages 139- 147.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An electronic lexical database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum, editor. 1998. WordNet: An electronic lexi- cal database. MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A sequential model for discourse segmentation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of CICLing'10",
"volume": "",
"issue": "",
"pages": "315--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Hernault, D. Bollegala, and M. Ishizuka. 2010. A sequential model for discourse segmentation. In Proc. of CICLing'10, pages 315-326.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Fast exact inference with a factored model for natural language parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Advances in Neural Information Processing Systems",
"volume": "15",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems, volume 15. MIT Press.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Recognizing implicit discourse relations in the Penn Discourse Treebank",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "M-Y.",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP'09",
"volume": "",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Lin, M-Y. Kan, and H. T. Ng. 2009. Recognizing im- plicit discourse relations in the Penn Discourse Tree- bank. In Proc. of EMNLP'09, pages 343-351.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "NLTK: The natural language toolkit",
"authors": [
{
"first": "E",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL'02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics",
"volume": "",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Loper and S. Bird. 2002. NLTK: The natural lan- guage toolkit. In Proc. of ACL'02 Workshop on Effec- tive tools and methodologies for teaching natural lan- guage processing and computational linguistics, pages 63-70.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical decision-tree models for parsing",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of ACL'95",
"volume": "",
"issue": "",
"pages": "276--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Magerman. 1995. Statistical decision-tree models for parsing. Proc. of ACL'95, pages 276-283.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Rhetorical Structure Theory: Toward a functional theory of text organization",
"authors": [
{
"first": "W",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "Text",
"volume": "8",
"issue": "3",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. C. Mann and S. A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243-281.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Foundations of Statistical Natural Language processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. D. Manning and H. Sch\u00fctze. 1999. Foundations of Statistical Natural Language processing. MIT Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An unsupervised approach to recognizing discourse relations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Echihabi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL'02",
"volume": "",
"issue": "",
"pages": "368--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu and A. Echihabi. 2002. An unsupervised ap- proach to recognizing discourse relations. In Proc. of ACL'02, pages 368-375.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Theory and Practice of Discourse Parsing and Summarization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of En- glish: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Classias: A collection of machinelearning algorithms for classification",
"authors": [
{
"first": "N",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Okazaki. 2009. Classias: A collection of machine- learning algorithms for classification. http:// www.chokkan.org/software/classias/.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "2",
"issue": "",
"pages": "1--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Pang and L. Lee. 2008. Opinion mining and senti- ment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Easily identifiable discourse relations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Raghupathy",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of COLING'08 (Posters)",
"volume": "",
"issue": "",
"pages": "87--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Pitler, M. Raghupathy, H. Mehta, A. Nenkova, A. Lee, and A. Joshi. 2008. Easily identifiable discourse rela- tions. In Proc. of COLING'08 (Posters), pages 87-90.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ACL'09",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Pitler, A. Louis, and A. Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proc. of ACL'09, pages 683-691.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Generating dialogues between virtual agents automatically from text",
"authors": [
{
"first": "P",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of IVA'07",
"volume": "",
"issue": "",
"pages": "161--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Piwek, H. Hernault, H. Prendinger, and M. Ishizuka. 2007. Generating dialogues between virtual agents au- tomatically from text. In Proc. of IVA'07, pages 161- 174.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Karl Pearson and the chi-squared test",
"authors": [
{
"first": "R",
"middle": [
"L"
],
"last": "Plackett",
"suffix": ""
}
],
"year": 1983,
"venue": "International Statistical Review / Revue Internationale de Statistique",
"volume": "51",
"issue": "1",
"pages": "59--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. L. Plackett. 1983. Karl Pearson and the chi-squared test. International Statistical Review / Revue Interna- tionale de Statistique, 51(1):59-72.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Penn Discourse TreeBank 2.0",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of LREC'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Prasad, N. Dinesh, A. Lee, E. Miltsakaki, L. Robaldo, A. Joshi, and B. Webber. 2008a. The Penn Discourse TreeBank 2.0. In Proc. of LREC'08.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The Penn Discourse Treebank 2.0 annotation manual",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Prasad, E. Miltsakaki, N. Dinesh, A. Lee, A. Joshi, L. Robaldo, and B. Webber. 2008b. The Penn Dis- course Treebank 2.0 annotation manual. Technical re- port, University of Pennsylvania Institute for Research in Cognitive Science.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Introduction to Modern Information Retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton and C. Buckley. 1983. Introduction to Modern Information Retrieval. McGraw-Hill Book Company.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Support Vector Machines based on a semantic kernel for text categorization",
"authors": [
{
"first": "G",
"middle": [],
"last": "Siolas",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "D'alch\u00e9-Buc",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of IJCNN'00",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Siolas and F. d'Alch\u00e9-Buc. 2000. Support Vector Ma- chines based on a semantic kernel for text categoriza- tion. In Proc. of IJCNN'00, volume 5, page 5205.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Somasundaran",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Namata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of EMNLP'09",
"volume": "",
"issue": "",
"pages": "170--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Somasundaran, G. Namata, J. Wiebe, and L. Getoor. 2009. Supervised and unsupervised methods in em- ploying discourse relations for improving opinion po- larity classification. In Proc. of EMNLP'09, pages 170-179.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Sentence level discourse parsing using syntactic and lexical information",
"authors": [
{
"first": "R",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of NA-ACL'03",
"volume": "1",
"issue": "",
"pages": "149--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Soricut and D. Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. Proc. of NA-ACL'03, 1:149-156.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Nature of Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Two arguments of a discourse relation, and the minimum set of subtrees that contain them-lexical heads are indicated between brackets."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Number of features seen only in the test set, as a function of the number of training instances used."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Macro-average F-score (RSTDT) as a function of the number of training instances used. Macro-average F-score (PDTB) as a function of the number of training instances used."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Score change as a function of unseen test features for RSTDT (top) and PDTB (bottom)."
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
}
}
}
}