{
"paper_id": "K16-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:07.840008Z"
},
"title": "OPT: Oslo-Potsdam-Teesside Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Tatjana",
"middle": [],
"last": "Scheffler",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Uladzimir",
"middle": [],
"last": "Sidarenka",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.",
"pdf_parse": {
"paper_id": "K16-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-toend performance of 27.77 F 1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F 1 points, with particularly good results for the argument identification sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Being able to recognize aspects of discourse structure has recently been shown to be relevant for tasks as diverse as machine translation, questionanswering, text summarization, and sentiment analysis. For many of these applications, a 'shallow' approach as embodied in the PDTB can be effective. It is shallow in the sense of making only very few commitments to an overall account of discourse structure and of having annotation decisions concentrate on the individual instances of discourse relations, rather than on their interactions. Previous work on this task has usually broken it down into a set of sub-problems, which are solved in a pipeline architecture (roughly: identify connectives, then arguments, then discourse senses; Lin et al., 2014) . While adopting a similar pipeline approach, the OPT discourse parser also builds on and extends a method that has previously achieved state-of-the-art results for the detection of speculation and negation (Velldal et al., 2012; Read et al., 2012) . It is interesting to observe that an abstractly similar pipeline-disambiguating trigger expressions and then resolving their in-text 'scope'-yields strong performance across linguistically diverse tasks. At the same time, the original system has been substantially augmented for discourse parsing as outlined below. There is no closely corresponding sub-problem to assigning discourse senses in the analysis of negation and speculation; thus, our sense classifier described has been developed specifically for OPT.",
"cite_spans": [
{
"start": 736,
"end": 753,
"text": "Lin et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 961,
"end": 983,
"text": "(Velldal et al., 2012;",
"ref_id": "BIBREF14"
},
{
"start": 984,
"end": 1002,
"text": "Read et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system overview is shown in Figure 1 . The individual modules interface through JSON files which resemble the desired output files of the Task. Each module adds the information specified for it. We will describe them here in thematic blocks, while the exact order of the modules can be seen in the figure. Relation identification ( \u00a73) includes the detection of explicit discourse connectives and the stipulation of non-explicit relations. Our argument identification module ( \u00a74) contains separate subclassifiers for a range of argument types and is invoked separately for explicit and non-explicit relations. Likewise, the sense classification module ( \u00a75) employs separate ensemble classifiers for explicit and non-explicit relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "2"
},
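Editor's illustration (not from the paper): the inter-module interface can be pictured as each module reading and augmenting relation records shaped like the Shared Task output JSON. Field names follow the CoNLL 2015/16 format; the concrete values and the flat TokenList encoding here are invented.

```python
# Hypothetical relation record passed between OPT modules (values invented).
relation = {
    "DocID": "wsj_1000",
    "Type": "Explicit",                    # or "Implicit" / "EntRel"
    "Connective": {"TokenList": [12]},     # token offsets into the document
    "Arg1": {"TokenList": [0, 1, 2, 3]},   # filled in by argument identification
    "Arg2": {"TokenList": [13, 14, 15]},
    "Sense": ["Comparison.Contrast"],      # filled in by sense classification
}
```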
{
"text": "Explicit Connectives Our classifier for detecting explicit discourse connectives extends the work by Velldal et al. (2012) for identifying expressions of speculation and negation. The approach treats the set of connectives observed in the training data as a closed class, and 'only' attempts to disambiguate occurrences of these token sequences in new data. Connectives can be single-or multitoken sequences (e.g. 'as' vs. 'as long as'). In cases \u00a73 \u00a75 \u00a74 of overlapping connective candidates, OPT deterministically chooses the longest sequence. The Shared Task defines a notion of heads in complex connectives, for example just the final token in 'shortly after'. As evaluation is in terms of matching connective heads only, these are the unit of disambiguation in OPT. Disambiguation is performed as point-wise ('per-connective') classification using the support vector machine implementation of the SVM light toolkit (Joachims, 1999) . Tuning of feature configurations and the error-to-margin cost parameter (C) was performed by ten-fold cross validation on the Task training set.",
"cite_spans": [
{
"start": 101,
"end": 122,
"text": "Velldal et al. (2012)",
"ref_id": "BIBREF14"
},
{
"start": 920,
"end": 936,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Identification",
"sec_num": "3"
},
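The longest-sequence rule for overlapping connective candidates can be made concrete with a minimal sketch (ours, not the submitted code); candidates are token spans matching the closed class of connectives seen in training:

```python
# Keep the longest of any overlapping connective candidates; ties broken
# by earlier start. Spans are (start, end) token offsets, end exclusive.
def longest_match(candidates):
    chosen = []
    for span in sorted(candidates, key=lambda s: (s[0] - s[1], s[0])):
        if all(span[1] <= c[0] or span[0] >= c[1] for c in chosen):
            chosen.append(span)
    return sorted(chosen)

# In "he waited as long as he could", 'as' (2,3), 'as long as' (2,5),
# and 'as' (4,5) all match the closed class:
print(longest_match([(2, 3), (2, 5), (4, 5)]))  # -> [(2, 5)]
```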
{
"text": "The connective classifier builds on two groups of feature templates: (a) the generic, surface-oriented ones defined by Velldal et al. (2012) and (b) the more targeted, discourse-specific features of Pitler & Nenkova (2009) , Lin et al. (2014) , and Wang & Lan (2015) . Of these, group (a) comprises n-grams of downcased surface forms and parts of speech for up to five token positions preceding and following the connective; and group (b) draws heavily on syntactic configurations extracted from the phrase structure parses provided with the Task data. During system development, a few thousand distinct combinations of features were evaluated, including variable levels of feature conjunction (called interaction features by Pitler & Nenkova, 2009) within each group. These experiments suggest that there is substantial overlap between the utility of the various feature templates, and n-gram window size can to a certain degree be traded off with richer syntactic features. Many distinct configurations yield near-identical performance in cross-validation on the training data, and we selected our final model by (a) giving preference to configurations with smaller numbers of features and lower variance across folds and (b) additionally evaluating a dozen candidate configurations against the development data. The model used in the system submission includes n-grams of up to three preceding and following positions, full feature conjunction for the 'self' and 'parent' categories of Pitler & Nenkova (2009) , but limited conjunctions involving their 'left' and 'right' sibling categories, and none of the 'connected context' features suggested by Wang & Lan (2015) . This model has some 1.2 million feature types.",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "Velldal et al. (2012)",
"ref_id": "BIBREF14"
},
{
"start": 199,
"end": 222,
"text": "Pitler & Nenkova (2009)",
"ref_id": "BIBREF9"
},
{
"start": 225,
"end": 242,
"text": "Lin et al. (2014)",
"ref_id": "BIBREF8"
},
{
"start": 249,
"end": 266,
"text": "Wang & Lan (2015)",
"ref_id": "BIBREF15"
},
{
"start": 726,
"end": 749,
"text": "Pitler & Nenkova, 2009)",
"ref_id": "BIBREF9"
},
{
"start": 1489,
"end": 1512,
"text": "Pitler & Nenkova (2009)",
"ref_id": "BIBREF9"
},
{
"start": 1653,
"end": 1670,
"text": "Wang & Lan (2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Identification",
"sec_num": "3"
},
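As an illustration of the group (a) templates, the sketch below (ours; the feature-name strings are invented) emits downcased token and POS n-grams for a window around the connective head:

```python
def surface_features(tokens, pos, head, width=3):
    """Downcased token and POS unigrams/bigrams for up to `width`
    positions preceding and following the connective head."""
    feats = []
    for i in range(max(0, head - width), min(len(tokens), head + width + 1)):
        if i == head:
            continue
        feats.append(f"tok[{i - head:+d}]={tokens[i].lower()}")
        feats.append(f"pos[{i - head:+d}]={pos[i]}")
        if i + 1 < len(tokens) and i + 1 != head:  # bigram not crossing the head
            feats.append(f"bi[{i - head:+d}]={tokens[i].lower()}_{tokens[i + 1].lower()}")
    return feats
```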
{
"text": "Non-Explicit Relations According to the PDTB guidelines, non-explicit relations must be stipulated between each pair of sentences iff four conditions hold: two sentences (a) are adjacent; (b) are located in the same paragraph; and (c) are not yet 'connected' by an explicit connective; and (d) a coherence relation can be inferred or an entity-based relation holds between them. We proceed straightforwardly: We traverse the sentence bigrams, following condition (a). Paragraph boundaries are detected based on character offsets in the input text (b). We compute a list of already 'connected' first sentences in sentence bigrams, extracting all the previously detected explicit connectives whose Arg1 is located in the 'previous sentence' (PS; see \u00a74). If the first sentence in a candidate bigram is not yet 'connected' (c), we posit a nonexplicit relation for the bigram. Condition (d) is ignored, since NoRel annotations are extremely rare and EntRel vs. Implicit relations are disambiguated in the downstream sense module ( \u00a75). We currently do not attempt to recover the AltLex instances, because they are relatively infrequent and there is a high chance for false positives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Identification",
"sec_num": "3"
},
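A minimal sketch of this traversal (ours; the data structures are assumed, not the submitted code):

```python
def stipulate_non_explicit(sentences, paragraph_of, connected_first):
    """sentences: ids in document order; paragraph_of: id -> paragraph id;
    connected_first: ids already serving as PS Arg1 of an explicit relation."""
    relations = []
    for s1, s2 in zip(sentences, sentences[1:]):       # (a) adjacency
        if paragraph_of[s1] != paragraph_of[s2]:       # (b) same paragraph
            continue
        if s1 in connected_first:                      # (c) not yet 'connected'
            continue
        relations.append({"Type": "NonExplicit", "Arg1": s1, "Arg2": s2})
    return relations
```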
{
"text": "Our approach to argument identification is rooted in previous work on resolving the scope of spec- 2012: We generate candidate arguments by selecting constituents from a sentence parse tree, apply an automatically-learned ranking function to discriminate between candidates, and use the predicted constituent's surface projection to determine the extent of an argument. Like for explicit connective identification, all classifiers trained for argument identification use the SVM light toolkit and are tuned by ten-fold cross-validation against the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
{
"text": "Predicting Arg1 Location For non-explicit relations we make the simplifying assumption that Arg1 occurs in the sentence immediately preceding that of Arg2 (PS). However, the Arg1s of explicit relations frequently occur in the same sentence (SS), so, following Wang & Lan (2015), we attempt to learn a classification function to predict whether these are in SS or PS. Considering all features proposed by Wang & Lan, but under cross-validation on the training set, we found that the significantly informative features were limited to: the connective form, the syntactic path from connective to root, the connective position in sentence (tertiles), and a bigram of the connective and following token part-of-speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
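A sketch of the four retained feature types as a feature dictionary (ours; the names and the string-valued path encoding are illustrative):

```python
def arg1_position_features(conn_form, path_to_root, conn_index, sent_len, next_pos):
    """Features for classifying an explicit Arg1 as same-sentence (SS)
    vs. previous-sentence (PS)."""
    tertile = min(2, 3 * conn_index // max(1, sent_len))   # position in thirds
    return {
        "conn=" + conn_form.lower(): 1,
        "path=" + path_to_root: 1,                         # e.g. "IN<SBAR<S"
        "tertile=%d" % tertile: 1,
        "conn+pos=%s_%s" % (conn_form.lower(), next_pos): 1,
    }
```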
{
"text": "Candidate Generation and Ranking Candidates are limited to clausal constituents as these account for the majority of arguments, offering substantial coverage while restricting the ambiguity (i.e., the mean number of candidates per argument; see Table 1 ). Candidates whose projection corresponds to the true extent of the argument are labeled as correct; others are labeled as incorrect. We experimented with various feature types to describe candidates, using the implementation of ordinal ranking in SVM light (Joachims, 2002) . These types comprise both the candidate's surface projection (including: bigrams of tokens in candidate, connective, connective category (Knott, 1996) , connective part-of-speech, connective precedes the candidate, connective position in sentence, initial token of candidate, final token of candidate, size of candidate projection relative to the sentence, token immediately following the candidate, token immediately preceding the candidate, tokens in candidate, and verbs in candidate) and the candidate's position in the sentence's parse tree (including: path to connective, path to connective via root, path to initial token, path to root, path between initial and preceding tokens, path between final and following tokens, and production rules of the candidate subtree).",
"cite_spans": [
{
"start": 512,
"end": 528,
"text": "(Joachims, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 668,
"end": 681,
"text": "(Knott, 1996)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
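Candidate generation from a phrase-structure parse can be sketched as follows (ours, using nltk.Tree purely for illustration):

```python
from nltk import Tree

def clausal_candidates(parse_str):
    """All clausal constituents (S/SBAR/SQ) of a parse, as token lists."""
    tree = Tree.fromstring(parse_str)
    return [sub.leaves() for sub in tree.subtrees()
            if sub.label() in {"S", "SBAR", "SQ"}]

parse = ("(S (NP (PRP He)) (VP (VBD left) (SBAR (IN because)"
         " (S (NP (PRP it)) (VP (VBD rained))))))")
for cand in clausal_candidates(parse):
    print(cand)
# ['He', 'left', 'because', 'it', 'rained']
# ['because', 'it', 'rained']
# ['it', 'rained']
```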
{
"text": "An exhaustive search of all permutations of the above feature types requires significant resources. Instead we iteratively build a pool of feature types, at each stage assessing the contribution of each feature type when added to the pool, and only add a feature type if its contribution is statistically significant (using a Wilcoxon signed-rank test, p < .05). The most informative feature types thus selected are syntactic in nature, with a small but significant contribution from surface features. Table 2 lists the specific feature types found to be optimal for each particular type of argument.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
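A sketch of this greedy forward selection (ours; cv_score is an assumed callback returning the ten per-fold F1 scores for a feature pool):

```python
from scipy.stats import wilcoxon

def grow_pool(feature_types, cv_score, p_threshold=0.05):
    pool, baseline = [], cv_score([])
    improved = True
    while improved:
        improved = False
        for ft in [f for f in feature_types if f not in pool]:
            scores = cv_score(pool + [ft])
            # paired test over folds; assumes scores and baseline differ
            stat, p = wilcoxon(scores, baseline)
            if p < p_threshold and sum(scores) > sum(baseline):
                pool, baseline, improved = pool + [ft], scores, True
    return pool
```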
{
"text": "Constituent Editing Our approach to argument identification is based on the assumption that arguments correspond to syntactically meaningful units, more specifically we require arguments to be clausal constituents (S/SBAR/SQ). In order to test this assumption, we quantify the alignment of arguments with constituents in en.train, see Table 1 . We find that the initial alignment (Align w/o edits) is rather low, in particular for Explicit arguments (.48 for Arg1 and .54 for Arg2). We therefore formulate a set of constituent editing heuristics, designed to improve on this alignment by including or removing certain elements from the candidate constituent. We apply the following heuristics, with conditions by argument type (Arg1 vs. Arg2), connective type (explicit vs. non-explicit) and position (SS vs. PS) in parentheses. Following editing, the alignment of arguments with the edited constituents improves considerably for explicit Arg1s (.81) and Arg2s (.84), see Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 1",
"ref_id": null
},
{
"start": 972,
"end": 979,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
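Two of the listed edit operations, sketched on a token/POS representation of a candidate (ours, not the submitted code):

```python
PUNCT = {",", ".", ";", ":", "``", "''"}

def edit_candidate(tokens, pos, arg_type="Arg2", explicit=True):
    """Cut initial/final punctuation; cut a constituent-initial
    coordinating conjunction for explicit Arg2 candidates."""
    start, end = 0, len(tokens)
    while start < end and tokens[start] in PUNCT:
        start += 1
    while end > start and tokens[end - 1] in PUNCT:
        end -= 1
    if explicit and arg_type == "Arg2" and start < end and pos[start] == "CC":
        start += 1
    return tokens[start:end]

print(edit_candidate(["But", "prices", "fell", "."],
                     ["CC", "NNS", "VBD", "."]))  # -> ['prices', 'fell']
```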
{
"text": "Limitations The assumptions of our approach mean that the system upper-bound is limited in three respects. Firstly, some arguments span sentence boundaries (see Sent. Span in Table 1 ) meaning there can be no single aligned constituent. Secondly, not all arguments correspond with clausal constituents (approximately 1.7% of arguments in en.train align with a constituent of some other type). Finally, as reported in Table 1 , several Arg1s occur in neither the same sentence nor the immediately preceding sentence. Table 1 provides system upper-bounds taking each of these limitations into account.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 1",
"ref_id": null
},
{
"start": 417,
"end": 424,
"text": "Table 1",
"ref_id": null
},
{
"start": 516,
"end": 523,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Argument Identification",
"sec_num": "4"
},
{
"text": "In order to assign senses to the predicted relations, we apply an ensemble-classification approach. In particular, we use two separate groups of classifiers: one group for predicting the senses of explicit relations and another one for analyzing the senses of non-explicit relations. Each of these groups comprises the same types of predictors (presented below) but uses different feature sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sense Classification",
"sec_num": "5"
},
{
"text": "Majority Class Senser The first classifier included in both of our ensembles is a simplistic system which, given an input connective (none for non-explicit relations), returns a vector of conditional probabilities of its senses computed on the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sense Classification",
"sec_num": "5"
},
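A minimal version of such a predictor (ours):

```python
from collections import Counter, defaultdict

class MajoritySenser:
    """Conditional sense probabilities per connective (None for non-explicit)."""
    def fit(self, relations):                     # (connective, sense) pairs
        counts = defaultdict(Counter)
        for conn, sense in relations:
            counts[conn][sense] += 1
        self.probs = {c: {s: n / sum(cnt.values()) for s, n in cnt.items()}
                      for c, cnt in counts.items()}
        return self

    def predict_proba(self, conn):
        return self.probs.get(conn, {})

senser = MajoritySenser().fit([("but", "Comparison.Contrast"),
                               ("but", "Comparison.Concession"),
                               ("but", "Comparison.Contrast")])
print(senser.predict_proba("but"))  # Contrast: 0.67, Concession: 0.33
```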
{
"text": "W&L LSVC Another prediction module is a reimplementation of the Wang & Lan (2015) systemthe winner of the previous iteration of the ConNLL Shared Task on shallow discourse parsing. In contrast to the original version, however, which relies on the Maximum Entropy classifier for predicting the senses of explicit relations and utilizes the Na\u00efve Bayes approach for classifying the senses of the non-explicit ones, both of our components (explicit and non-explicit) use the LIBLINEAR system (Fan et al., 2008 )-a speed-optimized SVM (Boser et al., 1992) with linear kernel. In our derived classifier, we adopt all features 1 of the original implementation up to the Brown clusters, where instead of taking the differences and intersections of the clusters from both arguments, we use the Cartesian product (CP) of the Brown groups similarly to the token-CP features of the UniTN system from last year (Stepanov et al., 2015) . Additionally, in order to reduce the number of possible CP attributes, we take the set of 1,000 clusters provided by the organizers of the Task instead of differentiating between 3,200 Brown groups as was done originally by Wang & Lan (2015) . Unlike the upstream modules in our pipeline, whose model parameters are tuned by 10-fold crossvalidation on the training set, the hyper-parameters of the sense classifiers are tweaked towards the development set, while using the entire training data for computing the feature weights. This decision is motivated by the wish to harness the full range of the training set, since the number of the target classes to predict is much bigger than in the preceding sub-tasks and because some of the senses, e.g. Expansion.Exception, only appear a dozen of times in the provided dataset. For training the final system, we use the Crammer-Singer multi-class strategy (Crammer & Singer, 2001) optimizing the primal objective and setting the error penalty term C to 0.3.",
"cite_spans": [
{
"start": 489,
"end": 506,
"text": "(Fan et al., 2008",
"ref_id": "BIBREF2"
},
{
"start": 531,
"end": 551,
"text": "(Boser et al., 1992)",
"ref_id": "BIBREF0"
},
{
"start": 899,
"end": 922,
"text": "(Stepanov et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1149,
"end": 1166,
"text": "Wang & Lan (2015)",
"ref_id": "BIBREF15"
},
{
"start": 1827,
"end": 1851,
"text": "(Crammer & Singer, 2001)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sense Classification",
"sec_num": "5"
},
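A sketch of this component, assuming scikit-learn's LinearSVC (a LIBLINEAR wrapper) stands in for the original LIBLINEAR call; the Brown cluster ids and sense labels are invented:

```python
from itertools import product
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def brown_cp_features(arg1_clusters, arg2_clusters):
    # one indicator per (Arg1 cluster, Arg2 cluster) pair from the 1,000-cluster set
    return {"bcp=%s|%s" % pair: 1 for pair in product(arg1_clusters, arg2_clusters)}

vec = DictVectorizer()
X = vec.fit_transform([brown_cp_features(["0101"], ["1100", "0011"]),
                       brown_cp_features(["1110"], ["0011"])])
y = ["Expansion.Conjunction", "Comparison.Contrast"]
clf = LinearSVC(multi_class="crammer_singer", C=0.3).fit(X, y)
```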
{
"text": "W&L XGBoost Even though linear SVM systems achieve competitive results on many important classification tasks, these systems can still experience difficulties with discerning instances that are not separable by a hyperplane. In order to circumvent this problem, we use a third type of classifier in our ensembles-a forest of decision trees learned by gradient boosting (XGBoost; Friedman, 2000) . For this part, we take the same set of features as in the previous component and optimize the hyperparameters of this module on the development set as described previously. In particular, we set the maximum tree depth to 3 and take 300 tree estimators for the complete forest.",
"cite_spans": [
{
"start": 379,
"end": 394,
"text": "Friedman, 2000)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sense Classification",
"sec_num": "5"
},
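Assuming the xgboost Python package, the corresponding configuration is roughly:

```python
from xgboost import XGBClassifier

# Same feature matrix as the linear model; hyper-parameters as reported above.
clf = XGBClassifier(max_depth=3, n_estimators=300)
# clf.fit(X_train, y_train); clf.predict(X_dev)
```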
{
"text": "Prediction Merging To compute the final predictions, we first obtain vectors of the estimated sense probabilities for each input instance from the three classifiers in the respective ensemble and then sum up these vectors, choosing the sense with the highest final score. More formally, we compute the prediction label\u0177 i for the input instance x i a\u015d y i = arg max n j=1 v j , where n is the number of classifiers in the ensemble (in our case three), and v j denotes the output probability vector of the jth predictor. Since the XGBoost implementation we use, however, can only return classifications without actual probability estimates, we obtain a probability vector for this component by assigning the score 1 \u2212 to the predicted sense class (with the -term determined on the development and set to 0.1) and uniformly distributing the -weight among the remaining senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Sense Classification",
"sec_num": "5"
},
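The merging step, sketched with numpy (ours):

```python
import numpy as np

def smooth_hard_label(pred_index, n_senses, eps=0.1):
    """Turn a hard XGBoost label into a probability vector:
    1 - eps on the prediction, eps spread over the other senses."""
    v = np.full(n_senses, eps / (n_senses - 1))
    v[pred_index] = 1.0 - eps
    return v

def merge(vectors):
    """Sum per-classifier sense probability vectors; return argmax index."""
    return int(np.argmax(np.sum(np.asarray(vectors), axis=0)))
```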
{
"text": "Overall Results Table 3 summarizes OPT system performance in terms of the metrics computed by the official scorer for the Shared Task, against both the WSJ and 'blind' test sets. To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively) . Where applicable, best results (when comparing F 1 ) are highlighted for each sub-task and -metric. The highlighting makes it evident that the OPT system is competitive to the state of the art across the board, but particularly so on the argument identification sub-task and on the 'blind' test data: In terms of the WSJ test data, OPT would have ranked second in the 2015 competition, but on the 'blind' data it outperforms the previous state of the art on all but one metric for which contrastive results are provided by Xue et al.. Where earlier systems tend to drop by several F 1 points when evaluated on the non-WSJ data, this 'out-of-domain' effect is much smaller for OPT. For comparison, we also include the top scores for each submodule achieved by any system in the 2016 Shared Task.",
"cite_spans": [
{
"start": 331,
"end": 352,
"text": "Xue et al., 2015, and",
"ref_id": "BIBREF16"
},
{
"start": 353,
"end": 384,
"text": "Xue et al., 2016, respectively)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "Non-Explicit Relations In isolation, the stipulation of non-explicit relations achieves an F 1 of 93.2 on the WSJ test set (P = 89.9, R = 96.8). Since this sub-module does not specify full argument spans, we match gold and predicted relations based on the sentence identifiers of the arguments only. False positives include NoRel and missing relations. About half of the false negatives are relations within the same sentence (across a semicolon).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "Blind Set Arg1 Arg2 Both Arg1 Arg2 Both Explicit (SS) . 683 .817 .590 .647 .783 .519 Explicit (PS) . 623 .663 .462 .611 .832 .505 Explicit (All) . 572 .753 .474 .586 .782 .473 Non-explicit (All) . 744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509 Table 4: Isolated argument extraction results (PS refers to the immediately preceding sentence only).",
"cite_spans": [
{
"start": 49,
"end": 53,
"text": "(SS)",
"ref_id": null
},
{
"start": 56,
"end": 98,
"text": "683 .817 .590 .647 .783 .519 Explicit (PS)",
"ref_id": null
},
{
"start": 101,
"end": 144,
"text": "623 .663 .462 .611 .832 .505 Explicit (All)",
"ref_id": null
},
{
"start": 147,
"end": 175,
"text": "572 .753 .474 .586 .782 .473",
"ref_id": null
},
{
"start": 197,
"end": 263,
"text": "744 .743 .593 .640 .758 .539 Overall .668 .749 .536 .617 .769 .509",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WSJ Test Set",
"sec_num": null
},
{
"text": "Arguments Table 4 reports the isolated performance for argument identification. Most results are consistent across types of arguments, the two data sets, and the upper-bound estimates in Table 1 , with Arg1 harder to identify than Arg2. However an anomaly is the extraction of Arg2 in explicit relations where the Arg1 is in the immediately preceding sentence, which is poor in the WSJ Test Set but better in the blind set. This may be due to variance in the number of PS Arg1s in the respective sets, but will be investigated further in future work on error analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 10,
"end": 17,
"text": "Table 4",
"ref_id": null
},
{
"start": 187,
"end": 194,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "WSJ Test Set",
"sec_num": null
},
{
"text": "The results of the sense classification subtask without error propagation are shown in Table 5 . As can be seen from the table, the LIBLINEAR reimplementation of the Wang & Lan system was the strongest component in our ensemble, outperforming the best results on the WSJ test set from the previous year by 0.89 F 1 . The XGBoost variant of that module typically achieved the second best scores, being slightly better at predicting the sense of non-explicit relations on the blind test set. The majority class predictor is the least competitive part, which, however, is made up for by the simplicity of the model and its relative robustness to unseen data. Finally, we report on a system variant that was not part of the official OPT submission, shown in the bottom rows of Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 5",
"ref_id": null
},
{
"start": 773,
"end": 780,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sense Classification",
"sec_num": null
},
{
"text": "In this configuration, we added more features (types of modal verbs in the arguments, occurrence of negation, as well as the form and part-of-speech tag of the word immediately following the connective) to the W&L-based classifier of explicit relations, re-adjusting the hyper-parameters of this model afterwards; increased the -term of the XG-Boost component from 0.1 to 0.5; and, finally, replaced the majority class predictor with a neural LSTM model (Hochreiter & Schmidhuber, 1997) Table 5 : Isolated results for sense classification (the bottom * model was not part of the submission).",
"cite_spans": [
{
"start": 454,
"end": 486,
"text": "(Hochreiter & Schmidhuber, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 487,
"end": 494,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sense Classification",
"sec_num": null
},
{
"text": "using the provided Word2Vec embeddings as input. This ongoing work shows clear promise for substantive improvements in sense classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sense Classification",
"sec_num": null
},
{
"text": "The most innovative aspect of this work, arguably, is our adaptation of constituent ranking and editing from negation and speculation analysis to the sub-task of argument identification in discourse parsing. Premium performance (relatively speaking, comparing to the previous state of the art) on this sub-problem is in no small part the reason for overall competitive performance of the OPT system, despite its relatively simplistic architecture. The constituent ranker (and to some degree also the 'targeted' features in connective disambiguation) embodies a strong commitment to syntactic analysis as a prerequisite to discourse parsing. This is an interesting observation, in that it (a) confirms tight interdependencies between intra-and interutterance analysis and (b) offers hope that higherquality syntactic analysis should translate into improved discourse parsing. We plan to investigate these connections through in-depth error analysis and follow-up experimentation with additional syntactic parsers and types of representations. Another noteworthy property of our OPT system submission appears to be its relative resilience to minor differences in text type between the WSJ and 'blind' test data. We attribute this behavior at least in part to methodological choices made in parameter tuning, in particular cross-validation over the training data-yielding more reliable estimates of system performance than tuning against the much smaller development set-and selective, step-wise inclusion of features in model development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion & Outlook",
"sec_num": "7"
},
{
"text": "A detailed description of these features can be found in the original paper byWang & Lan (2015) and their code posted on github: https://github.com/lanmanok/ conll2015_discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are indebted to Te Rutherford of Brandeis University for his effort in preparing data and infrastructure for the Task, as well as for shepherding our team and everyone else through its various stages. We are grateful to two anonymous reviewers for comments on an earlier version of this manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A training algorithm for optimal margin classifiers",
"authors": [
{
"first": "B",
"middle": [
"E"
],
"last": "Boser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the Fifth Annual ACM Conference on Computational Learning Theory",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boser, B. E., Guyon, I., & Vapnik, V. (1992). A training algorithm for optimal margin classifiers. In Proceed- ings of the Fifth Annual ACM Conference on Compu- tational Learning Theory (p. 144 -152). Pittsburgh, PA, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the algorithmic implementation of multiclass kernel-based vector machines",
"authors": [
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "265--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crammer, K., & Singer, Y. (2001). On the algo- rithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Re- search, 2, 265 -292.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "LIBLINEAR. A library for large linear classification",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "1871--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fan, R., Chang, K., Hsieh, C., Wang, X., & Lin, C. (2008). LIBLINEAR. A library for large linear clas- sification. Journal of Machine Learning Research, 9, 1871 -1874.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Greedy function approximation. A gradient boosting machine",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Friedman",
"suffix": ""
}
],
"year": 2000,
"venue": "Annals of Statistics",
"volume": "29",
"issue": "",
"pages": "1189--1232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Friedman, J. H. (2000). Greedy function approxima- tion. A gradient boosting machine. Annals of Statis- tics, 29, 1189 -1232.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Long shortterm memory",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hochreiter, S., & Schmidhuber, J. (1997). Long short- term memory. Neural Computation, 9(8), 1735 - 1780.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in kernel methods. Support vector learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T. (1999). Making large-scale SVM learning practical. In B. Sch\u00f6lkopf, C. Burges, & A. Smola (Eds.), Advances in kernel methods. Support vector learning. Cambridge, MA, USA: MIT Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T. (2002). Optimizing search engines us- ing clickthrough data. In Proceedings of the 8th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining (p. 133 -142). Ed- monton, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A data-driven methodology for motivating a set of coherence relations. Unpublished doctoral dissertation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Knott",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knott, A. (1996). A data-driven methodology for mo- tivating a set of coherence relations. Unpublished doctoral dissertation, University of Edinburgh.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A PDTBstyled end-to-end discourse parser",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M.-Y",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "2",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, Z., Ng, H. T., & Kan, M.-Y. (2014). A PDTB- styled end-to-end discourse parser. Natural Lan- guage Engineering, 20(2), 151 -184.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using syntax to disambiguate explicit discourse connectives in text",
"authors": [
{
"first": "E",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 47th Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pitler, E., & Nenkova, A. (2009). Using syntax to dis- ambiguate explicit discourse connectives in text. In Proceedings of the 47th Meeting of the Association for Computational Linguistics (p. 13 -16). Singa- pore.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Constituent-based discriminative ranking for negation resolution",
"authors": [
{
"first": "",
"middle": [],
"last": "Uio1",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 1st Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "310--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "UiO1. Constituent-based discriminative ranking for negation resolution. In Proceedings of the 1st Joint Conference on Lexical and Computational Seman- tics (p. 310 -318). Montr\u00e9al, Canada.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The UniTN discourse parser in CoNLL 2015 Shared Task. Token-level sequence labeling with argumentspecific models",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Stepanov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "A",
"middle": [
"O"
],
"last": "Bayer",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 19th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "25--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stepanov, E. A., Riccardi, G., & Bayer, A. O. (2015). The UniTN discourse parser in CoNLL 2015 Shared Task. Token-level sequence labeling with argument- specific models. In Proceedings of the 19th Con- ference on Natural Language Learning (p. 25 -31).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Speculation and negation: Rules, rankers and the role of syntax",
"authors": [
{
"first": "E",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "2",
"pages": "369--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Velldal, E., \u00d8vrelid, L., Read, J., & Oepen, S. (2012). Speculation and negation: Rules, rankers and the role of syntax. Computational Linguistics, 38(2), 369 -410.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A refined end-to-end discourse parser",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 19th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, J., & Lan, M. (2015). A refined end-to-end dis- course parser. In Proceedings of the 19th Conference on Natural Language Learning (p. 17 -24). Bejing, China.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The CoNLL-2015 shared task on shallow discourse parsing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 19th Conference on Natural Language Learning: Shared task",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, N., Ng, H. T., Pradhan, S., Prasad, R., Bryant, C., & Rutherford, A. (2015). The CoNLL-2015 shared task on shallow discourse parsing. In Proceedings of the 19th Conference on Natural Language Learning: Shared task (p. 1 -16). Bejing, China.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The CoNLL-2016 shared task on multilingual shallow discourse parsing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "H",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th Conference on Natural Language Learning: Shared task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, N., Ng, H. T., Pradhan, S., Webber, B., Ruther- ford, A., Wang, C., & Wang, H. (2016). The CoNLL- 2016 shared task on multilingual shallow discourse parsing. In Proceedings of the 20th Conference on Natural Language Learning: Shared task. Berlin, Germany.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "OPT system overview: Dotted boxes indicate sections that describe particular components.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "add conjunction (CC) preceding constituent (Arg1) -cut clause headed by connective (Arg1, explicit, SS) -cut constituent-final CC (Arg1) -cut constituent-final wh-determiner (Arg1) -cut constituent-initial CC (Arg2, explicit) -cut relative clause, i.e. SBAR initiated by WHNP/WHADVP -cut connective -cut initial and final punctuation",
"type_str": "figure",
"num": null
},
"TABREF3": {
"html": null,
"text": "Feature types used to describe candidate constituents for argument ranking.",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF4": {
"html": null,
"text": "with L2-loss,",
"type_str": "table",
"content": "<table><tr><td/><td/><td colspan=\"2\">WSJ Test Set</td><td/><td/><td/><td colspan=\"2\">Blind Test Set</td><td/></tr><tr><td/><td colspan=\"2\">2015 2016</td><td colspan=\"2\">OPT</td><td/><td colspan=\"2\">2015 2016</td><td colspan=\"2\">OPT</td></tr><tr><td/><td>F1</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>F1</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>Explicit Connectives</td><td>94.8</td><td colspan=\"4\">98.9 96.4 92.5 94.4</td><td>91.9</td><td colspan=\"4\">98.4 93.5 90.1 91.8</td></tr><tr><td>Explicit Arg1 Extraction</td><td>50.7</td><td colspan=\"4\">53.8 53.1 50.9 52.0</td><td>49.7</td><td colspan=\"4\">52.4 53.4 51.5 52.4</td></tr><tr><td>Explicit Arg2 Extraction</td><td>77.4</td><td colspan=\"4\">76.7 74.1 71.1 72.6</td><td>74.3</td><td colspan=\"4\">75.2 76.6 73.8 75.2</td></tr><tr><td>Explicit Both Extraction</td><td>45.2</td><td colspan=\"4\">45.3 44.9 43.0 43.9</td><td>41.4</td><td colspan=\"4\">44.0 44.9 43.2 44.0</td></tr><tr><td>Explicit Sense Micro-Average</td><td/><td/><td colspan=\"3\">38.6 40.2 39.4</td><td/><td/><td colspan=\"3\">33.9 35.1 34.5</td></tr><tr><td>Non-Explicit Arg1 Extraction</td><td>67.2</td><td colspan=\"4\">69.9 72.0 68.0 69.9</td><td>60.9</td><td colspan=\"4\">66.8 63.7 65.5 64.6</td></tr><tr><td>Non-Explicit Arg2 Extraction</td><td>68.4</td><td colspan=\"4\">71.5 73.5 69.5 71.5</td><td>74.6</td><td colspan=\"4\">79.1 75.3 77.5 76.4</td></tr><tr><td>Non-Explicit Both Extraction</td><td>53.1</td><td colspan=\"4\">53.5 55.0 52.0 53.5</td><td>50.4</td><td colspan=\"4\">58.1 51.3 52.8 52.0</td></tr><tr><td>Non-Explicit Sense Micro-Average</td><td/><td/><td colspan=\"3\">17.5 18.6 18.0</td><td/><td/><td colspan=\"3\">22.0 21.6 21.9</td></tr><tr><td>All Both Extraction</td><td>49.4</td><td colspan=\"4\">49.6 50.2 47.8 48.9</td><td>46.4</td><td colspan=\"4\">50.6 48.3 48.1 48.2</td></tr><tr><td>Overall Parser Performance</td><td>29.7</td><td colspan=\"4\">30.7 27.5 28.9 28.2</td><td>24.0</td><td colspan=\"4\">27.8 27.8 27.8 27.8</td></tr></table>",
"num": null
},
"TABREF5": {
"html": null,
"text": "Per-component breakdown of system performance, compared to top performers in 2015/16.",
"type_str": "table",
"content": "<table/>",
"num": null
},
"TABREF6": {
"html": null,
"text": "34.45 61.27 76.44 36.29 54.76 Majority 89.30 21.40 54.02 75.91 30.46 51.39 W&LLSVC 89.63 37.18 62.29 77.86 33.05 53.66 W&LXGB 89.41 34.12 60.64 76.27 34.42 53.62 OPT 89.95 33.53 60.64 76.81 33.66 53.54 LSTM * 89.90 33.76 60.78 77.63 33.69 53.29 OPT * 90.01 41.12 64.70 77.06 37.20 55.55",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">WSJ Test Set</td><td/><td colspan=\"2\">Blind Set</td></tr><tr><td>System</td><td>Exp</td><td>Non-</td><td>All</td><td>Exp</td><td>Non-</td><td>All</td></tr><tr><td/><td/><td>Exp</td><td/><td/><td>Exp</td><td/></tr><tr><td>2015</td><td>90.79</td><td/><td/><td/><td/><td/></tr><tr><td>,</td><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null
}
}
}
}