{
"paper_id": "N18-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:51:31.184926Z"
},
"title": "Improving Implicit Discourse Relation Classification by Modeling Inter-dependencies of Discourse Units in a Paragraph",
"authors": [
{
"first": "Zeyu",
"middle": [],
"last": "Dai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Texas A&M University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Texas A&M University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We argue that semantic meanings of a sentence or clause can not be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure. With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural networks that model inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predict a sequence of discourse relations in a paragraph. Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB.",
"pdf_parse": {
"paper_id": "N18-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "We argue that semantic meanings of a sentence or clause can not be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure. With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural networks that model inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predict a sequence of discourse relations in a paragraph. Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "PDTB-style discourse relations, mostly defined between two adjacent text spans (i.e., discourse units, either clauses or sentences), specify how two discourse units are logically connected (e.g., causal, contrast). Recognizing discourse relations is one crucial step in discourse analysis and can be beneficial for many downstream NLP applications such as information extraction, machine translation and natural language generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Commonly, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective (e.g., \"because\" and \"after\") appears between two discourse units (Prasad et al., 2008a) . While explicit discourse relation detection can be framed as a discourse connective disambiguation problem Lin et al., 2014) and has achieved reasonable performance (F1 score > 90%), implicit discourse relations have no discourse connective and are especially difficult to identify (Lin et al., 2009 (Lin et al., , 2014 . To fill the gap, implicit discourse relation prediction has drawn significant research interest recently and progress has been made (Chen et al., 2016; by modeling compositional meanings of two discourse units and exploiting word interactions between discourse units using neural tensor networks or attention mechanisms in neural nets. However, most of existing approaches ignore wider paragraph-level contexts beyond the two discourse units that are examined for predicting a discourse relation in between.",
"cite_spans": [
{
"start": 188,
"end": 210,
"text": "(Prasad et al., 2008a)",
"ref_id": "BIBREF24"
},
{
"start": 320,
"end": 337,
"text": "Lin et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 495,
"end": 512,
"text": "(Lin et al., 2009",
"ref_id": "BIBREF15"
},
{
"start": 513,
"end": 532,
"text": "(Lin et al., , 2014",
"ref_id": "BIBREF16"
},
{
"start": 667,
"end": 686,
"text": "(Chen et al., 2016;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To further improve implicit discourse relation prediction, we aim to improve discourse unit representations by positioning a discourse unit (DU) in its wider context of a paragraph. The key observation is that semantic meaning of a DU can not be interpreted independently from the rest of the paragraph that contains it, or independently from the overall paragraph-level discourse structure that involve the DU. Considering the following paragraph with four discourse relations, one relation between each two adjacent DUs: (1): [The Butler, Wis., manufacturer went public at $15.75 a share in August 1987,] DU 1 and (Explicit-Expansion) [Mr.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sim's goal then was a $29 per-share price by 1992.] DU 2 (Implicit-Expansion) [Strong earnings growth helped achieve that price far ahead of schedule, in August 1988.] DU 3 (Implicit-Comparison) [The stock has since softened, trading around $25 a share last week and closing yesterday at $23 in national over-the-counter trading.] DU 4 But (Explicit-Comparison) [Mr. Sim has set a fresh target of $50 a share by the end of reaching that goal.] DU 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Clearly, each DU is an integral part of the paragraph and not independent from other units. First, predicting a discourse relation may require understanding wider paragraph-level contexts beyond two relevant DUs and the overall discourse structure of a paragraph. For example, the implicit \"Comparison\" discourse relation between DU3 and DU4 is difficult to identify without the back-ground information (the history of per-share price) introduced in DU1 and DU2. Second, a DU may be involved in multiple discourse relations (e.g., DU4 is connected with both DU3 and DU5 with a \"Comparison\" relation), therefore the pragmatic meaning representation of a DU should reflect all the discourse relations the unit was involved in. Third, implicit discourse relation prediction should benefit from modeling discourse relation continuity and patterns in a paragraph that involve easy-to-identify explicit discourse relations (e.g., \"Implicit-Comparison\" relation is followed by \"Explicit-Comparison\" in the above example).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following these observations, we construct a neural net model to process a paragraph each time and jointly build meaning representations for all DUs in the paragraph. The learned DU representations are used to predict a sequence of discourse relations in the paragraph, including both implicit and explicit relations. Although explicit relations are not our focus, predicting an explicit relation will help to reveal the pragmatic roles of its two DUs and reconstruct their representations, which will facilitate predicting neighboring implicit discourse relations that involve one of the DUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, we introduce two novel designs to further improve discourse relation classification performance of our paragraph-level neural net model. First, previous work has indicated that recognizing explicit and implicit discourse relations requires different strategies, we therefore untie parameters in the discourse relation prediction layer of the neural networks and train two separate classifiers for predicting explicit and implicit discourse relations respectively. This unique design has improved both implicit and explicit discourse relation identification performance. Second, we add a CRF layer on top of the discourse relation prediction layer to fine-tune a sequence of predicted discourse relations by modeling discourse relation continuity and patterns in a paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results show that the intuitive paragraph-level discourse relation prediction model achieves improved performance on PDTB for both implicit discourse relation classification and explicit discourse relation classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since the PDTB (Prasad et al., 2008b) corpus was created, a surge of studies Lin et al., 2009; have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented. Early studies (Pitler et al., 2008; Lin et al., 2009 Lin et al., , 2014 focused on extracting linguistic and semantic features from two discourse units. Recent research (Zhang et al., 2015; Ji and Eisenstein, 2015; Ji et al., 2016) tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor (Chen et al., 2016; Qin et al., 2016; Lei et al., 2017) and attention mechanisms Lan et al., 2017; ). Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB Lan et al., 2017; Qin et al., 2017) and unlabeled data obtained elsewhere Lan et al., 2017) , often in a multi-task joint learning framework.",
"cite_spans": [
{
"start": 15,
"end": 37,
"text": "(Prasad et al., 2008b)",
"ref_id": "BIBREF25"
},
{
"start": 77,
"end": 94,
"text": "Lin et al., 2009;",
"ref_id": "BIBREF15"
},
{
"start": 314,
"end": 335,
"text": "(Pitler et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 336,
"end": 352,
"text": "Lin et al., 2009",
"ref_id": "BIBREF15"
},
{
"start": 353,
"end": 371,
"text": "Lin et al., , 2014",
"ref_id": "BIBREF16"
},
{
"start": 469,
"end": 489,
"text": "(Zhang et al., 2015;",
"ref_id": "BIBREF35"
},
{
"start": 490,
"end": 514,
"text": "Ji and Eisenstein, 2015;",
"ref_id": "BIBREF7"
},
{
"start": 515,
"end": 531,
"text": "Ji et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 740,
"end": 759,
"text": "(Chen et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 760,
"end": 777,
"text": "Qin et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 778,
"end": 795,
"text": "Lei et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 821,
"end": 838,
"text": "Lan et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 985,
"end": 1002,
"text": "Lan et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 1003,
"end": 1020,
"text": "Qin et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 1059,
"end": 1076,
"text": "Lan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Discourse Relation Recognition",
"sec_num": "2.1"
},
{
"text": "However, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units. In contrast, we model inter-dependencies of discourse units in a paragraph when building discourse unit representations; in addition, we model global continuity and patterns in a sequence of discourse relations, including both implicit and explicit relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Discourse Relation Recognition",
"sec_num": "2.1"
},
{
"text": "Hierarchical neural network models (Liu and Lapata, 2017; have been applied to RST-style discourse parsing (Carlson et al., 2003) mainly for the purpose of generating text-level hierarchical discourse structures. In contrast, we use hierarchical neural network models to build context-aware sentence representations in order to improve implicit discourse relation prediction.",
"cite_spans": [
{
"start": 35,
"end": 57,
"text": "(Liu and Lapata, 2017;",
"ref_id": "BIBREF17"
},
{
"start": 107,
"end": 129,
"text": "(Carlson et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit Discourse Relation Recognition",
"sec_num": "2.1"
},
{
"text": "Abstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task. While several novel neural network models (Zhang et al., 2017b,a) have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based methods remain the most effective approaches. RNNs, especially the long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) models, have been widely used to encode a paragraph for machine translation (Sutskever et al., 2014) , dialogue systems (Serban et al., 2016) and text summarization (Nallapati et al., 2016) because of its ability in modeling long-distance dependencies between words. In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) (Schuster and Paliwal, 1997) yields the current best universal sentence representation method (Conneau et al., 2017) . We adopted a similar neural network architecture for paragraph encoding.",
"cite_spans": [
{
"start": 152,
"end": 175,
"text": "(Zhang et al., 2017b,a)",
"ref_id": null
},
{
"start": 373,
"end": 407,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 484,
"end": 508,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 573,
"end": 597,
"text": "(Nallapati et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 895,
"end": 923,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF31"
},
{
"start": 989,
"end": 1011,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph Encoding",
"sec_num": "2.2"
},
{
"text": "Paragraph-level Discourse Relation Recognition 3.1 The Basic Model Architecture Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer. The input of the neural network model is a paragraph containing a sequence of discourse units, while the output is a sequence of discourse relations with one relation between each pair of adjacent discourse units 1 . Given the words sequence of one paragraph as input, the lower Bi-LSTM layer will read the whole paragraph and calculate hidden states as word representations, and a max-pooling layer will be applied to abstract the representation of each discourse unit based on individual word representations. Then another Bi-LSTM layer will run over the sequence of discourse unit representations and compute new representations by further modeling semantic dependencies between discourse units within paragraph. The final softmax prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Word Vectors as Input: The input of the paragraph-level discourse relation prediction model is a sequence of word vectors, one vector per word in the paragraph. In this work, we used the pre-trained 300-dimension Google English word2vec embeddings 2 . For each word that is not in the vocabulary of Google word2vec, we will randomly initialize a vector with each dimension sampled from the range [\u22120.25, 0.25] . In addition, recognizing key entities and discourse connective phrases is important for discourse relation recognition, therefore, we concatenate the raw word embeddings with extra linguistic features, specifically one-hot Part-Of-Speech tag embeddings and one-hot named entity tag embeddings 3 .",
"cite_spans": [
{
"start": 396,
"end": 409,
"text": "[\u22120.25, 0.25]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
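To make the input featurization above concrete, here is a minimal sketch (not the authors' released code; the tag inventories and the function name `build_word_vector` are illustrative) of concatenating a pre-trained word2vec vector with one-hot POS and named-entity features, sampling out-of-vocabulary vectors uniformly from [-0.25, 0.25]:

```python
import numpy as np

# Hypothetical tag inventories; the paper uses 36 POS tags and 7 NE tags.
POS_TAGS = ["NN", "VB", "JJ", "DT"]          # truncated for illustration
NER_TAGS = ["O", "PERSON", "ORG", "MONEY"]   # truncated for illustration

def one_hot(tag, inventory):
    vec = np.zeros(len(inventory), dtype=np.float32)
    if tag in inventory:
        vec[inventory.index(tag)] = 1.0
    return vec

def build_word_vector(word, pos, ner, word2vec, dim=300, rng=np.random):
    """Concatenate a word2vec embedding with one-hot POS and NER features."""
    if word in word2vec:
        emb = np.asarray(word2vec[word], dtype=np.float32)
    else:
        # OOV words: each dimension sampled uniformly from [-0.25, 0.25]
        emb = rng.uniform(-0.25, 0.25, size=dim).astype(np.float32)
    return np.concatenate([emb, one_hot(pos, POS_TAGS), one_hot(ner, NER_TAGS)])

# Toy "word2vec" dictionary with a single 300-d vector
toy_w2v = {"stock": np.zeros(300, dtype=np.float32)}
vec = build_word_vector("stock", "NN", "O", toy_w2v)
print(vec.shape)  # (308,) with the toy inventories above: 300 + 4 + 4
```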
{
"text": "Building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph. To process long paragraphwide contexts, we take a bottom-up two-level abstraction approach and progressively generate a compositional representation of each word first (low level) and then generate a compositional representation of each discourse unit (high level), with a max-pooling operation in between. At both word-level and DU-level, we choose Bi-LSTM as our basic component for generating compositional representations, mainly considering its capability to capture long-distance dependencies between words (discourse units) and to incorporate influences of context words (discourse units) in each side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Given a variable-length word sequence $X = (x_1, x_2, ..., x_L)$ in a paragraph, the word-level Bi-LSTM processes the input sequence using two separate LSTMs, one reading the word sequence from left to right and the other in the reverse direction. Therefore, at each word position $t$, we obtain two hidden states $\\overrightarrow{h}_t$ and $\\overleftarrow{h}_t$, which we concatenate to get the word representation $h_t = [\\overrightarrow{h}_t, \\overleftarrow{h}_t]$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Then we apply max-pooling over the sequence of word representations for words in a discourse unit in order to get the discourse unit embedding: the $j$-th dimension of the embedding is $\\max_{i \\in DU} h_i[j]$ (1), where $1 \\le j \\le$ hidden node size (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Next, the DU-level Bi-LSTM will process the sequence of discourse unit embeddings in a paragraph and generate two hidden states $\\overrightarrow{hDU}_t$ and $\\overleftarrow{hDU}_t$ at each discourse unit position. We concatenate them to get the discourse unit representation $hDU_t = [\\overrightarrow{hDU}_t, \\overleftarrow{hDU}_t]$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "The Softmax Prediction Layer: Finally, we concatenate two adjacent discourse unit representations hDU t\u22121 and hDU t and predict the discourse relation between them using a softmax function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t\u22121 = sof tmax(W y * [hDU t\u22121 , hDU t ] + b y )",
"eq_num": "(3)"
}
],
"section": "The Neural Network Model for",
"sec_num": "3"
},
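A minimal PyTorch sketch of the basic architecture just described (word-level Bi-LSTM, per-DU max-pooling, DU-level Bi-LSTM, and the softmax of Equation 3). This assumes one paragraph per batch and a list of (start, end) word spans for the discourse units; the class and variable names are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ParagraphRelationModel(nn.Module):
    """Sketch: word-level Bi-LSTM -> per-DU max-pooling -> DU-level Bi-LSTM
    -> linear/softmax over concatenated adjacent DU representations."""
    def __init__(self, input_dim=343, hidden=300, num_classes=4):
        super().__init__()
        self.word_lstm = nn.LSTM(input_dim, hidden, batch_first=True, bidirectional=True)
        self.du_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(4 * hidden, num_classes)

    def forward(self, word_vecs, du_spans):
        # word_vecs: (1, num_words, input_dim); du_spans: list of (start, end) per DU
        h_words, _ = self.word_lstm(word_vecs)                     # (1, num_words, 2*hidden)
        du_embs = [h_words[0, s:e].max(dim=0).values for s, e in du_spans]
        du_embs = torch.stack(du_embs).unsqueeze(0)                # (1, num_DUs, 2*hidden)
        h_dus, _ = self.du_lstm(du_embs)                           # (1, num_DUs, 2*hidden)
        pairs = torch.cat([h_dus[0, :-1], h_dus[0, 1:]], dim=-1)   # adjacent DU pairs
        return self.classifier(pairs)                              # one logit row per pair

model = ParagraphRelationModel()
words = torch.randn(1, 50, 343)                        # a toy 50-word paragraph
logits = model(words, [(0, 20), (20, 35), (35, 50)])   # three DUs -> two relations
print(logits.shape)                                    # torch.Size([2, 4])
```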
{
"text": "3.2 Untie Parameters in the Softmax Prediction Layer (Implicit vs. Explicit)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Previous work Lin et al., 2014; has re- vealed that recognizing explicit vs. implicit discourse relations requires different strategies. Note that in the PDTB dataset, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective exists between two discourse units. Therefore, explicit discourse relation detection can be simplified as a discourse connective phrase disambiguation problem Lin et al., 2014) . On the contrary, predicting an implicit discourse relation should rely on understanding the overall contents of its two discourse units (Lin et al., 2014; .",
"cite_spans": [
{
"start": 14,
"end": 31,
"text": "Lin et al., 2014;",
"ref_id": "BIBREF16"
},
{
"start": 439,
"end": 456,
"text": "Lin et al., 2014)",
"ref_id": "BIBREF16"
},
{
"start": 595,
"end": 613,
"text": "(Lin et al., 2014;",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Considering the different natures of explicit vs. implicit discourse relation prediction, we decide to untie parameters at the final discourse relation prediction layer and train two softmax classifiers, as illustrated in Figure 2 . The two classifiers have different sets of parameters, with one classifier for only implicit discourse relations and the other for only explicit discourse relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 230,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "yt\u22121 = sof tmax(Wexp[hDUt\u22121, hDUt] + bexp), exp sof tmax(Wimp[hDUt\u22121, hDUt] + bimp), imp",
"eq_num": "(4)"
}
],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "The loss function used for the neural network training considers loss induced by both implicit relation prediction and explicit relation prediction:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Loss = Loss imp + \u03b1 * Loss exp (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "The \u03b1, in the full system, is set to be 1, which means that minimizing the loss in predicting either type of discourse relations is equally important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
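The untied prediction layer of Equation (4) and the joint loss of Equation (5) can be sketched as follows; this is an illustrative reconstruction (the two linear heads, the boolean masking scheme, and the use of cross-entropy are assumptions consistent with the description, not the released code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Untied prediction layer: one head for implicit relations, one for explicit
# relations, trained jointly with Loss = Loss_imp + alpha * Loss_exp.
hidden = 300
num_classes = 4
implicit_head = nn.Linear(4 * hidden, num_classes)
explicit_head = nn.Linear(4 * hidden, num_classes)

def relation_loss(pair_reprs, labels, is_explicit, alpha=1.0):
    """pair_reprs: (num_pairs, 4*hidden); labels: (num_pairs,);
    is_explicit: boolean mask marking pairs joined by an explicit connective."""
    loss_imp = torch.tensor(0.0)
    loss_exp = torch.tensor(0.0)
    if (~is_explicit).any():
        loss_imp = F.cross_entropy(implicit_head(pair_reprs[~is_explicit]),
                                   labels[~is_explicit])
    if is_explicit.any():
        loss_exp = F.cross_entropy(explicit_head(pair_reprs[is_explicit]),
                                   labels[is_explicit])
    return loss_imp + alpha * loss_exp

pairs = torch.randn(3, 4 * hidden)
labels = torch.tensor([1, 0, 3])
mask = torch.tensor([False, True, False])   # the middle pair has an explicit connective
print(relation_loss(pairs, labels, mask).item())
```

Setting alpha to 0 reproduces the system variant discussed above, in which explicit relations are not predicted at all.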
{
"text": "In the evaluation, we will also evaluate a system variant, where we will set \u03b1 = 0, which means that the neural network will not attempt to predict explicit discourse relations and implicit discourse relation prediction will not be influenced by predicting neighboring explicit discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Neural Network Model for",
"sec_num": "3"
},
{
"text": "Data analysis and many linguistic studies (Pitler et al., 2008; Asr and Demberg, 2012; Lascarides and Asher, 1993; Hobbs, 1985) have repeatedly shown that discourse relations feature continuity and patterns (e.g., a temporal relation is likely to be followed by another temporal relation). Especially, Pitler et al. (2008) firstly reported that patterns exist between implicit discourse relations and their neighboring explicit discourse relations. Motivated by these observations, we aim to improve implicit discourse relation detection by making use of easily identifiable explicit discourse relations and taking into account global patterns of discourse relation distributions. Specifically, we add an extra CRF layer at the top of the softmax prediction layer (shown in figure 3) to fine-tune predicted discourse relations by considering their inter-dependencies.",
"cite_spans": [
{
"start": 42,
"end": 63,
"text": "(Pitler et al., 2008;",
"ref_id": "BIBREF23"
},
{
"start": 64,
"end": 86,
"text": "Asr and Demberg, 2012;",
"ref_id": "BIBREF0"
},
{
"start": 87,
"end": 114,
"text": "Lascarides and Asher, 1993;",
"ref_id": "BIBREF12"
},
{
"start": 115,
"end": 127,
"text": "Hobbs, 1985)",
"ref_id": "BIBREF5"
},
{
"start": 302,
"end": 322,
"text": "Pitler et al. (2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 774,
"end": 783,
"text": "figure 3)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Fine-tune Discourse Relation Predictions Using a CRF Layer",
"sec_num": "3.3"
},
{
"text": "The Conditional Random Fields (Lafferty et al., 2001 ) (CRF) layer updates a state transition matrix, which can effectively adjust the current la- bel depending on proceeding and following labels. Both training and decoding of the CRF layer can be solved efficiently by using the Viterbi algorithm. With the CRF layer, the model jointly assigns a sequence of discourse relations between each two adjacent discourse units in a paragraph, including both implicit and explicit relations, by considering relevant discourse unit representations as well as global discourse relation patterns.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tune Discourse Relation Predictions Using a CRF Layer",
"sec_num": "3.3"
},
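The decoding step of such a CRF layer can be illustrated with a standard Viterbi pass over per-pair relation scores and a label transition matrix; this is a generic sketch under that assumption, it omits the CRF training objective and is not the authors' implementation:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """emissions: (T, K) scores for T adjacent DU pairs and K relation labels;
    transitions: (K, K) score of moving from label i to label j.
    Returns the highest-scoring label sequence."""
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j] = best score ending in label i at t-1, then label j at t
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# Toy example: 3 relation slots, 4 labels, transitions mildly favoring continuity
emissions = np.random.randn(3, 4)
transitions = np.eye(4) * 0.5
print(viterbi_decode(emissions, transitions))
```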
{
"text": "The Penn Discourse Treebank (PDTB): We experimented with PDTB v2.0 (Prasad et al., 2008b) which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles. In this work, we focus on the top-level 4 discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp). We followed the same PDTB section partition as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set. Table 1 presents the data distributions we collected from PDTB.",
"cite_spans": [
{
"start": 67,
"end": 89,
"text": "(Prasad et al., 2008b)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 568,
"end": 575,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset and Preprocessing",
"sec_num": "4.1"
},
{
"text": "Preprocessing: The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units. To recover the paragraph context for a discourse relation, we match contents of its two annotated discourse units with all paragraphs in corresponding raw WSJ article. When all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two ad- jacent discourse units 5 . Following this method, we obtained 14,309 paragraphs in total, each contains 3.2 discourse units on average. Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph.",
"cite_spans": [],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset and Preprocessing",
"sec_num": "4.1"
},
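The outcome of this preprocessing can be pictured with a small, hypothetical data structure: each paragraph becomes a sequence of discourse units with one implicit or explicit relation between every pair of adjacent units (the class and field names below are illustrative, not from the paper):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Relation:
    sense: str        # e.g. "Comparison", "Contingency", "Expansion", "Temporal"
    explicit: bool    # True if a discourse connective links the two units

@dataclass
class Paragraph:
    discourse_units: List[str]   # n text spans
    relations: List[Relation]    # n - 1 relations between adjacent units

example = Paragraph(
    discourse_units=["The manufacturer went public at $15.75 a share...",
                     "Mr. Sim's goal then was a $29 per-share price by 1992.",
                     "Strong earnings growth helped achieve that price far ahead of schedule..."],
    relations=[Relation("Expansion", explicit=True),
               Relation("Expansion", explicit=False)],
)
assert len(example.relations) == len(example.discourse_units) - 1
```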
{
"text": "We tuned the parameters based on the best performance on the development set. We fixed the weights of word embeddings during training. All the LSTMs in our neural network use the hidden state size of 300. To avoid overfitting, we applied dropout (Hinton et al., 2012) with dropout ratio of 0.5 to both input and output of LSTM layers. To prevent the exploding gradient problem in training LSTMs, we adopt gradient clipping with gradient L2-norm threshold of 5.0. These parameters remain the same for all our proposed models as well as our own baseline models. We chose the standard cross-entropy loss function for training our neural network model and adopted Adam (Kingma and Ba, 2014) optimizer with the initial learning rate of 5e-4 and a minibatch size of 128 6 . If one instance is annotated with two labels (4% of all instances), we use both of them in loss calculation and regard the prediction as correct if model predicts one of the annotated labels. All the proposed models were imple- 5 In several hundred discourse relations, one discourse unit is complex and can be further separated into two elementary discourse units, which can be illustrated as [DU1-DU2]-DU3. We simplify such cases to be a relation between DU2 and DU3.",
"cite_spans": [
{
"start": 665,
"end": 686,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings and Model Training",
"sec_num": "4.2"
},
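The optimization settings described above (Adam with learning rate 5e-4, dropout ratio 0.5, gradient clipping at an L2-norm of 5.0) correspond to standard PyTorch calls; the following training-step sketch uses a stand-in model and is not the authors' training script:

```python
import torch
import torch.nn as nn

# Stand-in for the full paragraph-level model, just to make the step runnable.
model = nn.Sequential(nn.Dropout(0.5), nn.Linear(1200, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch_inputs, batch_labels):
    optimizer.zero_grad()
    loss = loss_fn(model(batch_inputs), batch_labels)
    loss.backward()
    # clip exploding gradients to an L2-norm of 5.0 before the update
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(8, 1200), torch.randint(0, 4, (8,))))
```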
{
"text": "To alleviate the influence of randomness in neural network model training and obtain stable experimental results, we ran each of the proposed models and our own baseline models ten times and report the average performance of each model instead of the best performance as reported in many previous works.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings and Model Training",
"sec_num": "4.2"
},
{
"text": "We compare the performance of our neural network model with several recent discourse relation recognition systems that only consider two relevant discourse units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 : improves implicit discourse relation prediction by creating more training instances from the Gigaword corpus utilizing explicitly mentioned discourse connective phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 (Chen et al., 2016) : a gated relevance network (GRN) model with tensors to capture semantic interactions between words from two discourse units.",
"cite_spans": [
{
"start": 2,
"end": 21,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 : a convolutional neural network model that leverages relations between different styles of discourse relations annotations (PDTB and RST (Carlson et al., 2003) ) in a multi-task joint learning framework.",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "(Carlson et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 : a multi-level attentionover-attention model to dynamically exploit features from two discourse units for recognizing an implicit discourse relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 (Qin et al., 2017) : a novel pipelined adversarial framework to enable an adaptive imitation competition between the implicit network and a rival feature discriminator with access to connectives.",
"cite_spans": [
{
"start": 2,
"end": 20,
"text": "(Qin et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 (Lei et al., 2017) : a Simple Word Interaction Model (SWIM) with tensors that captures both linear and quadratic relations between words from two discourse units.",
"cite_spans": [
{
"start": 2,
"end": 20,
"text": "(Lei et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "\u2022 (Lan et al., 2017) : an attention-based LSTM neural network that leverages explicit discourse relations in PDTB and unannotated external data in a multi-task joint learning framework.",
"cite_spans": [
{
"start": 2,
"end": 20,
"text": "(Lan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models and Systems",
"sec_num": "4.3"
},
{
"text": "Macro Acc Comp Cont Exp Temp Macro Acc 40.50 57.10 ------ 44.98 57.27 ------ 46.29 57.57 ------ (Lei et al., 2017) 46.46 ------- (Lan et al., 2017) 47 Table 3 : Multi-class Classification Results on PDTB. We report accuracy (Acc) and macro-average F1-scores for both explicit and implicit discourse relation predictions. We also report class-wise F1 scores.",
"cite_spans": [
{
"start": 96,
"end": 114,
"text": "(Lei et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 129,
"end": 147,
"text": "(Lan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implicit Explicit Model",
"sec_num": null
},
{
"text": "On the PDTB corpus, both binary classification and multi-way classification settings are commonly used to evaluate the implicit discourse relation recognition performance. We noticed that all the recent works report class-wise implicit relation prediction performance in the binary classification setting, while none of them report detailed performance in the multi-way classification setting. In the binary classification setting, separate \"oneversus-all\" binary classifiers were trained, and each classifier is to identify one class of discourse relations. Although separate classifiers are generally more flexible in combating with imbalanced distributions of discourse relation classes and obtain higher class-wise prediction performance, one pair of discourse units may be tagged with all four discourse relations without proper conflict resolution. Therefore, the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting. Since none of the recent previous work reported class-wise implicit relation classification performance in the multi-way classification setting, for better comparisons, we re-implemented the neural tensor network architecture (so-called SWIM in (Lei et al., 2017) ) which is essentially a Bi-LSTM model with tensors and report its detailed evaluation result in the multi-way classification setting. As another baseline, we report the per-formance of a Bi-LSTM model without tensors as well. Both baseline models take two relevant discourse units as the only input.",
"cite_spans": [
{
"start": 1333,
"end": 1351,
"text": "(Lei et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings",
"sec_num": "4.4"
},
{
"text": "For additional comparisons, We also report the performance of our proposed models in the binary classification setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Settings",
"sec_num": "4.4"
},
{
"text": "Multi-way Classification: The first section of table 3 shows macro average F1-scores and accuracies of previous works. The second section of table 3 shows the multi-class classification results of our implemented baseline systems. Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance. However, the performance on the three small classes (Comp, Cont and Temp) remains low.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.5"
},
{
"text": "The third section of table 3 shows the multi-class classification results of our proposed paragraph-level neural network models that capture inter-dependencies among discourse units. The first row shows the performance of a variant of our basic model, where we only identify implicit relations and ignore identifying explicit relations by setting the \u03b1 in equation (5) to be 0. Compared with the baseline Bi-LSTM model, the only difference is that this model considers paragraph-wide contexts and model inter-dependencies among discourse units when building representation for individual DU. We can see that this model has greatly improved implicit relation classification perfor-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.5"
},
{
"text": "Exp Temp (Chen et al., 2016) 40.17 54.76 -31.32 37.91 55.88 69.97 37.17 36.70 54.48 70.43 38.84 (Qin et al., 2017) 40.87 54.56 72.38 36.20 (Lei et al., 2017) 40.47 55.36 69.50 35.34 (Lan et al., 2017) 40 mance across all the four relations and improved the macro-average F1-score by over 7 percents.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "(Chen et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 96,
"end": 114,
"text": "(Qin et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 139,
"end": 157,
"text": "(Lei et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 182,
"end": 200,
"text": "(Lan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comp Cont",
"sec_num": null
},
{
"text": "In addition, compared with the baseline Bi-LSTM model with tensor, this model improved implicit relation classification performance across the three small classes, with clear performance gains of around 2 and 8 percents on contingency and temporal relations respectively, and overall improved the macro-average F1-score by 2.2 percents. The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph. Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations. Especially on the contingency relation, the classification performance was improved by another 1.42 percents. Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in (Pitler et al., 2008) ).",
"cite_spans": [
{
"start": 956,
"end": 977,
"text": "(Pitler et al., 2008)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comp Cont",
"sec_num": null
},
{
"text": "After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved. The CRF layer further improved implicit discourse relation recognition performance on the three small classes. In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., (Lei et al., 2017) ) by more than 2 percents and outperforms the best previous system (Lan et al., 2017) by 1 percent.",
"cite_spans": [
{
"start": 569,
"end": 587,
"text": "(Lei et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 655,
"end": 673,
"text": "(Lan et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comp Cont",
"sec_num": null
},
{
"text": "Binary Classification: From table 4, we can see that compared against the best previous systems, our paragraph-level model with untied parameters in the prediction layer achieves F1-score improvements of 6 points on Comparison and 7 points on Temporal, which demonstrates that paragraphwide contexts are important in detecting minority discourse relations. Note that the CRF layer of the model is not suitable for binary classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comp Cont",
"sec_num": null
},
{
"text": "As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance. Then we also created ensemble models by applying majority voting to combine results of ten runs. From table 5, each ensemble model obtains performance improvements compared with single model. The full model achieves performance boosting of (51.84 -48.82 = 3.02) and (94.17 -93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively. Furthermore, the ensemble model achieves the best performance for predicting both implicit Figure 4 : Impact of Paragraph Length. We plot the macro-average F1-score of implicit discourse relation classification on instances with different paragraph length. and explicit discourse relations simultaneously.",
"cite_spans": [],
"ref_spans": [
{
"start": 570,
"end": 578,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.6"
},
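Majority voting over the ten runs can be sketched as follows (an illustrative implementation; the tie-breaking behavior is an assumption, not specified in the paper):

```python
from collections import Counter

def majority_vote(predictions_per_run):
    """predictions_per_run: list of label sequences, one per trained model run.
    Returns the per-position majority label (ties broken by first-seen label)."""
    ensembled = []
    for position_labels in zip(*predictions_per_run):
        ensembled.append(Counter(position_labels).most_common(1)[0][0])
    return ensembled

# Toy example: three runs predicting relations for the same four DU pairs
runs = [["Exp", "Comp", "Cont", "Temp"],
        ["Exp", "Comp", "Exp", "Temp"],
        ["Cont", "Comp", "Cont", "Temp"]]
print(majority_vote(runs))   # ['Exp', 'Comp', 'Cont', 'Temp']
```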
{
"text": "To understand the influence of paragraph lengths to our paragraph-level models, we divide paragraphs in the PDTB test set into several subsets based on the number of DUs in a paragraph, and then evaluate our proposed models on each subset separately. From Figure 4 , we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets. As expected, the paragraphlevel models achieve clear performance gains on long paragraphs (with more than 5 DUs) by extensively modeling mutual influences of DUs in a paragraph. But somewhat surprisingly, the paragraph-level models achieve noticeable performance gains on short paragraphs (with 2 or 3 DUs) as well. We hypothesize that by learning more appropriate discourse-aware DU representations in long paragraphs, our paragraph-level models reduce bias of using DU representations in predicting discourse relations, which benefits discourse relation prediction in short paragraphs as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of Paragraph Length",
"sec_num": "4.7"
},
{
"text": "For the example (1), the baseline neural tensor model predicted both implicit relations wrongly (\"Implicit-Contingency\" between DU2 and DU3; \"Implicit-Expansion\" between DU3 and DU4), while our paragraph-level model predicted all the four discourse relations correctly, which indicates that paragraph-wide contexts play a key role in implicit discourse relation prediction. Our basic paragraph-level model wrongly predicted the implicit discourse relation between DU1 and DU2 to be \"Implicit-Comparison\", without being able to effectively use the succeeding \"Explicit-Temporal\" relation. On the contrary, the full model corrected this mistake by modeling discourse relation patterns with the CRF layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example Analysis",
"sec_num": "4.8"
},
{
"text": "We have presented a paragraph-level neural network model that takes a sequence of discourse units as input, models inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predicts a sequence of discourse relations in a paragraph. By building wider-context informed discourse unit representations and capturing the overall discourse structure, the paragraph-level neural network model outperforms the best previous models for implicit discourse relation recognition on the PDTB dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "In PDTB, most of discourse relations were annotated between two adjacent sentences or two adjacent clauses. For exceptional cases, we applied heuristics to convert them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Downloaded from https://docs.google.com/ uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM3 Our feature-rich word embeddings are of dimension 343, including 300 dimensions for word2vec embeddings + 36 dimensions for Part-Of-Speech (POS) tags + 7 dimensions for named entity tags. We used the Stanford CoreNLP to generate POS tags and named entity tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In PDTB, the sense label of discourse relation was annotated hierarchically with three levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://pytorch.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We acknowledge the support of NVIDIA Corporation for their donation of one GeForce GTX TI-TAN X GPU used for this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Implicitness of discourse relations",
"authors": [
{
"first": "Torabi",
"middle": [],
"last": "Fatemeh",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Asr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Coling",
"volume": "",
"issue": "",
"pages": "2669--2684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr and Vera Demberg. 2012. Implic- itness of discourse relations. In Coling. pages 2669- 2684.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Building a discourse-tagged corpus in the framework of rhetorical structure theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2003,
"venue": "Current and new directions in discourse and dialogue",
"volume": "",
"issue": "",
"pages": "85--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged cor- pus in the framework of rhetorical structure theory. In Current and new directions in discourse and dia- logue, Springer, pages 85-112.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Implicit discourse relation detection via a deep architecture with gated relevance network",
"authors": [
{
"first": "Jifan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Implicit discourse relation detection via a deep architecture with gated rele- vance network. In ACL 2016.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "681--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2017, Copen- hagen, Denmark, September 9-11, 2017. pages 681-691. http://aclanthology.info/ papers/D17-1071/d17-1071.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving neural networks by preventing coadaptation of feature detectors",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Geoffrey E Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ruslan R",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1207.0580"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co- adaptation of feature detectors. arXiv preprint arXiv:1207.0580 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the coherence and structure of discourse",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jerry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R Hobbs. 1985. On the coherence and structure of discourse. CSLI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "One vector is not enough: Entity-augmented distributed semantics for discourse relations",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "TACL",
"volume": "3",
"issue": "",
"pages": "329--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. TACL 3:329-344. https: //tacl2013.cs.columbia.edu/ojs/ index.php/tacl/article/view/536.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A latent variable recurrent neural network for discourse relation language models",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "332--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji, Gholamreza Haffari, and Jacob Eisen- stein. 2016. A latent variable recurrent neural net- work for discourse relation language models. In Proceedings of NAACL-HLT. pages 332-342.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning",
"volume": "951",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. In Proceedings of the 18th Interna- tional Conference on Machine Learning. volume 951, pages 282-289.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-task attentionbased neural networks for implicit discourse relationship representation and identification",
"authors": [
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jianxiang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuanbin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zheng-Yu",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1310--1319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Man Lan, Jianxiang Wang, Yuanbin Wu, Zheng-Yu Niu, and Haifeng Wang. 2017. Multi-task attention- based neural networks for implicit discourse rela- tionship representation and identification. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing. pages 1310- 1319.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Temporal interpretation, discourse relations and commonsense entailment",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 1993,
"venue": "Linguistics and philosophy",
"volume": "16",
"issue": "5",
"pages": "437--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Lascarides and Nicholas Asher. 1993. Temporal interpretation, discourse relations and commonsense entailment. Linguistics and philosophy 16(5):437- 493.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Swim: A simple word interaction model for implicit discourse relation recognition",
"authors": [
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Xuancong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Meichun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ilija",
"middle": [],
"last": "Ilievski",
"suffix": ""
},
{
"first": "Xiangnan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17",
"volume": "",
"issue": "",
"pages": "4026--4032",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/562"
]
},
"num": null,
"urls": [],
"raw_text": "Wenqiang Lei, Xuancong Wang, Meichun Liu, Ilija Ilievski, Xiangnan He, and Min-Yen Kan. 2017. Swim: A simple word interaction model for im- plicit discourse relation recognition. In Proceed- ings of the Twenty-Sixth International Joint Con- ference on Artificial Intelligence, IJCAI-17. pages 4026-4032. https://doi.org/10.24963/ ijcai.2017/562.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discourse parsing with attention-based hierarchical neural networks",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tianshi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "362--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural net- works. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing. pages 362-371.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Recognizing implicit discourse relations in the penn discourse treebank",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1 -Vol- ume 1. Association for Computational Linguistics, Stroudsburg, PA, USA, EMNLP '09, pages 343- 351. http://dl.acm.org/citation.cfm? id=1699510.1699555.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A pdtb-styled end-to-end discourse parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "2",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering 20(2):151-184.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning contextually informed representations for linear-time discourse parsing",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2017. Learning contex- tually informed representations for linear-time dis- course parsing. In EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1224--1233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Sujian Li. 2016. Recognizing implicit discourse relations via repeated reading: Neural net- works with multi-level attention. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016. pages 1224- 1233. http://aclweb.org/anthology/D/ D16/D16-1130.pdf.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Implicit discourse relation classification via multi-task neural networks",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhifang",
"middle": [],
"last": "Sui",
"suffix": ""
}
],
"year": 2016,
"venue": "In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2750--2756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu, Sujian Li, Xiaodong Zhang, and Zhi- fang Sui. 2016. Implicit discourse relation classification via multi-task neural networks. In Proceedings of the Thirtieth AAAI Confer- ence on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA.. pages 2750-2756. http://www.aaai.org/ocs/index.php/ AAAI/AAAI16/paper/view/11831.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Abstractive text summarization using sequence-tosequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to- sequence rnns and beyond. CoNLL 2016 page 280.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse re- lations in text. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2. Association for Computational Linguis- tics, Stroudsburg, PA, USA, ACL '09, pages 683- 691. http://dl.acm.org/citation.cfm? id=1690219.1690241.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Using syntax to disambiguate explicit discourse connectives in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP 2009 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "13--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the ACL-IJCNLP 2009 Confer- ence Short Papers. Association for Computational Linguistics, pages 13-16.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Easily identifiable discourse relations",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Mridhula",
"middle": [],
"last": "Raghupathy",
"suffix": ""
},
{
"first": "Hena",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In In Proceedings of the 22nd International Conference on Computa- tional Linguistics (COLING 2008) Short Papers.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Penn Discourse Treebank 2.0",
"authors": [
{
"first": "R",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Prasad, N. Dinesh, Lee A., E. Miltsakaki, L. Robaldo, Joshi A., and B. Webber. 2008a. The Penn Discourse Treebank 2.0. In lrec2008.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008b. The penn discourse treebank 2.0. In In Proceedings of LREC.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A stacking gated neural architecture for implicit discourse relation classification",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2263--2270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. A stacking gated neural architecture for implicit dis- course relation classification. In EMNLP. pages 2263-2270.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Adversarial connectiveexploiting networks for implicit discourse relation classification",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "17--1093",
"other_ids": {
"DOI": [
"10.18653/v1"
]
},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric P. Xing. 2017. Adversarial connective- exploiting networks for implicit discourse relation classification. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 -August 4, Volume 1: Long Papers. pages 1006-1017. https://doi.org/10.18653/ v1/P17-1093.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving the inference of implicit discourse relations via classifying explicit discourse connectives",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2015,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "799--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford and Nianwen Xue. 2015. Improv- ing the inference of implicit discourse relations via classifying explicit discourse connectives. In HLT- NAACL. pages 799-808.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Neural network models for implicit discourse relation classification in english and chinese without surface features",
"authors": [
{
"first": "Attapol",
"middle": [
"T"
],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.01990"
]
},
"num": null,
"urls": [],
"raw_text": "Attapol T Rutherford, Vera Demberg, and Nianwen Xue. 2016. Neural network models for implicit discourse relation classification in english and chi- nese without surface features. arXiv preprint arXiv:1606.01990 .",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Robust non-explicit neural discourse parser in english and chinese",
"authors": [
{
"first": "Attapol",
"middle": [
"T"
],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol T Rutherford and Nianwen Xue. 2016. Robust non-explicit neural discourse parser in english and chinese. ACL 2016 page 55.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "3776--3784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI. pages 3776-3784.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems. pages 3104-3112.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The conll-2015 shared task on shallow discourse parsing. CoNLL",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Attapol",
"middle": [
"T"
],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi PrasadO Christopher Bryant, and Attapol T Ruther- ford. 2015. The conll-2015 shared task on shallow discourse parsing. CoNLL 2015 page 1.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Shallow convolutional neural network for implicit discourse relation recognition",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP. The Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2230--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolu- tional neural network for implicit discourse relation recognition. In EMNLP. The Association for Com- putational Linguistics, pages 2230-2235.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Spherical paragraph model",
"authors": [
{
"first": "Ruqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.05635"
]
},
"num": null,
"urls": [],
"raw_text": "Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2017a. Spherical paragraph model. arXiv preprint arXiv:1707.05635 .",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Deconvolutional paragraph representation learning",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4170--4180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017b. De- convolutional paragraph representation learning. In Advances in Neural Information Processing Sys- tems. pages 4170-4180.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "207--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers). volume 2, pages 207-212.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The Basic Model Architecture for Paragraph-level Discourse Relations Sequence Prediction. M P DU [j] = DU end max i=DU start"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Untie Parameters in the Prediction Layer"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Fine-tune Discourse Relations with a CRF layer."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "For another example: (2): [Marshall came clanking in like Marley's ghost dragging those chains of brigades and air wings and links with Arab despots.] DU 1 (Implicit-Temporal) [He wouldn't leave] DU 2 until (Explicit-Temporal) [Mr. Cheney promised to do whatever the Pentagon systems analysts told him.] DU 3"
},
"TABREF1": {
"text": "Distributions of Four Top-level Discourse Relations in PDTB.",
"html": null,
"num": null,
"content": "<table><tr><td># of DUs</td><td>2</td><td>3</td><td>4</td><td>5</td><td>&gt;5</td></tr><tr><td>ratio</td><td colspan=\"5\">44% 25% 15% 7.3% 8.7%</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "Distributions of Paragraphs.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF5": {
"text": "Binary Classification Results on PDTB. We report F1-scores for implicit discourse relations.",
"html": null,
"num": null,
"content": "<table><tr><td/><td>Implicit</td><td>Explicit</td></tr><tr><td>Model</td><td colspan=\"2\">Macro Acc Macro Acc</td></tr><tr><td colspan=\"3\">Basic System (\u03b1 = 1) 49.92 59.08 93.05 93.83</td></tr><tr><td>+ Untie Parameters + the CRF Layer</td><td colspan=\"2\">50.47 59.85 93.95 94.74 51.84 59.75 94.17 94.82</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"text": "Multi-class Classification Results of Ensemble Models on PDTB.",
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}