|
{ |
|
"paper_id": "N16-1037", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:36:01.531710Z" |
|
}, |
|
"title": "A Latent Variable Recurrent Neural Network for Discourse Relation Language Models", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representations. The discourse relations are represented with a latent variable, which can be predicted or marginalized, depending on the task. The resulting model can therefore employ a training objective that includes not only discourse relation classification, but also word prediction. As a result, it outperforms state-ofthe-art alternatives for two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus. Furthermore, by marginalizing over latent discourse relations at test time, we obtain a discourse informed language model, which improves over a strong LSTM baseline.", |
|
"pdf_parse": { |
|
"paper_id": "N16-1037", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representations. The discourse relations are represented with a latent variable, which can be predicted or marginalized, depending on the task. The resulting model can therefore employ a training objective that includes not only discourse relation classification, but also word prediction. As a result, it outperforms state-ofthe-art alternatives for two tasks: implicit discourse relation classification in the Penn Discourse Treebank, and dialog act classification in the Switchboard corpus. Furthermore, by marginalizing over latent discourse relations at test time, we obtain a discourse informed language model, which improves over a strong LSTM baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language processing (NLP) has recently experienced a neural network \"tsunami\" (Manning, 2016) . A key advantage of these neural architectures is that they employ discriminatively-trained distributed representations, which can capture the meaning of linguistic phenomena ranging from individual words (Turian et al., 2010) to longer-range linguistic contexts at the sentence level (Socher et al., 2013) and beyond (Le and Mikolov, 2014) . Because they are discriminatively trained, these meth-ods can learn representations that yield very accurate predictive models (e.g., .", |
|
"cite_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 101, |
|
"text": "(Manning, 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 329, |
|
"text": "(Turian et al., 2010)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 409, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 443, |
|
"text": "(Le and Mikolov, 2014)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, in comparison with the probabilistic graphical models that were previously the dominant machine learning approach for NLP, neural architectures lack flexibility. By treating linguistic annotations as random variables, probabilistic graphical models can marginalize over annotations that are unavailable at test or training time, elegantly modeling multiple linguistic phenomena in a joint framework (Finkel et al., 2006) . But because these graphical models represent uncertainty for every element in the model, adding too many layers of latent variables makes them difficult to train.", |
|
"cite_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 429, |
|
"text": "(Finkel et al., 2006)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present a hybrid architecture that combines a recurrent neural network language model with a latent variable model over shallow discourse structure. In this way, the model learns a discriminatively-trained distributed representation of the local contextual features that drive word choice at the intra-sentence level, using techniques that are now state-of-the-art in language modeling (Mikolov et al., 2010) . However, the model treats shallow discourse structure -specifically, the relationships between pairs of adjacent sentencesas a latent variable. As a result, the model can act as both a discourse relation classifier and a language model. Specifically:", |
|
"cite_spans": [ |
|
{ |
|
"start": 404, |
|
"end": 426, |
|
"text": "(Mikolov et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 If trained to maximize the conditional likelihood of the discourse relations, it outperforms state-of-the-art methods for both implicit discourse relation classification in the Penn Discourse Treebank and dialog act classification in Switch-board (Kalchbrenner and Blunsom, 2013) . The model learns from both the discourse annotations as well as the language modeling objective, unlike previous recursive neural architectures that learn only from annotated discourse relations .", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 281, |
|
"text": "(Kalchbrenner and Blunsom, 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 If the model is trained to maximize the joint likelihood of the discourse relations and the text, it is possible to marginalize over discourse relations at test time, outperforming language models that do not account for discourse structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In contrast to recent work on continuous latent variables in recurrent neural networks (Chung et al., 2015) , which require complex variational autoencoders to represent uncertainty over the latent variables, our model is simple to implement and train, requiring only minimal modifications to existing recurrent neural network architectures that are implemented in commonly-used toolkits such as Theano, Torch, and CNN.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 107, |
|
"text": "(Chung et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We focus on a class of shallow discourse relations, which hold between pairs of adjacent sentences (or utterances). These relations describe how the adjacent sentences are related: for example, they may be in CONTRAST, or the latter sentence may offer an answer to a question posed by the previous sentence. Shallow relations do not capture the full range of discourse phenomena (Webber et al., 2012) , but they account for two well-known problems: implicit discourse relation classification in the Penn Discourse Treebank, which was the 2015 CoNLL shared task ; and dialog act classification, which characterizes the structure of interpersonal communication in the Switchboard corpus (Stolcke et al., 2000) , and is a key component of contemporary dialog systems (Williams and Young, 2007) . Our model outperforms state-of-the-art alternatives for implicit discourse relation classification in the Penn Discourse Treebank, and for dialog act classification in the Switchboard corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 400, |
|
"text": "(Webber et al., 2012)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 685, |
|
"end": 707, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 790, |
|
"text": "(Williams and Young, 2007)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our model scaffolds on recurrent neural network (RNN) language models (Mikolov et al., 2010) , and recent variants that exploit multiple levels of linguistic detail Lin et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 92, |
|
"text": "(Mikolov et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 182, |
|
"text": "Lin et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "RNN Language Models Let us denote token n in a sentence t by y t,n \u2208 {1 . . . V }, and write y t = {y t,n } n\u2208{1...Nt} to indicate the sequence of words in sentence t. In an RNN language model, the probability of the sentence is decomposed as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y t ) = Nt n p(y t,n | y t,<n ),", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where the probability of each word y t,n is conditioned on the entire preceding sequence of words y t,<n through the summary vector h t,n\u22121 . This vector is computed recurrently from h t,n\u22122 and from the embedding of the current word, X y t,n\u22121 , where X \u2208 R K\u00d7V and K is the dimensionality of the word embeddings. The language model can then be summarized as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h t,n =f(X yt,n , h t,n\u22121 ) (2) p(y t,n | y t,<n ) =softmax (W o h t,n\u22121 + b o ) ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where the matrix W o \u2208 R V \u00d7K defines the output embeddings, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "b o \u2208 R V is an offset. The function f(\u2022)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "is a deterministic non-linear transition function. It typically takes an element-wise non-linear transformation (e.g., tanh) of a vector resulting from the sum of the word embedding and a linear transformation of the previous hidden state. The model as described thus far is identical to the recurrent neural network language model (RNNLM) of Mikolov et al. (2010) . In this paper, we replace the above simple hidden state units with the more complex Long Short-Term Memory units (Hochreiter and Schmidhuber, 1997), which have consistently been shown to yield much stronger performance in language modeling (Pham et al., 2014) . For simplicity, we still use the term RNNLM in referring to this model. Document Context Language Model One drawback of the RNNLM is that it cannot propagate longrange information between the sentences. Even if we remove sentence boundaries, long-range information will be attenuated by repeated application of the non-linear transition function. propose the Document Context Language Model (DCLM) to address this issue. The core idea is to represent context with two vectors: h t,n , representing intra-sentence word-level context, and c t , representing inter-sentence context. These two vectors", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 364, |
|
"text": "Mikolov et al. (2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 607, |
|
"end": 626, |
|
"text": "(Pham et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
|
{ |
|
"text": "Figure 1: A fragment of our model with latent variable zt, which only illustrates discourse information flow from sentence (t \u2212 1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "to t. The information from sentence (t \u2212 1) affects the distribution of zt and then the words prediction within sentence t. are then linearly combined in the generation function for word y t,n ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "p(y t,n+1 | z t , y t,<n , y t\u22121 ) = g W (zt) o h t,n relation-specific intra-sentential context + W (zt) c c t\u22121 relation-specific inter-sentential context + b (zt) o relation-specific bias (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y t,n | y t,<n , y <t ) = softmax (W o h t,n\u22121 + W c c t\u22121 + b o ) ,", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where c t\u22121 is set to the last hidden state of the previous sentence. show that this model can improve language model perplexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
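
{

"text": "The following is a minimal numpy sketch of the context-to-output combination in Equation 5; it is purely illustrative, and all names, sizes, and random values are our own assumptions rather than part of the released implementation:\n\nimport numpy as np\n\ndef softmax(a):\n    e = np.exp(a - a.max())   # subtract max for numerical stability\n    return e / e.sum()\n\nV, H = 10000, 64                 # illustrative vocabulary and hidden sizes\nrng = np.random.default_rng(0)\nW_o = rng.normal(scale=0.01, size=(V, H))   # output embeddings\nW_c = rng.normal(scale=0.01, size=(V, H))   # inter-sentential context weights\nb_o = np.zeros(V)\nh_prev = rng.normal(size=H)      # h_{t,n-1}: hidden state after word n-1 of sentence t\nc_prev = rng.normal(size=H)      # c_{t-1}: last hidden state of sentence t-1\n\n# Equation 5: p(y_{t,n} | y_{t,<n}, y_{<t})\np_word = softmax(W_o @ h_prev + W_c @ c_prev + b_o)\nassert np.isclose(p_word.sum(), 1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background",

"sec_num": "2"

},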
|
{ |
|
"text": "We now present a probabilistic neural model over sequences of words and shallow discourse relations. Discourse relations z t are treated as latent variables, which are linked with a recurrent neural network over words in a latent variable recurrent neural network (Chung et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 284, |
|
"text": "(Chung et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse Relation Language Models", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our model (see Figure 1 ) is formulated as a two-step generative story. In the first step, context information from the sentence (t \u2212 1) is used to generate the discourse relation between sentences (t \u2212 1) and t,", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 15, |
|
"end": 23, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(z t | y t\u22121 ) = softmax (U c t\u22121 + b) ,", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where z t is a random variable capturing the discourse relation between the two sentences, and c t\u22121 is a vector summary of the contextual information from sentence (t \u2212 1), just as in the DCLM (Equation 5). The model maintains a default context vector c 0 for the first sentences of documents, and treats it as a parameter learned with other model parameters during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the second step, the sentence y t is generated, conditioning on the preceding sentence y t\u22121 and the discourse relation z t :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p(y t | z t , y t\u22121 ) = Nt n p(y t,n | y t,<n , y t\u22121 , z t ), (7)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The generative probability for the sentence y t decomposes across tokens as usual Equation 7. The per-token probabilities are shown in Equation 4, in Figure 2 . Discourse relations are incorporated by parameterizing the output matrices W c ; depending on the discourse relation that holds between (t \u2212 1) and t, these matrices will favor different parts of the embedding space. The bias term b", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 150, |
|
"end": 158, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "Overall, the joint probability of the text and discourse relations is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "p(y 1:T , z 1:T ) = T t p(z t | y t\u22121 ) \u00d7 p(y t | z t , y t\u22121 ). (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "If the discourse relations z t are not observed, then our model is a form of latent variable recurrent neural network (LVRNN). Connections to recent work on LVRNNs are discussed in \u00a7 6; the key difference is that the latent variables here correspond to linguistically meaningful elements, which we may wish to predict or marginalize, depending on the situation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
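
{

"text": "To make the two-step generative story concrete, here is a small numpy sketch of the relation distribution in Equation 6 followed by sampling z_t; the sizes and random parameter values are illustrative assumptions only, and word generation (Equations 4 and 7) is sketched separately under parameter tying below:\n\nimport numpy as np\n\ndef softmax(a):\n    e = np.exp(a - a.max())\n    return e / e.sum()\n\nZ, H = 4, 64                     # illustrative number of relations and hidden size\nrng = np.random.default_rng(1)\nU = rng.normal(scale=0.1, size=(Z, H))\nb = np.zeros(Z)\nc_prev = rng.normal(size=H)      # summary vector of sentence t-1\n\n# Step 1 (Equation 6): distribution over the discourse relation z_t.\np_z = softmax(U @ c_prev + b)\nz_t = rng.choice(Z, p=p_z)\n\n# Step 2 (Equation 7): sentence t would now be generated word by word,\n# with the RNN output layer conditioned on z_t (Equation 4).\nprint(z_t, p_z)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Model",

"sec_num": "3.1"

},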
|
{ |
|
"text": "Parameter Tying As proposed, the Discourse Relation Language Model has a large number of parameters. Let K, H and V be the input dimension, hidden dimension and the size of vocabulary in language modeling. The size of each prediction matrix W", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "W (z) o = W o \u2022 V (z) , W (z) c = W c \u2022 M (z) ,", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where V (z) and M (z) are relation-specific components for intra-sentential and inter-sentential contexts; the size of these matrices is H \u00d7 H, with H V . The larger V \u00d7 H matrices W o and W c are shared across all relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 11, |
|
"text": "(z)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 18, |
|
"end": 21, |
|
"text": "(z)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Model", |
|
"sec_num": "3.1" |
|
}, |
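
{

"text": "A numpy sketch of the factored, relation-specific output layer (Equation 9 folded into Equation 4); all sizes and parameter values are illustrative assumptions:\n\nimport numpy as np\n\ndef softmax(a):\n    e = np.exp(a - a.max())\n    return e / e.sum()\n\nV, H, Z = 10000, 64, 4                       # illustrative sizes\nrng = np.random.default_rng(2)\nW_o = rng.normal(scale=0.01, size=(V, H))    # shared output embeddings\nW_c = rng.normal(scale=0.01, size=(V, H))    # shared context weights\nV_z = rng.normal(scale=0.1, size=(Z, H, H))  # relation-specific V^{(z)}\nM_z = rng.normal(scale=0.1, size=(Z, H, H))  # relation-specific M^{(z)}\nb_z = np.zeros((Z, V))                       # relation-specific bias b_o^{(z)}\n\nh_tn = rng.normal(size=H)                    # intra-sentential context h_{t,n}\nc_prev = rng.normal(size=H)                  # inter-sentential context c_{t-1}\nz = 2                                        # an assumed relation index\n\n# Only the H x H factors depend on z, so the large V x H matrices are shared.\nlogits = W_o @ (V_z[z] @ h_tn) + W_c @ (M_z[z] @ c_prev) + b_z[z]\np_word = softmax(logits)\nassert np.isclose(p_word.sum(), 1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Model",

"sec_num": "3.1"

},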
|
{ |
|
"text": "There are two possible inference scenarios: inference over discourse relations, conditioning on words; and inference over words, marginalizing over discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Inference over Discourse Relations The probability of discourse relations given the sentences p(z 1:T | y 1:T ) is decomposed into the product of probabilities of individual discourse relations conditioned on the adjacent sentences t p(z t | y t , y t\u22121 ). These probabilities are computed by Bayes' rule:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(z t | y t , y t\u22121 ) = p(y t | z t , y t\u22121 ) \u00d7 p(z t | y t\u22121 ) z p(y t | z , y t\u22121 ) \u00d7 p(z | y t\u22121 ) .", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The terms in each product are given in Equations 6 and 7. Normalizing involves only a sum over a small finite number of discourse relations. Note that inference is easy in our case because all words are observed and there is no probabilistic coupling of the discourse relations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Inference over Words In discourse-informed language modeling, we marginalize over discourse relations to compute the probability of a sequence of sentence y 1:T , which can be written as,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(y 1:T ) = T t zt p(z t | y t\u22121 ) \u00d7 p(y t | z t , y t\u22121 ),", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "because the word sequences are observed, decoupling each z t from its neighbors z t+1 and z t\u22121 . This decoupling ensures that we can compute the overall marginal likelihood as a product over local marginals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inference", |
|
"sec_num": "3.2" |
|
}, |
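
{

"text": "Because the words are observed, both inference problems reduce to independent per-sentence operations over a small set of relations. The numpy sketch below assumes the per-sentence scores log p(z | y_{t-1}) and log p(y_t | z, y_{t-1}) have already been computed by the model; the values here are random placeholders for illustration:\n\nimport numpy as np\n\ndef logsumexp(a, axis=-1):\n    m = a.max(axis=axis, keepdims=True)\n    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).sum(axis=axis))\n\nT, Z = 3, 4                                            # illustrative sentence and relation counts\nrng = np.random.default_rng(3)\nlog_prior = np.log(rng.dirichlet(np.ones(Z), size=T))  # log p(z | y_{t-1})\nlog_lik = -10.0 * np.abs(rng.normal(size=(T, Z)))      # log p(y_t | z, y_{t-1})\nlog_joint = log_prior + log_lik\n\n# Equation 10: posterior over each relation, by Bayes' rule.\nlog_post = log_joint - logsumexp(log_joint, axis=-1)[:, None]\n\n# Equation 11: marginal log-likelihood of the words, summing out each z_t.\ndoc_loglik = logsumexp(log_joint, axis=-1).sum()\nprint(np.exp(log_post).sum(axis=-1))   # each row sums to 1\nprint(doc_loglik)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Inference",

"sec_num": "3.2"

},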
|
{ |
|
"text": "The model can be trained in two ways: to maximize the joint probability p(y 1:T , z 1:T ), or to maximize the conditional probability p(z 1:T | y 1:T ). The joint training objective is more suitable for language modeling scenarios, and the conditional objective is better for discourse relation prediction. We now describe each objective in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Joint likelihood objective The joint likelihood objective function is directly adopted from the joint probability defined in Equation 8. The objective function for a single document with T sentences or utterances is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(\u03b8) = T t log p(z t | y t\u22121 ) + Nt n log p(y t,n | y t,<n , y t\u22121 , z t ),", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where \u03b8 represents the collection of all model parameters, including the parameters in the LSTM units and the word embeddings. Maximizing the objective function (\u03b8) will jointly optimize the model on both language language and discourse relation prediction. As such, it can be viewed as a form of multi-task learning (Caruana, 1997) , where we learn a shared representation that works well for discourse relation prediction and for language modeling. However, in practice, the large vocabulary size and number of tokens means that the language modeling part of the objective function tends to dominate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 332, |
|
"text": "(Caruana, 1997)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Conditional objective This training objective is specific to the discourse relation prediction task, and based on Equation 10 can be written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "r (\u03b8) = T t log p(z t | y t\u22121 ) + log p(y t | z t , y t\u22121 ) \u2212 log z p(z | y t\u22121 ) \u00d7 p(y t | z , y t\u22121 ) (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The first line in Equation 13 is the same as (\u03b8), but the second line reflects the normalization over all possible values of z t . This forces the objective function to attend specifically to the problem of maximizing the conditional likelihood of the discourse relations and treat language modeling as an auxiliary task (Collobert et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 321, |
|
"end": 345, |
|
"text": "(Collobert et al., 2011)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning", |
|
"sec_num": "3.3" |
|
}, |
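
{

"text": "A small sketch contrasting the two objectives for one document, reusing the same per-sentence quantities as in the inference sketch above (again with random placeholder values; z_obs stands for the observed relation labels):\n\nimport numpy as np\n\ndef logsumexp(a, axis=-1):\n    m = a.max(axis=axis, keepdims=True)\n    return np.squeeze(m, axis=axis) + np.log(np.exp(a - m).sum(axis=axis))\n\nT, Z = 3, 4\nrng = np.random.default_rng(4)\nlog_prior = np.log(rng.dirichlet(np.ones(Z), size=T))  # log p(z | y_{t-1})\nlog_lik = -10.0 * np.abs(rng.normal(size=(T, Z)))      # log p(y_t | z, y_{t-1})\nz_obs = rng.integers(Z, size=T)                        # observed relations\nrows = np.arange(T)\n\n# Equation 12: joint objective, relation plus word log-probabilities.\nl_joint = (log_prior[rows, z_obs] + log_lik[rows, z_obs]).sum()\n\n# Equation 13: conditional objective, the log posterior of the observed labels,\n# obtained by subtracting the per-sentence normalizer over all relations.\nl_cond = (log_prior[rows, z_obs] + log_lik[rows, z_obs]\n          - logsumexp(log_prior + log_lik, axis=-1)).sum()\nprint(l_joint, l_cond)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning",

"sec_num": "3.3"

},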
|
{ |
|
"text": "The discourse relation language model is carefully designed to decouple the discourse relations from each other, after conditioning on the words. It is clear that text documents and spoken dialogues have sequential discourse structures, and it seems likely that modeling this structure could improve performance. In a traditional hidden Markov model (HMM) generative approach (Stolcke et al., 2000) , modeling sequential dependencies is not difficult, because training reduces to relative frequency estimation. However, in the hybrid probabilisticneural architecture proposed here, training is already expensive, due to the large number of parameters that must be estimated. Adding probabilistic couplings between adjacent discourse relations z t\u22121 , z t would require the use of dynamic programming for both training and inference, increasing time complexity by a factor that is quadratic in the number of discourse relations. We did not attempt this in this paper; we do compare against a conventional HMM on the dialogue act prediction task in \u00a7 5. propose an alternative form of the document context language model, in which the contextual information c t impacts the hidden state h t+1 , rather than going directly to the outputs y t+1 . They obtain slightly better perplexity with this approach, which has fewer trainable parameters. However, this model would couple z t with all subsequent sentences y >t , making prediction and marginalization of discourse relations considerably more challenging. Sequential Monte Carlo algorithms offer a possible solution (de Freitas et al., ; Gu et al., 2015) , which may be considered in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 376, |
|
"end": 398, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1566, |
|
"end": 1587, |
|
"text": "(de Freitas et al., ;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1588, |
|
"end": 1604, |
|
"text": "Gu et al., 2015)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Modeling limitations", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "We evaluate our model on two benchmark datasets:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Implementation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(1) the Penn Discourse Treebank (Prasad et al., 2008, PDTB) , which is annotated on a corpus of Wall Street Journal acticles; (2) the Switchboard di-alogue act corpus (Stolcke et al., 2000, SWDA) , which is annotated on a collections of phone conversations. Both corpora contain annotations of discourse relations and dialogue relations that hold between adjacent spans of text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 59, |
|
"text": "(Prasad et al., 2008, PDTB)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 195, |
|
"text": "(Stolcke et al., 2000, SWDA)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Implementation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The Penn Discourse Treebank (PDTB) provides a low-level discourse annotation on written texts. In the PDTB, each discourse relation is annotated between two argument spans, Arg1 and Arg2. There are two types of relations: explicit and implicit. Explicit relations are signalled by discourse markers (e.g., \"however\", \"moreover\"), and the span of Arg1 is almost totally unconstrained: it can range from a single clause to an entire paragraph, and need not be adjacent to either Arg2 nor the discourse marker. However, automatically classifying these relations is considered to be relatively easy, due to the constraints from the discourse marker itself . In addition, explicit relations are difficult to incorporate into language models which must generate each word exactly once. On the contrary, implicit discourse relations are annotated only between adjacent sentences, based on a semantic understanding of the discourse arguments. Automatically classifying these discourse relations is a challenging task (Lin et al., 2009; Pitler et al., 2009; . We therefore focus on implicit discourse relations, leaving to the future work the question of how to apply our modeling framework to explicit discourse relations. During training, we collapse all relation types other than implicit (explicit, ENTREL, and NOREL) into a single dummy relation type, which holds between all adjacent sentence pairs that do not share an implicit relation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1009, |
|
"end": 1027, |
|
"text": "(Lin et al., 2009;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1028, |
|
"end": 1048, |
|
"text": "Pitler et al., 2009;", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Implementation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As in the prior work on first-level discourse relation identification (e.g., Park and Cardie, 2012), we use sections 2-20 of the PDTB as the training set, sections 0-1 as the development set for parameter tuning, and sections 21-22 for testing. For preprocessing, we lower-cased all tokens, and substituted all numbers with a special token \"NUM\". To build the vocabulary, we kept the 10,000 most frequent words from the training set, and replaced lowfrequency words with a special token \"UNK\". In prior work that focuses on detecting individual relations, balanced training sets are constructed so that there are an equal number of instances with and without each relation type (Park and Cardie, ; Biran and McKeown, 2013; Rutherford and Xue, 2014) . In this paper, we target the more challenging multiway classification problem, so this strategy is not applicable; in any case, since our method deals with entire documents, it is not possible to balance the training set in this way.", |
|
"cite_spans": [ |
|
{ |
|
"start": 678, |
|
"end": 697, |
|
"text": "(Park and Cardie, ;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 722, |
|
"text": "Biran and McKeown, 2013;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 748, |
|
"text": "Rutherford and Xue, 2014)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Implementation", |
|
"sec_num": "4" |
|
}, |
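
{

"text": "A minimal sketch of the preprocessing described above; the tokenization, the number pattern, and the toy sentences are our own illustrative assumptions:\n\nimport re\nfrom collections import Counter\n\ndef preprocess(tokens):\n    # Lower-case, and map number tokens to the special token NUM.\n    return ['NUM' if re.fullmatch(r'[0-9][0-9.,]*', t) else t.lower() for t in tokens]\n\ndef build_vocab(train_sentences, size=10000):\n    # Keep the most frequent words from the training set.\n    counts = Counter(t for sent in train_sentences for t in preprocess(sent))\n    return {w for w, _ in counts.most_common(size)}\n\ndef apply_vocab(tokens, vocab):\n    # Replace out-of-vocabulary words with the special token UNK.\n    return [t if t in vocab else 'UNK' for t in preprocess(tokens)]\n\ntrain = [['The', 'index', 'rose', '2.5', '%', '.'], ['Shares', 'fell', 'sharply', '.']]\nvocab = build_vocab(train)\nprint(apply_vocab(['The', 'index', 'fell', '0.3', '%', 'yesterday', '.'], vocab))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Data and Implementation",

"sec_num": "4"

},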
|
{ |
|
"text": "The Switchboard Dialog Act Corpus (SWDA) is annotated on the Switchboard Corpus of humanhuman conversational telephone speech (Godfrey et al., 1992) . The annotations label each utterance with one of 42 possible speech acts, such as AGREE, HEDGE, and WH-QUESTION. Because these speech acts form the structure of the dialogue, most of them pertain to both the preceding and succeeding utterances (e.g., AGREE). The SWDA corpus includes 1155 five-minute conversations. We adopted the standard split from Stolcke et al. (2000) , using 1,115 conversations for training and nineteen conversations for test. For parameter tuning, we randomly select nineteen conversations from the training set as the development set. After parameter tuning, we train the model on the full training set with the selected configuration. We use the same preprocessing techniques here as in the PDTB.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 148, |
|
"text": "(Godfrey et al., 1992)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 523, |
|
"text": "Stolcke et al. (2000)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data and Implementation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use a single-layer LSTM to build the recurrent architecture of our models, which we implement in the CNN package. 1 Our implementation is available on https://github.com/ jiyfeng/drlm. Some additional details follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Initialization Following prior work on RNN initialization (Bengio, 2012) , all parameters except the relation prediction parameters U and b are initialized with random values drawn from the range Learning Online learning was performed using AdaGrad (Duchi et al., 2011) with initial learning 1 https://github.com/clab/cnn rate \u03bb = 0.1. To avoid the exploding gradient problem, we used norm clipping trick with a threshold of \u03c4 = 5.0 (Pascanu et al., 2012) . In addition, we used value dropout (Srivastava et al., 2014) with rate 0.5, on the input X, context vector c and hidden state h, similar to the architecture proposed by Pham et al. (2014) . The training procedure is monitored by the performance on the development set. In our experiments, 4 to 5 epochs were enough.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 72, |
|
"text": "(Bengio, 2012)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 249, |
|
"end": 269, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 455, |
|
"text": "(Pascanu et al., 2012)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 518, |
|
"text": "(Srivastava et al., 2014)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 645, |
|
"text": "Pham et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "4.1" |
|
}, |
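
{

"text": "For concreteness, a numpy sketch of one optimization step (AdaGrad with norm clipping); the learning rate 0.1 and threshold 5.0 follow the text, while the per-vector clipping, the epsilon constant, and the random stand-in gradients are simplifying assumptions:\n\nimport numpy as np\n\ndef clip_by_norm(grad, threshold=5.0):\n    # Rescale the gradient if its L2 norm exceeds the threshold.\n    norm = np.linalg.norm(grad)\n    return grad if norm <= threshold else grad * (threshold / norm)\n\ndef adagrad_step(param, grad, accum, lr=0.1, eps=1e-8):\n    # AdaGrad: per-parameter learning rates from accumulated squared gradients.\n    grad = clip_by_norm(grad)\n    accum += grad ** 2\n    param -= lr * grad / (np.sqrt(accum) + eps)\n    return param, accum\n\nrng = np.random.default_rng(5)\nw = rng.normal(size=10)\nacc = np.zeros(10)\nfor _ in range(3):\n    g = rng.normal(size=10)      # stand-in for a real gradient\n    w, acc = adagrad_step(w, g, acc)\nprint(w)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Implementation",

"sec_num": "4.1"

},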
|
|
{ |
|
"text": "Hyper-parameters Our model includes two tunable hyper-parameters: the dimension of word representation K, the hidden dimension of LSTM unit H. We consider the values {32, 48, 64, 96, 128} for both K and H. For each corpus in experiments, the best combination of K and H is selected via grid search on the development set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Our main evaluation is discourse relation prediction, using the PDTB and SWDA corpora. We also evaluate on language modeling, to determine whether incorporating discourse annotations at training time and then marginalizing them at test time can improve performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We first evaluate our model with implicit discourse relation prediction on the PDTB dataset. Most of the prior work on first-level discourse relation prediction focuses on the \"one-versus-all\" binary classification setting, but we attack the more general fourway classification problem, as performed by Rutherford and Xue (2015). We compare against the following methods:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implicit discourse relation prediction on the PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Rutherford and Xue (2015) build a set of featurerich classifiers on the PDTB, and then augment these classifiers with additional automaticallylabeled training instances. We compare against their published results, which are state-of-the-art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implicit discourse relation prediction on the PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Ji and Eisenstein (2015) employ a recursive neural network architecture. Their experimental setting is different, so we re-run their system using the same setting as described in \u00a7 4. 40.5 with extra training data 4. 56.4 40.0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implicit discourse relation prediction on the PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Our work -DRLM 5. Joint training 57.1 40.5 6. Conditional training 59.5 * 42.3 * significantly better than lines 2 and 4 with p < 0.05 Results As shown in Table 1 , the conditionallytrained discourse relation language models (DRLM) outperforms all alternatives, on both metrics. While the jointly-trained DRLM is at the same level as the previous state-of-the-art, conditional training on the same model provides a significant additional advantage, indicated by a binomial test.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 162, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implicit discourse relation prediction on the PDTB", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Dialogue act tagging has been widely studied in both NLP and speech communities. We follow the setup used by Stolcke et al. (2000) to conduct experiments, and adopt the following systems for comparison: Stolcke et al. (2000) employ a hidden Markov model, with each HMM state corresponding to a dialogue act.", |
|
"cite_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 130, |
|
"text": "Stolcke et al. (2000)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 224, |
|
"text": "Stolcke et al. (2000)", |
|
"ref_id": "BIBREF39" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act tagging", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Kalchbrenner and Blunsom (2013) employ a complex neural architecture, with a convolutional network at each utterance and a recurrent network over the length of the dialog. To our knowledge, this model attains state-of-the-art accuracy on this task, outperforming other prior work such as (Webb et al., 2005; Milajevs and Purver, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 307, |
|
"text": "(Webb et al., 2005;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 334, |
|
"text": "Milajevs and Purver, 2014)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act tagging", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Results As shown in Prior work 3. (Stolcke et al., 2000) 71.0 4. (Kalchbrenner and Blunsom, 2013) 73.9", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 56, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 65, |
|
"end": 97, |
|
"text": "(Kalchbrenner and Blunsom, 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act tagging", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Our work -DRLM 5. Joint training 74.0 6. Conditional training 77.0 * * significantly better than line 4 with p < 0.01 is more reliable on this evaluation, since no single class dominates, unlike the PDTB task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act tagging", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As a joint model for discourse and language modeling, DRLM can also function as a language model, assigning probabilities to sequences of words while marginalizing over discourse relations. To determine whether discourse-aware language modeling can improve performance, we compare against the following systems:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse-aware language modeling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "RNNLM+LSTM This is the same basic architecture as the RNNLM proposed by (Mikolov et al., 2010) , which was shown to outperform a Kneser-Ney smoothed 5-gram model on modeling Wall Street Journal text. Following Pham et al. (2014) , we replace the Sigmoid nonlinearity with a long short-term memory (LSTM).", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 94, |
|
"text": "(Mikolov et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 228, |
|
"text": "Pham et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse-aware language modeling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "DCLM We compare against the Document Context Language Model (DCLM) of . We use the \"context-to-output\" variant, which is identical to the current modeling approach, except that it is not parametrized by discourse relations. This model achieves strong results on language modeling for small and medium-sized corpora, outperforming RNNLM+LSTM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discourse-aware language modeling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Results The perplexities of language modeling on the PDTB and the SWDA are summarized in Ta yields further improvements for both datasets. We emphasize that discourse relations in the test documents are marginalized out, so no annotations are required for the test set; the improvements are due to the disambiguating power of discourse relations in the training set. Because our training procedure requires discourse annotations, this approach does not scale to the large datasets typically used in language modeling. As a consequence, the results obtained here are somewhat academic, from the perspective of practical language modeling. Nonetheless, the positive results here motivate the investigation of training procedures that are also capable of marginalizing over discourse relations at training time.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 91, |
|
"text": "Ta", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discourse-aware language modeling", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "This paper draws on previous work in both discourse modeling and language modeling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Discourse and dialog modeling Early work on discourse relation classification utilizes rich, handcrafted feature sets (Joty et al., 2012; Lin et al., 2009; Sagae, 2009) . Recent representation learning approaches attempt to learn good representations jointly with discourse relation classifiers and discourse parsers (Ji and Eisenstein, 2014; Li et al., 2014) . Of particular relevance are applications of neural architectures to PDTB implicit discourse relation classification Zhang et al., 2015; Braud and Denis, 2015) . All of these approaches are essentially classifiers, and take supervision only from the 16,000 annotated discourse relations in the PDTB training set. In contrast, our approach is a probabilistic model over the entire text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 137, |
|
"text": "(Joty et al., 2012;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 155, |
|
"text": "Lin et al., 2009;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 168, |
|
"text": "Sagae, 2009)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 342, |
|
"text": "(Ji and Eisenstein, 2014;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 359, |
|
"text": "Li et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 497, |
|
"text": "Zhang et al., 2015;", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 520, |
|
"text": "Braud and Denis, 2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Probabilistic models are frequently used in dia-log act tagging, where hidden Markov models have been a dominant approach (Stolcke et al., 2000) . In this work, the emission distribution is an n-gram language model for each dialogue act; we use a conditionally-trained recurrent neural network language model. An alternative neural approach for dialogue act tagging is the combined convolutionalrecurrent architecture of Kalchbrenner and Blunsom (2013) . Our modeling framework is simpler, relying on a latent variable parametrization of a purely recurrent architecture.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 144, |
|
"text": "(Stolcke et al., 2000)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 421, |
|
"end": 452, |
|
"text": "Kalchbrenner and Blunsom (2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Language modeling There are an increasing number of attempts to incorporate document-level context information into language modeling. For example, Mikolov and Zweig (2012) introduce LDAstyle topics into RNN based language modeling. Sordoni et al. (2015) use a convolutional structure to summarize the context from previous two utterances as context vector for RNN based language modeling. Our models in this paper provide a unified framework to model the context and current sentence. Wang and Cho (2015) and Lin et al. (2015) construct bag-of-words representations of previous sentences, which are then used to inform the RNN language model that generates the current sentence. The most relevant work is the Document Context Language Model , DCLM); we describe the connection to this model in \u00a7 2. By adding discourse information as a latent variable, we attain better perplexity on held-out data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 172, |
|
"text": "Mikolov and Zweig (2012)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 233, |
|
"end": 254, |
|
"text": "Sordoni et al. (2015)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 505, |
|
"text": "Wang and Cho (2015)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 510, |
|
"end": 527, |
|
"text": "Lin et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Latent variable neural networks Introducing latent variables to a neural network model increases its representational capacity, which is the main goal of prior efforts in this space (Kingma and Welling, 2014; Chung et al., 2015) . From this perspective, our model with discourse relations as latent variables shares the same merit. Unlike this prior work, in our approach, the latent variables carry a linguistic interpretation, and are at least partially observed. Also, these prior models employ continuous latent variables, requiring complex inference techniques such as variational autoencoders (Kingma and Welling, 2014; Burda et al., 2016; Chung et al., 2015) . In contrast, the discrete latent variables in our model are easy to sum and maximize over.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 208, |
|
"text": "(Kingma and Welling, 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 228, |
|
"text": "Chung et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 625, |
|
"text": "(Kingma and Welling, 2014;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 626, |
|
"end": 645, |
|
"text": "Burda et al., 2016;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 646, |
|
"end": 665, |
|
"text": "Chung et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a probabilistic neural model over sequences of words and shallow discourse relations between adjacent sequences. This model combines positive aspects of neural network architectures with probabilistic graphical models: it can learn discriminatively-trained vector representations, while maintaining a probabilistic representation of the targeted linguistic element: in this case, shallow discourse relations. This method outperforms state-of-the-art systems in two discourse relation detection tasks, and can also be applied as a language model, marginalizing over discourse relations on the test data. Future work will investigate the possibility of learning from partially-labeled training data, which would have at least two potential advantages. First, it would enable the model to scale up to the large datasets needed for competitive language modeling. Second, by training on more data, the resulting vector representations might support even more accurate discourse relation prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks to Trevor Cohn, Chris Dyer, Lingpeng Kong, and Quoc V. Le for helpful discussions, and to the anonymous reviewers for their feedback. This work was supported by a Google Faculty Research award to the third author. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Practical recommendations for gradient-based training of deep architectures", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Neural Networks: Tricks of the Trade", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "437--478", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neu- ral Networks: Tricks of the Trade, pages 437-478. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"authors": [ |
|
{ |
|
"first": "Or", |
|
"middle": [], |
|
"last": "Biran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--73", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Or Biran and Kathleen McKeown. 2013. In Proceed- ings of the Association for Computational Linguistics (ACL), pages 69-73, Sophia, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Comparing word representations for implicit discourse relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Chlo\u00e9", |
|
"middle": [], |
|
"last": "Braud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pascal", |
|
"middle": [], |
|
"last": "Denis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2201--2211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chlo\u00e9 Braud and Pascal Denis. 2015. Comparing word representations for implicit discourse relation classi- fication. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP), pages 2201- 2211, Lisbon, September.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Importance weighted autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Yuri", |
|
"middle": [], |
|
"last": "Burda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roger", |
|
"middle": [], |
|
"last": "Grosse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. 2016. Importance weighted autoencoders. In Pro- ceedings of the International Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Multitask learning. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "41--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learn- ing, 28(1):41-75.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A recurrent latent variable model for sequential data", |
|
"authors": [ |
|
{ |
|
"first": "Junyoung", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Kastner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Dinh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kratarth", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Neural Information Processing Systems (NIPS), Montr\u00e9al.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Natural language processing (almost) from scratch", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Bottou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Karlen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Kuksa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2493--2537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural lan- guage processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Sequential monte carlo methods to train neural network models", |
|
"authors": [ |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [ |
|
"F", |
|
"G" |
|
], |
|
"last": "de Freitas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahesan", |
|
"middle": [], |
|
"last": "Niranjan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Gee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arnaud", |
|
"middle": [], |
|
"last": "Doucet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Neural computation", |
|
"volume": "12", |
|
"issue": "4", |
|
"pages": "955--993", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jo\u00e3o FG de Freitas, Mahesan Niranjan, Andrew H. Gee, and Arnaud Doucet. Sequential monte carlo methods to train neural network models. Neural computation, 12(4):955-993.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Adaptive subgradient methods for online learning and stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2121--2159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Transitionbased dependency parsing with stack long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Ballesteros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Austin", |
|
"middle": [], |
|
"last": "Matthews", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "334--343", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short-term memory. In Proceedings of the Association for Com- putational Linguistics (ACL), pages 334-343, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Solving the problem of cascading errors: Approximate bayesian inference for linguistic annotation pipelines", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "618--626", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Christopher D Manning, and An- drew Y Ng. 2006. Solving the problem of cascading errors: Approximate bayesian inference for linguis- tic annotation pipelines. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP), pages 618-626.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Switchboard: Telephone speech corpus for research and development", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Godfrey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Holliman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jane", |
|
"middle": [], |
|
"last": "McDaniel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "ICASSP", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "517--520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for re- search and development. In ICASSP, volume 1, pages 517-520. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Neural adaptive sequential monte carlo", |
|
"authors": [ |
|
{ |
|
"first": "Shixiang", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zoubin", |
|
"middle": [], |
|
"last": "Ghahramani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Turner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Information Processing Systems (NIPS), Montr\u00e9al. Sepp Hochreiter and J\u00fcrgen Schmidhuber", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shixiang Gu, Zoubin Ghahramani, and Richard E Turner. 2015. Neural adaptive sequential monte carlo. In Neu- ral Information Processing Systems (NIPS), Montr\u00e9al. Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Representation learning for text-level discourse parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceed- ings of the Association for Computational Linguistics (ACL), Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "One vector is not enough: Entity-augmented distributional semantics for discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association for Computational Linguistics (TACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributional seman- tics for discourse relations. Transactions of the Asso- ciation for Computational Linguistics (TACL), June.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Document context language models", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lingpeng", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji, Trevor Cohn, Lingpeng Kong, Chris Dyer, and Jacob Eisenstein. 2015. Document con- text language models. In International Conference on Learning Representations, Poster Paper, volume abs/1511.03962.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A novel discriminative framework for sentence-level discourse analysis", |
|
"authors": [ |
|
{ |
|
"first": "Shafiq", |
|
"middle": [], |
|
"last": "Joty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Carenini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2012. A novel discriminative framework for sentence-level discourse analysis. In Proceedings of Empirical Meth- ods for Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Recurrent convolutional neural networks for discourse compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse composi- tionality. In Proceedings of the Workshop on Continu- ous Vector Space Models and their Compositionality, pages 119-126, Sofia, Bulgaria, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Autoencoding variational bayes", |
|
"authors": [ |
|
{ |
|
"first": "Diederik", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Max Welling. 2014. Auto- encoding variational bayes. In Proceedings of the In- ternational Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distributed representations of sentences and documents", |
|
"authors": [ |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Pro- ceedings of the International Conference on Machine Learning (ICML).", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Recursive deep models for discourse parsing", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rumeng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Rumeng Li, and Eduard Hovy. 2014. Recursive deep models for discourse parsing. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Recognizing implicit discourse relations in the penn discourse treebank", |
|
"authors": [ |
|
{ |
|
"first": "Ziheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "343--351", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP), pages 343-351, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Hierarchical recurrent neural network for document modeling", |
|
"authors": [ |
|
{ |
|
"first": "Rui", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Muyun", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "899--907", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP), pages 899-907, Lisbon, September.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Computational linguistics and deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Computational Linguistics", |
|
"volume": "", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning. 2016. Computational linguis- tics and deep learning. Computational Linguistics, 41(4).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Context dependent recurrent neural network language model", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Zweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of Spoken Language Technology (SLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "234--239", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov and Geoffrey Zweig. 2012. Context de- pendent recurrent neural network language model. In Proceedings of Spoken Language Technology (SLT), pages 234-239.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Recurrent neural network based language model", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Karafi\u00e1t", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukas", |
|
"middle": [], |
|
"last": "Burget", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "INTER-SPEECH", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1045--1048", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Lukas Burget, Jan Cer- nock\u1ef3, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTER- SPEECH, pages 1045-1048.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Investigating the contribution of distributional semantic information for dialogue act classification", |
|
"authors": [ |
|
{ |
|
"first": "Dmitrijs", |
|
"middle": [], |
|
"last": "Milajevs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Purver", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "40--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitrijs Milajevs and Matthew Purver. 2014. Investi- gating the contribution of distributional semantic in- formation for dialogue act classification. In Proceed- ings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC), pages 40- 47.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Improving implicit discourse relation recognition through feature set optimization", |
|
"authors": [ |
|
{ |
|
"first": "Joonsuk", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joonsuk Park and Claire Cardie. Improving implicit dis- course relation recognition through feature set opti- mization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dia- logue, pages 108-112, Seoul, South Korea, July. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "On the difficulty of training recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Pascanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1211.5063" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Dropout improves recurrent neural networks for handwriting recognition", |
|
"authors": [ |
|
{ |
|
"first": "Vu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Th\u00e9odore", |
|
"middle": [], |
|
"last": "Bluche", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Kermorvant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00e9r\u00f4me", |
|
"middle": [], |
|
"last": "Louradour", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Frontiers in Handwriting Recognition (ICFHR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "285--290", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vu Pham, Th\u00e9odore Bluche, Christopher Kermorvant, and J\u00e9r\u00f4me Louradour. 2014. Dropout improves re- current neural networks for handwriting recognition. In Frontiers in Handwriting Recognition (ICFHR), 2014 14th International Conference on, pages 285- 290. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Easily identifiable discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mridhula", |
|
"middle": [], |
|
"last": "Raghupathy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hena", |
|
"middle": [], |
|
"last": "Mehta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "87--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Eas- ily identifiable discourse relations. In Proceedings of the International Conference on Computational Lin- guistics (COLING), pages 87-90, Manchester, UK.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Automatic sense prediction for implicit discourse relations in text", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Pitler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Au- tomatic sense prediction for implicit discourse rela- tions in text. In Proceedings of the Association for Computational Linguistics (ACL), Suntec, Singapore. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of LREC.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Discovering implicit discourse relations through brown cluster pair representation and coreference patterns", |
|
"authors": [ |
|
{ |
|
"first": "Attapol", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Rutherford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Attapol T Rutherford and Nianwen Xue. 2014. Discov- ering implicit discourse relations through brown clus- ter pair representation and coreference patterns. In Proceedings of the European Chapter of the Associ- ation for Computational Linguistics (EACL).", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Improving the inference of implicit discourse relations via classifying explicit discourse connectives", |
|
"authors": [ |
|
{ |
|
"first": "Attapol", |
|
"middle": [], |
|
"last": "Rutherford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "799--808", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Attapol Rutherford and Nianwen Xue. 2015. Improving the inference of implicit discourse relations via classi- fying explicit discourse connectives. pages 799-808, May-June.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Analysis of discourse structure with syntactic dependencies and data-driven shift-reduce parsing", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shift-reduce parsing. In Proceedings of the 11th International Con- ference on Parsing Technologies (IWPT'09), pages 81-84, Paris, France, October. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "In Proceedings of Empirical Methods for Natural Lan- guage Processing (EMNLP), Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "A neural network approach to context-sensitive generation of conversational responses", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Meg", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian-Yun", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Meg Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversa- tional responses. In Proceedings of the North Ameri- can Chapter of the Association for Computational Lin- guistics (NAACL), Denver, CO, May.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Dropout: A simple way to prevent neural networks from overfitting", |
|
"authors": [ |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Krizhevsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "15", |
|
"issue": "1", |
|
"pages": "1929--1958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Ries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Coccaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Bates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [], |
|
"last": "Van Ess-Dykema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Meteer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational linguistics", |
|
"volume": "26", |
|
"issue": "3", |
|
"pages": "339--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339-373.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Word representations: A simple and general method for semi-supervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Turian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lev", |
|
"middle": [], |
|
"last": "Ratinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. pages 384-394.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Dialogue act classification based on intra-utterance features", |
|
"authors": [ |
|
{ |
|
"first": "Tian", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the AAAI Workshop on Spoken Language Understanding", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.03729" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tian Wang and Kyunghyun Cho. 2015. Larger- context language modelling. arXiv preprint arXiv:1511.03729. Nick Webb, Mark Hepple, and Yorick Wilks. 2005. Di- alogue act classification based on intra-utterance fea- tures. In Proceedings of the AAAI Workshop on Spo- ken Language Understanding.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Discourse structure and language technology", |
|
"authors": [ |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Webber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Egg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Valia", |
|
"middle": [], |
|
"last": "Kordoni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Natural Language Engineering", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bonnie Webber, Markus Egg, and Valia Kordoni. 2012. Discourse structure and language technology. Journal of Natural Language Engineering, 1.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Partially observable markov decision processes for spoken dialog systems", |
|
"authors": [ |
|
{ |
|
"first": "Jason", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computer Speech & Language", |
|
"volume": "21", |
|
"issue": "2", |
|
"pages": "393--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jason D Williams and Steve Young. 2007. Partially ob- servable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393- 422.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "The CoNLL-2015 shared task on shallow discourse parsing", |
|
"authors": [ |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Hwee Tou Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rashmi", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Prasad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bryant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Rutherford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Conference on Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol T Rutherford. 2015. The CoNLL-2015 shared task on shallow dis- course parsing. In Proceedings of the Conference on Natural Language Learning (CoNLL).", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Shallow convolutional neural network for implicit discourse relation recognition", |
|
"authors": [ |
|
{ |
|
"first": "Biao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jinsong", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deyi", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yaojie", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junfeng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of Empirical Methods for Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2230--2235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolu- tional neural network for implicit discourse relation recognition. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP), pages 2230- 2235, Lisbon, September.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Per-token generative probabilities in the discourse relation language model" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "where d 1 and d 2 are the input and output dimensions of the parameter matrix respectively. The matrix U is initialized with random numbers from [\u221210 \u22125 , 10 \u22125 ] and b is initialized to 0." |
|
}, |
|
"TABREF0": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Multiclass relation identification on the first-level PDTB relations.", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>, the conditionally-</td></tr><tr><td>trained discourse relation language model (DRLM)</td></tr><tr><td>outperforms all competitive systems on this task. A</td></tr><tr><td>binomial test shows the result in line 6 is signifi-</td></tr><tr><td>cantly better than the previous state-of-the-art (line</td></tr><tr><td>4). All comparisons are against published results,</td></tr><tr><td>and Macro-F 1 scores are not available. Accuracy</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "The results of dialogue act tagging.", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Language model perplexities (PPLX), lower is better.The model dimensions K and H that gave best performance on the dev set are also shown.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |