|
{ |
|
"paper_id": "D17-1019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:17:54.442786Z" |
|
}, |
|
"title": "Neural Net Models of Open-domain Discourse Coherence", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": { |
|
"settlement": "Stanford", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University", |
|
"location": { |
|
"settlement": "Stanford", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domainindependent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latentvariable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.", |
|
"pdf_parse": { |
|
"paper_id": "D17-1019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domainindependent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latentvariable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Modeling discourse coherence (the way parts of a text are linked into a coherent whole) is essential for summarization (Barzilay and McKeown, 2005 ), text planning (Hovy, 1988; Marcu, 1997) question-answering (Verberne et al., 2007) , and even psychiatric diagnosis (Elvev\u00e5g et al., 2007; Bedi et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 146, |
|
"text": "(Barzilay and McKeown, 2005", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 176, |
|
"text": "(Hovy, 1988;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 177, |
|
"end": 189, |
|
"text": "Marcu, 1997)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 232, |
|
"text": "(Verberne et al., 2007)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 288, |
|
"text": "(Elvev\u00e5g et al., 2007;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 307, |
|
"text": "Bedi et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Various frameworks exist, each tackling aspects of coherence. Lexical cohesion (Halliday and Hasan, 1976; Morris and Hirst, 1991 ) models chains of words and synonyms. Psychological models of discourse (Foltz et al., 1998; McNamara et al., 2010) use LSA embeddings to generalize lexical cohesion. Relational models like RST (Mann and Thompson, 1988; Lascarides and Asher, 1991) define relations that hierarchically structure texts. The entity grid model (Barzilay and Lapata, 2008) and its extensions 1 capture the referential coherence of entities moving in and out of focus across a text. Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 105, |
|
"text": "(Halliday and Hasan, 1976;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 128, |
|
"text": "Morris and Hirst, 1991", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 222, |
|
"text": "(Foltz et al., 1998;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 245, |
|
"text": "McNamara et al., 2010)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 349, |
|
"text": "RST (Mann and Thompson, 1988;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 350, |
|
"end": 377, |
|
"text": "Lascarides and Asher, 1991)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 481, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here we introduce two classes of neural models for discourse coherence. Our discriminative models induce coherence by treating human generated texts as coherent examples and texts with random sentence replacements as negative examples, feeding LSTM sentence embeddings of pairs of consecutive sentences to a classifier. These achieve stateof-the-art (96% accuracy) on the standard domainspecific sentence-pair-ordering dataset (Barzilay and Lapata, 2008) , but suffer in a larger opendomain setting due to the small semantic space that negative sampling is able to cover.", |
|
"cite_spans": [ |
|
{ |
|
"start": 427, |
|
"end": 454, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our generative models are based on augumenting encoder-decoder models with latent variables to model discourse relationships across sentences, including (1) a model that incorporates an HMM-LDA topic model into the generative model and (2) an end-to-end model that introduces a Markovstructured neural latent variable, inspired by recent work on training latent-variable recurrent nets (Bowman et al., 2015; Serban et al., 2016b) . These generative models obtain the best result on a large open-domain setting, including on the difficult task of reconstructing the order of every sentence in a paragraph, and our latent variable generative model significantly improves the coherence of text generated by the model. Our work marks an initial step in building endto-end systems to evaluate open-domain discourse coherence, and more importantly, generating coherent texts given discourse contexts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 407, |
|
"text": "(Bowman et al., 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 429, |
|
"text": "Serban et al., 2016b)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The discriminative model treats cliques (sets of sentences surrounding a center sentence) taken from the original articles as coherent positive examples and cliques with random replacements of the center sentence as negative examples. The discriminative model can be viewed as an extended version of Li and Hovy's (2014) model but is practical at large scale 2 . We thus make this section succinct.", |
|
"cite_spans": [ |
|
{ |
|
"start": 300, |
|
"end": 320, |
|
"text": "Li and Hovy's (2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discriminative Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Notations Let C denote a sequence of coherent texts taken from original articles generated by humans. C is comprised of a sequence of sentences C = {s n\u2212L , ..., s n\u22121 , s n , s n+1 , ..., s n+L } where L denotes the half size of the context window. Suppose each sentence s n consists of a sequence of words w n1 , ..., w nt , ..., w nM , where M is the number of tokens in s n . Each word w is associated with a K dimensional vector h w and each sentence is associated with a K dimensional vector x s .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discriminative Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Each C contains 2L + 1 sentences, and is associated with a (2L + 1) \u00d7 K dimensional vector obtained by concatenating the representations of its constituent sentences. The sentence representation is obtained from LSTMs. After word compositions, we use the representation output from the final time step to represent the entire sentence. Another neural network model with a sigmoid function on the very top layer is employed to map the concatenation of representations of its constituent sentences to a scalar, indicating the probability of the current clique being a coherent one or an incoherent one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discriminative Model", |
|
"sec_num": "2" |
|
}, |
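To make the clique-scoring architecture above concrete, here is a minimal sketch in PyTorch. It is an illustration under our own assumptions (a single-layer LSTM sentence encoder, the layer sizes, and the hypothetical class name CliqueScorer), not the authors' released implementation.

```python
# Minimal sketch of the discriminative clique scorer (illustrative only; the
# hyperparameters below are assumptions, not values taken from the paper).
import torch
import torch.nn as nn

class CliqueScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300, half_window=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        clique_size = 2 * half_window + 1          # 2L + 1 sentences per clique
        self.classifier = nn.Sequential(
            nn.Linear(clique_size * hid_dim, hid_dim),
            nn.Tanh(),
            nn.Linear(hid_dim, 1),
            nn.Sigmoid(),                          # probability that the clique is coherent
        )

    def encode_sentence(self, token_ids):
        # token_ids: (batch, seq_len); the final LSTM state is the sentence vector.
        _, (h_n, _) = self.sent_lstm(self.embed(token_ids))
        return h_n[-1]                             # (batch, hid_dim)

    def forward(self, clique):
        # clique: list of 2L + 1 (batch, seq_len) tensors, one per sentence.
        sent_vecs = [self.encode_sentence(s) for s in clique]
        return self.classifier(torch.cat(sent_vecs, dim=-1)).squeeze(-1)

# Positive cliques come from the original text; negatives replace the center
# sentence with a randomly sampled one; train with binary cross-entropy.
```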
|
{ |
|
"text": "Weakness Two problems with the discriminative model stand out: First, it relies on negative sampling to generate negative examples. Since the sentence-level semantic space in the open-domain setting is huge, the sampled instances can only cover a tiny proportion of the possible negative candidates, and therefore don't cover the space of possible meanings. As we will show in the experiments section, the discriminative model performs competitively in specific domains, but not in the open domain setting. Secondly and more importantly, discriminative models are only able to tell whether an already-given chunk of text is coherent or not. While they can thus be used in tasks like extractive summarization for sentence re-ordering, they cannot be used for coherent text generation in tasks like dialogue generation or abstractive text summarization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Discriminative Model", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We therefore introduce three neural generative models of discourse coherence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Generative Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In a coherent context, a machine should be able to guess the next utterance given the preceding ones. A straightforward way to do that is to train a SEQ2SEQ model to predict a sentence given its contexts (Sutskever et al., 2014) . Generating sentences based on neighboring sentences resembles skip-thought models (Kiros et al., 2015) , which build an encoder-decoder model by predicting tokens in neighboring sentences. As shown in Figure 1a , given two consecutive sentences [s i , s i+1 ], one can measure the coherence by the likelihood of generating s i+1 given its preceding sentence s i (denoted by uni). This likelihood is scaled by the number of words in s i+1 (denoted by N i+1 ) to avoid favoring short sequences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 228, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 313, |
|
"end": 333, |
|
"text": "(Kiros et al., 2015)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 441, |
|
"text": "Figure 1a", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(s i , s i+1 ) = 1 N i+1 log p(s i+1 |s i )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The probability can be directly computed using a pretrained SEQ2SEQ model (Sutskever et al., 2014) or an attention-based model (Bahdanau et al., 2015; Luong et al., 2015) . In a coherent context, a machine should not only be able to guess the next utterance given the preceding ones, but also the preceding one given the following ones. This gives rise to the coherence model (denoted by bi) that measures the bidirectional dependency between the two consecutive sentences:", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 98, |
|
"text": "(Sutskever et al., 2014)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 127, |
|
"end": 150, |
|
"text": "(Bahdanau et al., 2015;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 151, |
|
"end": 170, |
|
"text": "Luong et al., 2015)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(s i , s i+1 ) = 1 N i log p B (s i |s i+1 ) + log 1 N i+1 p F (s i+1 |s i )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We separately train two models: a forward model p F (s i+1 |s i ) that predicts the next sentence based on the previous one and a backward model p B (s i |s i+1 ) that predicts the previous sentence given the next sentence. p B (s i |s i+1 ) can be trained in a way similar to p F (s i+1 |s i ) with sources and targets swapped. It is worth noting that p B and p F are separate models and do not share parameters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "One problem with the described uni and bi models is that sentences with higher language model probability (e.g., sentences without rare words) also tend to have higher conditional probability given their preceding or succeeding sentences. We are interested in measuring the informational gain from the contexts rather than how fluent the current sentence is. We thus propose eliminating the influence of the language model, which yields the following coherence score:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "L(s i , s i+1 ) = 1 N i [log p B (s i |s i+1 ) \u2212 log p L (s i )] + 1 N i+1 [log p B (s i+1 |s i ) \u2212 log p L (s i+1 )]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(3) where p L (s) is the language model probability for generating sentence s. We train an LSTM language model, which can be thought of as a SEQ2SEQ model with an empty source. A closer look at Eq. 3 shows that it is of the same form as the mutual information between s i+1 and s i , namely", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "log[p(s i+1 , s i )/p(s i+1 )p(s i )].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
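To make the scoring functions in Eqs. 1-3 concrete, the sketch below computes them from length-normalized log-probabilities. The arguments stand in for totals that would be returned by the trained forward SEQ2SEQ model, backward SEQ2SEQ model, and LSTM language model; the function names are ours, not the paper's.

```python
# Sketch of the uni (Eq. 1), bi (Eq. 2), and mutual-information (Eq. 3) scores.

def uni_score(forward_lp: float, n_next: int) -> float:
    # Eq. 1: log p_F(s_{i+1} | s_i), scaled by the length of s_{i+1}.
    return forward_lp / n_next

def bi_score(forward_lp: float, backward_lp: float, n_prev: int, n_next: int) -> float:
    # Eq. 2: also add the length-normalized backward term log p_B(s_i | s_{i+1}).
    return backward_lp / n_prev + forward_lp / n_next

def mmi_score(forward_lp: float, backward_lp: float,
              lm_prev: float, lm_next: float, n_prev: int, n_next: int) -> float:
    # Eq. 3: subtracting the language-model terms turns the score into a
    # length-normalized estimate of the mutual information between s_i and s_{i+1}.
    return (backward_lp - lm_prev) / n_prev + (forward_lp - lm_next) / n_next
```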
|
{ |
|
"text": "Generation The scoring functions in Eqs. 1, 2, and 3 are discriminative, generating coherence scores for an already-given chunk of text. Eqs. 2 and 3 can not be directly used for generation purposes, since they requires the completion of s i+1 before the score can be computed. A normal strategy is to generate a big N-best list using Eq. 1 and then rerank the N-best list using Eq. 2 or 3 (Li et al., 2015a) . The N-best list can be generated using standard beam search, or other algorithmic variations that promote diversity, coherence, etc. (Shao et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 408, |
|
"text": "(Li et al., 2015a)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 563, |
|
"text": "(Shao et al., 2017)", |
|
"ref_id": "BIBREF51" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
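The generate-then-rerank strategy just described can be sketched as follows; beam_search, score_backward, and score_lm are hypothetical callables standing in for the trained models, and scoring candidates only against the immediately preceding sentence is a simplification.

```python
# Sketch of N-best reranking for coherent generation (assumed helper names).

def generate_next_sentence(context, beam_search, score_backward, score_lm, n_best=50):
    # context: list of token lists, most recent sentence last.
    # 1) Decode an N-best list of candidate next sentences with the forward model.
    candidates = beam_search(context, n_best=n_best)   # [(tokens, forward_logprob), ...]

    # 2) Rerank with the Eq. 3 (mutual-information) objective.
    def mmi(cand):
        tokens, fwd_lp = cand
        n_ctx, n_cand = len(context[-1]), len(tokens)
        bwd_lp = score_backward(tokens, context[-1])   # log p_B(previous sentence | candidate)
        return ((bwd_lp - score_lm(context[-1])) / n_ctx
                + (fwd_lp - score_lm(tokens)) / n_cand)

    return max(candidates, key=mmi)[0]
```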
|
{ |
|
"text": "Weakness (1) The SEQ2SEQ model generates words sequentially based on an evolving hidden vector, which is updated by combining the current word representation with previously built hidden vectors. The generation process is thus not exposed to more global features of the discourse like topics. As the hidden vector evolves, the influence from contexts gradually diminishes, with language models quickly dominating. (2) By predicting a sentence conditioning only on its left or right neighbor, the model lacks the ability to handle the longerterm discourse dependencies across the sentences of a text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To tackle these two issues, we need a model that is able to constantly remind the decoder about the global meaning that it should convey at each wordgeneration step, a global meaning which can capture the state of the discourse across the sentences of a text. We propose two models of this global meaning, a pipelined approach based on HMMbased topic models (Blei et al., 2003; Gruber et al., 2007) , and an end-to-end generative model with variational latent variables.", |
|
"cite_spans": [ |
|
{ |
|
"start": 358, |
|
"end": 377, |
|
"text": "(Blei et al., 2003;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 398, |
|
"text": "Gruber et al., 2007)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model 1: the SEQ2SEQ Model and its Variations", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "(HMM-LDA-GM)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In Markov topic models the topic depends on the previous topics in context (Ritter et al., 2010; Paul and Girju, 2010; Wang et al., 2011; Gruber et al., 2007; Paul, 2012) . The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. Our first model is a pipelined one (the HMM-LDA-GM in Fig. 1b) , in which an HMM-LDA model provides the SEQ2SEQ model with global information for token generation, with two components:", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 96, |
|
"text": "(Ritter et al., 2010;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 118, |
|
"text": "Paul and Girju, 2010;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 137, |
|
"text": "Wang et al., 2011;", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 158, |
|
"text": "Gruber et al., 2007;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 170, |
|
"text": "Paul, 2012)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 410, |
|
"text": "Fig. 1b)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(1) Running HMM-LDA: we first run a sentence-level HMM-LDA similar to Gruber et al. (2007) . Our implementation forces all words in a sentence to be generated from the same topic, and this topic is sampled from a distribution based on the topic from previous sentence. Let t n denote the distribution of topics for the current sentence, where t n \u2208 R 1\u00d7T . We also associate each LDA topic with a K dimensional vector, representing the semantics embedded in this topic. The topicrepresentation matrix is denoted by V \u2208 R T \u00d7K , where T is the pre-specified number of topics in LDA. V is learned in the word predicting process when training encoder-decoder models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 90, |
|
"text": "Gruber et al. (2007)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(2) Training encoder-decoder models: For the current sentence s n , given its topic distribution t n , we first compute the topic representation z n for s n using the weighted sum of LDA topic vectors:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z n = t n \u00d7 V", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "z n can be thought of as a discourse state vector that stores the information the current sentence needs to convey in the discourse, and is used to guide every step of word generation in s n . We run the encoderdecoder model, which subsequently predicts tokens in s n given s n\u22121 . This process is the same as the vanilla version of SEQ2SEQ models, the only difference being that z n is incorporated into each step of decoding for hidden vector updates:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
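The sketch below shows one plausible way to realize Eqs. 4-5: the topic vector z_n is a weighted sum of learned topic embeddings and is concatenated to the word embedding at every decoder step. The layer sizes, the concatenation scheme, and the class name are assumptions for illustration, not details taken from the paper.

```python
# Sketch of one HMM-LDA-GM decoder step: z_n = t_n x V (Eq. 4) is injected into
# every word-prediction step (Eq. 5).
import torch
import torch.nn as nn

class TopicConditionedDecoderStep(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300, n_topics=100):
        super().__init__()
        self.topic_emb = nn.Parameter(torch.randn(n_topics, hid_dim))  # V in Eq. 4
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim + hid_dim, hid_dim)  # word embedding ++ z_n
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_word, state, topic_dist):
        # topic_dist: (batch, n_topics), the sentence-level distribution t_n from HMM-LDA.
        z_n = topic_dist @ self.topic_emb                    # Eq. 4: z_n = t_n V
        h, c = self.cell(torch.cat([self.embed(prev_word), z_n], dim=-1), state)
        return self.out(h), (h, c)                           # logits for the next word
```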
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(s n |z n , s n\u22121 ) = M t=1 p(w t |h t\u22121 , z n )", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "V is updated along with parameters in the encoderdecoder model. z n influences each time step of decoding, and thus addresses the problem that vanilla SEQ2SEQ models gradually lose global information as the hidden representations evolve. z n is computed based on the topic distribution t n , which is obtained from the HMM-LDA model, thus modeling the global Markov discourse dependency between sentences of the text. 3 The model can be adapted to the bi-directional setting, in which we separately train two models to handle the forward probability log p(t n |s n\u22121 , ...) and the backward one log p(t n |s n+1 ). The bi-directional (bi) strategy described in Eq. 3 can also be incorporated to remove the influence of language models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 418, |
|
"end": 419, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Weakness Topic models (either vanilla or HMM versions) focus on word co-occurrences at the document-level and are thus very lexicon-based. Furthermore, given the diversity of topics in a dataset like Wikipedia but the small number of topic clusters, the LDA model usually produces very coarse-grained topics (politics, sports, history, etc.), assigning very similar topic distributions to consecutive sentences. These topics thus capture topical coherence but are too coarse-grained to capture all the more fine-grained aspects of discourse coherence relationships.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LDA based Generative Models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Models (VLV-GM)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We therefore propose instead to train an end-to-end system, in which the meaning transitions between sentences can be naturally learned from the data. Inspired by recent work on generating sentences from a latent space (Serban et al., 2016b; Bowman et al., 2015; Chung et al., 2015) , we propose the VSV-GM model in Fig. 1c . Each sentence s n is again associated with a hidden vector representation z n \u2208 R K which stores the global information that the current sentence needs to talk about, but instead of obtaining z n from an upstream model like LDA, z n is learned from the training data. z n is a stochastic latent variable conditioned on all previous sentences and z n\u22121 :", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 241, |
|
"text": "(Serban et al., 2016b;", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 262, |
|
"text": "Bowman et al., 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 282, |
|
"text": "Chung et al., 2015)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 323, |
|
"text": "Fig. 1c", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(z n |z n\u22121 , s n\u22121 , s n\u22122 , ...) = N (\u00b5 true zn , \u03a3 true zn ) \u00b5 true zn = f (z n\u22121 , s n\u22121 , s n\u22122 , ...) \u03a3 true zn = g(z n\u22121 , s n\u22121 , s n\u22122 , ...)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where N (\u00b5, \u03a3) is a multivariate normal distribution with mean \u00b5 \u2208 R K and covariance matrix \u03a3 \u2208 R K\u00d7K . \u03a3 is a diagonal matrix. As can be seen, the global information z n for the current sentence depends on the information z n\u22121 for its previous sentence as well as the text of the context sentences. This forms a Markov chain across all sentences. f and g are neural network models that take previous sentences and z n\u22121 , and map them to a real-valued representation using hierarchical LSTMs (Li et al., 2015b) 4 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 513, |
|
"text": "(Li et al., 2015b)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 515, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
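A minimal sketch of the prior in Eq. 6, assuming f and g share a hidden layer and that context_vec is the hierarchical-LSTM summary of the preceding sentences; all names and sizes here are illustrative.

```python
# Sketch of the Gaussian prior over z_n (Eq. 6); the covariance is diagonal and
# parameterized through its log, a common implementation choice (an assumption here).
import torch
import torch.nn as nn

class LatentPrior(nn.Module):
    def __init__(self, latent_dim=300, ctx_dim=300, hid_dim=300):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(latent_dim + ctx_dim, hid_dim), nn.Tanh())
        self.mu = nn.Linear(hid_dim, latent_dim)       # f(.) -> mean
        self.logvar = nn.Linear(hid_dim, latent_dim)   # g(.) -> diagonal covariance (log)

    def forward(self, z_prev, context_vec):
        h = self.shared(torch.cat([z_prev, context_vec], dim=-1))
        return self.mu(h), self.logvar(h)              # parameters of N(mu, diag(exp(logvar)))
```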
|
{ |
|
"text": "Each word w nt from s n is predicted using the concatenation of the representation previously build by the LSTMs (the same vector used in word prediction in vanilla SEQ2SEQ models) and z n , as shown in Eq.5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We are interested in the posterior distribution p(z n |s 1 , s 2 , ..., s n\u22121 ), namely, the information that the current sentence needs to convey given the preceding ones. Unfortunately, a highly non-linear mapping from z n to tokens in s n results in in-tractable inference of the posterior. A common solution is to use variational inference to learn another distribution, denoted by q(z n |s 1 , s 2 , ..., s N ), to approximate the true posterior p(z n |s 1 , s 2 , ..., s n\u22121 ). The model's latent variables are obtained by maximizing the variational lower-bound of observing the dataset:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "log p(s 1 , .., s N ) \u2264 N t=1 \u2212D KL (q(z n |s n , s n\u22121 , ...)||p(z n |s n\u22121 , s n\u22122 , ...)) + E q(zn|sn,s n\u22121 ,...) log p(s n |z n , s n\u22121 , s n\u22122 , ...)", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "This objective to optimize consists of two parts; the first is the KL divergence between the approximate distribution q and the true posterior p(s n |z n , s n\u22121 , s n\u22122 , ...), in which we want to approximate the true posterior using q. The second part", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "E q(zn|sn,s n\u22121 ,...) log p(s n |z n , s n\u22121 , s n\u22122 , ...),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "predicts tokens in s n in the same way as in SEQ2SEQ models with the difference that it considers the global information z n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The approximate posterior distribution", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "q(z n |s n , s n\u22121 , ...) takes a form similar to p(z n |s n\u22121 , s n\u22122 , ...): q(z n |s n , s n\u22121 , ...) = N (\u00b5 approx zn , \u03a3 approx zn ) \u00b5 approx zn = f q (z n\u22121 , s n , s n\u22121 , ...) \u03a3 approx zn = g q (z n\u22121 , s n , s n\u22121 , ...) (8)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "f q and g q are of similar structures to f and g, using a hierarchical neural network model to map context tokens to vector representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Learning and Testing At training time, the approximate posterior q(z n |z n\u22121 , s n , s n\u22121 , ...), the true distribution p(z n |z n\u22121 , s n\u22121 , s n\u22122 , ...), and the generative probability p(s n |z n , s n\u22121 , s n\u22122 , ...) are trained jointly by maximizing the variational lower bound with respect to their parameters: a sample z n is first drawn from the posterior dis-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "tribution q, namely N (\u00b5 approx zn , \u03a3 approx zn", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "). This sample is used to approximate the expectation E q log p(s n |z n , s n\u22121 , s n\u22122 , ...). Using z n , we can update the encoder-decoder model using SGD in a way similar to the standard SEQ2SEQ model, the only difference being that the current token to predict not only depends on the LSTM output h t , but also z n . Given the sampled z n , the KL-divergence can be readily computed, and we update the model using standard gradient decent (details shown in the Appendix).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
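The training step just described is usually implemented with the reparameterization trick and the closed-form KL between two diagonal Gaussians; the sketch below shows that standard recipe under our own naming, not the authors' exact code.

```python
# Sketch of one ELBO term (Eq. 7): a reparameterized sample from q plus the
# closed-form KL between diagonal Gaussians q = N(mu_q, exp(logvar_q)) and
# p = N(mu_p, exp(logvar_p)).
import torch

def sample_z(mu_q, logvar_q):
    # z = mu + sigma * eps keeps the sample differentiable w.r.t. mu and logvar.
    eps = torch.randn_like(mu_q)
    return mu_q + torch.exp(0.5 * logvar_q) * eps

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) for diagonal Gaussians, summed over the latent dimensions.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (torch.exp(logvar_q) + (mu_q - mu_p) ** 2) / torch.exp(logvar_p)
        - 1.0,
        dim=-1,
    )

# Per-sentence ELBO term: E_q[log p(s_n | z_n, context)] - KL(q || p), with the
# expectation approximated by the single sample from sample_z.
```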
|
{ |
|
"text": "The proposed VLV-GM model can be adapted to the bi-directional setting and the bi setting similarly to the way LDA-based models are adapted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The proposed model is closely related to many recent attempts in training variational autoencoders (VAE) (Kingma and Welling, 2013; Rezende et al., 2014), variational or latent-variable recurrent nets (Bowman et al., 2015; Chung et al., 2015; Ji et al., 2016; Bayer and Osendorfer, 2014) , hierarchical latent variable encoder-decoder models (Serban et al., 2016b,a).", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 222, |
|
"text": "(Bowman et al., 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 242, |
|
"text": "Chung et al., 2015;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 259, |
|
"text": "Ji et al., 2016;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 287, |
|
"text": "Bayer and Osendorfer, 2014)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Variational Latent Variable Generative", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this section, we describe experimental results. We first evaluate the proposed models on discriminative tasks such as sentence-pair ordering and full paragraph ordering reconstruction. Then we look at the task of coherent text generation. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Dataset We first evaluate the proposed algorithms on the task of predicting the correct ordering of pairs of sentences predicated on the assumption that an article is always more coherent than a random permutation of its sentences (Barzilay and Lapata, 2008) . A detailed description of this commonly used dataset and training/testing are found in the Appendix. We report the performance of the following baselines widely used in the coherence literature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 258, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Ordering, Domain-specific Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(1) Entity Grid Model: The grid model presented in Barzilay and Lapata (2008) . Results are directly taken from Barzilay and Lapata's (2008) paper. We also consider variations of entity grid models, such as Louis and Nenkova (2012) which models the cluster transition probability and the Graph Based Approach which uses a graph to represent the entity transitions needed for local coherence computation (Guinaudeau and Strube, 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 77, |
|
"text": "Barzilay and Lapata (2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 140, |
|
"text": "Barzilay and Lapata's (2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 231, |
|
"text": "Louis and Nenkova (2012)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 432, |
|
"text": "(Guinaudeau and Strube, 2013)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Ordering, Domain-specific Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(2) Li and Hovy (2014): A recursive neural model computes sentence representations based on parse trees. Negative sampling is used to construct negative incoherent examples. Results are from their papers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Ordering, Domain-specific Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "(3) Foltz et al. (1998) computes the semantic relatedness of two text units as the cosine similarity between their LSA vectors. The coherence of a discourse is the average of the cosine of adjacent sentences. We used this intuition, but with more modern embedding models: (1) 300-dimensional Glove word vectors (Pennington et al., 2014) , embeddings for a sentence computed by averaging the embeddings of its words (2) Sentence representations obtained from LDA (Blei et al., 2003) with 300 topics, trained on the Wikipedia dataset. Results are reported in Table 2 . The extended version of the discriminative model described in this work significantly outperforms the parse-tree based recursive models presented in Li and Hovy (2014) as well as all non-neural baselines. It achieves almost perfect accuracy on the earthquake dataset and 93% on the accident dataset, marking a significant advancement in the benchmark. Generative models (both vanilla SEQ2SEQ and the proposed variational model) do not perform competitively on this dataset. We conjecture that this is due to the small size of the dataset, leading the generative model to overfit.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 336, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 458, |
|
"end": 481, |
|
"text": "LDA (Blei et al., 2003)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 716, |
|
"end": 734, |
|
"text": "Li and Hovy (2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 557, |
|
"end": 564, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Ordering, Domain-specific Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Since the dataset presented in Barzilay and Lapata (2008) is quite domain-specific, we propose testing coherence with a much larger, open-domain dataset: Wikipedia. We created a test set by randomly selecting 984 paragraphs from Wikipedia dump 2014, each paragraph consisting of at least 16 sentences. The training set is 30 million sentences not overlapping with the test set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 57, |
|
"text": "Barzilay and Lapata (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating Ordering on Open-domain", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We adopt the same strategy as in Barzilay and Lapata (2008) , in which we generate pairs of sentence permutations from the original Wikipedia paragraphs. We follow the protocols described in the subsection and each pair whose original paragraph's score is higher than its permutation is treated as being correctly classified, else incorrectly classified. Models are evaluated using accuracy. We implement the Entity Grid Model (Barzilay and Lapata, 2008) using the Wikipedia training set as a baseline, the detail of which is presented in the Appendix. Other baselines consist of the Glove and LDA updates of the lexical coherence baselines (Foltz et al., 1998) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 59, |
|
"text": "Barzilay and Lapata (2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 427, |
|
"end": 454, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 641, |
|
"end": 661, |
|
"text": "(Foltz et al., 1998)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Binary Permutation Classification", |
|
"sec_num": "4.2.1" |
|
}, |
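The evaluation protocol reduces to a few lines of code; coherence_score below is a placeholder for any of the models above.

```python
# Sketch of the pairwise ordering evaluation: accuracy is the fraction of
# (original, permutation) pairs in which the original paragraph scores higher.

def pairwise_accuracy(pairs, coherence_score):
    pairs = list(pairs)  # (original_paragraph, permuted_paragraph) tuples
    correct = sum(coherence_score(orig) > coherence_score(perm) for orig, perm in pairs)
    return correct / len(pairs)
```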
|
{ |
|
"text": "Results Table 2 presents results on the binary classification task. Contrary to the findings on the domain specific dataset in the previous subsection, the discriminative model does not yield compelling results, performing only slightly better than the entity grid model. We believe the poor performance is due to the sentence-level negative sampling used by the discriminative model. Due to the huge semantic space in the open-domain setting, the sampled instances can only cover a tiny proportion of the possible negative candidates, and therefore don't cover the space of possible meanings. By contrast the dataset in Barzilay and Lapata (2008) is very domain-specific, and the semantic space is thus relatively small. By treating all other sentences in the document as negative, the discriminative strategy's negative samples form a much larger proportion of the semantic space, leading to good performance. Generative models perform significantly better than all other baselines. Compared with the dataset in Barzilay and Lapata (2008) , overfitting is not an issue here due to the great amount of training data. In line with our expectation, the MMI model outperforms the bidirectional model, which in turn outperforms the unidirectional model across all three generative model settings. We thus only report MMI results for experiments below. The VLV-GM model outperforms that the LDA-HMM-GM model, which is slightly better than the vanila SEQ2SEQ models. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 621, |
|
"end": 647, |
|
"text": "Barzilay and Lapata (2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1014, |
|
"end": 1040, |
|
"text": "Barzilay and Lapata (2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Binary Permutation Classification", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The accuracy of our models on the binary task of detecting the original sentence ordering is very high, on both the prior small task and our large open-domain version. We therefore believe it is time for the community to move to a more difficult task for measuring coherence. We suggest the task of reconstructing an original paragraph from a bag of constituent sentences, which has been previously used in coherence evaluation (Lapata, 2003) . More formally, given a set of permuted sentences s 1 , s 2 , ..., s N (N the number of sentences in the original document), our goal is return the original (presumably most coherent) ordering of s.", |
|
"cite_spans": [ |
|
{ |
|
"start": 428, |
|
"end": 442, |
|
"text": "(Lapata, 2003)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paragraph Reconstruction", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Because the discriminative model calculates the coherence of a sentence given the known previous and following sentences, it cannot be applied to this task since we don't know the surrounding context. Hence, we only use the generative model. The first sentence of a paragraph is given: for each step, we compute the coherence score of placing each remaining candidate sentence to the right of the partially constructed document. We use beam search with beam size 10. We use the Entity Grid model as a baseline for both the settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paragraph Reconstruction", |
|
"sec_num": "4.2.2" |
|
}, |
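A sketch of the reconstruction search: starting from the given first sentence, partial orderings are extended with each unused sentence and the ten best-scoring prefixes are kept. pair_score is a placeholder for the generative coherence score (e.g., the MMI score of Eq. 3); scoring a candidate only against the last placed sentence is a simplification of conditioning on the full partial document.

```python
# Sketch of beam-search paragraph reconstruction from a bag of sentences.

def reconstruct(first_sentence, remaining, pair_score, beam_size=10):
    # Each beam is (ordering so far, indices of unused sentences, cumulative score).
    beams = [([first_sentence], set(range(len(remaining))), 0.0)]
    for _ in range(len(remaining)):
        expanded = []
        for order, unused, score in beams:
            for idx in unused:
                cand = remaining[idx]
                expanded.append((order + [cand], unused - {idx},
                                 score + pair_score(order[-1], cand)))
        # Keep only the beam_size highest-scoring partial orderings.
        beams = sorted(expanded, key=lambda b: b[2], reverse=True)[:beam_size]
    return beams[0][0]  # best complete ordering
```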
|
{ |
|
"text": "Evaluating the absolute positions of sentences would be too harsh, penalizing orderings that maintain relative position between sentences through which local coherence can be manifested. We therefore use Kendall's \u03c4 (Lapata, 2003 (Lapata, , 2006 , a metric of rank correlation for evaluation. See the Appendix for details of Kendall's \u03c4 computation. We observe a pattern similar to the results on the binary classification task, where the VLV-GM model performs the best.", |
|
"cite_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 229, |
|
"text": "(Lapata, 2003", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 245, |
|
"text": "(Lapata, , 2006", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paragraph Reconstruction", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Both the tasks above are discriminative ones. We also want to evaluate different models' ability to generate coherent text chunks. The experiment is set up as follow: each encoder-decoder model is first given a set of context sentences (3 sentences).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The model then generates a succeeding sentence using beam-search given the contexts. For the unidirectional setting, we directly take the most probable sequence and for the bi-directional and MMI, we rerank the N-best list using the backward probability and language model probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We conduct experiments on multi-sentence generation, in which we repeat the generative process described above for N times, where N =1,2,3. At the end of each turn, the context is updated by adding in the newly generated sequence, and this sequence is used as the source input to the encoderdecoder model for next sequence generation. For example, when N is set to 2, given the three context sentences context-a, context-b and context-c, we first generate sen-d given the three context sentences and then generate sen-e given the sen-d, context-a, context-b and context-c.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For evaluation, standard word overlap metrics such as BLEU or ROUGE are not suited for our task, and we use adversarial evaluation Bowman et al. (2015); Anjuli and Vinyals (2016) . In adversarial evaluation, we train a binary discriminant function to classify a sequence as machine generated or human generated, in an attempt to evaluate the model's sentence generation capability. The evaluator takes as input the concatenation of the contexts and the generated sentences (i.e., context-a, context-b and context-c, sen-d , sen-e in the example described above), 5 and outputs a scalar, indicating the probability of the current text chunk being human-generated. Training/dev/test sets are held-out sets from the one on which generative models are trained. They respectively contain 128,000/12,800/12,800 instances. Since discriminative models cannot generate sentences, and thus cannot be used for adversarial evaluation, they are skipped in this section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 178, |
|
"text": "Anjuli and Vinyals (2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We report Adversarial Success (AdverSuc for short), which is the fraction of instances in which a model is capable of fooling the evaluator. Adver- Figure 2 : An overview of training the adversarial evaluator using a hierarchical neural model. Green denotes input contexts. Red denotes a sentence from human-generated texts, treated as a positive example. Purple denotes a sentence from machine-decoded texts, treated as a negative example.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 156, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Suc is the difference between 1 and the accuracy achieved by the evaluator. Higher values of Ad-verSuc for a dialogue generation model are better. AdverSuc-N denotes the adversarial accuracy value on machine-generated texts with N turns. Table 4 show AdverSuc numbers for different models. As can be seen, the latent variable model VLV-GM is able to generate chunk of texts that are most indistinguishable from coherent texts from humans. This is due to its ability to handle the dependency between neighboring sentences. Performance declines as the number of turns increases due to the accumulation of errors and current models' inability to model long-term sentence-level dependency. All models perform poorly on the adver-3 evaluation metric, with the best adversarial success value being 0.081 (the trained evaluator is able to distinguish between human-generated and machine generated dialogues with greater than 90 percent accuracy for all models).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 245, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Adversarial evaluation on Text Generation Quality", |
|
"sec_num": "4.3" |
|
}, |
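AdverSuc itself is straightforward to compute once the evaluator's decisions are available; a tiny sketch with assumed inputs:

```python
# Sketch of the AdverSuc metric: 1 minus the adversarial evaluator's accuracy,
# so higher values mean the generator fools the evaluator more often.

def adver_suc(evaluator_predictions, labels):
    # evaluator_predictions / labels: sequences of 0 (machine) / 1 (human) decisions.
    correct = sum(p == y for p, y in zip(evaluator_predictions, labels))
    return 1.0 - correct / len(labels)
```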
|
{ |
|
"text": "With the aim of guiding future investigations, we also briefly explore our model qualitatively, examining the coherence scores assigned to some artificial miniature discourses that exhibit various kinds of coherence. The examples suggest that the model handles lexical coherence, correctly favoring the 1st over the 2nd, and the 3rd over the 4th examples. Note that the coherence score for the final example is negative, which means conditioning on the first sentence actually decreases the likelihood of generating the second one.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Washington was unanimously elected president in the first two national elections. He oversaw the creation of a strong, wellfinanced national government. 1.48 Washington oversaw the creation of a strong, well-financed national government. He was unanimously elected president in the first two national elections. 0.72", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 2: Temporal Order", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Bret enjoys video games; therefore, he sometimes is late to appointments. 0.69 Bret sometimes is late to appointments; therefore, he enjoys video games. -0.07", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 3: Causal Relationship", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Cases 2 and 3 suggest the model may, at least in these simple cases, be capable of addressing the much more complex task of dealing with temporal and causal relationships. Presumably this is because the model is exposed in training to the general preference of natural text for temporal order, and even for the more subtle causal links.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 3: Causal Relationship", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Mary ate some apples. She likes apples. 3.06 She ate some apples. Mary likes apples. 2.41", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 4: Centering/Referential Coherence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The model seems to deal with simple cases of referential coherence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 4: Centering/Referential Coherence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Example3: 2.40 John went to his favorite music store to buy a piano. He had frequented the store for many years. He was excited that he could finally buy a piano. He arrived just as the store was closing for the day. Example4: 1.62 John went to his favorite music store to buy a piano. It was a store John had frequented for many years He was excited that he could finally buy a piano.. It was closing just as John arrived.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 4: Centering/Referential Coherence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In these final examples from Miltsakaki and Kukich (2004) , the model successfully captures the fact that the second text is less coherent due to rough shifts. This suggests that the discourse embedding space may be able to capture a representation of entity focus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 57, |
|
"text": "Miltsakaki and Kukich (2004)", |
|
"ref_id": "BIBREF42" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 4: Centering/Referential Coherence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Of course all of these these qualitative evaluations are only suggestive, and a deeper understanding of what the discourse embedding space is capturing will likely require more sophisticated visualizations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Case 4: Centering/Referential Coherence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We investigate the problem of open-domain discourse coherence, training discriminative models that treating natural texts as coherent and permutations as non-coherent, and Markov generative models that can predict sentences given their neighbors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our work shows that the traditional evaluation metric (ordering pairs of sentences in small domains) is completely solvable by our discriminative models, and we therefore suggest the community move to the harder task of open-domain full-paragraph sentence ordering.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The proposed models also offer an initial step in generating coherent texts given contexts, which has the potential to benefit a wide range of generation tasks in NLP. Our latent variable neural models, by offering a new way to learn latent discourse-level features of a text, also suggest new directions in discourse representation that may bring benefits to any discourse-aware NLP task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Acknowledgements The authors thank Will Monroe, Sida Wang, Kelvin Guu and the other members of the Stanford NLP Group for helpful discussions and comments. Jiwei Li is supported by a Facebook Fellowship, which we gratefully acknowledge. This work is also partially supported by the NSF under award IIS-1514268, and the DARPA Communicating with Computers (CwC) program under ARO prime contract no. W911NF-15-1-0462. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, the NSF, or Facebook.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Details for the domain specific dataset (Barzilay and Lapata, 2008) The corpus consists of 200 articles each from two domains: NTSB airplane accident reports (V=4758, 10.6 sentences/document) and AP earthquake reports (V=3287, 11.5 sentences/document), split into training and testing. For each document, pairs of permutations are generated 6 . Each pair contains the original document order and a random permutation of the sentences from the same document.", |
|
"cite_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 67, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Training/Testing details for models on the domain specific dataset We use reduced versions of both generative and discriminative models to allow fair comparison with baselines. For the discriminative model, we generate noise negative examples from random replacements in the training set, with the only difference that random replacements only come from the same document. We use 300 dimensional embeddings borrowed from GLOVE (Pennington et al., 2014) to initialize word embeddings. Word embeddings are kept fixed during training and we update LSTM parameters using AdaGrad (Duchi et al., 2011) . For the generative model, due to the small size of the dataset, we train a one layer SEQ2SEQ model with word dimensionality and number of hidden neurons set to 100. The model is trained using SGD with AdaGrad (Zeiler, 2012).", |
|
"cite_spans": [ |
|
{ |
|
"start": 427, |
|
"end": 452, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 595, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
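The within-document replacement scheme can be sketched as follows; this is a simplified illustration under the assumption that sentences are plain strings, not the authors' actual training code.

```python
import random

def corrupt_clique(clique, document_sentences, rng=None):
    """Build a negative (incoherent) example by replacing one sentence of a
    coherent clique with a random sentence drawn from the same document."""
    rng = rng or random.Random(0)
    candidates = [s for s in document_sentences if s not in clique]
    if not candidates:
        return None  # the document is too short to corrupt this clique
    corrupted = list(clique)
    corrupted[rng.randrange(len(corrupted))] = rng.choice(candidates)
    return corrupted
```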
|
{ |
|
"text": "The task requires a coherence score for the whole document, which is comprised of multiple cliques. We adopt the strategy described in Li and Hovy (2014) by breaking the document into a series of cliques which is comprised of a sequence of consecutive sentences. The document-level coherence score is attained by averaging its constituent cliques. We say a document is more coherent if it achieves a higher average score within its constituent cliques.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 153, |
|
"text": "Li and Hovy (2014)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
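A minimal sketch of this clique-averaging strategy is shown below; the window of 3 consecutive sentences and the black-box clique_score function (standing in for the trained model) are assumptions for illustration.

```python
def document_coherence(sentences, clique_score, window=3):
    """Average a trained model's coherence scores over all cliques of
    `window` consecutive sentences; a higher average indicates a more
    coherent document."""
    cliques = [sentences[i:i + window]
               for i in range(len(sentences) - window + 1)]
    if not cliques:  # document shorter than a single clique
        cliques = [sentences]
    scores = [clique_score(clique) for clique in cliques]
    return sum(scores) / len(scores)
```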
|
{ |
|
"text": "Implementation of Entity Grid Model For each noun in a sentence, we extract its syntactic role (subject, object or other). We use a wikipedia dump parsed using the Fanse Parser (Tratz and Hovy, 2011) . Subjects and objects are extracted based on nsubj and dobj relations in the dependency trees. (Barzilay and Lapata, 2008) define two versions of the Entity Grid Model, one using full coreference and a simpler method using only exact-string coreference; Due to the difficulty of running full coreference resolution tens of millions of Wikipedia sentences, we follow other researchers in using Barzilay and Lapata's simpler method (Feng and Hirst, 2012; Burstein et al., 2010; Barzilay and Lapata, 2008 Kendall's \u03c4 Kendall's \u03c4 is computed based on the number of inversions in the rankings as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 199, |
|
"text": "(Tratz and Hovy, 2011)", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 296, |
|
"end": 323, |
|
"text": "(Barzilay and Lapata, 2008)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 653, |
|
"text": "Barzilay and Lapata's simpler method (Feng and Hirst, 2012;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 676, |
|
"text": "Burstein et al., 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 677, |
|
"end": 702, |
|
"text": "Barzilay and Lapata, 2008", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c4 = 1 \u2212 2# of inversions N \u00d7 (N \u2212 1)", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where N denotes the number of sentences in the original document and inversions denote the number of interchanges of consecutive elements needed to reconstruct the original document. Kendall's \u03c4 can be efficiently computed by counting the number of intersections of lines when aligning the original document and the generated document. We refer the readers to Lapata (2003) for more details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 360, |
|
"end": 373, |
|
"text": "Lapata (2003)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
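The sketch below is a direct transcription of Eq. (9) as printed, counting pairwise inversions between the generated order and the original order; note that the standard Kendall's τ normalizes the inversion count by N(N−1)/2 rather than N(N−1). The function name and input convention are illustrative only.

```python
def kendall_tau(predicted_order, n):
    """Eq. (9): tau = 1 - 2 * (# of inversions) / (N * (N - 1)).

    `predicted_order` lists the original sentence indices in the order the
    model produced them; an inversion is any pair appearing out of order.
    """
    inversions = sum(1
                     for i in range(n)
                     for j in range(i + 1, n)
                     if predicted_order[i] > predicted_order[j])
    return 1.0 - 2.0 * inversions / (n * (n - 1))

# A nearly correct ordering of a 4-sentence document has one inversion:
print(kendall_tau([0, 1, 3, 2], 4))  # 0.8333...
```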
|
{ |
|
"text": "Derivation for Variation Inference For simplicity, we use \u00b5 post and \u03a3 approx to denote \u00b5 approx (z n ) and \u03a3 approx (z n ), \u00b5 true and \u03a3 true to denote \u00b5 true (z n ) and \u03a3 true (z n ). The KLdivergence between the approximate distribution q(z n |z n\u22121 , s n , s n\u22121 , ...) and the true distribution p(z n |z n\u22121 , s n\u22121 , s n\u22122 , ...) in the variational inference is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "D KL (q(z n |z n\u22121 , s n , s n\u22121 , ...)||p(z n |z n\u22121 , s n\u22121 , s n\u22122 , ...) = 1 2 (tr(\u03a3 \u22121 true \u03a3 approx ) \u2212 k + log det\u03a3 true det\u03a3 approx +(\u00b5 true \u2212 \u00b5 approx ) \u22121 \u03a3 \u22121 true (\u00b5 true \u2212 \u00b5 approx ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "7 Our implementation of the Entity Grid Model is built upon public available code at https://github.com/ karins/CoherenceFramework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "where k denotes the dimensionality of the vector. Since z n has already been sampled and thus known, \u00b5 approx , \u03a3 approx , \u00b5 true , \u03a3 true and consequently Eq10 can be readily computed. The gradient with respect to \u00b5 approx , \u03a3 approx , \u00b5 true , \u03a3 true can be respectively computed, and the error is then backpropagated to the hierarchical neural models that are used to compute them. We refer the readers to Doersch (2016) for more details about how a general VAE model can be trained.", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 423, |
|
"text": "Doersch (2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
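For the diagonal covariances typically used in VAE-style models, the KL term above reduces to a simple closed form. The NumPy sketch below is a generic illustration of that closed form, not the authors' implementation.

```python
import numpy as np

def kl_diag_gaussians(mu_approx, var_approx, mu_true, var_true):
    """KL(q || p) between two k-dimensional Gaussians with diagonal
    covariances: 0.5 * ( tr(S_true^-1 S_approx) - k
                         + log(det S_true / det S_approx)
                         + (mu_true - mu_approx)^T S_true^-1 (mu_true - mu_approx) )."""
    k = mu_approx.shape[0]
    diff = mu_true - mu_approx
    return 0.5 * (np.sum(var_approx / var_true)                    # trace term
                  - k
                  + np.sum(np.log(var_true) - np.log(var_approx))  # log-det ratio
                  + np.sum(diff ** 2 / var_true))                  # Mahalanobis term

# Identical Gaussians give a KL of zero:
mu, var = np.zeros(3), np.ones(3)
print(kl_diag_gaussians(mu, var, mu, var))  # 0.0
```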
|
{ |
|
"text": "Our generate models offer a powerful way to represent the latent discourse structure in a complex embedding space, but one that is hard to visualize.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "To help understand what the model is doing, we examine some relevant examples, annotated with the (log-likelihood) coherence score from the MMI generative model, with the goal of seeing (qualitatively) the kinds of coherence the model seems to be representing. (The MMI can be viewed as the informational gain from conditioning the generation of the current sentence on its neighbors.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": "6" |
|
}, |
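A minimal sketch of such an MMI-style score is given below; the two log-probability functions are placeholders (assumptions) standing in for the trained conditional and unconditional generation models.

```python
def mmi_coherence(sentence, neighbors, log_p_conditional, log_p_marginal):
    """Mutual-information style coherence score: the gain in log-likelihood
    of `sentence` obtained by conditioning its generation on its neighbors.

    `log_p_conditional(sentence, neighbors)` and `log_p_marginal(sentence)`
    are placeholders for the trained generation and language models."""
    return log_p_conditional(sentence, neighbors) - log_p_marginal(sentence)
```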
|
{ |
|
"text": "Adding coreference(Elsner and Charniak, 2008), named entities(Eisner and Charniak, 2011), discourse relations(Lin et al., 2011) and entity graphs(Guinaudeau and Strube, 2013).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Li and Hovy's (2014) recursive neural model operates on parse trees, which does not support batched computation and is therefore hard to scale up.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This pipelined approach is closely related to recent work that incorporates LDA topic information into generation models in an attempt to leverage context information(Ghosh et al., 2016;Xing et al., 2016;Mei et al., 2016)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sentences are first mapped to vector representations using a LSTM model. Another level of LSTM at the sentence level then composes representations of the multiple sentences to a single vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The model uses a hierarchical neural structure that first maps each sentence to a vector representation, with another level of LSTM on top of the constituent sentences, producing a single vector to represent the entire chunk of texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
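A minimal PyTorch-style sketch of the hierarchical encoder described in the two notes above (a word-level LSTM over each sentence, followed by a sentence-level LSTM over the resulting sentence vectors); the dimensions, single layers, and final-hidden-state pooling are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level LSTM encodes each sentence into a vector; a sentence-level
    LSTM then composes the sentence vectors into one paragraph vector."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.word_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.sent_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, paragraph):
        # paragraph: LongTensor of word ids, shape (num_sentences, max_words)
        embedded = self.embed(paragraph)            # (S, W, E)
        _, (sent_h, _) = self.word_lstm(embedded)   # sent_h: (1, S, H)
        sent_vecs = sent_h.squeeze(0).unsqueeze(0)  # (1, S, H) as one "batch"
        _, (para_h, _) = self.sent_lstm(sent_vecs)  # para_h: (1, 1, H)
        return para_h.squeeze(0).squeeze(0)         # (H,) paragraph vector
```

Here each sentence vector is the final hidden state of the word-level LSTM, and the paragraph representation is the final hidden state of the sentence-level LSTM; the discriminative and generative models then consume this vector in their respective ways.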
|
{ |
|
"text": "Permutations downloaded from people.csail.mit. edu/regina/coherence/CLsubmission/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Computing locally coherent discourses", |
|
"authors": [ |
|
{ |
|
"first": "Ernst", |
|
"middle": [], |
|
"last": "Althaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikiforos", |
|
"middle": [], |
|
"last": "Karamanis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernst Althaus, Nikiforos Karamanis, and Alexander Koller. 2004. Computing locally coherent dis- courses. In Proceedings of ACL 2004.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Adversarial evaluation of dialogue models", |
|
"authors": [ |
|
{ |
|
"first": "Kannan", |
|
"middle": [], |
|
"last": "Anjuli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "NIPS 2016 Workshop on Adversarial Training", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kannan Anjuli and Oriol Vinyals. 2016. Adversarial evaluation of dialogue models. NIPS 2016 Work- shop on Adversarial Training .", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proc. of the International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of the Inter- national Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Modeling local coherence: An entity-based approach", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "1", |
|
"pages": "1--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Compu- tational Linguistics 34(1):1-34.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Catching the drift: Probabilistic content models, with applications to generation and summarization", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "113--120", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL. pages 113-120.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Sentence fusion for multidocument news summarization", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kathleen R Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational Linguistics", |
|
"volume": "31", |
|
"issue": "3", |
|
"pages": "297--328", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regina Barzilay and Kathleen R McKeown. 2005. Sen- tence fusion for multidocument news summariza- tion. Computational Linguistics 31(3):297-328.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Learning stochastic recurrent networks", |
|
"authors": [ |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Bayer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Osendorfer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1411.7610" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justin Bayer and Christian Osendorfer. 2014. Learn- ing stochastic recurrent networks. arXiv preprint arXiv:1411.7610 .", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automated analysis of free speech predicts psychosis onset in high-risk youths", |
|
"authors": [ |
|
{ |
|
"first": "Gillinder", |
|
"middle": [], |
|
"last": "Bedi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Facundo", |
|
"middle": [], |
|
"last": "Carrillo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillermo", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Cecchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Diego", |
|
"middle": [ |
|
"Fern\u00e1ndez" |
|
], |
|
"last": "Slezak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariano", |
|
"middle": [], |
|
"last": "Sigman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Nat\u00e1lia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sidarta", |
|
"middle": [], |
|
"last": "Mota", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Daniel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Javitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cheryl", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Copelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Corcoran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gillinder Bedi, Facundo Carrillo, Guillermo A Cec- chi, Diego Fern\u00e1ndez Slezak, Mariano Sigman, Nat\u00e1lia B Mota, Sidarta Ribeiro, Daniel C Javitt, Mauro Copelli, and Cheryl M Corcoran. 2015. Au- tomated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia 1.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael I Jordan", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine Learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine Learning research 3(Jan):993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Generating sentences from a continuous space", |
|
"authors": [ |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Samuel R Bowman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vilnis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Jozefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.06349" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, An- drew M Dai, Rafal Jozefowicz, and Samy Ben- gio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349 .", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Using entity-based features to model coherence in student essays", |
|
"authors": [ |
|
{ |
|
"first": "Jill", |
|
"middle": [], |
|
"last": "Burstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slava", |
|
"middle": [], |
|
"last": "Andreyev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human language technologies: The 2010 annual conference of the North American chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "681--684", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jill Burstein, Joel Tetreault, and Slava Andreyev. 2010. Using entity-based features to model coherence in student essays. In Human language technologies: The 2010 annual conference of the North American chapter of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 681-684.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A recurrent latent variable model for sequential data", |
|
"authors": [ |
|
{ |
|
"first": "Junyoung", |
|
"middle": [], |
|
"last": "Chung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Kastner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Dinh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kratarth", |
|
"middle": [], |
|
"last": "Goel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Aaron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2980--2988", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in neural information processing sys- tems. pages 2980-2988.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Tutorial on variational autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Doersch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.05908" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carl Doersch. 2016. Tutorial on variational autoen- coders. arXiv preprint arXiv:1606.05908 .", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Adaptive subgradient methods for online learning and stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2121--2159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Ma- chine Learning Research 12:2121-2159.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Extending the entity grid with entity-specific features", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "125--129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Eisner and Eugene Charniak. 2011. Extending the entity grid with entity-specific features. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies: short papers-Volume 2. Associ- ation for Computational Linguistics, pages 125-129.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A unified local and global model for discourse coherence", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Joseph", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Austerweil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "HLT-NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "436--443", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner, Joseph L Austerweil, and Eugene Char- niak. 2007. A unified local and global model for dis- course coherence. In HLT-NAACL. pages 436-443.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Coreference-inspired coherence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Micha", |
|
"middle": [], |
|
"last": "Elsner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings of the 46th Annual Meeting of the As- sociation for Computational Linguistics on Human Language Technologies: Short Papers. Association for Computational Linguistics, pages 41-44.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Quantifying incoherence in speech: An automated methodology and novel application to schizophrenia", |
|
"authors": [ |
|
{ |
|
"first": "Brita", |
|
"middle": [], |
|
"last": "Elvev\u00e5g", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Foltz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terry", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Daniel R Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Schizophrenia research", |
|
"volume": "93", |
|
"issue": "1", |
|
"pages": "304--316", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brita Elvev\u00e5g, Peter W Foltz, Daniel R Weinberger, and Terry E Goldberg. 2007. Quantifying incoher- ence in speech: An automated methodology and novel application to schizophrenia. Schizophrenia research 93(1):304-316.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Extending the entity-based coherence model with multiple ranks", |
|
"authors": [ |
|
{ |
|
"first": "Vanessa", |
|
"middle": [], |
|
"last": "Wei Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "315--324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vanessa Wei Feng and Graeme Hirst. 2012. Extend- ing the entity-based coherence model with multiple ranks. In Proceedings of the 13th Conference of the European Chapter of the Association for Compu- tational Linguistics. Association for Computational Linguistics, pages 315-324.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Discourse coherence and lsa. Handbook of latent semantic analysis pages", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Foltz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter W Foltz. 2007. Discourse coherence and lsa. Handbook of latent semantic analysis pages 167- 184.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The measurement of textual coherence with latent semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Foltz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Kintsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Landauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Discourse processes", |
|
"volume": "25", |
|
"issue": "2-3", |
|
"pages": "285--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter W Foltz, Walter Kintsch, and Thomas K Lan- dauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse processes 25(2-3):285-307.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Contextual lstm (clstm) models for large scale nlp tasks", |
|
"authors": [ |
|
{ |
|
"first": "Shalini", |
|
"middle": [], |
|
"last": "Ghosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Strope", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Heck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1602.06291" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shalini Ghosh, Oriol Vinyals, Brian Strope, Scott Roy, Tom Dean, and Larry Heck. 2016. Contextual lstm (clstm) models for large scale nlp tasks. arXiv preprint arXiv:1602.06291 .", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Hidden topic markov models", |
|
"authors": [ |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Gruber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yair", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Rosen-Zvi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "AISTATS", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "163--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amit Gruber, Yair Weiss, and Michal Rosen-Zvi. 2007. Hidden topic markov models. In AISTATS. vol- ume 2, pages 163-170.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Graphbased local coherence modeling", |
|
"authors": [ |
|
{ |
|
"first": "Camille", |
|
"middle": [], |
|
"last": "Guinaudeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Camille Guinaudeau and Michael Strube. 2013. Graph- based local coherence modeling. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics. pages 93-103.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Planning coherent multisentential text", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Eduard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Proceedings of the 26th annual meeting on Association for Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "163--169", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard H Hovy. 1988. Planning coherent multisenten- tial text. In Proceedings of the 26th annual meeting on Association for Computational Linguistics. As- sociation for Computational Linguistics, pages 163- 169.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "A latent variable recurrent neural network for discourse relation language models", |
|
"authors": [ |
|
{ |
|
"first": "Yangfeng", |
|
"middle": [], |
|
"last": "Ji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.01913" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse relation language models. arXiv preprint arXiv:1603.01913 .", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Autoencoding variational bayes", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1312.6114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Max Welling. 2013. Auto- encoding variational bayes. arXiv preprint arXiv:1312.6114 .", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Skip-thought vectors", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ruslan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3276--3284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Systems. pages 3276-3284.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Probabilistic text structuring: Experiments with sentence ordering", |
|
"authors": [ |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "545--552", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceed- ings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Associa- tion for Computational Linguistics, pages 545-552.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Automatic evaluation of information ordering: Kendall's tau", |
|
"authors": [ |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computational Linguistics", |
|
"volume": "32", |
|
"issue": "4", |
|
"pages": "471--484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mirella Lapata. 2006. Automatic evaluation of infor- mation ordering: Kendall's tau. Computational Lin- guistics 32(4):471-484.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Discourse relations and defeasible knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Lascarides", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Asher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th annual meeting on Association for Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Lascarides and Nicholas Asher. 1991. Discourse relations and defeasible knowledge. In Proceed- ings of the 29th annual meeting on Association for Computational Linguistics. Association for Compu- tational Linguistics, pages 55-62.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A diversity-promoting objective function for neural conversation models", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1510.03055" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015a. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055 .", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A model of coherence based on distributed sentence representation", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li and Eduard Hovy. 2014. A model of coher- ence based on distributed sentence representation. In Proceedings of Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "A hierarchical neural autoencoder for paragraphs and documents", |
|
"authors": [ |
|
{ |
|
"first": "Jiwei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.01057" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015b. A hierarchical neural autoencoder for paragraphs and documents. arXiv preprint arXiv:1506.01057 .", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Automatically evaluating text coherence using discourse relations", |
|
"authors": [ |
|
{ |
|
"first": "Ziheng", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Hwee Tou Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "997--1006", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using dis- course relations. In Proceedings of the 49th An- nual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 997-1006.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "A coherence model based on syntactic patterns", |
|
"authors": [ |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Louis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ani", |
|
"middle": [], |
|
"last": "Nenkova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1157--1168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annie Louis and Ani Nenkova. 2012. A coherence model based on syntactic patterns. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Com- putational Linguistics, pages 1157-1168.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Effective approaches to attentionbased neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. EMNLP .", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Rhetorical structure theory: Toward a functional theory of text organization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "William", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sandra", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Thompson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Text", |
|
"volume": "8", |
|
"issue": "3", |
|
"pages": "243--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text 8(3):243-281.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "From local to global coherence: A bottom-up approach to text planning", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "AAAI/IAAI. Citeseer", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "629--635", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Marcu. 1997. From local to global coher- ence: A bottom-up approach to text planning. In AAAI/IAAI. Citeseer, pages 629-635.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Cohmetrix: Capturing linguistic features of cohesion", |
|
"authors": [ |
|
{ |
|
"first": "Danielle", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Mcnamara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Louwerse", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Graesser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Discourse Processes", |
|
"volume": "47", |
|
"issue": "4", |
|
"pages": "292--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danielle S. McNamara, Max M. Louwerse, Philip M. McCarthy, and Arthur C. Graesser. 2010. Coh- metrix: Capturing linguistic features of cohesion. Discourse Processes 47(4):292-330.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Coherent dialogue with attention-based language models", |
|
"authors": [ |
|
{ |
|
"first": "Hongyuan", |
|
"middle": [], |
|
"last": "Mei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew R", |
|
"middle": [], |
|
"last": "Walter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.06997" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongyuan Mei, Mohit Bansal, and Matthew R Walter. 2016. Coherent dialogue with attention-based lan- guage models. arXiv preprint arXiv:1611.06997 .", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Evaluation of text coherence for electronic essay scoring systems", |
|
"authors": [ |
|
{ |
|
"first": "Eleni", |
|
"middle": [], |
|
"last": "Miltsakaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Kukich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Natural Language Engineering", |
|
"volume": "10", |
|
"issue": "01", |
|
"pages": "25--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring sys- tems. Natural Language Engineering 10(01):25-55.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Lexical cohesion computed by thesaural relations as an indicator of the structure of text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Computational Linguistics", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "21--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Morris and G. Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics 17(1):21-48.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "A twodimensional topic-aspect model for discovering multi-faceted topics", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Urbana", |
|
"volume": "51", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Paul and Roxana Girju. 2010. A two- dimensional topic-aspect model for discovering multi-faceted topics. Urbana 51(61801):36.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Mixed membership markov models for unsupervised conversation modeling", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "94--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael J Paul. 2012. Mixed membership markov models for unsupervised conversation modeling. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning. Associ- ation for Computational Linguistics, pages 94-104.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532- 1543.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Stochastic backpropagation and approximate inference in deep generative models", |
|
"authors": [ |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Jimenez Rezende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shakir", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daan", |
|
"middle": [], |
|
"last": "Wierstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1401.4082" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082 .", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Unsupervised modeling of twitter conversations", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "172--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Unsu- pervised modeling of twitter conversations. In Hu- man Language Technologies: The 2010 Annual Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics. Association for Computational Linguistics, pages 172-180.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Multiresolution recurrent neural networks: An application to dialogue response generation", |
|
"authors": [ |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Vlad Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerald", |
|
"middle": [], |
|
"last": "Tesauro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kartik", |
|
"middle": [], |
|
"last": "Talamadupula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bowen", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.00776" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula, Bowen Zhou, Yoshua Ben- gio, and Aaron Courville. 2016a. Multiresolu- tion recurrent neural networks: An application to dialogue response generation. arXiv preprint arXiv:1606.00776 .", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "A hierarchical latent variable encoder-decoder model for generating dialogues", |
|
"authors": [ |
|
{ |
|
"first": "Iulian", |
|
"middle": [], |
|
"last": "Vlad Serban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Lowe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Charlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joelle", |
|
"middle": [], |
|
"last": "Pineau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1605.06069" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016b. A hierarchical latent variable encoder-decoder model for generating dia- logues. arXiv preprint arXiv:1605.06069 .", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Generating long and diverse responses with neural conversation models", |
|
"authors": [ |
|
{ |
|
"first": "Louis", |
|
"middle": [], |
|
"last": "Shao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephan", |
|
"middle": [], |
|
"last": "Gouws", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denny", |
|
"middle": [], |
|
"last": "Britz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Goldie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Strope", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ray", |
|
"middle": [], |
|
"last": "Kurzweil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1701.03185" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Louis Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Gen- erating long and diverse responses with neural con- versation models. arXiv preprint arXiv:1701.03185 .", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3104--3112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc Le. 2014. Se- quence to sequence learning with neural networks. In Advances in neural information processing sys- tems. pages 3104-3112.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "A fast, accurate, non-projective, semantically-enriched parser", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Tratz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1257--1268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Tratz and Eduard Hovy. 2011. A fast, accurate, non-projective, semantically-enriched parser. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, pages 1257-1268.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Evaluating discoursebased answer extraction for why-question answering", |
|
"authors": [ |
|
{ |
|
"first": "Suzan", |
|
"middle": [], |
|
"last": "Verberne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lou", |
|
"middle": [], |
|
"last": "Boves", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nelleke", |
|
"middle": [], |
|
"last": "Oostdijk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter-Arno", |
|
"middle": [], |
|
"last": "Coppen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "735--736", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Suzan Verberne, Lou Boves, Nelleke Oostdijk, and Peter-Arno Coppen. 2007. Evaluating discourse- based answer extraction for why-question answer- ing. In Proceedings of the 30th annual international ACM SIGIR conference on Research and develop- ment in information retrieval. ACM, pages 735-736.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "Structural topic model for latent topical structure analysis", |
|
"authors": [ |
|
{ |
|
"first": "Hongning", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Duo", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1526--1535", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hongning Wang, Duo Zhang, and ChengXiang Zhai. 2011. Structural topic model for latent topical struc- ture analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 1526-1535.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Topic augmented neural response generation with a joint attention mechanism", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Xing", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yalou", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.08340" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic aug- mented neural response generation with a joint atten- tion mechanism. arXiv preprint arXiv:1606.08340 .", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "Adadelta: an adaptive learning rate method", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1212.5701" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D Zeiler. 2012. Adadelta: an adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701 .", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Overview of the proposed generative models for discourse coherence modeling." |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>Model</td><td colspan=\"3\">adver-1 adver-2 adver-3</td></tr><tr><td>VLV-GM (MMI)</td><td>0.174</td><td>0.120</td><td>0.054</td></tr><tr><td>LDA-HMM-GM (MMI)</td><td>0.130</td><td>0.104</td><td>0.043</td></tr><tr><td>SEQ2SEQ (MMI)</td><td>0.120</td><td>0.090</td><td>0.039</td></tr><tr><td>SEQ2SEQ (bi)</td><td>0.108</td><td>0.078</td><td>0.030</td></tr><tr><td>SEQ2SEQ (uni)</td><td>0.101</td><td>0.068</td><td>0.024</td></tr></table>", |
|
"text": "Performances of the proposed models on the opendomain paragraph reconstruction dataset.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Adversarial Success for different models.", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Case 1: Lexical Coherence Pinochet was arrested. His arrest was unexpected. 1.79 Pinochet was arrested. His death was unexpected. 0.84 Mary ate some apples. She likes apples. 2.03 Mary ate some apples. She likes pears. 0.27 Mary ate some apples. She likes Paris. -1.35", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |