{
"paper_id": "K18-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:10:30.632411Z"
},
"title": "Dual Latent Variable Model for Low-Resource Natural Language Generation in Dialogue Systems",
"authors": [
{
"first": "Van-Khanh",
"middle": [],
"last": "Tran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Advanced Institute of Science and Technology",
"location": {
"addrLine": "JAIST 1-1 Asahidai",
"postCode": "923-1292",
"settlement": "Nomi, Ishikawa",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Advanced Institute of Science and Technology",
"location": {
"addrLine": "JAIST 1-1 Asahidai",
"postCode": "923-1292",
"settlement": "Nomi, Ishikawa",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent deep learning models have shown improving results to natural language generation (NLG) irrespective of providing sufficient annotated data. However, a modest training data may harm such models' performance. Thus, how to build a generator that can utilize as much of knowledge from a low-resource setting data is a crucial issue in NLG. This paper presents a variational neural-based generation model to tackle the NLG problem of having limited labeled dataset, in which we integrate a variational inference into an encoder-decoder generator and introduce a novel auxiliary autoencoding with an effective training procedure. Experiments showed that the proposed methods not only outperform the previous models when having sufficient training dataset but also show strong ability to work acceptably well when the training data is scarce.",
"pdf_parse": {
"paper_id": "K18-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent deep learning models have shown improving results to natural language generation (NLG) irrespective of providing sufficient annotated data. However, a modest training data may harm such models' performance. Thus, how to build a generator that can utilize as much of knowledge from a low-resource setting data is a crucial issue in NLG. This paper presents a variational neural-based generation model to tackle the NLG problem of having limited labeled dataset, in which we integrate a variational inference into an encoder-decoder generator and introduce a novel auxiliary autoencoding with an effective training procedure. Experiments showed that the proposed methods not only outperform the previous models when having sufficient training dataset but also show strong ability to work acceptably well when the training data is scarce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Natural language generation (NLG) plays an critical role in Spoken dialogue systems (SDSs) with the NLG task is mainly to convert a meaning representation produced by the dialogue manager, i.e., dialogue act (DA), into natural language responses. SDSs are typically developed for various specific domains, i.e., flight reservations (Levin et al., 2000) , buying a tv or a laptop (Wen et al., 2015b) , searching for a hotel or a restaurant (Wen et al., 2015a) , and so forth. Such systems often require well-defined ontology datasets that are extremely time-consuming and expensive to collect. There is, thus, a need to build NLG systems that can work acceptably well when the training data is in short supply.",
"cite_spans": [
{
"start": 332,
"end": 352,
"text": "(Levin et al., 2000)",
"ref_id": "BIBREF5"
},
{
"start": 379,
"end": 398,
"text": "(Wen et al., 2015b)",
"ref_id": "BIBREF15"
},
{
"start": 439,
"end": 458,
"text": "(Wen et al., 2015a)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two potential solutions for abovementioned problems, which are domain adaptation training and model designing for low-resource training. First, domain adaptation training which aims at learning from sufficient source domain a model that can perform acceptably well on a different target domain with a limited labeled target data. Domain adaptation generally involves two different types of datasets, one from a source domain and the other from a target domain. Despite providing promising results for low-resource setting problems, the methods still need an adequate training data at the source domain site.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Second, model designing for low-resource setting has not been well studied in the NLG literature. The generation models have achieved very good performances irrespective of providing sufficient labeled datasets (Wen et al., 2015b,a; . However, small training data easily result in worse generation models in the supervised learning methods. Thus, this paper presents an explicit way to construct an effective low-resource setting generator.",
"cite_spans": [
{
"start": 211,
"end": 232,
"text": "(Wen et al., 2015b,a;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, we make the following contributions, in which we: (i) propose a variational approach for an NLG problem which benefits the generator to not only outperform the previous methods when there is a sufficient training data but also perform acceptably well regarding lowresource data; (ii) present a variational generator that can also adapt faster to a new, unseen domain using a limited amount of in-domain data; (iii) investigate the effectiveness of the proposed method in different scenarios, including ablation studies, scratch, domain adaptation, and semi-supervised training with varied proportion of dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, the RNN-based generators have shown improving results in tackling the NLG problems in task oriented-dialogue systems with varied proposed methods, such as HLSTM (Wen et al., 2015a) , SCLSTM (Wen et al., 2015b) , or espe-cially RNN Encoder-Decoder models integrating with attention mechanism, such as Enc-Dec (Wen et al., 2016b) , and RALSTM . However, such models have proved to work well only when providing a sufficient in-domain data since a modest dataset may harm the models' performance.",
"cite_spans": [
{
"start": 171,
"end": 190,
"text": "(Wen et al., 2015a)",
"ref_id": "BIBREF12"
},
{
"start": 193,
"end": 219,
"text": "SCLSTM (Wen et al., 2015b)",
"ref_id": null
},
{
"start": 310,
"end": 337,
"text": "Enc-Dec (Wen et al., 2016b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this context, one can think of a potential solution where the domain adaptation learning is utilized. The source domain, in this scenario, typically contains a sufficient amount of annotated data such that a model can be efficiently built, while there is often little or no labeled data in the target domain. A phrase-based statistical generator (Mairesse et al., 2010) using graphical models and active learning, and a multi-domain procedure (Wen et al., 2016a) via data counterfeiting and discriminative training. However, a question still remains as how to build a generator that can directly work well on a scarce dataset.",
"cite_spans": [
{
"start": 349,
"end": 372,
"text": "(Mairesse et al., 2010)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Neural variational framework for generative models of text have been studied extensively. Chung et al. (2015) proposed a recurrent latent variable model for sequential data by integrating latent random variables into hidden state of an RNN. A hierarchical multi scale recurrent neural networks was proposed to learn both hierarchical and temporal representation (Chung et al., 2016) , while Bowman et al. (2015) presented a variational autoencoder for unsupervised generative language model. Sohn et al. (2015) proposed a deep conditional generative model for structured output prediction, whereas Zhang et al. (2016) introduced a variational neural machine translation that incorporated a continuous latent variable to model underlying semantics of sentence pairs. To solve the exposure-bias problem ; proposed a seq2seq purely convolutional and deconvolutional autoencoder, Yang et al. (2017) proposed to use a dilated CNN decoder in a latentvariable model, or Semeniuta et al. (2017) proposed a hybrid VAE architecture with convolutional and deconvolutional components.",
"cite_spans": [
{
"start": 90,
"end": 109,
"text": "Chung et al. (2015)",
"ref_id": "BIBREF3"
},
{
"start": 362,
"end": 382,
"text": "(Chung et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 492,
"end": 510,
"text": "Sohn et al. (2015)",
"ref_id": "BIBREF9"
},
{
"start": 598,
"end": 617,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 876,
"end": 894,
"text": "Yang et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 963,
"end": 986,
"text": "Semeniuta et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We make an assumption about the existing of a continuous latent variable z from a underlying semantic space of DA-Utterance pairs (d, u) , so that we explicitly model the space together with Figure 1 : Illustration of proposed variational models as a directed graph. (a) VNLG: joint learning both variational parameters \u03c6 and generative model parameters \u03b8. (b) DualVAE: red and blue arrows form a standard VAE (parameterized by \u03c6 and \u03b8 ) as an auxiliary auto-encoding to the VNLG model denoted by red and black arrows. variable d to guide the generation process, i.e., p (u|z, d) . The original conditional probability p(y|d) modeled by a vanilla encoder-decoder network is thus reformulated as follows:",
"cite_spans": [
{
"start": 130,
"end": 136,
"text": "(d, u)",
"ref_id": null
},
{
"start": 571,
"end": 579,
"text": "(u|z, d)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 191,
"end": 199,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "p(u|d) = z p(u, z|d)d z = z p(u|z, d)p(z|d)d z (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "This latent variable enables us to model the underlying semantic space as a global signal for generation. However, the incorporating of latent variable into the probabilistic model arises two difficulties in (i) modeling the intractable posterior inference p(z|d, u) and (ii) whether or not the latent variables z can be modeled effectively in case of lowresource setting data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "To address the difficulties, we propose an encoder-decoder based variational model to natural language generation (VNLG) by integrating a variational autoencoder (Kingma and Welling, 2013) into an encoder-decoder generator (Tran and Nguyen, 2017). Figure 1-(a) shows a graphical model of VNLG. We then employ deep neural networks to approximate the prior p(z|d), true posterior p(z|d, u), and decoder p(u|z, d). To tackle the first issue, the intractable posterior is approximated from both the DA and utterance information q \u03c6 (z|d, u) under the above assumption. In contrast, the prior is modeled to condition on the DA only p \u03b8 (z|d) due to the fact that the DA and utterance of a training pair usually share the same semantic information, i.e., a given DA inform(name='ABC'; area='XYZ') contains key information of the corresponding utterance \"The hotel ABC is in XYZ area\". The underlying semantic space with having more information encoded from both the prior and the posterior provides the generator a potential solution to tackle the second issue. Lastly, in generative process, given an observation DA d the output u is generated by the decoder network p \u03b8 (u|z, d) under the guidance of the global signal z which is drawn from the prior distribution p \u03b8 (z|d). According to (Sohn et al., 2015) , the variational lower bound can be recomputed as:",
"cite_spans": [
{
"start": 1166,
"end": 1174,
"text": "(u|z, d)",
"ref_id": null
},
{
"start": 1284,
"end": 1303,
"text": "(Sohn et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 248,
"end": 260,
"text": "Figure 1-(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8,\u03c6, d, u) = \u2212KL(q \u03c6 (z|d, u)||p \u03b8 (z|d)) +E q \u03c6 (z|d,u) [log p \u03b8 (u|z, d)] \u2264 log p(u|d)",
"eq_num": "(2)"
}
],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
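As a minimal illustration of Eq. 2 (a sketch, not the authors' implementation), the snippet below computes the conditional lower bound for a diagonal Gaussian prior p_\u03b8(z|d) and posterior q_\u03c6(z|d, u): the closed-form KL term plus a single-sample reparameterized estimate of the reconstruction term. The `log_likelihood` callback standing in for the decoder log p_\u03b8(u|z, d) is a placeholder assumption.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) )."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def cvae_lower_bound(mu_q, logvar_q, mu_p, logvar_p, log_likelihood, rng):
    """Single-sample estimate of Eq. 2: -KL(q||p) + E_q[log p(u|z,d)].

    log_likelihood(z) is a placeholder for the decoder term log p(u|z,d)."""
    eps = rng.standard_normal(mu_q.shape)
    z = mu_q + np.exp(0.5 * logvar_q) * eps          # reparameterized sample from q
    return -gaussian_kl(mu_q, logvar_q, mu_p, logvar_p) + log_likelihood(z)

# toy usage with random parameters and a dummy decoder
rng = np.random.default_rng(0)
mu_q, logvar_q = rng.standard_normal(300), np.zeros(300)
mu_p, logvar_p = np.zeros(300), np.zeros(300)
print(cvae_lower_bound(mu_q, logvar_q, mu_p, logvar_p,
                       lambda z: -0.5 * np.sum(z ** 2), rng))
```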
{
"text": "3.1.1 Variational Encoder Network",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "The encoder consists of two networks: (i) a Bidirectional LSTM (BiLSTM) which encodes the sequence of slot-value pairs {sv i } T DA i=1 by separate parameterization of slots and values (Wen et al., 2016b) ; and (ii) a shared CNN/RNN Utterance Encoder which encodes the corresponding utterance. The encoder network, thus, produces both the DA representation h D and the utterance representation h U vectors which flow into the inference and decoder networks, and the posterior approximator, respectively (see Suppl. 1.1).",
"cite_spans": [
{
"start": 185,
"end": 204,
"text": "(Wen et al., 2016b)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Natural Language Generator",
"sec_num": "3.1"
},
{
"text": "This section models both the prior p \u03b8 (z|d) and the posterior q \u03c6 (z|d, u) by utilizing neural networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "Neural Posterior Approximator: We approximate the intractable posterior distribution of z to simplify the posterior inference, in which we first projects both DA and utterance representations onto the latent space:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "h z = g(W z [h D ; h U ] + b z ) (3) where W z \u2208 R dz\u00d7(d h D +d h U )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": ", b z \u2208 R dz are matrix and bias parameters respectively, d z is the dimensionality of the latent space, and we set g(.) to be ReLU in our experiments. We then approximate the posterior as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "q \u03c6 (z|d, u) = N (z; \u00b5 1 (h z ), \u03c3 2 1 (h z )I)",
"eq_num": "(4)"
}
],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "with mean \u00b5 1 and standard variance \u03c3 1 are the outputs of the neural network as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "\u00b5 1 = W \u00b5 1 h z + b \u00b5 1 , log \u03c3 2 1 = W \u03c3 1 h z + b \u03c3 1 (5) where \u00b5 1 , log \u03c3 2 1 are both d z dimension vectors. Neural Prior:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
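A minimal sketch of the posterior approximator in Eq. 3-5, assuming hypothetical weight and bias names; h_d and h_u stand for the DA and utterance representations h_D and h_U produced by the encoder, and the outputs parameterize the diagonal Gaussian of Eq. 4.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def posterior_params(h_d, h_u, W_z, b_z, W_mu, b_mu, W_sigma, b_sigma):
    """Eq. 3-5: project [h_D; h_U] onto the latent space, then output mu_1 and log sigma_1^2."""
    h_z = relu(W_z @ np.concatenate([h_d, h_u]) + b_z)   # Eq. 3
    mu_1 = W_mu @ h_z + b_mu                              # Eq. 5
    log_var_1 = W_sigma @ h_z + b_sigma                   # Eq. 5 (log sigma_1^2)
    return mu_1, log_var_1

# toy dimensions: d_hD = d_hU = 100, d_z = 300 (the latent size used in the paper)
rng = np.random.default_rng(1)
d_h, d_z = 100, 300
params = [rng.standard_normal((d_z, 2 * d_h)) * 0.01, np.zeros(d_z),
          rng.standard_normal((d_z, d_z)) * 0.01, np.zeros(d_z),
          rng.standard_normal((d_z, d_z)) * 0.01, np.zeros(d_z)]
mu_1, log_var_1 = posterior_params(rng.standard_normal(d_h), rng.standard_normal(d_h), *params)
```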
{
"text": "We model the prior as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03b8 (z|d) = N (z; \u00b5 1 (d), \u03c3 2 1 (d)I)",
"eq_num": "(6)"
}
],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "where \u00b5 1 and \u03c3 1 of the prior are neural models only based on the Dialogue Act representation, which are the same as those of the posterior q \u03c6 (z|d, u) in Eq. 3 and 5, except for the absence of h U . To obtain a representation of the latent variable z, we re-parameterize it as follows: Note here that the parameters for the prior and the posterior are independent of each other. Moreover, during decoding we set h z to be the mean of the prior p \u03b8 (z|d), i.e., \u00b5 1 due to the absence of the utterance u. In order to integrate the latent variable h z into the decoder, we use a non-linear transformation to project it onto the output space for generation: h e = g(W e h z + b e )(7), where h e \u2208 R de .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
{
"text": "h z = \u00b5 1 + \u03c3 1 where \u223c N (0, I).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Inference Network",
"sec_num": "3.1.2"
},
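The re-parameterization above and the output-space projection of Eq. 7 can be sketched as follows (weights are hypothetical); passing rng=None mimics decoding time, where h_z is simply set to the prior mean.

```python
import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Reparameterization trick: h_z = mu + sigma * eps, eps ~ N(0, I).
    With rng=None (decoding time) the prior mean is used directly."""
    if rng is None:
        return mu
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def project_latent(h_z, W_e, b_e):
    """Eq. 7: non-linear transformation of the latent variable onto the output space."""
    return np.maximum(W_e @ h_z + b_e, 0.0)   # ReLU assumed for g(.)
```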
{
"text": "Given a DA d and the latent variable z, the decoder calculates the probability over the generation u as a joint probability of ordered conditionals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Decoder Network",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(u|z, d) = T U t=1 p(u t |u <t , z, d)",
"eq_num": "(8)"
}
],
"section": "Variational Decoder Network",
"sec_num": "3.1.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Decoder Network",
"sec_num": "3.1.3"
},
{
"text": "p(u t |u <t , z, d)=g (RALSTM(u t , h t\u22121 , d t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Decoder Network",
"sec_num": "3.1.3"
},
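Since Eq. 8 factorizes the utterance probability autoregressively, the sequence log-likelihood is just a sum of per-token conditional log-probabilities; the small sketch below illustrates this, with `step_probs` standing in for the RALSTM decoder's per-step softmax outputs (a placeholder, not the actual cell).

```python
import numpy as np

def sequence_log_prob(token_ids, step_probs):
    """log p(u|z,d) = sum_t log p(u_t | u_<t, z, d) for one utterance (Eq. 8 in log space).

    step_probs[t] is the decoder's probability vector over the vocabulary at step t."""
    return float(sum(np.log(step_probs[t][tok]) for t, tok in enumerate(token_ids)))

# toy check with a 3-word vocabulary and a 2-token utterance
probs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
print(sequence_log_prob([0, 1], probs))   # log 0.7 + log 0.8
```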
{
"text": "The RALSTM cell (Tran and Nguyen, 2017) is slightly modified in order to integrate the representation of latent variable, i.e., h e , into the computational cell (see Suppl. 1.3), in which the latent variable can affect the hidden representation through the gates. This allows the model can indirectly take advantage of the underlying semantic information from the latent variable z. In addition, when the model learns unseen dialogue acts, the semantic representation h e can benefit the generation process (see Table 1 ). We finally obtain the VNLG model with RNN Utterance Encoder (R-VNLG) or with CNN Utterance Encoder (C-VNLG).",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variational Decoder Network",
"sec_num": "3.1.3"
},
{
"text": "This standard VAE model (left side in Figure 2 ) acts as an auxiliary auto-encoding for utterance (used at training time) to the VNLG generator. The model consists of two components. While the shared CNN Utterance Encoder with the VNLG model is to compute the latent representation vector h U (see Suppl. 1.1.3), a Deconvolutional CNN Decoder to decode the latent representation h e back to the source text (see Suppl. 2.1). Specifically, after having the vector representation h U , we apply another linear regression to obtain the distribution parameter \u00b5 2 = W \u00b5 2 h U +b \u00b5 2 and log \u03c3 2 2 = W \u03c3 2 h U + b \u03c3 2 . We then re-parameterize them to obtain a latent representation h zu = \u00b5 2 + \u03c3 2 , where \u223c N (0, I). In order to integrate the latent variable h zu into the DCNN Decoder, we use the shared non-linear transformation as in Eq. 7 (denoted by the black-dashed line in Figure 2 ) as: h e = g(W e h zu + b e ).",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 46,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 878,
"end": 886,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
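A sketch of the auxiliary auto-encoding path described above, with hypothetical parameter names: h_U is mapped to \u00b5_2 and log \u03c3_2^2, re-parameterized to h_zu, and passed through the shared projection of Eq. 7 before the DCNN decoder.

```python
import numpy as np

def auxiliary_latent(h_u, W_mu2, b_mu2, W_sig2, b_sig2, W_e, b_e, rng):
    """Utterance-only latent path of the auxiliary VAE (training time only)."""
    mu_2 = W_mu2 @ h_u + b_mu2                         # linear regression to the mean
    log_var_2 = W_sig2 @ h_u + b_sig2                  # and to log sigma_2^2
    eps = rng.standard_normal(mu_2.shape)
    h_zu = mu_2 + np.exp(0.5 * log_var_2) * eps        # re-parameterization
    h_e = np.maximum(W_e @ h_zu + b_e, 0.0)            # shared projection of Eq. 7
    return h_e, mu_2, log_var_2
```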
{
"text": "The entire resulting model, named DualVAE, by incorporating the VNLG with the Variational CNN-DCNN model, is depicted in Figure 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
{
"text": "E q \u03c6 (z|d,u) [.] 1 M M m=1 log p \u03b8 (u|d, h (m) z )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
{
"text": "where M is the number of samples. In this work, the joint training objective L VNLG for a training instance pair (d, u) is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8, \u03c6, d, u) \u2212KL(q \u03c6 (z|d, u)||p \u03b8 (z|d)) + 1 M M m=1 T U t=1 log p \u03b8 (u t |u <t , d, h (m) z )",
"eq_num": "(9)"
}
],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
{
"text": "where h m) , and (m) \u223c N (0, I), and \u03b8 and \u03c6 denote decoder and encoder parameters, respectively. The first term is the KL divergence between two Gaussian distribution, and the second term is the approximation expectation. We simply set M = 1 which degenerates the second term to the objective of conventional generator. Since the objective function in Eq. 9 is differentiable, we can jointly optimize the parameter \u03b8 and variational parameter \u03c6 using standard gradient ascent techniques. However, the KL divergence loss tends to be significantly small during training (Bowman et al., 2015) . As a results, the decoder does not take advantage of information from the latent variable z. Thus, we apply the KL cost annealing strategy that encourages the model to encode meaningful representations into the latent vector z, in which we gradually anneal the KL term from 0 to 1. This helps our model to achieve solutions with non-zero KL term.",
"cite_spans": [
{
"start": 8,
"end": 10,
"text": "m)",
"ref_id": null
},
{
"start": 569,
"end": 590,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variational CNN-DCNN Model",
"sec_num": "3.2"
},
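A minimal sketch of the KL cost annealing strategy; the linear warm-up length is an assumption, since the schedule is not specified in the text.

```python
def kl_weight(step, warmup_steps=10000):
    """Anneal the KL coefficient from 0 to 1 (Bowman et al., 2015 style warm-up)."""
    return min(1.0, step / float(warmup_steps))

def vnlg_loss(recon_log_lik, kl_term, step):
    """Negative of Eq. 9 with the annealed KL weight (to be minimized)."""
    return -recon_log_lik + kl_weight(step) * kl_term
```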
{
"text": "The objective function L CNN-DCNN of the Variational CNN-DCNN model is the standard VAE lower bound and maximized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Variational CNN-DCNN Model",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8 , \u03c6 , u) = \u2212KL(q \u03c6 (z|u)||p \u03b8 (z)) + E q \u03c6 (z|u) [log p \u03b8 (u|z)] \u2264 log p(u)",
"eq_num": "(10)"
}
],
"section": "Training Variational CNN-DCNN Model",
"sec_num": "4.2"
},
{
"text": "where \u03b8 and \u03c6 denote decoder and encoder parameters, respectively. During training, we also consider a denoising autoencoder where we slightly modify the input by swapping some arbitrary word pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Variational CNN-DCNN Model",
"sec_num": "4.2"
},
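The denoising variant corrupts the decoder input by swapping arbitrary word pairs; one simple way to implement such corruption is sketched below (the number of swaps is an assumption).

```python
import random

def swap_noise(tokens, n_swaps=2, rng=random):
    """Return a copy of `tokens` with `n_swaps` randomly chosen word pairs swapped."""
    noisy = list(tokens)
    for _ in range(n_swaps):
        if len(noisy) < 2:
            break
        i, j = rng.sample(range(len(noisy)), 2)
        noisy[i], noisy[j] = noisy[j], noisy[i]
    return noisy

print(swap_noise("the hotel abc is in xyz area".split()))
```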
{
"text": "To allow the model explore and balance maximizing the variational lower bound between the Variational CNN-DCNN model and VNLG model, an objective is joint training as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training Dual VAE Model",
"sec_num": "4.3"
},
{
"text": "L DualVAE = L VNLG + \u03b1L CNN-DCNN (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training Dual VAE Model",
"sec_num": "4.3"
},
{
"text": "where \u03b1 controls the relative weight between two variational losses. During training, we anneal the value of \u03b1 from 1 to 0, so that the dual latent variable learned can gradually focus less on reconstruction objective of the CNN-DCNN model, only retain those features that are useful for the generation objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training Dual VAE Model",
"sec_num": "4.3"
},
{
"text": "To allow the dual VAE model explore and encode useful information of the Dialogue Act into the latent variable, we further take a cross training between two VAEs by simply replacing the RALSTM Decoder of the VNLG model with the DCNN Utterance Decoder and its objective training L DA-DCNN as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cross Training Dual VAE Model",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8 , \u03c6, d, u) \u2212KL(q \u03c6 (z|d, u)||p \u03b8 (z|d)) + E q \u03c6 (z|d,u) [log p \u03b8 (u|z, d)],",
"eq_num": "(12)"
}
],
"section": "Joint Cross Training Dual VAE Model",
"sec_num": "4.4"
},
{
"text": "and a joint cross training objective is employed:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cross Training Dual VAE Model",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L CrossVAE = L VNLG + \u03b1(L CNN-DCNN + L DA-DCNN )",
"eq_num": "(13)"
}
],
"section": "Joint Cross Training Dual VAE Model",
"sec_num": "4.4"
},
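The joint objectives of Eq. 11 and Eq. 13 combine the lower bounds with a weight \u03b1 annealed from 1 to 0; a sketch, with a linear annealing schedule assumed:

```python
def alpha_weight(step, total_steps):
    """Anneal alpha from 1 to 0 so the model gradually drops the reconstruction objective."""
    return max(0.0, 1.0 - step / float(total_steps))

def dual_vae_loss(l_vnlg, l_cnn_dcnn, step, total_steps):
    """Eq. 11: combined lower bounds to be maximized."""
    return l_vnlg + alpha_weight(step, total_steps) * l_cnn_dcnn

def cross_vae_loss(l_vnlg, l_cnn_dcnn, l_da_dcnn, step, total_steps):
    """Eq. 13: adds the cross-trained DA-to-DCNN term."""
    return l_vnlg + alpha_weight(step, total_steps) * (l_cnn_dcnn + l_da_dcnn)
```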
{
"text": "We assessed the proposed models on four different original NLG domains: finding a restaurant and hotel (Wen et al., 2015a), or buying a laptop and television (Wen et al., 2016b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The generator performances were evaluated using the two metrics: the BLEU and the slot error rate ERR by adopting code from an NLG toolkit * . We compared the proposed models against strong baselines which have been recently published as NLG benchmarks of those datasets, including (i) gating models such as HLSTM (Wen et al., 2015a), and SCLSTM (Wen et al., 2015b); and (ii) attention models such as Enc-Dec (Wen et al., 2016b), RALSTM (Tran and Nguyen, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics and Baselines",
"sec_num": "5.1"
},
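For reference, the slot error rate reported by the adopted toolkit is commonly computed as ERR = (p + q)/N, where p and q are the numbers of missing and redundant slots in a generated utterance and N is the total number of slots in the dialogue act; a sketch under that assumed definition:

```python
def slot_error_rate(required_slots, generated_slots):
    """ERR = (missing + redundant) / total required slots (assumed definition)."""
    required = set(required_slots)
    generated = set(generated_slots)
    missing = len(required - generated)
    redundant = len(generated - required)
    return (missing + redundant) / float(len(required))

print(slot_error_rate(["name", "area", "price"], ["name", "area", "phone"]))  # 2/3
```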
{
"text": "In this work, the CNN Utterance Encoder consists of L = 3 layers, which for a sentence of length T = 73, embedding size d = 100, stride length s = {2, 2, 2}, number of filters k = {300, 600, 100} with filter sizes h = {5, 5, 16}, results in feature maps V of sizes {35 \u00d7 300, 16 \u00d7 600, 1 \u00d7 100}, in which the last feature map corresponds to latent representation vector h U . The hidden layer size and beam width were set to be 100 and 10, respectively, and the models were trained with a 70% of keep dropout rate. We performed 5 runs with different random initialization of the network, and the training process is terminated by using early stopping. For the variational inference, we set the latent variable size to be 300. We used Adam optimizer with the learning rate is initially set to be 0.001, and after 5 epochs the learning rate is decayed every epoch using an exponential rate of 0.95.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setups",
"sec_num": "5.2"
},
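The quoted feature-map sizes follow from the usual valid-convolution length formula L_out = floor((L_in - h)/s) + 1; a quick arithmetic check:

```python
def conv_out_len(l_in, filt, stride):
    """Output length of a valid 1-D convolution."""
    return (l_in - filt) // stride + 1

l = 73
for filt, stride in zip([5, 5, 16], [2, 2, 2]):
    l = conv_out_len(l, filt, stride)
    print(l)   # 35, 16, 1 -- matching the 35x300, 16x600, 1x100 feature maps
```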
{
"text": "We performed the models in different scenarios as follows: (i) scratch training where models trained from scratch using 10% (scr10), 30% (scr30), and 100% (scr100) amount of in-domain data; and (ii) domain adaptation training where models pre-trained from scratch using all source domain data, then fine-tuned on the target domain using only 10% amount of the target data. Overall, the proposed models can work well in scenarios * https://github.com/shawnwun/RNNLG Figure 3 : Performance on Laptop domain with varied limited amount, from 1% to 7%, of the adaptation training data when adapting models pretrained on [Restaurant+Hotel] union dataset. of low-resource setting data. The proposed models obtained state-of-the-art performances regarding both the evaluation metrics across all domains in all training scenarios.",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 473,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "6"
},
{
"text": "We compare the encoder-decoder RALSTM model to its modification by integrating with variational inference (R-VNLG and C-VNLG) as demonstrated in Figure 3 and Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "Figure 3",
"ref_id": null
},
{
"start": 158,
"end": 165,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Integrating Variational Inference",
"sec_num": "6.1"
},
{
"text": "It clearly shows that the variational generators not only provide a compelling evidence on adapting to a new, unseen domain when the target domain data is scarce, i.e., from 1% to 7% (Figure 3) but also preserve the power of the original RAL-STM on generation task since their performances are very competitive to those of RALSTM (Table 1, scr100). Table 1 , scr10 further shows the necessity of the integrating in which the VNLGs achieved a significant improvement over the RAL-STM in scr10 scenario where the models trained from scratch with only a limited amount of training data (10%). These indicate that the proposed variational method can learn the underlying semantic of the existing DA-utterance pairs, which are especially useful information for low-resource setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 193,
"text": "(Figure 3)",
"ref_id": null
},
{
"start": 349,
"end": 356,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Integrating Variational Inference",
"sec_num": "6.1"
},
{
"text": "Furthermore, the R-VNLG model has slightly better results than the C-VNLG when providing sufficient training data in scr100. In contrast, with a modest training data, in scr10, the latter model demonstrates a significant improvement compared to the former in terms of both the BLEU and ERR scores by a large margin across all four dataset. Take Table 1 : Results evaluated on four domains by training models from scratch with 10%, 30%, and 100% in-domain data, respectively. The results were averaged over 5 randomly initialized networks. The bold and italic faces denote the best and second best models in each training scenario, respectively. STM (68.55 BLEU, 22.53% ERR). Thus, the rest experiments focus on the C-VNLG since it shows obvious sign for constructing a dual latent variable models dealing with low-resource in-domain data. We leave the R-VNLG for future investigation.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 344,
"text": "Take",
"ref_id": null
},
{
"start": 345,
"end": 352,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Integrating Variational Inference",
"sec_num": "6.1"
},
{
"text": "The ablation studies (Table 1) demonstrate the contribution of each model components, in which we incrementally train the baseline RALSTM, the C-VNLG (= RALSTM + Variational inference), the DualVAE (= C-VNLG + Variational CNN-DCNN), and the CrossVAE (= DualVAE + Cross training) models. Generally, while all models can work well when there are sufficient training datasets, the performances of the proposed models also increase as increasing the model components. The trend is consistent across all training cases no matter how much the training data was provided. Take, for example, the scr100 scenario in which the CrossVAE model mostly outperformed all the previous strong baselines with regard to the BLEU and the slot error rate ERR scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 30,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "6.2"
},
{
"text": "On the other hand, the previous methods showed extremely impaired performances regarding low BLEU score and high slot error rate ERR when training the models from scratch with only 10% of in-domain data (scr10). In contrast, by integrating the variational inference, the C-VNLG model, for example in Hotel domain, can significantly improve the BLEU score from 68.55 to 79.98, and also reduce the slot error rate ERR by a large margin, from 22.53 to 8.67, compared to the RALSTM baseline. Moreover, the proposed models have much better performance over the previous ones in the scr10 scenario since the Cross-VAE, and the DualVAE models mostly obtained the best and second best results, respectively. The CrossVAE model trained on scr10 scenario, in some cases, achieved results which close to those of the HLSTM, SCLSTM, and ENCDEC models trained on all training data (scr100) scenario. Take, for example, the most challenge dataset Laptop, in which the DualVAE and CrossVAE obtained competitive results regarding the BLEU score, at 50.16 and 50.85 respectively, which close to those of the HLSTM (51.30 BLEU), SCLSTM (51.09 BLEU), and ENCDEC (51.01 BLEU), while the results regardless the slot error rate ERR scores are also close to those of the previous or even better in some cases, for example DualVAE (2.44 ERR), CrossVAE (2.39 ERR), and ENCDEC (4.24 ERR). There are also some cases in TV domain where the proposed models (in scr10) have results close to or better over the previous ones (trained on scr100). These indicate that the proposed models can encode useful information into the latent variable efficiently to better generalize to the unseen dialogue acts, addressing the second difficulty with low-resource data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "6.2"
},
{
"text": "The scr30 section further confirms the effectiveness of the proposed methods, in which the Cross-VAE and DualVAE still mostly rank the best and second-best models compared with the baselines. The proposed models also show superior ability in leveraging the existing small training data to obtain very good performances, which are in many cases even better than those of the previous methods trained on 100% of in-domain data. Take Tv domain, for example, in which the CrossVAE in scr30 achieves a good result regarding BLEU and slot error rate ERR score, at 53.07 BLEU and 0.82 ERR, that are not only competitive to the RALSTM (53.73 BLEU, 0.49 ERR), but also outperform the previous models in scr100 training scenario, such as HLSTM (52.40 BLEU, 2.65 ERR), SCLSTM (52.35 BLEU, 2.41 ERR), and ENCDEC (51.42 BLEU, 3.38 ERR). This further indicates the need of the integrating with variational inference, the additional auxiliary autoencoding, as well as the joint and cross training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "6.2"
},
{
"text": "In this experiment, we trained four models (ENCDEC, SCLSTM, RALSTM, and CrossVAE) from scratch in the most difficult unseen Laptop domain with an increasingly varied proportion of training data, start from 1% to 100%. The results are shown in Figure 4 . It clearly sees that the BLEU score increases and the slot error ERR decreases as the models are trained on more data. The CrossVAE model is clearly better than the previous models (ENCDEC, SCLSTM, RALSTM) in all cases. While the performance of the Cross-VAE, RALSTM model starts to saturate around 30% and 50%, respectively, the ENCDEC model seems to continue getting better as providing more training data. The figure also confirms that the CrossVAE trained on 30% of data can achieve a better performance compared to those of the previous models trained on 100% of in-domain data.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 251,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Model comparison on unseen domain",
"sec_num": "6.3"
},
{
"text": "We further examine the domain scalability of the proposed methods by training the CrossVAE and SCLSTM models on adaptation scenarios, in which we first trained the models on out-ofdomain data, and then fine-tuned the model parameters by using a small amount (10%) of indomain data. The results are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "6.4"
},
{
"text": "Both SCLSTM and CrossVAE models can take advantage of \"close\" dataset pairs, i.e., Restaurant \u2194 Hotel, and Tv \u2194 Laptop, to achieve better performances compared to those of the \"different\" dataset pairs, i.e. Latop \u2194 Restaurant. Moreover, Table 2 clearly shows that the SCLSTM (denoted by ) is limited to scale to another domain in terms of having very low BLEU and high ERR scores. This adaptation scenario along with the scr10 and scr30 in Table 1 demonstrate that the SCLSTM can not work when having a low-resource setting of in-domain training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 2",
"ref_id": null
},
{
"start": 441,
"end": 448,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "6.4"
},
{
"text": "On the other hand, the CrossVAE model again show ability in leveraging the out-of-domain data to better adapt to a new domain. Especially in the case where Laptop, which is a most difficult unseen domain, is the target domain the Cross-VAE model can obtain good results irrespective of low slot error rate ERR, around 1.90%, and high BLEU score, around 50.00 points. Surprisingly, the CrossVAE model trained on scr10 scenario in some cases achieves better performance compared to those in adaptation scenario first trained with 30% out-of-domain data (denoted by ) which is also better than the adaptation model trained on 100% out-of-domain data (denoted by \u03be).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "6.4"
},
{
"text": "Preliminary experiments on semi-supervised training were also conducted, in which we trained the CrossVAE model with the same 10% indomain labeled data as in the other scenarios and Table 2 : Results evaluated on Target domains by adaptation training SCLSTM model from 100% (denoted as ) of Source data, and the CrossVAE model from 30% (denoted as ), 100% (denoted as \u03be) of Source data. The scenario used only 10% amount of the Target domain data. The last two rows show results by training the CrossVAE model on the scr10 and semi-supervised learning, respectively. 50% in-domain unlabeled data by keeping only the utterances u in a given input pair of dialogue act-utterance (d, u) , denoted by semi-U50-L10. The results showed CrossVAE's ability in leveraging the unlabeled data to achieve slightly better results compared to those in scratch scenario. All these stipulate that the proposed models can perform acceptably well in training cases of scratch, domain adaptation, and semi-supervised where the in-domain training data is in short supply.",
"cite_spans": [
{
"start": 677,
"end": 683,
"text": "(d, u)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 182,
"end": 189,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "6.4"
},
{
"text": "We present top responses generated for different scenarios from TV (Table 3 ) and Laptop (Table 4) , which further show the effectiveness of the proposed methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 75,
"text": "(Table 3",
"ref_id": "TABREF2"
},
{
"start": 89,
"end": 98,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison on Generated Outputs",
"sec_num": "6.5"
},
{
"text": "On the one hand, previous models trained on scr10, scr30 scenarios produce a diverse range of the outputs' error types, including missing, misplaced, redundant, wrong slots, or spelling mistake information, resulting in a very high score of the slot error rate ERR. The ENCDEC, HLSTM and SCLSTM models in Table 3 -DA 1, for example, tend to generate outputs with redundant slots (i.e., SLOT HDMIPORT, SLOT NAME, SLOT FAMILY), missing slots (i.e., [l7 family], [4 hdmi port -s]), or even in some cases produce irrelevant slots (i.e., SLOT AUDIO, eco rating), resulting in inadequate utterances.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison on Generated Outputs",
"sec_num": "6.5"
},
{
"text": "On the other hand, the proposed models can effectively leverage the knowledge from only few of the existing training instances to better generalize to the unseen dialogue acts, leading to satisfactory responses. For example in Table 3 , the proposed methods can generate adequate number of the required slots, resulting in fulfilled utterances (DualVAE-10, CrossVAE-10, DualVAE-30, CrossVAE-30), or acceptable outputs with much fewer error information, i.e., mis-ordered slots in the generated utterances (C-VNLG-30).",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 234,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparison on Generated Outputs",
"sec_num": "6.5"
},
{
"text": "For a much easier dialogue act in Table 3 -DA 2, previous models still produce some error outputs, whereas the proposed methods seem to form some specific slots into phrase in concise outputs. For example, instead of generating \"the proteus 73 is a television\" phrase, the proposed models tend to concisely produce \"the proteus 73 television\". The trend is mostly consistent to those in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 387,
"end": 394,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison on Generated Outputs",
"sec_num": "6.5"
},
{
"text": "We present an approach to low-resource NLG by integrating the variational inference and introducing a novel auxiliary auto-encoding. Experiments showed that the models can perform acceptably well using a scarce dataset. The ablation studies demonstrate that the variational generator contributes to learning the underlying semantic of DA-utterance pairs, while the variational CNN-DCNN plays an important role of encoding useful information into the latent variable. In the future, we further investigate the proposed models with adversarial training, semi-supervised, or unsupervised training. [OK] denotes successful generation. Model-X where X is amount of training data, i.e. 10%, 30%, or 100%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Generated Responses from Laptop Domain DA compare(name='satellite pallas 21'; battery='4 hour'; drive='500 gb'; name='satellite dinlas 18'; battery='3.5 hour'; drive='1 tb') Reference compared to satellite pallas 21 which can last 4 hour and has a 500 gb drive , satellite dinlas 18 can last 3.5 hour and has a 1 tb drive . which one do you prefer Enc-Dec-10 the satellite pallas 21 has a 500 gb drive , the satellite dinlas 18 has a 4 hour battery life and a 3.5 hour battery life and a SLOT BATTERY battery life and a 1 tb drive HLSTM-10 the satellite pallas 21 has a 4 hour battery life and a 500 gb drive . which one do you prefer [satellite pallas 18] [3.5 hour battery] [1 tb drive] SCLSTM-10 the satellite pallas 21 has a 4 hour battery , and has a 3.5 hour battery life and a 500 gb drive and a 1 tb drive [satellite dinlas 18] C-VNLG-10 the satellite pallas 21 has a 500 gb drive and a 4 hour battery life . the satellite dinlas 18 has a 3.5 hour battery life and a SLOT BATTERY battery life [1 tb drive] DualVAE-10 the satellite pallas 21 has a 4 hour battery life and a 500 gb drive and the satellite dinlas 18 with a 3.5 hour battery life and is good for business computing . which one do you prefer [1 tb drive] CrossVAE-10 the satellite pallas 21 with 500 gb and a 1 tb drive . the satellite dinlas 18 with a 4 hour battery and a SLOT DRIVE drive . which one do you prefer [3.5 hour battery] Enc-Dec-30 the satellite pallas 21 has a 500 gb drive with a 1 tb drive and is the satellite dinlas 18 with a SLOT DRIVE drive for 4 hour -s . which one do you prefer [3.5 hour battery] HLSTM-30 the satellite pallas 21 is a 500 gb drive with a 4 hour battery life . the satellite dinlas 18 has a 3.5 hour battery life . which one do you prefer [1 tb drive] SCLSTM-30 the satellite pallas 21 has a 500 gb drive . the satellite dinlas 18 has a 4 hour battery life . the SLOT NAME has a 3.5 hour battery life . which one do you prefer [1 tb drive] C-VNLG-30 which one do you prefer the satellite pallas 21 with a 4 hour battery life , the satellite dinlas 18 has a 500 gb drive and a 3.5 hour battery life and a 1 tb drive . which one do you prefer DualVAE-30 satellite pallas 21 has a 500 gb drive and a 4 hour battery life while the satellite dinlas 18 with a 3.5 hour battery life and a 1 tb drive .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "[OK] CrossVAE-30 the satellite pallas 21 has a 500 gb drive with a 4 hour battery life . the satellite dinlas 18 has a 1 tb drive and a 3.5 hour battery life . which one do you prefer [OK] Table 4: Comparison of top Laptop responses generated for different models in different scenarios. Errors are marked in colors ([missing] , misplaced, redundant, wrong, spelling information).",
"cite_spans": [
{
"start": 184,
"end": 188,
"text": "[OK]",
"ref_id": null
},
{
"start": 316,
"end": 326,
"text": "([missing]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "[OK] denotes successful generation. Model-X where X is amount of training data, i.e. 10%, 30%, or 100%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the JST CREST Grant Number JPMJCR1513, the JSPS KAK-ENHI Grant number 15K16048 and the grant of a collaboration between JAIST and TIS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Generated Responses from TV Domain DA 1compare(name='typhon 45'; hdmiport='2'; family='l2'; name='hades 48'; hdmiport='4'; family='l7') Reference 1Compared to typhon 45 which has 2 hdmi port -s and is in the L2 product family, hades 48 has 4 hdmi port -s and is in the L7 product family. Which one do you prefer ? ENCDEC-10 the typhon 45 is in the l2 product family and has 2 hdmi port -s and is in the l7 product family with 4 hdmi port -s, the hades 48 is in the SLOT FAMILY product family with a SLOT AUDIO.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "the typhon 45 is a great eco rating, the hades 48 is in the l2 family with 2 hdmi port -s. [l7 family] [4 hdmi port -s] SCLSTM-10 the typhon 45 is the hades 48 with 2 hdmi port in the l2 family, the SLOT NAME has 4 hdmi port -s and SLOT HDMIPORT hdmi port. [l7 family] C-VNLG-10 the typhon 45 has 2 hdmi port -s and the hades 48 is in the l2 family and has 4 hdmi port -s. [l7 family] DualVAE-10 the typhon 45 has 2 hdmi port -s and is in the l2 family while the hades 48 has 4 hdmi port -s and is in the l7 family.[OK] CrossVAE-10 the typhon 45 is in the l2 family with 2 hdmi port -s while the hades 48 has 4 hdmi port -s and is in the l7 family.[OK] ENCDEC-30 the typhon 45 has 2 hdmi port -s, the hades 48 has 4 hdmi port -s, the SLOT NAME has SLOT HDMIPORT hdmi port. [l2 family] [l7 family] HLSTM-30 the typhon 45 is in the l2 product family with 2 hdmi port -s, whereas the hades 48 has 4 hdmi port.[l7 family] SCLSTM-30 the typhon 45 has 2 hdmi port -s, the hades 48 is in the l2 product family.[l7 family] [4 hdmi port -s] C-VNLG-30 the typhon 45 has 2 hdmi port -s, the hades 48 is in the l2 product family and has 4 hdmi port -s in l7 family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HLSTM-10",
"sec_num": null
},
{
"text": "which do you prefer, the typhon 45 in the l2 product family with 2 hdmi port -s . the hades 48 is in the l7 family with 4 hdmi port -s.[OK] CrossVAE-30 the typhon 45 has 2 hdmi port -s and in the l2 family while the hades 48 has 4 hdmi port -s and is in the l7 family. which item do you prefer.[OK] DA 2 recommend(name='proteus 73'; type='television'; price='1500 dollars'; audio='nicam stereo'; hdmiport='2') Reference 2 proteus 73 is a nice television. its price is 1500 dollars, its audio is nicam stereo, and it has 2 hdmi port -s. ENCDEC-10 the proteus 73 is a great television with a nicam stereo and 2 hdmi port -s [1500 dollars] HLSTM-10 the proteus 73 is a television with 2 hdmi port -s and comes with a nicam stereo and costs 1500 dollars [OK] SCLSTM-10 the proteus 73 is a nice television with nicam stereo and 2 hdmi port -s [1500 dollars] C-VNLG-10 the proteus 73 television has a nicam stereo and 2 hdmi port -s and costs 1500 dollars [OK] DualVAE-10 the proteus 73 television has a nicam stereo and 2 hdmi port -s and costs 1500 dollars [OK] CrossVAE-10 the proteus 73 television has 2 hdmi port -s and a nicam stereo and costs 1500 dollars [OK] ENCDEC-30 the proteus 73 television has 2 hdmi port -s and nicam stereo audio for 1500 dollars [OK] HLSTM-30 the proteus 73 television has a nicam stereo and 2 hdmi port -s and is priced at 1500 dollars [OK] SCLSTM-30 the proteus 73 is a nice television with nicam stereo and 2 hdmi port -s . it is priced at 1500 dollars [OK] C-VNLG-30 the proteus 73 television has 2 hdmi port -s , nicam stereo audio , and costs 1500 dollars [OK] DualVAE-30 the proteus 73 television has 2 hdmi port -s and nicam stereo audio and costs 1500 dollars [OK] CrossVAE-30 the proteus 73 television has 2 hdmi port -s and nicam stereo audio and costs 1500 dollars [OK] ",
"cite_spans": [
{
"start": 1805,
"end": 1809,
"text": "[OK]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DualVAE-30",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scheduled sampling for sequence prediction with recurrent neural networks",
"authors": [
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1171--1179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In Advances in Neural Information Processing Sys- tems, pages 1171-1179.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, An- drew M. Dai, Rafal J\u00f3zefowicz, and Samy Ben- gio. 2015. Generating sentences from a continuous space. CoRR, abs/1511.06349.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hierarchical multiscale recurrent neural networks",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Sungjin",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.01704"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Sungjin Ahn, and Yoshua Bengio. 2016. Hierarchical multiscale recurrent neural net- works. arXiv preprint arXiv:1609.01704.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A recurrent latent variable model for sequential data",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Kastner",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Kratarth",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2980--2988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. 2015. A recurrent latent variable model for sequential data. In Advances in neural information processing sys- tems, pages 2980-2988.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.6114"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Max Welling. 2013. Auto- encoding variational bayes. arXiv preprint arXiv:1312.6114.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The at&t-darpa communicator mixed-initiative spoken dialog system",
"authors": [
{
"first": "Esther",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Shrikanth",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Biatov",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Bocchieri",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"Di"
],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "Wieland",
"middle": [],
"last": "Eckert",
"suffix": ""
},
{
"first": "Sungbok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mazin",
"middle": [],
"last": "Pokrovsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rahim",
"suffix": ""
}
],
"year": 2000,
"venue": "Sixth International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esther Levin, Shrikanth Narayanan, Roberto Pier- accini, Konstantin Biatov, Enrico Bocchieri, Giuseppe Di Fabbrizio, Wieland Eckert, Sungbok Lee, A Pokrovsky, Mazin Rahim, et al. 2000. The at&t-darpa communicator mixed-initiative spoken dialog system. In Sixth International Conference on Spoken Language Processing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Phrase-based statistical language generation using graphical models and active learning",
"authors": [
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Mairesse",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Jur\u010d\u00ed\u010dek",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Keizer",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "1552--1561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fran\u00e7ois Mairesse, Milica Ga\u0161i\u0107, Filip Jur\u010d\u00ed\u010dek, Simon Keizer, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, ACL '10, pages 1552-1561, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A hybrid convolutional variational autoencoder for text generation",
"authors": [
{
"first": "Stanislau",
"middle": [],
"last": "Semeniuta",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Erhardt",
"middle": [],
"last": "Barth",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.02390"
]
},
"num": null,
"urls": [],
"raw_text": "Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. arXiv preprint arXiv:1702.02390.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deconvolutional latent-variable model for text sequence matching",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Qinliang",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.07109"
]
},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, and Lawrence Carin. 2017. Deconvolutional latent-variable model for text sequence matching. arXiv preprint arXiv:1709.07109.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning structured output representation using deep conditional generative models",
"authors": [
{
"first": "Kihyuk",
"middle": [],
"last": "Sohn",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Xinchen",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3483--3491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483-3491.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language generation for spoken dialogue system using rnn encoder-decoder networks",
"authors": [
{
"first": "Le-Minh",
"middle": [],
"last": "Van-Khanh Tran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "442--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van-Khanh Tran and Le-Minh Nguyen. 2017. Natural language generation for spoken dialogue system us- ing rnn encoder-decoder networks. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 442-451, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural-based natural language generation in dialogue using rnn encoder-decoder with semantic aggregation",
"authors": [
{
"first": "Le-Minh",
"middle": [],
"last": "Van-Khanh Tran",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tojo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "231--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Van-Khanh Tran, Le-Minh Nguyen, and Satoshi Tojo. 2017. Neural-based natural language generation in dialogue using rnn encoder-decoder with seman- tic aggregation. In Proceedings of the 18th An- nual SIGdial Meeting on Discourse and Dialogue, pages 231-240, Saarbrcken, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Dongho",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings SIGDIAL. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Dongho Kim, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, and Steve Young. 2015a. Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking. In Proceedings SIGDIAL. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-domain neural network language generation for spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.01232"
]
},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016a. Multi-domain neural network language generation for spoken dia- logue systems. arXiv preprint arXiv:1603.01232.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Toward multidomain language generation using recurrent neural networks",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ga\u0161ic",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"M"
],
"last": "Mrk\u0161ic",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161ic, Nikola Mrk\u0161ic, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2016b. Toward multi- domain language generation using recurrent neural networks.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Tsung-Hsien Wen",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015b. Semantically conditioned lstm-based natural lan- guage generation for spoken dialogue systems. In Proceedings of EMNLP. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improved variational autoencoders for text modeling using dilated convolutions",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.08139"
]
},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved varia- tional autoencoders for text modeling using dilated convolutions. arXiv preprint arXiv:1702.08139.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Variational Neural Machine Translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Zhang, D. Xiong, J. Su, H. Duan, and M. Zhang. 2016. Variational Neural Machine Translation. ArXiv e-prints.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deconvolutional paragraph representation learning",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Henao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4172--4182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, and Lawrence Carin. 2017. Decon- volutional paragraph representation learning. In Ad- vances in Neural Information Processing Systems, pages 4172-4182.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "The Dual latent variable model consists of two VAE models: (i) a VNLG (red-dashed box) is to generate utterances and (ii) a Variational CNN-DCNN is an auxiliary auto-encoding model (left side). The RNN/CNN Utterance Encoder is shared between the two VAEs.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Training Dual Latent Variable Model 4.1 Training VNLG Model Inspired by work of Zhang et al. (2016), we also employ the Monte-Carlo method to approximate the expectation of the posterior in Eq. 2, i.e.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Performance comparison of the models trained on Laptop domain.",
"uris": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"content": "<table/>",
"text": "Comparison of top Tv responses generated for different models in different scenarios. Errors are marked in colors([missing], misplaced, redundant, wrong, spelling mistake information).",
"html": null
}
}
}
}