{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:40:14.241123Z"
},
"title": "Exploring Controllable Text Generation Techniques",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh PA",
"location": {
"postCode": "15213"
}
},
"email": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh PA",
"location": {
"postCode": "15213"
}
},
"email": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University Pittsburgh PA",
"location": {
"postCode": "15213"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural controllable text generation is an important area gaining attention due to its plethora of applications. Although there is a large body of prior work in controllable text generation, there is no unifying theme. In this work, we provide a new schema of the pipeline of the generation process by classifying it into five modules. The control of attributes in the generation process requires modification of these modules. We present an overview of different techniques used to perform the modulation of these modules. We also provide an analysis on the advantages and disadvantages of these techniques. We further pave ways to develop new architectures based on the combination of the modules described in this paper.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural controllable text generation is an important area gaining attention due to its plethora of applications. Although there is a large body of prior work in controllable text generation, there is no unifying theme. In this work, we provide a new schema of the pipeline of the generation process by classifying it into five modules. The control of attributes in the generation process requires modification of these modules. We present an overview of different techniques used to perform the modulation of these modules. We also provide an analysis on the advantages and disadvantages of these techniques. We further pave ways to develop new architectures based on the combination of the modules described in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Controllable text generation is the task of generating natural sentences whose attributes can be controlled. The attributes to control can range from being stylistic such politeness, sentiment, formality, etc.; demographic attributes of the person writing the text such as gender, age, etc.; content such as information, keywords, entities, etc.; ordering of information, events, like plot summaries etc. Controlling various attributes of text generation has manifold applications. For instance in dialogue response generation task, work has been done in controlling persona (Zhang et al., 2018; Li et al., 2016b) , controlling various aspects of the response such as politeness (Niu and Bansal, 2018) , formality, authority etc, grounding the responses in external source of information (Zhou et al., 2018; , and controlling topic sequence (Tang et al., 2019; Prabhumoye et al., 2020) . Another application is story generation where you can control the ending , the persona (Chandu et al., 2019) , the plot (Yao et al., 2019) , and the topic sequence (Huang et al., 2019) . Controllable text generation is also used to modulate the formality and politeness of emails (Madaan et al., 2020) . Report generation can be controlled by pulling disparate source documents into a coherent unified whole, which can use a shared set of sources such as Wikipedia article generation .",
"cite_spans": [
{
"start": 575,
"end": 595,
"text": "(Zhang et al., 2018;",
"ref_id": "BIBREF72"
},
{
"start": 596,
"end": 613,
"text": "Li et al., 2016b)",
"ref_id": "BIBREF32"
},
{
"start": 679,
"end": 701,
"text": "(Niu and Bansal, 2018)",
"ref_id": "BIBREF39"
},
{
"start": 788,
"end": 807,
"text": "(Zhou et al., 2018;",
"ref_id": "BIBREF73"
},
{
"start": 841,
"end": 860,
"text": "(Tang et al., 2019;",
"ref_id": "BIBREF55"
},
{
"start": 861,
"end": 885,
"text": "Prabhumoye et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 975,
"end": 996,
"text": "(Chandu et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 1008,
"end": 1026,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF70"
},
{
"start": 1052,
"end": 1072,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1168,
"end": 1189,
"text": "(Madaan et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although there is a large body of prior work in controllable text generation, there is no unifying theme. Each work addresses a specific task in a specific context. In this paper we outline a new schema which connects prior work and provides an insight into various aspects of controllable text generation. The schema contains five modules that cover the overall generation pipeline and provide an understanding of the effect of each component on the generation process. Prior work has focused on specific parts of the schema that we outline here and we provide insights into their similarities. We provide an overview of these modules and also present an exploration of the various techniques used to control and update each of these modules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the controllable text generation tasks can be framed as conditional language generation tasks. They have an input or a source sequence U and an output or a target sequence Y to be generated. In this case, we model the probability of the target sequence conditioned on the source sequence given by P (Y|U) = T t=1 P (y t |U, y <t ). The generation of the target tokens of the sequence Y unfolds as a time series where each token y t is generated at a time step t. At a given time step t, a generative model takes in the previous hidden state h t\u22121 and the input x t at current time step. It performs a set of operations denoted by G to produce the output o t which is used to predict tokenx t . The ground truth token to be generated is denoted by y t . Figure 1 shows the schema proposed in this work consisting of five modules which can be used for controlling the generation process: (1) External Input module is responsible for the initialization h 0 , of the generation process. (2) Sequential Input module is the input x t at each time step of the generation.",
"cite_spans": [],
"ref_spans": [
{
"start": 761,
"end": 769,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Generator Operations module performs consistent operations or calculations on all the input at each time step. (4) Output module is the output o t which is further projected on to the vocabulary space to predict the tokenx t at each time step. (5) Training Objective module takes care of the loss functions used for training the generator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This schema provides an insight into the contributions of the various modules for controllable text generation. The main advantage of this schema is that it can be used with any algorithmic paradigm like sequence-to-sequence, adversarial methods, reinforcement learning, etc. The schema can also be used with non-autoregressive algorithms which may generate text using graphical structures like trees (Welleck et al., 2019; Guo et al., 2019) . In this paper, we focus on how this schema can be used to describe controllable text generation focusing particularly on the use of autoregressive models. This work paves way to designing new architectures based on our schema. This can be done by identifying promising techniques for each module and then combining them. Our schema can also be potentially used for applying these techniques on new tasks of similar nature. It also provides a standardized framework to position and compare new architectures with existing techniques.",
"cite_spans": [
{
"start": 401,
"end": 423,
"text": "(Welleck et al., 2019;",
"ref_id": "BIBREF61"
},
{
"start": 424,
"end": 441,
"text": "Guo et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The prior work on unifying text generation models has mostly focused on building efficient tool-kits and modular views of generation. For instance, (Reiter and Dale, 2000) details seven sub-tasks which are conceptually distinct to describe the generation process. These sub-tasks can be modelled separately or in some cases they may interleave. In (Reiter and Dale, 2000) , these seven sub-tasks are primarily characterized as content or structure tasks. Note that Reiter and Dale (2000) is not specific to neural text generation. Our work focuses specifically on controlling attributes in neural text generation process. We don't divide the generation pipeline into several sub-tasks but we divide the neural text generation process into modules all of which are required for generation. In (Hu et al., 2019b) , the focus is on building a toolkit for various text generation tasks based on the three properties of versatility, modularity and extensibility. This work enlists few model architectures and learning paradigms for various text generation tasks. In our work, we focus only on the generation process of controllable text generation tasks. We specifically detail the inputs, outputs and operations of the generation process. We do not provide any specific examples of architectures but provide an overview of the basic underlying modules which can be used with any learning paradigm. Xie (2017) provides a practical guide to the neural generation process describing it in terms of initialization, optimization, regularization and decoding strategies. Our work on the other hand does not delve into the implementation details of the generation pipeline but provides an overall schema for understanding of the various components involved.",
"cite_spans": [
{
"start": 148,
"end": 171,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF50"
},
{
"start": 348,
"end": 371,
"text": "(Reiter and Dale, 2000)",
"ref_id": "BIBREF50"
},
{
"start": 465,
"end": 487,
"text": "Reiter and Dale (2000)",
"ref_id": "BIBREF50"
},
{
"start": 792,
"end": 810,
"text": "(Hu et al., 2019b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the remainder of the paper, we denote the representation of the control attribute by s and the representation of the input or source sentence returned by the encoder as h e . In what follows, we first describe the possible ways of controlling attributes by modulating the external input in \u00a72, the sequential input in \u00a73, the generator operations in \u00a74, the output in \u00a75 and the training objective in \u00a76. At the end of each section, we provide an analysis of each of the techniques described and how they fit together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we discuss the different techniques which can be used to control the generation process by updating the initialization of the generator h 0 . In the standard generation process, h 0 is equal to h e . This is marked as module (1) in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 256,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "External Input",
"sec_num": "2"
},
{
"text": "One of the easiest ways to control the generation is to concatenate a control vector s to output of the encoder h e . The external input of the decoder h 0 will be [h e ; s], where [a; b] denotes concatenation. Here, the control vector s would provide the generator with a strong signal to guide the generation process. Fu et al. (2018) use this technique to control the style representation for their generator. The encoder builds representation that is devoid of the style and only retains content. The control vector for style is then concatenated to the encoder representation to initialize the decoder. This technique is commonly used in Zhou et al., 2018; to concatenate information from external sources to dialogue context to generate dialogue responses. Chandu et al. (2019) concatenate personality representation P derived from a separate corpus to generate visual stories. They also experiment with a simple arithmetic operation on h e given by h 0 = h e \u2212 S + P to get the initialization of the generator (here S denotes the average representation of the story). They observed that while concatenation technique is better at preserving the meaning of the generated story, the arithmetic operation provides a better signal of the personality for the generation process. Hoang et al. (2016) uses both the concatenation technique as well as performs a linear transform of s to obtain h 0 for language modelling task. The control vectors in this case represents meta data such as key-words, topics etc. In case of the linear transform h 0 = tanh(W 1 h e + W 2 s + b). The paper also explores adding the control vector to the encoder representation (h 0 = h e + s).",
"cite_spans": [
{
"start": 320,
"end": 336,
"text": "Fu et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 643,
"end": 661,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF73"
},
{
"start": 763,
"end": 783,
"text": "Chandu et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 1281,
"end": 1300,
"text": "Hoang et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "2.1"
},
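{
"text": "For illustration, the three variants above can be sketched in PyTorch-style pseudocode as follows (an illustrative sketch only; the module name, dimensions and mode flag are hypothetical and not taken from the cited works):\n\nimport torch\nimport torch.nn as nn\n\nclass ControlledInit(nn.Module):\n    # Builds the generator initialization h_0 from the encoder state h_e and control vector s.\n    def __init__(self, enc_dim, ctrl_dim, dec_dim, mode='linear'):\n        super().__init__()\n        self.mode = mode\n        self.w1 = nn.Linear(enc_dim, dec_dim)\n        self.w2 = nn.Linear(ctrl_dim, dec_dim)\n\n    def forward(self, h_e, s):\n        if self.mode == 'concat':\n            return torch.cat([h_e, s], dim=-1)        # h_0 = [h_e; s]\n        if self.mode == 'add':\n            return h_e + s                            # h_0 = h_e + s (dimensions must match)\n        return torch.tanh(self.w1(h_e) + self.w2(s))  # h_0 = tanh(W_1 h_e + W_2 s + b)\n\nNote that the concatenation variant requires the generator hidden size to equal enc_dim + ctrl_dim, which is one reason the linear transform is often the more economical choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "2.1"
},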
{
"text": "In case of addition, the resulting h 0 would be averaged representation of the input representation h e and s. Information could be lost in this case as control is not explicit. In case of concatenation, if the size of the control vector s is too small compared to h e , then s can be over-shadowed by h e and the generator may not be able to pay attention to s. Hence it is important to choose comparable dimensions for s and h e . But this increases the size of model considerably and could be quite costly. Linear transform avoids these issues and performs better than the other two techniques for Hoang et al. (2016) .",
"cite_spans": [
{
"start": 601,
"end": 620,
"text": "Hoang et al. (2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "2.1"
},
{
"text": "Kingma and Welling (2014) introduce variational auto-encoder, where you can stochastically draw a continuous latent variable z from a Gaussian distribution. The initialization of the generator h 0 is based on this latent variable. Bowman et al. (2016) use this concept for generating sentences from this continuous latent representation. This process of changing the encoder state h e can only be used with Kullback-Leibler (KL) Divergence training objective described in \u00a76.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Changes",
"sec_num": "2.2"
},
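{
"text": "For illustration, this stochastic initialization can be sketched as follows (an illustrative sketch with hypothetical names; the posterior parameters mu and logvar feed the KL objective of \u00a76.2):\n\nimport torch\nimport torch.nn as nn\n\nclass LatentInit(nn.Module):\n    # Draws z ~ N(mu, sigma^2) from the encoder state h_e and maps it to the initialization h_0.\n    def __init__(self, enc_dim, latent_dim, dec_dim):\n        super().__init__()\n        self.mu = nn.Linear(enc_dim, latent_dim)\n        self.logvar = nn.Linear(enc_dim, latent_dim)\n        self.to_dec = nn.Linear(latent_dim, dec_dim)\n\n    def forward(self, h_e):\n        mu, logvar = self.mu(h_e), self.logvar(h_e)\n        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick\n        return self.to_dec(z), mu, logvar",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Changes",
"sec_num": "2.2"
},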
{
"text": "In , Variational Auto-Encoder (VAE) is used to guide the generation process with topics of a document. A gaussian mixture model is used to incorporate topics into latent variables. In , VAE is used to control for sentiment attribute in style transfer task by constraining the posterior mean to a learned probability simplex.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Changes",
"sec_num": "2.2"
},
{
"text": "Such a design of controllable text generation works when the control attributes can be represented as latent variables for example style, topics, strategies etc. This design is difficult to work for content grounded text generation tasks where specific information, keywords or entities have to guide the generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stochastic Changes",
"sec_num": "2.2"
},
{
"text": "The encoder representation h e can be decomposed into multiple subspaces, each of which signifies a different attribute to be controlled. Liu and Lapata (2018) split the encoder representation h e into two components, one which represents the structure in the document and the other represents the semantic information. This formulation was used by (Balachandran et al., 2020) for controlling structure in abstractive summarization. This work performs the split with respect to the dimensions of h e . The method forces the first n dimensions of h e to capture meaning and the latter to capture structure. Balachandran et al. (2020) also show quantitative and qualitative analysis on the types of structures of documents learnt by this technique. Romanov et al. (2019) decompose the encoder representation h e into a form vector f and a meaning vector m. During the training phase, a discriminator enforces m to not carry any information about the form using an adversarial loss and a motivator is used for a motivational loss that encourages f to carry the information about the form. The generation process can then be guided to adhere to the desired target form. As opposed to splitting h e with respect to dimensions, this work learns subspaces W m and W f given by m = tanh(W m h e + b m ) and f = tanh(W f h e + b f ) respectively. When h e is projected on W m , it yields the meaning vector m and similarly when it is projected on W f it yields the form vector f . This work shows qualitatively how m and f are learnt in the subspaces using t-SNE plots. It also shows quantitatively the use of m and f in downstream paraphrase detection tasks. This builds interpretable representations for control attributes. Although, the effectiveness of this technique is not yet proven in the style transfer task or the abstractive summarization task. In both the above mentioned works, the models learns interpretable representations of control attributes but were not able to beat state of the art methods in their respective tasks. It is also worth noting that learning good decomposed vectors is especially hard when no supervision is provided on what the decomposed components are supposed to learn.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "Liu and Lapata (2018)",
"ref_id": "BIBREF34"
},
{
"start": 349,
"end": 376,
"text": "(Balachandran et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 606,
"end": 632,
"text": "Balachandran et al. (2020)",
"ref_id": "BIBREF1"
},
{
"start": 747,
"end": 768,
"text": "Romanov et al. (2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decompose",
"sec_num": "2.3"
},
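{
"text": "For illustration, the two decomposition strategies discussed in this section can be sketched as follows (an illustrative sketch with hypothetical names, not the cited authors' code):\n\nimport torch\nimport torch.nn as nn\n\nclass Decomposer(nn.Module):\n    # Projects h_e into a meaning vector m and a form vector f via learned subspaces.\n    def __init__(self, enc_dim, sub_dim):\n        super().__init__()\n        self.w_m = nn.Linear(enc_dim, sub_dim)\n        self.w_f = nn.Linear(enc_dim, sub_dim)\n\n    def forward(self, h_e):\n        m = torch.tanh(self.w_m(h_e))  # m = tanh(W_m h_e + b_m)\n        f = torch.tanh(self.w_f(h_e))  # f = tanh(W_f h_e + b_f)\n        return m, f\n\n# Dimension-wise alternative: the first n dimensions capture meaning, the rest capture structure.\n# meaning, structure = h_e[..., :n], h_e[..., n:]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decompose",
"sec_num": "2.3"
},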
{
"text": "This technique works well when the representation space of the input x can be decomposed into subspaces which can represent the control attributes. This means that the input x needs to contain signal of the control attributes. It is unlikely to work when the control attributes need to be externally provided. For example in case of content grounded generation tasks described in Zhou et al., 2018) , the input may not necessarily contain the content that needs to be generated. A separate input of the content to be generated is provided in these cases.",
"cite_spans": [
{
"start": 380,
"end": 398,
"text": "Zhou et al., 2018)",
"ref_id": "BIBREF73"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decompose",
"sec_num": "2.3"
},
{
"text": "A regularizer is often used to control the external input h 0 to the generator. In many cases, an adversarial loss to manipulate the latent space is used as an external feedback mechanism. This essentially controls the latent space of the encoder which is eventually provided as an initialization to the generator. In (Fu et al., 2018) , a multi-layer perceptron (MLP) is used for predicting the style labels from h 0 . Similarly, the adversarial loss is also used in (Wang et al., 2019a) to control the latent representation h 0 for style attributes. In (Romanov et al., 2019) , an adversarial loss is used to ensure that the meaning representation m does not carry any style signals. The adversarial loss is obtained by training a discriminator which takes as input a representation m and indicates if it carries the target style signal. Similarly, this work also employs a motivator loss which is the opposite of the adversarial loss to ensure that the style representation f actually does carry the stylistic information. John et al. (2019) use multiple losses to control the style and content information represented in h 0 .",
"cite_spans": [
{
"start": 318,
"end": 335,
"text": "(Fu et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 468,
"end": 488,
"text": "(Wang et al., 2019a)",
"ref_id": "BIBREF59"
},
{
"start": 555,
"end": 577,
"text": "(Romanov et al., 2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Feedback",
"sec_num": "2.4"
},
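{
"text": "For illustration, the adversarial feedback on h_0 can be sketched as follows (an illustrative sketch, not a faithful reproduction of any cited system; sizes are arbitrary). A discriminator is trained to predict the style label from h_0, and one common variant updates the encoder with the negated loss so that h_0 hides the style:\n\nimport torch\nimport torch.nn as nn\n\nenc_dim, num_styles, batch = 512, 2, 8  # illustrative sizes\ndiscriminator = nn.Sequential(\n    nn.Linear(enc_dim, 256), nn.ReLU(), nn.Linear(256, num_styles))\n\nh_0 = torch.randn(batch, enc_dim)                      # stand-in for the encoder output\nstyle_labels = torch.randint(0, num_styles, (batch,))\nd_loss = nn.CrossEntropyLoss()(discriminator(h_0), style_labels)  # trains the discriminator\nenc_adv_loss = -d_loss  # encoder update: fool the discriminator so h_0 carries no style signal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Feedback",
"sec_num": "2.4"
},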
{
"text": "The discriminator which provides external feedback has to be jointly trained with the generator. This technique can be useful with the decompose technique to ensure that the decomposed sub-spaces represent the desired control attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "External Feedback",
"sec_num": "2.4"
},
{
"text": "In this section we discuss the different techniques which can be used to manipulate the sequential input x t to the decoder at each time step. x t here is used to denote the word embedding of the token at time step t. This is marked as position (2) in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 260,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sequential Input",
"sec_num": "3"
},
{
"text": "Similar to changing the initialization, we can change the input to the decoder by concatenating the information at each time step with some additional control vector s. Typically, teacher forcing method (Williams and Zipser, 1989 ) is used to train the generator. At time step t, the generator takes as input the word embedding x t of the word that was predicted at step t \u2212 1 and predicts the word to be generated y t at the current time step. Note that x t = y t\u22121 . The input x t can be concatenated with s at each time step to control the generation process. Hence,x t = [x t ; s]. Noraset et al. (2017) , use this technique in the task of definition modeling. They concatenate word embedding vector s of the word to be defined at each time step of the definition generation process. Unfortunately, for this task, this technique has not proved to be effective compared to other techniques of controlling the generation. Zhou et al. (2018) concatenate the hidden representation of the external source of information s to each time step of dialogue response generation. Similarly, also concatenate the hidden representation of the external source of information s to each time step of Wikipedia update generation process. This technique did not achieve impressive results in this work as well. Harrison et al. (2019) concatenate a side constraint s which represents style and personality into the generation process. For this task of generating language from meaning representations with stylistic variation, this method performed better than conditioning the encoder with side constraint in terms of BLEU metric. Chandu et al. (2019) also concatenate the personality representation P at each time step of the story generation process. This is used to control the personality of the visual stories. In addition to concatenation, this work proposes to modify the sequential input asx t = x t \u2212 S + P (here S denotes the average representation of the story and P denotes the representation of the personality). The latter technique is better at generating personality conditioned stories than the concatenation technique. Neither of these techniques prove to be conclusively better than making similar changes to the external input module ( \u00a72.1). Note that in this technique, changes are made directly to the input of generation and not the context which is the case with external input. Also, most of the prior work has focused on recurrent neural network and its variants for making such changes. It could be interesting to see such changes made to transformers (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 203,
"end": 229,
"text": "(Williams and Zipser, 1989",
"ref_id": "BIBREF64"
},
{
"start": 586,
"end": 607,
"text": "Noraset et al. (2017)",
"ref_id": "BIBREF40"
},
{
"start": 924,
"end": 942,
"text": "Zhou et al. (2018)",
"ref_id": "BIBREF73"
},
{
"start": 1296,
"end": 1318,
"text": "Harrison et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 1616,
"end": 1636,
"text": "Chandu et al. (2019)",
"ref_id": "BIBREF3"
},
{
"start": 2565,
"end": 2587,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "3.1"
},
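{
"text": "For illustration, concatenating the control vector to the sequential input can be sketched as follows (an illustrative sketch with hypothetical names), here with a GRU cell:\n\nimport torch\nimport torch.nn as nn\n\nclass ControlledStep(nn.Module):\n    # One decoding step with x~_t = [x_t; s] as the recurrent input.\n    def __init__(self, emb_dim, ctrl_dim, hid_dim):\n        super().__init__()\n        self.cell = nn.GRUCell(emb_dim + ctrl_dim, hid_dim)\n\n    def forward(self, x_t, s, h_prev):\n        x_tilde = torch.cat([x_t, s], dim=-1)  # the same s is concatenated at every time step\n        return self.cell(x_tilde, h_prev)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "3.1"
},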
{
"text": "This module takes in the external input h 0 , the sequential input x t at time step t and performs the same set of computations (G) to return an output o t . Changes can be made to the set of operations G to include the the control vector s in computing o t . This is shown as position (3) in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generator Operations",
"sec_num": "4"
},
{
"text": "Recurrent Neural Networks (RNNs) are designed to model sequential information. RNNs perform the same operations for every element of a sequence, with the output depending on previous computations. This recurrence serves as a form of memory. It allows contextual information to flow through the network so that relevant outputs from previous time steps can be applied to network operations at the current time step. Theoretically, RNNs can make use of information in arbitrarily long sequences, but empirically, they are limited to looking back only a few steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "The Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) units are a type of RNNs that have additional 'memory cell' apart from standard units of basic RNNs. The memory cell can maintain information in memory for long periods of time. A set of gates is used to control when information enters the memory, when it's output, and when it's forgotten. This architecture lets them learn longer-term dependencies. The vanishing gradient problem of RNNs is resolved here. Gated Recurrent Units (GRUs) (Cho et al., 2014) are similar to LSTMs, but use a simplified structure designed to adaptively capture dependencies of different time scales. They also use a set of gates to control the flow of information, but they don't use separate memory cells, and they use fewer gates.",
"cite_spans": [
{
"start": 34,
"end": 68,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF16"
},
{
"start": 506,
"end": 524,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "The computations of the RNN or its variants can be modified to account for the control attribute. Additional gates can be added or the control attribute can be provided as an additional input to the standard gates of RNNs. Gan et al. (2017) propose a variant of the LSTM model, named factored LSTM, which controls style representation in image caption task. The parameters of the LSTM module which are responsible to transform the input x t are factored into three components U, S and V. The operations of the input (i t ), forget (f t ) and output gate (o t ) are given by:",
"cite_spans": [
{
"start": 223,
"end": 240,
"text": "Gan et al. (2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "i t = sigmoid(U ix S ix V ix x t + W ih h t\u22121 ) f t = sigmoid(U f x S f x V f x x t + W f h h t\u22121 ) o t = sigmoid(U ox S ox V ox x t + W oh h t\u22121 ) c t = tanh(U cx S cx V cx x t + W ch h t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "Particularly, the matrix set {S} is specific to each style in the task and is responsible to capture the underlying style features in the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
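{
"text": "For illustration, one factored gate can be written compactly as follows (an illustrative, unbatched sketch of the idea, not the authors' code); U and V are shared across styles while S_style is selected per target style:\n\nimport torch\n\ndef factored_gate(U, S_style, V, W, x_t, h_prev):\n    # sigmoid(U S V x_t + W h_{t-1}); the style-specific factor S_style carries the style signal.\n    return torch.sigmoid(U @ (S_style @ (V @ x_t)) + W @ h_prev)\n\n# Illustrative sizes: hidden size 4, input size 3, a single unbatched example.\nU, S_style, V = torch.randn(4, 4), torch.randn(4, 4), torch.randn(4, 3)\nW, x_t, h_prev = torch.randn(4, 4), torch.randn(3), torch.randn(4)\ni_t = factored_gate(U, S_style, V, W, x_t, h_prev)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},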
{
"text": "In (Kiddon et al., 2016) , the GRU unit is modified to accommodate extra inputs -goal g and agenda items E new t in the recipe generation task. The operation of the new componenth t is given by:",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "(Kiddon et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "h t = tanh(W h x t + r t U h h t\u22121 + s t Yg + q t (1 T L ZE new t ) T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "where s t is a goal select gate and q t is a item select gate. With this modification, the generation process is controlled for the items to be generation in the recipe and the goal. Wen et al. (2015) adapt the LSTM to control the dialogue act information in the generation process. The operation to compute the cell value c t is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "c t = f t c t\u22121 + i t c t + tanh(W d d t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "The dialogue act representation d t is build using another LSTM cell.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "RNNs, LSTMs and GRUs are commonly used to model controllable text generation tasks Rao and Tetreault, 2018; See et al., 2017; Zhou et al., 2018; Fu et al., 2018) . Most of these variants still have trouble remembering long sequences and are hence commonly used with attention mechanism ( \u00a75.1) on the source sequence.",
"cite_spans": [
{
"start": 83,
"end": 107,
"text": "Rao and Tetreault, 2018;",
"ref_id": "BIBREF49"
},
{
"start": 108,
"end": 125,
"text": "See et al., 2017;",
"ref_id": "BIBREF52"
},
{
"start": 126,
"end": 144,
"text": "Zhou et al., 2018;",
"ref_id": "BIBREF73"
},
{
"start": 145,
"end": 161,
"text": "Fu et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recurrent Neural Networks",
"sec_num": "4.1"
},
{
"text": "Transformers are proposed by (Vaswani et al., 2017) and they rely on attention mechanism to draw global dependencies between input and output. The Transformer uses stacked self-attention and point-wise, fully connected layers for both the encoder and decoder. The encoder stacks N identical layers, each of which has two sub-layers. The first sub-layer is a multi-head self-attention mechanism ( \u00a75.1), and the second sub-layer is a positionwise fully connected feed-forward network. Each sub-layer uses residual connections around each of the sub-layers, followed by layer normalization. The decoder has an additional third sub-layer, which performs multi-head attention over the output of the encoder stack.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer",
"sec_num": "4.2"
},
{
"text": "Since, attention mechanism is at the core of this generator, the decoder can attend over all positions of input sequence. Computations over a sequence can be parallelized in this case and hence it is faster. The modifications made to the computing units of RNN mentioned in \u00a74.1 which use parameters specific to control attributes such as style, dialog act etc. have not been explored with the transformers architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer",
"sec_num": "4.2"
},
{
"text": "Recently pre-trained conditional language models are used for text generation like GPT (Radford et al., 2018) , GPT2 (Radford et al., 2019) , XLNet (Yang et al., 2019) , etc. Several works have fine-tuned the pre-trained models for downstream controllable text generation tasks (Sudhakar et al., 2019; Urbanek et al., 2019) . The language modeling aspects of generation like fluency and grammaticality are already learnt if pre-trained models are used.",
"cite_spans": [
{
"start": 87,
"end": 109,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 117,
"end": 139,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 148,
"end": 167,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF69"
},
{
"start": 278,
"end": 301,
"text": "(Sudhakar et al., 2019;",
"ref_id": "BIBREF54"
},
{
"start": 302,
"end": 323,
"text": "Urbanek et al., 2019)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained models",
"sec_num": "4.3"
},
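{
"text": "A common recipe for such fine-tuning is to prepend a control code to every training example and fine-tune with the ordinary language-modelling loss. The following sketch uses the Hugging Face transformers library for illustration (the control tokens and the training text are hypothetical, and this is not the procedure of any specific cited work):\n\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\n\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\n# Hypothetical control tokens marking the desired attribute value.\ntokenizer.add_special_tokens({'additional_special_tokens': ['<positive>', '<negative>']})\nmodel.resize_token_embeddings(len(tokenizer))\n\nbatch = tokenizer('<positive> the food was', return_tensors='pt')\nout = model(**batch, labels=batch['input_ids'])  # out.loss is the language-modelling loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained models",
"sec_num": "4.3"
},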
{
"text": "These models are hard to fine-tune for sequence-to-sequence tasks such as machine translation, abstractive summarization etc. BART (Lewis et al., 2019 ) is a denoising autoencoder built with a sequence-tosequence model and is particularly effective when fine tuned for text generation. Alternatively, T5 (Raffel et al., 2019) treats every NLP problem as a \"text-to-text\" problem, i.e. taking text as input and producing new text as output. Hence, it can be adapted to controllable text generation tasks. Dathathri et al. (2019) propose a Plug and Play Language Model (PPLM) for controllable language generation. It combines a pre-trained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. This is similar to the classifier feedback technique described in \u00a76.3. Some of the other techniques described in this paper such as stochastic changes \u00a72.2 , external feedback \u00a72.4 and \u00a75.2, decompose \u00a72.3 etc would be hard to incorporate into pre-trained language models without modifying the model architecture or fine-tuning entailing the significant cost of retraining.",
"cite_spans": [
{
"start": 131,
"end": 150,
"text": "(Lewis et al., 2019",
"ref_id": "BIBREF30"
},
{
"start": 304,
"end": 325,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF48"
},
{
"start": 504,
"end": 527,
"text": "Dathathri et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained models",
"sec_num": "4.3"
},
{
"text": "In the standard generation process, o t is the output of the generator module which is projected to the vocabulary space to predict the tokenx t . Here, we discuss the various techniques used to modulate the sequential output o t at each time step t, before projecting it to the vocabulary space. This is marked as position (4) in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 339,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Output",
"sec_num": "5"
},
{
"text": "Attention is the most popular way of guiding the generation process. It is typically used to guide the generation process to focus on the source sequence (Bahdanau et al., 2015) . The attention calculating module takes as input the current hidden state h t of the generator at each time step t. The aim of this module is to determine a context vector c t that captures relevant source-side information to help predict the tokenx t . In case of global attention, all the hidden states of the encoder are considered to calculate the context vector c t (Luong et al., 2015) . This faces the the downside of expensive calculation especially for longer source sequences like documents. To overcome this challenge, local attention only chooses to focus only on a small subset of the source positions per target word. In this case, c t is calculated over a window of size D of the source hidden states. Vaswani et al. (2017) view attention as a mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key. This work proposes the simultaneous use of scaled dot-product attention which helps in parallelizing computation and a multi-headed attention which allows the model to jointly attend to information from different representation subspaces at different positions. Sudhakar et al. (2019) use self-attention to control for style by simply adding a special target style token in the source sequence. also use transformers to attend over information from external document for guided dialogue response generation. (Zhang et al., 2018) uses the encoded representation of personas to compute the attention weights a t at a given time step of the decoder. The attention is re-weighted according to the persona of the response to be generated in dialogue. So far, work has not been done to modulate the attention weights to control for attributes like style, topic, content etc.",
"cite_spans": [
{
"start": 154,
"end": 177,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 550,
"end": 570,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF37"
},
{
"start": 896,
"end": 917,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF57"
},
{
"start": 1495,
"end": 1517,
"text": "Sudhakar et al. (2019)",
"ref_id": "BIBREF54"
},
{
"start": 1741,
"end": 1761,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "5.1"
},
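{
"text": "For reference, the scaled dot-product attention at the core of these models can be sketched as follows (an illustrative sketch; tensors are assumed to have shape (batch, heads, length, d_k)):\n\nimport torch\n\ndef scaled_dot_product_attention(q, k, v, mask=None):\n    # Weights each value by the compatibility of its key with the query.\n    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)\n    if mask is not None:\n        scores = scores.masked_fill(mask == 0, float('-inf'))\n    weights = torch.softmax(scores, dim=-1)  # attention distribution over source positions\n    return weights @ v, weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention",
"sec_num": "5.1"
},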
{
"text": "The output latent space of the generator can be controlled by external feedback. Similar to changing the external input h 0 , the output latent space can also be changed using adversarial loss. In (Logeswaran et al., 2018) , an adversarial loss is used which encourages the generation realistic and attribute compatible sentences. The adversarial loss tries to match the distribution of sentence and attribute vector pairs (x, s) where the sentence can either be a real or generated sentence. Similarly, in (Shen et al., 2017) , a two discriminator losses in the style transfer task. Each discriminator is trained to distinguish between a sentence which came from the real target attribute distribution and a sentence that was transferred from source to target attribute. This work uses Professor-Forcing (Lamb et al., 2016) to match the hidden states of the generator and the discriminator. Gong et al. (2019) also control the output latent space by providing different types of rewards like style reward, semantic reward and fluency reward in the reinforcement learning setup. The discriminator used to obtain the adversarial loss has to be jointly trained with the generator.",
"cite_spans": [
{
"start": 197,
"end": 222,
"text": "(Logeswaran et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 507,
"end": 526,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF53"
},
{
"start": 805,
"end": 824,
"text": "(Lamb et al., 2016)",
"ref_id": "BIBREF29"
},
{
"start": 892,
"end": 910,
"text": "Gong et al. (2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "External Feedback",
"sec_num": "5.2"
},
{
"text": "Hoang et al. (2016) demonstrate three simple ways of changing the output o t of an RNN to control for meta information like topic, keywords etc. The three ways demonstrated in (Hoang et al., 2016) are: (1) addition, where the modified output\u00f5 t is\u00f5 t = o t + s, (2) concatenation, where the modified output o t (\u00f5 t = [o t ; s]), and (3) using a perceptron layer dependent on s and o t . In this case,\u00f5 t is given b\u1ef9",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Hoang et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "5.3"
},
{
"text": "o t = tanh(W o o t + W s s + b o ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "5.3"
},
{
"text": "In each of the three cases, the modified output\u00f5 t is then projected to the vocabulary space to predict the tokenx t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "5.3"
},
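{
"text": "For illustration, the perceptron-layer variant can be sketched as follows (an illustrative sketch; sizes are arbitrary): the step output o_t is fused with the control vector s before the vocabulary projection.\n\nimport torch\nimport torch.nn as nn\n\nhid, ctrl, vocab = 512, 64, 50000                # illustrative sizes\nW_o, W_s = nn.Linear(hid, hid), nn.Linear(ctrl, hid, bias=False)\nto_vocab = nn.Linear(hid, vocab)\n\no_t, s = torch.randn(8, hid), torch.randn(8, ctrl)\no_tilde = torch.tanh(W_o(o_t) + W_s(s))          # o~_t = tanh(W_o o_t + W_s s + b_o)\nlogits = to_vocab(o_tilde)                       # projection to the vocabulary space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Arithmetic or Linear Transform",
"sec_num": "5.3"
},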
{
"text": "In this section we describe various methods used to control the generation using objective functions. The output o t at each time step t of the generation process is projected to the vocabulary space using a linear transform (\u00f4 t = W o o t + b). A tokenx t is predicted from the vocabulary by passing\u00f4 t through a softmax function and taking the max value. The predicted tokenx t is compared with the reference token y t using a loss function. This loss function can be tweaked to ensure that the generated text carries the desired control attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "6"
},
{
"text": "Here, we describe the loss objectives commonly used in natural language generation tasks. These loss objectives do not try to control for any attribute. Instead they try to ensure fluent, grammatical and diverse generations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
{
"text": "Cross Entropy Loss: This is the basic loss used to compare the generated tokens with the reference tokens and is used in all text generation process. At each time step t, the generation has to predict a token from the vocabulary. Hence, it could be seen as a classification problem with number of classes being equal to vocabulary size. The categorical cross entropy loss is given by \u2212\u03a3 M c=1 y t,c log(p t,c ). Here p t,c is the probability of the token c at time step t. Note that p t = softmax(\u00f5 t ) is the probability distribution over the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
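{
"text": "In code, the token-level cross entropy over a batch of generation steps can be sketched as follows (an illustrative sketch; the tensors are random stand-ins):\n\nimport torch\nimport torch.nn.functional as F\n\nlogits = torch.randn(8, 20, 50000)           # (batch, time, vocab) scores before the softmax\ntargets = torch.randint(0, 50000, (8, 20))   # reference tokens y_t\n# Averages -log p_{t,c} of the reference class over all time steps and examples.\nloss = F.cross_entropy(logits.view(-1, 50000), targets.view(-1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},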
{
"text": "Unlikelihood loss: This maintains a set of negative candidates which is based on repeating tokens or n-grams and frequent tokens (Welleck et al., 2020) . This set is updated at each time step as tokens are generated. This works at both token and sequence level and the objective tries to minimize the repetitions in generations. This is used at train time in augmentation with the maximum likelihood objective and can be used for any task.",
"cite_spans": [
{
"start": 129,
"end": 151,
"text": "(Welleck et al., 2020)",
"ref_id": "BIBREF62"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
{
"text": "Decoding strategies: These strategies are not used as a loss objective during training. Many of these objectives rely on post-hoc decoding strategies such as stochastic decoding which include Top k-sampling , nucleus sampling (Holtzman et al., 2020) , or beam search variants (Paulus et al., 2018; Kulikov et al., 2019; Vijayakumar et al., 2018; Holtzman et al., 2018) .",
"cite_spans": [
{
"start": 226,
"end": 249,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 276,
"end": 297,
"text": "(Paulus et al., 2018;",
"ref_id": "BIBREF41"
},
{
"start": 298,
"end": 319,
"text": "Kulikov et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 320,
"end": 345,
"text": "Vijayakumar et al., 2018;",
"ref_id": "BIBREF58"
},
{
"start": 346,
"end": 368,
"text": "Holtzman et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
{
"text": "Specifically, we discuss the Diversity-Promoting objective which is used to generate a varied set of sentences given similar inputs. Particularly, Li et al. (2016a) use Maximum Mutual Information (MMI) as an objective function for the dialogue response generation task. Most generation systems use maximum likelihood objective but this objective additionally tries to reduce the proportion of generic responses. It is given by:T = argmax T {logp(T|S) \u2212 \u03bblogp(T)}",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "Li et al. (2016a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
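{
"text": "Since the MMI objective is applied at inference time, it is typically used to re-score a list of candidate generations; a minimal sketch (with hypothetical scoring functions) is:\n\ndef mmi_rerank(candidates, cond_logprob, lm_logprob, lam=0.5):\n    # Picks the candidate T maximizing log p(T|S) - lam * log p(T); the second term\n    # penalizes generic, high-frequency responses.\n    return max(candidates, key=lambda T: cond_logprob(T) - lam * lm_logprob(T))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},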
{
"text": "whereT is the generated target sequence, T is the reference target sequence and S is the source sequence. The second term controls the generation of the high frequency or the generic target sequences. Note that this objective is only used during the inference and the generators are trained using cross entropy loss. Zhang et al. (2018) , also use a diversity encouraging objective for dialogue response generation. They train a discriminator to calculate similarity between the source S and target T (D \u03c8 (T, S)) , as well as between the source S and the generated targetT (D \u03c8 (T, S)). They finally try to minimize the difference between D \u03c8 (T, S) and D \u03c8 (T, S).",
"cite_spans": [
{
"start": 317,
"end": 336,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF72"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Loss Objective",
"sec_num": "6.1"
},
{
"text": "The Kullback-Leibler (KL) Divergence score, quantifies how much one probability distribution differs from another probability distribution. The KL divergence between two distributions Q and P is often stated using the notation KL(P Q), where the operator \" \" indicates divergence or P's divergence from Q. Note that KL Divergence is not symmetric i.e KL(P Q) = KL(Q P). KL divergence can be used to minimize the information loss while approximating a distribution. In text generation, the KL Divergence is combined with the evidence lower bound (ELBO) to approximately maximize the marginal likelihood of data p(x) which helps in better generations. This objective is used in variational autoencoders and its variants in combination with sampling techniques described in \u00a72.2. This objective fits in the controllable text generation paradigm because it allows you to approximate the posterior distribution of the control variables in the latent z-space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KL Divergence",
"sec_num": "6.2"
},
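{
"text": "For a diagonal Gaussian posterior, the KL term that is added to the reconstruction loss has the familiar closed form below (an illustrative sketch; mu and logvar are the posterior parameters, e.g. as produced by the stochastic initialization of \u00a72.2):\n\nimport torch\n\ndef kl_to_standard_normal(mu, logvar):\n    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions and averaged over the batch.\n    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KL Divergence",
"sec_num": "6.2"
},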
{
"text": "This loss is specifically used to ensure that the generated tokensx comply with the control attributes s. Note the difference between this loss and the external feedback loss used for the external input module and the output module is that this loss operates at the token level and the external feedback loss works on the latent hidden representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Loss",
"sec_num": "6.3"
},
{
"text": "In case of style transfer task, this loss is used to guide the generation process to output the target style tokens. Some works Sudhakar et al., 2019; Hu et al., 2017) use this loss to discriminate between all the styles in their task (one verses all fashion). This type of design will suffer from low accuracy scores when the number of styles increases. To counter this problem, this loss can be setup to calculate if the generated sentencex belongs to style s 1 or not and similarly to calculate another separate loss term for each style (Chandu et al., 2019) . This type of loss design encounters increasing number of loss terms depending on the number of styles. The third way to motivate this loss term is to discriminating between a sentence x from data which belongs to style s 1 and a generated sentencex which belongs to the same style s 1 (Yang et al., 2018) . Again, you would need as many loss terms as the number of styles in this case. All of these works use cross entropy loss function to measure their losses.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "Sudhakar et al., 2019;",
"ref_id": "BIBREF54"
},
{
"start": 151,
"end": 167,
"text": "Hu et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 540,
"end": 561,
"text": "(Chandu et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 849,
"end": 868,
"text": "(Yang et al., 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Loss",
"sec_num": "6.3"
},
{
"text": "Hu et al. (2019a) use a classifier based loss in the visual storytelling task. The classifier is a pre-trained language model (Devlin et al., 2019) used to measure the coherence between generated sentences of the story. Particularly, the classifier takes as input two sentences at a timex 1 andx 2 and outputs a binary label which indicates ifx 2 followsx 1 . In this case, the control variable is coherence in stories which is used to guide the generator to produce consistent sentences.",
"cite_spans": [
{
"start": 126,
"end": 147,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier Loss",
"sec_num": "6.3"
},
{
"text": "Depending on the end task and the attribute to be controlled, you can design different loss objectives to ensure that generations abide by the target attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific Loss",
"sec_num": "6.4"
},
{
"text": "Strategy Loss: Zhou et al. (2020) use a dialogue strategy based objective to generate responses for negotiation tasks. This task has ground truth strategies that lead to better negotiations. This loss captures the probability of a particular strategy occurring for the next utterance given the dialogue history. It guides the generator to align the responses with particular strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific Loss",
"sec_num": "6.4"
},
{
"text": "Coverage Loss: Generating repeated words or phrases is a common problem for text generation systems, and this becomes especially pronounced for multi-sentence text generation task such as abstractive document summarization. See et al. (2017) introduce a coverage loss which penalizes repeatedly attending to the same locations of the source document.",
"cite_spans": [
{
"start": 224,
"end": 241,
"text": "See et al. (2017)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific Loss",
"sec_num": "6.4"
},
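{
"text": "For illustration, the coverage penalty can be sketched as follows (an illustrative sketch of the idea, not the authors' code): the coverage vector accumulates the attention paid to each source position in previous steps, and its overlap with the current attention distribution is penalized.\n\nimport torch\n\ndef coverage_loss(attn):\n    # attn: (batch, tgt_len, src_len) attention distributions over source positions.\n    coverage = torch.cumsum(attn, dim=1) - attn                     # attention mass before step t\n    return torch.sum(torch.minimum(attn, coverage), dim=-1).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific Loss",
"sec_num": "6.4"
},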
{
"text": "Structure loss: Li et al. (2018) introduce two new loss objectives structural compression and structural coverage based on sentence-level attention. These objectives are specially designed for the task of abstractive document summarization. structural compression is used to generate a sentence by compressing several specific source sentences and structural coverage is used to cover more salient information of the original document. These objectives leverage document structure in document summarization, and explore the effectiveness of capturing structural properties of document summarization by regularization of the generative model to generate more informative and concise summaries.",
"cite_spans": [
{
"start": 16,
"end": 32,
"text": "Li et al. (2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Specific Loss",
"sec_num": "6.4"
},
{
"text": "Discrete space issues: The classifier loss ( \u00a76.3) is used to determine if the generated tokensx are in accordance with the target control attribute s. To calculate the loss, the generated tokensx are provided as input to the classifier. If the tokens in this case are generated using the argmax then this function is not differentiable. Hence, passing tokens effectively to the classifier is a challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "In (Yu et al., 2017) , the REINFORCE (Williams, 1992) algorithm is used and rewards are calculated using Monte Carlo search sampling for the next tokens. This technique is known to be unstable due to the high variance of the sampled gradient during training (Shen et al., 2017) . Kusner and Hern\u00e1ndez-Lobato (2016) introduce the Gumbel-softmax distribution as a solution. It approximates the multinomial distribution parameterized in terms of the softmax distribution. Here the predicted token is:",
"cite_spans": [
{
"start": 3,
"end": 20,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF71"
},
{
"start": 258,
"end": 277,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "x t = softmax(1/\u03c4 (\u00f4 t + g t )),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "where\u00f4 t is described in ( \u00a76), \u03c4 is temperature parameter and g t is sampled independently from the Gumbel distribution. Hu et al. (2017) use this technique without sampling from the Gumbel distribution but by only training the temperature parameter.",
"cite_spans": [
{
"start": 122,
"end": 138,
"text": "Hu et al. (2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
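{
"text": "For illustration, the Gumbel-softmax relaxation can be sketched as follows (an illustrative sketch): the soft token distribution is differentiable and can be fed to the attribute classifier in place of hard argmax tokens.\n\nimport torch\nimport torch.nn.functional as F\n\ndef gumbel_softmax_tokens(logits, tau=1.0):\n    # softmax((o^_t + g_t) / tau) with g_t drawn from Gumbel(0, 1).\n    g = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)\n    return F.softmax((logits + g) / tau, dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},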
{
"text": "Combined module architecture: It is also possible to combine techniques from multiple modules to control the generation process. We mention some of the prior works that have successfully combined various modules here. Hu et al. (2017) combine stochastic changes ( \u00a72.2), KL Divergence loss ( \u00a76.2) and a classifier loss ( \u00a76.3). It adopts a variational auto-encoder along with KL divergence loss objective and further adds a discriminator loss which signifies if the generated sentence belong to the target attribute. As mentioned earlier, Romanov et al. (2019) combine the decomposition of the external input ( \u00a72.3) with external feedback provided to the external input ( \u00a72.4). External feedback is used to ensure that the decomposed latent sub-spaces represent the desired target attributes. establishes formal connections between generative adversarial networks (related to \u00a75.2 and \u00a76.3) and variational auto-encoders (related to \u00a72.2 and \u00a76.2). Determining the best possible combination of modules through empirical evaluation remains an open challenge.",
"cite_spans": [
{
"start": 218,
"end": 234,
"text": "Hu et al. (2017)",
"ref_id": "BIBREF19"
},
{
"start": 540,
"end": 561,
"text": "Romanov et al. (2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "In this paper we propose a new schema to organize the prior work in controllable text generation. The schema contains five modules, each of which plays an important role in the generation process. We detail the various techniques used to modulate each of the five modules to perform controllable text generation. We also provide theoretical understanding and qualitative analysis of these techniques. This understanding paves way to new architectures based on combinations of these modules. The future work will focus on empirical comparison of these techniques to gain an insight into their usefulness and strength.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
}
],
"back_matter": [
{
"text": "This work was supported in part by NSF IIS1763562, and ONR Grant N000141812861. We would like to thank Elijah Mayfield, Sai Krishna Rallabandi, Shruti Palaskar, Aman Madaan, Bhuwan Dhingra, Harsh Jhamtani and Khyathi Chandu for valuable discussions at earlier stages of this work. We are also grateful to the anonymous reviewers for their constructive feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Structsum: Incorporating latent and explicit sentence dependencies for single document summarization",
"authors": [
{
"first": "Vidhisha",
"middle": [],
"last": "Balachandran",
"suffix": ""
},
{
"first": "Artidoro",
"middle": [],
"last": "Pagnoni",
"suffix": ""
},
{
"first": "Jay",
"middle": [
"Yoon"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Dheeraj",
"middle": [],
"last": "Rajagopal",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidhisha Balachandran, Artidoro Pagnoni, Jay Yoon Lee, Dheeraj Rajagopal, Jaime Carbonell, and Yulia Tsvetkov. 2020. Structsum: Incorporating latent and explicit sentence dependencies for single document summarization. ArXiv e-prints, 03.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Gen- erating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany, August. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "my way of telling a story\": Persona based grounded story generation",
"authors": [
{
"first": "Khyathi",
"middle": [],
"last": "Chandu",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "11--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khyathi Chandu, Shrimai Prabhumoye, Ruslan Salakhutdinov, and Alan W Black. 2019. \"my way of telling a story\": Persona based grounded story generation. In Proceedings of the Second Workshop on Storytelling, pages 11-21.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Plug and play language models: a simple approach to controlled text generation",
"authors": [
{
"first": "Sumanth",
"middle": [],
"last": "Dathathri",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Janice",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Piero",
"middle": [],
"last": "Molino",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Yosinski",
"suffix": ""
},
{
"first": "Rosanne",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.02164"
]
},
"num": null,
"urls": [],
"raw_text": "Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2019. Plug and play language models: a simple approach to controlled text generation. arXiv preprint arXiv:1912.02164.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "889--898",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Style transfer in text: Exploration and evaluation",
"authors": [
{
"first": "Zhenxin",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Xiaoye",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stylenet: Generating attractive visual captions with styles",
"authors": [
{
"first": "Chuang",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "3137--3146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3137-3146.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A knowledge-grounded neural conversation model",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Reinforcement learning based text style transfer without parallel training corpus",
"authors": [
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Suma",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jinjun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Wen-Mei",
"middle": [],
"last": "Hwu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3168--3180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyu Gong, Suma Bhat, Lingfei Wu, JinJun Xiong, and Wen-mei Hwu. 2019. Reinforcement learning based text style transfer without parallel training corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3168-3180, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Non-autoregressive neural machine translation with enhanced decoder input",
"authors": [
{
"first": "Junliang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Linli",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "3723--3730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3723-3730.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Maximizing stylistic control and semantic accuracy in nlg: Personality variation and discourse contrast",
"authors": [
{
"first": "Vrindavan",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Lena",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Shereen",
"middle": [],
"last": "Oraby",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 1st Workshop on Discourse Structure in Neural NLG",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vrindavan Harrison, Lena Reed, Shereen Oraby, and Marilyn Walker. 2019. Maximizing stylistic control and semantic accuracy in nlg: Personality variation and discourse contrast. In Proceedings of the 1st Workshop on Discourse Structure in Neural NLG, pages 1-12.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Incorporating side information into recurrent neural network language models",
"authors": [
{
"first": "Cong Duy Vu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1250--1255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cong Duy Vu Hoang, Trevor Cohn, and Gholamreza Haffari. 2016. Incorporating side information into recurrent neural network language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1250-1255.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to write with cooperative discriminators",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Golub",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1638--1649",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1638-1649.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degenera- tion. In International Conference on Learning Representations.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proc. ICML, pages 1587-1596.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On unifying deep generative models",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P. Xing. 2018. On unifying deep generative models. In International Conference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "What makes a good story? designing composite rewards for visual storytelling",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05316"
]
},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, and Graham Neubig. 2019a. What makes a good story? designing composite rewards for visual storytelling. arXiv preprint arXiv:1909.05316.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Texar: A modularized, versatile, and extensible toolkit for text generation",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tiancheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "159--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, et al. 2019b. Texar: A modularized, versatile, and extensible toolkit for text gener- ation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 159-164.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Hierarchically structured reinforcement learning for topically coherent visual story generation",
"authors": [
{
"first": "Qiuyuan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Dapeng",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "8465--8472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiuyuan Huang, Zhe Gan, Asli Celikyilmaz, Dapeng Wu, Jianfeng Wang, and Xiaodong He. 2019. Hierarchically structured reinforcement learning for topically coherent visual story generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8465-8472.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Disentangled representation learning for non-parallel text style transfer",
"authors": [
{
"first": "Vineet",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Hareesh",
"middle": [],
"last": "Bahuleyan",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "424--434",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled representation learning for non-parallel text style transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 424-434, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Globally coherent text generation with neural checklist models",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Kiddon",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "329--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Kiddon, Luke Zettlemoyer, and Yejin Choi. 2016. Globally coherent text generation with neural checklist models. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 329-339, Austin, Texas, November. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Auto-encoding variational bayes",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Max Welling. 2014. Auto-encoding variational bayes. In Proc. ICLR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Importance of search and evaluation strategies in neural dialogue modeling",
"authors": [
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "76--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilia Kulikov, Alexander Miller, Kyunghyun Cho, and Jason Weston. 2019. Importance of search and evaluation strategies in neural dialogue modeling. In Proceedings of the 12th International Conference on Natural Lan- guage Generation, pages 76-87, Tokyo, Japan, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Gans for sequences of discrete elements with the gumbel-softmax distribution",
"authors": [
{
"first": "J",
"middle": [],
"last": "Matt",
"suffix": ""
},
{
"first": "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato",
"middle": [],
"last": "Kusner",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.04051"
]
},
"num": null,
"urls": [],
"raw_text": "Matt J Kusner and Jos\u00e9 Miguel Hern\u00e1ndez-Lobato. 2016. Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Professor forcing: A new algorithm for training recurrent networks",
"authors": [
{
"first": "Alex M",
"middle": [],
"last": "Lamb",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Goyal Alias Parth",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aaron",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "4601--4609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex M Lamb, Anirudh Goyal Alias Parth Goyal, Ying Zhang, Saizheng Zhang, Aaron C Courville, and Yoshua Bengio. 2016. Professor forcing: A new algorithm for training recurrent networks. In Advances in neural information processing systems, pages 4601-4609.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoy- anov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A diversity-promoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A persona-based neural conversation model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Georgios",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proc. ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improving neural abstractive document summarization with structural regularization",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yajuan",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Yuanzhuo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4078--4087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018. Improving neural abstractive document sum- marization with structural regularization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4078-4087, Brussels, Belgium, October-November. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Learning structured text representations",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "63--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics, 6:63-75.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Generating wikipedia by summarizing long sequences",
"authors": [
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Pot",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Goodrich",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Sepassi",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Repre- sentations.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Content preserving text generation with attribute controls",
"authors": [
{
"first": "Lajanugen",
"middle": [],
"last": "Logeswaran",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5103--5113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lajanugen Logeswaran, Honglak Lee, and Samy Bengio. 2018. Content preserving text generation with attribute controls. In Advances in Neural Information Processing Systems, pages 5103-5113.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Politeness transfer: A tag and generate approach",
"authors": [
{
"first": "Aman",
"middle": [],
"last": "Madaan",
"suffix": ""
},
{
"first": "Amrith",
"middle": [],
"last": "Setlur",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Parekh",
"suffix": ""
},
{
"first": "Barnabas",
"middle": [],
"last": "Poczos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1869--1881",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhut- dinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1869-1881, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Polite dialogue generation without parallel data",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "373--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Niu and Mohit Bansal. 2018. Polite dialogue generation without parallel data. Transactions of the Associa- tion for Computational Linguistics, 6:373-389.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Definition modeling: Learning to define word embeddings in natural language",
"authors": [
{
"first": "Thanapon",
"middle": [],
"last": "Noraset",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Birnbaum",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanapon Noraset, Chen Liang, Larry Birnbaum, and Doug Downey. 2017. Definition modeling: Learning to define word embeddings in natural language. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summariza- tion. In International Conference on Learning Representations.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Towards controllable story generation",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Storytelling",
"volume": "",
"issue": "",
"pages": "43--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. 2018. Towards controllable story genera- tion. In Proceedings of the First Workshop on Storytelling, pages 43-49.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Style transfer through back-translation",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Shrimai Prabhumoye",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "866--876",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876, Melbourne, Australia, July.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Towards content transfer through grounded text generation",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2622--2632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Chris Quirk, and Michel Galley. 2019. Towards content transfer through grounded text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2622- 2632, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "2020. I love your chain mail! making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents",
"authors": [
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.02878"
]
},
"num": null,
"urls": [],
"raw_text": "Shrimai Prabhumoye, Margaret Li, Jack Urbanek, Emily Dinan, Douwe Kiela, Jason Weston, and Arthur Szlam. 2020. I love your chain mail! making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents. arXiv preprint arXiv:2002.02878.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer",
"authors": [
{
"first": "Sudha",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "129--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 129-140.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Building Natural Language Generation Systems",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Dale",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, USA.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Adversarial decomposition of text representation",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Donahue",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "815--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexey Romanov, Anna Rumshisky, Anna Rogers, and David Donahue. 2019. Adversarial decomposition of text representation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 815-825, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proc. ACL.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Style transfer from non-parallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "6830--6841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In Advances in neural information processing systems, pages 6830-6841.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "transforming\" delete, retrieve, generate approach for controlled text style transfer",
"authors": [
{
"first": "Akhilesh",
"middle": [],
"last": "Sudhakar",
"suffix": ""
},
{
"first": "Bhargav",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Arjun",
"middle": [],
"last": "Maheswaran",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3260--3270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. \"transforming\" delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3260-3270.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Target-guided open-domain conversation",
"authors": [
{
"first": "Jianheng",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Tiancheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5624--5634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Target-guided open-domain conversation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5624-5634.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Learning to speak and act in a fantasy text adventure game",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Karamcheti",
"suffix": ""
},
{
"first": "Saachi",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Humeau",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "673--683",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt\u00e4schel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 673-683.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Diverse beam search for improved description of complex scenes",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ashwin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ramprasaath",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Crandall",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin K Vijayakumar, Michael Cogswell, Ramprasaath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Controllable unsupervised text attribute transfer via editing entangled latent representation",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "11034--11044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ke Wang, Hang Hua, and Xiaojun Wan. 2019a. Controllable unsupervised text attribute transfer via editing entangled latent representation. In Advances in Neural Information Processing Systems, pages 11034-11044.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Topic-guided variational auto-encoder for text generation",
"authors": [
{
"first": "Wenlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Hongteng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruiyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "166--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenlin Wang, Zhe Gan, Hongteng Xu, Ruiyi Zhang, Guoyin Wang, Dinghan Shen, Changyou Chen, and Lawrence Carin. 2019b. Topic-guided variational auto-encoder for text generation. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 166-177, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Non-monotonic sequential text generation",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Kiant\u00e9",
"middle": [],
"last": "Brantley",
"suffix": ""
},
{
"first": "Hal",
"middle": [
"Daum\u00e9"
],
"last": "Iii",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "6716--6726",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Kiant\u00e9 Brantley, Hal Daum\u00e9 Iii, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In International Conference on Machine Learning, pages 6716-6726.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Neural text generation with unlikelihood training",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Ilia",
"middle": [],
"last": "Kulikov",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2020. Neural text generation with unlikelihood training. In International Conference on Learning Representations.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1711--1721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrk\u0161i\u0107, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semanti- cally conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "A learning algorithm for continually running fully recurrent neural networks",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Zipser",
"suffix": ""
}
],
"year": 1989,
"venue": "Neural computation",
"volume": "1",
"issue": "2",
"pages": "270--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": [
"J"
],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine learning",
"volume": "8",
"issue": "3-4",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learn- ing. Machine learning, 8(3-4):229-256.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Neural text generation: A practical guide",
"authors": [
{
"first": "Ziang",
"middle": [],
"last": "Xie",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.09534"
]
},
"num": null,
"urls": [],
"raw_text": "Ziang Xie. 2017. Neural text generation: A practical guide. arXiv preprint arXiv:1711.09534.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Unsupervised controllable text generation with global variation discovery and disentanglement",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yanshuai",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jackie Chi Kit",
"middle": [],
"last": "Cheung",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. Unsupervised controllable text generation with global variation discovery and disentanglement. CoRR, abs/1905.11975.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Unsupervised text style transfer using language models as discriminators",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7287--7298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287-7298.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information process- ing systems, pages 5754-5764.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Plan-and-write: Towards better automatic storytelling",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Dongyan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7378--7385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. 2019. Plan-and-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 33, pages 7378-7385.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-first AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-first AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2204--2213",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Person- alizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213. Association for Computational Linguistics.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "A dataset for document grounded conversations",
"authors": [
{
"first": "Kangyan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shrimai",
"middle": [],
"last": "Prabhumoye",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "708--713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kangyan Zhou, Shrimai Prabhumoye, and Alan W Black. 2018. A dataset for document grounded conversations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 708-713.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history",
"authors": [
{
"first": "Yiheng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiheng Zhou, Yulia Tsvetkov, Alan W Black, and Zhou Yu. 2020. Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Modules that control the generation process. Each module is numbered by the circle next to it.",
"num": null,
"uris": null
}
}
}
}