{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:44:55.236769Z"
},
"title": "A Semantically Consistent and Syntactically Variational Encoder-Decoder Framework for Paraphrase Generation",
"authors": [
{
"first": "Wenqing",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "Lab of Advanced Optical Communication System and Network",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jidong",
"middle": [],
"last": "Tian",
"suffix": "",
"affiliation": {
"laboratory": "Lab of Advanced Optical Communication System and Network",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": ""
},
{
"first": "Liqiang",
"middle": [],
"last": "Xiao",
"suffix": "",
"affiliation": {
"laboratory": "Lab of Advanced Optical Communication System and Network",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": ""
},
{
"first": "Hao",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "Lab of Advanced Optical Communication System and Network",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Yaohui",
"middle": [],
"last": "Jin",
"suffix": "",
"affiliation": {
"laboratory": "Lab of Advanced Optical Communication System and Network",
"institution": "Shanghai Jiao Tong University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Paraphrase generation aims to generate semantically consistent sentences with different syntactic realizations. Most of the recent studies rely on the typical encoder-decoder framework where the generation process is deterministic. However, in practice, the ability to generate multiple syntactically different paraphrases is important. Recent work proposed to cooperate variational inference on a target-related latent variable to introduce the diversity. But the latent variable may be contaminated by the semantic information of other unrelated sentences, and in turn, change the conveyed meaning of generated paraphrases. In this paper, we propose a semantically consistent and syntactically variational encoder-decoder framework, which uses adversarial learning to ensure the syntactic latent variable be semantic-free. Moreover, we adopt another discriminator to improve the word-level and sentence-level semantic consistency. So the proposed framework can generate multiple semantically consistent and syntactically different paraphrases. The experiments show that our model outperforms the baseline models on the metrics based on both n-gram matching and semantic similarity, and our model can generate multiple different paraphrases by assembling different syntactic variables.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Paraphrase generation aims to generate semantically consistent sentences with different syntactic realizations. Most of the recent studies rely on the typical encoder-decoder framework where the generation process is deterministic. However, in practice, the ability to generate multiple syntactically different paraphrases is important. Recent work proposed to cooperate variational inference on a target-related latent variable to introduce the diversity. But the latent variable may be contaminated by the semantic information of other unrelated sentences, and in turn, change the conveyed meaning of generated paraphrases. In this paper, we propose a semantically consistent and syntactically variational encoder-decoder framework, which uses adversarial learning to ensure the syntactic latent variable be semantic-free. Moreover, we adopt another discriminator to improve the word-level and sentence-level semantic consistency. So the proposed framework can generate multiple semantically consistent and syntactically different paraphrases. The experiments show that our model outperforms the baseline models on the metrics based on both n-gram matching and semantic similarity, and our model can generate multiple different paraphrases by assembling different syntactic variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Paraphrase generation is a longstanding problem in Natural Language Processing (NLP) (McKeown, 1983) , which aims to generate semantically consistent sentences for a given sentence with different syntactic realizations. The task is not only an important building block for many text generation systems such as question answering (Buck et al., 2018; Dong et al., 2017) , machine translation (Cho et al., 2014) , but also beneficial to some NLP tasks such as semantic parsing (Su and Yan, 2017) , sentence-level representation learning (Patro et al., 2018) , data augmentation (Kumar et al., 2019) .",
"cite_spans": [
{
"start": 85,
"end": 100,
"text": "(McKeown, 1983)",
"ref_id": "BIBREF26"
},
{
"start": 329,
"end": 348,
"text": "(Buck et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 349,
"end": 367,
"text": "Dong et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 390,
"end": 408,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF8"
},
{
"start": 474,
"end": 492,
"text": "(Su and Yan, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 534,
"end": 554,
"text": "(Patro et al., 2018)",
"ref_id": "BIBREF31"
},
{
"start": 575,
"end": 595,
"text": "(Kumar et al., 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Neural network-based methods (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018; Fu et al., 2019) have shown great progress on paraphrase generation. The models mainly rely on the sequence-tosequence (seq2seq) learning framework (Sutskever et al., 2014) with typical encoder-decoders, which are relatively deterministic during the testing stage. Generally, the models will select the best result through the beam search but are not able to produce multiple paraphrases in a principled way (Gupta et al., 2018) . Due to the nature of beam-search, the quality of k-th variant will be worse than the first variant.",
"cite_spans": [
{
"start": 29,
"end": 51,
"text": "(Prakash et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 52,
"end": 71,
"text": "Gupta et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 72,
"end": 88,
"text": "Li et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 89,
"end": 105,
"text": "Fu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 237,
"end": 261,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF37"
},
{
"start": 497,
"end": 517,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In practice, the ability to generate multiple high-quality and diverse paraphrases is an important characteristic of text generation systems. A target-oriented seq2seq model is applaudable to achieve this goal. For example, Gupta et al. (2018) applied variational inference (Kingma and Welling, 2014) on a target-related latent variable z. During testing, the model can sample multiple latent variables z from a prior distribution to generate multiple different paraphrases. But the remained problem is that z may be contaminated by the semantic information of other unrelated sentences in the training set, leading to an unexpected semantic change of the generated sentences.",
"cite_spans": [
{
"start": 224,
"end": 243,
"text": "Gupta et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to constrain the target-related latent variable z to contain merely the syntactic information. To achieve this goal, we introduce a syntactic encoder to extract z syn from the target y, and develop a discriminator with adversarial learning to ensure z syn is semantic-free. The idea is inspired by (Bao et al., 2019) , which disentangled the latent space of variational autoencoder (VAE) into semantic and syntactic spaces. But they considered the bag of words (BOWs) as the semantic information for adversarial training. This is not optimal because human-generated paraphrases can use quite different words but still express the same meaning. Instead, our model is data-driven. We do not constrain the semantic variables to be syntax-free, as the syntactic information entangled in the semantic variables will be overwritten by the target-oriented syntactic variables.",
"cite_spans": [
{
"start": 324,
"end": 342,
"text": "(Bao et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Gold Reference S r : It is an excellent film! --More Penalized S a : It is an easy way! 2 D Less Penalized S b : It is an awesome movie! 2 <D Table 1 : Illustration of the problem of MLE. The sentence S r is the reference, and the rest sentences are two generated samples. S a and S b have the same word distance to S r , but S b is semantically similar to S r . MLE will equally penalize the phrases \"easy way\" and \"awesome movie\" because they are non-target.",
"cite_spans": [],
"ref_spans": [
{
"start": 142,
"end": 149,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
{
"text": "When considering semantic consistency, there exists another problem in many text generation models that maximum likelihood estimation (MLE) which is implemented by the cross-entropy function will penalize all the non-target words. An example is shown in Table 1 . The cross-entropy function will equally penalize the two generated sentences S a and S b because both of them have two words not match the gold ones. But the semantics of them are quite different. It means that MLE captures the word distance well but does not precisely reflect the semantic distance. Our proposition is that sentences with larger semantic distance should be more penalized. We develop another discriminator, which determines whether the generated sentences are semantically consistent with the references. Unlike the discriminator for the latent variable z syn , this discriminator needs to have access to the sampled tokens, which will cause the non-differentiable problem. We adopt Gumbel-softmax (Jang et al., 2017; Maddison et al., 2017) to make the model end-to-end differentiable. And we introduce two losses to measure both wordlevel and sentence-level semantic consistency.",
"cite_spans": [
{
"start": 980,
"end": 999,
"text": "(Jang et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 1000,
"end": 1022,
"text": "Maddison et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 254,
"end": 261,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
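To make the equal-penalty behaviour of MLE concrete, the toy sketch below (vocabulary, indices, and probability values are invented purely for illustration) shows that cross-entropy depends only on the probability assigned to the gold token, so a prediction that prefers a near-synonym and one that prefers an unrelated word receive identical losses.

```python
import torch
import torch.nn.functional as F

# Toy vocabulary; the indices are arbitrary and only for illustration.
vocab = ["film", "movie", "way", "awesome", "easy"]
gold = torch.tensor([0])  # reference token: "film"

# Two predictions that assign the same probability (0.3) to the gold token, but put
# the remaining mass on a synonym ("movie") vs. an unrelated word ("way").
logits_b = torch.log(torch.tensor([[0.3, 0.6, 0.02, 0.05, 0.03]]))  # prefers "movie"
logits_a = torch.log(torch.tensor([[0.3, 0.02, 0.6, 0.05, 0.03]]))  # prefers "way"

# Cross-entropy only evaluates -log p(gold), so both losses are identical (~1.204),
# even though the two predictions differ greatly in meaning.
print(F.cross_entropy(logits_a, gold))
print(F.cross_entropy(logits_b, gold))
```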
{
"text": "The experiments on two datasets show that our model yields competitive results over other baseline models, and can generate multiple syntactically different and semantically consistent paraphrases. The main contributions of this work are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
{
"text": "\u2022 We propose a target-oriented seq2seq framework that involves different syntactic variables to generate multiple different paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
{
"text": "\u2022 Our method not only increases the syntactic diversity with variational inference but also improves the word-level and sentence-level semantic consistency for the generated paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
{
"text": "\u2022 The experiments use metrics based on both n-gram matching and semantic similarity, and demonstrate the effectiveness of our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences Word Distance Semantic Distance",
"sec_num": null
},
{
"text": "Recently, many neural network-based models are proposed for paraphrase generation and can be categorized into three groups: reconstruction-based learning, typical seq2seq learning, and target-oriented seq2seq learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Reconstruction-based Learning. The first category of studies mainly deals with paraphrase generation in an unsupervised manner, which adds constraints on language models (LMs) including RNN-LM (Mikolov et al., 2010) or VAE (Bowman et al., 2016) . Kovaleva et al. (2018) introduced a similaritybased reconstruction loss to VAE which considered similarities between words in the embedding space. Miao et al. (2019) introduces three kinds of constraints on RNN-LM including keywords matching, word embedding similarity, and skip-thoughts similarity. However, the similarity-based losses could not guarantee the semantic consistency between two words. For example, the words \"good\", \"great\" and \"bad\" are all close in the embedding space because they appear in similar contexts. Recently, an intuitive approach was proposed which disentangled the latent space of VAE into syntactic and semantic spaces (Bao et al., 2019) . In their model, the constituency parse tree was used to supervise the syntactic latent variable, and the BOWs were used to supervise the semantic latent variable. Although the proposal of disentanglement is promising, supervision with BOWs is not optimal because paraphrases are possible to use quite different words and still convey the same meaning.",
"cite_spans": [
{
"start": 193,
"end": 215,
"text": "(Mikolov et al., 2010)",
"ref_id": "BIBREF28"
},
{
"start": 219,
"end": 244,
"text": "VAE (Bowman et al., 2016)",
"ref_id": null
},
{
"start": 247,
"end": 269,
"text": "Kovaleva et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 394,
"end": 412,
"text": "Miao et al. (2019)",
"ref_id": "BIBREF27"
},
{
"start": 898,
"end": 916,
"text": "(Bao et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Typical Seq2seq Learning. The second category of studies considered paraphrase generation as a typical seq2seq task with parallel data. Prakash et al. (2016) proposed to use a seq2seq model for paraphrase generation with residual stack LSTM, and still performs as a strong baseline (Fu et al., 2019) . Recent studies improved seq2seq models by involving some efficient mechanisms such as copy and constrained decoding (Cao et al., 2017) , inverse reinforcement learning (Li et al., 2018) , decomposition of phrase-level and sentence-level patterns , and content planning with latent bag of words (Fu et al., 2019) . When a sentence has multiple paraphrases in training data, these models will convert them into multiple pairwise sentences. From the perspective of probability modeling, these studies maximize the log conditional probability",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "Prakash et al. (2016)",
"ref_id": "BIBREF33"
},
{
"start": 282,
"end": 299,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 418,
"end": 436,
"text": "(Cao et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 470,
"end": 487,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 596,
"end": 613,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "\u2211 k i=1 log p(y i |x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where x denotes the original sentence and y i is the i-th sentence among k paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Target-oriented Seq2seq Learning. Compared with the second category of studies, the third included the target information which substantially maximized the log probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "\u2211 k i=1 log p(y i |x, z y i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "where z y i conveyed the information of target y i . Apparently, there was a train-test discrepancy because z y i was not available during testing. Gupta et al. (2018) tackled the issue by a combination of the seq2seq architecture with VAE which allowed z y i to sample from a prior distribution. The remained problem is that z y i may contain semantic information of other unrelated sentences, which is possible to mislead the model. Ideally, for paraphrase generation, z y i is expected to only convey the syntactic information. Kumar et al. (2020) implicitly tackled this problem by focusing on a slightly different task, the syntacticguided controlled paraphrase generation, which inputted an exemplar to tell the syntactic information. As a result, the train-test discrepancy does not exist in the controlled task. However, for the traditional paraphrase generation task, constraining z y i is still a problem.",
"cite_spans": [
{
"start": 148,
"end": 167,
"text": "Gupta et al. (2018)",
"ref_id": "BIBREF13"
},
{
"start": 531,
"end": 550,
"text": "Kumar et al. (2020)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Before introducing our models, we briefly review the architecture of VAE (Kingma and Welling, 2014), a generative model which allows to generate high-dimensional samples from a continuous space. In the probability model framework, the probability of data x can be computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Autoencoder",
"sec_num": "3.1"
},
{
"text": "p(x) = \u222b p(x, z)dz = \u222b p(z)p(x|z)dz (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Autoencoder",
"sec_num": "3.1"
},
{
"text": "Since this integral is unavailable in closed form or requires exponential time to compute (Blei et al., 2016) , it is approximated by maximizing the evidence lower bound (ELBO):",
"cite_spans": [
{
"start": 90,
"end": 109,
"text": "(Blei et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Autoencoder",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log p \u03b8 (x) \u2265 ELBO = E z\u223cq \u03d5 (z|x) [log p \u03b8 (x|z)] \u2212 KL(q \u03d5 (z|x)\u2225p(z))",
"eq_num": "(2)"
}
],
"section": "Variational Autoencoder",
"sec_num": "3.1"
},
{
"text": "where p \u03b8 (x|z) denotes the generator with parameters \u03b8 and q \u03d5 (z|x) is obtained by an encoder with parameters \u03d5, and p(z) is a prior distribution, for example, a Gaussian distribution. And KL(\u2022||\u2022) denotes the Kullback-Leibler (KL) Divergence between the two distributions. Moreover, a previous work proposed \u03b2-VAE (Higgins et al., 2017) to use a weight \u03b2 for the KL divergence. This approach was considered as a baseline for paraphrase generation (Fu et al., 2019) .",
"cite_spans": [
{
"start": 317,
"end": 339,
"text": "(Higgins et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 450,
"end": 467,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Variational Autoencoder",
"sec_num": "3.1"
},
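As a reference for Equation 2, a minimal sketch of the reparameterized sampling and the negative ELBO for a diagonal-Gaussian posterior with a standard-normal prior is shown below; the function names are placeholders for illustration, not the authors' implementation.

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I): keeps the sampling step differentiable.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def negative_elbo(recon_log_prob, mu, logvar):
    """Negative ELBO of Eq. 2 for q(z|x) = N(mu, diag(sigma^2)) and p(z) = N(0, I).
    `recon_log_prob` is log p_theta(x|z) summed over the sequence, shape (B,)."""
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ) per example.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    return (-recon_log_prob + kl).mean()
```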
{
"text": "When a text generation model involves the process of sampling words and expecting a reward from a discriminator or an evaluator, it will suffer from the non-differentiable problem due to the discrete nature of texts. Recently, many studies use reinforcement learning (RL) (Yu et al., 2017; Lin et al., 2017; Guo et al., 2018; Li et al., 2018) or Gumbel-softmax (Jang et al., 2017; Maddison et al., 2017; Yang et al., 2018; Nie et al., 2019) to overcome the problem. In our model, we use Gumbel-softmax because it makes models end-to-end differentiable, improving the stability and speed of training over RL (Chen et al., 2018) . Assuming that the model outputs a logit value o t when generating a sentence at tth timestep. A softmax function is used to produce probability p t over the vocabulary set:",
"cite_spans": [
{
"start": 272,
"end": 289,
"text": "(Yu et al., 2017;",
"ref_id": "BIBREF36"
},
{
"start": 290,
"end": 307,
"text": "Lin et al., 2017;",
"ref_id": "BIBREF23"
},
{
"start": 308,
"end": 325,
"text": "Guo et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 326,
"end": 342,
"text": "Li et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 361,
"end": 380,
"text": "(Jang et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 381,
"end": 403,
"text": "Maddison et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 404,
"end": 422,
"text": "Yang et al., 2018;",
"ref_id": "BIBREF39"
},
{
"start": 423,
"end": 440,
"text": "Nie et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 607,
"end": 626,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p t = softmax(o t )",
"eq_num": "(3)"
}
],
"section": "Continuous Approximation",
"sec_num": "3.2"
},
{
"text": "Traditionally, a token w t will be sampled from p t with multinomial function or the argmax operation, both of which are non-differentiable. Gumbel-softmax uses a re-parameter trick by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Approximation",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p t = softmax((o t + g)/\u03c4 ) = Gumbel-softmax(p t ; \u03c4 )",
"eq_num": "(4)"
}
],
"section": "Continuous Approximation",
"sec_num": "3.2"
},
{
"text": "where g samples from Gumbel(0, 1) and \u03c4 is the temperature. When \u03c4 \u2192 0, p t is approximated to the one-hot representation of the sampled token w t . This process is a continuous approximation to the multinomial sampling, and we denote it by Gumbel-softmax(\u2022) in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Continuous Approximation",
"sec_num": "3.2"
},
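The Gumbel-softmax relaxation of Equations 3-4 can be sketched as follows; torch.nn.functional.gumbel_softmax provides an equivalent built-in, and the temperature value is only an example.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.01):
    """Continuous relaxation of sampling a token from softmax(logits) (Eqs. 3-4)."""
    u = torch.rand_like(logits)
    g = -torch.log(-torch.log(u + 1e-20) + 1e-20)     # Gumbel(0, 1) noise
    # As tau -> 0 the output approaches a one-hot vector while staying differentiable.
    return F.softmax((logits + g) / tau, dim=-1)

# PyTorch also ships an equivalent helper:
# y_soft = F.gumbel_softmax(logits, tau=0.01, hard=False)
```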
{
"text": "Figure 1: The architecture of the proposed model. The key idea is to generate multiple different paraphrases by involving different syntactic variables to the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold References",
"sec_num": null
},
{
"text": "Our method belongs to the category of target-oriented seq2seq learning, and aims to generate diverse paraphrases by involving the target-oriented syntactic information. We assume that each paraphrase should convey the same semantic with the original sentence, and multiple paraphrases have different syntaxes from each other. The architecture of our model is shown in Figure 1 . The model contains a semantic encoder, a syntactic encoder, and a decoder with parameters \u03d5, \u03c6, and \u03b8 respectively. Given the sentence x and one of its paraphrases y, the generation process can be defined as:",
"cite_spans": [],
"ref_spans": [
{
"start": 368,
"end": 376,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z sem = SemEncoder(x; \u03d5)",
"eq_num": "(5)"
}
],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "z syn = SynEncoder(y; \u03c6)",
"eq_num": "(6)"
}
],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = Decoder(Z sem , z syn ; \u03b8)",
"eq_num": "(7)"
}
],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "where Z sem and z syn denote the semantic and syntactic latent variables respectively. The variables Z sem are a sequence of hidden states and z syn is a vector representation. And our model can cooperate with the attention mechanism (Bahdanau et al., 2015) . At each timestep, the decoder will produce a variable by the weighted sum of hidden states in Z sem and then concatenate it with z syn to decode each token. This process is modeling the probability p (y|x, z syn ) instead of p (y|x).",
"cite_spans": [
{
"start": 234,
"end": 257,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
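A rough sketch of one decoding step as described above: attend over the semantic states Z_sem, form the weighted sum, and concatenate it with z_syn before predicting the next token. Module names and dimensions are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One decoding step: dot-product attention over Z_sem, then concatenation with z_syn."""
    def __init__(self, emb_dim=300, hidden=500, vocab_size=20000):
        super().__init__()
        self.cell = nn.LSTMCell(emb_dim, hidden)
        self.attn = nn.Linear(hidden, hidden)          # projects h before scoring Z_sem
        self.out = nn.Linear(3 * hidden, vocab_size)   # [h ; context ; z_syn] -> logits

    def forward(self, prev_emb, state, z_sem, z_syn):
        # prev_emb: (B, emb_dim); z_sem: (B, T_src, hidden); z_syn: (B, hidden)
        h, c = self.cell(prev_emb, state)
        scores = torch.bmm(z_sem, self.attn(h).unsqueeze(-1)).squeeze(-1)   # (B, T_src)
        weights = torch.softmax(scores, dim=-1).unsqueeze(1)                # (B, 1, T_src)
        context = torch.bmm(weights, z_sem).squeeze(1)                      # weighted sum of Z_sem
        logits = self.out(torch.cat([h, context, z_syn], dim=-1))           # (B, vocab)
        return logits, (h, c)
```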
{
"text": "The key problem is how to constrain the syntactic variable z syn , as y is not available during testing. Similar to VAE, we apply variational inference on the variable z syn , which can be shown from the modeling of the likelihood p(y, x) and p(y|x):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y, x) = \u222b p(y, x, z syn )dz syn = \u222b p(y|x, z syn )p(x|z syn )p(z syn )dz syn = \u222b p(y|x, z syn )p(x)p(z syn )dz syn , (if z syn \u22a5 x)",
"eq_num": "(8)"
}
],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "where z syn \u22a5 x means that z syn is independent from x. Since p(x) can be moved outside of the integral, we divide both sides of Equation 8 by p(x) to obtain the conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y|x) = \u222b p(y|x, z syn )p(z syn )dz syn (9) log p(y|x) \u2265 ELBO = E zsyn\u223cq\u03c6(zsyn|y) [log p \u03b8,\u03d5 (y|x, z syn )] \u2212 KL (q \u03c6 (z syn |y)\u2225p(z syn )) = \u2212L tos2s (\u03d5; \u03b8) \u2212 L KL (\u03c6)",
"eq_num": "(10)"
}
],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "where maximizing the log likelihood log p(y|x) is approximated to maximize the ELBO. And p \u03b8,\u03d5 (y|x, z syn ) can be modeled by Equation 5 and 7, and the posterior q \u03c6 (z syn |y) is modeled by Equation 6. Then the first term of Equation 10 is a considered as the target-oriented seq2seq loss denoted by L tos2s (\u03d5; \u03b8). The second term is the KL loss denoted by L KL (\u03c6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantically Consistent and Syntactically Variational Encoder-Decoder",
"sec_num": "4.1"
},
{
"text": "There are two important assumptions to make Equation 8 \u2212 10 true: 1) z syn is independent from x; 2) z syn contains merely the syntactic information of y. Since z syn is extracted from y by Equation 6, the first assumption is met if z syn does not contain the information shared by x and y, which is typically the semantic information. The second assumption also requires that z syn does not contain the semantic information. Therefore, we use adversarial learning to derive semantic-free information for z syn . Given z x syn \u223c q \u03c6 (z x syn |x) and z y syn \u223c q \u03c6 (z y syn |y) corresponding to the syntactic variables of the original sentence x and the paraphrase y respectively, we employ a discriminator with trainable weight W syn \u2208 R 4dsyn\u00d7c where d syn denotes the dimension of the syntactic variables and c = 2 means that it is a binary classification process. The probability of whether z x syn and z y syn contain same semantic information can be computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p x,y = softmax ( W syn [ z x syn , z y syn , |z x syn \u2212 z y syn |, z x syn \u2299 z y syn ])",
"eq_num": "(11)"
}
],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "where | \u2022 | means taking the absolute value, \u2299 denotes the element-wise multiplication, and [, ] denotes the concatenation operation. Moreover, we construct negative samples by randomly sampling a sentence x \u0338 = x in the dataset. The predicted probability is denoted by p x,y . Then the loss of the discriminator is computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L d syn (\u03c7) = \u2212p pos log p x,y \u2212 p neg log p x,y",
"eq_num": "(12)"
}
],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "where p pos = [1, 0] and p neg = [0, 1] representing the labels for the positive pair (x, y) and the negative pair (x, y) respectively. And \u03c7 denotes the parameters (W syn ) of the discriminator. Equation 12 means that the discriminator is trying to recognize the semantic information shared between x and y. Then the syntactic encoder is considered as the generator to fool the discriminator by minimizing the loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L g syn (\u03c6) = \u2212p neg log p x,y",
"eq_num": "(13)"
}
],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "And the generator and discriminator play an adversarial game by minimizing L g syn (\u03c6) and L d syn (\u03c7) alternatively. When combining the other losses, the objective of our model is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03b8,\u03d5,\u03c6 [L tos2s (\u03d5; \u03b8) + L KL (\u03c6) + L g syn (\u03c6)] + min \u03c7 [L d syn (\u03c7)]",
"eq_num": "(14)"
}
],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
{
"text": "where the first term is the total loss for the generator and the second term is the loss for the discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adversarial Learning for Syntactic Variables",
"sec_num": "4.2"
},
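A possible PyTorch sketch of the syntactic discriminator and the adversarial losses of Equations 11-13 is given below; the class and function names are hypothetical, and in practice the two losses would be minimized with separate optimizers in alternating steps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SynDiscriminator(nn.Module):
    """Scores whether two syntactic variables still share semantic information (Eq. 11)."""
    def __init__(self, d_syn=500):
        super().__init__()
        self.w_syn = nn.Linear(4 * d_syn, 2)   # corresponds to W_syn in R^{4 d_syn x 2}

    def forward(self, z_a, z_b):
        feats = torch.cat([z_a, z_b, (z_a - z_b).abs(), z_a * z_b], dim=-1)
        return F.log_softmax(self.w_syn(feats), dim=-1)   # log-probabilities over {same, different}

def adversarial_losses(disc, z_x, z_y, z_x_neg):
    # Discriminator loss (Eq. 12): label the true pair (x, y) as "same" (index 0)
    # and the pair built from an unrelated sentence x' as "different" (index 1).
    d_loss = -(disc(z_x.detach(), z_y.detach())[:, 0]
               + disc(z_x_neg.detach(), z_y.detach())[:, 1]).mean()
    # Generator loss (Eq. 13): push the true pair towards "different", so the
    # syntactic encoder learns to strip shared (semantic) information from z_syn.
    g_loss = -disc(z_x, z_y)[:, 1].mean()
    return d_loss, g_loss
```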
{
"text": "There remains a train-test discrepancy where z syn \u223c q \u03c6 (z syn |y) during training while z syn \u223c p(z syn ) during testing. Minimizing the KL divergence between q \u03c6 (z syn |y) and p(z syn ) can help reduce the discrepancy, but does not provide end-to-end guarantee for the semantic consistency. Therefore, we further employ another discriminator D \u03c8 with parameters \u03c8, which consists of a sentence encoder, and a fullyconnected neural network followed with the softmax function. For arbitrary two sentences represented by the sequences of one-hot vectors u \u2208 R T \u00d7V and v \u2208 R T \u00d7V , the discriminator predicts the probability p u,v \u2208 R 2 of whether two sentences are semantically consistent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p u,v = D \u03c8 (u, v)",
"eq_num": "(15)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where T and V represent the maximum length of the sentences and the vocabulary size respectively. Traditionally, when z syn \u223c q \u03c6 (z syn |y), the model minimizes L tos2s (\u03d5; \u03b8) with MLE:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max zsyn\u223cq\u03c6(zsyn|y) T \u2211 t=1 log p \u03b8,\u03d5 ( y t = y t |y <t , x, z syn )",
"eq_num": "(16)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where y t and y t denote the predicted and referenced tokens respectively at t-th timestep, and y <t denotes the sequence of tokens preceding y t . However, when z syn \u223c p(z syn ), the syntactic information is different from that of z syn \u223c q \u03c6 (z syn |y), and the predicted tokens is therefore not required to match all the tokens of y. Instead, we assume that there is a set of semantically consistent words W c (y t ) with respect to y t , using which will not change the conveyed meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max zsyn\u223cp(zsyn) T \u2211 t=1 log p \u03b8,\u03d5 ( y t \u2208 W c (y t )|y <t , x, z syn )",
"eq_num": "(17)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where the objective is to ensure the word-level semantic consistency (WSC). We construct a sequence of tokens represented by one-hot vectors\u0177 = (\u0177 1 ,\u0177 2 , ...,\u0177 T ). The sentence is obtained by replacing a certain ratio (\u03b7) of tokens in y with predicted tokens y t sampled from the predicted probability distribution p t \u2208 R V . The process can be described by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p t = p \u03b8,\u03d5 ( y t |y <t , x, z syn ) , z syn \u223c p(z syn )",
"eq_num": "(18)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = { Gumbel-softmax( p t ; \u03c4 ), rand() < \u03b7 one-hot(y t ), otherwise",
"eq_num": "(19)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where rand() is a random function to sample numbers between 0 and 1 following the uniform distribution. Then the loss for word-level semantically consistency is computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L wsc = \u2212p pos log D \u03c8 (\u0177, x)",
"eq_num": "(20)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
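The construction of the mixed sequence in Equations 18-19 can be sketched as follows, assuming per-timestep logits and gold token ids are available; the helper name and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def build_wsc_sequence(step_logits, gold_ids, vocab_size, eta=0.5, tau=0.01):
    """Mixed sequence of Eq. 19: with probability eta take a differentiable Gumbel-softmax
    sample of the predicted token, otherwise keep the one-hot gold token."""
    y_hat = []
    for logits_t, gold_t in zip(step_logits, gold_ids):      # iterate over timesteps
        if torch.rand(()).item() < eta:
            y_hat.append(F.gumbel_softmax(logits_t, tau=tau, hard=False))   # (B, V)
        else:
            y_hat.append(F.one_hot(gold_t, vocab_size).float())             # (B, V)
    return torch.stack(y_hat, dim=1)        # (B, T, V), later scored against x by D_psi
```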
{
"text": "Moreover, we further reduce the train-test discrepancy by reducing the exposure bias problem (Ranzato et al., 2016) . We let each token in the sentence s be generated conditioning on previously generated tokens instead of gold ones, and get a sentence-level feedback from the discriminator:",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "(Ranzato et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max zsyn\u223cp(zsyn) log p \u03b8,\u03d5 ( y \u2208 S c (y)|x, z syn )",
"eq_num": "(21)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y t = Gumbel-softmax (p( y t | y <t , x, z syn ); \u03c4 )",
"eq_num": "(22)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L ssc = \u2212p pos log D \u03c8 ( y, x)",
"eq_num": "(23)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where the objective is to ensure sentence-level semantic consistency (SSC). S c (y) denotes the set of semantically consistent sentences, and y = ( y 1 , y 2 , ..., y T ) denotes the sequence of generated tokens with one-hot representations. The discriminator will also include positive samples (x, y) and negative samples (x, y) to learn to predict whether two sentences are semantically consistent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L sc (\u03b8, \u03d5, \u03c6, \u03c8) = L wsc + L ssc \u2212 p pos log D \u03c8 (x, y) \u2212 p neg log D \u03c8 (x, y)",
"eq_num": "(24)"
}
],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "Then, the final objective can be computed by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "min \u03b8,\u03d5,\u03c6,\u03c8 [L tos2s (\u03d5; \u03b8) + \u03bb KL L KL (\u03c6) + \u03bb g syn L g syn (\u03c6) + \u03bb sc L sc (\u03b8, \u03d5, \u03c6, \u03c8)] + \u03bb d syn min \u03c7 [L d syn (\u03c7)] (25)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
{
"text": "where \u03bb KL , \u03bb g syn , \u03bb sc , and \u03bb d syn are the hyperparameters to balance the losses in overall objective. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensuring Semantic Consistency",
"sec_num": "4.3"
},
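A sketch of the sentence-level consistency path (Equations 21-23): the decoder runs free-running with Gumbel-softmax outputs fed back through the embedding matrix so the rollout stays differentiable, and the discriminator provides a sentence-level loss. The decoder_step and discriminator interfaces are assumptions for illustration, with the discriminator assumed to return class probabilities where index 0 means "semantically consistent".

```python
import torch
import torch.nn.functional as F

def ssc_rollout(decoder_step, embedding, state, z_sem, z_syn, bos_emb, max_len, tau=0.01):
    """Free-running decoding (Eq. 22): each step conditions on the previously *generated*
    soft token, so the whole rollout remains differentiable."""
    prev_emb, outputs = bos_emb, []
    for _ in range(max_len):
        logits, state = decoder_step(prev_emb, state, z_sem, z_syn)
        soft_token = F.gumbel_softmax(logits, tau=tau, hard=False)    # (B, V), near one-hot
        outputs.append(soft_token)
        prev_emb = soft_token @ embedding.weight                      # soft embedding lookup
    return torch.stack(outputs, dim=1)                                # y_tilde: (B, T, V)

def ssc_loss(discriminator, y_tilde, x_onehot):
    # Eq. 23: reward rollouts that the discriminator judges semantically consistent with x.
    return -torch.log(discriminator(y_tilde, x_onehot)[:, 0] + 1e-10).mean()
```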
{
"text": "Following previous work on paraphrase generation, we experiment on two datasets: Quora (Lin et al., 2014) 1 and MSCOCO 2 . The Quora dataset is originally developed for duplicated question detection which contains about 140k pairs of paraphrase and 260k pairs of non-paraphrase sentence pairs. We only use the paraphrase sentences and hold out 3k and 30k validation and test sets respectively. We set the maximum decoding length to be 20 which equals the maximum length of 95% of sentences. The MSCOCO dataset is originally developed for image captioning and each image has 5 captions. In our experiments, we randomly choose 1 of the 5 captions as the source and use the rest 4 captions as the targets. The original dataset contains about 80k and 40k samples in the train and test sets respectively. We randomly hold out about 4k samples from the train set as the validation set. The detailed statistics of the two datasets are shown in Table 2 .",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 937,
"end": 944,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5.1"
},
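The MSCOCO pairing procedure described above (one randomly chosen caption as the source, the remaining four as references) can be sketched as follows; the input structure, a list of five captions per image, is a simplifying assumption about how the raw annotations are grouped.

```python
import random

def make_mscoco_pairs(captions_per_image, seed=0):
    """Turn 5 captions per image into (source, [targets]) pairs: one randomly chosen
    caption becomes the source, the remaining four are the reference targets."""
    rng = random.Random(seed)
    pairs = []
    for captions in captions_per_image:          # captions: list of 5 strings per image
        src_idx = rng.randrange(len(captions))
        source = captions[src_idx]
        targets = [c for i, c in enumerate(captions) if i != src_idx]
        pairs.append((source, targets))
    return pairs
```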
{
"text": "The evaluation of paraphrase generation remains an open issue. Most of previous studies (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018; Bao et al., 2019; Fu et al., 2019) adopt metrics based on n-gram matching, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) . To compare our model with them, we also report the n-gram metrics (1-4 grams in BLEU, 1-2 gram in ROUGE). However, we observe that they are not always sufficient to evaluate the semantic consistency because human-generated paraphrases have lower BLEU or ROUGE scores than machine-generated on the MSCOCO dataset (will be discussed in Section 5.3). Therefore, we further employ a metric BERTCS (Reimers and Gurevych, 2019) which computes the cosine similarity of sentence-level embeddings of fine-tuned BERT (Devlin et al., 2019) . We choose the BERT-base model fine-tuned on SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) datasets with mean-tokens pooling 3 . Moreover, since simply copying the source sentence is not an interesting model but definitely yields semantically consistent outputs, we evaluate the syntactic difference from the source sentence based on BLEU-ori (up to 4 grams) which were recently used to evaluate the reconstruction-based models (Miao et al., 2019; Bao et al., 2019) . Compared Models. We compare our models with three categories of existing methods introduced in Section 2. The reconstruction-based models for comparison include \u03b2-VAE (Higgins et al., 2017) with \u03b2 = 1e \u22123 and \u03b2 = 1e \u22124 , and DSS-VAE (Bao et al., 2019) . The typical seq2seq models include vanilla seq2seq LSTM with (or without) the attention mechanism (Bahdanau et al., 2015) , and LBOW-Topk which is the state-of-the-art (SOTA) model (Fu et al., 2019) . The compared target-oriented seq2seq model is the variational encoder-decoder (VAE-SVG-eq) (Gupta et al., 2018) . Since variational models can generate multiple paraphrases for a source sentence by sampling multiple latent variables, we can select the best one with highest BERTCS scores computed with the source sentence (not with the reference sentences because they not available in practice). This searching mechanism is also used in (Gupta et al., 2018) and is denoted by VarSearch in the following sections. We search 5 times for both VAE-SVG-eq and our model.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Prakash et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 111,
"end": 130,
"text": "Gupta et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 131,
"end": 147,
"text": "Li et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 148,
"end": 165,
"text": "Bao et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 166,
"end": 182,
"text": "Fu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 236,
"end": 259,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF30"
},
{
"start": 270,
"end": 281,
"text": "(Lin, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 677,
"end": 705,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF35"
},
{
"start": 791,
"end": 812,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 899,
"end": 922,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF38"
},
{
"start": 1260,
"end": 1279,
"text": "(Miao et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 1280,
"end": 1297,
"text": "Bao et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1467,
"end": 1489,
"text": "(Higgins et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 1533,
"end": 1551,
"text": "(Bao et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1652,
"end": 1675,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 1735,
"end": 1752,
"text": "(Fu et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1846,
"end": 1866,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 2193,
"end": 2213,
"text": "(Gupta et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Settings",
"sec_num": "5.2"
},
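The BERTCS metric can be computed with the sentence-transformers package referenced in footnote 3, for example as sketched below; the exact identifier for the NLI-fine-tuned BERT-base model with mean-tokens pooling may differ across package versions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# BERT-base fine-tuned on NLI data with mean-token pooling (identifier may vary by version).
model = SentenceTransformer("bert-base-nli-mean-tokens")

def bertcs(hypothesis, reference):
    """Cosine similarity between sentence embeddings, i.e. the BERTCS score."""
    h, r = model.encode([hypothesis, reference])
    return float(np.dot(h, r) / (np.linalg.norm(h) * np.linalg.norm(r)))

# Example: score a generated paraphrase against its source sentence.
score = bertcs("A red bus on the street near a brick building.",
               "A city bus riding down the street in a town.")
```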
{
"text": "Hyperparameters. Word embeddings are 300-dimensional and initialized with GloVe (Pennington et al., 2014) . The dimension of the encoders and the decoder are based on two-layer LSTM with 500 state size. The latent space dimension is also set to 500. We set a fixed temperature of \u03c4 = 0.01 for Gumbel-softmax during training. The weights for different losses are \u03bb KL = 0.2 (with the annealing trick), \u03bb g syn = 0.5, \u03bb sc = 0.5, and \u03bb d syn = 0.5 respectively. The replacement ratio \u03b7 for word-level semantic consistency is set to 0.5. The learning rate of all models is set to 5 \u00d7 10 \u22124 . The batch size is set to 32. All models are trained for 15 epochs. We report the averaged metrics after the training process is repeated 3 times. Table 3 and 4 show the overall performance of different models. To understand what is an applaudable score on each metric, we do a preliminary experiment by designing a copying and a randomly sampling model, which can be considered as the upper and lower bound for metrics. Higher B-i, R-j, and BertCS scores represent better consistency with reference sentences. Lower BLEU-ori scores represent better syntactic differences from source sentences. The interesting finding on the MSCOCO dataset is that the source sentences, which are human-generated paraphrases with regard to reference sentences, have lower B-i and R-j scores than the machine-generated. The possible reason may be that humans will use diverse n-grams and still express the same meaning while machines prefer to use high-frequency n-grams. And BertCS scores confirm the high semantic consistency of human-generated paraphrases. Generally, our model with variational search achieves competitive B-i, R-j scores, and the best BertCS scores on the Quora and MSCOCO datasets. Compared with the previous SOTA model LBOW-Topk, our model improves B-4 by 1.20 and 2.97 points on Quora and MSCOCO respectively. Compared with Seq2Seq-Att, our model improves B-4 and BertCS by 3.35 and 2.72 points respectively on Quora, and 2.92 and 1.19 points respectively on MSCOCO. When compared with variational models including \u03b2-VAE and VAE-SVG-eq, our model also outperforms them with a large margin. The reason may be that the sampled variational latent variables in their models contain semantic information, and lead to a change of the conveyed meaning. DSS-VAE which disentangles the semantic and syntactic representations outperforms \u03b2-VAE with an increase of B-4 and a decrease of BLEU-ori scores on Quora but does not outperform seq2seq models. It means that the disentanglement of the latent spaces is not sufficient to guarantee the decoder of VAE to generate semantically consistent sentences. Table 5 : Results of the ablation study.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 735,
"end": 742,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 2688,
"end": 2695,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation and Settings",
"sec_num": "5.2"
},
{
"text": "Quora MSCOCO B-2 B-4 R-2 R-L BertCS B-2 B-4 R-2 R-L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "5.3"
},
{
"text": "To analyze which mechanisms are driving the improvements, we present an ablation study in Table 5 . We eliminate sentence-level and word-level semantic consistency (SSC and WSC), syntactically adversarial learning (SynAdv) one by one, which results in three ablated models. Further eliminating the variational inference of syntactic variables yields the Seq2Seq-Att model. Generally, the three mechanisms are all influential. For example, eliminating the two semantic consistent losses leads to a total drop of BertCS by 0.82 and 0.65 points on Quora and MSCOCO respectively. When further eliminating SynAdv, the model has worse performance than Seq2Seq-Att. It demonstrates the importance of guaranteeing the syntactic variable to be semantic-free. Table 6 : An example of the generated sentences of the models on MSCOCO dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 5",
"ref_id": null
},
{
"start": 750,
"end": 757,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.4"
},
{
"text": "To help understand our model, we present a case study in Table 6 . For the MSCOCO dataset, each image has multiple diverse captions. We show the source and two gold references for an image. After training, Seq2Seq-Att and our model both produce three paraphrases for the given source, and BertCS scores are presented to measure their semantic consistency with respect to the source sentence. Following traditional seq2seq models, we choose the top 3 results through the beam search for Seq2Seq-Att. The results show that the three generated sentences lack diversity. Different from Seq2Seq-Att, our model generates 3 paraphrases by sampling 3 different latent variables z i syn , z j syn , z k syn , which produces high-quality and diverse paraphrases. However, it is worth noting that the variable z syn is data-driven, which means that the information in z syn may not perfectly match human-defined syntaxes. Moreover, the references may contain additional information than the source, which is not statistically easy to learn. This phenomenon can explain why the BLEU and ROUGE scores of the references are lower than machine-generated sentences in Table 5 . However, the key information is preserved.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 6",
"ref_id": null
},
{
"start": 1152,
"end": 1159,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "5.5"
},
{
"text": "In this paper, we propose a semantically consistent and syntactically variational encoder-decoder framework for paraphrase generation, which enables the model to generate different paraphrases according to different syntactic variables. We first introduce an adversarial learning method to ensure the variational syntactic variable not be contaminated by semantic information, and further develop word-level and sentence-level objectives to ensure the generated sentences be semantic consistent. The experiments show that our model yields competitive results and can generate high-quality and diverse paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://www.kaggle.com/c/quora-question-pairs/data 2 http://cocodataset.org/ 3 https://github.com/UKPLab/sentence-transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors sincerely thank the anonymous reviewers for their constructive comments on the improvement of this paper. This work was supported by National Key Research and Development Program of China under Grant 2018YFC0830400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generating sentences from disentangled syntactic and semantic spaces",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Vechtomova",
"suffix": ""
},
{
"first": "Xin-Yu",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "6008--6019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-Yu Dai, and Jiajun Chen. 2019. Gen- erating sentences from disentangled syntactic and semantic spaces. In Anna Korhonen, David R. Traum, and Llu\u00eds M\u00e0rquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 6008-6019. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Variational inference: A review for statisticians",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Alp",
"middle": [],
"last": "Kucukelbir",
"suffix": ""
},
{
"first": "Jon",
"middle": [
"D"
],
"last": "Mcauliffe",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. 2016. Variational inference: A review for statisticians. CoRR, abs/1601.00670.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Llu\u00eds M\u00e0rquez, Chris Callison-Burch, Jian Su, Daniele Pighin, and Yuval Marton, editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 632-642. The Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Dai",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "J\u00f3zefowicz",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal J\u00f3zefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Yoav Goldberg and Stefan Riezler, editors, Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, CoNLL 2016, Berlin, Germany, August 11-12, 2016, pages 10-21. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ask the right questions: Active question reformulation with reinforcement learning",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Jannis",
"middle": [],
"last": "Bulian",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Gajewski",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang. 2018. Ask the right questions: Active question reformulation with reinforcement learning. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Joint copying and restricted generation for paraphrase",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chuwei",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3152--3158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017. Joint copying and restricted generation for para- phrase. In Satinder P. Singh and Shaul Markovitch, editors, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 3152-3158. AAAI Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adversarial text generation via feature-mover's distance",
"authors": [
{
"first": "Liqun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Shuyang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Chenyang",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Haichao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guoyin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ruiyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4671--4682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liqun Chen, Shuyang Dai, Chenyang Tao, Haichao Zhang, Zhe Gan, Dinghan Shen, Yizhe Zhang, Guoyin Wang, Ruiyi Zhang, and Lawrence Carin. 2018. Adversarial text generation via feature-mover's distance. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicol\u00f2 Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montr\u00e9al, Canada, pages 4671-4682.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Alessandro Moschitti, Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1724-1734. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirec- tional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning to paraphrase for question answering",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "875--886",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answer- ing. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 875-886. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Paraphrase generation with latent bag of words",
"authors": [
{
"first": "Yao",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Yansong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "John",
"middle": [
"P"
],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "13623--13634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao Fu, Yansong Feng, and John P. Cunningham. 2019. Paraphrase generation with latent bag of words. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alch\u00e9-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Infor- mation Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 13623- 13634.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long text generation via adversarial training with leaked information",
"authors": [
{
"first": "Jiaxian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Sidi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5141--5148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, and Jun Wang. 2018. Long text generation via adversarial training with leaked information. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5141-5148. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A deep generative framework for paraphrase generation",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Prawaan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "5149--5156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for para- phrase generation. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelli- gence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5149-5156. AAAI Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "beta-vae: Learning basic visual concepts with a constrained variational framework",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Higgins",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Matthey",
"suffix": ""
},
{
"first": "Arka",
"middle": [],
"last": "Pal",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Burgess",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Botvinick",
"suffix": ""
},
{
"first": "Shakir",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Lerchner",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Higgins, Lo\u00efc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-vae: Learning basic visual concepts with a constrained variational frame- work. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Categorical reparameterization with gumbel-softmax",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Shixiang",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In 5th Inter- national Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Auto-encoding variational bayes",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "2nd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Similarity-Based Reconstruction Loss for Meaning Representation",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4875--4880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Anna Rumshisky, and Alexey Romanov. 2018. Similarity-Based Reconstruction Loss for Meaning Representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4875-4880, Brussels, Belgium, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Submodular optimizationbased diverse paraphrasing and its effectiveness in data augmentation",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Bhattamishra",
"suffix": ""
},
{
"first": "Manik",
"middle": [],
"last": "Bhandari",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "3609--3619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha P. Talukdar. 2019. Submodular optimization- based diverse paraphrasing and its effectiveness in data augmentation. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3609-3619. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Syntax-guided controlled generation of paraphrases",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Kabir",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Raghuram",
"middle": [],
"last": "Vadapalli",
"suffix": ""
},
{
"first": "Partha",
"middle": [
"P"
],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Kumar, Kabir Ahuja, Raghuram Vadapalli, and Partha P. Talukdar. 2020. Syntax-guided controlled generation of paraphrases. CoRR, abs/2005.08417.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Paraphrase generation with deep reinforcement learning",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3865--3878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii, editors, Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 3865-3878. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Decomposable neural paraphrase generation",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "3403--3414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable neural paraphrase generation. In Anna Korhonen, David R. Traum, and Llu\u00eds M\u00e0rquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 3403-3414. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Microsoft COCO: common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [
"J"
],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision -ECCV 2014 -13th European Conference",
"volume": "8693",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft COCO: common objects in context. In David J. Fleet, Tom\u00e1s Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision -ECCV 2014 -13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740-755. Springer.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adversarial ranking for language generation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dianqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ming-Ting",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhengyou",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3155--3165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Lin, Dianqi Li, Xiaodong He, Ming-Ting Sun, and Zhengyou Zhang. 2017. Adversarial ranking for lan- guage generation. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 3155-3165.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The concrete distribution: A continuous relaxation of discrete random variables",
"authors": [
{
"first": "Chris",
"middle": [
"J"
],
"last": "Maddison",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Paraphrasing questions using given and new information",
"authors": [
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "McKeown",
"suffix": ""
}
],
"year": 1983,
"venue": "American Journal of Computational Linguistics",
"volume": "9",
"issue": "1",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen R. McKeown. 1983. Paraphrasing questions using given and new information. American Journal of Computational Linguistics, 9(1):1-10.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "CGMH: constrained sentence generation by metropolis-hastings sampling",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lili",
"middle": [],
"last": "Mou",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "6834--6842",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: constrained sentence generation by metropolis-hastings sampling. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Sym- posium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6834-6842. AAAI Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Recurrent neural network based language model",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Karafi\u00e1t",
"suffix": ""
},
{
"first": "Luk\u00e1s",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Cernock\u00fd",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2010,
"venue": "INTER-SPEECH 2010, 11th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Martin Karafi\u00e1t, Luk\u00e1s Burget, Jan Cernock\u00fd, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Takao Kobayashi, Keikichi Hirose, and Satoshi Nakamura, editors, INTER- SPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045-1048. ISCA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relgan: Relational generative adversarial networks for text generation",
"authors": [
{
"first": "Weili",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Narodytska",
"suffix": ""
},
{
"first": "Ankit",
"middle": [],
"last": "Patel",
"suffix": ""
}
],
"year": 2019,
"venue": "7th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weili Nie, Nina Narodytska, and Ankit Patel. 2019. Relgan: Relational generative adversarial networks for text generation. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evalua- tion of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6-12, 2002, Philadelphia, PA, USA, pages 311-318. ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Learning semantic sentence embeddings using sequential pair-wise discriminator",
"authors": [
{
"first": "Badri",
"middle": [
"Narayana"
],
"last": "Patro",
"suffix": ""
},
{
"first": "Vinod",
"middle": [
"Kumar"
],
"last": "Kurmi",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Vinay",
"middle": [
"P"
],
"last": "Namboodiri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2715--2729",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Badri Narayana Patro, Vinod Kumar Kurmi, Sandeep Kumar, and Vinay P. Namboodiri. 2018. Learning seman- tic sentence embeddings using sequential pair-wise discriminator. In Emily M. Bender, Leon Derczynski, and Pierre Isabelle, editors, Proceedings of the 27th International Conference on Computational Linguistics, COL- ING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 2715-2729. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word represen- tation. In Alessandro Moschitti, Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532-1543. ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Neural paraphrase generation with stacked residual LSTM networks",
"authors": [
{
"first": "Aaditya",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Vivek",
"middle": [
"V"
],
"last": "Datla",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "2923--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek V. Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Nicoletta Calzolari, Yuji Matsumoto, and Rashmi Prasad, editors, COLING 2016, 26th International Conference on Computational Linguistics, Pro- ceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2923-2934. ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2016,
"venue": "4th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In Yoshua Bengio and Yann LeCun, editors, 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence-bert: Sentence embeddings using siamese bert-networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3980--3990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980-3990. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Cross-domain semantic parsing via paraphrasing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1235--1246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Su and Xifeng Yan. 2017. Cross-domain semantic parsing via paraphrasing. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1235-1246. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104-3112.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Marilyn A. Walker, Heng Ji, and Amanda Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Unsupervised text style transfer using language models as discriminators",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7298--7309",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Unsupervised text style transfer using language models as discriminators. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kris- ten Grauman, Nicol\u00f2 Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montr\u00e9al, Canada, pages 7298-7309.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2852--2858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Satinder P. Singh and Shaul Markovitch, editors, Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 2852-2858. AAAI Press.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Statistics of two datasets. N para represents the number of paraphrases in one sample. L avg and L 95 denote the average length of all sentences and the maximum length of 95% of sentences respectively.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF3": {
"text": "The results on Quora. B-i and R-j stand for BLEU and ROUGE scores respectively. Larger values are better except that BLEU-ori prefers lower values. The symbol \u2020 means the cited results.",
"content": "<table><tr><td/><td>B-1</td><td>B-2</td><td>B-3</td><td>B-4</td><td>R-1</td><td>R-2</td><td>R-L BertCS BLEU-ori (\u2193)</td></tr><tr><td>Copying (Positive)</td><td colspan=\"7\">65.74 44.56 29.78 19.85 37.32 12.08 33.01 72.27</td><td>100.00</td></tr><tr><td>Sampling (Negative)</td><td colspan=\"3\">34.39 11.75 4.38</td><td colspan=\"4\">1.81 17.38 1.45 14.30 20.02</td><td>-</td></tr><tr><td>\u03b2-VAE (\u03b2 = 10 \u22123 )</td><td colspan=\"7\">65.09 44.02 29.35 19.52 36.92 11.89 32.69 71.45</td><td>90.80</td></tr><tr><td>\u03b2-VAE (\u03b2 = 10 \u22124 )</td><td colspan=\"7\">65.29 44.19 29.48 19.63 37.02 11.93 32.77 71.69</td><td>92.30</td></tr><tr><td>Seq2Seq</td><td colspan=\"7\">71.68 51.50 36.08 25.21 39.75 14.64 36.00 70.53</td><td>15.00</td></tr><tr><td>Seq2Seq-Att</td><td colspan=\"7\">71.84 51.51 36.17 25.32 39.83 14.65 36.06 70.75</td><td>15.01</td></tr><tr><td>LBOW-Topk \u2020</td><td colspan=\"7\">72.60 51.14 35.66 25.27 42.08 16.13 38.16</td><td>-</td><td>-</td></tr><tr><td>VAE-SVG-eq (+ VarSearch)</td><td colspan=\"7\">72.89 52.42 36.93 25.99 40.10 15.18 36.13 70.98</td><td>15.23</td></tr><tr><td>SCSVED (ours)</td><td colspan=\"7\">73.75 53.66 38.32 27.33 40.65 15.39 37.03 71.80</td><td>16.27</td></tr><tr><td colspan=\"8\">SCSVED (+ VarSearch) (ours) 74.11 54.35 39.19 28.24 40.90 15.70 37.33 71.94</td><td>16.44</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF4": {
"text": "The results on MSCOCO. B-i and R-j stand for BLEU and ROUGE scores respectively. Larger values are better except that BLEU-ori prefers lower values. The symbol \u2020 means the cited results.",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
},
"TABREF5": {
"text": "25.02 32.90 58.30 80.19 52.54 25.99 15.03 36.43 71.15 SCSVED -SSC -WSC -SynAdv 36.31 22.28 29.79 55.78 77.52 51.18 24.76 13.87 35.23 69.86 Seq2Seq-Att 38.42 24.02 31.47 57.16 78.88 51.51 25.32 14.65 36.06 70.75",
"content": "<table><tr><td/><td>BertCS</td></tr><tr><td>SCSVED</td><td>40.67 26.04 33.77 58.93 81.01 53.66 27.33 15.39 37.03 71.80</td></tr><tr><td>SCSVED -SSC</td><td>40.12 25.49 33.20 58.47 80.57 52.99 26.19 15.09 36.72 71.43</td></tr><tr><td>SCSVED -SSC -WSC</td><td>39.51</td></tr></table>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}