{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:53:43.887391Z"
},
"title": "Syntax-Guided Controlled Generation of Paraphrases",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Science",
"location": {
"settlement": "Bangalore"
}
},
"email": "[email protected]"
},
{
"first": "Kabir",
"middle": [],
"last": "Ahuja",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"settlement": "Bangalore 3 Google, London"
}
},
"email": "[email protected]"
},
{
"first": "Raghuram",
"middle": [],
"last": "Vadapalli",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Science",
"location": {
"settlement": "Bangalore"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Given a sentence (e.g., ''I like mangoes'') and a constraint (e.g., sentiment flip), the goal of controlled text generation is to produce a sentence that adapts the input sentence to meet the requirements of the constraint (e.g., ''I hate mangoes''). Going beyond such simple constraints, recent work has started exploring the incorporation of complex syntacticguidance as constraints in the task of controlled paraphrase generation. In these methods, syntactic-guidance is sourced from a separate exemplar sentence. However, this prior work has only utilized limited syntactic information available in the parse tree of the exemplar sentence. We address this limitation in the paper and propose Syntax Guided Controlled Paraphraser (SGCP), an end-to-end framework for syntactic paraphrase generation. We find that SGCP can generate syntax-conforming sentences while not compromising on relevance. We perform extensive automated and human evaluations over multiple real-world English language datasets to demonstrate the efficacy of SGCP over state-of-the-art baselines. To drive future research, we have made SGCP's source code available. 1",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Given a sentence (e.g., ''I like mangoes'') and a constraint (e.g., sentiment flip), the goal of controlled text generation is to produce a sentence that adapts the input sentence to meet the requirements of the constraint (e.g., ''I hate mangoes''). Going beyond such simple constraints, recent work has started exploring the incorporation of complex syntacticguidance as constraints in the task of controlled paraphrase generation. In these methods, syntactic-guidance is sourced from a separate exemplar sentence. However, this prior work has only utilized limited syntactic information available in the parse tree of the exemplar sentence. We address this limitation in the paper and propose Syntax Guided Controlled Paraphraser (SGCP), an end-to-end framework for syntactic paraphrase generation. We find that SGCP can generate syntax-conforming sentences while not compromising on relevance. We perform extensive automated and human evaluations over multiple real-world English language datasets to demonstrate the efficacy of SGCP over state-of-the-art baselines. To drive future research, we have made SGCP's source code available. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Controlled text generation is the task of producing a sequence of coherent words based on given constraints. These constraints can range from simple attributes like tense, sentiment polarity, and word-reordering (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018) to more complex syntactic information. For example, given a sentence ''The movie is awful!'' and a simple constraint like flip sentiment to positive, a controlled text generator is expected to produce the sentence ''The movie is fantastic!''.",
"cite_spans": [
{
"start": 212,
"end": 229,
"text": "(Hu et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 230,
"end": 248,
"text": "Shen et al., 2017;",
"ref_id": "BIBREF41"
},
{
"start": 249,
"end": 267,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These constraints are important in not only providing information about what to say but also how to say it. Without any constraint, the ubiquitous sequence-to-sequence neural models often tend to produce degenerate outputs and favor generic utterances (Vinyals and Le, 2015; Li et al., 2016) . Although simple attributes are helpful in addressing what to say, they provide very little information about how to say it. Syntactic control over generation helps in filling this gap by providing that missing information.",
"cite_spans": [
{
"start": 252,
"end": 274,
"text": "(Vinyals and Le, 2015;",
"ref_id": "BIBREF47"
},
{
"start": 275,
"end": 291,
"text": "Li et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Incorporating complex syntactic information has shown promising results in neural machine translation (Stahlberg et al., 2016; Aharoni and Goldberg, 2017; Yang et al., 2019) , data-to-text generation (Peng et al., 2019) , abstractive textsummarization (Cao et al., 2018) , and adversarial text generation (Iyyer et al., 2018) . Additionally, recent work (Iyyer et al., 2018; Kumar et al., 2019) has shown that augmenting lexical and syntactical variations in the training set can help in building better performing and more robust models.",
"cite_spans": [
{
"start": 102,
"end": 126,
"text": "(Stahlberg et al., 2016;",
"ref_id": "BIBREF44"
},
{
"start": 127,
"end": 154,
"text": "Aharoni and Goldberg, 2017;",
"ref_id": "BIBREF0"
},
{
"start": 155,
"end": 173,
"text": "Yang et al., 2019)",
"ref_id": "BIBREF50"
},
{
"start": 200,
"end": 219,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 252,
"end": 270,
"text": "(Cao et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 305,
"end": 325,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 354,
"end": 374,
"text": "(Iyyer et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 375,
"end": 394,
"text": "Kumar et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the task of syntactically controlled paraphrase generation, that is, given an input sentence and a syntactic exemplar, produce a sentence that conforms to the syntax of the exemplar while retaining the meaning of the original input sentence. While syntactically controlled generation of paraphrases finds applications in multiple domains like dataaugmentation and text passivization, we highlight its importance in the particular task of text simplification. As pointed out in Siddharthan (2014) , depending on the literacy skill of an individual, certain syntactical forms of English sentences are easier to comprehend than others. As an example, consider the following two sentences: S1 Because it is raining today, you should carry an umbrella.",
"cite_spans": [
{
"start": 504,
"end": 522,
"text": "Siddharthan (2014)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "S2 You should carry an umbrella today, because it is raining.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Connectives that permit pre-posed adverbial clauses have been found to be difficult for third to fifth grade readers, even when the order of mention coincides with the causal (and temporal) order (Anderson and Davison, 1986; Levy, 2003) . Hence, they prefer sentence S2. However, various other studies (Clark and Clark, 1968; Katz and Brent, 1968; Irwin, 1980) have suggested that for older school children, college students, and adults, comprehension is better for the cause-effect presentation, hence sentence S1. Thus, modifying a sentence, syntactically, would help in better comprehension based on literacy skills. Prior work in syntactically controlled paraphrase generation addressed this task by conditioning the semantic input on either the features learned from a linearized constituency-based parse tree (Iyyer et al., 2018) , or the latent syntactic information (Chen et al., 2019a) learned from exemplars through variational auto-encoders. Linearizing parse trees typically results in loss of essential dependency information. On the other hand, as noted in Shi et al. (2016) , an autoencoderbased approach might not offer rich enough syntactic information as guaranteed by actual constituency parse trees. Moreover, as noted in Chen et al. (2019a) , SCPN (Iyyer et al., 2018) , and CGEN (Chen et al., 2019a) tend to generate sentences of the same length as the exemplar. This is an undesirable characteristic because it often results in producing sentences that end abruptly, thereby compromising on grammaticality and semantics. Please see Table 1 for sample generations using each of the models.",
"cite_spans": [
{
"start": 196,
"end": 224,
"text": "(Anderson and Davison, 1986;",
"ref_id": "BIBREF1"
},
{
"start": 225,
"end": 236,
"text": "Levy, 2003)",
"ref_id": "BIBREF25"
},
{
"start": 302,
"end": 325,
"text": "(Clark and Clark, 1968;",
"ref_id": "BIBREF10"
},
{
"start": 326,
"end": 347,
"text": "Katz and Brent, 1968;",
"ref_id": "BIBREF21"
},
{
"start": 348,
"end": 360,
"text": "Irwin, 1980)",
"ref_id": "BIBREF17"
},
{
"start": 815,
"end": 835,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 874,
"end": 894,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF7"
},
{
"start": 1071,
"end": 1088,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF42"
},
{
"start": 1242,
"end": 1261,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
},
{
"start": 1269,
"end": 1289,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1301,
"end": 1321,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 1555,
"end": 1562,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address these gaps, we propose Syntax Guided Controlled Paraphraser (SGCP) which uses full exemplar syntactic tree information. Additionally, our model provides an easy mechanism to incorporate different levels of syntactic control (granularity) based on the height of the tree being considered. The decoder in our framework is augmented with rich enough syntactical information to be able to produce which is the best app you ca n't live without and why ? Table 1 : Sample syntactic paraphrases generated by SCPN (Iyyer et al., 2018) , CGEN (Chen et al., 2019a) , SGCP (Ours). We observe that SGCP is able to generate syntax conforming paraphrases without compromising much on relevance. syntax conforming sentences while not losing out on semantics and grammaticality.",
"cite_spans": [
{
"start": 517,
"end": 537,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 545,
"end": 565,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 460,
"end": 467,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this work are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose SGCP, an end-to-end model to generate syntactically controlled paraphrases at different levels of granularity using a parsed exemplar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We provide a new decoding mechanism to incorporate syntactic information from the exemplar sentence's syntactic parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We provide a dataset formed from Quora Question Pairs 2 for evaluating the models. We also perform extensive experiments to demonstrate the efficacy of our model using multiple automated metrics as well as human evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Controllable Text Generation. is an important problem in NLP that has received significant attention in recent times. Prior work include generating text using models conditioned on attributes like formality, sentiment, or tense (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018) as well as on syntactical templates (Iyyer et al., 2018; Chen et al., 2019a) . These systems find applications in adversarial sample generation (Iyyer et al., 2018 ), text summarization, and table-to-text generation (Peng et al., 2019) . While achieving state-of-theart in their respective domains, these systems typically rely on a known finite set of attributes thereby making them quite restrictive in terms of the styles they can offer.",
"cite_spans": [
{
"start": 228,
"end": 245,
"text": "(Hu et al., 2017;",
"ref_id": "BIBREF16"
},
{
"start": 246,
"end": 264,
"text": "Shen et al., 2017;",
"ref_id": "BIBREF41"
},
{
"start": 265,
"end": 283,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF51"
},
{
"start": 320,
"end": 340,
"text": "(Iyyer et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 341,
"end": 360,
"text": "Chen et al., 2019a)",
"ref_id": "BIBREF7"
},
{
"start": 428,
"end": 447,
"text": "(Iyyer et al., 2018",
"ref_id": "BIBREF19"
},
{
"start": 500,
"end": 519,
"text": "(Peng et al., 2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Paraphrase Generation. While generation of paraphrases has been addressed in the past using traditional methods (McKeown, 1983; Barzilay and Lee, 2003; Quirk et al., 2004; Hassan et al., 2007; Zhao et al., 2008; Madnani and Dorr, 2010; Wubben et al., 2010) , they have recently been superseded by deep learning-based approaches (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2019 Kumar et al., 2019) . The primary task of all these methods (Prakash et al., 2016; Gupta et al., 2018; is to generate the most semantically similar sentence and they typically rely on beam search to obtain any kind of lexical diversity. Kumar et al. (2019) try to tackle the problem of achieving lexical, and limited syntactical diversity using submodular optimization but do not provide any syntactic control over the type of utterance that might be desired. These methods are therefore restrictive in terms of the syntactical diversity that they can offer.",
"cite_spans": [
{
"start": 112,
"end": 127,
"text": "(McKeown, 1983;",
"ref_id": "BIBREF33"
},
{
"start": 128,
"end": 151,
"text": "Barzilay and Lee, 2003;",
"ref_id": "BIBREF4"
},
{
"start": 152,
"end": 171,
"text": "Quirk et al., 2004;",
"ref_id": "BIBREF37"
},
{
"start": 172,
"end": 192,
"text": "Hassan et al., 2007;",
"ref_id": "BIBREF13"
},
{
"start": 193,
"end": 211,
"text": "Zhao et al., 2008;",
"ref_id": "BIBREF54"
},
{
"start": 212,
"end": 235,
"text": "Madnani and Dorr, 2010;",
"ref_id": "BIBREF31"
},
{
"start": 236,
"end": 256,
"text": "Wubben et al., 2010)",
"ref_id": "BIBREF49"
},
{
"start": 328,
"end": 350,
"text": "(Prakash et al., 2016;",
"ref_id": "BIBREF36"
},
{
"start": 351,
"end": 370,
"text": "Gupta et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 371,
"end": 386,
"text": "Li et al., 2019",
"ref_id": "BIBREF28"
},
{
"start": 387,
"end": 406,
"text": "Kumar et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 447,
"end": 469,
"text": "(Prakash et al., 2016;",
"ref_id": "BIBREF36"
},
{
"start": 470,
"end": 489,
"text": "Gupta et al., 2018;",
"ref_id": "BIBREF12"
},
{
"start": 624,
"end": 643,
"text": "Kumar et al. (2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Controlled Paraphrase Generation. Our task is similar in spirit to Iyyer et al. (2018) and Chen et al. (2019a) , which also deals with the task of syntactic paraphrase generation. However, the approach taken by them is different from ours in at least two aspects. Firstly, SCPN (Iyyer et al., 2018) uses an attention-based (Bahdanau et al., 2014) pointer-generator network (See et al., 2017) to encode input sentences and a linearized constituency tree to produce paraphrases. Because of the linearization of syntactic tree, considerable dependency-based information is generally lost. Our model, instead, directly encodes the tree structure to produce a paraphrase. Secondly, the inference (or generation) process in SCPN is computationally very expensive, because it involves a two-stage generation process. In the first stage, they generate full parse trees from incomplete templates, and then from full parse trees to final generations. In contrast, the inference in our method involves a single-stage process, wherein our model takes as input a semantic source, a syntactic tree and the level of syntactic style that needs to be transferred, to obtain the generations. Additionally, we also observed that the model does not perform well in low resource settings. This, again, can be attributed to the compounding implicit noise in the training due to linearized trees and generation of full linearized trees before obtaining the final paraphrases. Chen et al. (2019a) propose a syntactic exemplar-based method for controlled paraphrase generation using an approach based on latent variable probabilistic modeling, neural variational inference, and multi-task learning. This, in principle, is very similar to Chen et al. (2019b) . As opposed to our model, which provides different levels of syntactic control of the exemplarbased generation, this approach is restrictive in terms of the flexibility it can offer. Also, as noted in Shi et al. (2016) , an autoencoder-based approach might not offer rich enough syntactic information as offered by actual constituency parse trees. Additionally, VAEs (Kingma and Welling, 2014) are generally unstable and harder to train (Bowman et al., 2016; Gupta et al., 2018) than seq2seq-based approaches.",
"cite_spans": [
{
"start": 67,
"end": 86,
"text": "Iyyer et al. (2018)",
"ref_id": "BIBREF19"
},
{
"start": 91,
"end": 110,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
},
{
"start": 278,
"end": 298,
"text": "(Iyyer et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 323,
"end": 346,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 373,
"end": 391,
"text": "(See et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 1453,
"end": 1472,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
},
{
"start": 1713,
"end": 1732,
"text": "Chen et al. (2019b)",
"ref_id": "BIBREF8"
},
{
"start": 1935,
"end": 1952,
"text": "Shi et al. (2016)",
"ref_id": "BIBREF42"
},
{
"start": 2171,
"end": 2192,
"text": "(Bowman et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 2193,
"end": 2212,
"text": "Gupta et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the inputs and various architectural components essential for building SGCP, an end-to-end trainable model. Our model, as shown in Figure 1 , comprises a sentence encoder (3.2), syntactic tree encoder (3.3), and a syntactic-paraphrase-decoder (3.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "SGCP: Proposed Method",
"sec_num": "3"
},
{
"text": "Given an input sentence X and a syntactic exemplar Y , our goal is to generate a sentence Z that conforms to the syntax of Y while retaining the meaning of X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs",
"sec_num": "3.1"
},
{
"text": "The semantic encoder (Section 3.2) works on sequence of input tokens, and the syntactic encoder (Section 3.3) operates on constituencybased parse trees. We parse the syntactic exemplar Y 3 to obtain its constituency-based parse tree. The leaf nodes of the constituency-based parse tree consists of token for the sentence Y. These tokens, in some sense, carry the semantic information of sentence Y, which we do not need for generating paraphrases. In order to prevent any meaning Figure 1 : Architecture of SGCP (proposed method). SGCP aims to paraphrase an input sentence, while conforming to the syntax of an exemplar sentence (provided along with the input). The input sentence is encoded using the Sentence Encoder (Section 3.2) to obtain a semantic signal c t . The Syntactic Encoder (Section 3.3) takes a constituency parse tree (pruned at height H) of the exemplar sentence as an input, and produces representations for all the nodes in the pruned tree. Once both of these are encoded, the Syntactic Paraphrase Decoder (Section 3.4) uses pointer-generator network, and at each time step takes the semantic signal c t , the decoder recurrent state s t , embedding of the previous token and syntactic signal h Y t to generate a new token. Note that the syntactic signal remains the same for each token in a span (shown in figure above curly braces; please see Figure 2 for more details). The gray shaded region (not part of the model) illustrates a qualitative comparison of the exemplar syntax tree and the syntax tree obtained from the generated paraphrase. Please refer to Section 3 for details.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 488,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1365,
"end": 1373,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inputs",
"sec_num": "3.1"
},
{
"text": "propagation from exemplar sentence Y into the generation, we remove these leaf/terminal nodes from its constituency parse. The tree thus obtained is denoted as C Y .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs",
"sec_num": "3.1"
},
{
"text": "The syntactic encoder, additionally, takes as input H, which governs the level of syntactic control needed to be induced. The utility of H will be described in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inputs",
"sec_num": "3.1"
},
{
"text": "The semantic encoder, a multilayered Gated Recurrent Unit (GRU), receives tokenized sentence X = {x 1 , . . . , x T X } as input and computes the contextualized hidden state representation h X t for each token using:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Encoder",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h X t = GRU(h X t\u22121 , e(x t )),",
"eq_num": "(1)"
}
],
"section": "Semantic Encoder",
"sec_num": "3.2"
},
{
"text": "where e(x t ) represents the learnable embedding of the token x t and t \u2208 {1, . . . , T X }. Note that we use byte-pair encoding (Sennrich et al., 2016) for word/token segmentation.",
"cite_spans": [
{
"start": 129,
"end": 152,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Encoder",
"sec_num": "3.2"
},
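The following is a minimal PyTorch sketch of the recurrence in Equation (1): learnable byte-pair token embeddings fed through a multi-layered GRU (shown bidirectional and three-layered, mirroring the setup described later in Section 4.4). The class name, dimensions, and random example input are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Sketch of Eq. (1): h^X_t = GRU(h^X_{t-1}, e(x_t))."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=512, num_layers=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # learnable e(.)
        self.gru = nn.GRU(emb_dim, hid_dim, num_layers,
                          batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, T_X) indices of byte-pair tokens
        emb = self.embedding(token_ids)                       # (batch, T_X, emb_dim)
        hidden_states, _ = self.gru(emb)                      # h^X_1 ... h^X_{T_X}
        return hidden_states                                  # (batch, T_X, 2 * hid_dim)

# illustrative usage: two sentences of 12 byte-pair tokens each
encoder = SemanticEncoder(vocab_size=24000)
h_X = encoder(torch.randint(0, 24000, (2, 12)))
```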
{
"text": "This encoder provides the necessary syntactic guidance for the generation of paraphrases. Formally, let constituency tree C Y = {V, E, Y}, where V is the set of nodes, E the set of edges, and Y the labels associated with each node. We calculate the hidden-state representation h Y v of each node v \u2208 V using the hidden-state representation of its parent node pa(v) and the embedding associated with its label y v as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h Y v = GeLU(W pa h Y pa(v) + W v e(y v ) + b v ),",
"eq_num": "(2)"
}
],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "where e(y v ) is the embedding of the node label y v , and W pa , W v , b v are learnable parameters. This approach can be considered similar to TreeLSTM (Tai et al., 2015) . We use GeLU activation function (Hendrycks and Gimpel, 2016) rather than the standard tanh or relu, because of superior empirical performance. As indicated in Section 3.1, syntactic encoder takes as input the height H, which governs the level of syntactic control. We randomly prune the Figure 2 : The constituency parse tree serves as an input to the syntactic encoder (Section 3.3). The first step is to remove the leaf nodes which contain meaning representative tokens (Here: What is the best language . . . ). H denotes the height to which the tree can be pruned and is an input to the model. Figure 2 (a) shows the full constituency parse tree annotated with vector a for different heights. Figure 2 (b) shows the same tree pruned at height H = 3 with its corresponding a vector. The vector a serves as an signalling vector (Section 3.4.2) which helps in deciding the syntactic signal to be passed on to the decoder. Please refer Section 3 for details.",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "(Tai et al., 2015)",
"ref_id": "BIBREF45"
},
{
"start": 207,
"end": 235,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 462,
"end": 470,
"text": "Figure 2",
"ref_id": null
},
{
"start": 772,
"end": 780,
"text": "Figure 2",
"ref_id": null
},
{
"start": 871,
"end": 879,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "tree C Y to height H \u2208 {3, . . . , H max }, where H max is the height of the full constituency tree C Y . As an example, in Figure 2b , we prune the constituencybased parse tree of the exemplar sentence, to height H = 3. The leaf nodes for this tree have the labels WP, VBZ, NP, and <DOT>. Although we calculate the hidden-state representation of all the nodes, only the terminal nodes are responsible for providing the syntactic signal to the decoder (Section 3.4).",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 133,
"text": "Figure 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "We maintain a queue L Y H of such terminal node representations where elements are inserted from left to right for a given H. Specifically, for the particular example given in Figure 2b ,",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 185,
"text": "Figure 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "L Y H = [h Y WP , h Y VBZ , h Y NP , h Y <DOT> ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
{
"text": "We emphasize the fact that the length of the queue |L Y H | is a function of height H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Encoder",
"sec_num": "3.3"
},
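As a rough sketch of Equation (2) and the queue L^Y_H, the code below computes node representations top-down over a toy constituency tree (internal labels only, tokens removed) and collects, left to right, the representations of the terminal nodes of the tree pruned at height H. The TreeNode class, label set, dimensions, and depth convention are assumptions made for illustration; they are not the authors' data structures.

```python
import torch
import torch.nn as nn

class TreeNode:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

class SyntacticEncoder(nn.Module):
    """Sketch of Eq. (2): h^Y_v = GeLU(W_pa h^Y_pa(v) + W_v e(y_v) + b_v)."""
    def __init__(self, num_labels, emb_dim=100, hid_dim=512):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, emb_dim)
        self.W_pa = nn.Linear(hid_dim, hid_dim, bias=False)
        self.W_v = nn.Linear(emb_dim, hid_dim)               # its bias plays the role of b_v
        self.act = nn.GELU()

    def encode(self, node, label2id, h_parent, depth, H, queue):
        e_y = self.label_emb(torch.tensor([label2id[node.label]]))
        h_v = self.act(self.W_pa(h_parent) + self.W_v(e_y))
        if depth == H or not node.children:
            queue.append(h_v)                                 # terminal node of the pruned tree
        else:
            for child in node.children:
                self.encode(child, label2id, h_v, depth + 1, H, queue)
        return h_v

# illustrative usage on a toy parse (leaf tokens already removed)
label2id = {"ROOT": 0, "SBARQ": 1, "WHNP": 2, "WP": 3, "SQ": 4, "VBZ": 5, "NP": 6, ".": 7}
tree = TreeNode("ROOT", [TreeNode("SBARQ", [
    TreeNode("WHNP", [TreeNode("WP")]),
    TreeNode("SQ", [TreeNode("VBZ"), TreeNode("NP")]),
    TreeNode(".")])])
encoder = SyntacticEncoder(num_labels=len(label2id))
L_Y_H = []
encoder.encode(tree, label2id, torch.zeros(1, 512), depth=1, H=3, queue=L_Y_H)
# L_Y_H now holds the terminal-node representations, left to right, for H = 3
```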
{
"text": "Having obtained the semantic and syntactic representations, the decoder is tasked with the generation of syntactic paraphrases. This can be modeled as finding the best Z = Z * that maximizes the probability P(Z|X, Y ), which can further be factorized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Paraphrase Decoder",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z * = arg max z T Z t=1 (z t |z 1 , . . . , z t\u22121 , X, Y ),",
"eq_num": "(3)"
}
],
"section": "Syntactic Paraphrase Decoder",
"sec_num": "3.4"
},
{
"text": "where T Z is the maximum length up to which decoding is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Paraphrase Decoder",
"sec_num": "3.4"
},
{
"text": "In the subsequent sections, we use t to denote the decoder time step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Paraphrase Decoder",
"sec_num": "3.4"
},
{
"text": "At each decoder time step t, the attention distribution \u03b1 t is calculated over the encoder hidden states h X i , obtained using Equation 1, as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Semantic Information",
"sec_num": "3.4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e t i = v \u22ba tanh(W h h X i + W s s t + b attn ) \u03b1 t = softmax(e t ),",
"eq_num": "(4)"
}
],
"section": "Using Semantic Information",
"sec_num": "3.4.1"
},
{
"text": "where s t is the decoder cell-state and v, W h , W s , b attn are learnable parameters. The attention distribution provides a way to jointly align and train sequence to sequence models by producing a weighted sum of the semantic encoder hidden states, known as contextvector c t , given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Semantic Information",
"sec_num": "3.4.1"
},
{
"text": "c t = i \u03b1 t i h X i (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Semantic Information",
"sec_num": "3.4.1"
},
{
"text": "c t serves as the semantic signal which is essential for generating meaning preserving sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Semantic Information",
"sec_num": "3.4.1"
},
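A minimal sketch of Equations (4)-(5), i.e., additive (Bahdanau-style) attention over the semantic encoder states producing the context vector c_t. The dimensions and random inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    """Sketch of Eqs. (4)-(5): scores e^t_i, weights alpha^t, context vector c_t."""
    def __init__(self, enc_dim, dec_dim, attn_dim=256):
        super().__init__()
        self.W_h = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_s = nn.Linear(dec_dim, attn_dim)               # its bias acts as b_attn
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, h_X, s_t):
        # h_X: (batch, T_X, enc_dim) encoder states; s_t: (batch, dec_dim) decoder state
        scores = self.v(torch.tanh(self.W_h(h_X) + self.W_s(s_t).unsqueeze(1))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)                 # attention distribution over source tokens
        c_t = torch.bmm(alpha.unsqueeze(1), h_X).squeeze(1)   # weighted sum = semantic signal c_t
        return c_t, alpha

attention = SemanticAttention(enc_dim=1024, dec_dim=512)
c_t, alpha = attention(torch.randn(2, 12, 1024), torch.randn(2, 512))
```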
{
"text": "During training, each terminal node in the tree C Y , pruned at H, is equipped with information about the span of words it needs to generate. At each time step t, only one terminal node representation h Y v \u2208 L Y H is responsible for providing the syntactic signal which we call h Y t . This hiddenstate representation to be used is governed through an signalling vector a = (a 1 , . . . , a T z ), where each a i \u2208 {0, 1}. 0 indicates that the decoder should keep on using the same hiddenrepresentation h Y v \u2208 L Y H that is currently being used, and 1 indicates that the next element (hiddenrepresentation) in the queue L Y H should be used for decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "The utility of a can be best understood through Figure 2b . Consider the syntactic tree pruned at height H = 3. For this example,",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 57,
"text": "Figure 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "L Y H = [h Y WP , h Y VBZ , h Y NP , h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "Y <DOT> ] and a = (1, 1, 1, 0, 0, 0, 0, 0, 1) a i = 1 provides a signal to pop an element from the queue L Y H while a i = 0 provides a signal to keep on using the last popped element. This element is then used to guide the decoder syntactically by providing a signal in the form of hidden-state representation (Equation 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "Specifically, in this example, the a 1 = 1 signals L Y H to pop h Y WP to provide syntactic guidance to the decoder for generating the first token. a 2 = 1 signals L Y H to pop h Y VBZ to provide syntactic guidance to the decoder for generating the second token. a 3 = 1 helps in obtaining h Y NP from L Y H to provide guidance to generate the third token. As described earlier, a 4 , . . . , a 8 = 0 indicates, that the same representation h Y NP should be used for syntactically guiding tokens z 4 , . . . , z 8 . Finally a 9 = 1 helps in retrieving h Y <DOT> for guiding decoder to generate token z 9 . Note that |L Y H | = T z i=1 a i Although a is provided to the model during training, this information might not be available during inference. Providing a during generation makes the model restrictive and might result in producing ungrammatical sentences. SGCP is tasked to learn a proxy for the signalling vector a, using transition probability vector p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "At each time step t, we calculate p t \u2208 (0, 1), which determines the probability of changing the syntactic signal using:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p t = \u03c3(W bop ([c t ; h Y t ; s t ; e(z \u2032 t )]) + b bop ),",
"eq_num": "(6)"
}
],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h Y t+1 = h Y t p t < 0.5 pop(L Y H ) otherwise",
"eq_num": "(7)"
}
],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
{
"text": "where pop removes and returns the next element in the queue, s t is the decoder state, and e(z \u2032 t ) is the embedding of the input token at time t during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Syntactic Information",
"sec_num": "3.4.2"
},
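The gating rule of Equations (6)-(7) can be pictured as follows: a sigmoid over the concatenated signals decides whether to keep the current terminal-node representation or to pop the next one from L^Y_H (the 0.5 threshold is the inference-time rule; during training p_t is supervised by the gold signalling vector a). The sketch below assumes a batch size of one and illustrative dimensions; it is not the authors' code.

```python
import torch
import torch.nn as nn
from collections import deque

class SyntacticSignalGate(nn.Module):
    """Sketch of Eqs. (6)-(7): p_t = sigma(W_bop [c_t; h^Y_t; s_t; e(z'_t)] + b_bop)."""
    def __init__(self, ctx_dim, syn_dim, dec_dim, emb_dim):
        super().__init__()
        self.W_bop = nn.Linear(ctx_dim + syn_dim + dec_dim + emb_dim, 1)

    def forward(self, c_t, h_Y_t, s_t, e_z_t, leaf_queue):
        p_t = torch.sigmoid(self.W_bop(torch.cat([c_t, h_Y_t, s_t, e_z_t], dim=-1)))
        if p_t.item() >= 0.5 and leaf_queue:
            h_Y_next = leaf_queue.popleft()   # move on to the next terminal node of L^Y_H
        else:
            h_Y_next = h_Y_t                  # keep using the current syntactic signal
        return p_t, h_Y_next

gate = SyntacticSignalGate(ctx_dim=1024, syn_dim=512, dec_dim=512, emb_dim=300)
L_Y_H = deque(torch.randn(4, 1, 512))         # one representation per pruned-tree terminal
p_t, h_Y_next = gate(torch.randn(1, 1024), torch.randn(1, 512),
                     torch.randn(1, 512), torch.randn(1, 300), L_Y_H)
```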
{
"text": "The semantic signal c t , together with decoder state s t , embedding of the input token e(z \u2032 t ) and the syntactic signal h Y t is fed through a GRU followed by softmax of the output to produce a vocabulary distribution as: We augment this with the copying mechanism as in the pointer-generator network (See et al., 2017) . Usage of such a mechanism offers a probability distribution over the extended vocabulary (the union of vocabulary words and words present in the source sentence) as follows:",
"cite_spans": [
{
"start": 305,
"end": 323,
"text": "(See et al., 2017)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "3.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P vocab = softmax(W ([c t ; h Y t ; s t ; e(z \u2032 t )]) + b),",
"eq_num": "(8)"
}
],
"section": "Overall",
"sec_num": "3.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(z) = p gen P vocab (z) + (1 \u2212 p gen ) i:z i =z \u03b1 t i p gen = \u03c3(w \u22ba c c t + w \u22ba s s t + w \u22ba x e(z \u2032 t ) + b gen )",
"eq_num": "(9)"
}
],
"section": "Overall",
"sec_num": "3.4.3"
},
{
"text": "where w c , w s , w x and b gen are learnable parameters, e(z \u2032 t ) is the input token embedding to the decoder at time step t, and \u03b1 t i is the element corresponding to the i th co-ordinate in the attention distribution as defined in Equation 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "3.4.3"
},
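A compact sketch of Equations (8)-(9): the decoder's vocabulary distribution and the copy distribution over source tokens are mixed with the generation probability p_gen. For simplicity, the source tokens are assumed to already lie in the fixed vocabulary (no extended-vocabulary bookkeeping); names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointerGeneratorOutput(nn.Module):
    """Sketch of Eqs. (8)-(9): P(z) = p_gen * P_vocab(z) + (1 - p_gen) * sum_{i: x_i = z} alpha^t_i."""
    def __init__(self, ctx_dim, syn_dim, dec_dim, emb_dim, vocab_size):
        super().__init__()
        self.W_out = nn.Linear(ctx_dim + syn_dim + dec_dim + emb_dim, vocab_size)
        self.w_gen = nn.Linear(ctx_dim + dec_dim + emb_dim, 1)

    def forward(self, c_t, h_Y_t, s_t, e_z_t, alpha, src_ids):
        P_vocab = torch.softmax(self.W_out(torch.cat([c_t, h_Y_t, s_t, e_z_t], dim=-1)), dim=-1)
        p_gen = torch.sigmoid(self.w_gen(torch.cat([c_t, s_t, e_z_t], dim=-1)))
        # add the attention mass of each source position onto its token id (copy distribution)
        P_copy = torch.zeros_like(P_vocab).scatter_add_(1, src_ids, alpha)
        return p_gen * P_vocab + (1 - p_gen) * P_copy

output_layer = PointerGeneratorOutput(1024, 512, 512, 300, vocab_size=24000)
P = output_layer(torch.randn(2, 1024), torch.randn(2, 512), torch.randn(2, 512),
                 torch.randn(2, 300), torch.softmax(torch.randn(2, 12), dim=-1),
                 torch.randint(0, 24000, (2, 12)))
```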
{
"text": "The overall objective can be obtained by taking negative log-likelihood of the distributions obtained in Equation 6 and Equation 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "3.4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 1 T T t=0 [log P(z * t ) + a t log(p t ) + (1 \u2212 a t ) log(1 \u2212 p t )]",
"eq_num": "(10)"
}
],
"section": "Overall",
"sec_num": "3.4.3"
},
{
"text": "where a t is the t th element of the vector a.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall",
"sec_num": "3.4.3"
},
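Equation (10) combines a per-token negative log-likelihood with a binary cross-entropy term that supervises the transition probabilities p_t against the gold signalling vector a. A small sketch, with shapes chosen purely for illustration:

```python
import torch

def sgcp_loss(log_probs, gold_ids, p, a):
    """Sketch of Eq. (10).
    log_probs: (T, vocab) log-probabilities; gold_ids: (T,) gold tokens z*_t;
    p: (T,) transition probabilities p_t; a: (T,) gold signalling vector with entries in {0, 1}."""
    nll = -log_probs[torch.arange(gold_ids.size(0)), gold_ids]       # -log P(z*_t)
    bce = -(a * torch.log(p) + (1 - a) * torch.log(1 - p))           # signalling-vector term
    return (nll + bce).mean()

T, V = 9, 24000
loss = sgcp_loss(torch.log_softmax(torch.randn(T, V), dim=-1),
                 torch.randint(0, V, (T,)),
                 torch.rand(T).clamp(1e-6, 1 - 1e-6),
                 torch.tensor([1., 1., 1., 0., 0., 0., 0., 0., 1.]))
```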
{
"text": "Our experiments are geared towards answering the following questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q1. Is SGCP able to generate syntax conforming sentences without losing out on meaning? (Section 5.1, 5.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q2. What level of syntactic control does SGCP offer? (Section 5.2, 5.3, 5.2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q3. How does SGCP compare against prior models, qualitatively? (Section 5.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Q4. Are the improvements achieved by SGCP statistically significant? (Section 5.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Based on these questions, we outline the methods compared (Section 4.1), along with the datasets (Section 4.2) used, evaluation criteria (Section 4.3) and the experimental setup (Section 4.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As in Chen et al. (2019a) , we first highlight the results of the two direct return-input baselines.",
"cite_spans": [
{
"start": 6,
"end": 25,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "1. Source-as-Output: Baseline where the output is the semantic input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "2. Exemplar-as-Output: Baseline where the output is the syntactic exemplar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "We compare the following competitive methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "3. SGCP (Iyyer et al., 2018 ) is a sequence-tosequence based model comprising two encoders built with LSTM (Hochreiter and Schmidhuber, 1997) to encode semantics and syntax respectively. Once the encoding is obtained, it serves as an input to the LSTM-based decoder, which is augmented with soft-attention (Bahdanau et al., 2014) over encoded states as well as a copying mechanism (See et al., 2017) to deal with out-of-vocabulary tokens. 4 4. CGEN (Chen et al., 2019a ) is a VAE (Kingma and Welling, 2014) model with two encoders to project semantic input and syntactic input to a latent space. They obtain a syntactic embedding from one encoder, using a standard Gaussian prior. To obtain the semantic representation, they use von Mises-Fisher prior, which can be thought of as a Gaussian distribution on a hypersphere. They train the model using a multi-task paradigm, incorporating paraphrase generation loss and word position loss. We considered their best model, VGVAE + LC + WN + WPL, which incorporates the above objectives.",
"cite_spans": [
{
"start": 8,
"end": 27,
"text": "(Iyyer et al., 2018",
"ref_id": "BIBREF19"
},
{
"start": 107,
"end": 141,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF15"
},
{
"start": 306,
"end": 329,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 381,
"end": 399,
"text": "(See et al., 2017)",
"ref_id": "BIBREF39"
},
{
"start": 449,
"end": 468,
"text": "(Chen et al., 2019a",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "5. SGCP (Section 3) is a sequence-and-tree-tosequence based model that encodes semantics and tree-level syntax to produce paraphrases. It uses a GRU-based (Chung et al., 2014) decoder with soft-attention on semantic encodings and a begin of phrase (bop) gate to select a leaf node in the exemplar syntax tree. We compare the following two variants of SGCP:",
"cite_spans": [
{
"start": 155,
"end": 175,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "(a) SGCP-F: Uses full constituency parse tree information of the exemplar for generating paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "(a) SGCP-R: SGCP can produce multiple paraphrases by pruning the exemplar tree at various heights. This variant first generates five candidate generations, corresponding to five different heights of the exemplar tree, namely, {H max , H max \u2212 1, H max \u2212 2, H max \u2212 3, H max \u2212 4}, for each (source, exemplar) pair. From these candidates, the one with the highest ROUGE-1 score with the source sentence is selected as the final generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
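The SGCP-R selection rule can be sketched as below: generate one candidate per pruning height and keep the one with the highest ROUGE-1 score against the source sentence. The rouge1_f1 helper is a simple unigram-overlap stand-in written for this illustration (not the metric implementation used in the paper), and the example sentences are made up.

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Illustrative unigram-overlap ROUGE-1 F1 (whitespace tokenization)."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def select_sgcp_r(source, candidates):
    """candidates: generations obtained at heights H_max, H_max-1, ..., H_max-4."""
    return max(candidates, key=lambda cand: rouge1_f1(cand, source))

best = select_sgcp_r("what should i read to become an entrepreneur ?",
                     ["what is a best book idea that entrepreneurs to read ?",
                      "what is a good book that entrepreneurs should read ?"])
```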
{
"text": "Note that, except for the return-input baselines, all methods use beam search during inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods Compared",
"sec_num": "4.1"
},
{
"text": "We train the models and evaluate them on the following datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "(1) ParaNMT-small (Chen et al., 2019a) contains 500K sentence-paraphrase pairs for training, and 1,300 manually labeled sentenceexemplar-reference, which is further split into 800 test data points and 500 dev. data points, respectively.",
"cite_spans": [
{
"start": 18,
"end": 38,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "As in Chen et al. (2019a) , our model uses only (sentence, paraphrase) during training. The paraphrase itself serves as the exemplar input during training.",
"cite_spans": [
{
"start": 6,
"end": 25,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "This dataset is a subset of the original ParaNMT-50M dataset . ParaNMT-50M is a data set generated automatically through backtranslation of original English sentences. It is inherently noisy because of imperfect neural machine translation quality, with many sentences being non-grammatical and some even being non-English sentences. Because of such noisy data points, it is optimistic to assume that the corresponding constituency parse tree would be well aligned. To that end, we propose to use the following additional dataset, which is more well-formed and has more human intervention than the ParaNMT-50M dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "(2) QQP-Pos: The original Quora Question Pairs (QQP) dataset contains about 400K sentence pairs labeled positive if they are duplicates of each other and negative otherwise. The dataset is composed of about 150K positive and 250K negative pairs. We select those positive pairs that contain both sentences with a maximum token length of 30, leaving us with \u223c146K pairs. We call this dataset QQP-Pos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Similar to ParaNMT-small, we use only the sentence-paraphrase pairs as training set and sentence-exemplar-reference triples for testing and validation. We randomly choose 140K sentence-paraphrase pairs as the training set T train , and the remaining 6K pairs T eval are used to form the evaluation set E. Additionally, let T eset = {{X, Z} : (X, Z) \u2208 T eval }. Note that T eset is a set of sentences while T eval is a set of sentence-paraphrase pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Let E = \u03c6 be the initial evaluation set. For selecting exemplar for each each sentenceparaphrase pair (X, Z) \u2208 T eval , we adopt the following procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 1: For a given (X, Z) \u2208 T eval , construct an exemplar candidate set C = T eset \u2212 {X, Z}. |C| \u2248 12, 000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 2: Retain only those sentences C \u2208 C whose sentence length (= number of tokens) differ by at most two when compared to the paraphrase Z. This is done since sentences with similar constituency-based parse tree structures tend to have similar token lengths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 3: Remove those candidates C \u2208 C, which are very similar to the source sentence X, that is, BLEU(X, C) > 0.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 4: From the remaining instances in C, choose that sentence C as the exemplar Y which has the least Tree-Edit distance with the paraphrase Z of the selected pair, namely, Y = argmin C\u2208C TED(Z, C). This ensures that the constituency-based parse tree of the exemplar Y is quite similar to that of Z, in terms of Tree-Edit distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 5: E := E \u222a (X, Y, Z).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "Step 6: Repeat procedure for all other pairs in T eval .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
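An illustrative sketch of the exemplar-selection procedure (Steps 1-4) follows. The sentence_bleu call is from NLTK; the parse and tree_edit_distance arguments are placeholders for whatever constituency parser and Tree-Edit-distance routine one plugs in. This is a paraphrase of the procedure under those assumptions, not the authors' script.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def select_exemplar(X, Z, eval_sentences, parse, tree_edit_distance):
    """X: source sentence, Z: its gold paraphrase, eval_sentences: T_eset.
    parse / tree_edit_distance are stand-ins for a constituency parser and a TED routine."""
    candidates = [c for c in eval_sentences if c not in (X, Z)]              # Step 1
    candidates = [c for c in candidates
                  if abs(len(c.split()) - len(Z.split())) <= 2]              # Step 2: length filter
    smooth = SmoothingFunction().method1
    candidates = [c for c in candidates
                  if sentence_bleu([X.split()], c.split(),
                                   smoothing_function=smooth) <= 0.6]        # Step 3: drop near-copies of X
    # Step 4: exemplar whose parse tree is closest (in TED) to the parse of the paraphrase Z
    return min(candidates, key=lambda c: tree_edit_distance(parse(Z), parse(c)))
```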
{
"text": "From the obtained evaluation set E, we randomly choose 3K triplets for the test set T test , and remaining 3K for the validation set V.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.2"
},
{
"text": "It should be noted that there is no single fully reliable metric for evaluating syntactic paraphrase generation. Therefore, we evaluate on the following metrics to showcase the efficacy of syntactic paraphrasing models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "1. Automated Evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "(i) Alignment based metrics: We compute BLEU (Papineni et al., 2002) , METEOR (Banerjee and Lavie, 2005) , ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004) scores between the generated and the reference paraphrases in the test set.",
"cite_spans": [
{
"start": 45,
"end": 68,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF34"
},
{
"start": 78,
"end": 104,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF3"
},
{
"start": 137,
"end": 148,
"text": "(Lin, 2004)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "(ii) Syntactic Transfer: We evaluate the syntactic transfer using Tree-edit distance (Zhang and Shasha, 1989) between the parse trees of:",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "(Zhang and Shasha, 1989)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "(a) the generated and the syntactic exemplar in the test set -TED-E (b) the generated and the reference paraphrase in the test set -TED-R (iii) Model-based evaluation: Because our goal is to generate paraphrases of the input sentences, we need some measure to determine if the generations indeed convey the same meaning as the original text. To achieve this, we adopt a model-based evaluation metric as used by Shen et al. (2017) Once trained, we use Classifier-1 to evaluate generations on QQP-Pos and Classifier-2 on ParaNMT-small.",
"cite_spans": [
{
"start": 411,
"end": 429,
"text": "Shen et al. (2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
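For the Tree-Edit-distance metrics (TED-E and TED-R), one possible implementation is sketched below using NLTK parse trees and the zss package (a Zhang-Shasha implementation). The paper does not state which TED implementation was used, so the library choice and the toy parse strings are assumptions.

```python
from nltk import Tree
from zss import Node, simple_distance

def to_zss(constituency_tree):
    """Convert an NLTK constituency tree (internal labels only, tokens dropped) to a zss Node."""
    node = Node(constituency_tree.label())
    for child in constituency_tree:
        if isinstance(child, Tree):
            node.addkid(to_zss(child))
    return node

generated = Tree.fromstring("(ROOT (SBARQ (WHNP (WP what)) (SQ (VBZ is) (NP (DT a) (NN mutation))) (. ?)))")
reference = Tree.fromstring("(ROOT (SBARQ (WHNP (WP what)) (SQ (VBZ is) (NP (NN chromatography))) (. ?)))")
ted_r = simple_distance(to_zss(generated), to_zss(reference))   # TED between generation and reference parses
```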
{
"text": "We first generate syntactic paraphrases using all the models (Section 4.1) on the test splits of QQP-Pos and ParaNMT-small datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "We then pair the source sentence with their corresponding generated paraphrases and send them as input to the classifiers. The Paraphrase Detection score, denoted as PDS in Table 2 , is defined as, the ratio of the number of generations predicted as paraphrases of their corresponding source sentences by the classifier to the total number of generations.",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.3"
},
{
"text": "Although TED is sufficient to highlight syntactic transfer, there has been some scepticism regarding automated metrics for paraphrase quality (Reiter, 2018) . To address this issue, we perform human evaluation on 100 randomly selected data points from the test set. In the evaluation, three judges 6 Because the test set of QQP is not public, the 90.2% number was computed on the available dev set (not used for model selection).",
"cite_spans": [
{
"start": 142,
"end": 156,
"text": "(Reiter, 2018)",
"ref_id": "BIBREF38"
},
{
"start": 298,
"end": 299,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation.",
"sec_num": "2."
},
{
"text": "(non-researchers proficient in the English language) were asked to assign scores to generated sentences based on the semantic similarity with the given source sentence. The annotators were shown a source sentence and the corresponding outputs of the systems in random order. The scores ranged from 1 (doesn't capture meaning at all) to 4 (perfectly captures the meaning of the source sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation.",
"sec_num": "2."
},
{
"text": "(a) Pre-processing. Because our model needs access to constituency parse trees, we tokenize and parse all our data points using the fully parallelizable Stanford CoreNLP Parser (Manning et al., 2014) to obtain their respective parse trees. This is done prior to training in order to prevent any additional computational costs that might be incurred because of repeated parsing of the same data points during different epochs.",
"cite_spans": [
{
"start": 177,
"end": 199,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.4"
},
{
"text": "(b) Implementation Details. We train both our models using the Adam Optimizer (Kingma and Ba, 2014) with an initial learning rate of 7e-5. We use a bidirectional three-layered GRU for encoding the tokenized semantic input and a standard pointer-generator network with GRU for decoding. The token embedding is learnable with dimension 300. To reduce the training complexity Source what should be done to get rid of laziness ? Template Exemplar how can i manage my anger ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.4"
},
{
"text": "Table 3: Sample generations of the competitive models. Please refer to Section 5.5 for details. Example 1 -- Source: what should be done to get rid of laziness ? | Template Exemplar: how can i manage my anger ? | SCPN (Iyyer et al., 2018): how can i get rid ? | CGEN (Chen et al., 2019a): how || Example 2 -- SCPN (Iyyer et al., 2018): what are the best books books to read to read ? | CGEN (Chen et al., 2019a): what 's the best book for entrepreneurs read to entrepreneurs ? | SGCP-F (Ours): what is a best book idea that entrepreneurs to read ? | SGCP-R (Ours): what is a good book that entrepreneurs should read ? || Example 3 -- Source: how do i get on the board of directors of a non profit or a for profit organisation ? | Template Exemplar: what is the best way to travel around the world for free ? | SCPN (Iyyer et al., 2018): what is the best way to prepare for a girl of a ? | CGEN (Chen et al., 2019a): what is the best way to get a non profit on directors ? | SGCP-F (Ours): what is the best way to get on the board of directors ? | SGCP-R (Ours): what is the best way to get on the board of directors of a non profit or a for profit organisation ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.4"
},
{
"text": "of the model, the maximum sequence length is kept at 60. The vocabulary size is kept at 24K for QQP and 50K for ParaNMT-small. SGCP needs access to the level of syntactic granularity for decoding, depicted as H in Figure 2 . During training, we keep on varying it randomly from 3 to H max , changing it with each training epoch. This ensures that our model is able to generalize because of an implicit regularization attained using this procedure. At each time-step of the decoding process, we keep a teacher forcing ratio of 0.9.",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 222,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.4"
},
{
"text": "Syntactic Transfer 1. Automated Metrics: As can be observed in Table 2 , our method(s) (SGCP-F/R (Section 4.1)) are able to outperform the existing baselines on both the datasets. Source-as-Output is independent of the exemplar sentence being used and since a sentence is a paraphrase of itself, the paraphrastic scores are generally high while the syntactic scores are below par. An opposite is true for Exemplar-as-Output. These baselines also serve as dataset quality indicators. It can be seen that source is semantically similar while being syntactically different from target sentence whereas the opposite is true when exemplar is compared to target sentences. Additionally, source sentences are syntactically and semantically different from exemplar sentences as can be observed from TED-E and PDS scores. This helps in showing that the dataset has rich enough syntactic diversity to learn from. Through TED-E scores it can be seen that SGCP-F is able to adhere to the syntax of the exemplar template to a much larger degree than the baseline models. This verifies that our model is able to generate meaning preserving sentences while conforming to the syntax of the exemplars when measured using standard metrics.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semantic Preservation and",
"sec_num": "5.1"
},
{
"text": "It can also be seen that SGCP-R tends to perform better than SGCP-F in terms of paraphrastic scores while taking a hit on the syntactic scores. This makes sense, intuitively, because in some cases SGCP-R tends to select lower H values for syntactic granularity. This can also be observed from the example given in Table 6 where H = 6 is more favorable than H = 7, because of better meaning retention.",
"cite_spans": [],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Semantic Preservation and",
"sec_num": "5.1"
},
{
"text": "Although CGEN performs close to our model in terms of BLEU, ROUGE, and METEOR scores on ParaNMT-small dataset, its PDS is still much lower than that of our model, suggesting that our model is better at capturing the original meaning of the source sentence. In order to show that the results are not coincidental, we test the statistical significance of our model. We follow the nonparametric Pitman's permutation test (Dror et al., 2018) and observe that our model is statistically significant when the significance level (\u03b1) is taken to be 0.05. Note that this holds true for all metric on both the datasets except ROUGE-2 on ParaNMT-small. Table 4 : A comparison of human evaluation scores for comparing quality of paraphrases generated using all models. Higher score is better. Please refer to Section 5.1 for details.",
"cite_spans": [
{
"start": 418,
"end": 437,
"text": "(Dror et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 642,
"end": 649,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Preservation and",
"sec_num": "5.1"
},
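For reference, here is a minimal sketch of the kind of paired permutation test mentioned above: the two systems' per-sentence scores are randomly swapped, and the observed gap in mean score is compared against the permuted gaps. The number of permutations, the variable names, and the dummy score lists are assumptions for illustration only; this Monte Carlo variant approximates the exact test, which would enumerate all assignments.

```python
import numpy as np

def permutation_test(scores_a, scores_b, n_permutations=10_000, seed=0):
    """Approximate two-sided permutation test on the difference of mean scores."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    observed = abs(a.mean() - b.mean())
    exceed = 0
    for _ in range(n_permutations):
        swap = rng.random(len(a)) < 0.5          # randomly swap the systems per sentence
        perm_a = np.where(swap, b, a)
        perm_b = np.where(swap, a, b)
        if abs(perm_a.mean() - perm_b.mean()) >= observed:
            exceed += 1
    return exceed / n_permutations               # p-value

# Dummy per-sentence BLEU scores for two systems (illustrative only).
p_value = permutation_test([0.42, 0.55, 0.38, 0.61], [0.35, 0.50, 0.33, 0.58])
print(f"significant at alpha = 0.05? {p_value < 0.05}")
```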
{
"text": "2. Human Evaluation: Table 4 shows the results of human assessment. It can be seen that annotators, generally tend to rate SGCP-F and SGCP-R (Section 4.1) higher than the baseline models, thereby highlighting the efficacy of our models. This evaluation additionally shows that automated metrics are somewhat consistent with the human evaluation scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Preservation and",
"sec_num": "5.1"
},
{
"text": "1. Syntactical Granularity: Our model can work with different levels of granularity for the exemplar syntax, namely, different tree heights of the exemplar tree can be used for decoding the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic Control",
"sec_num": "5.2"
},
{
"text": "As can been seen in Table 6 , at height 4 the syntax tree provided to the model is not enough to generate the full sentence that captures the meaning of the original sentence. As we increase the height to 5, it is able to capture the semantics better by predicting some of in the sentence. We see that at heights 6 and 7 SGCP is able to capture both semantics and syntax of the source and exemplar, respectively. However, as we provide the complete height of the tree (i.e., 7), it further tries to follow the syntactic input more closely leading to sacrifice in the overall relevance since the original sentence is about pure substances and not a pure substance. It can be inferred from this example that because a source sentence and exemplar's syntax might not be fully compatible with each other, using the complete syntax tree can potentially lead to loss of relevance and grammaticality. Hence by choosing different levels of syntactic granularity, one can address the issue of compatibility to a certain extent. Table 5 shows sample generations of our model on multiple exemplars for a given source sentence. It can be observed that SGCP can generate high-quality outputs for a variety of different template exemplars even the ones which differ a lot from the original sentence in terms of their syntax. A particularly interesting exemplar is what is chromosomal mutation ? what are some examples ?. Here, SGCP is able to generate a sentence with two question marks while preserving the essence of the source sentence. It should also be noted that the exemplars used in Table 5 were selected manually from the test sets, considering only their qualitative compatibility with the source sentence. Unlike the procedure used for the creation of QQP-Pos dataset, the final paraphrases were not kept in hand while selecting the exemplars. In real-world settings, where a gold paraphrase won't be present, these results are indicative of the qualitative efficacy of our method.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 1019,
"end": 1026,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1577,
"end": 1584,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Syntactic Control",
"sec_num": "5.2"
},
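To make the notion of decoding at different tree heights more concrete, the small illustration below cuts a hand-written constituency parse off at H = 4 and H = 6. The pruning rule and the parse itself are assumptions made for illustration (the paper obtains parses from Stanford CoreNLP), so the printed templates only approximate what the tree encoder would see.

```python
from nltk import Tree

def truncate(tree, height):
    """Keep only the top `height` levels of a constituency tree (leaves kept if reached)."""
    if not isinstance(tree, Tree):
        return tree                              # word-level leaf
    if height <= 1:
        return Tree(tree.label(), [])            # unexpanded nonterminal
    return Tree(tree.label(), [truncate(child, height - 1) for child in tree])

# Hand-written, simplified parse of the exemplar from Table 6 (an assumption).
exemplar = Tree.fromstring(
    "(ROOT (SBARQ (WHNP (WP what)) (SQ (VBP are) (NP (NP (DT the) (NNS characteristics)) "
    "(PP (IN of) (NP (DT the) (NN theater))))) (. ?)))")

for h in (4, 6):
    print(f"H = {h}:", truncate(exemplar, h))    # coarser template at H = 4, finer at H = 6
```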
{
"text": "ROUGE-based selection from the candidates favors paraphrases that have higher n-gram overlap with their respective source sentences, hence may capture source's meaning better. This hypothesis can be directly observed from the results in Tables 2 and 4 , where we see higher values on automated semantic and human evaluation scores. Although this helps in obtaining better semantic generation, it tends to result in higher TED values. One possible reason is that, when provided with the complete tree, finegrained information is available to the model for decoding and it forces the generations to adhere to the syntactic structure. In contrast, at lower heights, the model is provided with lesser syntactic information but equivalent semantic information.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 251,
"text": "Tables 2 and 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "SGCP-R Analysis",
"sec_num": "5.3"
},
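A minimal sketch of this selection step is given below, using a simple unigram-recall function as a stand-in for the ROUGE implementation actually used (an assumption), with the candidate generations taken from Table 6. Selecting by overlap with the source, rather than with the exemplar, is exactly why SGCP-R trades some syntactic adherence for better meaning retention.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Unigram recall of the reference's tokens in the candidate (a ROUGE-1 stand-in)."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    return sum((cand & ref).values()) / max(sum(ref.values()), 1)

source = "what are pure substances ? what are some examples ?"
candidates = {                      # generations at different heights (from Table 6)
    4: "what are pure substances ?",
    5: "what are some of pure substances ?",
    6: "what are some examples of pure substances ?",
    7: "what are some examples of a pure substance ?",
}

best_h, best = max(candidates.items(), key=lambda kv: rouge1_recall(kv[1], source))
print(best_h, best)                 # the height-6 candidate overlaps most with the source
```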
{
"text": "As can be seen from Table 7 , SGCP not only incorporates the best aspects of both the prior models, namely SCPN and CGEN, but also utilizes the complete syntactic information obtained using the constituency-based parse trees of the exemplar.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.4"
},
{
"text": "From the generations in Table 3 , we can see that our model is able to capture both the semantics of the source text as well as the syntax of template. SCPN, evidently, can produce outputs with the template syntax, but it does so at the cost of semantics of the source sentence. This can also be verified from the results in sentence, as in example 1 in Table 3 , truncating the penultimate token with of. The problem of abrupt ending due to insufficient syntactic input length was highlighted in Chen et al. (2019a) and we observe similar trends. SGCP, on the other hand, generates more relevant and grammatical sentences.",
"cite_spans": [
{
"start": 497,
"end": 516,
"text": "Chen et al. (2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 3",
"ref_id": null
},
{
"start": 354,
"end": 361,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.4"
},
{
"text": "Based on empirical evidence, SGCP alleviates this shortcoming, possibly due to dynamic syntactic control and decoding. This can be seen in, for example, example 3 in Table 3 where CGEN truncates the sentence abruptly (penultimate token = directors) but SGCP is able to generate relevant sentence without compromising on grammaticality.",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.4"
},
{
"text": "All natural language English sentences cannot necessarily be converted to any desirable syntax. We note that SGCP does not take into account the compatibility of source sentence and template exemplars and can freely generate syntax conforming paraphrases. This, at times, leads to imperfect paraphrase conversion and nonsensical sentences like example 6 in Table 5 (is career useful in software ?). Identifying compatible exemplars is an important but separate task in itself, which we defer to future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Limitations and Future Directions",
"sec_num": "5.5"
},
{
"text": "Another important aspect is that the task of paraphrase generation is inherently domain agnostic. It is easy for humans to adapt to new domains for paraphrasing. However, because of the nature of the formulation of the problem in NLP, all the baselines, as well as our model(s), suffer from dataset bias and are not directly applicable to new domains. A prospective future direction can be to explore it from the lens of domain independence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Directions",
"sec_num": "5.5"
},
{
"text": "Analyzing the utility of controlled paraphrase generations for the task of data augmentation is another interesting possible direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Directions",
"sec_num": "5.5"
},
{
"text": "In this paper we proposed SGCP, an end-toend framework for the task of syntactically controlled paraphrase generation. SGCP generates paraphrase of an input sentence while conforming to the syntax of an exemplar sentence provided along with the input. SGCP comprises a GRUbased sentence encoder, a modified RNN-based tree encoder, and a pointer-generator-based novel decoder. In contrast to previous work that focuses on a limited amount of syntactic control, our model can generate paraphrases at different levels of granularity of syntactic control without compromising on relevance. Through extensive evaluations on real-world datasets, we demonstrate SGCP's efficacy over state-of-the-art baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We believe that the above approach can be useful for a variety of text generation tasks including syntactic exemplar-based abstractive summarization, text simplification and data-totext generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://www.kaggle.com/c/quoraquestion-pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Obtained using the Stanford CoreNLP toolkit(Manning et al., 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the results for SCPN differ from the ones shown inIyyer et al. (2018). This is because the dataset used inIyyer et al. (2018) is at least 50 times larger than the largest dataset (ParaNMT-small) in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because the ParaNMT dataset only contains paraphrase pairs, we augment it with the PAWS(Zhang et al., 2019) dataset to acquire negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported in part by the Ministry of Human Resource Development (Government of India). We thank the action editor Asli Celikyilmaz and the three anonymous reviewers for their helpful suggestions in preparing the manuscript. We also thank Chandrahas for his indispensable comments on earlier drafts of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards string-to-tree neural machine translation",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "132--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 132-140. Van- couver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Conceptual and empirical bases of readability formulas",
"authors": [
{
"first": "Richard",
"middle": [
"Chase"
],
"last": "",
"suffix": ""
},
{
"first": "Anderson",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Alice",
"middle": [],
"last": "Davison",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Chase Anderson and Alice Davison. 1986. Conceptual and empirical bases of readability formulas. Center for the Study of Reading Technical Report; no. 392.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evalua- tion with improved correlation with human judgments. In Proceedings of the ACL Work- shop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning to paraphrase: An unsupervised approach using multiple-sequence alignment",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology- Volume 1, pages 16-23. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Generating sentences from a continuous space",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vilnis",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Rafal",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Jozefowicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Retrieve, rerank and rewrite: Soft template based neural summarization",
"authors": [
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "152--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 152-161.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Controllable paraphrase generation with a syntactic exemplar",
"authors": [
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qingming",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5972--5984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019a. Controllable para- phrase generation with a syntactic exemplar. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5972-5984, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A multi-task approach for disentangling syntax and semantics in sentence representations",
"authors": [
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qingming",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2453--2464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019b. A multi-task approach for disentangling syntax and seman- tics in sentence representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453-2464, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semantic distinctions and memory for complex sentences",
"authors": [
{
"first": "H",
"middle": [],
"last": "Herbert",
"suffix": ""
},
{
"first": "Eve",
"middle": [
"V"
],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 1968,
"venue": "Quarterly Journal of Experimental Psychology",
"volume": "20",
"issue": "2",
"pages": "129--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert H. Clark and Eve V. Clark. 1968. Semantic distinctions and memory for complex sentences. Quarterly Journal of Experimental Psychology, 20(2):129-138.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Comput- ational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A deep generative framework for paraphrase generation",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Prawaan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unt: Subfinder: Combining knowledge sources for automatic lexical substitution",
"authors": [
{
"first": "Samer",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Andras",
"middle": [],
"last": "Csomai",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations",
"volume": "",
"issue": "",
"pages": "410--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samer Hassan, Andras Csomai, Carmen Banea, Ravi Sinha, and Rada Mihalcea. 2007. Unt: Subfinder: Combining knowledge sources for automatic lexical substitution. In Proceedings of the 4th International Workshop on Semantic Evaluations, pages 410-413. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Gaussian error linear units (gelus)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.08415"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Toward controlled generation of text",
"authors": [
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1587--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. Toward controlled generation of text. In Proceedings of the 34th International Confer- ence on Machine Learning-Volume 70, pages 1587-1596. JMLR.org.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The effects of explicitness and clause order on the comprehension of reversible causal relationships. Reading Research Quarterly",
"authors": [
{
"first": "W",
"middle": [],
"last": "Judith",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Irwin",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "477--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith W. Irwin. 1980. The effects of explicitness and clause order on the comprehension of reversible causal relationships. Reading Re- search Quarterly, pages 477-488.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Image-to-image translation with conditional adversarial networks",
"authors": [
{
"first": "Phillip",
"middle": [],
"last": "Isola",
"suffix": ""
},
{
"first": "Jun-Yan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tinghui",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Alexei",
"middle": [
"A"
],
"last": "Efros",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1125--1134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-image translation with conditional adversarial net- works. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125-1134.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adversarial example generation with syntactically controlled paraphrase networks",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled para- phrase networks. In Proceedings of the 2018",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1875--1885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Understanding connectives",
"authors": [
{
"first": "Evelyn",
"middle": [
"Walker"
],
"last": "Katz",
"suffix": ""
},
{
"first": "Sandor",
"middle": [
"B"
],
"last": "Brent",
"suffix": ""
}
],
"year": 1968,
"venue": "Journal of Memory and Language",
"volume": "7",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evelyn Walker Katz and Sandor B. Brent. 1968. Understanding connectives. Journal of Memory and Language, 7(2):501.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Auto-encoding variational bayes",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational bayes. In Proceed- ings of ICLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Bhattamishra",
"suffix": ""
},
{
"first": "Manik",
"middle": [],
"last": "Bhandari",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3609--3619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submod- ular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609-3619, Minneapo- lis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The roots of coherence in discourse",
"authors": [
{
"first": "Elena",
"middle": [
"T"
],
"last": "Levy",
"suffix": ""
}
],
"year": 2003,
"venue": "Human Development",
"volume": "46",
"issue": "4",
"pages": "169--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena T. Levy. 2003. The roots of coherence in discourse. Human Development, 46(4): 169-188.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A diversitypromoting objective function for neural conversation models",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "110--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity- promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Paraphrase generation with deep reinforcement learning",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3865--3878",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 3865-3878, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Decomposable neural paraphrase generation",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3403--3414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable neural paraphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403-3414, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Sum- marization Branches Out. https://www. aclweb.org/anthology/W04-1013.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv: 1907.11692.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bonnie",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "341--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani and Bonnie J. Dorr. 2010. Gen- erating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-387.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Asso- ciation for Computational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Paraphrasing questions using given and new information. Computational Linguistics",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kathleen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 1983,
"venue": "",
"volume": "9",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen R. McKeown. 1983. Paraphrasing questions using given and new information. Computational Linguistics, 9(1):1-10.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "BLEU: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Text generation with exemplar-based adaptive decoding",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2555--2565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Ankur Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text generation with exemplar-based adaptive decoding. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2555-2565, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Neural paraphrase generation with stacked residual LSTM networks",
"authors": [
{
"first": "Aaditya",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sadid",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Datla",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2923--2934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM net- works. In Proceedings of COLING 2016, the 26th International Conference on Comput- ational Linguistics: Technical Papers, pages 2923-2934, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Monolingual machine translation for paraphrase generation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "142--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 142-149.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A structured review of the validity of BLEU",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "3",
"pages": "393--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter. 2018. A structured review of the validity of BLEU. Computational Linguistics, 44(3):393-401.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summa- rization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Style transfer from nonparallel text by cross-alignment",
"authors": [
{
"first": "Tianxiao",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6830--6841",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non- parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830-6841.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Does string-based neural MT learn source syntax?",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Inkit",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1526--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural MT learn source syntax? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1526-1534, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A survey of research on text simplification",
"authors": [
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": ""
}
],
"year": 2014,
"venue": "ITL-International Journal of Applied Linguistics",
"volume": "165",
"issue": "",
"pages": "259--298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Advaith Siddharthan. 2014. A survey of research on text simplification. ITL-International Jour- nal of Applied Linguistics, 165(2):259-298.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Syntactically guided neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Stahlberg",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Hasler",
"suffix": ""
},
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "299--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Stahlberg, Eva Hasler, Aurelien Waite, and Bill Byrne. 2016. Syntactically guided neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 299-305.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1556--1566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic repre- sentations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Com- putational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556-1566.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692-2700.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "A neural conversational model",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.05869"
]
},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv: 1506.05869.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2018. Paranmt- 50m: Pushing the limits of paraphrastic sen- tence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Paraphrase generation as monolingual translation: Data and evaluation",
"authors": [
{
"first": "",
"middle": [],
"last": "Sander Wubben",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Bosch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 6th International Natural Language Generation Conference",
"volume": "",
"issue": "",
"pages": "203--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2010. Paraphrase generation as monolingual translation: Data and evaluation. In Proceedings of the 6th International Natural Language Generation Conference, pages 203-207. Association for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Latent part-of-speech sequences for neural machine translation",
"authors": [
{
"first": "Xuewen",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yingru",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dongliang",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, and Niranjan Balasubramanian. 2019. Latent part-of-speech sequences for neural machine translation.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Unsupervised text style transfer using language models as discriminators",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7287--7298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Zhiting Hu, Chris Dyer, Eric P. Xing, and Taylor Berg-Kirkpatrick. 2018. Un- supervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287-7298.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Simple fast algorithms for the editing distance between trees and related problems",
"authors": [
{
"first": "Kaizhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Shasha",
"suffix": ""
}
],
"year": 1989,
"venue": "SIAM Journal on Computing",
"volume": "18",
"issue": "6",
"pages": "1245--1262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing, 18(6):1245-1262.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "PAWS: Paraphrase adversaries from word scrambling",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1298--1308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298-1308.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Combining multiple resources to improve SMT-based paraphrasing model",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "1021--1029",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Cheng Niu, Ming Zhou, Ting Liu, and Sheng Li. 2008. Combining multiple resources to improve SMT-based paraphras- ing model. Proceedings of ACL-08: HLT. pages 1021-1029.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "where [; ] represents concatenation of constituent elements, and W, b are trainable parameters.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "Results on QQP and ParaNMT-small dataset. Higher\u2191 BLEU, METEOR, ROUGE, and PDS is better whereas lower\u2193 TED score is better. SCPN-R selects the best candidate out of many, resulting in performance boost for semantic preservation (shown in box). We bold the statistically significant results of SCPN-F, only, for a fair comparison with the baselines. Note that Source-as-Output, and Exemplar-as-Output are only dataset quality indicators and not the competitive baselines. Please see Section 5 for details. which achieve accuracies of 90.2% and 94.0% on their respective test sets.6",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"text": ", where SCPN performs poorly on PDS as compared with other models. In contrast, CGEN and SGCP retain much better semantic information, as is desirable. While generating sentences, CGEN often abruptly ends the",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">SOURCE: how do i develop my career in software ?</td></tr><tr><td>SYNTACTIC EXEMPLAR</td><td>SGCP GENERATIONS</td></tr><tr><td>how can i get a domain for free ?</td><td>how can i develop a career in software ?</td></tr><tr><td>what is the best way to register a company ?</td><td>what is the best way to develop career in software ?</td></tr><tr><td>what are good places to visit in new york ?</td><td>what are good ways to develop my career in software ?</td></tr><tr><td>can i make 800,000 a month betting on horses ?</td><td>can i develop my career in software ?</td></tr><tr><td>what is chromosomal mutation ? what are some examples ?</td><td>what is good career ? what are some of the ways to develop my career in software ?</td></tr><tr><td>is delivery free on quikr ?</td><td>is career useful in software ?</td></tr><tr><td>is it possible to mute a question on quora ?</td><td>is it possible to develop my career in software ?</td></tr></table>"
},
"TABREF7": {
"text": "Sample SGCP-R generations with a single source sentence and multiple syntactic exemplars. Please refer to Section 5.4 for details.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>S</td><td>what are pure substances ? what are some examples ?</td></tr><tr><td>E</td><td>what are the characteristics of the elizabethan theater ?</td></tr><tr><td>H = 4</td><td>what are pure substances ?</td></tr><tr><td>H = 5</td><td>what are some of pure substances ?</td></tr><tr><td>H = 6</td><td>what are some examples of pure substances ?</td></tr><tr><td>H = 7</td><td>what are some examples of a pure substance ?</td></tr></table>"
},
"TABREF8": {
"text": "Sample generations with different levels of syntactic control. S and E stand for source and exemplar, respectively. Please refer to Section 5.2 for details.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Single-Pass</td><td>Syntactic Signal</td><td>Granularity</td></tr><tr><td>SCPN</td><td>\u2717</td><td>Linearized Tree</td><td>\u2713</td></tr><tr><td>CGEN</td><td>\u2713</td><td>POS Tags (During</td><td>\u2717</td></tr><tr><td/><td/><td>training)</td><td/></tr><tr><td>SGCP</td><td>\u2713</td><td>Constituency Parse</td><td>\u2713</td></tr><tr><td/><td/><td>Tree</td><td/></tr></table>"
},
"TABREF9": {
"text": "Comparison of different syntactically controlled paraphrasing methods. Please refer to Section 5.4 for details.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}