{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:28.941729Z"
},
"title": "Document Level Hierarchical Transformer",
"authors": [
{
"first": "Najam",
"middle": [],
"last": "Zaidi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Monash University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Generating long and coherent text is an important and challenging task encompassing many application areas such as summarization, document level machine translation and story generation. Despite the success in modeling intrasentence coherence, existing long text generation models (e.g., BART and GPT-3) still struggle to maintain a coherent event sequence throughout the generated text. We conjecture that this is because of the difficulty for the model to revise, replace or revoke any part that has been generated by the model. In this chapter, we present a novel semiautoregressive document generation model capable of revising and editing the generated text. Building on recent models by (Gu et al., 2019; Xu and Carpuat, 2020), we propose document generation as a hierarchical Markov decision process with a two level hierarchy, where the high and low level editing programs generate and refine the document. We train our model using imitation learning and introduce roll-in policy such that each policy learns on the output of applying the previous action. Experiments applying the proposed approach convey various insights on the problems of long text generation using our model. We suggest various remedies such as using distilled dataset, designing better attention mechanisms and using autoregressive models as a low level program.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Generating long and coherent text is an important and challenging task encompassing many application areas such as summarization, document level machine translation and story generation. Despite the success in modeling intrasentence coherence, existing long text generation models (e.g., BART and GPT-3) still struggle to maintain a coherent event sequence throughout the generated text. We conjecture that this is because of the difficulty for the model to revise, replace or revoke any part that has been generated by the model. In this chapter, we present a novel semiautoregressive document generation model capable of revising and editing the generated text. Building on recent models by (Gu et al., 2019; Xu and Carpuat, 2020), we propose document generation as a hierarchical Markov decision process with a two level hierarchy, where the high and low level editing programs generate and refine the document. We train our model using imitation learning and introduce roll-in policy such that each policy learns on the output of applying the previous action. Experiments applying the proposed approach convey various insights on the problems of long text generation using our model. We suggest various remedies such as using distilled dataset, designing better attention mechanisms and using autoregressive models as a low level program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Generating long and coherent text encompass various tasks such as summarization, story generation, document level machine translation and document level post editing. Each task is characterised by modelling long range dependencies to make the document coherent as well as modelling a high level plot to make the document thematically consistent (Fan et al., 2018) . This is challenging as the models need to plan content, while producing local words consistent with the global context in a timely manner.",
"cite_spans": [
{
"start": 345,
"end": 363,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work on autoregressive generation models, such as GPT-3 and BART (Lewis et al., 2019; Brown et al., 2020) , have shown impressive performance in generating short fluent text with a maximum length ranging from 150 to 350 tokens (Bosselut et al., 2018; Shen et al., 2019; Zhao et al., 2020b) . But applying the same model to generate longer passages of text (e.g., 1000 tokens) has resulted in syntactic and semantic errors throughout the document requiring extensive human curations (Tan et al., 2020) . These massive language models are usually pre-trained using large corpora of generic text, and then fine-tuned with small domainspecific data. Most of the time, the models are not publicly available to adapt to arbitrary desired domains.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Lewis et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 93,
"end": 112,
"text": "Brown et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 234,
"end": 257,
"text": "(Bosselut et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 258,
"end": 276,
"text": "Shen et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 277,
"end": 296,
"text": "Zhao et al., 2020b)",
"ref_id": "BIBREF31"
},
{
"start": 489,
"end": 507,
"text": "(Tan et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, recent non-autoregressive approaches allow generation to be done within a much smaller number of decoding iterations (Gu et al., 2017; Kasai et al., 2020) . But due to its problems with modelling dependencies among the tokens, the approach still lags behind its autoregressive counterparts and has not yet been applied to long text generation (Zhou et al., 2019; Gu and Kong, 2020) . In both of these model families, the length of generated sequences is either fixed or monotonically increased as the decoding proceeds. This makes them incompatible with human-level intelligence where humans can revise and edit any part of their generated text.",
"cite_spans": [
{
"start": 136,
"end": 153,
"text": "(Gu et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 154,
"end": 173,
"text": "Kasai et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 362,
"end": 381,
"text": "(Zhou et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 382,
"end": 400,
"text": "Gu and Kong, 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a novel semiautoregressive document generation model capable of revising and editing the generated text. We build on recent models by Xu and Carpuat, 2020) , who framed generation as a Markov decision process (Garcia and Rachelson, 2013) and showed that iteratively refining output sequences via insertions and repositions yields a fast and flexible generation process for machine trans-lation and automatic post editing task. We extend their model by proposing document generation as a hierarchical Markov decision process with a two level hierarchy. The high level program produce actions a H \u2208 {reposition, insert, update} which tries to capture global context and plan content while the low level program produce actions a L \u2208 {reposition, insert} to generate local words in a consistent and timely manner. Due to unavailability of large-scale data to train our model, we propose a noising process to simulate the error patterns observed in document level tasks such as redundancy of words, key information omission and disordered sentences. The noising process can be reversed by applying a set of high and low level actions to get back the original document. This serve as an efficient oracle to train our model using imitation learning (Hussein et al., 2017) . The rollin policy is defined such that each policy learns on the output of applying the previous action.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "Xu and Carpuat, 2020)",
"ref_id": null
},
{
"start": 235,
"end": 263,
"text": "(Garcia and Rachelson, 2013)",
"ref_id": "BIBREF6"
},
{
"start": 1269,
"end": 1291,
"text": "(Hussein et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We cast document generation and refinement as a hierarchical Markov decision process (HMDP) with a two level hierarchy. The high level program is defined by the tuple (D, A H , E , R, d 0 ) where a state d \u2208 D corresponds to a set of sequences d = (s 1 , s 2 , ..., s L ) up to length L, and d 0 \u2208 D is the initial document. The low level program corresponds to the tuple (S , A L , E , R, s 0 ) where a state s \u2208 S corresponds to a sequence of tokens s = (w 1 , w 2 , ..., w n ) from the vocabulary V up to length n, and s 0 \u2208 S is the initial sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},
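{
"text": "The following is a minimal sketch, not part of the original implementation, of how the two level state space above can be represented in Python; the class names Document and Sentence and the action tuples are illustrative only.\nfrom dataclasses import dataclass, field\nfrom typing import List\n\n# Low level state: a sequence of tokens s = (w_1, ..., w_n)\n@dataclass\nclass Sentence:\n    tokens: List[str] = field(default_factory=list)\n\n# High level state: a document d = (s_1, ..., s_L)\n@dataclass\nclass Document:\n    sentences: List[Sentence] = field(default_factory=list)\n\n# High and low level action spaces as described above\nHIGH_LEVEL_ACTIONS = ('reposition', 'insert', 'update')\nLOW_LEVEL_ACTIONS = ('reposition', 'insert')\n\nd0 = Document()  # empty initial document d_0\ns0 = Sentence()  # empty initial sequence s_0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},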
{
"text": "At any time step t , the model takes as input d t\u22121 , the output from the previous iteration, chooses an action a H \u2208 A H to refine the sequence into d t = E (d t\u22121 , a H ), and receives a reward r t = R(d t ). The policy \u03c0 H maps the input sequence d t\u22121 to a probability distribution P (A H ) over the action space A H . A high level program may call a low level program with the initial input s 0 . It is similar to high level program with its set of actions a L \u2208 A L , reward function r t = R(s t ) and the policy \u03c0 L . Instead of sequences, the low level actions are applied to individual tokens. This results in a trajectory \u03c3 :=",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},
{
"text": "{d 1 , a 1 H , \u03c4 1 , r 1 , d 2 , ...., d N , a N H , \u03c4 N , r N , d N+1 } which is the concatenation of high-level trajectory \u03c4 H := (d 1 , a 1 H , r 1 , d 2 , a 2 H , r 2 , ...., d H +1 ) and the low level trajectory \u03c4 L := (s 1 , a 1 L , s 2 , a 2 L , ...., s T +1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},
{
"text": "We define a reward function R = d i st (D, D * ) which measures the distance between the generation and the groundtruth sequence. We use Levenstein distance (?) as our distance metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},
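{
"text": "As an illustration only (a pure Python edit distance helper rather than any specific library), the reward can be computed as the negative Levenshtein distance between the generated and ground-truth token sequences:\ndef levenshtein(a, b):\n    # standard dynamic programming edit distance over two sequences\n    prev = list(range(len(b) + 1))\n    for i, x in enumerate(a, 1):\n        curr = [i]\n        for j, y in enumerate(b, 1):\n            curr.append(min(prev[j] + 1,              # deletion\n                            curr[j - 1] + 1,          # insertion\n                            prev[j - 1] + (x != y)))  # substitution\n        prev = curr\n    return prev[-1]\n\ndef reward(generated, reference):\n    # R = dist(D, D*): a smaller distance means a larger reward\n    return -levenshtein(generated, reference)\n\nprint(reward('abc', 'adc'))  # -1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation 2.1 Hierarchical Markov decision process",
"sec_num": "2"
},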
{
"text": "Following the formulation of HDMP, we define a high level policy \u03c0 H : d \u2212 \u2192 A H , as well as the low level policy \u03c0 L : s \u2212 \u2192 A L as a mapping from state to actions. The high level actions consist of a H \u2208 {r eposi t i on, i nser t , upd at e} and the low level actions consist of a L \u2208 {r eposi t i on, i nser t }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMDP policies:",
"sec_num": "2.2"
},
{
"text": "INSERT H : The insertion policy reads the input document d consisting of set of sequences {s 1 , s 2 , ...s i , s i+1 , ...s L }, and for every possible slot i , i + 1, the insertion policy \u03c0 i ns H (x|i , d) makes a binary decision which is 1 (insert here) or 0 (do not insert). For each insertion position, low level MDP is called to generate the new sequence from scratch. This allows the model to generate a sentence conditioned on the surrounding context resulting in outputs that are consistent with the theme and plot of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HMDP policies:",
"sec_num": "2.2"
},
{
"text": "The update policy reads the input document d, consisting of set of sequences {s 1 , s 2 , ...s i , s i+1 , ...s L }, and for every sequence position i , the update policy \u03c0 upd H (x|i , d) makes a binary decision which is 1 (update this sentence) or 0 (do not update). In order to make the update, the low level MDP is called to refine the given sequence. This allows the model to correct mistakes and improve the sentences generated by the insert policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "UPDATE H :",
"sec_num": null
},
{
"text": "The reposition policy reads in the document d consisting of set of sequences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "REPOSITION H :",
"sec_num": null
},
{
"text": "The Low level MDP is made up of actions reposition and insert. They work in a similar manner as defined in the paper Xu and Carpuat, 2020) with the difference that the conditioning context contains document d along with the sentence s. Therefore the reposition policy at the word level is defined by \u03c0",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "Xu and Carpuat, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INSERT L , REPOSITION L :",
"sec_num": null
},
{
"text": "The insertion policy is made up of a placeholder and token prediction policy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INSERT L , REPOSITION L :",
"sec_num": null
},
{
"text": ") respectively. The placeholder policy first determines the number of words that need to be inserted at a given position. Special <mask> tokens are then inserted. These <mask> tokens are filled by the token prediction policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INSERT L , REPOSITION L :",
"sec_num": null
},
{
"text": "The generative process is outlined in algorithm 1. The combination of high and low level policies can either generate a document from scratch or edit a given initial document. The insertion and update policy calls the low level program in Lines 6 and 11. Line 2 in algorithm 2 builds the initial scaffolding which is later used by the algorithm for its set of actions. If the low level program is called by the high level update action the initial scaffolding is created by concatenating the sentences identified by the high level update policy. Otherwise in case of high level insert action, it is the concatenation of empty sentences. Although one iteration is made up of multiple stages, within each stage an action is performed in parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative process:",
"sec_num": "2.3"
},
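{
"text": "The sketch below is a Python-style sketch of the staged generative process described above; the policy objects high_policy and low_policy and their methods are hypothetical, not part of the released model.\ndef generate(document, high_policy, low_policy, max_iterations=10):\n    # One iteration applies reposition, insert and update in stages;\n    # within each stage the chosen action is applied in parallel.\n    for _ in range(max_iterations):\n        # High level reposition: reorder or delete whole sentences.\n        document = high_policy.reposition(document)\n        # High level insert: choose slots, then call the low level\n        # program to write each new sentence from scratch.\n        for slot in high_policy.insert_slots(document):\n            document.insert(slot, low_policy.generate(document, slot))\n        # High level update: pick sentences and let the low level\n        # program refine them via word level reposition and insert.\n        for idx in high_policy.update_indices(document):\n            document[idx] = low_policy.refine(document[idx], document)\n        if high_policy.should_stop(document):\n            break\n    return document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative process:",
"sec_num": "2.3"
},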
{
"text": "3 Hierarchical Transformer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative process:",
"sec_num": "2.3"
},
{
"text": "Our model is based on the Transformer encoderdecoder architecture (Vaswani et al., 2017) . We extract the hidden representations (h 1 , ..., h n ) to make the policy predictions. We extract sentence representations by concatenating all sentences with a special <sep> token. The hidden states corresponding to these special tokens are then used as sentence representation by the policies. Along with position embeddings for individual tokens, we also introduce segment embeddings for sentences, which identify the position of a sentence in a document. We show the illustration of the proposed model in Figure 1 .",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 601,
"end": 609,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
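{
"text": "A minimal sketch, assuming PyTorch and a generic transformer encoder rather than the authors' code, of how sentence representations can be read off the hidden states at the <sep> positions, with segment embeddings added per sentence:\nimport torch\nimport torch.nn as nn\n\nclass SentenceRepExtractor(nn.Module):\n    def __init__(self, vocab_size, d_model=512, max_sentences=64, sep_id=3):\n        super().__init__()\n        self.sep_id = sep_id\n        self.tok_emb = nn.Embedding(vocab_size, d_model)    # token embeddings\n        self.pos_emb = nn.Embedding(2048, d_model)          # position embeddings (p)\n        self.seg_emb = nn.Embedding(max_sentences, d_model) # segment embeddings (s)\n        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)\n        self.encoder = nn.TransformerEncoder(layer, num_layers=2)\n\n    def forward(self, token_ids, segment_ids):\n        # token_ids, segment_ids: (batch, seq_len); segment_ids hold the index\n        # of the sentence each token belongs to.\n        positions = torch.arange(token_ids.size(1), device=token_ids.device)\n        x = self.tok_emb(token_ids) + self.pos_emb(positions) + self.seg_emb(segment_ids)\n        h = self.encoder(x)                  # (batch, seq_len, d_model)\n        # hidden states at <sep> positions serve as sentence representations\n        return h[token_ids.eq(self.sep_id)]  # (num_sentences, d_model)\n\nenc = SentenceRepExtractor(vocab_size=100)\ntokens = torch.tensor([[5, 6, 3, 7, 8, 3]])    # two sentences, each ending in <sep> (id 3)\nsegments = torch.tensor([[0, 0, 0, 1, 1, 1]])\nprint(enc(tokens, segments).shape)  # torch.Size([2, 512])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},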
{
"text": "We implement policies as classifiers whose prediction depends upon the hidden state representations generated by the transformer layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy classifiers",
"sec_num": "3.2"
},
{
"text": "Reposition classifier: The reposition classifier gives a categorical distribution over the index of the input, where the input can be the representation of a sentence or a word. The input sequence is then repositioned accordingly. Along with reordering, this classifier can also perform deletion by predicting special delete token. This classifier is implemented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy classifiers",
"sec_num": "3.2"
},
{
"text": "\u03c0 r ep \u03b8 (r |s i , d) = softmax(h i \u2022 [b, e 1 , ..., e n ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy classifiers",
"sec_num": "3.2"
},
{
"text": "for i \u2208 {1..n} where e can be the embedding of a sentence or token and b \u2208 R d model is a special token to predict deletion. Note that in case of low level program, we also condition on the complete document. This is done by having cross-attention on the hidden representation of the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy classifiers",
"sec_num": "3.2"
},
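{
"text": "A small PyTorch sketch, illustrative only, of the reposition classifier: the score of moving element i to each target index is the dot product of h_i with the candidate embeddings, prefixed by a learned deletion embedding b.\nimport torch\nimport torch.nn as nn\n\nclass RepositionClassifier(nn.Module):\n    def __init__(self, d_model=512):\n        super().__init__()\n        # b: special embedding whose index corresponds to deletion\n        self.delete_emb = nn.Parameter(torch.randn(1, d_model))\n\n    def forward(self, h, e):\n        # h: (n, d_model) hidden states of the n elements to reposition\n        # e: (n, d_model) embeddings of the candidate targets (sentences or tokens)\n        candidates = torch.cat([self.delete_emb, e], dim=0)  # [b, e_1, ..., e_n]\n        logits = h @ candidates.t()                          # (n, n + 1)\n        return logits.log_softmax(dim=-1)                    # pi_rep(r | s_i, d)\n\nclf = RepositionClassifier(d_model=8)\nh = torch.randn(5, 8)\nprint(clf(h, h).shape)  # torch.Size([5, 6])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy classifiers",
"sec_num": "3.2"
},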
{
"text": "The high level insert classifer scans over the consecutive sentences and make a binary decision to insert or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "\u03c0 i ns \u03b8 (p|s i , d) = softmax([h i ; h i +1 ] \u2022 A)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "for i \u2208 {1..n} and A \u2208 R 2\u00d7d model is a parameter to be learned. The low level insert classifier is made up of placeholder insertion followed by token insertion. The placeholder classifier predicts the number of tokens to be inserted at every consecutive position pairs, by casting the representation to a categorical distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "\u03c0 i ns \u03b8 (p|w i , s, d) = softmax([h i , h i +1 ] \u2022 B)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "for i \u2208 {1..n} and B \u2208 R (k max +1)\u00d7(2d model ) is a parameter to be learned. Following , k max is 255. Token classifier then fill the placeholders",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "\u03c0 t ok \u03b8 (t |w i , s, d) = softmax(h i \u2022 C)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "for i \u2208 {1..n} where w i is a placeholder and C \u2208 R |V |\u00d7d model is a parameter to be learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
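{
"text": "An illustrative PyTorch sketch of the low level insertion step: a placeholder classifier predicts how many <mask> tokens to insert between consecutive positions (the role of B), and a token classifier fills each <mask> (the role of C); the class and method names are ours, not the authors'.\nimport torch\nimport torch.nn as nn\n\nclass LowLevelInsertion(nn.Module):\n    def __init__(self, vocab_size, d_model=512, k_max=255):\n        super().__init__()\n        self.placeholder = nn.Linear(2 * d_model, k_max + 1, bias=False)  # B\n        self.token = nn.Linear(d_model, vocab_size, bias=False)           # C\n\n    def num_placeholders(self, h):\n        # h: (n, d_model); pair consecutive hidden states [h_i ; h_{i+1}]\n        pairs = torch.cat([h[:-1], h[1:]], dim=-1)     # (n - 1, 2 d_model)\n        return self.placeholder(pairs).argmax(dim=-1)  # number of <mask> per slot\n\n    def fill_tokens(self, h_mask):\n        # h_mask: (m, d_model) hidden states at the <mask> positions\n        return self.token(h_mask).argmax(dim=-1)       # predicted token ids\n\nmodel = LowLevelInsertion(vocab_size=1000, d_model=8, k_max=4)\nh = torch.randn(6, 8)\nprint(model.num_placeholders(h).shape, model.fill_tokens(h).shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},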
{
"text": "Update classifier: The update classifier is only present in the high level program. It scans over the sentences and make a binary decision to update a given sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "\u03c0 upd \u03b8 (u|s i , d) = softmax(h i \u2022 D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "for i \u2208 {1..n} and D \u2208 R 2\u00d7d model is a parameter to be learned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "(a) Transformer blocks extract the sentence representations which are used by high level policy classifiers. Suppose that the update policy predicts to refine sentence 1 and 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "(b) The input to the low level transformer is the concatenated sentences identified by the high level update policy. Figure 1 : The illustration of the proposed model for the update iteration. The same architecture can be applied for different tasks with specific classifiers. We have omitted attention from transformer blocks for simplicity. p stands for position embedding wheras s is for segment embedding",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 125,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Insertion classifier:",
"sec_num": null
},
{
"text": "There is no large-scale labeled training dataset for document-level rewriting. Accordingly we train on synthetic dataset. To generate artificial broken text, we apply transformation techniques both at the sentence and word level and then learn to reverse the transformation to recover the original document. The techniques we use at the sentence level include: i) sentences reordering where sentences are randomly shuffled and/or deleted; ii) sentence insertion that a totally independent sentence is inserted into the source. iii) sentence update the sentence is slightly modified. For the lower-level transformation, we apply: i) word insertion that we insert a random word from another pre-defined vocabulary into the source. ii) shuffle and delete that we shuffle and delete some words. Each transformation is applied with a uniform probability between 0 and 1 leads to different trajectories of noise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise",
"sec_num": "3.3"
},
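{
"text": "A simplified sketch, in plain Python and not the exact noising pipeline, of the sentence level corruptions; the distractor sentences are placeholders.\nimport random\n\ndef corrupt_document(sentences, distractors=('An unrelated sentence.',)):\n    # Each transformation fires with a probability drawn uniformly from [0, 1].\n    noisy = list(sentences)\n    # i) sentence reordering: randomly shuffle and possibly delete sentences\n    if random.random() < random.random():\n        random.shuffle(noisy)\n        if len(noisy) > 1 and random.random() < 0.5:\n            noisy.pop(random.randrange(len(noisy)))\n    # ii) sentence insertion: splice in an unrelated sentence\n    if random.random() < random.random():\n        noisy.insert(random.randrange(len(noisy) + 1), random.choice(distractors))\n    # iii) sentence update: lightly modify one sentence at the word level\n    if noisy and random.random() < random.random():\n        idx = random.randrange(len(noisy))\n        words = noisy[idx].split()\n        random.shuffle(words)  # word level shuffle as a simple corruption\n        noisy[idx] = ' '.join(words)\n    return noisy\n\nprint(corrupt_document(['First sentence.', 'Second sentence.', 'Third sentence.']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noise",
"sec_num": "3.3"
},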
{
"text": "Expert policy actions a * are created by reversing the noise in the data. This is done by keeping track of the noise actions that have been used to create a corrupted output. In order to get alignment among sentences, we create a bipartite graph where the nodes are the sentences and the edge weight is the Levenstein distance between those sentences. We use max-flow min-cut algorithm to get the align-ment (Dantzig and Fulkerson, 2003) .",
"cite_spans": [
{
"start": 408,
"end": 437,
"text": "(Dantzig and Fulkerson, 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle",
"sec_num": "3.4"
},
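{
"text": "An illustrative alignment sketch: here SciPy's Hungarian solver (linear_sum_assignment) is substituted for the max-flow min-cut formulation used above, and a string similarity ratio stands in for the Levenshtein edge weights.\nimport difflib\nimport numpy as np\nfrom scipy.optimize import linear_sum_assignment\n\ndef align_sentences(noisy_sentences, original_sentences):\n    # Bipartite cost matrix: one minus a similarity ratio between sentences.\n    cost = np.array([[1.0 - difflib.SequenceMatcher(None, a, b).ratio()\n                      for b in original_sentences]\n                     for a in noisy_sentences])\n    # Minimum cost assignment as a simple stand-in for max-flow min-cut.\n    rows, cols = linear_sum_assignment(cost)\n    return list(zip(rows.tolist(), cols.tolist()))\n\nprint(align_sentences(['b sentence', 'a sentence'], ['a sentence', 'b sentence']))\n# [(0, 1), (1, 0)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle",
"sec_num": "3.4"
},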
{
"text": "Training is done by imitating the expert policy. We design roll-in policy such that each classifier is trained on the output of the other classifier. This reduces exposure bias as the model is trained on conditions it will encounter at decoding. The algorithm for training is shown in algorithm 3. The objective function is the product of decisions made during the generation process. It is the loses incurred by both the high level and low level program and is shown on line 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "3.5"
},
{
"text": "Data sets. We conduct experiments on synthetically generated dataset consisting of sorted and unsorted sequence pairs. Each sequence contains 5 -10 and each line has between 20 to 100 tokens. The document is sorted in numerical order with tens coming before hundreds. The numbers lie between 1 and 1000. We generated 300K such pairs for training consisting of unsorted sequence as input and sorted sequence as output. We further use real world datasets including ROC stories (Mostafazadeh et al., 2016) , consisting of multiple 5 lines stories to check the capabilities of our model. We also conducted ex-periemnts on Multi-news and DUC-2004 for multidocument summarization (MDS), which is a subtask of summarization tasks. Multi-news (Lebanoff et al., 2018) is a large-scale dataset for MDS and DUC-2004 (Over and Yen, 2004 ) is a benchmark dataset in MDS and its source documents are truncated to 1,500 tokens. To generate our input and output pairs, we inserted noise in the output sequences as outlined in section 3.3.",
"cite_spans": [
{
"start": 475,
"end": 502,
"text": "(Mostafazadeh et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 735,
"end": 758,
"text": "(Lebanoff et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 796,
"end": 814,
"text": "DUC-2004 (Over and",
"ref_id": null
},
{
"start": 815,
"end": 824,
"text": "Yen, 2004",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
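{
"text": "A sketch of one plausible construction of the sorted/unsorted synthetic pairs described above (the exact chunking into lines is an assumption):\nimport random\n\ndef make_pair():\n    # 5-10 lines, each with 20-100 numbers drawn from 1..1000.\n    line_lengths = [random.randint(20, 100) for _ in range(random.randint(5, 10))]\n    numbers = [random.randint(1, 1000) for _ in range(sum(line_lengths))]\n    ordered = sorted(numbers)  # numeric order, so tens come before hundreds\n    def to_lines(values):\n        lines, start = [], 0\n        for length in line_lengths:\n            lines.append(' '.join(str(v) for v in values[start:start + length]))\n            start += length\n        return lines\n    return to_lines(numbers), to_lines(ordered)\n\nsource, target = make_pair()\nprint(len(source), source[0][:40], '->', target[0][:40])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},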
{
"text": "Evaluation Metrics. Rouge (Hovy et al., 2006) , an automatic evaluation metric, is commonly used in Summarization to evaluate the quality of summaries. We use Rouge-l, Rouge-2 and Rouge-L to measure unigram-overlap, bigram-overlap, and the longtest common sequence between system and actual summaries. Synthetic and ROC stories are evaluated with BLEU score (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 26,
"end": 45,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 358,
"end": 381,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Baselines. We compare three models: i) Copy: the original text is copied without any change, which establishes the lower bound for the task. ii) Transformer: a vanilla Transformer (Vaswani et al., 2017) is used to generate a sequence of text by reconstructing the source text. Without explicit editing guidance, we have little control over its generation process. iii) Levenshtein Transformer (LevT): LevT is a semi autoregressive model for parallel sentence-level sequence generation . It refines a given sequence in an iterative manner with three operations, including deletion, placeholder prediction and token prediction. The iteration terminates when a certain stopping criterion is met. iv) Editor transformer: It is similar to the LevT, with the exception that it introduce a reposition operator instead of the deletion operator (Xu and Carpuat, 2020).",
"cite_spans": [
{
"start": 180,
"end": 202,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "Implementation Details. To train the our models, we follow most of the hyper-parameter settings in . The only differences are that we use 3 Nvidia V100 GPUs and adopt fastbpe (?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "The main results for summarization are shown in table 1. The best result is obtained by copy across both dataset indicating that post editing of long sequences may hurt its quality. Copy consist of output from SummPip system (Zhao et al., 2020a) . SummPip uses graph clustering to find relevant sentences which are then used to generate the summary. Among other models, the Vanilla transformer performed better showing a strong bias present in the languages for autoregressive monotone generation. Levenshtein and the Editor transformer performed comparably whereas as our model showed no improvement over the baselines. We see similar performance in Synthetic and ROC-stories dataset in table 2 with Vanilla transformer performing better then the other models.",
"cite_spans": [
{
"start": 225,
"end": 245,
"text": "(Zhao et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Multi-News DUC-2004 R-1 R-2 R-L R-1 R-2 R-L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We outlines various ways to improve the results of our model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
{
"text": "Evaluation metrics sensitivity towards document level ordering: We measure the sensitivity of our evaluation metrics towards capturing sentence reordering. We permuted sentences in a document and measure the metric's mean and standard deviation. The results in table 2 shows the inadequacy of using these metrics(BLEU, ROGUE) towards document level phenomenons. This suggest a training approach where a low level program is initially trained separately and then kept frozen while the high level program is trained. Table 3 : Sensitivity of metrics towards capturing sentence reordering. We synthetic and ROC stories we report the BLEU score. For Multi-news and DOC-2004 we report the R1 score. Mean and standard deviation is measured over 10 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 515,
"end": 522,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},
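{
"text": "A sketch of the sensitivity check reported in Table 3; metric_fn is any BLEU or Rouge implementation of the reader's choice, and the toy unigram overlap below is only a stand-in.\nimport random\nimport statistics\n\ndef permutation_sensitivity(documents, metric_fn, runs=10):\n    # For each run, shuffle the sentences of every document and score the\n    # shuffled text against the original; report mean and standard deviation.\n    scores = []\n    for _ in range(runs):\n        run_scores = []\n        for doc in documents:\n            sentences = doc.split(' . ')\n            shuffled = sentences[:]\n            random.shuffle(shuffled)\n            run_scores.append(metric_fn(' . '.join(shuffled), doc))\n        scores.append(sum(run_scores) / len(run_scores))\n    return statistics.mean(scores), statistics.stdev(scores)\n\ndef unigram_overlap(hyp, ref):\n    # toy stand-in for BLEU-1 / Rouge-1\n    return len(set(hyp.split()) & set(ref.split())) / len(set(ref.split()))\n\nprint(permutation_sensitivity(['a b . c d . e f'], unigram_overlap))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.3"
},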
{
"text": "Dataset: Semi/non-autoregressive models struggle to achieve quality similar to autoregressive models. As the dependencies are broken, it become difficult for the model to generalise across multimodal dataset. The situation is further aggravated when the sequences are long. Distilled dataset has been found useful in dealing with multomodality problem in non-autoregressive modals (Zhou et al., 2019) . Instead of using the actual output, the outputs generated from an autoregressive teacher modal are used with the input sequence. It is not directly clear as to how we can use distilled data in our model. One way is to insert the noise in distilled dataset to get input sequences. Another way is to use curriculum learning (Bengio et al., 2009) , starting with distilled dataset and then moving to harder actual examples.",
"cite_spans": [
{
"start": 381,
"end": 400,
"text": "(Zhou et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 725,
"end": 746,
"text": "(Bengio et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distilled",
"sec_num": null
},
{
"text": "Better Training: Pre-training and fine-tuning approach has been found useful in various tasks. Our model consist of various components including classifiers at two levels. These classifiers can be individually pre-trained. Once the pre-training step is done, the whole model can be fine tuned for better model generalisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilled",
"sec_num": null
},
{
"text": "Use of Autoregressive model: The low level program is responsible for word generation. Due to the inherent left to right generation bias, autoregressive models have shown better results in our experiments. We can take advantage of this bias by using autoregressive model as a low level program but this can lead to longer decoding times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilled",
"sec_num": null
},
{
"text": "Attention Mechanism: Wider context have been shown to improve results for various document level task (Kim et al., 2019) . Designing an attention mechanism such that more attention is given to the sentences around the given sentence than those far away in the document can improve results. This can be done by having more attention heads for the near context then the far away context.",
"cite_spans": [
{
"start": 102,
"end": 120,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distilled",
"sec_num": null
},
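{
"text": "One way to realise this, sketched here with a standard additive attention mask in PyTorch (the window sizes and the six/two head split are illustrative assumptions), is to give some heads a narrow local window and others an unrestricted view:\nimport torch\n\ndef local_attention_mask(seq_len, window):\n    # Additive mask: 0 within +/- window of the query position, -inf outside,\n    # so a head using this mask only attends to the near context.\n    idx = torch.arange(seq_len)\n    allowed = (idx[None, :] - idx[:, None]).abs() <= window\n    mask = torch.full((seq_len, seq_len), float('-inf'))\n    mask[allowed] = 0.0\n    return mask\n\n# Example: six heads see only the near context, two heads see everything.\nseq_len = 12\nmasks = torch.stack([local_attention_mask(seq_len, 2)] * 6\n                    + [torch.zeros(seq_len, seq_len)] * 2)\nprint(masks.shape)  # torch.Size([8, 12, 12])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distilled",
"sec_num": null
},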
{
"text": "Previous work on long text generation has mostly focused on generating tokens up to three hundred words. These method usually employ the idea of planning a document before generating it (Shen et al., 2019; Zhao et al., 2020b; Rashkin et al., 2020) . Another line of work, focus on extending transformer architecture to model long sequences (Wang et al., 2020; Choromanski et al., 2020) . Recent work by (Tan et al., 2020) used pre-train language models to progressively generate longer text greater than 300 tokens. Our work differs from previous approaches as it allows editing the generated text while it is being written. Previous work on non-monotonic generation and refinement (Welleck et al., 2019; Stern et al., 2019; Lee et al., 2018) has mostly focused on generating shorter text. Our proposed approach, differs from prior works by extending non-monotonic generation towards longer texts.",
"cite_spans": [
{
"start": 186,
"end": 205,
"text": "(Shen et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 206,
"end": 225,
"text": "Zhao et al., 2020b;",
"ref_id": "BIBREF31"
},
{
"start": 226,
"end": 247,
"text": "Rashkin et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 340,
"end": 359,
"text": "(Wang et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 360,
"end": 385,
"text": "Choromanski et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 682,
"end": 704,
"text": "(Welleck et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 705,
"end": 724,
"text": "Stern et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 725,
"end": 742,
"text": "Lee et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We present a hierarchical document generation model, that is capable of revising and editing its generated text thus bringing it closer to humanlevel intelligence. Although results showed that our approach lags behind the baselines, it did shed light into various problems present in semiautoregressive models and long document generation. In the future, we will be incorporating these insights into our model to make it more robust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "A.1 Generation Algorithm. Algorithm 1: Generation in HMDP. Require: initial document d_0, policy \u03c0_{\u03b8_H}. 1: d \u2190 d_0. 2: while the termination condition is not met do. 3: rep_index \u2190 arg max_r \u2211_{s_i \u2208 d} log \u03c0_{\u03b8_H}^rep(r_i | s_i, d) (do reposition). 4: d \u2190 E(d, rep_index). 5: ins_index \u2190 arg max_p \u2211_{s_i, s_{i+1} \u2208 d} log \u03c0_{\u03b8_H}^ins(p_i | s_i, s_{i+1}, d) (do insertion). 6: d \u2190 E(d, ins_index) (call to the low level MDP). 7: upd_index \u2190 arg max_u \u2211_{s_i \u2208 d} log \u03c0_{\u03b8_H}^upd(u_i | s_i, d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Algorithm 2 (low level generation), fragment: rep_index \u2190 arg max_r \u2211_{w_i \u2208 s} log \u03c0_{\u03b8_L}^rep(r_i | w_i, s, d) (do reposition). 7: d \u2190 E(s,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "Algorithm 3 (training), fragment: repL1*, insL1*, tokL1*, repL2*, insL2*, tokL2* \u2190 \u03c0_L^*(d, d*). 5: L_{\u03b8_H}^rep \u2190 \u2212\u2211_{s_i \u2208 d} log \u03c0_{\u03b8_H}^rep(repH*_i | s_i, d). 6: d \u2190 applyAction(d, repH*). 7: L_{\u03b8_H}^ins \u2190 \u2212\u2211_{s_i, s_{i+1} \u2208 d} log \u03c0_{\u03b8_H}^ins(insH*_i | s_i, s_{i+1}, d). 8: s \u2190 buildFrame(insH*, d). 9: L_{\u03b8_L}^rep1 \u2190 \u2212\u2211_{w_i \u2208 s} log \u03c0_{\u03b8_L}^rep(repL1*_i | w_i, s, d) (low level). 10: s \u2190 applyAction(s, repL1*). 11: L_{\u03b8_L}^ins1 \u2190 \u2212\u2211_{w_i, w_{i+1} \u2208 s} log \u03c0_{\u03b8_L}^ins(insL1*_i | w_i, w_{i+1}, s, d). 12: s \u2190 applyAction(s, insL1*). 13: L_{\u03b8_L}^tok1 \u2190 \u2212\u2211_{w_i \u2208 s, w_i = <mask>} log \u03c0_{\u03b8_L}^tok(tokL1*_i | w_i, s, d). 18: s \u2190 applyAction(s, repL2*). 19: L_{\u03b8_L}^ins2 \u2190 \u2212\u2211_{w_i, w_{i+1} \u2208 s} log \u03c0_{\u03b8_L}^ins(insL2*_i | w_i, w_{i+1}, s, d). 20: s \u2190 applyAction(s, insL2*). 21: L_{\u03b8_L}^tok2 \u2190 \u2212\u2211_{w_i \u2208 s, w_i = <mask>} log \u03c0_{\u03b8_L}^tok(tokL2*_i | w_i, s, d). 22: \u03b8 \u2190 \u03b8 \u2212 \u03bb\u2207[L_{\u03b8_H}^rep + L_{\u03b8_H}^ins + L_{\u03b8_H}^upd + L_{\u03b8_L}^rep1 + L_{\u03b8_L}^ins1 + L_{\u03b8_L}^tok1 + L_{\u03b8_L}^rep2 + L_{\u03b8_L}^ins2 + L_{\u03b8_L}^tok2]. 23: end while",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Curriculum learning",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "J\u00e9r\u00f4me",
"middle": [],
"last": "Louradour",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 26th annual international conference on machine learning",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, J\u00e9r\u00f4me Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international confer- ence on machine learning, pages 41-48.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discourse-aware neural rewards for coherent text generation",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bosselut",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.03766"
]
},
"num": null,
"urls": [],
"raw_text": "Antoine Bosselut, Asli Celikyilmaz, Xiaodong He, Jianfeng Gao, Po-Sen Huang, and Yejin Choi. 2018. Discourse-aware neural rewards for coherent text generation. arXiv preprint arXiv:1805.03766.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Rethinking attention with performers",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Choromanski",
"suffix": ""
},
{
"first": "Valerii",
"middle": [],
"last": "Likhosherstov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Dohan",
"suffix": ""
},
{
"first": "Xingyou",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Gane",
"suffix": ""
},
{
"first": "Tamas",
"middle": [],
"last": "Sarlos",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Hawkins",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Afroz",
"middle": [],
"last": "Mohiuddin",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.14794"
]
},
"num": null,
"urls": [],
"raw_text": "Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sar- los, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020. Rethinking attention with performers. arXiv preprint arXiv:2009.14794.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the max flow min cut theorem of networks",
"authors": [
{
"first": "G",
"middle": [],
"last": "Dantzig",
"suffix": ""
},
{
"first": "Delbert",
"middle": [
"Ray"
],
"last": "Fulkerson",
"suffix": ""
}
],
"year": 2003,
"venue": "Linear inequalities and related systems",
"volume": "38",
"issue": "",
"pages": "225--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G Dantzig and Delbert Ray Fulkerson. 2003. On the max flow min cut theorem of networks. Linear in- equalities and related systems, 38:225-231.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.04833"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Markov decision processes. Markov Decision Processes in Artificial Intelligence",
"authors": [
{
"first": "Fr\u00e9d\u00e9rick",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Emmanuel",
"middle": [],
"last": "Rachelson",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9d\u00e9rick Garcia and Emmanuel Rachelson. 2013. Markov decision processes. Markov Decision Pro- cesses in Artificial Intelligence, pages 1-38.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02281"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor OK Li, and Richard Socher. 2017. Non- autoregressive neural machine translation. arXiv preprint arXiv:1711.02281.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fully nonautoregressive neural machine translation: Tricks of the trade",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Kong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.15833"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu and Xiang Kong. 2020. Fully non- autoregressive neural machine translation: Tricks of the trade. arXiv preprint arXiv:2012.15833.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automated summarization evaluation with basic elements",
"authors": [
{
"first": "H",
"middle": [],
"last": "Eduard",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fukumoto",
"suffix": ""
}
],
"year": 2006,
"venue": "LREC",
"volume": "6",
"issue": "",
"pages": "604--611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard H Hovy, Chin-Yew Lin, Liang Zhou, and Ju- nichi Fukumoto. 2006. Automated summarization evaluation with basic elements. In LREC, volume 6, pages 604-611. Citeseer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Imitation learning: A survey of learning methods",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Hussein",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"Medhat"
],
"last": "Gaber",
"suffix": ""
},
{
"first": "Eyad",
"middle": [],
"last": "Elyan",
"suffix": ""
},
{
"first": "Chrisina",
"middle": [],
"last": "Jayne",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "2",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Hussein, Mohamed Medhat Gaber, Eyad Elyan, and Chrisina Jayne. 2017. Imitation learning: A sur- vey of learning methods. ACM Computing Surveys (CSUR), 50(2):1-35.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Non-autoregressive machine translation with disentangled context transformer",
"authors": [
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5144--5155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine trans- lation with disentangled context transformer. In In- ternational Conference on Machine Learning, pages 5144-5155. PMLR.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "When and why is document-level context useful in neural machine translation?",
"authors": [
{
"first": "Yunsu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Thanh",
"middle": [],
"last": "Duc",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.00294"
]
},
"num": null,
"urls": [],
"raw_text": "Yunsu Kim, Duc Thanh Tran, and Hermann Ney. 2019. When and why is document-level context useful in neural machine translation? arXiv preprint arXiv:1910.00294.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adapting the neural encoder-decoder framework from single to multi-document summarization",
"authors": [
{
"first": "Logan",
"middle": [],
"last": "Lebanoff",
"suffix": ""
},
{
"first": "Kaiqiang",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06218"
]
},
"num": null,
"urls": [],
"raw_text": "Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. arXiv preprint arXiv:1808.06218.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.06901"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. arXiv preprint arXiv:1802.06901.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to actively learn neural machine translation",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wray",
"middle": [],
"last": "Buntine",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "334--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Proceedings of the 22nd Confer- ence on Computational Natural Language Learning, pages 334-344.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A corpus and evaluation framework for deeper understanding of commonsense stories",
"authors": [
{
"first": "Nasrin",
"middle": [],
"last": "Mostafazadeh",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
},
{
"first": "Pushmeet",
"middle": [],
"last": "Kohli",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1604.01696"
]
},
"num": null,
"urls": [],
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and evaluation framework for deeper under- standing of commonsense stories. arXiv preprint arXiv:1604.01696.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An introduction to duc-2004",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Yen",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Over and James Yen. 2004. An introduction to duc-2004. National Institute of Standards and Tech- nology.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Plotmachines: Outlineconditioned generation with dynamic plot state tracking",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "Hannah Rashkin",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.14967"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Asli Celikyilmaz, Yejin Choi, and Jianfeng Gao. 2020. Plotmachines: Outline- conditioned generation with dynamic plot state tracking. arXiv preprint arXiv:2004.14967.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Towards generating long and coherent text with multi-level latent variable models",
"authors": [
{
"first": "Dinghan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Liqun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Carin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.00154"
]
},
"num": null,
"urls": [],
"raw_text": "Dinghan Shen, Asli Celikyilmaz, Yizhe Zhang, Liqun Chen, Xin Wang, Jianfeng Gao, and Lawrence Carin. 2019. Towards generating long and coherent text with multi-level latent variable models. arXiv preprint arXiv:1902.00154.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Insertion transformer: Flexible sequence generation via insertion operations",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5976--5985",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible se- quence generation via insertion operations. In In- ternational Conference on Machine Learning, pages 5976-5985. PMLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Progressive generation of long text with pretrained language models",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Bowen Tan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ai-",
"middle": [],
"last": "Maruan",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Shedivat",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.15720"
]
},
"num": null,
"urls": [],
"raw_text": "Bowen Tan, Zichao Yang, Maruan AI-Shedivat, Eric P Xing, and Zhiting Hu. 2020. Progressive generation of long text with pretrained language models. arXiv preprint arXiv:2006.15720.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Linformer: Selfattention with linear complexity",
"authors": [
{
"first": "Sinong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Belinda",
"middle": [
"Z"
],
"last": "Li",
"suffix": ""
},
{
"first": "Madian",
"middle": [],
"last": "Khabsa",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.04768"
]
},
"num": null,
"urls": [],
"raw_text": "Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma. 2020. Linformer: Self- attention with linear complexity. arXiv preprint arXiv:2006.04768.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Non-autoregressive machine translation with auxiliary regularization",
"authors": [
{
"first": "Yiren",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "5377--5384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In Proceedings of the AAAI Conference on Artificial In- telligence, pages 5377-5384.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Non-monotonic sequential text generation",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Kiant\u00e9",
"middle": [],
"last": "Brantley",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.02192"
]
},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Kiant\u00e9 Brantley, Hal Daum\u00e9 III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. arXiv preprint arXiv:1902.02192.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Editor: an editbased transformer with repositioning for neural machine translation with soft lexical constraints",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2011.06868"
]
},
"num": null,
"urls": [],
"raw_text": "Weijia Xu and Marine Carpuat. 2020. Editor: an edit- based transformer with repositioning for neural ma- chine translation with soft lexical constraints. arXiv preprint arXiv:2011.06868.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Summpip: Unsupervised multidocument summarization with sentence graph compression",
"authors": [
{
"first": "Jinming",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Longxiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Lan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "He",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1949--1952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinming Zhao, Ming Liu, Longxiang Gao, Yuan Jin, Lan Du, He Zhao, He Zhang, and Gholamreza Haffari. 2020a. Summpip: Unsupervised multi- document summarization with sentence graph com- pression. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1949-1952.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Graph-based multi-hop reasoning for long text generation",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yichang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.13282"
]
},
"num": null,
"urls": [],
"raw_text": "Liang Zhao, Jingjing Xu, Junyang Lin, Yichang Zhang, Hongxia Yang, and Xu Sun. 2020b. Graph-based multi-hop reasoning for long text generation. arXiv preprint arXiv:2009.13282.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Understanding knowledge distillation in nonautoregressive machine translation",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02727"
]
},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Graham Neubig, and Jiatao Gu. 2019. Understanding knowledge distillation in non- autoregressive machine translation. arXiv preprint arXiv:1911.02727.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Experiment Results on Multi-News and</td></tr><tr><td>DUC2004 dataset</td><td/><td/></tr><tr><td/><td colspan=\"2\">Synthetic ROC-Stories</td></tr><tr><td>Copy</td><td>23.59</td><td>28.82</td></tr><tr><td>Transformer</td><td>30.17</td><td>35.72</td></tr><tr><td>LevT</td><td>22.42</td><td>25.29</td></tr><tr><td>Editor</td><td>22.78</td><td>25.89</td></tr><tr><td>Ours</td><td>20.63</td><td>23.10</td></tr></table>",
"text": "",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"text": "Experiment Results on Synthetic and ROCstories dataset. We report the BLEU score in the table.",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Algorithm 3 2: (d, d * ) \u223c T</td><td/><td/><td>Sample a training pair</td></tr><tr><td>3:</td><td colspan=\"4\">repH Get oracle actions</td></tr><tr><td>4:</td><td/><td/><td/></tr><tr><td/><td>rep_index)</td><td/><td/></tr><tr><td>8:</td><td>end if</td><td/><td/></tr><tr><td>9: 10:</td><td>plh_index \u2190 arg max p w i ,w i +1 \u2208s log \u03c0 i ns \u03b8 L s \u2190 E (s, plh_index)</td><td colspan=\"2\">(p i |w i , w i +1 , s, d)</td><td>Insert placeholders</td></tr><tr><td>11: 12:</td><td colspan=\"2\">tok_index \u2190 arg max t w i \u2208s,w i ==&lt;mask&gt; log \u03c0 t ok \u03b8 L s \u2190 E (s, tok_index)</td><td>(t i |w i , s, d)</td><td>Fill placeholders</td></tr><tr><td colspan=\"2\">13: end while</td><td/><td/></tr><tr><td colspan=\"2\">14: d \u2190 documentUpdate(d, s)</td><td/><td/></tr><tr><td colspan=\"2\">A.2 Training Algorithm</td><td/><td/></tr></table>",
"text": "Training for Hierarchical Levenshtein Transformer Require: Training data T , Model policy: \u03c0 \u03b8 , Expert policy: \u03c0 * 1: while Maximum training steps reached do",
"num": null,
"html": null
}
}
}
}