ACL-OCL / Base_JSON /prefixT /json /tacl /2020.tacl-1.18.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:54:49.655979Z"
},
"title": "Leveraging Pre-trained Checkpoints for Sequence Generation Tasks",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing. By warm-starting from the publicly released checkpoints, NLP practitioners have pushed the state-of-the-art on multiple benchmarks while saving significant amounts of compute time. So far the focus has been mainly on the Natural Language Understanding tasks. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We developed a Transformer-based sequence-to-sequence model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints and conducted an extensive empirical study on the utility of initializing our model, both encoder and decoder, with these checkpoints. Our models result in new state-of-the-art results on Machine",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised and self-supervised pre-training methods, such as ELMo (Peters et al., 2018) , ULMFiT (Howard and Ruder, 2018) , and more recently BERT (Devlin et al., 2019) , GPT and GPT-2 (Radford et al., 2018 (Radford et al., , 2019 , XLNet (Yang et al., 2019) , and RoBERTa have established a qualitatively new level of baseline performance for many widely used Natural Language Understanding (NLU) benchmarks including some of the most popular, like GLUE (Williams et al., 2018) and SQuAD (Rajpurkar et al., 2018) .",
"cite_spans": [
{
"start": 68,
"end": 89,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 99,
"end": 123,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 149,
"end": 170,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 187,
"end": 208,
"text": "(Radford et al., 2018",
"ref_id": "BIBREF43"
},
{
"start": 209,
"end": 232,
"text": "(Radford et al., , 2019",
"ref_id": "BIBREF44"
},
{
"start": 241,
"end": 260,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 457,
"end": 480,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF57"
},
{
"start": 491,
"end": 515,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most appealing part about this massive shift towards using large architectures pre-trained on large collections of texts is that the pretrained checkpoints along with the inference code are made freely available. This saves hundreds of TPU/GPU hours, as warm-starting a model from a pre-trained checkpoint typically requires orders of magnitude fewer fine-tuning steps while delivering significant performance boosts. More importantly, the ability to bootstrap from a state-of-the-art performing model such as BERT (Devlin et al., 2019) motivates the community to greatly speed up the progress towards developing better and easily reusable NLU systems.",
"cite_spans": [
{
"start": 519,
"end": 540,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While we continue to observe an increasing number of papers building on top of BERT and/or GPT models reporting encouraging improvements on GLUE, SQuAD, and other similar benchmarks, very little attention has been paid to using these pre-trained models to warm-start sequence-tosequence (seq2seq) models. It has been argued that the pre-training objective used by BERT is not well-suited for tasks that require decoding texts, for example, conditional text generation in machine translation and summarization (Yang et al., 2019) . Nevertheless, it remains unclear to what extent using such large models pre-trained on large collections of text can be beneficial to warm-start seq2seq generation models.",
"cite_spans": [
{
"start": 509,
"end": 528,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we report on a Transformer-based seq2seq model that is compatible with publicly available pre-trained BERT, GPT-2, and RoBERTa checkpoints. We aim to provide an empirical answer to the following research question: What is the best way to leverage publicly available pretrained checkpoints for warm-starting sequence generation models? For example, one could imagine using a BERT checkpoint to initialize the encoder for better input understanding and choosing GPT-2 model as the decoder for better text generation. One of the main contributions of this paper is that we rigorously experiment with a large number of different settings to combine BERT, GPT, and RoBERTa pre-trained checkpoints to initialize our Transformer-based model. We report results on three canonical conditional text generation tasks of increasing complexity: sentence-level fusion (DiscoFuse, Geva et al., 2019) and splitting (WikiSplit, Botha et al., 2018) , WMT14 En\u2194De machine translation using most common eval sets: newstest2014 and newstest2016, and abstractive summarization using three datasets: Gigaword (Napoles et al., 2012) , CNN and DailyMail (Hermann et al., 2015) , and BBC extreme (Narayan et al., 2018a) .",
"cite_spans": [
{
"start": 881,
"end": 899,
"text": "Geva et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 914,
"end": 945,
"text": "(WikiSplit, Botha et al., 2018)",
"ref_id": null
},
{
"start": 1101,
"end": 1123,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF35"
},
{
"start": 1126,
"end": 1166,
"text": "CNN and DailyMail (Hermann et al., 2015)",
"ref_id": null
},
{
"start": 1185,
"end": 1208,
"text": "(Narayan et al., 2018a)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our models report significant improvements over randomly initialized models, demonstrating the benefit of leveraging unsupervised pre-trained models. More importantly, this simple strategy results in new state-of-the-art results on machine translation, text summarization, sentence splitting, and sentence fusion. Our results also demonstrate that a pre-trained encoder is an essential component for sequence generation tasks and often these tasks benefit from sharing the weights between the encoder and the decoder. Overall, we have run over 300 experiments spending thousands of TPU v3 hours to better accommodate the language modeling and understanding capabilities of these pre-trained models for text generation. We believe that NLP researchers and practitioners will derive actionable insights from our findings when tackling various seq2seq tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The code to query our models and predictions on various benchmarks will be available at https:// github.com/google-research/googleresearch/tree/master/bertseq2seq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BERT was primarily developed for encoding text representations for NLU tasks (encoder-only architecture), whereas GPT-2 (Radford et al., 2019) , was primarily developed as a decoder-only architecture for language modeling. Our model uses a seq2seq architecture with encoder and decoder both composed of Transformer layers (Vaswani et al., 2017) . For the encoder, we inherit the BERT Transformer layer implementations (Devlin et al., 2019) , which differs slightly from the canonical Transformer layer (Vaswani et al., 2017) ; BERT uses a GELU activation (Hendrycks and Gimpel, 2016) rather than the standard RELU. If not stated otherwise, the implementation of the decoder layers are also identical to the BERT implementation with two adjustments. First, the self-attention mechanism is masked to look only at the left context. Secondly, we add an encoderdecoder attention mechanism. Note, that if the model was randomly initialized, we found no difference between a BERT compatible decoder and a GPT-2 compatible decoder.",
"cite_spans": [
{
"start": 120,
"end": 142,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 322,
"end": 344,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 418,
"end": 439,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 502,
"end": 524,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 555,
"end": 583,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "Most of the models use the base checkpoint and therefore have 12 layers, a hidden size of 768, filter size of 3,072, and 12 attention heads. We chose the best-performing model and also collect numbers using larger pre-trained checkpoints. These models have 24 layers, a hidden size of 1,024, filter size of 4,096, and 16 attention heads.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "All models were fine-tuned on the target task using Adam with a learning rate of 0.05. We used a linear learning rate warmup with 40k steps, normalization by the square root of the hidden size, and a square root decay. We did not perform any tuning of these hyperparameters (except for \u00a75). The batch size and the number of training steps will be reported for each task individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "BERT Checkpoints. We tokenize our text using the WordPiece (Wu et al., 2016) to match the BERT pre-trained vocabulary. Depending on the experiment, we use one of the following publicly available checkpoints: BERT-Base Cased, BERT-Base Uncased, BERT-Base Multilingual Cased (Devlin et al., 2019) . 1 The first two checkpoints have a vocabulary size of around \u223c30k wordpieces, whereas the multilingual checkpoint has a much larger vocabulary size of \u223c110k. BERT also trains positional embeddings for up to 512 positions, which is the maximum input and output length in all experiments.",
"cite_spans": [
{
"start": 59,
"end": 76,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF58"
},
{
"start": 273,
"end": 294,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 297,
"end": 298,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "GPT-2 Checkpoints. We tokenize our text using the SentencePieces (Kudo and Richardson, 2018) to match the GPT-2 pre-trained vocabulary. 2 Note that, although the available checkpoint is frequently called 117M, which suggests the same number of parameters, we count 125M parameters in the checkpoint. This is the smallest architecture they trained, and the number of layers, hidden size, and filter size are comparable to BERT-Base. The model was trained mainly on English data but does contain some foreign language. The vocabulary size is \u223c50k. While GPT-2 has positional embeddings for up to 1,024 positions, we only use the first 512 to make the results comparable with BERT.",
"cite_spans": [
{
"start": 65,
"end": 92,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "RoBERTa Checkpoints. RoBERTa is trained using PyTorch, but we found that the learned parameters are fully compatible total embed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "init. random As the conceptual differences between BERT and RoBERTa are minor, we might use BERT as a hypernym to address both pretraining methods in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "RND2RND 221M 23M 0 221M BERT2RND 221M 23M 109M 112M RND2BERT 221M 23M 109M 26M BERT2BERT 221M 23M 195M 26M BERTSHARE 136M 23M 109M 26M ROBERTASHARE 152M 39M 125M 26M GPT 125M 39M 125M 0 RND2GPT 238M 39M 125M 114M BERT2GPT 260M 62M 234M 26M ROBERTA2GPT 276M 78M 250M 26M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Pre-trained Checkpoints",
"sec_num": "2"
},
{
"text": "In this section, we describe several combinations of model initialization. The number of total trainable parameters, the number of embedding parameters, and the number of parameters initialized from the checkpoint vs. randomly are shown in Table 1 . RND2RND A Transformer encoder-decoder architecture with all weights initialized randomly.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "BERT2RND A BERT-initialized encoder paired with a randomly initialized decoder. Encoder and decoder share the embedding matrix initialized from a checkpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "RND2BERT A randomly initialized encoder paired with a BERT-initialized decoder. To perform autoregressive decoding, we mask the 3 More specifically: a) the variable names have to be adjusted; b) the weight and bias variables of the attention mechanism have to be splitted into query, key, and values; c) all variables except the embedding matrices have to be transposed. 4 RoBERTa checkpoints are available at https:// github.com/pytorch/fairseq. bidirectional self-attention mechanism of BERT to look only at the left context.",
"cite_spans": [
{
"start": 371,
"end": 372,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "BERT2BERT A BERT-initialized encoder paired with a BERT-initialized decoder. All weights are initialized from a public BERT checkpoint. The only variable that is initialized randomly is the encoder-decoder attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "BERTSHARE Like BERT2BERT, but the parameters between encoder and decoder are shared. This greatly reduces the memory footprint of the model (136M vs. 221M parameters). Additionally, we experimented with a layer-wise attention mechanism (He et al., 2018) , but obtained nearly identical numbers on most tasks.",
"cite_spans": [
{
"start": 236,
"end": 253,
"text": "(He et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "ROBERTASHARE Same as BERTSHARE, but the shared encoder and decoder are initialized with the public RoBERTa checkpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "GPT A decoder-only architecture. We treat the input as a conditioning prefix of a language model. The decoder is warm-started with a public GPT-2 checkpoint. Similarly to BERTSHARE and ROBERTASHARE, the memory footprint of this model is smaller compared to an encoder-decoder setup (125M parameters).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "RND2GPT A randomly initialized encoder paired with a GPT-2-compatible decoder. We warmstart the decoder and the embedding matrix with a public GPT-2 checkpoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "BERT2GPT A BERT-compatible encoder paired with a GPT-2-compatible decoder. We warmstart both sides with the two separate, BERT and GPT-2, public checkpoints. We use the BERT vocabulary for the input and the GPT-2 vocabulary for the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "ROBERTA2GPT Same as BERT2GPT, but we use a public RoBERTa checkpoint to warm-start the encoder. RoBERTa was trained using the GPT-2 vocabulary so we can use it for input and output. Note that although the vocabulary is shared, this model still has two embeddings matrices, one for the input and one for the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "The pre-training objective in the BERT models learns to predict a masked token using the bidirectional representation of the input text (Devlin et al., 2019; . Our decoder, even when initialized with the BERT or RoBERTa checkpoints, always generates the output text in an autoregressive fashion as in Tranformers (Vaswani et al., 2017) and GPT-2 (Radford et al., 2019) .",
"cite_spans": [
{
"start": 136,
"end": 157,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 313,
"end": 335,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 340,
"end": 368,
"text": "GPT-2 (Radford et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Investigated Model Variants",
"sec_num": "3"
},
{
"text": "10% 1% Exact SARI SARI SARI (Geva et al., 2019) We performed the bulk of our experiments on the 12-layer checkpoints of BERT, GPT-2, and RoBERTa, assuming that the findings will also hold for the 24-layer checkpoints. We chose BERTSHARE, ROBERTASHARE and ROBERTA to also report numbers using the 24-layer public pretrained checkpoints. We also experimented with the GPT setup with 24 layers and 345M parameters but as we did not achieve any better results we excluded this from the paper.",
"cite_spans": [
{
"start": 28,
"end": 47,
"text": "(Geva et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "100%",
"sec_num": null
},
{
"text": "Sentence Fusion is the problem of combining multiple sentences into a single coherent sentence. We use the ''balanced Wikipedia'' portion of the DiscoFuse dataset (Geva et al., 2019) for our experiments with 4.5M fusion examples in the training set. The evaluation set has 50k examples. Because of the size of this evaluation set, even small changes are statistically significant. For this reason, we have solely chosen this dataset for additional experiments described at the end of the paper.",
"cite_spans": [
{
"start": 163,
"end": 182,
"text": "(Geva et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Fusion",
"sec_num": "4.1"
},
{
"text": "Training was done for 300k steps with a global batch size of 256. The input and output are padded to a length of 128, which covers 100% of the training, evaluation, and test data. We report SARI",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Fusion",
"sec_num": "4.1"
},
{
"text": "Exact SARI BLEU (Botha et al., 2018) The results can be seen in Table 2 . Previous state-of-the-art results by Geva et al. (2019) used the vanilla transformer model by Vaswani et al. (2017) , with only 7 layers. All models with initialized encoders outperform the baseline by a large margin, with a SARI score of 89.3 compared with 86.9 (BERT2RND vs. RND2RND). To measure the effect on smaller training sets, we randomly subsample the training data down to 10% and 1%, (i.e., 450k and 45k training examples, respectively). First, we notice, that performance comparable to the baseline is achieved even when training on only 10% of the training data (ROBERTASHARE vs. ROBERTASHARE). Secondly, when using only 1% of the training data, setups with fewer randomly initialized parameters (BERT2BERT vs. BERT2RND) perform better. The best performing 12-layer setup is ROBERTA2GPT with a SARI score of 89.9 only outperformed by 24-layer setup of ROBERTASHARE with a SARI score of 90.3.",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Botha et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 111,
"end": 129,
"text": "Geva et al. (2019)",
"ref_id": "BIBREF12"
},
{
"start": 168,
"end": 189,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "WikiSplit",
"sec_num": null
},
{
"text": "The reverse task of sentence fusion is the splitand-rephrase task, which requires rewriting a long sentence into two or more coherent short sentences (Narayan et al., 2017) . We use the WikiSplit dataset (Botha et al., 2018) , which consists of 1M examples of sentence splits extracted from the Wikipedia edit history, and follow the training/test split suggested by the authors. Training was done for 300k steps with a global batch size of 256. The input and output are padded to a length of 128, which covers 100% of the training, evaluation, and test data. As in Botha et al. 2018, we report corpus-level BLEU, 6 the exact match accuracy, and SARI score. Previous state-of-the-art results by Botha et al. (2018) used a bi-directional LSTM with a copy mechanism (Aharoni and Goldberg, 2018) . Analogous to the DiscoFuse task we observe that initializing the encoder improves the model the most ( Table 3 ). The shared encoderdecoder setup of BERTSHARE outperforms all other setups. For the larger models with 24 layers, we observed a small over-fitting after 100k steps (\u02dc25 epochs), and therefore stop the training early. BERTSHARE and ROBERTASHARE perform on par and both outperform their 12-layer counterpart.",
"cite_spans": [
{
"start": 150,
"end": 172,
"text": "(Narayan et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 204,
"end": 224,
"text": "(Botha et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 714,
"text": "Botha et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 764,
"end": 792,
"text": "(Aharoni and Goldberg, 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 898,
"end": 905,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Split and Rephrase",
"sec_num": "4.2"
},
{
"text": "We test our setups on the most common benchmark in machine translation-WMT 2014 English \u2194 German task-using newstest2014 and newstest2016 eval sets. We use the same hyperparameter settings as in the previous experiments. We limit the input and output lengths to 128 tokens each. We used a global batch size of 256 and train for 30 epochs. Decoding was done with beam size of 4 and the default value for the sentence length penalty set to \u03b1 = 0.6. We report uncased BLEU-4 scores. 7 In Table 4 , we first report the baseline scores for the original Transformer model Vaswani et al. (2017) and our Transformer implementation 8 with 6 We use NLTK v3.2.2 with case-sensitive scoring to estimate BLEU scores. 7 We use a script from the Tensorflow Official Transformer implementation https://github.com/tensorflow/ models/ tree/ master/ nlp/ transformer. Note that, differently from the https://github.com/ tensorflow/ tensor2tensor/ blob/ master/ tensor2tensor/utils/get ende bleu.sh used by Vaswani et al. (2017) , this script does not split noun compounds, but we normalize utf-8 quotes to ascii quotes as we noted that our pre-processed training set contains only ascii quotes. 8 We use Transformer layers from the official BERT implementation which have small differences from Vaswani et al. (2017) . the same hyper parameters. In both cases, we use the encoder and decoder with 6 layers and the 32k wordpiece vocabulary extracted from the WMT14 training set. Our implementation obtains slightly higher scores than the original implementation.",
"cite_spans": [
{
"start": 480,
"end": 481,
"text": "7",
"ref_id": null
},
{
"start": 566,
"end": 587,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF54"
},
{
"start": 630,
"end": 631,
"text": "6",
"ref_id": null
},
{
"start": 704,
"end": 705,
"text": "7",
"ref_id": null
},
{
"start": 987,
"end": 1008,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF54"
},
{
"start": 1176,
"end": 1177,
"text": "8",
"ref_id": null
},
{
"start": 1276,
"end": 1297,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 485,
"end": 492,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "The middle section of Table 4 reports the results for various initialization schema using BERT and GPT-2 pre-trained checkpoints. Note that here all models have encoders and decoders with 12 layers. For BERT models, we use the BERT-Base Multilingual Cased checkpoint to initialize the encoder or the decoder or both, as the task involves one non-English language. This checkpoint has been pre-trained on 108 languages using a multilingual Wikipedia dump with a vocabulary of 110k wordpieces. First, we observe that initializing the model with the BERT checkpoint is most beneficial on the encoder side; our observation is in line with Yang et al. (2019) . Furthermore, models initialized with the BERT checkpoint receive a significant boost: BERT2RND compared to the no-initialization RND2RND setup scores higher by +4 points on En\u2192De and +3.6 points on De\u2192En on newstest2014. Contrary to the WikiSplit and DiscoFuse task, sharing the encoder and decoder variables did not give an additional boost. This is most likely because a) model capacity is an important factor in MT and b) encoder and decoder have to deal with different grammar and vocabulary.",
"cite_spans": [
{
"start": 635,
"end": 653,
"text": "Yang et al. (2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 22,
"end": 29,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "GPT-based models (RND2GPT, GPT, and BERT2GPT) do not perform nearly as well, especially when GPT is used as the decoder and the target language is German. This is because the GPT model comes with an English vocabulary and has been pre-trained mainly on English text. Hence, we report the scores for GPT in the En\u2192De setting in gray.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "Customized BERT Checkpoint. For this experiment we did not include RoBERTa, as the public checkpoint is available for English only. Instead, we train our own checkpoint. We also observe that our implementation of the baseline Transformer, as well as RND2RND setup, which uses no initialization, more weakly weaker on newstest2014 compared with the Transformer baselines (with 6 layers and the 32k wordpiece vocabulary) we report in the top section of Table 4 . We conjecture that the differences might be due to the larger 110k wordpiece vocabulary trained to handle 104 languages from Wikipedia dump, newstest2014 newstest2016 En\u2192De De\u2192En En\u2192De De\u2192En (Vaswani et al., 2017) 27.3 ---Transformer (ours)",
"cite_spans": [
{
"start": 652,
"end": 674,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 451,
"end": 458,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "28.1 31.4 33.5 37.9 KERMIT (Chan et al., 2019) 28.7 31.4 -- (Shaw et al., 2018) 29.2 --- (Edunov et al., 2018) In the bottom section, we use the native 32k wordpiece vocabulary extracted from WMT14 train set and a BERT checkpoint pre-trained only on English and German subset of Wikipedia. * Leveraging a large number of additional parallel sentence pairs obtained with back-translation; we include this score as a reference to the highest achieved result on newstest2014. The GPT-2 results for En\u2192De (where the GPT-2 initialized decoder is used to decode targets in De) are grayed out as they are a priori penalizing for GPT-2, which was only pretrained on En texts.",
"cite_spans": [
{
"start": 27,
"end": 46,
"text": "(Chan et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 60,
"end": 79,
"text": "(Shaw et al., 2018)",
"ref_id": "BIBREF51"
},
{
"start": 89,
"end": 110,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "which is suboptimal for WMT14 data and leads to inferior results. To verify this conjecture, we perform the following experiment: We use the 32k wordpiece vocabulary extracted from the WMT14 En \u2194 De training set (same as used in the top section of Table 4 ) and pre-train a BERT model on the English and German subset of the Wikipedia dump in the same way as the multilingual BERT checkpoint was obtained. We initialize our best-performing setups, BERT2RND and BERTSHARE, with this checkpoint (the third block of Table 4 ). This provides a further +0.5 (En \u2194 De) and +0.8 (De \u2194 En) BLEU improve-ments on newstest2014, and, +1.1 and +0.7 on newstest2016, yielding an overall very strong performance on the challenging WMT14 task. Experiments with the larger models (the last block) show further improvements of up to +1.1 BLEU points. Edunov et al. (2018) report better results when they augment the training set with a massive amount of back-translated sentence pairs. To the best of our knowledge, among the approaches that only leverage parallel data from WMT14, our results are state-of-the-art on both newstest2014 and newstest2016.",
"cite_spans": [
{
"start": 834,
"end": 854,
"text": "Edunov et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 248,
"end": 255,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 513,
"end": 520,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "4.3"
},
{
"text": "Document summarization is the task of producing a short version of a document while preserving its salient information content. We evaluate our setups on three different summarization datasets of varying characteristics: Gigaword (Napoles et al., 2012) , CNN and DailyMail (Hermann et al., 2015) , and BBC extreme (Narayan et al., 2018a) . The Gigaword dataset focuses on abstractive sentence summarization with a total of 3.8M sentence-summary training pairs. The other two datasets focus on single-document summarization: The CNN/DailyMail dataset consists of 287k document-summary pairs, whereas the BBC dataset consists of 204k document-summary pairs. The CNN/DailyMail summaries are in the form of bullet-point story highlights and exhibit a high degree of extraction, requiring the models to learn to copy from the source documents. The BBC summaries, on the other hand, are extreme in that the documents are summarized into single-sentence summaries. These summaries demonstrate a high level of abstractiveness, and generating them automatically requires documentlevel inference, abstraction, and paraphrasing.",
"cite_spans": [
{
"start": 230,
"end": 252,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF35"
},
{
"start": 255,
"end": 295,
"text": "CNN and DailyMail (Hermann et al., 2015)",
"ref_id": null
},
{
"start": 314,
"end": 337,
"text": "(Narayan et al., 2018a)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "In all three cases, we did not anonymize entities. We worked on the original cased versions of the CNN/DailyMail and BBC datasets. For Gigaword we used the lowercased version to match the requirements of the publicly available lowercased test set. During training, the input documents were truncated to 512 tokens for the CNN/DailyMail and BBC, and to 128 tokens for Gigaword. Similarly, the length of the summaries was limited to 128 tokens for CNN/DailyMail, 64 for BBC, and 32 for Gigaword. We used a global batch size of 128 document-summary pairs for CNN/ DailyMail and BBC, and 256 documentsummary pairs for Gigaword. We adapted to different number of training steps depending on the training data sizes. Models were trained for 500k, 300k, and 200k steps for the Gigaword, CNN/DailyMail, and BBC summarization datasets respectively. In all cases, we used the standard publicly available test sets; these consists of 1951 instances for Gigaword, 11,490 for CNN/DailyMail, and 11,334 for BBC. We report on the ROUGE F 1 scores (Lin and Hovy, 2003) ; in particular, we report on ROUGE-1 and ROUGE-2 for informativeness and ROUGE-L for fluency in Table 5 . Document Understanding. All BERT encoder-based setups (i.e., BERT2RND, BERTSHARE, ROBERTASHARE, and BERT2BERT) outperform the baseline RND2RND by a large margin. The improvements of the RND2BERT setup, where only the decoder is initialized, are narrow. These results overall validate the significance of document representation in the encoder-decoder framework for summarization. On the BBC extreme summarization in particular, these four models achieve on average +6.85 point improvement in ROUGE-L compared with the RND2RND setup. Our results demonstrate that the models with better document representations are better in generating extreme summaries that require document-level inference and abstraction. For the extractive highlights in the CNN/DailyMail dataset, these models show an improvement of +3.53 ROUGE-L points over the RND2RND baseline. For Gigaword, where the input is a single sentence, the improvements are minimal (average of +1.02 ROUGE-L points). The BERTSHARE setup with shared encoder and decoder parameters achieves better performance than BERT2BERT on all three datasets. The gains are larger on the BBC dataset than on the Gigaword and CNN/DailyMail datasets. This is probably because the BBC summary sentences follow a distribution that is similar to that of the sentences in the document, whereas this is not necessarily the case for the Gigaword headlines and the CNN/ DailyMail bullet-point highlights. ROBERTASHARE performs superior to BERTSHARE on the CNN/ DailyMail and BBC datasets. ROBERTASHARE performs competitively to BERTSHARE on the Gigaword dataset where the task is to summarize sentences.",
"cite_spans": [
{
"start": 1032,
"end": 1052,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 1150,
"end": 1157,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "Summarization with GPT Checkpoints. GPT (decoder-only) performs better than RND2GPT, BERT2GPT or ROBERTA2GPT (encoder-decoder models) by a large margin for generating CNN/ DailyMail extracts, but poorer for generating BBC abstracts. The encoder-decoder architecture where the input document is modeled separately is better equipped for document-level abstraction than the decoder-only architectures where the input document is a conditioning prefix of a language model. Initialization with different checkpoints (e.g., encoder with BERT and decoder with GPT in BERT2GPT) is not effective for document summarization; BERT2GPT and ROBERTA2GPT are inferior to RND2GPT on the Table 5 : Summarization results of different models and their initialization setups. We compare our setups (the bottom block) against both non-pre-trained (the top block) and pre-trained (the middle block) models on various datasets: the Lead baseline, PtGen (See et al., 2017) , ConvS2S (Gehring et al., 2017) , MMN (Kim et al., 2019) , Bottom-Up (Gehrmann et al., 2018) , MASS (Song et al., 2019) , TransLM (Khandelwal et al., 2019) , and UniLM (Dong et al., 2019) . The Lead results for the CNN/DailyMail dataset is taken from Narayan et al. (2018b) , whereas Lead, PtGen, and ConvS2S results on the BBC dataset are taken from Narayan et al. (2018a) . Our best results are boldfaced and the best results on the datasets are italicized.",
"cite_spans": [
{
"start": 931,
"end": 949,
"text": "(See et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 960,
"end": 982,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 989,
"end": 1007,
"text": "(Kim et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 1020,
"end": 1043,
"text": "(Gehrmann et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 1051,
"end": 1070,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF52"
},
{
"start": 1081,
"end": 1106,
"text": "(Khandelwal et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 1119,
"end": 1138,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1202,
"end": 1224,
"text": "Narayan et al. (2018b)",
"ref_id": "BIBREF37"
},
{
"start": 1302,
"end": 1324,
"text": "Narayan et al. (2018a)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 672,
"end": 679,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "Gigaword CNN/DailyMail BBC XSum R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "BBC dataset and BERT2GPT to RND2GPT on the CNN/DailyMail dataset. However, this is not the case with the Gigaword dataset, which has 3.8M training instances; BERT2GPT and ROBERTA2GPT perform better than RND2GPT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "ROBERTASHARE performs the best and is on par with the current state-of-the-art MASS model (Song et al., 2019) on the Gigaword dataset. The MASS model has an advantage of pre-training encoder-decoder attention from scratch, our proposed models use the publicly available pre-trained checkpoints and only fine-tune on the target task. It is not obvious how the masked seq2seq pretraining objective for sentence generation in the MASS model will be beneficial for tasks like document summarization. Our proposed models provide a generic alternative and can be easily adapted to various text generation tasks. The ROBERTASHARE setup sets a new state-of-the-art, outperforming all existing baselines by a large margin on the BBC extreme summarization task. The best model on the CNN/DailyMail dataset outperforms the Pointer Generator network (See et al., 2017) and the pre-trained single-decoder model with Trans-formerLM (Khandelwal et al., 2019) . Our model, however, lags behind the Bottom-Up system (Gehrmann et al., 2018 ) with a task-specific module for content selection along with the copy mechanism (Gu et al., 2016) and the UniLM model (Dong et al., 2019) with BERT-Large pre-trained for bidirectional, unidirectional and seq2seq language modeling objectives. The UniLM model is also fine-tuned with an additional extractive summarization objective to predict relevant sentences in the document; this objective could be beneficial to generate the CNN/DailyMail extracts.",
"cite_spans": [
{
"start": 90,
"end": 109,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF52"
},
{
"start": 838,
"end": 856,
"text": "(See et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 918,
"end": 943,
"text": "(Khandelwal et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 999,
"end": 1021,
"text": "(Gehrmann et al., 2018",
"ref_id": "BIBREF11"
},
{
"start": 1104,
"end": 1121,
"text": "(Gu et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 1142,
"end": 1161,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Abstractive Summarization",
"sec_num": "4.4"
},
{
"text": "Combining Different Checkpoints. Combining BERT and GPT-2 into a single model (BERT2GPT) did not work and often underperformed than a randomly initialized baseline. This is presumably because the model has to learn two different vocabularies. This argument is backed by the fact that for MT, de\u2192en the BERT2GPT setup performed well. For this task the vocabulary setting is in favor of this particular task, meaning that two vocabularies have to be learned anyways and the output is English, on which GPT-2 was trained. Because RoBERTa and GPT-2 share the same vocabulary, combining both into a single model (ROBERTA2GPT) showed strong results on several tasks but did not outperform a setup where RoBERTa is used in the encoder and decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "Tuning GPT-2 Based Models. We were surprised that setups using the GPT-2 checkpoint performed relatively poorly given that it is trained as a language model on a large corpus; our intuition was that GPT-2 initialized decoders will be strong natural language generators. To ensure that this was not due to an unfortunate choice of hyperparameters, we tuned the learning rate, the warmup steps, and the optimizer \u2208 {Adam, Adafactor} for the GPT-2 based setups (RND2GPT, GPT, BERT2GPT) on the DiscoFuse dataset. Naturally, this gave us slightly higher numbers but not at a magnitude that would suggest a previously suboptimal setting. Specifically, we obtained a SARI score of 88.8 compared with 88.4 for BERT2GPT, 88.1 compared with 88.0 for GPT, and 87.7 compared with 86.5 for RND2GPT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "Initializing Only Embeddings. We want to investigate the impact of the non-contextualized BERT and GPT-2 embeddings. This means we are initializing the transformer model with only the embedding matrices. The advantage of this setup would be that we could freely choose the model architecture and size and adapt it to a specific task. We found almost no improvement over the fully randomly initialized model RND2RND. Concretely, we compute a SARI score of 87.1 using the BERT embeddings and 87.0 using the GPT-2 embeddings, compared with 86.9 of the RND2RND baseline. We observe slightly higher improvements of up to 2 percentage points when training on only 10% of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "Initializing Only Layers. Contrary to the previous paragraph, we want to investigate the effect of initializing everything but the word embedding matrix. The embedding matrix accounts for only 10% to 31% of all learnable parameters, and sometimes the vocabulary given from a public checkpoint might not be optimal for a certain task. In these cases, it would be nice to redefine the vocabulary while still leveraging the checkpoint. First, we remove the embeddings matrices from the warm-started variables and observe a drop of 1.7 points using the BERTSHARE setup and 11 points using the GPT setup ( Table 6 ). The latter is probably due to the large vocab of the GPT-2 model, which now remains random-initialized. We then train a new BPE model with 16k tokens using the DiscoFuse training data (Kudo and Richardson, 2018; Sennrich et al., 2016) . We observe almost no change on BERTSHARE, suggesting that the BERT vocabulary was already optimal for DiscoFuse. GPT, however, showed a significant improvement using this much smaller vocabulary but is still behind the fully initialized setup. Finally, we experimented with a more sensitive way of training the model, meaning that we fix all warm-started variables for 100k steps. During this pre-training phase, we only train the new word embeddings. After the pre-training, we fine-tune the entire model for another 300k steps. This training scheme resulted in an improvement of 0.5 for the BERTSHARE setup, but overall the number is still considerably behind the fully initialized setup. For GPT, this training scheme did not result in a satisfying training curve.",
"cite_spans": [
{
"start": 796,
"end": 823,
"text": "(Kudo and Richardson, 2018;",
"ref_id": "BIBREF27"
},
{
"start": 824,
"end": 846,
"text": "Sennrich et al., 2016)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [
{
"start": 601,
"end": 608,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "Initializing a Subset of Layers. Motivated by the results of using 24 layers, we want to investigate whether only a subset of these 24 layers can be used. To account for the larger hidden layer size (1,024 vs. 768) and filter size (4,096 vs. 3,072), we limit ourselves to using only 10 layers and the embedding matrix of this model. This model still BERTSHARE GPT DiscoFuse 89.3 88.0 -embeddings from checkpoint 87.5 77.0 + task specific SentencePieces 87.5 84.2 + pre-training SentencePieces 88.0 69.7 Table 6 : SARI scores on the DiscoFuse dataset when experimenting with different embedding setups. Each row also includes the setups of all previous rows.",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "has more parameters then the base model (324M vs. 221M for BERT2BERT, 198M vs. 136M for BERTSHARE) but can be trained with the same batch size, in a comparable amount of time (3 min/1,000 iterations). As an initial experiment, we used the first 10 layers out of the large BERT checkpoint to initialize the BERTSHARE setup. This gave us a SARI score of 88.2 on DiscoFuse, compared with 89.3 when using the base checkpoint and compared with 87.0 when using the embeddings only (see Initializing Only Embeddings section).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "We then performed a hyperparameter search on the evaluation set using CMA-ES (Hansen, 2016) to find an optimal subset of layers to use. The best setup used the following layers: 9, 10, 13-18, 23, 24; and achieved a SARI score of 89.1. Although this is a remarkable improvement over using the first 10 layers, this setup is still outperformed by the base BERT model.",
"cite_spans": [
{
"start": 77,
"end": 91,
"text": "(Hansen, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion on Ablation Studies",
"sec_num": "5"
},
{
"text": "Finally, we present a qualitative analysis of these models for text generation. In particular, we focus on extreme summarization, which assesses models ability to do document-level inference and abstraction. We evaluated summaries from arandomly initialized model (RND2RND) and from best performing models initialized with GPT checkpoints (RND2GPT), BERT checkpoints (BERTSHARE), and RoBERTa checkpoints (ROBERTASHARE). We also included GOLD summaries in our evaluation. Results are presented in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 503,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Analysis of Abstractive Summaries",
"sec_num": "6"
},
{
"text": "Human Assessment of Summary Quality. The study was conducted on the Amazon Mechanical Turk platform using Best-Worst Scaling, a less labor-intensive alternative to paired comparisons (Louviere and Woodworth, 1991; Louviere et al., 2015) . Our participants were presented with a document and summaries generated from two out of five systems (four models and gold summaries) and were asked to decide which summary was better than the other in order of informativeness (does the summary capture important information in the document correctly and concisely?) and fluency (is the summary written in well-formed English?) We randomly selected 40 documents from the XSum test set. We collected judgments from three different participants for each comparison. The order of summaries was randomized per document and the order of documents was randomized per participant. The score of a system was computed as the percentage of times it was chosen as best minus the percentage of times it was selected as worst. The scores range from \u22121 (worst) to 1 (best). See Figure 1 for a few sample predictions that were used in our human evaluation. Our participants found the ROBERTASHARE summaries to be the best in terms of their overall quality; the BERTSHARE summaries ranked second after ROBERTASHARE. We further carried out pairwise comparisons between all models to assess whether system differences are statistically significant. 9 We did not observe significant differences between RND2RND and RND2GPT, RND2RND and BERTSHARE, and, ROBERTASHARE and GOLD. All other differences were statistically significant.",
"cite_spans": [
{
"start": 183,
"end": 213,
"text": "(Louviere and Woodworth, 1991;",
"ref_id": "BIBREF33"
},
{
"start": 214,
"end": 236,
"text": "Louviere et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 1420,
"end": 1421,
"text": "9",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1053,
"end": 1061,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Analysis of Abstractive Summaries",
"sec_num": "6"
},
{
"text": "Summary Lengths and Repetitions. All models generated summaries of comparable lengths; the average length of summaries is 20.90 for RND2RND, 21.49 for RND2GPT, 20.71 for BERTSHARE, and 21.70 for ROBERTASHARE. ROBERTASHAREproduced summaries were closest to the GOLD summaries in terms of length (21.70 vs. 24.61). Finally, we estimated the percentage of summaries with at least one repetition of rare or content words. We discarded the 500 most common words from the model generated and reference summaries, the rest were considered as rare or content words. BERTSHARE and ROBERTASHARE summaries improve over the RND2RND summaries, but have more repetitions than the RND2GPT summaries. See examples in Figure 1 for redundant repeated spans marked in orange.",
"cite_spans": [],
"ref_spans": [
{
"start": 701,
"end": 709,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Analysis of Abstractive Summaries",
"sec_num": "6"
},
{
"text": "Overall, BERTSHARE and ROBERTASHARE summaries are unequivocally better than RND2GPT summaries in terms of both automatic evaluations (assessing ROUGE) and human evaluations (assessing summary quality); there is still room for improvements in these models (Dong et al., 2019; Song et al., 2019; .",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Dong et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 275,
"end": 293,
"text": "Song et al., 2019;",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Abstractive Summaries",
"sec_num": "6"
},
{
"text": "Representation Learning. Starting around 2013, word embeddings like word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) became popular as they were easy to train in an unsupervised fashion on raw text and they improved several downstream tasks when used as features. These word embeddings are invariant to the context in which we the word. There has been previously work to contextualize these embeddings, mainly to account for synonyms (e.g., Huang et al., 2012; Rothe and Sch\u00fctze, 2015) but only in 2018 did training of the contextualized embeddings using large deep neural networks and an unsupervised training scheme become popular. the model was evaluated on more general natural language processing tasks, like machine translation, reading comprehension, summarization, and language modeling. GPT-2 achieved new stateof-the-art results on several language modeling datasets. On the other tasks, GPT-2 outperformed some unsupervised baselines but is still far behind supervised or task-specific approaches.",
"cite_spans": [
{
"start": 77,
"end": 99,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF34"
},
{
"start": 109,
"end": 134,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF39"
},
{
"start": 459,
"end": 478,
"text": "Huang et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 479,
"end": 503,
"text": "Rothe and Sch\u00fctze, 2015)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "After we performed the majority of our experiments, XLNet (Yang et al., 2019) , an autoregressive pre-training method based on Transformer XL (Dai et al., 2019) , was released. XLNet achieved new state-of-the-art results on several NLP tasks. We leave the experiments with their public checkpoint for future work.",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 142,
"end": 160,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": null
},
{
"text": "We performed an extensive study on leveraging pre-trained checkpoints for sequence generation. Our findings show that a pre-trained encoder is an essential part. Most tasks also profit from sharing the weights between the encoder and the decoder, which additionally decreases the memory footprint. While combing BERT and GPT-2 into a single model often underperformed a randomly initialized baseline, combining RoBERTa and GPT-2 achieved strong results and shows the importance of sharing the vocabulary. Training a language-specific BERT model also improves performance over using the multilingual version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "BERT checkpoints are available at https:// github.com/google-research/bert.2 GPT-2 checkpoints are available at https:// github.com/openai/gpt-2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "SARI is a lexical similarity metric that compares the model's output to multiple references and the input in order to assess the model's ability to add, delete, and keep an n-gram. Its implementation is available at: https://github. com/tensorflow/tensor2tensor/blob/master/ tensor2tensor/utils/sari hook.py.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One-way ANOVA with post hoc Tukey HSD tests; p < 0.01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers and the action editor for their feedback. We would like to thank Ryan McDonald, Joshua Maynez, and Bernd Bohnet for useful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Whereas ELMo (Peters et al., 2018) and ULMFiT (Howard and Ruder, 2018) are based on LSTMs (Hochreiter and Schmidhuber, 1997) , BERT and GPT are based on the transformer architecture (Vaswani et al., 2017) . This architecture outperforms LSTMs on several NLP tasks and we therefore concentrated on these two pre-trained models. The contextualized embedding for each input token is given by the corresponding output of the last encoder layer.Pre-training Models. One can also see these models as pre-trained models (Dai and Le, 2015) , which are then fine-tuned for a downstream task. This is the conceptual view we adopted for this paper. Why unsupervised pre-training helps deep learning was investigated by Erhan et al. (2010) . While the unsupervised pre-training strategies are different from those used in our paper, we expect the findings to still hold. They show that unsupervised pre-training is not simply a way of getting a good initial marginal distribution, that classical regularization techniques cannot achieve the same performance as unsupervised pre-training, and that the effect of unsupervised pre-training does not go away with more training data. An extensive study of pre-training was done by Wang et al. (2019a) . This study compares single sentence classification, sentence pair classification, seq2seq and language modeling tasks for pre-training, and measures the effect on GLUE. The primary results support the use of language modeling. Peters et al. (2019) explore whether it is preferable to fine-tune the entire model on a specific task or to use the learned representations as features (i.e., freezing the pre-trained model). Their results suggest that the relative performance of fine-tuning vs. feature extraction depends on the similarity between the pre-training and the target tasks. Wang et al. (2019b) propose a combination of both, where first the model is trained with the BERT parameters being frozen and then the entire model is fine-tuned. This is the training scheme we used in the Initializing Only Layers section.Pre-training for Sequence Generation. Pretraining for seq2seq learning was first done by Ramachandran et al. (2017) . They used a language model to pre-train the encoder and decoder of an RNN seq2seq model. Their method improved BLEU scores on newstest2014 by 3 points and ROUGE-L on CNN/DailyMail also by 3 points. However, their BLEU score of 24.7 on newstest2014En\u2192De, compared to 30.6 in this work, and 29.4 ROUGE-L on CNN/DailyMail, compared with 36.33, also show the superiority of the transformer model as well as the masked language model objective of BERT. MASS (Song et al., 2019 ) is a BERT-inspired method of pre-training seq2seq models. One advantage of this method is that, in contrast to our setups (except for GPT), the encoder-decoder attention mechanism is also pretrained. The downside of this approach is that the pre-trained model is task-specific and not as general as BERT or GPT-2. UniLM (Dong et al., 2019) also unifies bidirectional, unidirectional, and seq2seq language modeling. At the time of writing, no public checkpoint was available to us. We compare our work with their results in Table 5 . To overcome the issue that the encoder-decoder attention is not pre-trained, Khandelwal et al. (2019) pre-trained a single transformer language model that encodes the source and generates the target. This setup matches our GPT setup. Conneau and Lample (2019) pre-train their model using casual language modeling (like GPT), masked language modeling (like BERT) and a third new objective called translation language modeling to improve cross-lingual pre-training.Leveraging Public Checkpoints. BERT has been used for various NLP tasks, such as question answering on the SQuAD dataset (Rajpurkar et al., 2018) . It also achieved new state-of-the-art results on the GLUE benchmark (Williams et al., 2018) and grounded commonsense inference (SWAG, Zellers et al., 2018) . All of these tasks are a form of classification or regression. Liu (2019) fine-tuned BERT for extractive summarization.An analysis of different layers of the BERT model was performed by Tenney et al. (2019) . They found that the classical NLP pipeline appears in the expected sequence. In the context of our experiments in the Initializing a Subset of Layers section, this would mean that the DiscoFuse task profits the most from pre-trained information about POS, constituents, dependencies, and semantic roles. A similar study by Jawahar et al. (2019) found that BERT captures phrase-level information in the lower layers and linguistic information in intermediate layers, with surface features at the bottom, syntactic features in the middle, and semantic features at the top.GPT was also evaluated on natural language inference tasks. In the extended version of GPT-2,",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF40"
},
{
"start": 46,
"end": 70,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 90,
"end": 124,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF19"
},
{
"start": 182,
"end": 204,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF54"
},
{
"start": 513,
"end": 531,
"text": "(Dai and Le, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 708,
"end": 727,
"text": "Erhan et al. (2010)",
"ref_id": "BIBREF9"
},
{
"start": 1214,
"end": 1233,
"text": "Wang et al. (2019a)",
"ref_id": "BIBREF55"
},
{
"start": 1819,
"end": 1838,
"text": "Wang et al. (2019b)",
"ref_id": "BIBREF56"
},
{
"start": 2147,
"end": 2173,
"text": "Ramachandran et al. (2017)",
"ref_id": "BIBREF46"
},
{
"start": 2629,
"end": 2647,
"text": "(Song et al., 2019",
"ref_id": "BIBREF52"
},
{
"start": 2970,
"end": 2989,
"text": "(Dong et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 3260,
"end": 3284,
"text": "Khandelwal et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 3417,
"end": 3442,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF3"
},
{
"start": 3767,
"end": 3791,
"text": "(Rajpurkar et al., 2018)",
"ref_id": "BIBREF45"
},
{
"start": 3862,
"end": 3885,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF57"
},
{
"start": 3921,
"end": 3949,
"text": "(SWAG, Zellers et al., 2018)",
"ref_id": null
},
{
"start": 4015,
"end": 4025,
"text": "Liu (2019)",
"ref_id": "BIBREF30"
},
{
"start": 4138,
"end": 4158,
"text": "Tenney et al. (2019)",
"ref_id": "BIBREF53"
},
{
"start": 4484,
"end": 4505,
"text": "Jawahar et al. (2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 3173,
"end": 3180,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Split and rephrase: Better evaluation and stronger baselines",
"authors": [
{
"first": "Roee",
"middle": [],
"last": "Aharoni",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "719--724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roee Aharoni and Yoav Goldberg. 2018. Split and rephrase: Better evaluation and stronger baselines. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 719-724. Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning to split and rephrase from Wikipedia edit history",
"authors": [
{
"first": "Jan",
"middle": [
"A"
],
"last": "Botha",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Alex",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "732--737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from Wikipedia edit his- tory. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 732-737. Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "KERMIT: Generative insertion-based modeling for sequences. CoRR, abs",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Guu",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 1906,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. KERMIT: Generative insertion-based modeling for sequences. CoRR, abs/1906.01604v1.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cross-lingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining, In H. Wallach, H. Larochelle, A. Beygelzimer, F. Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 7057-7067. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semisupervised sequence learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "3079--3087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew M. Dai and Quoc V. Le. 2015. Semi- supervised sequence learning, In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 28, pages 3079-3087. Curran Associates, Inc.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transformer-XL: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2978--2988",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pretraining of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre- training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unified language model pre-training for natural language understanding and generation",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wenhui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Hsiao-Wuen",
"middle": [],
"last": "Hon",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "13042--13054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13042-13054. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding backtranslation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back- translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Why does unsupervised pre-training help deep learning",
"authors": [
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Machine Learning Research",
"volume": "11",
"issue": "",
"pages": "625--660",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11:625-660.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "70",
"issue": "",
"pages": "1243--1252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1243-1252. Sydney, Australia.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bottom-up abstractive summarization",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4098--4109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summariza- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 4098-4109, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "DiscoFuse: A largescale dataset for discourse-based sentence fusion",
"authors": [
{
"first": "Mor",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Malmi",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "3443--3455",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mor Geva, Eric Malmi, Idan Szpektor, and Jonathan Berant. 2019. DiscoFuse: A large- scale dataset for discourse-based sentence fu- sion. In Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, pages 3443-3455, Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Incorporating copying mechanism in sequence-to-sequence learning",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Zhengdong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1631--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mech- anism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1631-1640, Berlin, Germany. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The CMA evolution strategy: A tutorial",
"authors": [
{
"first": "Nikolaus",
"middle": [],
"last": "Hansen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaus Hansen. 2016. The CMA evolution strategy: A tutorial. CoRR, abs/1604.00772.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Layer-wise coordination between encoder and decoder for neural machine translation",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhibo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In S. Bengio, H. Wallach, H. Larochelle, K.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Advances in Neural Information Processing Systems 31",
"authors": [
{
"first": "N",
"middle": [],
"last": "Grauman",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cesa-Bianchi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "7944--7954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 7944-7954, Curran Associates, Inc.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Bridging nonlinearities and stochastic regularizers with gaussian error linear units",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Kocisky",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "328--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Uni- versal language model fine-tuning for text clas- sification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Improving word representations via global context and multiple word prototypes",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "873--882",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 873-882, Jeju Island, Korea. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sample efficient text summarization using a single pre-trained transformer",
"authors": [
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, and Lukasz Kaiser. 2019. Sample efficient text summarization using a single pre-trained trans- former. CoRR, abs/1905.08836.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Abstractive summarization of",
"authors": [
{
"first": "Byeongchang",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Hyunwoo",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Gunhee",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Reddit posts with multi-level memory networks",
"authors": [],
"year": null,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2519--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reddit posts with multi-level memory net- works. In Proceedings of the 2019 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, pages 2519-2531.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Minnesota",
"middle": [],
"last": "Minneapolis",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Senten-cePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Senten- cePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language genera- tion, translation, and comprehension. CoRR, abs/1910.13461.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Chin Yew Lin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "150--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150-157.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Fine-tune BERT for extractive summarization. CoRR, abs",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1903,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu. 2019. Fine-tune BERT for extractive summarization. CoRR, abs/1903.10318.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Best-worst scaling: Theory, methods and applications",
"authors": [
{
"first": "Jordan",
"middle": [
"J"
],
"last": "Louviere",
"suffix": ""
},
{
"first": "Terry",
"middle": [
"N"
],
"last": "Flynn",
"suffix": ""
},
{
"first": "Anthony Alfred John",
"middle": [],
"last": "Marley",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan J. Louviere, Terry N. Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Best-worst scaling: A model for the largest difference judgments",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "George",
"middle": [
"G"
],
"last": "Louviere",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Woodworth",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jordan J. Louviere and George G. Woodworth. 1991. Best-worst scaling: A model for the lar- gest difference judgments. University of Alberta, Working Paper.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Annotated gigaword",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated giga- word. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95-100, Montreal, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1797--1807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018a. Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Ranking sentences for extractive summarization with reinforcement learning",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1747--1759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1747-1759, New Orleans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Split and rephrase",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Gardent",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Anastasia",
"middle": [],
"last": "Shimorina",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "606--616",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Claire Gardent, Shay B. Cohen, and Anastasia Shimorina. 2017. Split and rephrase. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 606-616. Copen- hagen, Denmark. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543. Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextu- alized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2227-2237. New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "To tune or not to tune? adapting pretrained representations to diverse tasks",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "7--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Repre- sentation Learning for NLP, pages 7-14.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Improving language understanding by generative pretraining",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre- training. Technical report, OpenAI.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "784--789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswer- able questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 784-789, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Unsupervised pretraining for sequence to sequence learning",
"authors": [
{
"first": "Prajit",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "383--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 383-391.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Denmark",
"middle": [],
"last": "Copenhagen",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "AutoExtend: Extending word embeddings to embeddings for synsets and lexemes",
"authors": [
{
"first": "Sascha",
"middle": [],
"last": "Rothe",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1793--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. AutoEx- tend: Extending word embeddings to embed- dings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Inter- national Joint Conference on Natural Language Processing, pages 1793-1803, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Get to the point: Summarization with pointer-generator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1073--1083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summa- rization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073-1083, Vancouver, Canada. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715-1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Self-attention with relative position representations",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "464--468",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 464-468, New Orleans, Louisiana. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "MASS: Masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "97",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language genera- tion. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 5926-5936. PMLR.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Can you tell me how to get past sesame street? Sentence-level pretraining beyond language modeling",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hula",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Raghavendra",
"middle": [],
"last": "Pappagari",
"suffix": ""
},
{
"first": "R",
"middle": [
"Thomas"
],
"last": "Mccoy",
"suffix": ""
},
{
"first": "Roma",
"middle": [],
"last": "Patel",
"suffix": ""
},
{
"first": "Najoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Yinghui",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Katherin",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shuning",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4465--4476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bowman. 2019a. Can you tell me how to get past sesame street? Sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4465-4476, Association for Computational Linguistics.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "To tune or not to tune? how about the best of both worlds?",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chunye",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kailin",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jupeng",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Wang, Haibo Su, Chunye Wang, Kailin Ji, and Jupeng Ding. 2019b. To tune or not to tune? how about the best of both worlds? CoRR, abs/1907.05338.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, pages 1112-1122, New Orleans, Louisiana. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Optimizing statistical machine translation for text simplification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Quanze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "401--415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimi- zing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "XLNet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "SWAG: A large-scale adversarial dataset for grounded commonsense inference",
"authors": [
{
"first": "Rowan",
"middle": [],
"last": "Zellers",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "93--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Confe- rence on Empirical Methods in Natural Language Processing, pages 93-104, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Model generated and reference summaries used for human evaluation. Words in orange correspond to incorrect or repeated information."
},
"TABREF0": {
"type_str": "table",
"text": "",
"content": "<table><tr><td>: The number of total trainable param-</td></tr><tr><td>eters, embedding parameters, and parameters</td></tr><tr><td>initialized from the checkpoint vs. randomly.</td></tr><tr><td>The BERT/GPT-2 embeddings have 23M/39M</td></tr><tr><td>parameters. The encoder-decoder attention ac-</td></tr><tr><td>counts for 26M parameters.</td></tr><tr><td>with the existing TensorFlow BERT architectures</td></tr><tr><td>with some minor adjustments. 3 The vocabulary</td></tr><tr><td>treatment in RoBERTa is compatible with the</td></tr><tr><td>SentencePiece tokenization in</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Initialized with the base checkpoint (12 layers)",
"content": "<table><tr><td/><td>14.3</td><td>61.5</td><td>76.4</td></tr><tr><td>BERTSHARE</td><td>16.3</td><td>63.5</td><td>77.2</td></tr><tr><td>ROBERTASHARE</td><td>16.1</td><td>63.4</td><td>77.1</td></tr><tr><td>BERT2BERT</td><td>15.6</td><td>63.2</td><td>77.0</td></tr><tr><td>ROBERTA2GPT</td><td>15.1</td><td>63.2</td><td>76.8</td></tr><tr><td>BERT2RND</td><td>15.9</td><td>63.1</td><td>76.9</td></tr><tr><td>BERT2GPT</td><td>14.6</td><td>62.4</td><td>76.5</td></tr><tr><td>RND2BERT</td><td>15.2</td><td>61.8</td><td>76.5</td></tr><tr><td>RND2RND</td><td>14.6</td><td>61.7</td><td>76.3</td></tr><tr><td>RND2GPT</td><td>14.2</td><td>61.3</td><td>76.2</td></tr><tr><td>GPT</td><td>14.2</td><td>61.1</td><td>75.8</td></tr><tr><td colspan=\"4\">Initialized with the large checkpoint (24 layers)</td></tr><tr><td>ROBERTASHARE</td><td>16.4</td><td>63.8</td><td>77.4</td></tr><tr><td>BERTSHARE</td><td>16.6</td><td>63.7</td><td>77.3</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Results of different models and initialization setups on WikiSplit. Blockwise sorted by SARI score.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"text": "38.13 19.81 35.62 39.09 18.10 36.33 38.52 16.12 31.13 ROBERTASHARE 38.21 19.70 35.44 40.10 18.95 37.39 39.87 17.50 32.37 BERTSHARE 38.35 19.80 35.66 39.83 17.69 37.01 38.93 16.35 31.52 ROBERTASHARE 38.62 19.78 35.94 40.31 18.91 37.62 41.45 18.79 33.90",
"content": "<table><tr><td>Lead</td><td>-</td><td>-</td><td>-</td><td colspan=\"5\">39.60 17.70 36.20 16.30 1.61 11.95</td></tr><tr><td>PtGen</td><td>-</td><td>-</td><td>-</td><td colspan=\"5\">39.53 17.28 36.38 29.70 9.21 23.24</td></tr><tr><td>ConvS2S</td><td colspan=\"3\">35.88 17.48 33.29</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">31.89 11.54 25.75</td></tr><tr><td>MMN</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">32.00 12.10 26.00</td></tr><tr><td>Bottom-Up</td><td>-</td><td>-</td><td>-</td><td colspan=\"3\">41.22 18.68 38.34</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MASS</td><td colspan=\"3\">38.73 19.71 35.96</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TransLM</td><td>-</td><td>-</td><td>-</td><td colspan=\"3\">39.65 17.74 36.85</td><td>-</td><td>-</td><td>-</td></tr><tr><td>UniLM</td><td>-</td><td>-</td><td>-</td><td colspan=\"3\">43.47 20.30 40.63</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"5\">Initialized with the base checkpoint (12 layers)</td><td/><td/><td/></tr><tr><td>RND2RND</td><td colspan=\"8\">36.94 18.71 34.45 35.77 14.00 32.96 30.90 10.23 24.24</td></tr><tr><td>BERT2RND</td><td colspan=\"8\">37.71 19.26 35.26 38.74 17.76 35.95 38.42 15.83 30.80</td></tr><tr><td>RND2BERT</td><td colspan=\"8\">37.01 18.91 34.51 36.65 15.55 33.97 32.44 11.52 25.65</td></tr><tr><td>BERT2BERT</td><td colspan=\"8\">38.01 19.68 35.58 39.02 17.84 36.29 37.53 15.24 30.05</td></tr><tr><td>GPT</td><td colspan=\"8\">36.04 18.44 33.67 37.26 15.83 34.47 22.21 4.89 16.69</td></tr><tr><td>RND2GPT</td><td colspan=\"8\">36.21 18.39 33.83 32.08 8.81 29.03 28.48 8.77 22.30</td></tr><tr><td>BERT2GPT</td><td colspan=\"8\">36.77 18.23 34.24 25.20 4.96 22.99 27.79 8.37 21.91</td></tr><tr><td>ROBERTA2GPT</td><td colspan=\"8\">37.94 19.21 35.42 36.35 14.72 33.79 19.91 5.20 15.88</td></tr><tr><td colspan=\"5\">Initialized with the large checkpoint (24 layers)</td><td/><td/><td/></tr></table>",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"text": "Qualitative and human evaluations of BBC extreme summaries. The lowest numbers for repetitions and the highest numbers for quality are boldfaced. See the text for details.",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}