|
{ |
|
"paper_id": "N16-1012", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:38:42.553829Z" |
|
}, |
|
"title": "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "ive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation. Our model relies only on learned features and is easy to train in an end-to-end fashion on large data sets. Our experiments show that the model significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.", |
|
"pdf_parse": { |
|
"paper_id": "N16-1012", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "ive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of generation. Our model relies only on learned features and is easy to train in an end-to-end fashion on large data sets. Our experiments show that the model significantly outperforms the recently proposed state-of-the-art method on the Gigaword corpus while performing competitively on the DUC-2004 shared task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Generating a condensed version of a passage while preserving its meaning is known as text summarization. Tackling this task is an important step towards natural language understanding. Summarization systems can be broadly classified into two categories. Extractive models generate summaries by cropping important segments from the original text and putting them together to form a coherent summary. Abstractive models generate summaries from scratch without being constrained to reuse phrases from the original text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we propose a novel recurrent neural network for the problem of abstractive sentence summarization. Inspired by the recently proposed architectures for machine translation , our model consists of a conditional recurrent neural network, which acts as a decoder to generate the summary of an input sentence, much like a standard recurrent language model. In addition, at every time step the decoder also takes a conditioning input which is the output of an encoder module. Depending on the current state of the RNN, the encoder computes scores over the words in the input sentence. These scores can be interpreted as a soft alignment over the input text, informing the decoder which part of the input sentence it should focus on to generate the next word. Both the decoder and encoder are jointly trained on a data set consisting of sentence-summary pairs. Our model can be seen as an extension of the recently proposed model for the same problem by Rush et al. (2015) . While they use a feed-forward neural language model for generation, we use a recurrent neural network. Furthermore, our encoder is more sophisticated, in that it explicitly encodes the position information of the input words. Lastly, our encoder uses a convolutional network to encode input words. These extensions result in improved performance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 961, |
|
"end": 979, |
|
"text": "Rush et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contribution of this paper is a novel convolutional attention-based conditional recurrent neural network model for the problem of abstractive sentence summarization. Empirically we show that our model beats the state-of-the-art systems of Rush et al. (2015) on multiple data sets. Particularly notable is the fact that even with a simple generation module, which does not use any extractive feature tuning, our model manages to significantly outperform their ABS+ system on the Gigaword data set and is comparable on the DUC-2004 task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While there is a large body of work for generating extractive summaries of sentences (Jing, 2000; Knight and Marcu, 2002; McDonald, 2006; Clarke and Lapata, 2008; Filippova and Altun, 2013; Filippova et al., 2015) , there has been much less research on abstractive summarization. A count-based noisy-channel machine translation model was proposed for the problem in Banko et al. (2000) . The task of abstractive sentence summarization was later formalized around the DUC-2003 and DUC-2004 competitions (Over et al., 2007 , where the TOP-IARY system (Zajic et al., 2004) was the state-ofthe-art. More recently Cohn and Lapata (2008) and later Woodsend et al. (2010) proposed systems which made heavy use of the syntactic features of the sentence-summary pairs. Later, along the lines of Banko et al. (2000) , MOSES was used directly as a method for text simplification by Wubben et al. (2012) . Other works which have recently been proposed for the problem of sentence summarization include (Galanis and Androutsopoulos, 2010; Napoles et al., 2011; Cohn and Lapata, 2013) . Very recently Rush et al. 2015proposed a neural attention model for this problem using a new data set for training and showing state-of-the-art performance on the DUC tasks. Our model can be seen as an extension of their model.", |
|
"cite_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 97, |
|
"text": "(Jing, 2000;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 121, |
|
"text": "Knight and Marcu, 2002;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 137, |
|
"text": "McDonald, 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 162, |
|
"text": "Clarke and Lapata, 2008;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 189, |
|
"text": "Filippova and Altun, 2013;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 213, |
|
"text": "Filippova et al., 2015)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 366, |
|
"end": 385, |
|
"text": "Banko et al. (2000)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 467, |
|
"end": 475, |
|
"text": "DUC-2003", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 488, |
|
"text": "and DUC-2004", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 520, |
|
"text": "competitions (Over et al., 2007", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 549, |
|
"end": 569, |
|
"text": "(Zajic et al., 2004)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 631, |
|
"text": "Cohn and Lapata (2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 664, |
|
"text": "Woodsend et al. (2010)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 786, |
|
"end": 805, |
|
"text": "Banko et al. (2000)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 871, |
|
"end": 891, |
|
"text": "Wubben et al. (2012)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1026, |
|
"end": 1047, |
|
"text": "Napoles et al., 2011;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1048, |
|
"end": 1070, |
|
"text": "Cohn and Lapata, 2013)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let x denote the input sentence consisting of a sequence of M words", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "x = [x 1 , . . . , x M ], where each word x i is part of vocabulary V, of size |V| = V .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our task is to generate a target sequence y = [y 1 , . . . , y N ], of N words, where N < M , such that the meaning of x is preserved: y = argmax y P (y|x), where y is a random variable denoting a sequence of N words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Typically the conditional probability is modeled by a parametric function with parameters \u03b8: P (y|x) = P (y|x; \u03b8). Training involves finding the \u03b8 which maximizes the conditional probability of sentence-summary pairs in the training corpus. If the model is trained to generate the next word of the summary, given the previous words, then the above conditional can be factorized into a product of indi-vidual conditional probabilities:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "P (y|x; \u03b8) = N t=1 p(y t |{y 1 , . . . , y t\u22121 }, x; \u03b8). (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this work we model this conditional probability using an RNN Encoder-Decoder architecture, inspired by and subsequently extended in . We call our model RAS (Recurrent Attentive Summarizer).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Recurrent Architecture", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The above conditional is modeled using an RNN:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P (y t |{y 1 , . . . , y t\u22121 }, x; \u03b8) = P t = g \u03b8 1 (h t , c t ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where h t is the hidden state of the RNN:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h t = g \u03b8 1 (y t\u22121 , h t\u22121 , c t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Here c t is the output of the encoder module (detailed in \u00a73.2). It can be seen as a context vector which is computed as a function of the current state h t\u22121 and the input sequence x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our Elman RNN takes the following form (Elman, 1990):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "h t = \u03c3(W 1 y t\u22121 + W 2 h t\u22121 + W 3 c t ) P t = \u03c1(W 4 h t + W 5 c t ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where \u03c3 is the sigmoid function and \u03c1 is the softmax, defined as: \u03c1(o t ) = e ot / j e o j and W i (i = 1, . . . , 5) are matrices of learnable parameters of sizes W {1,2,3} \u2208 R d\u00d7d and W {4,5} \u2208 R d\u00d7V .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
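
{

"text": "The following is a minimal, illustrative NumPy sketch of a single Elman decoder step as written above; it is not the authors' Torch implementation, and the function and variable names (elman_decoder_step, the dict W) are our own. The output matrices are shaped V x d here so that P_t is a distribution over the vocabulary.\n\nimport numpy as np\n\ndef softmax(o):\n    e = np.exp(o - o.max())  # numerically stable version of rho\n    return e / e.sum()\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef elman_decoder_step(y_prev, h_prev, c_t, W):\n    # y_prev: embedding of the previously generated word (d,)\n    # h_prev: previous hidden state (d,); c_t: encoder context (d,)\n    # W['W1'], W['W2'], W['W3'] are d x d; W['W4'], W['W5'] are V x d.\n    h_t = sigmoid(W['W1'] @ y_prev + W['W2'] @ h_prev + W['W3'] @ c_t)\n    P_t = softmax(W['W4'] @ h_t + W['W5'] @ c_t)  # distribution over the vocabulary\n    return h_t, P_t",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recurrent Decoder",

"sec_num": "3.1"

},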
|
{ |
|
"text": "The LSTM decoder is defined as (Hochreiter and Schmidhuber, 1997) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 65, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "i t = \u03c3(W 1 y t\u22121 + W 2 h t\u22121 + W 3 c t ) i t = tanh(W 4 y t\u22121 + W 5 h t\u22121 + W 6 c t ) f t = \u03c3(W 7 y t\u22121 + W 8 h t\u22121 + W 9 c t ) o t = \u03c3(W 10 y t\u22121 + W 11 h t\u22121 + W 12 c t ) m t = m t\u22121 f t + i t i t h t = m t o t P t = \u03c1(W 13 h t + W 14 c t ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Operator refers to component-wise multiplication, and W i (i = 1, . . . , 14) are matrices of learnable parameters of sizes W {1,...,12} \u2208 R d\u00d7d , and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W {13,14} \u2208 R d\u00d7V .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Recurrent Decoder", |
|
"sec_num": "3.1" |
|
}, |
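
{

"text": "As a complement, here is a minimal NumPy sketch of one LSTM decoder step following the equations above; the layout of W (a dict indexed 1..14) and the function name are our own illustrative choices, not the authors' implementation.\n\nimport numpy as np\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\ndef lstm_decoder_step(y_prev, h_prev, m_prev, c_t, W):\n    # W[1]..W[12] are d x d; W[13], W[14] map the d-dimensional state to V logits.\n    i_t = sigmoid(W[1] @ y_prev + W[2] @ h_prev + W[3] @ c_t)      # input gate\n    ip_t = np.tanh(W[4] @ y_prev + W[5] @ h_prev + W[6] @ c_t)     # candidate i'_t\n    f_t = sigmoid(W[7] @ y_prev + W[8] @ h_prev + W[9] @ c_t)      # forget gate\n    o_t = sigmoid(W[10] @ y_prev + W[11] @ h_prev + W[12] @ c_t)   # output gate\n    m_t = m_prev * f_t + i_t * ip_t                                # component-wise products\n    h_t = m_t * o_t\n    logits = W[13] @ h_t + W[14] @ c_t\n    P_t = np.exp(logits - logits.max())\n    P_t = P_t / P_t.sum()                                          # softmax over the vocabulary\n    return h_t, m_t, P_t",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Recurrent Decoder",

"sec_num": "3.1"

},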
|
{ |
|
"text": "We now give the details of the encoder which computes the context vector c t for every time step t of the decoder above. With a slight overload of notation, for an input sentence x we denote by x i the d dimensional learnable embedding of the i-th word (x i \u2208 R d ). In addition the position i of the word x i is also associated with a learnable embedding l i of size d (l i \u2208 R d ). Then the full embedding for i-th word in x is given by a i = x i + l i . Let us denote by B k \u2208 R q\u00d7d a learnable weight matrix which is used to convolve over the full embeddings of consecutive words. Let there be d such matrices (k \u2208 {1, . . . , d}). The output of convolution is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "z ik = q/2 h=\u2212q/2 a i+h \u2022 b k q/2+h ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where b k j is the j-th column of the matrix B k . Thus the d dimensional aggregate embedding vector z i is defined as z i = [z i1 , . . . , z id ]. Note that each word x i in the input sequence is associated with one aggregate embedding vector z i . The vectors z i can be seen as a representation of the word which captures the position in which it occurs in the sentence and also the context in which it appears in the sentence. In our experiments the width q of the convolution matrix B k was set to 5. To account for words at the boundaries of x we first pad the sequence on both sides with dummy words before computing the aggregate vectors z i 's.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
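
{

"text": "A small NumPy sketch of the aggregate embeddings z_i of Equation (2); the names are ours, zero vectors stand in for the dummy boundary words, and we treat the rows of each B^k as aligned with the q window positions so that every product is over R^d (the paper's row/column convention is ambiguous after extraction).\n\nimport numpy as np\n\ndef aggregate_embeddings(x_emb, pos_emb, B, q=5):\n    # x_emb, pos_emb: M x d word and position embeddings, so a_i = x_i + l_i.\n    # B: d x q x d array stacking the d convolution filters B^k.\n    a = x_emb + pos_emb\n    M, d = a.shape\n    half = q // 2\n    a_pad = np.vstack([np.zeros((half, d)), a, np.zeros((half, d))])  # pad with dummy words\n    z = np.zeros((M, d))\n    for i in range(M):\n        window = a_pad[i:i + q]                 # a_{i-q/2} ... a_{i+q/2}\n        for k in range(d):\n            z[i, k] = np.sum(window * B[k])     # z_ik = sum_h a_{i+h} . b^k_{q/2+h}\n    return z",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attentive Encoder",

"sec_num": "3.2"

},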
|
{ |
|
"text": "Given these aggregate vectors of words, we compute the context vector c t (the encoder output) as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "c t = M j=1 \u03b1 j,t\u22121 x j ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where the weights \u03b1 j,t\u22121 are computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b1 j,t\u22121 = exp(z j \u2022 h t\u22121 ) M i=1 exp(z i \u2022 h t\u22121 ) .", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Attentive Encoder", |
|
"sec_num": "3.2" |
|
}, |
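
{

"text": "A minimal sketch of Equations (3) and (4), again in NumPy with our own function name: the scores z_j . h_{t-1} are normalized with a softmax to give the soft alignment alpha, and the context c_t is the alpha-weighted sum of the input word embeddings x_j.\n\nimport numpy as np\n\ndef attention_context(z, x_emb, h_prev):\n    # z: M x d aggregate vectors, x_emb: M x d word embeddings, h_prev: decoder state (d,).\n    scores = z @ h_prev                  # z_j . h_{t-1} for every input position j\n    alpha = np.exp(scores - scores.max())\n    alpha = alpha / alpha.sum()          # Equation (4)\n    c_t = alpha @ x_emb                  # Equation (3)\n    return c_t, alpha",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Attentive Encoder",

"sec_num": "3.2"

},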
|
{ |
|
"text": "Given a training corpus S = {(x i , y i )} S i=1 of S sentence-summary pairs, the above model can be trained end-to-end using stochastic gradient descent by minimizing the negative conditional log likelihood of the training data with respect to \u03b8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Generation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L = \u2212 S i=1 N t=1 log P (y i t |{y i 1 , . . . , y i t\u22121 }, x i ; \u03b8),", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Training and Generation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where the parameters \u03b8 constitute the parameters of the decoder and the encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Generation", |
|
"sec_num": "3.3" |
|
}, |
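
{

"text": "To make Equation (5) concrete, the sketch below computes the negative log-likelihood contributed by one sentence-summary pair from the per-step distributions P_t produced by the decoder; summing this quantity over the corpus gives L. The interface (pair_nll, step_probs, target_ids) is an illustrative assumption, not the authors' training code.\n\nimport numpy as np\n\ndef pair_nll(step_probs, target_ids):\n    # step_probs: list of vocabulary distributions P_1..P_N from the decoder.\n    # target_ids: indices of the gold summary words y_1..y_N.\n    return -sum(np.log(P_t[y_t] + 1e-12) for P_t, y_t in zip(step_probs, target_ids))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and Generation",

"sec_num": "3.3"

},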
|
{ |
|
"text": "Once the parametric model is trained we generate a summary for a new sentence x through a wordbased beam search such that P (y|x) is maximized, argmax P (y t |{y 1 , . . . , y t\u22121 }, x). The search is parameterized by the number of paths k that are pursued at each time step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Generation", |
|
"sec_num": "3.3" |
|
}, |
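
{

"text": "Below is an illustrative pure-Python sketch of the word-based beam search with k paths; step_fn, bos_id, eos_id, and max_len are assumed interfaces and defaults of our own, not values prescribed by the paper.\n\ndef beam_search(step_fn, start_state, bos_id, eos_id, k=10, max_len=30):\n    # step_fn(prefix, state) returns (log-probabilities over the vocabulary, new state).\n    beams = [([bos_id], start_state, 0.0)]\n    for _ in range(max_len):\n        candidates = []\n        for prefix, state, score in beams:\n            if prefix[-1] == eos_id:             # finished hypotheses are kept as-is\n                candidates.append((prefix, state, score))\n                continue\n            log_probs, new_state = step_fn(prefix, state)\n            best = sorted(enumerate(log_probs), key=lambda t: -t[1])[:k]\n            for w, lp in best:\n                candidates.append((prefix + [w], new_state, score + lp))\n        beams = sorted(candidates, key=lambda t: -t[2])[:k]   # keep the k best paths\n        if all(p[-1] == eos_id for p, _, _ in beams):\n            break\n    return beams[0][0]                           # highest-scoring summary so far",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Training and Generation",

"sec_num": "3.3"

},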
|
{ |
|
"text": "4 Experimental Setup", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Generation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Our models are trained on the annotated version of the Gigaword corpus (Graff et al., 2003; Napoles et al., 2012) and we use only the annotations for tokenization and sentence separation while discarding other annotations such as tags and parses. We pair the first sentence of each article with its headline to form sentence-summary pairs. The data is pre-processed in the same way as Rush et al. (2015) and we use the same splits for training, validation, and testing. For Gigaword we report results on the same randomly held-out test set of 2000 sentence-summary pairs as (Rush et al., 2015). 1 We also evaluate our models on the DUC-2004 evaluation data set comprising 500 pairs (Over et al., 2007) . Our evaluation is based on three variants of ROUGE (Lin, 2004) , namely, ROUGE-1 (unigrams), ROUGE-2 (bigrams), and ROUGE-L (longest-common substring).", |
|
"cite_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 91, |
|
"text": "(Graff et al., 2003;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "Napoles et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 596, |
|
"text": "1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 682, |
|
"end": 701, |
|
"text": "(Over et al., 2007)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 766, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets and Evaluation", |
|
"sec_num": "4.1" |
|
}, |
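
{

"text": "As a rough illustration of the pairing step (the exact preprocessing follows Rush et al. (2015) and is not reproduced here), the following hypothetical helper pairs the first sentence of each article with its headline and drops pairs with empty titles, as described in footnote 1.\n\ndef make_pairs(articles):\n    # articles: iterable of (headline, sentences) with sentences already tokenized.\n    pairs = []\n    for headline, sentences in articles:\n        if headline and sentences:\n            pairs.append((sentences[0], headline))\n    return pairs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Datasets and Evaluation",

"sec_num": "4.1"

},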
|
{ |
|
"text": "We implemented our models in the Torch library (http://torch.ch/) 2 . To optimize our loss (Equation 5) we used stochastic gradient descent with mini-batches of size 32. During training we measure the perplexity of the summaries in the validation set and adjust our hyper-parameters, such as the learning rate, based on this number. For the decoder we experimented with both the Elman RNN and the Long-Short Term Memory (LSTM) architecture (as discussed in \u00a7 3.1). We chose hyper-parameters based on a grid search and picked the one which gave the best perplexity on the validation set. In particular we searched over the number of hidden units H of the recurrent layer, the learning rate \u03b7, the learning rate annealing schedule \u03b3 (the factor by which to decrease \u03b7 if the validation perplexity increases), and the gradient clipping threshold \u03ba. Our final Elman architecture (RAS-Elman) uses a single layer with H = 512, \u03b7 = 0.5, \u03b3 = 2, and \u03ba = 10. The LSTM model (RAS-LSTM) also has a single layer with H = 512, \u03b7 = 0.1, \u03b3 = 2, and \u03ba = 10.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Architectural Choices", |
|
"sec_num": "4.2" |
|
}, |
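
{

"text": "The sketch below shows one way to realize the learning-rate annealing and gradient clipping described above; the paper specifies the values of \u03b7, \u03b3, and \u03ba, but the exact clipping rule (we assume clipping of the global gradient norm) and the plain SGD update are our assumptions.\n\nimport numpy as np\n\ndef sgd_step(params, grads, lr, kappa=10.0):\n    # Rescale gradients so their global norm does not exceed kappa, then apply SGD.\n    norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))\n    scale = min(1.0, kappa / (norm + 1e-12))\n    for p, g in zip(params, grads):\n        p -= lr * scale * g\n\ndef anneal(lr, prev_val_ppl, curr_val_ppl, gamma=2.0):\n    # Divide the learning rate by gamma whenever validation perplexity increases.\n    return lr / gamma if curr_val_ppl > prev_val_ppl else lr",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Architectural Choices",

"sec_num": "4.2"

},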
|
{ |
|
"text": "On the Gigaword corpus we evaluate our models in terms of perplexity on a held-out set. We then pick the model with best perplexity on the held-out set and use it to compute the F1-score of ROUGE-1, ROUGE-2, and ROUGE-L on the test sets, all of which we report. For the DUC corpus however, inline with the standard, we report the recall-only ROUGE. As baseline we use the state-of-the-art attention-based system (ABS) of Rush et al. (2015) which relies on a feed-forward network decoder. Additionally, we compare to an enhanced version of their system (ABS+), which relies on a range of separate extractive summarization features that are added as log-linear features in a secondary learning step with minimum error rate training (Och, 2003) . ABS as well as other models reported in Rush et al. (2015). The RAS-LSTM performs slightly worse than RAS-Elman, most likely due to over-fitting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 439, |
|
"text": "Rush et al. (2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 741, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We attribute this to the relatively simple nature of this task which can be framed as English-to-English translation with few long-term dependencies. The ROUGE results (Table 2) show that our models comfortably outperform both ABS and ABS+ by a wide margin on all metrics. This is even the case when we rely only on very fast greedy search (k = 1), while as ABS uses a much wider beam of size k = 50; the stronger ABS+ system also uses additional extractive features which our model does not. These features cause ABS+ to copy 92% of words from the input into the summary, whereas our model copies only 74% of the words leading to more abstractive summaries. On DUC-2004 we report recall ROUGE as is customary on this dataset. The results (Table 3) show that our models are better than ABS+. However the improvements are smaller than for Gi-gaword which is likely due to two reasons: First, tokenization of DUC-2004 differs slightly from our training corpus. Second, headlines in Gigaword are much shorter than in DUC-2004. For the sake of completeness we also compare our models to the recently proposed standard Neural Machine Translation (NMT) systems. In particular, we compare to a smaller re-implementation of the attentive stacked LSTM encoder-decoder of Luong et al. (2015) . Our implementation uses two-layer LSTMs for the encoder-decoder with 500 hidden units in each layer. Tables 2 and 3 report ROUGE scores on the two data sets. From the tables we observe that the proposed RAS-Elman model is able to match the performance of the NMT model of Luong at al. (2015) . This is noteworthy because RAS-Elman is significantly simpler than the NMT model at multiple levels. First, the encoder used by RAS-Elman is extremely light-weight (attention over the convolutional representation of the input words), compared to Luong's (a 2 hidden layer LSTM). Second, the decoder used by RAS-Elman is a single layer standard (Elman) RNN as opposed to a multi-layer LSTM. In an independent work, Nallapati et. al (2016) also trained a collection of standard NMT models and report numbers in the same ballpark as RAS-Elman on both datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1014, |
|
"end": 1023, |
|
"text": "DUC-2004.", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1262, |
|
"end": 1281, |
|
"text": "Luong et al. (2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1556, |
|
"end": 1575, |
|
"text": "Luong at al. (2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1922, |
|
"end": 1929, |
|
"text": "(Elman)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 177, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1385, |
|
"end": 1399, |
|
"text": "Tables 2 and 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In order to better understand which component of the proposed architecture is responsible for the improvements, we trained the recurrent model with Rush et. al., (2015)'s ABS encoder on a subset of the Gigaword dataset. The ABS encoder, which does not have the position features, achieves a final validation perplexity of 38 compared to 29 for the proposed encoder, which uses position features as well as context information. This clearly shows the benefits of using the position feature in the proposed encoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Finally in Figure 1 we highlight anecdotal examples of summaries produced by the RAS-Elman system on the Gigaword dataset. The first two examples highlight typical improvements in the RAS model over ABS+. Generally the model produces more fluent summaries and is better able to capture the main actors of the input. For instance in Sentence 1, RAS-Elman correctly distinguishes the actions of \"pepe\" from \"ferreira\", and in Sentence 2 it identifies the correct role of the \"think tank\". The final two ex-I(1): brazilian defender pepe is out for the rest of the season with a knee injury , his porto coach jesualdo ferreira said saturday . G: football : pepe out for season A+: ferreira out for rest of season with knee injury R: brazilian defender pepe out for rest of season with knee injury I(2): economic growth in toronto will suffer this year because of sars , a think tank said friday as health authorities insisted the illness was under control in canada 's largest city . G: sars toll on toronto economy estimated at c$ # billion A+: think tank under control in canada 's largest city R: think tank says economic growth in toronto will suffer this year I(3): colin l. powell said nothing -a silence that spoke volumes to many in the white house on thursday morning . G: in meeting with former officials bush defends iraq policy A+: colin powell speaks volumes about silence in white house R: powell speaks volumes on the white house I(4): an international terror suspect who had been under a controversial loose form of house arrest is on the run , british home secretary john reid said tuesday . G: international terror suspect slips net in britain A+: reid under house arrest terror suspect on the run R: international terror suspect under house arrest Figure 1 : Example sentence summaries produced on Gigaword. I is the input, G is the true headline, A is ABS+, and R is RAS-ELMAN.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 11, |
|
"end": 19, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1763, |
|
"end": 1771, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "amples highlight typical mistakes of the models. In Sentence 3 both models take literally the figurative use of the idiom \"a silence that spoke volumes,\" and produce fluent but nonsensical summaries. In Sentence 4 the RAS model mistakes the content of a relative clause for the main verb, leading to a summary with the opposite meaning of the input. These difficult cases are somewhat rare in the Gigaword, but they highlight future challenges for obtaining human-level sentence summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We extend the state-of-the-art model for abstractive sentence summarization (Rush et al., 2015) to a recurrent neural network architecture. Our model is a simplified version of the encoder-decoder framework for machine translation . The model is trained on the Gigaword corpus to generate headlines based on the first line of each news article. We comfortably outperform the previous state-of-the-art on both Gigaword data and the DUC-2004 challenge even though our model does not rely on additional extractive features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 95, |
|
"text": "(Rush et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We remove pairs with empty titles resulting in slightly different accuracy compared to Rush et al. (2015) for their systems.2 Our code can found at www://github.com/facebook/namas", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Headline generation based on statistical translation", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Vibhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Witbrock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "318--325", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michele Banko, Vibhu O Mittal, and Michael J Witbrock. 2000. Headline generation based on statistical trans- lation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 318-325. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merrienboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Aglar G\u00fcl\u00e7ehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of EMNLP 2014", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1724--1734", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase repre- sentations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724-1734.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Global inference for sentence compression: An integer linear programming approach", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "399--429", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Clarke and Mirella Lapata. 2008. Global infer- ence for sentence compression: An integer linear pro- gramming approach. Journal of Artificial Intelligence Research, pages 399-429.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Sentence compression beyond word deletion", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "137--144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn and Mirella Lapata. 2008. Sentence com- pression beyond word deletion. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 137-144. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "An abstractive approach to sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACM Transactions on Intelligent Systems and Technology (TIST'13)", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn and Mirella Lapata. 2013. An abstrac- tive approach to sentence compression. ACM Transac- tions on Intelligent Systems and Technology (TIST'13), 4,3(41).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Finding structure in time", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Cognitive Science", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "179--211", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey L. Elman. 1990. Finding structure in time. Cog- nitive Science, 14(2):179-211.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Overcoming the lack of parallel data in sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yasemin", |
|
"middle": [], |
|
"last": "Altun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1481--1491", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression. In EMNLP, pages 1481-1491.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Sentence compression by deletion with lstms", |
|
"authors": [ |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Filippova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Alfonseca", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Carlos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Colmenares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katja Filippova, Enrique Alfonseca, Carlos A Col- menares, Lukasz Kaiser, and Oriol Vinyals. 2015. Sentence compression by deletion with lstms. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An extractive supervised two-stage method for sentence compression", |
|
"authors": [], |
|
"year": 2010, |
|
"venue": "Dimitrios Galanis and Ion Androutsopoulos", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dimitrios Galanis and Ion Androutsopoulos. 2010. An extractive supervised two-stage method for sentence compression. In Proceedings of NAACL-HLT 2010.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "English gigaword. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Graff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junbo", |
|
"middle": [], |
|
"last": "Kong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ke", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kazuaki", |
|
"middle": [], |
|
"last": "Maeda", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Long shortterm memory", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural Computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short- term memory. Neural Computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Sentence reduction for automatic text summarization", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "ANLP-00", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "703--711", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H Jing. 2000. Sentence reduction for automatic text sum- marization. In ANLP-00, pages 703-711.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Summarization beyond sentence extraction: A probabilistic approach to sentence compression", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Artificial Intelligence", |
|
"volume": "139", |
|
"issue": "1", |
|
"pages": "91--107", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Daniel Marcu. 2002. Summariza- tion beyond sentence extraction: A probabilistic ap- proach to sentence compression. Artificial Intelli- gence, 139(1):91-107.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Rouge: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Text Summarization Branches Out: Proceedings of the ACL-04 Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--81", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Effective approaches to attentionbased neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1412--1421", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Por- tugal, September. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Discriminative sentence compression with soft syntactic evidence", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "EACL-06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "297--304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R McDonald. 2006. Discriminative sentence compres- sion with soft syntactic evidence. In EACL-06, pages 297-304.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Sequence-to-sequence rnns for text summarization", |
|
"authors": [ |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Xiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Bowen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ramesh Nallapati, Bing Xiang, and Zhou Bowen. 2016. Sequence-to-sequence rnns for text summarization. In http://arxiv.org/abs/1602.06023.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Paraphratic sentence compression with a character-based metric: Tightening without deletion", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation (MTTG'11)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Chris Callison-Burch, Juri Ganitke- vitch, and Benjamin Van Durme. 2011. Paraphratic sentence compression with a character-based metric: Tightening without deletion. In Proceedings of the Workshop on Monolingual Text-To-Text Generation (MTTG'11).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Annotated gigaword", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceed- ings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extrac- tion, pages 95-100. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics-Volume 1, pages 160-167. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Duc in context. Information Processing & Management", |
|
"authors": [ |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Over", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoa", |
|
"middle": [], |
|
"last": "Dang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donna", |
|
"middle": [], |
|
"last": "Harman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "1506--1520", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paul Over, Hoa Dang, and Donna Harman. 2007. Duc in context. Information Processing & Management, 43(6):1506-1520.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A neural attention model for abstractive sentence summarization", |
|
"authors": [ |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Alexander M Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Generation with quasi-synchronous grammar", |
|
"authors": [ |
|
{ |
|
"first": "Kristian", |
|
"middle": [], |
|
"last": "Woodsend", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "513--523", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristian Woodsend, Yansong Feng, and Mirella Lapata. 2010. Generation with quasi-synchronous grammar. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 513- 523. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Sentence simplification by monolingual machine translation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Sander Wubben", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1015--1024", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sander Wubben, Antal Van Den Bosch, and Emiel Krah- mer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th An- nual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 1015-1024. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Bbn/umd at duc-2004: Topiary", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Zajic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the HLT-NAACL 2004 Document Understanding Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "112--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Zajic, Bonnie Dorr, and Richard Schwartz. 2004. Bbn/umd at duc-2004: Topiary. In Proceedings of the HLT-NAACL 2004 Document Understanding Work- shop, Boston, pages 112-119.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>shows that both our RAS-Elman and</td></tr><tr><td>RAS-LSTM models achieve lower perplexity than</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">RG refers to ROUGE. Rush et al. (2015) previously reported</td></tr><tr><td colspan=\"2\">ROUGE recall, while as we use the more balanced F-measure.</td></tr><tr><td/><td>RG-1 RG-2 RG-L</td></tr><tr><td>ABS</td><td>26.55 7.06 22.05</td></tr><tr><td>ABS+</td><td>28.18 8.49 23.81</td></tr><tr><td>RAS-Elman (k = 1)</td><td>29.13 7.62 23.92</td></tr><tr><td colspan=\"2\">RAS-Elman (k = 10) 28.97 8.26 24.06</td></tr><tr><td>RAS-LSTM (k = 1)</td><td>26.90 6.57 22.12</td></tr><tr><td colspan=\"2\">RAS-LSTM (k = 10) 27.41 7.69 23.06</td></tr><tr><td>Luong-NMT</td><td>28.55 8.79 24.43</td></tr></table>", |
|
"html": null, |
|
"text": "F1 ROUGE scores on the Gigaword test set. ABS and ABS+ are the systems of Rush et al. 2015. k refers to the size of the beam for generation; k = 1 implies greedy generation.", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "ROUGE results (recall-only) on the DUC-2004 test sets. ABS and ABS+ are the systems of Rush et al. 2015. k refers to the size of the beam for generation; k = 1 implies greedy generation. RG refers to ROUGE.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |