|
{ |
|
"paper_id": "D12-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:23:25.930211Z" |
|
}, |
|
"title": "Framework of Automatic Text Summarization Using Reinforcement Learning", |
|
"authors": [ |
|
{ |
|
"first": "Seonggi", |
|
"middle": [], |
|
"last": "Ryang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Takeshi", |
|
"middle": [], |
|
"last": "Abekawa", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a new approach to the problem of automatic text summarization called Automatic Summarization using Reinforcement Learning (ASRL) in this paper, which models the process of constructing a summary within the framework of reinforcement learning and attempts to optimize the given score function with the given feature representation of a summary. We demonstrate that the method of reinforcement learning can be adapted to automatic summarization problems naturally and simply, and other summarizing techniques, such as sentence compression, can be easily adapted as actions of the framework. The experimental results indicated ASRL was superior to the best performing method in DUC2004 and comparable to the state of the art ILP-style method, in terms of ROUGE scores. The results also revealed ASRL can search for sub-optimal solutions efficiently under conditions for effectively selecting features and the score function.", |
|
"pdf_parse": { |
|
"paper_id": "D12-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a new approach to the problem of automatic text summarization called Automatic Summarization using Reinforcement Learning (ASRL) in this paper, which models the process of constructing a summary within the framework of reinforcement learning and attempts to optimize the given score function with the given feature representation of a summary. We demonstrate that the method of reinforcement learning can be adapted to automatic summarization problems naturally and simply, and other summarizing techniques, such as sentence compression, can be easily adapted as actions of the framework. The experimental results indicated ASRL was superior to the best performing method in DUC2004 and comparable to the state of the art ILP-style method, in terms of ROUGE scores. The results also revealed ASRL can search for sub-optimal solutions efficiently under conditions for effectively selecting features and the score function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Automatic text summarization aims to automatically produce a short and well-organized summary of single or multiple documents (Mani, 2001) . Automatic summarization, especially multi-document summarization, has been an increasingly important task in recent years, because of the exponential explosion of available information. The brief summary that the summarization system produces allows readers to quickly and easily understand the content of original documents without having to read each individ-ual document, and it should be helpful for dealing with enormous amounts of information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 138, |
|
"text": "(Mani, 2001)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The extractive approach to automatic summarization is a popular and well-known approach in this field, which creates a summary by directly selecting some textual units (e.g., words and sentences) from the original documents, because it is difficult to genuinely evaluate and guarantee the linguistic quality of the produced summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the most well-known extractive approaches is maximal marginal relevance (MMR), which scores each textual unit and extracts the unit that has the highest score in terms of the MMR criteria (Goldstein et al., 2000) . Greedy MMR-style algorithms are widely used; however, they cannot take into account the whole quality of the summary due to their greediness, although a summary should convey all the information in given documents. Global inference algorithms for the extractive approach have been researched widely in recent years (Filatova and Hatzivassiloglou, 2004; McDonald, 2007; Takamura and Okumura, 2009) to consider whether the summary is \"good\" as a whole. These algorithms formulate the problem as integer linear programming (ILP) to optimize the score: however, as ILP is non-deterministic polynomialtime hard (NP-hard), the time complexity is very large. Consequently, we need some more efficient algorithm for calculations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 219, |
|
"text": "(Goldstein et al., 2000)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 574, |
|
"text": "(Filatova and Hatzivassiloglou, 2004;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 590, |
|
"text": "McDonald, 2007;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 591, |
|
"end": 618, |
|
"text": "Takamura and Okumura, 2009)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a new approach to the problem of automatic text summarization called Automatic Summarization using Reinforcement Learning (ASRL), which models the process of construction of a summary within the framework of reinforcement learn-ing and attempts to optimize the given score function with the given feature representation of a summary. We demonstrate that the method of reinforcement learning can be adapted to problems with automatic summarization naturally and simply, and other summarizing techniques, such as sentence compression, can be easily adapted as actions of the framework, which should be helpful to enhance the quality of the summary that is produced. This is the first paper utilizing reinforcement learning for problems with automatic summarization of text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We evaluated ASRL with the DUC2004 summarization task 2, and the experimental results revealed ASRL is superior to the best method of performance in DUC2004 and comparable with the state of the art ILP-style method, based on maximum coverage with the knapsack constraint problem, in terms of ROUGE scores with experimental settings. We also evaluated ASRL in terms of optimality and execution time. The experimental results indicated ASRL can search the state space efficiently for some suboptimal solutions under the condition of effectively selecting features and the score function, and produce a summary whose score denotes the expectation of the score of the same features' states. The evaluation of the quality of a produced summary only depends on the given score function, and therefore it is easy to adapt the new method of evaluation without having to modify the structure of the framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We first focus on the extractive approach, which is directly used to produce a summary by extracting some textual units, by avoiding the difficulty of having to consider the genuine linguistic quality of a summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The given document (or documents) in extractive summarization approaches is reduced to the set of textual units:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "D = {x 1 , x 2 , \u2022 \u2022 \u2022 , x n },", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where n is the size of the set, and x i denotes individual textual units. Note that any textual unit is permitted, such as character, word, sentence, phrase, and conceptual unit. If we determine a sentence is a textual unit to be extracted, the formulated problem is a problem of extracting sentences from the source document, which is one of the most popular settings for sum-marization tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Next, we define the score function, score(S), for any subset of the document: S \u2282 D. Subset S is one of the summaries of the given document. The aim of this summarization problem is to find the summary that maximizes this function when the score function is given. The score function is typically defined by taking into consideration the tradeoff between relevance and redundancy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Then, we define length function L(S), which indicates the length of summary S. The length is also arbitrary, which can be based on the character, word, and sentence. We assume the limitation of summary length K is given in summarization tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, we define the extractive approach of the automatic summarization problem as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "S * = arg max S\u2282D score(S) (1) s.t. L(S) \u2264 K.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation of Extractive Approach", |
|
"sec_num": "2" |
|
}, |
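
{

"text": "The following is a minimal illustrative sketch (not part of the original paper) of formulation (1): given a score function and a length limit K, it enumerates subsets of D and keeps the best feasible one. The helper callables score_fn and length_fn are hypothetical placeholders, and the exponential enumeration is exactly the intractability that motivates Section 3.\n\nfrom itertools import combinations\n\ndef best_summary_bruteforce(units, score_fn, length_fn, K):\n    # Exhaustive search over all subsets S of D, as in Eq. (1):\n    # S* = argmax_{S subset D} score(S)  s.t.  L(S) <= K.\n    best_S, best_score = [], float('-inf')\n    for r in range(len(units) + 1):\n        for S in combinations(units, r):\n            if length_fn(S) <= K and score_fn(S) > best_score:\n                best_S, best_score = list(S), score_fn(S)\n    return best_S, best_score",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formulation of Extractive Approach",

"sec_num": "2"

},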
|
{ |
|
"text": "We can regard the extractive approach as a search problem. It is extremely difficult to solve this search problem because the final result of evaluation given by the given score function is not available until it finishes, and we therefore need to try all combinations of textual units. Consequently, the score function, which denotes some criterion for the quality of a summary, tends to be determined so that the function can be decomposed to components and it is solved with global inference algorithms, such as ILP. However, both decomposing the score function properly and utilizing the evaluation of half-way process of searches are generally difficult. For example, let us assume that we design the score function by using some complex semantic considerations to take into account the readability of a summary, and the score is efficiently calculated if the whole summary is given. Then, formulating the problem as a global inference problem and solving it with methods of integer linear programming might generally be difficult, because of the complex composition of the score function, despite the ease with which the whole summary is evaluated. The readability score might be based on extremely complex calculations of dependency relations, or a great deal of external knowledge the summarizer cannot know merely from the source documents. In fact, it is ideal that we can only directly utilize the score function, in the sense that we do not have to consider the decomposed form of the given score function. We need to consider the problem with automatic summarization to be the same as that with reinforcement learning to handle these problems. Reinforcement learning is one of the solutions to three problems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 The learning of the agent only depends on the reward provided by the environment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Furthermore, the reward is delayed, in the sense that the agent cannot immediately know the actual evaluation of the executed action.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 The agent only estimates the value of the state with the information on rewards, without knowledge of the actual form of the score function, to maximize future rewards.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We suggest the formulation of the problem as we have just described will enable us to freely design the score function without limitations and expand the capabilities of automatic summarization.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "4 Models of Extractive Approach for Reinforcement Learning", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Reinforcement learning is a powerful method of solving planning problems, especially problems formulated as Markov decision processes (MDPs) (Sutton and Barto, 1998) . The agent of reinforcement learning repeats three steps until terminated at each episode in the learning process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 165, |
|
"text": "Barto, 1998)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reinforcement Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1. The agent observes current state s from the environment, contained in state space S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reinforcement Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "2. Next, it determines and executes next action a according to current policy \u03c0. Action a is contained in the action space limited by the current state: A(s), which is a subset of whole action space A = \u222a s\u2208S A(s). Policy \u03c0 is the strategy for selecting action, represented as a conditional distribution of actions: p(a|s).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reinforcement Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "3. It then observes next state s \u2032 and receives reward r from the environment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reinforcement Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The aim of reinforcement learning is to find optimal policy \u03c0 * only with information on sample trajectories and to reward the experienced agent. We describe how to adapt the extractive approach to the problem of reinforcement learning in the sections that follow.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reinforcement Learning", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "A state denotes a summary. We represent state s as a tuple of summary S (a set of textual units) and additional state variables: s = (S, A, f ). We assume s has the history of actions A that the agent executed to achieve this state. Additionally, s has the binary state variable, f \u2208 {0, 1}, which denotes whether s is a terminal state or not. Initial state s 0 is (\u2205, \u2205, 0).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We assume the d-dimensional feature representation of state s: \u03d5(s) \u2208 R d , which only depends on the feature of summary \u03d5 \u2032 (S) \u2208 R d\u22121 . Given \u03d5 \u2032 (S), we define the features as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03d5(s) = { (\u03d5 \u2032 (S), 0) T (L(S) \u2264 K) (0, 1) T (K < L(S)) .", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "State", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "This definition denotes that summaries that violate the length limitation are shrunk to a single feature, (0, 1) T , which means it is not a summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "4.2" |
|
}, |
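
{

"text": "A minimal sketch (not from the original paper) of the state-feature mapping of Eq. (2), assuming a hypothetical helper summary_features(S) that returns the (d-1)-dimensional \u03d5\u2032(S) as a NumPy array:\n\nimport numpy as np\n\ndef phi(state, summary_features, length_fn, K):\n    # state is (S, A, f); Eq. (2) maps it to a d-dimensional feature vector.\n    S, _, _ = state\n    if length_fn(S) <= K:\n        # (phi'(S), 0)^T: the summary features plus a 0 'violation' flag.\n        return np.append(summary_features(S), 0.0)\n    # (0, 1)^T: every over-length summary collapses to one feature vector.\n    return np.append(np.zeros(summary_features(S).shape[0]), 1.0)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "State",

"sec_num": "4.2"

},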
|
{ |
|
"text": "Note the features of the state only depend on the features of the summary, not on the executed actions to achieve the state. Unlike naive search methods, this property has the potential for different states to be represented as the same vector, which has the same features. The agent, however, should search as many possible states as it can. Therefore, the generalization function of the feature representation is of utmost importance. The accurate selection of features contributes to reducing the search space and provides efficient learning as will be discussed later.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "An action denotes a transition operation that produces a new state from a current state. We assumed all actions were deterministic in this study. We define insert i (1 \u2264 i \u2264 n) actions, each of which inserts textual unit x i to the current state unless the state is terminated, as described in the following di-agram:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "s t a t s t+1 \uf8eb \uf8ed S t A t 0 \uf8f6 \uf8f8 insert i \u2212\u2212\u2212\u2212\u2192 \uf8eb \uf8ed S t \u222a {x i } A t \u222a {insert i } 0 \uf8f6 \uf8f8 . (3)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In addition to insertion actions, we define finish that terminates the current episode in reinforcement learning:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "s t a t s t+1 \uf8eb \uf8ed S t A t 0 \uf8f6 \uf8f8 finish \u2212 \u2212\u2212\u2212 \u2192 \uf8eb \uf8ed S t A t \u222a {finish} 1 \uf8f6 \uf8f8 (4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Note that f t = 1 means state s t is a terminal state. Then, the whole action set, A, is defined by insert i and finish:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A = {insert 1 , insert 2 , \u2022 \u2022 \u2022 , insert n , finish}. (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We can calculate the available actions limited by state s t :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A(s t ) = { A\\A t (L(S t ) \u2264 K) {finish} (K < L(S t )) .", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "This definition means that the agent may execute one of the actions that have not yet been executed in this episode, and it has no choice but to finish if the summary of the current state already violates length limitations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "4.3" |
|
}, |
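
{

"text": "A minimal sketch of Eq. (6) (again illustrative, not the authors' code): the available actions are the not-yet-executed actions while the length limit holds, and only finish once it is violated:\n\ndef available_actions(state, all_actions, length_fn, K):\n    # Eq. (6): A(s_t) = A minus A_t if L(S_t) <= K, otherwise {finish}.\n    S, executed, _ = state\n    if length_fn(S) <= K:\n        return [a for a in all_actions if a not in executed]\n    return ['finish']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Action",

"sec_num": "4.3"

},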
|
{ |
|
"text": "The agent receives a reward from the environment as some kind of criterion of how good the action the agent executed was. If the current state is s t , the agent executes a t , and the state makes a transition into s t+1 ; then, the agent receives the reward, r t+1 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reward", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "r t+1 = \uf8f1 \uf8f2 \uf8f3 score(S t ) (a t = finish, L(S t ) \u2264 K) \u2212R penalty (a t = finish, K < L(S t )) 0 (otherwise) ,", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Reward", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where R penalty > 0. The agent can receive the score awarded by the given score function if and only if the executed action is finish and the summary length is appropriate. If the summary length is inappropriate but the executed action is finish, the environment awards a penalty to the agent. The most important point of this definition is that the agent receives nothing under the condition where the next state is not terminated. In this sense, the reward is delayed. Due to this definition, maximizing the expectation of future rewards is equivalent to maximizing the given score function, and we do not need to consider the decomposed form of the score function, i.e., we only need to consider the final score of the whole summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reward", |
|
"sec_num": "4.4" |
|
}, |
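
{

"text": "A minimal sketch of the delayed reward of Eq. (7); score_fn, length_fn, and the action encoding are illustrative assumptions, while R_penalty and K are the quantities defined above:\n\ndef reward(state, action, score_fn, length_fn, K, R_penalty=1.0):\n    # Eq. (7): a non-zero reward is given only when the episode finishes.\n    S, _, _ = state\n    if action != 'finish':\n        return 0.0            # delayed reward: nothing during the episode\n    if length_fn(S) <= K:\n        return score_fn(S)    # valid summary: its score\n    return -R_penalty         # over-length summary: penalty",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reward",

"sec_num": "4.4"

},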
|
{ |
|
"text": "Our aim is to find the optimal policy. This is achieved by obtaining the optimal state value function, V * (s), because if we obtain this, the greedy policy is optimal, which determines the action so as to maximize the state value after the transition occurred. Therefore, our aim is equivalent to finding V * (s). Let us try to estimate the state value function with parameter \u03b8 \u2208 R d :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "V (s) = \u03b8 T \u03d5(s).", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We can also represent and estimate the action value function, Q(s, a), by using V (s):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Q(s, a) = r + \u03b3V (s \u2032 ),", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "where the execution of a causes the state transition from s to s \u2032 and the agent receives reward r, and \u03b3(0 \u2264 \u03b3 \u2264 1) is the discount rate. Note that all actions are deterministic in this study. By using these value functions, we define the policy as the conditional distribution, p(a|s; \u03b8, \u03c4 ), which is parameterized by \u03b8 and a temperature parameter \u03c4 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(a|s; \u03b8, \u03c4 ) = e Q(s,a)/\u03c4 \u2211 a \u2032 e Q(s,a \u2032 )/\u03c4 .", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Temperature \u03c4 decreases as learning progresses, which causes the policy to be greedier. This softmax selection strategy is called Boltzmann selection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Value Function Approximation", |
|
"sec_num": "4.5" |
|
}, |
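
{

"text": "A minimal sketch of Boltzmann selection (Eq. (10)); the Q values are assumed to have been computed as r + \u03b3V(s\u2032) for each available action, and \u03c4 is the temperature. This is standard softmax sampling, not code from the paper:\n\nimport numpy as np\n\ndef boltzmann_select(q_values, tau):\n    # Eq. (10): p(a|s) is proportional to exp(Q(s, a) / tau).\n    q = np.asarray(q_values, dtype=float)\n    z = (q - q.max()) / tau               # subtract max for numerical stability\n    probs = np.exp(z) / np.exp(z).sum()\n    return int(np.random.choice(len(q), p=probs)), probs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Value Function Approximation",

"sec_num": "4.5"

},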
|
{ |
|
"text": "The goal of learning is to estimate \u03b8. We use the TD (\u03bb) algorithm with function approximation (Sutton and Barto, 1998) . Algorithm 1 represents the whole system of our method, called Automatic Summarization using Reinforcement Learning (ASRL) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 119, |
|
"text": "Barto, 1998)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Algorithm", |
|
"sec_num": "4.6" |
|
}, |
|
{

"text": "[Algorithm 1 (ASRL). Input: document D = \\{x_1, x_2, \\dots, x_n\\} and score function score(S). Line 1 initializes \\theta = 0. Lines 2-13 run N learning episodes of TD(\\lambda): each episode starts from the initial state s = (\\emptyset, \\emptyset, 0) with the eligibility trace e = 0 and, until the state becomes terminal, selects an action a by the Boltzmann policy, executes it via (s', r) \\leftarrow execute(s, a), computes the TD error \\delta \\leftarrow r + \\gamma \\theta^T \\phi(s') - \\theta^T \\phi(s), updates the eligibility trace e and the parameter \\theta, and sets s \\leftarrow s'. Lines 14-19 then greedily roll out the learned value function (Line 18: s \\leftarrow s'; Line 19: end while), and Line 20 returns the summary of the final state s.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Algorithm",

"sec_num": "4.6"

},
|
{ |
|
"text": "s \u2190 s \u2032 19: end while 20: return the summary of s in this paper. N is the number of learning episodes, and e(\u2208 R d ) and \u03bb(0 \u2264 \u03bb \u2264 1) correspond to the eligibility trace and the trace decay parameter. The eligibility trace, e, conveys all information on the features of states that the agent previously experienced, with previously decaying influences of features due to decay parameter \u03bb and discount rate \u03b3 (Line 9).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Algorithm", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "Line 1 initializes parameter \u03b8 to start up its learning. The following procedures from Lines 2 to 13 learn \u03b8 with the TD (\u03bb) algorithm, by using information on actual interactions with the environment. Learning rate \u03b1 k and temperature parameter \u03c4 k decay as the learning episode progresses. The best summary with the obtained policy is calculated in steps from Lines 14 to 19. If the agent can estimate \u03b8 properly, greedy output is the optimal solution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Algorithm", |
|
"sec_num": "4.6" |
|
}, |
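
{

"text": "As a compact, hedged sketch of the TD(\u03bb) loop of Algorithm 1 under the linear value function V(s) = \u03b8^T\u03d5(s): phi, actions_of, and execute stand for the illustrative helpers sketched in the previous sections (the feature mapping, Eq. (6), and the deterministic transition with its reward), and the schedules follow Section 6.1. This is an approximation of the algorithm, not the authors' implementation; the final greedy summary construction (Lines 14-19) is omitted.\n\nimport numpy as np\n\ndef asrl_train(phi, actions_of, execute, n_episodes=300, gamma=1.0, lam=1.0):\n    # phi(s) -> feature vector; actions_of(s) -> available actions;\n    # execute(s, a) -> (next_state, reward); actions are deterministic.\n    s0 = (frozenset(), frozenset(), 0)\n    theta = np.zeros(phi(s0).shape[0])              # Line 1: theta = 0\n    for k in range(1, n_episodes + 1):\n        alpha = 0.001 * 101.0 / (100.0 + k ** 1.1)  # decaying learning rate\n        tau = 1.0 * 0.987 ** (k - 1)                # decaying temperature\n        s, e = s0, np.zeros_like(theta)             # initial state and trace\n        while s[2] == 0:                            # until finish is executed\n            acts = list(actions_of(s))\n            q = np.array([r + gamma * theta @ phi(sn)        # Eq. (9)\n                          for sn, r in (execute(s, a) for a in acts)])\n            p = np.exp((q - q.max()) / tau)\n            p = p / p.sum()                         # Boltzmann policy, Eq. (10)\n            a = acts[np.random.choice(len(acts), p=p)]\n            s_next, r = execute(s, a)\n            delta = r + gamma * theta @ phi(s_next) - theta @ phi(s)  # TD error\n            e = gamma * lam * e + phi(s)            # eligibility trace update\n            theta = theta + alpha * delta * e       # TD(lambda) update\n            s = s_next\n    return theta",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Learning Algorithm",

"sec_num": "4.6"

},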
|
{ |
|
"text": "We formulated the extractive approach as a problem with reinforcement learning in the previous section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models of Combined Approach for Reinforcement Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In fact, we can also formulate a more general model of summarization, since evaluation only depends on the final state and it is not actually very important to regard the given documents as a set of textual units contained in the original documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models of Combined Approach for Reinforcement Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We explain how to take into account other methods within the ASRL framework by modifying the models in this section, with an example of sentence compression. We assume that we have a method of sentence compression, comp(x), and that a textual unit to be extracted is a sentence. What we have to do is to only simply modify the definitions of the state and action. Note that this is just one example of the combined method. Even other summarization systems can be similarly adapted to ASRL.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models of Combined Approach for Reinforcement Learning", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We do not want to execute sentence compression twice, so we have to modify the state variables to convey the information: s = (S, A, c, f ) , where c \u2208 {0, 1}, and S, A, and f are the same definitions as previously described.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 139, |
|
"text": "(S, A, c, f )", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "State", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We add deterministic action comp to A, which produces the new summary constructed by compressing the last inserted sentence of the current summary:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s t a t s t+1 \uf8eb \uf8ec \uf8ec \uf8ed S t A t 0 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 comp \u2212 \u2212\u2212\u2212 \u2192 \uf8eb \uf8ec \uf8ec \uf8ed S t \\{x c } \u222a {comp(x c )} A t \u222a {comp} 1 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 ,", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "where x c is the last sentence that is inserted into S t . Next, we modify insert i and finish:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s t a t s t+1 \uf8eb \uf8ec \uf8ec \uf8ed S t A t c t 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 insert i \u2212\u2212\u2212\u2212\u2192 \uf8eb \uf8ec \uf8ec \uf8ed S t \u222a {x i } A t \u222a {insert i } 0 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 ,", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "s t a t s t+1 \uf8eb \uf8ec \uf8ec \uf8ed S t A t c t 0 \uf8f6 \uf8f7 \uf8f7 \uf8f8 finish \u2212 \u2212\u2212\u2212 \u2192 \uf8eb \uf8ec \uf8ec \uf8ed S t A t \u222a {finish} c t 1 \uf8f6 \uf8f7 \uf8f7 \uf8f8 . (13)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Note comp \u2208 A(s t ) may be executed if and only if c t = 0. insert i resets c to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Action", |
|
"sec_num": "5.2" |
|
}, |
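
{

"text": "A minimal sketch of the comp action of Eq. (11), assuming a hypothetical sentence compressor comp(x) and representing S as an ordered list so that the last inserted sentence x_c is known; again an illustration, not the authors' code:\n\ndef apply_comp(state, comp):\n    # Eq. (11): replace the last inserted sentence x_c with comp(x_c)\n    # and set c = 1 so that compression cannot be applied again\n    # until another sentence is inserted.\n    S, A, c, f = state\n    assert c == 0 and f == 0, 'comp is only available when c = 0 in a live episode'\n    x_c = S[-1]                          # the last inserted sentence\n    return (S[:-1] + [comp(x_c)], A + ['comp'], 1, f)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Action",

"sec_num": "5.2"

},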
|
{ |
|
"text": "We conducted three experiments in this study. First, we evaluated our method with ROUGE metrics (Lin, 2004) , in terms of ROUGE-1, ROUGE-2, and ROUGE-L. Second, we conducted an experiment on measuring the optimization capabilities of ASRL, with the scores we obtained and the execution time. Third, we evaluated ASRL taking into consideration sentence compression by using a very naive method, in terms of ROUGE-1, ROUGE-2, and ROUGE-3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 107, |
|
"text": "(Lin, 2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We used sentences as textual units for the extractive approach in this research. Each sentence and document were represented as a bag-of-words vector with tf*idf values, with stopwords removed. All tokens were stemmed by using Porter's stemmer (Porter, 1980) . We experimented with our proposed method on the dataset of DUC2004 task2. This is a multidocument summarization task that contains 50 document clusters, each of which has 10 documents. We set up the length limitation to 665 bytes, used in the evaluation of DUC2004.", |
|
"cite_spans": [ |
|
{ |
|
"start": 244, |
|
"end": 258, |
|
"text": "(Porter, 1980)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We set up the parameters of ASRL where the number of episodes N = 300, the training rate \u03b1 k = 0.001 \u2022 101/(100 + k 1.1 ), and the temperature \u03c4 k = 1.0 \u2022 0.987 k\u22121 where k was the number of episodes that decayed as learning progressed. Both discount rate \u03b3 and trace decay parameter \u03bb were fixed to 1 for episodic tasks. The penalty, R penalty , was fixed to 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We used the following score function in this study:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "score(S) = \u2211 x i \u2208S \u03bb s Rel(x i ) \u2212 \u2211 x i ,x j \u2208S,i<j (1 \u2212 \u03bb s )Red(x i , x j ), (14)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Rel(x i ) = Sim(x i , D) + P os(x i ) \u22121 (15) Red(x i , x j ) = Sim(x i , x j ). (16)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u03bb s is the parameter for the trade-off between relevance and redundancy, Sim(x i , D) and Sim(x i , x j ) correspond to the cosine similarities between sentence x i and the sentence set of the given original documents D, and between sentence x i and sentence x j . P os(x i ) is the position of the occurrence of x i when we index sentences in each document from top to bottom with one origin. This score function was determined by reference to McDonald (2007) . We set \u03bb s = 0.9 in this experiment. We designed \u03d5 \u2032 (S), i.e., the vector representation of a summary, to adapt it to the summarization problem as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 460, |
|
"text": "McDonald (2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 85, |
|
"text": "Sim(x i , D)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
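
{

"text": "A minimal sketch of the score function of Eqs. (14)-(16), assuming that vec(x) returns the tf*idf vector of sentence x, doc_vec is the tf*idf vector of the whole document set D, and pos(x) is the one-origin position of x in its document; these helper names are assumptions for illustration:\n\nimport numpy as np\n\ndef cosine(u, v):\n    denom = np.linalg.norm(u) * np.linalg.norm(v)\n    return float(u @ v) / denom if denom > 0 else 0.0\n\ndef score(summary, vec, doc_vec, pos, lam_s=0.9):\n    # Eq. (14): summed relevance minus summed pairwise redundancy.\n    rel = sum(lam_s * (cosine(vec(x), doc_vec) + 1.0 / pos(x))      # Eq. (15)\n              for x in summary)\n    red = sum((1.0 - lam_s) * cosine(vec(xi), vec(xj))              # Eq. (16)\n              for i, xi in enumerate(summary)\n              for xj in summary[i + 1:])\n    return rel - red",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Settings",

"sec_num": "6.1"

},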
|
{ |
|
"text": "\u2022 Coverage of important words: The elements are the top 100 words in terms of the tf*idf of the given document with binary representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Coverage ratio: This is calculated by counting up the number of top 100 elements included in the summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Redundancy ratio: This is calculated by counting up the number of elements that excessively cover the top 100 elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Length ratio: This is the ratio between the length of the summary and length limitation K.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2022 Position: This feature takes into consideration the position of sentence occurrences. It is calculated with", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "\u2211 x\u2208S P os(x) \u22121 . Consequently, \u03d5 \u2032 (S) is a 104-dimensional vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
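
{

"text": "A minimal sketch of the 104-dimensional \u03d5\u2032(S) described in the list above (100 binary coverage indicators plus the coverage ratio, redundancy ratio, length ratio, and position features). The helpers words_of, length_fn, and pos, and the exact counting of redundant coverage, are illustrative assumptions:\n\nimport numpy as np\nfrom collections import Counter\n\ndef summary_features(summary, top_words, words_of, length_fn, pos, K):\n    # summary: list of sentences; top_words: the 100 highest tf*idf words.\n    counts = Counter(w for x in summary for w in words_of(x) if w in top_words)\n    coverage = np.array([1.0 if w in counts else 0.0 for w in top_words])\n    coverage_ratio = coverage.sum() / len(top_words)\n    redundancy_ratio = sum(c - 1 for c in counts.values()) / len(top_words)\n    length_ratio = length_fn(summary) / float(K)\n    position = sum(1.0 / pos(x) for x in summary)\n    extras = np.array([coverage_ratio, redundancy_ratio, length_ratio, position])\n    return np.concatenate([coverage, extras])    # 100 + 4 = 104 dimensions",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Settings",

"sec_num": "6.1"

},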
|
{ |
|
"text": "We executed ASRL 10 times with the settings previously described and used all the results for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We used the dataset of DUC2003, which is a similar task that contains 30 document clusters and each cluster had 10 documents, to determine \u03c4 k and \u03bb s . We determined the parameters so that they would converge properly and become close to the optimal solutions calculated by ILP, under the conditions that the described feature representation and the score function were given. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Settings", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We compared ASRL with four other conventional methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 GREEDY: This method is a simple greedy algorithm, which repeats the selection of the sentence with the highest score of the remaining sentences by using an MMR-like method of scoring as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x = arg max x\u2208D\\S [\u03bb s Rel(x) \u2212(1 \u2212 \u03bb s ) max x i \u2208S Red(x, x i )],", |
|
"eq_num": "(17)" |
|
} |
|
], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "where S is the current summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 ILP: This indicates the method proposed by McDonald (2007) for maximizing the score function (14) with integer linear programming.", |
|
"cite_spans": [ |
|
{ |
|
"start": 45, |
|
"end": 60, |
|
"text": "McDonald (2007)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 PEER65: This is the best performing system in task 2 of the DUC2004 competition in terms of ROUGE-1 proposed by Conroy et al. (2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 134, |
|
"text": "Conroy et al. (2004)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 MCKP: This method was proposed by Takamura and Okamura (2009) . MCKP defines an automatic summarization problem as a maximum coverage problem with a knapsack constraint, which uses conceptual units (Filatova and Hatzivassiloglou, 2004) , and composes the meaning of sentences, as textual units and attempts to cover as many units as possible under the knapsack constraint.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 63, |
|
"text": "Takamura and Okamura (2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 237, |
|
"text": "(Filatova and Hatzivassiloglou, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6.2" |
|
}, |
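
{

"text": "A minimal sketch of the GREEDY baseline of Eq. (17), reusing illustrative rel(x) and red(x, y) callables for the quantities of Eqs. (15) and (16); the handling of the length limit (skipping sentences that no longer fit) is an assumption, since the baseline description above does not specify it:\n\ndef greedy_mmr(units, rel, red, length_fn, K, lam_s=0.9):\n    # Repeatedly add the remaining sentence with the highest MMR-style\n    # score (Eq. (17)) as long as it fits within the length limit K.\n    summary, remaining = [], list(units)\n    while remaining:\n        def mmr(x):\n            penalty = max((red(x, y) for y in summary), default=0.0)\n            return lam_s * rel(x) - (1.0 - lam_s) * penalty\n        best = max(remaining, key=mmr)\n        remaining.remove(best)\n        if length_fn(summary + [best]) <= K:\n            summary.append(best)\n    return summary",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "6.2"

},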
|
{ |
|
"text": "We evaluated our method of ASRL with ROUGE, in terms of ROUGE-1, ROUGE-2, and ROUGE-L. Table 2 : Results of ROGUE evaluation for each ASRL peer of 10 results in DUC2004. ASRL did not converge with stable solution with these experimental settings because of property of randomness.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 94, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation with ROUGE", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The experimental results are summarized in Tables 1 and 2. Table 1 lists the results for the comparison and Table 2 lists all the results for ASRL peers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 117, |
|
"text": "Tables 1 and 2. Table 1 lists the results for the comparison and Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation with ROUGE", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "The results imply ASRL is superior to PEER65, ILP, and GREEDY, and comparable to MCKP with these experimental settings in terms of ROUGE metrics. Note that ASRL is a kind of approximate method, because actions are selected probabilistically and the method of reinforcement learning occasionally converges with some sub-optimal solution. This can be expected from Table 2 , which indicates the results vary although each ASRL solution converged with some solution. However, in this experiment, ASRL achieved higher ROUGE scores than ILP, which achieved optimal solutions. This seems to have been caused by the properties of the features, which we will discuss later. It seems this feature representation is useful for efficiently searching the feature space. The method of mapping a state to features is, however, approximate in the sense that some states will shrink to the same feature vector, and ASRL therefore has no tendency to converge with some stable solution.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 370, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation with ROUGE", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Since we proposed our method as an approach to approximate optimization, there was the possibility of convergence with some sub-optimal solution as previously discussed. We also evaluated our approach from the point of view of the obtained scores and the execution time to confirm whether our method had optimization capabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Optimization Capabilities", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "The experimental results are plotted in Figures 1 and 2. Figure 1 plots the average for the rewards (i.e., scores) that the agent obtained for each episode. The horizontal line for ILP is the average for the optimal scores of (14). The score in ASRL increases as the number of episodes increases, and overtakes the score of GREEDY at some episode. The agent attempts to come close to the optimal score line of ILP but seems to fail, and finally converges to some local optimal solution. We should increase the number of episodes, adjust parameters \u03b1 and \u03c4 , and select more appropriate features for the state to improve the optimization capabilities of ASRL. Figure 2 plots the execution time for each peer. The horizontal axis is the number of textual units, i.e., the number of sentences in this experiment. The vertical axis is the execution time taken by the task. The plots of ASRL and ILP fit a linear function for the former and an exponential function for the latter. The experimental results indicate that while the execution time for ILP tends to increase exponentially, that for ASRL increases linearly. The time complexity of ASRL is linear with respect to the number of actions because the agent has to select the next action from the available actions for each episode, whose time complexity is naively O(|A|). As insert i actions are dominant in the extractive approach, the execution time increases linearly with respect to the number of textual units. However, ILP has to take into account the combinations of textual units, whose number increases exponentially.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 66, |
|
"text": "Figures 1 and 2. Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 668, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Optimization Capabilities", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "In conclusion, both the experimental results indicate that ASRL efficiently calculated a summary that was sub-optimal, but that was of relatively highquality in terms of ROUGE metrics, with the experimental settings we used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Optimization Capabilities", |
|
"sec_num": "7.2" |
|
}, |
|
{ |
|
"text": "We also evaluated the combined approach with sentence compression. We evaluated the method described in Section 5 called ASRLC in this experiment for the sake of convenience. We used a very naive method of sentence compression for this experiment, which compressed a sentence to only important words, i.e., selecting word order by using the tf*idf score to compress the length to about half. This method of compression did not take into consideration either readability or linguistic quality. Note we wanted to confirm what effect the other methods would have, and we expected this to improve the ROUGE-1 score. We used the ROUGE-3 score in this evaluation instead of ROUGE-L, to confirm whether naive sentence compression occurred. The experimental results are summarized in Ta-ROUGE-1 ROUGE-2 ROUGE-3 ASRL 0.39013 0.09479 0.03435 ASRLC 0.39141 0.09259 0.03239 ble 3, which indicates ROUGE-1 increases but ROUGE-2 and ROUGE-3 decrease as expected. The variations, however, are small. This phenomenon was reported by Lin (2003) in that the effectiveness of sentence compression by local optimization at the sentence level was insufficient. Therefore, we would have to consider the range of applications with the combined method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1017, |
|
"end": 1027, |
|
"text": "Lin (2003)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation of Effects of Sentence Compression", |
|
"sec_num": "7.3" |
|
}, |
|
{ |
|
"text": "8.1 Local Optimality of ASRL", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We will discuss why ASRL seems to converge with some \"good\" local optimum with the described experimental settings in this section. Since our model of the state value function was simply linear and our parameter estimation was implemented by TD (\u03bb), which is a simple method in RL, it seems simply employing more efficient or state-of-the-art reinforcement learning methods may improve the performance of ASRL, such as GTD and GTD2 (Sutton et al., 2009b; Sutton et al., 2009a) . These methods basically only contribute to faster convergence, and the score that they will converge to might not differ significantly. As a result, it would not matter much which method was used for optimization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 454, |
|
"text": "(Sutton et al., 2009b;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 476, |
|
"text": "Sutton et al., 2009a)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The main point of this problem is modeling the feature representation of states, and this causes sub-optimality. The vector representation of states shrinks the different states to a single representation, i.e., the agent regards states whose features are similar to be similar states. Due to this property, the policy of reinforcement learning is learned to maximize the expected score of each feature vector, which includes many states. Such sub-optimality averagely balanced by the feature representation raises the possibility of achieving states that have a highquality summary with a low score, since we do not have a genuine score function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Thus, the most important thing in our method is to intentionally design the features of states and the score function, so that the agent can generalize states, while taking into consideration truly-essential features for the required summarization. It would be useful if the forms of features and the score function could be arbitrarily designed by the user because there is the capability of obtaining a high-quality summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Other useful methods, even other summarization systems, can easily be adapted to ASRL as was described in Section 5. The experimental results revealed that sentence compression has some effect. In fact, all operations that produce a new summary from an old summary can be used, i.e., even other summarizing methods can be employed for an action. We assumed a general combined method may have a great deal of potential to enhance the quality of summaries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Potential of Combined Method", |
|
"sec_num": "8.2" |
|
}, |
|
{ |
|
"text": "We formulated each summarization task as a reinforcement learning task in this paper, i.e., where each learned policy differs. As this may be a little unnatural, we wanted to obtain a single learned policy, i.e., a global policy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Can We Obtain \"a Global Policy\"?", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "However, we assessed that we cannot achieve a global policy with these feature and score function settings because the best vector, which is the feature representation of the summary that achieves an optimal score under the current settings, seems to vary for each cluster, even if the domain of the clusters is the same (e.g., a news domain). Having said that, we simultaneously surmised that we could obtain a global policy if we could obtain a highly general, crucial, and efficient feature representation of a summary. We also think a global policy is essential in terms of reinforcement learning and we intend to attempt to achieve this in future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Can We Obtain \"a Global Policy\"?", |
|
"sec_num": "8.3" |
|
}, |
|
{ |
|
"text": "We presented a new approach to the problem of automatic text summarization called ASRL in this paper, which models the process of constructing a summary with the framework of reinforcement learning and attempts to optimize the given score function with the given feature representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "The experimental results demonstrated ASRL tends to converge sub-optimally, and excessively depends on the formulation of features and the score function. Although it is difficult, we believe this formulation would enable us to improve the quality of summaries by designing them freely.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We intend to employ the ROUGE score as the score function in future work, and obtain the parameters of the state value function. Using these results, we will attempt to obtain a single learned policy by employing the ROUGE score or human evaluations as rewards. We also intend to consider efficient features and a score to achieve stable convergence. In addition, we plan to use other methods of function approximation, such as RBF networks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Left-brain/right-brain multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Conroy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Schlesinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Leary", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Document Understanding Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Conroy, J.D. Schlesinger, J. Goldstein, and D.P. O leary. 2004. Left-brain/right-brain multi-document summarization. In Proceedings of the Document Un- derstanding Conference (DUC 2004).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A formal model for information selection in multi-sentence text extraction", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Filatova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th international conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Filatova and V. Hatzivassiloglou. 2004. A formal model for information selection in multi-sentence text extraction. In Proceedings of the 20th international conference on Computational Linguistics, page 397. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Multi-document summarization by sentence extraction", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Goldstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kantrowitz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 2000 NAACL-ANLPWorkshop on Automatic summarization", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "40--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Goldstein, V. Mittal, J. Carbonell, and M. Kantrowitz. 2000. Multi-document summarization by sentence extraction. In Proceedings of the 2000 NAACL- ANLPWorkshop on Automatic summarization-Volume 4, pages 40-48. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Improving summarization performance by sentence compression: a pilot study", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the sixth international workshop on Information retrieval with Asian languages", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.Y. Lin. 2003. Improving summarization performance by sentence compression: a pilot study. In Proceed- ings of the sixth international workshop on Informa- tion retrieval with Asian languages-Volume 11, pages 1-8. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Rouge: A package for automatic evaluation of summaries", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the workshop on text summarization branches out (WAS 2004)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C.Y. Lin. 2004. Rouge: A package for automatic eval- uation of summaries. In Proceedings of the workshop on text summarization branches out (WAS 2004), vol- ume 16.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Automatic summarization", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Mani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "I. Mani. 2001. Automatic summarization, volume 3. John Benjamins Pub Co.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A study of global inference algorithms in multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Advances in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "557--564", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. McDonald. 2007. A study of global inference algo- rithms in multi-document summarization. Advances in Information Retrieval, pages 557-564.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "An algorithm for suffix stripping. Program: electronic library and information systems", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mf Porter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "14", |
|
"issue": "", |
|
"pages": "130--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "MF Porter. 1980. An algorithm for suffix stripping. Program: electronic library and information systems, 14(3):130-137.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Reinforcement learning: An introduction", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Barto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.S. Sutton and A.G. Barto. 1998. Reinforcement learn- ing: An introduction, volume 1. Cambridge Univ Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Fast gradient-descent methods for temporal-difference learning with linear function approximation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Maei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Precup", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bhatnagar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Silver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Szepesv\u00e1ri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Wiewiora", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 26th Annual International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "993--1000", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.S. Sutton, H.R. Maei, D. Precup, S. Bhatnagar, D. Silver, C. Szepesv\u00e1ri, and E. Wiewiora. 2009a. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Pro- ceedings of the 26th Annual International Conference on Machine Learning, pages 993-1000. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A convergent o (n) algorithm for off-policy temporaldifference learning with linear function approximation", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sutton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Szepesv\u00e1ri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Maei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R.S. Sutton, C. Szepesv\u00e1ri, and H.R. Maei. 2009b. A convergent o (n) algorithm for off-policy temporal- difference learning with linear function approxima- tion.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Text summarization model based on maximum coverage problem and its variant", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Takamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Okumura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "781--789", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H. Takamura and M. Okumura. 2009. Text summariza- tion model based on maximum coverage problem and its variant. In Proceedings of the 12th Conference of the European Chapter of the Association for Compu- tational Linguistics, pages 781-789. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Average score for each episode in ASRL in DUC2004. Horizontal lines indicate scores of summaries obtained with ILP and GREEDY.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Execution time on number of textual units for each problem in DUC2004. Plot of ASRL is fitted to linear function and that of ILP is fitted to exponential function.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Evaluation of combined methods.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |