id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
1605.04711#22
Ternary Weight Networks
Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014. [4] W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CVPR, pp. 1–9, 2015. [5] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed, "SSD: Single shot multibox detector," arXiv preprint arXiv:1512.02325, 2015. [6] S. Ren, K. He, R. Girshick, and J. Sun, "
1605.04711#21
1605.04711#23
1605.04711
[ "1602.07360" ]
1605.04711#23
Ternary Weight Networks
Faster R-CNN: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, pp. 91–99, 2015. [7] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," arXiv preprint arXiv:1603.05279, 2016. [8] Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, et al., "
1605.04711#22
1605.04711#24
1605.04711
[ "1602.07360" ]
1605.04711#24
Ternary Weight Networks
Convolutional networks for fast, energy-efficient neuromorphic computing," Proceedings of the National Academy of Sciences, vol. 113, no. 41, pp. 11441–11446, 2016. [9] Song Han, Huizi Mao, and William J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," arXiv preprint arXiv:1510.00149, 2015. [10] M. Courbariaux, Y. Bengio, and J.-P.
1605.04711#23
1605.04711#25
1605.04711
[ "1602.07360" ]
1605.04711#25
Ternary Weight Networks
David, "BinaryConnect: Training deep neural networks with binary weights during propagations," NeurIPS, pp. 3123–3131, 2015. [11] I. Hubara, D. Soudry, and R. E. Yaniv, "Binarized neural networks," Advances in Neural Information Processing Systems, 2016. [12] Z. Lin, M. Courbariaux, R. Memisevic, and Y. Bengio, "
1605.04711#24
1605.04711#26
1605.04711
[ "1602.07360" ]
1605.04711#26
Ternary Weight Networks
Neural networks with few multiplications," arXiv preprint arXiv:1510.03009, 2015. [13] F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size," arXiv preprint arXiv:1602.07360, 2016. [14] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam, "
1605.04711#25
1605.04711#27
1605.04711
[ "1602.07360" ]
1605.04711#27
Ternary Weight Networks
MobileNets: Efficient convolutional neural networks for mobile vision applications," CoRR, vol. abs/1704.04861, 2017. [15] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," in CVPR, 2018. [16] Hanxiao Liu, Karen Simonyan, and Yiming Yang, "DARTS: Differentiable architecture search," in ICLR, 2019. [17] Xiaoxing Wang, Chao Xue, Junchi Yan, Xiaokang Yang, Yonggang Hu, and Kewei Sun, "
1605.04711#26
1605.04711#28
1605.04711
[ "1602.07360" ]
1605.04711#28
Ternary Weight Networks
MergeNAS: Merge operations into one for differentiable architecture search," in IJCAI, 2020, pp. 3065–3072. [18] Xiaoxing Wang, Jiale Lin, Juanping Zhao, Xiaokang Yang, and Junchi Yan, "EAutoDet: Efficient architecture search for object detection," in ECCV, 2022. [19] K. Hwang and W. Sung, "
1605.04711#27
1605.04711#29
1605.04711
[ "1602.07360" ]
1605.04711#29
Ternary Weight Networks
Fixed-point feedforward deep neural network design using weights +1, 0, and -1," IEEE Workshop on Signal Processing Systems (SiPS), pp. 1–6, 2014. [20] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," Proceedings of the 32nd International Conference on Machine Learning, pp. 448–456, 2015. [21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. [22] C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, "Deeply-supervised nets,"
1605.04711#28
1605.04711#30
1605.04711
[ "1602.07360" ]
1605.04711#30
Ternary Weight Networks
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pp. 562–570, 2015. [23] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John M. Winn, and Andrew Zisserman, "The PASCAL visual object classes (VOC) challenge," Int. J. Comput. Vis., vol. 88, no. 2, pp. 303–338, 2010. [24] Glenn Jocher, "YOLOv5 documentation," https://docs.ultralytics.com/, May 2020. [25] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick, "Microsoft COCO: Common objects in context," in ECCV, 2014.
1605.04711#29
1605.04711
[ "1602.07360" ]
1604.06778#0
Benchmarking Deep Reinforcement Learning for Continuous Control
# Benchmarking Deep Reinforcement Learning for Continuous Control Yan Duan† Xi Chen† Rein Houthooft†‡ John Schulman†§ Pieter Abbeel† † University of California, Berkeley, Department of Electrical Engineering and Computer Sciences ‡ Ghent University - iMinds, Department of Information Technology § OpenAI [email protected] [email protected] [email protected] [email protected] [email protected] # Abstract Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs.
1604.06778#1
1604.06778
[ "1506.02438" ]
1604.06778#1
Benchmarking Deep Reinforcement Learning for Continuous Control
However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at https://github.com/rllab/rllab in order to facilitate experimental reproducibility and to encourage adoption by other researchers. # 1. Introduction
1604.06778#0
1604.06778#2
1604.06778
[ "1506.02438" ]
1604.06778#2
Benchmarking Deep Reinforcement Learning for Continuous Control
Reinforcement learning addresses the problem of how agents should learn to take actions to maximize cumulative reward through interactions with the environment. The traditional approach for reinforcement learning algorithms requires carefully chosen feature representations, which are usually hand-engineered. Recently, signifi
1604.06778#1
1604.06778#3
1604.06778
[ "1506.02438" ]
1604.06778#3
Benchmarking Deep Reinforcement Learning for Continuous Control
cant progress has been made by combining advances in deep learning for learning feature representations (Krizhevsky et al., 2012; Hinton et al., 2012) with reinforcement learning, tracing back to much earlier work of Tesauro (1995) and Bertsekas & Tsitsiklis (1995). Notable examples are training agents to play Atari games based on raw pixels (Guo et al., 2014; Mnih et al., 2015; Schulman et al., 2015a) and to acquire advanced manipulation skills using raw sensory inputs (Levine et al., 2015; Lillicrap et al., 2015; Watter et al., 2015). Impressive results have also been obtained in training deep neural network policies for 3D locomotion and manipulation tasks (Schulman et al., 2015a;b; Heess et al., 2015b). Along with this recent progress, the Arcade Learning Environment (ALE) (Bellemare et al., 2013) has become a popular benchmark for evaluating algorithms designed for tasks with high-dimensional state inputs and discrete actions. However, these algorithms do not always generalize straightforwardly to tasks with continuous actions, leading to a gap in our understanding. For instance, algorithms based on Q-learning quickly become infeasible when naive discretization of the action space is performed, due to the curse of dimensionality (Bellman, 1957; Lillicrap et al., 2015). In the continuous control domain, where actions are continuous and often high-dimensional, we argue that the existing control benchmarks fail to provide a comprehensive set of challenging problems (see Section 7 for a review of existing benchmarks). Benchmarks have played a significant role in other areas such as computer vision and speech recognition.
1604.06778#2
1604.06778#4
1604.06778
[ "1506.02438" ]
1604.06778#4
Benchmarking Deep Reinforcement Learning for Continuous Control
Examples include MNIST (LeCun et al., 1998), Caltech101 (Fei-Fei et al., 2006), CIFAR (Krizhevsky & Hinton, 2009), ImageNet (Deng et al., 2009), PASCAL VOC (Everingham et al., 2010), BSDS500 (Martin et al., 2001), SWITCHBOARD (Godfrey et al., 1992), TIMIT (Garofolo et al., 1993), Aurora (Hirsch & Pearce, 2000), and VoiceSearch (Yu et al., 2007). The lack of a standardized and challenging testbed for reinforcement learning and continuous control makes it diffi
1604.06778#3
1604.06778#5
1604.06778
[ "1506.02438" ]
1604.06778#5
Benchmarking Deep Reinforcement Learning for Continuous Control
cult to quantify scientific progress. Systematic evaluation and comparison will not only further our understanding of the strengths of existing algorithms, but also reveal their limitations and suggest directions for future research. We attempt to address this problem and present a benchmark consisting of 31 continuous control tasks. These tasks range from simple tasks, such as cart-pole balancing, to challenging tasks such as high-DOF locomotion, tasks with partial observations, and hierarchically structured tasks. Furthermore, a range of reinforcement learning algorithms are implemented on which we report novel findings based on a systematic evaluation of their effectiveness in training deep neural network policies. The benchmark and reference implementations are available at https://github.com/rllab/rllab, allowing for the development, implementation, and evaluation of new algorithms and tasks.
1604.06778#4
1604.06778#6
1604.06778
[ "1506.02438" ]
1604.06778#6
Benchmarking Deep Reinforcement Learning for Continuous Control
# 2. Preliminaries In this section, we define the notation used in subsequent sections. in the supplementary materials and in the source code. We choose to implement all tasks using physics simulators rather than symbolic equations, since the former approach is less error-prone and permits easy modification of each task. Tasks with simple dynamics are implemented using Box2D (Catto, 2011), an open-source, freely available 2D physics simulator. Tasks with more complicated dynamics, such as locomotion, are implemented using MuJoCo (Todorov et al., 2012), a 3D physics simulator with better modeling of contacts. # 3.1. Basic Tasks We implement five basic tasks that have been widely analyzed in reinforcement learning and control literature: Cart-Pole Balancing (Stephenson, 1908; Donaldson, 1960; Widrow, 1964; Michie & Chambers, 1968), Cart-Pole Swing Up (Kimura & Kobayashi, 1999; Doya, 2000), Mountain Car (Moore, 1990), Acrobot Swing Up (DeJong & Spong, 1994; Murray & Hauser, 1991; Doya, 2000), and Double Inverted Pendulum Balancing (Furuta et al., 1978). These relatively low-dimensional tasks provide quick evaluations and comparisons of RL algorithms. The implemented tasks conform to the standard interface of a finite-horizon discounted Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, P, r, \rho_0, \gamma, T)$, where $\mathcal{S}$ is a (possibly infinite) set of states, $\mathcal{A}$ is a set of actions, $P : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_{\geq 0}$ is the transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is the reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_{\geq 0}$ is the initial state distribution, $\gamma \in (0, 1]$ is the discount factor, and $T$ is the horizon. For partially observable tasks, which conform to the interface of a partially observable Markov decision process (POMDP), two more components are required, namely $\Omega$, a set of observations, and $\mathcal{O} :$
1604.06778#5
1604.06778#7
1604.06778
[ "1506.02438" ]
1604.06778#7
Benchmarking Deep Reinforcement Learning for Continuous Control
S à ⠦ â Râ ¥0, the observa- tion probability distribution. Most of our implemented algorithms optimize a stochastic policy 79 : S x A â Rso. Let (7) denote its expected discounted reward: (7) = E, [eo y'r(s:, ai)| , where T = (80, a0, -- -) denotes the whole trajectory, 89 ~ po(so), a, ~ 7(az|Se), and sr41 ~ P(Sz41|S¢, ae). # 3.2. Locomotion Tasks In this category, we implement six locomotion tasks of varying dynamics and difï¬ culty: Swimmer (Purcell, 1977; Coulom, 2002; Levine & Koltun, 2013; Schulman et al., 2015a), Hopper (Murthy & Raibert, 1984; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Walker (Raibert & Hodgins, 1991; Erez et al., 2011; Levine & Koltun, 2013; Schulman et al., 2015a), Half-Cheetah (Wawrzy´nski, 2007; Heess et al., 2015b), Ant (Schulman et al., 2015b), Simple Humanoid (Tassa et al., 2012; Schul- man et al., 2015b), and Full Humanoid (Tassa et al., 2012).
1604.06778#6
1604.06778#8
1604.06778
[ "1506.02438" ]
1604.06778#8
Benchmarking Deep Reinforcement Learning for Continuous Control
The goal for all the tasks is to move forward as quickly as possible. These tasks are more challenging than the basic tasks due to high degrees of freedom. In addition, a great amount of exploration is needed to learn to move forward without getting stuck at local optima. Since we penalize for excessive controls as well as falling over, during the initial stage of learning, when the robot is not yet able to move forward for a sufficient distance without falling, apparent local optima exist including staying at the origin or diving forward slowly. For deterministic policies, we use the notation $\mu_\theta :$
1604.06778#7
1604.06778#9
1604.06778
[ "1506.02438" ]
1604.06778#9
Benchmarking Deep Reinforcement Learning for Continuous Control
$\mathcal{S} \to \mathcal{A}$ to denote the policy instead. The objective for it has the same form as above, except that now we have $a_t = \mu(s_t)$. # 3. Tasks The tasks in the presented benchmark can be divided into four categories: basic tasks, locomotion tasks, partially observable tasks, and hierarchical tasks. We briefly describe them in this section. More detailed specifications are given # 3.3. Partially Observable Tasks In real-life situations, agents are often not endowed with perfect state information. This can be due to sensor noise, sensor occlusions, or even sensor limitations that result in partial observations. To evaluate algorithms in more realistic settings, we implement three variations of partially ob
1604.06778#8
1604.06778#10
1604.06778
[ "1506.02438" ]
1604.06778#10
Benchmarking Deep Reinforcement Learning for Continuous Control
Figure 1. Illustration of locomotion tasks: (a) Swimmer; (b) Hopper; (c) Walker; (d) Half-Cheetah; (e) Ant; (f) Simple Humanoid; and (g) Full Humanoid. Figure 2. Illustration of hierarchical tasks: (a) Locomotion + Food Collection; and (b) Locomotion + Maze. servable tasks for each of the five basic tasks described in Section 3.1, leading to a total of 15 additional tasks. These variations are described below. Limited Sensors: For this variation, we restrict the observations to only provide positional information (including joint angles), excluding velocities. An agent now has to learn to infer velocity information in order to recover the full state. Similar tasks have been explored in Gomez & Miikkulainen (1998); Schäfer & Udluft (2005); Heess et al. (2015a); Wierstra et al. (2007). Locomotion + Food Collection: For this task, the agent needs to learn to control either the swimmer or the ant robot to collect food and avoid bombs in a fi
1604.06778#9
1604.06778#11
1604.06778
[ "1506.02438" ]
1604.06778#11
Benchmarking Deep Reinforcement Learning for Continuous Control
nite region. The agent receives range sensor readings about nearby food and bomb units. It is given a positive reward when it reaches a food unit, or a negative reward when it reaches a bomb. Locomotion + Maze: For this task, the agent needs to learn to control either the swimmer or the ant robot to reach a goal position in a fixed maze. The agent receives range sensor readings about nearby obstacles as well as its goal (when visible). A positive reward is given only when the robot reaches the goal region.
1604.06778#10
1604.06778#12
1604.06778
[ "1506.02438" ]
1604.06778#12
Benchmarking Deep Reinforcement Learning for Continuous Control
# 4. Algorithms Noisy Observations and Delayed Actions: In this case, sensor noise is simulated through the addition of Gaussian noise to the observations. We also introduce a time delay between taking an action and the action being in effect, accounting for physical latencies (Hester & Stone, 2013). Agents now need to learn to integrate both past observations and past actions to infer the current state. Similar tasks have been proposed in Bakker (2001).
1604.06778#11
1604.06778#13
1604.06778
[ "1506.02438" ]
1604.06778#13
Benchmarking Deep Reinforcement Learning for Continuous Control
In this section, we briefly summarize the algorithms implemented in our benchmark, and note any modifications made to apply them to general parametrized policies. We implement a range of gradient-based policy search methods, as well as two gradient-free methods for comparison with the gradient-based approaches. # 4.1. Batch Algorithms System Identification: For this category, the underlying physical model parameters are varied across different episodes (Szita et al., 2003). The agents must learn to generalize across different models, as well as to infer the model parameters from its observation and action history. # 3.4. Hierarchical Tasks Most of the implemented algorithms are batch algorithms. At each iteration, $N$ trajectories $\{\tau_i\}_{i=1}^{N}$ are generated, where $\tau_i = \{(s_t^i, a_t^i, r_t^i)\}_{t=0}^{T}$ contains data collected along the $i$th trajectory. For on-policy gradient-based methods, all the trajectories are sampled under the current policy. For gradient-free methods, they are sampled under perturbed versions of the current policy. Many real-world tasks exhibit hierarchical structure, where higher level decisions can reuse lower level skills (Parr & Russell, 1998; Sutton et al., 1999; Dietterich, 2000). For instance, robots can reuse locomotion skills when exploring the environment. We propose several tasks where both low-level motor controls and high-level decisions are needed. These two components each operate on a different time scale and call for a natural hierarchy in order to efficiently learn the task. REINFORCE (Williams, 1992): This algorithm estimates the gradient of the expected return $\nabla_\theta \eta(\pi_\theta)$ using the likelihood ratio trick: $$\widehat{\nabla_\theta \eta(\pi_\theta)} = \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=0}^{T} \nabla_\theta \log \pi(a_t^i \mid s_t^i; \theta)\,(R_t^i - b_t^i),$$ where $R_t^i = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}^i$, and $b_t^i$ is a baseline that only depends on the state $s_t^i$ to reduce variance. Hereafter, an ascent step is taken in the direction of the estimated gradient. This process continues until $\theta_k$ converges.
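A small NumPy sketch of this likelihood-ratio estimator is given below; the `grad_log_prob(s, a)` and `baseline(s)` callbacks and the trajectory data layout are assumed interfaces for illustration, not the released rllab code.

```python
import numpy as np

def reinforce_gradient(trajectories, grad_log_prob, baseline, gamma=0.99):
    """Likelihood-ratio estimate of grad_theta eta(pi_theta).

    `trajectories` is a list of trajectories, each a list of (s, a, r) tuples;
    `grad_log_prob(s, a)` returns d/dtheta log pi(a|s; theta) as a flat vector
    and `baseline(s)` returns a scalar value estimate (assumed interfaces).
    """
    grad, n_samples = None, 0
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        # Discounted return-to-go R_t = sum_{t' >= t} gamma^(t'-t) r_{t'}
        returns, running = [], 0.0
        for r in reversed(rewards):
            running = r + gamma * running
            returns.append(running)
        returns.reverse()
        for (s, a, _), R_t in zip(traj, returns):
            g = grad_log_prob(s, a) * (R_t - baseline(s))
            grad = g if grad is None else grad + g
            n_samples += 1
    # Average over all (trajectory, timestep) samples
    return grad / n_samples
```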
1604.06778#12
1604.06778#14
1604.06778
[ "1506.02438" ]
1604.06778#14
Benchmarking Deep Reinforcement Learning for Continuous Control
Truncated Natural Policy Gradient (TNPG) (Kakade, 2002; Peters et al., 2003; Bagnell & Schneider, 2003; Schulman et al., 2015a): Natural Policy Gradient improves upon REINFORCE by computing an ascent direction that approximately ensures a small change in the policy distribution. This direction is derived to be $I(\theta)^{-1} \nabla_\theta \eta(\pi_\theta)$, where $I(\theta)$ is the Fisher information matrix (FIM). We use the step size suggested by Peters & Schaal (2008): $\alpha = \sqrt{\delta_{KL} \left( \nabla_\theta \eta(\pi_\theta)^T I(\theta)^{-1} \nabla_\theta \eta(\pi_\theta) \right)^{-1}}$. Finally, we re- Here $\delta_{KL} > 0$ controls the step size of the policy, and $\delta_i(v) = r_i + v^T(\phi(s_i') - \phi(s_i))$ is the sample Bellman error. We then solve for the new policy parameters: $$\theta_{k+1} = \arg\max_\theta \frac{1}{M} \sum_{i=1}^{M} e^{\delta_i(v^*)/\eta^*} \log \pi(a_i \mid s_i; \theta).$$ Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a): This algorithm allows more precise control on the expected policy improvement than TNPG through the introduction of a surrogate loss. At each iteration, we solve the following constrained optimization problem (replacing expectations with samples): For neural network policies with tens of thousands of parameters or more, generic Natural Policy Gradient incurs prohibitive computation cost by forming and inverting the empirical FIM. Instead, we study Truncated Natural Policy Gradient (TNPG) in this paper, which computes the natural gradient direction without explicitly forming the matrix inverse, using a conjugate gradient algorithm that only requires computing $I(\theta)v$ for an arbitrary vector $v$. TNPG makes it practical to apply natural gradient in policy search settings with high-dimensional parameters, and we refer the reader to Schulman et al. (2015a) for more details. Reward-Weighted Regression (RWR) (Peters & Schaal, 2007; Kober & Peters, 2009):
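To illustrate the conjugate-gradient trick mentioned above, the following NumPy sketch computes a truncated natural gradient direction using only Fisher-vector products; `fvp(v)` (assumed to return $I(\theta)v$) is a hypothetical callback, and the step size follows the Peters & Schaal rule quoted in the text.

```python
import numpy as np

def conjugate_gradient(fvp, g, cg_iters=10, tol=1e-10):
    """Approximately solve I(theta) x = g using only products fvp(v) = I(theta) v."""
    x = np.zeros_like(g)
    r = g.copy()                 # residual g - I(theta) x, with x = 0 initially
    p = r.copy()                 # conjugate search direction
    r_dot_r = r.dot(r)
    for _ in range(cg_iters):
        Ip = fvp(p)
        alpha = r_dot_r / (p.dot(Ip) + 1e-12)
        x += alpha * p
        r -= alpha * Ip
        new_r_dot_r = r.dot(r)
        if new_r_dot_r < tol:
            break
        p = r + (new_r_dot_r / r_dot_r) * p
        r_dot_r = new_r_dot_r
    return x

def tnpg_step(fvp, g, delta_kl=0.01):
    """Natural gradient step: direction I^{-1} g scaled by sqrt(delta_KL / (g^T I^{-1} g))."""
    direction = conjugate_gradient(fvp, g)
    step_size = np.sqrt(delta_kl / (g.dot(direction) + 1e-12))
    return step_size * direction
```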
1604.06778#13
1604.06778#15
1604.06778
[ "1506.02438" ]
1604.06778#15
Benchmarking Deep Reinforcement Learning for Continuous Control
This algorithm formulates the policy optimization as an Expectation-Maximization problem to avoid the need to manually choose a learning rate, and the method is guaranteed to converge to a locally optimal solution. At each iteration, this algorithm optimizes a lower bound of the log-expected return: $\theta = \arg\max_{\theta'} \mathcal{L}(\theta')$, where $$\mathcal{L}(\theta) = \frac{1}{NT} \sum_{i=1}^{N} \sum_{t=0}^{T} \log \pi(a_t^i \mid s_t^i; \theta)\, \rho(R_t^i - b_t^i).$$ $$\text{maximize}_\theta \;\; \mathbb{E}_{s \sim \rho_{\theta_k},\, a \sim \pi_{\theta_k}} \left[ \frac{\pi_\theta(a \mid s)}{\pi_{\theta_k}(a \mid s)} A_{\theta_k}(s, a) \right] \;\; \text{s.t.} \;\; \mathbb{E}_{s \sim \rho_{\theta_k}} \left[ D_{KL}\big(\pi_{\theta_k}(\cdot \mid s) \,\|\, \pi_\theta(\cdot \mid s)\big) \right] \leq \delta_{KL},$$ where $\rho_\theta = \rho_{\pi_\theta}$ is the discounted state-visitation frequency induced by $\pi_\theta$, $A_{\theta_k}(s, a)$, known as the advantage function, is estimated by the empirical return minus the baseline, and $\delta_{KL}$ is a step size parameter which controls how much the policy is allowed to change per iteration. We follow the procedure described in the original paper for solving the optimization, which results in the same descent direction as TNPG with an extra line search in the objective and KL constraint. Cross Entropy Method (CEM) (Rubinstein, 1999; Szita & Lőrincz, 2006): Unlike previously mentioned methods, which perform exploration through stochastic actions, CEM performs exploration directly in the policy parameter space. At each iteration, we produce $N$ perturbations of the policy parameter, $\theta_i \sim \mathcal{N}(\mu_k, \Sigma_k)$, and perform a rollout for each sampled parameter. Then, we compute the new mean and diagonal covariance using the parameters that correspond to the top $q$-quantile returns. Here, $\rho : \mathbb{R} \to \mathbb{R}_{\geq 0}$ is a function that transforms raw returns to nonnegative values. Following Deisenroth et al. (2013), we choose $\rho$ to be $\rho(R) = R - R_{\min}$, where $R_{\min}$ is the minimum return among all trajectories collected in the current iteration. Relative Entropy Policy Search (REPS) (Peters et al., 2010):
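Because CEM searches directly in parameter space, it can be sketched in a few lines; in the minimal NumPy illustration below, `evaluate(theta)` is an assumed callback that rolls out the policy with parameters `theta` and returns its average return.

```python
import numpy as np

def cem(evaluate, dim, n_iters=50, pop_size=100, elite_frac=0.2, init_std=1.0):
    """Cross Entropy Method over policy parameters theta in R^dim (a sketch)."""
    mean = np.zeros(dim)
    std = np.full(dim, init_std)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        # Sample perturbed parameters theta_i ~ N(mean, diag(std^2))
        thetas = mean + std * np.random.randn(pop_size, dim)
        returns = np.array([evaluate(theta) for theta in thetas])
        # Refit a diagonal Gaussian to the top-q-quantile parameters
        elite = thetas[np.argsort(returns)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean
```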
1604.06778#14
1604.06778#16
1604.06778
[ "1506.02438" ]
1604.06778#16
Benchmarking Deep Reinforcement Learning for Continuous Control
This algorithm limits the loss of information per iteration and aims to ensure a smooth learning progress (Deisenroth et al., 2013). At each iteration, we collect all trajectories into a dataset $\mathcal{D} = \{(s_i, a_i, r_i, s_i')\}_{i=1}^{M}$, where $M$ is the total number of samples. Then, we first solve for the dual parameters $[\eta^*, v^*] = \arg\min_{\eta', v'} g(\eta', v')$ s.t. $\eta' > 0$, where $$g(\eta, v) = \eta \delta_{KL} + \eta \log \left( \frac{1}{M} \sum_{i=1}^{M} e^{\delta_i(v)/\eta} \right).$$ Covariance Matrix Adaption Evolution Strategy (CMA-ES) (Hansen & Ostermeier, 2001): Similar to CEM, CMA-ES is a gradient-free evolutionary approach for optimizing nonconvex objective functions. In our case, this objective function equals the average sampled return. In contrast to CEM, CMA-ES estimates the covariance matrix of a multivariate normal distribution through incremental adaption along evolution paths, which contain information about the correlation between consecutive updates. # 4.2. Online Algorithms Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015): Compared to batch algorithms, the DDPG algorithm continuously improves the policy as it explores the environment. It applies gradient descent to the policy with minibatch data sampled from a replay pool, where the gradient is computed via
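A possible NumPy/SciPy sketch of minimizing this REPS dual is shown below; the feature-map inputs, the exponential reparameterization of $\eta$, and the L-BFGS-B solver are illustrative choices, not the benchmark's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def reps_dual_solve(rewards, phi_s, phi_s_next, delta_kl=0.1):
    """Minimize g(eta, v) = eta*delta_KL + eta*log((1/M) sum_i exp(delta_i(v)/eta)),
    with sample Bellman errors delta_i(v) = r_i + v^T (phi(s'_i) - phi(s_i)).
    Inputs are assumed arrays: rewards (M,), phi_s and phi_s_next (M, d).
    """
    rewards = np.asarray(rewards, dtype=float)
    diff = np.asarray(phi_s_next, dtype=float) - np.asarray(phi_s, dtype=float)
    M, d = diff.shape

    def dual(params):
        eta, v = np.exp(params[0]), params[1:]   # enforce eta > 0 via exp reparameterization
        delta = rewards + diff @ v
        z = delta / eta
        m = z.max()
        # eta * log-mean-exp(delta/eta), computed stably
        return eta * delta_kl + eta * (m + np.log(np.mean(np.exp(z - m))))

    res = minimize(dual, x0=np.zeros(d + 1), method="L-BFGS-B")
    eta_star, v_star = np.exp(res.x[0]), res.x[1:]
    # Sample weights exp(delta_i(v*)/eta*), up to a constant (scale-free for the M-step)
    z_star = (rewards + diff @ v_star) / eta_star
    weights = np.exp(z_star - z_star.max())
    return eta_star, v_star, weights
```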
1604.06778#15
1604.06778#17
1604.06778
[ "1506.02438" ]
1604.06778#17
Benchmarking Deep Reinforcement Learning for Continuous Control
$$\nabla_\theta \eta(\mu_\theta) \approx \frac{1}{B} \sum_{i=1}^{B} \nabla_a Q_\phi(s_i, a)\big|_{a = \mu_\theta(s_i)} \nabla_\theta \mu_\theta(s_i),$$ where $B$ is the batch size. The critic $Q$ is trained via gradient descent on the $\ell^2$ loss of the Bellman error, $L = \frac{1}{B} \sum_i (y_i - Q_\phi(s_i, a_i))^2$, where $y_i = r_i + \gamma Q'_{\phi'}(s_i', \mu'_{\theta'}(s_i'))$. To improve stability of the algorithm, we use target networks for both the critic and the policy when forming the regression target $y_i$. We refer the reader to Lillicrap et al. (2015) for a more detailed description of the algorithm.
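The update rule can be sketched compactly; the snippet below uses PyTorch purely for illustration (the released benchmark code is not PyTorch-based), and the module, optimizer, and batch-layout names are assumptions.

```python
import torch
import torch.nn.functional as F

def ddpg_update(batch, actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, gamma=0.99, tau=0.001):
    """One DDPG update from a replay-pool minibatch (illustrative sketch).

    `batch` is assumed to be tensors (s, a, r, s_next) with r shaped like the
    critic output; actor/critic and their target copies are nn.Module objects.
    """
    s, a, r, s_next = batch
    # Critic regression target y_i = r_i + gamma * Q'(s'_i, mu'(s'_i))
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor ascends grad_a Q(s, a)|_{a=mu(s)} * grad_theta mu(s) (via autograd)
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft update of the target networks
    for net, target in ((actor, target_actor), (critic, target_critic)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```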
1604.06778#16
1604.06778#18
1604.06778
[ "1506.02438" ]
1604.06778#18
Benchmarking Deep Reinforcement Learning for Continuous Control
# 4.3. Recurrent Variants Policy Representation: For basic, locomotion, and hierarchical tasks and for batch algorithms, we use a feed-forward neural network policy with 3 hidden layers, consisting of 100, 50, and 25 hidden units with tanh nonlinearity at the first two hidden layers, which map each state to the mean of a Gaussian distribution. The log-standard deviation is parameterized by a global vector independent of the state, as done in Schulman et al. (2015a). For all partially observable tasks, we use a recurrent neural network with a single hidden layer consisting of 32 LSTM hidden units (Hochreiter & Schmidhuber, 1997). For the DDPG algorithm which trains a deterministic policy, we follow Lillicrap et al. (2015). For both the policy and the Q function, we use the same architecture of a feed-forward neural network with 2 hidden layers, consisting of 400 and 300 hidden units with relu activations. We implement direct applications of the aforementioned batch-based algorithms to recurrent policies.
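A compact PyTorch sketch of the Gaussian MLP policy described above, with layer sizes taken from the text; the class name, initialization, and distribution plumbing are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaussianMLPPolicy(nn.Module):
    """Feedforward Gaussian policy sketch: hidden layers of 100, 50, and 25 units,
    tanh after the first two hidden layers, state mapped to the Gaussian mean,
    and a state-independent global log-std parameter vector."""

    def __init__(self, obs_dim, action_dim):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 100), nn.Tanh(),
            nn.Linear(100, 50), nn.Tanh(),
            nn.Linear(50, 25),
            nn.Linear(25, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, obs):
        mean = self.mean_net(obs)
        return torch.distributions.Normal(mean, self.log_std.exp())
```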
1604.06778#17
1604.06778#19
1604.06778
[ "1506.02438" ]
1604.06778#19
Benchmarking Deep Reinforcement Learning for Continuous Control
The only modification required is to replace $\pi(a_t^i \mid s_t^i)$ by $\pi(a_t^i \mid o_{1:t}^i, a_{1:t-1}^i)$, where $o_{1:t}$ and $a_{1:t-1}$ are the histories of past and current observations and past actions. Recurrent versions of reinforcement learning algorithms have been studied in many existing works, such as Bakker (2001), Schäfer & Udluft (2005), Wierstra et al. (2007), and Heess et al. (2015a). # 5. Experiment Setup Baseline: For all gradient-based algorithms except REPS, we can subtract a baseline from the empirical return to reduce the variance of the optimization. We use a linear function as the baseline with a time-varying feature vector.
1604.06778#18
1604.06778#20
1604.06778
[ "1506.02438" ]
1604.06778#20
Benchmarking Deep Reinforcement Learning for Continuous Control
# 6. Results and Discussion The main evaluation results are presented in Table 1. The tasks on which the grid search is performed are marked with (*). In each entry, the pair of numbers shows the mean and standard deviation of the normalized cumulative return using the best possible hyperparameters. In this section, we elaborate on the experimental setup used to generate the results. Performance Metrics: For each report unit (a particular algorithm running on a particular task), we define its performance as $\frac{\sum_{i=1}^{I} \sum_{n=1}^{N_i} R_{in}}{\sum_{i=1}^{I} N_i}$, where $I$ is the number of training iterations, $N_i$ is the number of trajectories collected in the $i$th iteration, and $R_{in}$ is the undiscounted return for the $n$th trajectory of the $i$th iteration. Hyperparameter Tuning: For the DDPG algorithm, we used the hyperparameters reported in Lillicrap et al. (2015). For the other algorithms, we follow the approach in (Mnih et al., 2015), and we select two tasks in each category, on which a grid search of hyperparameters is performed. Each choice of hyperparameters is executed under five random seeds. The criterion for the best hyperparameters is defined as mean(returns) − std(returns). This metric selects against large fluctuations of performance due to overly large step sizes.
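Both the performance metric and the hyperparameter-selection criterion are simple to compute; here is a short NumPy sketch under an assumed data layout.

```python
import numpy as np

def benchmark_performance(returns_per_iteration):
    """Average undiscounted return over all trajectories of all training iterations.

    `returns_per_iteration[i]` is assumed to be the list of returns R_in for the
    N_i trajectories collected in iteration i.
    """
    all_returns = [R for iteration in returns_per_iteration for R in iteration]
    return float(np.mean(all_returns))

def hyperparameter_score(performances_across_seeds):
    """Grid-search selection criterion: mean(returns) - std(returns)."""
    performances = np.asarray(performances_across_seeds, dtype=float)
    return float(performances.mean() - performances.std())
```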
1604.06778#19
1604.06778#21
1604.06778
[ "1506.02438" ]
1604.06778#21
Benchmarking Deep Reinforcement Learning for Continuous Control
For the other tasks, we try both of the best hyperparame- ters found in the same category, and report the better per- formance of the two. This gives us insights into both the maximum possible performance when extensive hyperpa- rameter tuning is performed, and the robustness of the best hyperparameters across different tasks. TNPG and TRPO: Both TNPG and TRPO outperform other batch algorithms by a large margin on most tasks, conï¬ rming that constraining the change in the policy dis- tribution results in more stable learning (Peters & Schaal, 2008). Compared to TNPG, TRPO offers better control over each Benchmarking Deep Reinforcement Learning for Continuous Control
1604.06778#20
1604.06778#22
1604.06778
[ "1506.02438" ]
1604.06778#22
Benchmarking Deep Reinforcement Learning for Continuous Control
Table 1. Performance of the implemented algorithms in terms of average return over all training iterations for five different random seeds (same across all algorithms). The results of the best-performing algorithm on each task, as well as all algorithms that have performances that are not statistically significantly different (Welch's t-test with p < 0.05), are highlighted in boldface.
1604.06778#21
1604.06778#23
1604.06778
[ "1506.02438" ]
1604.06778#23
Benchmarking Deep Reinforcement Learning for Continuous Control
In the tasks column, the partially observable variants of the tasks are annotated as follows: LS stands for limited sensors, NO for noisy observations and delayed actions, and SI for system identification. The notation N/A denotes that an algorithm has failed on the task at hand, e.g., CMA-ES leading to out-of-memory errors in the Full Humanoid task. [Table 1 numeric entries.]
1604.06778#22
1604.06778#24
1604.06778
[ "1506.02438" ]
1604.06778#24
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#23
1604.06778#25
1604.06778
[ "1506.02438" ]
1604.06778#25
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#24
1604.06778#26
1604.06778
[ "1506.02438" ]
1604.06778#26
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#25
1604.06778#27
1604.06778
[ "1506.02438" ]
1604.06778#27
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#26
1604.06778#28
1604.06778
[ "1506.02438" ]
1604.06778#28
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#27
1604.06778#29
1604.06778
[ "1506.02438" ]
1604.06778#29
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#28
1604.06778#30
1604.06778
[ "1506.02438" ]
1604.06778#30
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#29
1604.06778#31
1604.06778
[ "1506.02438" ]
1604.06778#31
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 numeric entries, continued.]
1604.06778#30
1604.06778#32
1604.06778
[ "1506.02438" ]
1604.06778#32
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 row labels: Cart-Pole Balancing, Inverted Pendulum, Mountain Car, Acrobot, Double Inverted Pendulum, Swimmer, Hopper, 2D Walker, Half-Cheetah, Ant, Simple Humanoid, Full Humanoid, the (LS), (NO), and (SI) partially observable variants of the basic tasks, Swimmer + Gathering, Ant + Gathering, Swimmer + Maze, and Ant + Maze.]
1604.06778#31
1604.06778#33
1604.06778
[ "1506.02438" ]
1604.06778#33
Benchmarking Deep Reinforcement Learning for Continuous Control
[Table 1 footnote a: Except for the hierarchical tasks.] Figure 3. Performance as a function of the number of iterations; the shaded area depicts the mean ± the standard deviation over five different random seeds: (a) Performance comparison of all algorithms in terms of the average reward on the Walker task; (b) Comparison between REINFORCE, TNPG, and TRPO in terms of the mean KL-divergence on the Walker task; (c) Performance comparison of TNPG and TRPO on the Swimmer task; (d) Performance comparison of all algorithms in terms of the average reward on the Half-Cheetah task. policy update by performing a line search in the natural gradient direction to ensure an improvement in the surrogate loss function. We observe that hyperparameter grid search tends to select conservative step sizes ($\delta_{KL}$) for TNPG, which alleviates the issue of performance collapse caused by a large update to the policy. By contrast, TRPO can robustly enforce constraints with a larger $\delta_{KL}$ value and hence speeds up learning in some cases. For instance, grid search on the Swimmer task reveals that the best step size for TNPG is $\delta_{KL} = 0.05$, whereas TRPO's best step size is larger: $\delta_{KL} = 0.1$. As shown in Figure 3(c), this larger step size enables slightly faster learning. tain basic tasks such as Cart-Pole Balancing and Mountain Car, suggesting that the dimension of the searching parameter is not always the limiting factor of the method. However, the performance degrades quickly as the system dynamics becomes more complicated. We also observe that CEM outperforms CMA-ES, which is remarkable as CMA-ES estimates the full covariance matrix. For higher-dimensional policy parameterizations, the computational complexity and memory requirement for CMA-ES become noticeable. On tasks with high-dimensional observations, such as the Full Humanoid, the CMA-ES algorithm runs out of memory and fails to yield any results, denoted as N/A in Table 1.
1604.06778#32
1604.06778#34
1604.06778
[ "1506.02438" ]
1604.06778#34
Benchmarking Deep Reinforcement Learning for Continuous Control
RWR: RWR is the only gradient-based algorithm we implemented that does not require any hyperparameter tuning. It can solve some basic tasks to a satisfactory degree, but fails to solve more challenging tasks such as locomotion. We observe empirically that RWR shows fast initial improvement followed by significant slow-down, as shown in Figure 3(d). REPS: Our main observation is that REPS is especially prone to early convergence to local optima in case of continuous states and actions. Its final outcome is greatly affected by the performance of the initial policy, an observation that is consistent with the original work of Peters et al. (2010). This leads to a bad performance on average, although under particular initial settings the algorithm can perform on par with others. Moreover, the tasks presented here do not assume the existence of a stationary distribution, which is assumed in Peters et al. (2010). In particular, for many of our tasks, transient behavior is of much greater interest than steady-state behavior, which agrees with previous observation by van Hoof et al. (2015). Gradient-free methods: Surprisingly, even when training deep neural network policies with thousands of parameters, CEM achieves very good performance on cer- DDPG: Compared to batch algorithms, we found that DDPG was able to converge significantly faster on certain tasks like Half-Cheetah due to its greater sample efficiency. However, it was less stable than batch algorithms, and the performance of the policy can degrade significantly during training. We also found it to be more susceptible to scaling of the reward. In our experiment for DDPG, we rescaled the reward of all tasks by a factor of 0.1, which seems to improve the stability. Partially Observable Tasks: We experimentally verify that recurrent policies can find better solutions than feed-forward policies in Partially Observable Tasks but recurrent policies are also more difficult to train. As shown in Table 1, derivative-free algorithms like CEM and CMA-ES work considerably worse with recurrent policies.
1604.06778#33
1604.06778#35
1604.06778
[ "1506.02438" ]
1604.06778#35
Benchmarking Deep Reinforcement Learning for Continuous Control
Also we note that the performance gap between REINFORCE and TNPG widens when they are applied to optimize recurrent policies, which can be explained by the fact that a small change in parameter space can result in a bigger change in policy distribution with recurrent policies than with feed-forward policies. Hierarchical Tasks: We observe that all of our implemented algorithms achieve poor performance on the hierarchical tasks, even with extensive hyperparameter search and 500 iterations of training. It is an interesting direction to develop algorithms that can automatically discover and exploit the hierarchical structure in these tasks.
1604.06778#34
1604.06778#36
1604.06778
[ "1506.02438" ]
1604.06778#36
Benchmarking Deep Reinforcement Learning for Continuous Control
# 7. Related Work In this section, we review existing benchmarks of continuous control tasks. The earliest efforts of evaluating reinforcement learning algorithms started in the form of individual control problems described in symbolic form. Some widely adopted tasks include the inverted pendulum (Stephenson, 1908; Donaldson, 1960; Widrow, 1964), mountain car (Moore, 1990), and Acrobot (DeJong & Spong, 1994). These problems are frequently incorporated into more comprehensive benchmarks. Some reinforcement learning benchmarks contain low-dimensional continuous control tasks, such as the ones introduced above, including RLLib (Abeyruwan, 2013), MMLF (Metzen & Edgington, 2011), RL-Toolbox (Neumann, 2006), JRLF (Kochenderfer, 2006), Beliefbox (Dimitrakakis et al., 2007), Policy Gradient Toolbox (Peters, 2002), and ApproxRL (Busoniu, 2010). A series of RL competitions has also been held in recent years (Dutech et al., 2005; Dimitrakakis et al., 2014), again with relatively low-dimensional actions. In contrast, our benchmark contains a wider range of tasks with high-dimensional continuous state and action spaces. variety of challenging tasks. We implemented several reinforcement learning algorithms, and presented them in the context of general policy parameterizations. Results show that among the implemented algorithms, TNPG, TRPO, and DDPG are effective methods for training deep neural network policies. Still, the poor performance on the proposed hierarchical tasks calls for new algorithms to be developed. Implementing and evaluating existing and newly proposed algorithms will be our continued effort. By providing an open-source release of the benchmark, we encourage other researchers to evaluate their algorithms on the proposed tasks. # Acknowledgements We thank Emo Todorov and Yuval Tassa for providing the MuJoCo simulator, and Sergey Levine, Aviv Tamar, Chelsea Finn, and the anonymous ICML reviewers for insightful comments. We also thank Shixiang Gu and Timothy Lillicrap for helping us diagnose the DDPG implementation.
1604.06778#35
1604.06778#37
1604.06778
[ "1506.02438" ]
1604.06778#37
Benchmarking Deep Reinforcement Learning for Continuous Control
This work was supported in part by DARPA, the Berkeley Vision and Learning Center (BVLC), the Berkeley Artificial Intelligence Research (BAIR) laboratory, and Berkeley Deep Drive (BDD). Rein Houthooft is supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO). # References Previously, other benchmarks have been proposed for high-dimensional control tasks. Tdlearn (Dann et al., 2014) includes a 20-link pole balancing task, DotRL (Papis & Wawrzyński, 2013) includes a variable-DOF octopus arm and a 6-DOF planar cheetah model, PyBrain (Schaul et al., 2010) includes a 16-DOF humanoid robot with standing and jumping tasks, RoboCup Keepaway (Stone et al., 2005) is a multi-agent game which can have a flexible dimension of actions by varying the number of agents, and SkyAI (Yamaguchi & Ogasawara, 2010) includes a 17-DOF humanoid robot with crawling and turning tasks. Other libraries such as CL-Square (Riedmiller et al., 2012) and RLPark (Degris et al., 2013) provide interfaces to actual hardware, e.g., Bioloid and iRobot Create. In contrast to these aforementioned testbeds, our benchmark makes use of simulated environments to reduce computation time and to encourage experimental reproducibility. Furthermore, it provides a much larger collection of tasks of varying diffi
1604.06778#36
1604.06778#38
1604.06778
[ "1506.02438" ]
1604.06778#38
Benchmarking Deep Reinforcement Learning for Continuous Control
culty. Abeyruwan, S. RLLib: Lightweight standard and on/off policy reinforcement learning library (C++). http://web.cs.miami.edu/home/saminda/rilib.html, 2013. Bagnell, J. A. and Schneider, J. Covariant policy search. pp. 1019–1024. IJCAI, 2003. Bakker, B. Reinforcement learning with long short-term memory. In NIPS, pp. 1475–
1604.06778#37
1604.06778#39
1604.06778
[ "1506.02438" ]
1604.06778#39
Benchmarking Deep Reinforcement Learning for Continuous Control
1482, 2001. Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. The Arcade Learning Environment: An evaluation platform for general agents. J. Artif. Intell. Res., 47:253–279, 2013. Bellman, R. Dynamic Programming. Princeton University Press, 1957. Bertsekas, Dimitri P and Tsitsiklis, John N. Neuro-dynamic programming: an overview. In CDC, pp. 560–564, 1995. Busoniu, L. ApproxRL: A Matlab toolbox for approximate RL and DP. http://busoniu.net/files/repository/readme-approxrl.html, 2010. Catto, E. Box2D: A 2D physics engine for games, 2011. Coulom, Rémi. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002.
1604.06778#38
1604.06778#40
1604.06778
[ "1506.02438" ]
1604.06778#40
Benchmarking Deep Reinforcement Learning for Continuous Control
In CDC, pp. 560â 564, 1995. Busoniu, L. ApproxRL: A Matlab toolbox for approxi- http://busoniu.net/ï¬ les/repository/ mate RL and DP. readme-approxrl.html, 2010. Catto, E. Box2D: A 2D physics engine for games, 2011. Coulom, R´emi. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut Na- tional Polytechnique de Grenoble-INPG, 2002.
1604.06778#39
1604.06778#41
1604.06778
[ "1506.02438" ]
1604.06778#41
Benchmarking Deep Reinforcement Learning for Continuous Control
# 8. Conclusion Dann, C., Neumann, G., and Peters, J. Policy evaluation with tem- poral differences: A survey and comparison. J. Mach. Learn. Res., 15(1):809â 883, 2014. In this work, a benchmark of continuous control problems for reinforcement learning is presented, covering a wide Degris, T., B´echu, J., White, A., Modayil, J., Pilarski, P. M., and Denk, C. RLPark. http://rlpark.github.io, 2013. Benchmarking Deep Reinforcement Learning for Continuous Control Deisenroth, M. P., Neumann, G., and Peters, J.
1604.06778#40
1604.06778#42
1604.06778
[ "1506.02438" ]
1604.06778#42
Benchmarking Deep Reinforcement Learning for Continuous Control
A survey on policy search for robotics, foundations and trends in robotics. Found. Trends Robotics, 2(1-2):1â 142, 2013. Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., and Tassa, T. Learning continuous control policies by stochastic value gradients. In NIPS, pp. 2926â 2934. 2015b. DeJong, G. and Spong, M. W. Swinging up the Acrobot: An example of intelligent control. In ACC, pp. 2158â 2162, 1994.
1604.06778#41
1604.06778#43
1604.06778
[ "1506.02438" ]
1604.06778#43
Benchmarking Deep Reinforcement Learning for Continuous Control
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248â 255, 2009. Dietterich, T. G. Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res, 13: 227â 303, 2000. Dimitrakakis, C., Tziortziotis, N., and Tossou, A.
1604.06778#42
1604.06778#44
1604.06778
[ "1506.02438" ]
1604.06778#44
Benchmarking Deep Reinforcement Learning for Continuous Control
Beliefbox: A framework for statistical methods in sequential decision mak- ing. http://code.google.com/p/beliefbox/, 2007. Hester, T. and Stone, P. The open-source TEXPLORE code re- lease for reinforcement learning on robots. In RoboCup 2013: Robot World Cup XVII, pp. 536â 543. 2013. Hinton, G., Deng, L., Yu, D., Mohamed, A.-R., Jaitly, N., Se- nior, A., Vanhoucke, V., Nguyen, P., Dahl, T. S. G., and Kings- bury, B.
1604.06778#43
1604.06778#45
1604.06778
[ "1506.02438" ]
1604.06778#45
Benchmarking Deep Reinforcement Learning for Continuous Control
Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Process. Mag, 29(6):82â 97, 2012. Hirsch, H.-G. and Pearce, D. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. In ASR2000-Automatic Speech Recog- nition: Challenges for the new Millenium ISCA Tutorial and Research Workshop (ITRW), 2000. Dimitrakakis, Christos, Li, Guangliang, and Tziortziotis, Nikoa- los. The reinforcement learning competition 2014. AI Maga- zine, 35(3):61â 65, 2014. Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735â 1780, 1997. Donaldson, P. E. K. Error decorrelation: a technique for matching a class of functions. In Proc. 3th Intl. Conf. Medical Electron- ics, pp. 173â 178, 1960. Doya, K.
1604.06778#44
1604.06778#46
1604.06778
[ "1506.02438" ]
1604.06778#46
Benchmarking Deep Reinforcement Learning for Continuous Control
Reinforcement learning in continuous time and space. Neural Comput., 12(1):219â 245, 2000. Kakade, S. M. A natural policy gradient. In NIPS, pp. 1531â 1538. 2002. Kimura, H. and Kobayashi, S. Stochastic real-valued reinforce- ment learning to solve a nonlinear control problem. In IEEE SMC, pp. 510â 515, 1999. Dutech, Alain, Edmunds, Timothy, Kok, Jelle, Lagoudakis, Michail, Littman, Michael, Riedmiller, Martin, Russell, Bryan, Scherrer, Bruno, Sutton, Richard, Timmer, Stephan, et al. Re- inforcement learning benchmarks and bake-offs ii. Advances in Neural Information Processing Systems (NIPS), 17, 2005. Inï¬ nite hori- zon model predictive control for nonlinear periodic tasks. Manuscript under review, 4, 2011. Everingham, M., Van Gool, L., Williams, C. K. I., Winn, J., and Zisserman, A. The pascal visual object classes (VOC) chal- lenge. Int. J. Comput. Vision, 88(2):303â 338, 2010. Kober, J. and Peters, J.
1604.06778#45
1604.06778#47
1604.06778
[ "1506.02438" ]
1604.06778#47
Benchmarking Deep Reinforcement Learning for Continuous Control
Policy search for motor primitives in robotics. In NIPS, pp. 849â 856, 2009. Kochenderfer, M. JRLF: Java reinforcement learning framework. http://mykel.kochenderfer.com/jrlf, 2006. Krizhevsky, A. and Hinton, G. Learning multiple layers of fea- tures from tiny images. Technical report, 2009. Krizhevsky, A., Sutskever, I., and Hinton, G. ImageNet classiï¬ - cation with deep convolutional neural networks. In NIPS, pp. 1097â 1105. 2012. LeCun, Y., Cortes, C., and Burges, C. The MNIST database of handwritten digits, 1998. Fei-Fei, L., Fergus, R., and Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594â 611, 2006. Levine, S. and Koltun, V. Guided policy search. In ICML, pp. 1â 9, 2013. Furuta, K., Okutani, T., and Sone, H.
1604.06778#46
1604.06778#48
1604.06778
[ "1506.02438" ]
1604.06778#48
Benchmarking Deep Reinforcement Learning for Continuous Control
Computer control of a double inverted pendulum. Comput. Electr. Eng., 5(1):67â 84, 1978. Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., and Pal- lett, D. S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI/Recon Technical Report N, 93, 1993. Godfrey, J. J., Holliman, E. C., and McDaniel, J. SWITCH- BOARD: Telephone speech corpus for research and develop- ment. In ICASSP, pp. 517â 520, 1992. Gomez, F. and Miikkulainen, R. 2-d pole balancing with recurrent evolutionary networks. In ICANN, pp. 425â 430. 1998. Guo, X., Singh, S., Lee, H., Lewis, R. L., and Wang, X.
1604.06778#47
1604.06778#49
1604.06778
[ "1506.02438" ]
1604.06778#49
Benchmarking Deep Reinforcement Learning for Continuous Control
Deep learning for real-time Atari game play using ofï¬ ine monte- carlo tree search planning. In NIPS, pp. 3338â 3346. 2014. Hansen, N. and Ostermeier, A. Completely derandomized self- adaptation in evolution strategies. Evol. Comput., 9(2):159â 195, 2001. Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end train- ing of deep visuomotor policies. arXiv:1504.00702, 2015. Lillicrap, T., Hunt, J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep re- inforcement learning. arXiv:1509.02971, 2015. Martin, D., C. Fowlkes, D. Tal, and Malik, J.
1604.06778#48
1604.06778#50
1604.06778
[ "1506.02438" ]
1604.06778#50
Benchmarking Deep Reinforcement Learning for Continuous Control
A database of human segmented natural images and its application to evaluating seg- mentation algorithms and measuring ecological statistics. In ICCV, pp. 416â 423, 2001. Metzen, J. M. and Edgington, M. Maja machine learning frame- work. http://mloss.org/software/view/220/, 2011. Michie, D. and Chambers, R. A. BOXES: An experiment in adap- tive control. Machine Intelligence, 2:137â 152, 1968. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518(7540):529â 533, 2015. Heess, N., Hunt, J., Lillicrap, T., and Silver, D. Memory-based arXiv:1512.04455, control with recurrent neural networks. 2015a. Moore, A.
1604.06778#49
1604.06778#51
1604.06778
[ "1506.02438" ]
1604.06778#51
Benchmarking Deep Reinforcement Learning for Continuous Control
Efï¬ cient memory-based learning for robot control. Technical report, University of Cambridge, Computer Labora- tory, 1990. Benchmarking Deep Reinforcement Learning for Continuous Control Murray, R. M. and Hauser, J. A case study in approximate lin- earization: The Acrobot example. Technical report, UC Berke- ley, EECS Department, 1991. mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artiï¬ cial intelligence, 112(1):181â 211, 1999. Murthy, S. S. and Raibert, M. H. 3D balance in legged locomo- tion: modeling and simulation for the one-legged case. ACM SIGGRAPH Computer Graphics, 18(1):27â 27, 1984. Neumann, G.
1604.06778#50
1604.06778#52
1604.06778
[ "1506.02438" ]
1604.06778#52
Benchmarking Deep Reinforcement Learning for Continuous Control
A reinforcement learning toolbox and RL bench- marks for the control of dynamical systems. Dynamical prin- ciples for neuroscience and intelligent biomimetic devices, pp. 113, 2006. Papis, B. and Wawrzy´nski, P. dotrl: A platform for rapid rein- forcement learning methods development and validation. In FedCSIS, pp. pages 129â 136., 2013. Parr, Ronald and Russell, Stuart.
1604.06778#51
1604.06778#53
1604.06778
[ "1506.02438" ]
1604.06778#53
Benchmarking Deep Reinforcement Learning for Continuous Control
Reinforcement learning with hierarchies of machines. Advances in neural information pro- cessing systems, pp. 1043â 1049, 1998. Szita, I. and LË orincz, A. Learning Tetris using the noisy cross- entropy method. Neural Comput., 18(12):2936â 2941, 2006. Szita, I., Tak´acs, B., and L¨orincz, A. ε-MDPs: Learning in vary- ing environments. J. Mach. Learn. Res., 3:145â 174, 2003. Tassa, Yuval, Erez, Tom, and Todorov, Emanuel.
1604.06778#52
1604.06778#54
1604.06778
[ "1506.02438" ]
1604.06778#54
Benchmarking Deep Reinforcement Learning for Continuous Control
Synthesis and stabilization of complex behaviors through online trajectory optimization. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 4906â 4913. IEEE, 2012. Tesauro, G. Temporal difference learning and TD-Gammon. Commun. ACM, 38(3):58â 68, 1995. Todorov, E., Erez, T., and Tassa, Y. MuJoCo: A physics engine for model-based control. In IROS, pp. 5026â 5033, 2012. http://www.ausy. tu-darmstadt.de/Research/PolicyGradientToolbox, 2002. Peters, J. and Schaal, S. Reinforcement learning by reward- In ICML, weighted regression for operational space control. pp. 745â 750, 2007. Peters, J. and Schaal, S. Reinforcement learning of motor skills with policy gradients. Neural networks, 21(4):682â 697, 2008. Peters, J., Vijaykumar, S., and Schaal, S. Policy gradient methods for robot control. Technical report, 2003. Peters, J., M¨ulling, K., and Alt¨un, Y. Relative entropy policy search. In AAAI, pp. 1607â 1612, 2010. Purcell, E. M. Life at low Reynolds number. Am. J. Phys, 45(1): 3â 11, 1977. van Hoof, H., Peters, J., and Neumann, G.
1604.06778#53
1604.06778#55
1604.06778
[ "1506.02438" ]
1604.06778#55
Benchmarking Deep Reinforcement Learning for Continuous Control
Learning of non- parametric control policies with high-dimensional state fea- tures. In AISTATS, pp. 995â 1003, 2015. Watter, M., Springenberg, J., Boedecker, J., and Riedmiller, M. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, pp. 2728â 2736, 2015. Wawrzy´nski, P. Learning to control a 6-degree-of-freedom walk- ing robot. In IEEE EUROCON, pp. 698â 705, 2007. Widrow, B. Pattern recognition and adaptive control. IEEE Trans. Ind. Appl., 83(74):269â 277, 1964.
1604.06778#54
1604.06778#56
1604.06778
[ "1506.02438" ]
1604.06778#56
Benchmarking Deep Reinforcement Learning for Continuous Control
Wierstra, D., Foerster, A., Peters, J., and Schmidhuber, J. Solv- ing deep memory POMDPs with recurrent policy gradients. In ICANN, pp. 697â 706. 2007. Raibert, M. H. and Hodgins, J. K. Animation of dynamic legged In ACM SIGGRAPH Computer Graphics, vol- locomotion. ume 25, pp. 349â 358, 1991. Williams, R. J.
1604.06778#55
1604.06778#57
1604.06778
[ "1506.02438" ]
1604.06778#57
Benchmarking Deep Reinforcement Learning for Continuous Control
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn., 8: 229â 256, 1992. Riedmiller, M., Blum, M., and Lampe, T. CLS2: Closed loop http://ml.informatik.uni-freiburg.de/ simulation system. research/clsquare, 2012. Yamaguchi, A. and Ogasawara, T. SkyAI: Highly modularized reinforcement learning library. In IEEE-RAS Humanoids, pp. 118â 123, 2010. Rubinstein, R. The cross-entropy method for combinatorial and continuous optimization. Methodol. Comput. Appl. Probab., 1 (2):127â 190, 1999. Yu, D., Ju, Y.-C., Wang, Y.-Y., Zweig, G., and Acero, A. Auto- mated directory assistance system - from theory to practice. In Interspeech, pp. 2709â 2712, 2007. Sch¨afer, A. M. and Udluft, S.
1604.06778#56
1604.06778#58
1604.06778
[ "1506.02438" ]
1604.06778#58
Benchmarking Deep Reinforcement Learning for Continuous Control
Solving partially observable rein- forcement learning problems with recurrent neural networks. In ECML Workshops, pp. 71â 81, 2005. Schaul, T., Bayer, J., Wierstra, D., Sun, Y., Felder, M., Sehnke, F., R¨uckstieà , T., and Schmidhuber, J. PyBrain. J. Mach. Learn. Res., 11:743â 746, 2010. Schulman, J., Levine, S., Abbeel, P., Jordan, M. I., and Moritz, P. Trust region policy optimization. In ICML, pp. 1889â 1897, 2015a. Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized ad- vantage estimation. arXiv:1506.02438, 2015b. Stephenson, A. On induced stability. Philos. Mag., 15(86):233â 236, 1908.
1604.06778#57
1604.06778#59
1604.06778
[ "1506.02438" ]
1604.06778#59
Benchmarking Deep Reinforcement Learning for Continuous Control
# Supplementary Material
# 1. Task Specifications
Below we provide some specifications for the task observations, actions, and rewards. Please refer to the benchmark source code (https://github.com/rllab/rllab) for the complete specification of physics parameters.
# 1.1. Basic Tasks
Cart-Pole Balancing: In this task, an inverted pendulum is mounted on a pivot point on a cart. The cart itself is restricted to linear movement, achieved by applying horizontal forces.
1604.06778#58
1604.06778#60
1604.06778
[ "1506.02438" ]
1604.06778#60
Benchmarking Deep Reinforcement Learning for Continuous Control
Due to the system's inherent instability, continuous cart movement is needed to keep the pendulum upright. The observation consists of the cart position x, the pole angle θ, the cart velocity ẋ, and the pole velocity θ̇. The 1D action consists of the horizontal force applied to the cart body. The reward function is given by r(s, a) := 10 − (1 − cos(θ)) − 10⁻⁵‖a‖². The episode terminates when |x| > 2.4 or |θ| > 0.2.
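As a concrete illustration, here is a minimal sketch of the balancing reward and termination rule described above; the function name and NumPy-based interface are our own illustrative choices rather than the benchmark's actual API.

```python
import numpy as np

def cartpole_balance_reward(x, theta, action):
    """Reward and termination as described above (action is the 1-D force)."""
    a = np.atleast_1d(action)
    reward = 10.0 - (1.0 - np.cos(theta)) - 1e-5 * float(np.sum(a ** 2))
    done = abs(x) > 2.4 or abs(theta) > 0.2
    return reward, done

print(cartpole_balance_reward(x=0.1, theta=0.05, action=[0.5]))
```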
1604.06778#59
1604.06778#61
1604.06778
[ "1506.02438" ]
1604.06778#61
Benchmarking Deep Reinforcement Learning for Continuous Control
Cart-Pole Swing Up: This is a more complicated version of the previous task, in which the system should not only be able to balance the pole, but first succeed in swinging it up into an upright position. This task extends the working range of the inverted pendulum to 360°. It is a nonlinear extension of the previous task, and it has the same observation and action as in balancing. The reward function is given by r(s, a) := cos(θ). The episode terminates when |x| > 3, with a penalty of −100.
1604.06778#60
1604.06778#62
1604.06778
[ "1506.02438" ]
1604.06778#62
Benchmarking Deep Reinforcement Learning for Continuous Control
Mountain Car: In this task, a car has to escape a valley by repetitive application of tangential forces. Because the maximal tangential force is limited, the car has to alternately drive up along the two slopes of the valley in order to build up enough inertia to overcome gravity. This brings a challenge of exploration, since before first reaching the goal among all trials, a locally optimal solution exists, which is to drive to the point closest to the target and stay there for the rest of the episode. The observation is given by the horizontal position x and the horizontal velocity ẋ of the car. The reward is given by r(s, a) := −1 + height, where height is the car's vertical offset. The episode terminates when the car reaches a target height of 0.6. Hence the goal is to reach the target as soon as possible.
Acrobot Swing Up: In this task, an under-actuated, two-link robot has to swing itself into an upright position. It consists of two joints, of which the first one has a fixed position and only the second one can exert torque. The goal is to swing the robot into an upright position and stabilize around that position. The controller not only has to swing the pendulum in order to build up inertia, similar to the Mountain Car task, but also has to decelerate it in order to prevent it from tipping over. The observation includes the two joint angles, θ1 and θ2, and their velocities, θ̇1 and θ̇2. The action is the torque applied at the second joint. The reward is defined as r(s, a) := −‖tip(s) − tip_target‖₂, where tip(s) computes the Cartesian position of the tip of the robot given the joint angles.
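The tip-distance reward for the Acrobot can be sketched as follows, assuming unit link lengths and angles measured from the downward vertical; these conventions and names are illustrative assumptions, not the benchmark's exact implementation.

```python
import numpy as np

def tip(theta1, theta2, l1=1.0, l2=1.0):
    # Cartesian position of the end of the second link.
    x = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    y = -l1 * np.cos(theta1) - l2 * np.cos(theta1 + theta2)
    return np.array([x, y])

def acrobot_reward(theta1, theta2, l1=1.0, l2=1.0):
    target = np.array([0.0, l1 + l2])            # tip pointing straight up
    return -np.linalg.norm(tip(theta1, theta2, l1, l2) - target)

print(acrobot_reward(np.pi, 0.0))                # upright position gives reward ~0
```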
1604.06778#61
1604.06778#63
1604.06778
[ "1506.02438" ]
1604.06778#63
Benchmarking Deep Reinforcement Learning for Continuous Control
No termination condition is applied.
Double Inverted Pendulum Balancing: This task extends the Cart-Pole Balancing task by replacing the single-link pole with a two-link rigid structure. As in the former task, the goal is to stabilize the two-link pole near the upright position. This task is more difficult than single-pole balancing, since the system is even more unstable and requires the controller to actively maintain balance. The observation includes the cart position x, the joint angles (θ1 and θ2), and the joint velocities (θ̇1 and θ̇2). We encode each joint angle as its sine and cosine values. The action is the same as in the cart-pole tasks. The reward is given by r(s, a) := 10 − 0.01x_tip² − (y_tip − 2)², where x_tip, y_tip are the coordinates of the tip of the pole.
1604.06778#62
1604.06778#64
1604.06778
[ "1506.02438" ]
1604.06778#64
Benchmarking Deep Reinforcement Learning for Continuous Control
The episode is terminated when y_tip ≤ 1.
# 1.2. Locomotion Tasks
Swimmer: The swimmer is a planar robot with 3 links and 2 actuated joints. Fluid is simulated through viscosity forces, which apply drag on each link, allowing the swimmer to move forward. This task is the simplest of all the locomotion tasks, since there are no irrecoverable states in which the swimmer can get stuck, unlike other robots which may fall down or flip over.
1604.06778#63
1604.06778#65
1604.06778
[ "1506.02438" ]
1604.06778#65
Benchmarking Deep Reinforcement Learning for Continuous Control
This places less burden on exploration. The 13-dim observation includes the joint angles, the joint velocities, as well as the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖², where v_x is the forward velocity. No termination condition is applied.
Hopper: The hopper is a planar monopod robot with 4 rigid links, corresponding to the torso, upper leg, lower leg, and foot, along with 3 actuated joints. More exploration is needed than in the swimmer task, since a stable hopping gait has to be learned without falling. Otherwise, it may get stuck in a local optimum of diving forward. The 20-dim observation includes joint angles, joint velocities, the coordinates of the center of mass, and constraint forces.
1604.06778#64
1604.06778#66
1604.06778
[ "1506.02438" ]
1604.06778#66
Benchmarking Deep Reinforcement Learning for Continuous Control
The reward is given by r(s, a) := v_x − 0.005‖a‖² + 1, where the last term is a bonus for being "alive." The episode is terminated when z_body < 0.7, where z_body is the z-coordinate of the body, or when |θ_y| < 0.2, where θ_y is the forward pitch of the body.
Walker: The walker is a planar biped robot consisting of 7 links, corresponding to two legs and a torso, along with 6 actuated joints. This task is more challenging than hopper, since it has more degrees of freedom and is also prone to falling. The 21-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.005‖a‖². The episode is terminated when z_body < 0.8, z_body > 2.0, or when |θ_y| > 1.0.
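The locomotion rewards above share the same shape (forward velocity, minus a scaled squared control cost, plus an optional alive bonus); a small hedged sketch follows, with the helper name and defaults chosen for illustration only.

```python
import numpy as np

def locomotion_reward(forward_velocity, action, ctrl_coeff=0.005, alive_bonus=0.0):
    """Forward velocity minus a scaled squared control cost, plus an optional alive bonus."""
    return forward_velocity - ctrl_coeff * float(np.sum(np.square(action))) + alive_bonus

print(locomotion_reward(1.2, np.zeros(3)))                    # swimmer/walker style: no bonus
print(locomotion_reward(1.2, np.zeros(3), alive_bonus=1.0))   # hopper style: +1 alive bonus
```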
1604.06778#65
1604.06778#67
1604.06778
[ "1506.02438" ]
1604.06778#67
Benchmarking Deep Reinforcement Learning for Continuous Control
Half-Cheetah: The half-cheetah is a planar biped robot with 9 rigid links, including two legs and a torso, along with 6 actuated joints. The 20-dim observation includes joint angles, joint velocities, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 0.05‖a‖². No termination condition is applied.
Ant: The ant is a quadruped with 13 rigid links, including four legs and a torso, along with 8 actuated joints. This task is more challenging than the previous tasks due to the higher degrees of freedom. The 125-dim observation includes joint angles, joint velocities, coordinates of the center of mass, a (usually sparse) vector of contact forces, as well as the rotation matrix for the body. The reward is given by r(s, a) := v_x − 0.005‖a‖² − C_contact + 0.05, where C_contact penalizes contact with the ground and is given by 5 × 10⁻⁴ · ‖F_contact‖², where F_contact is the contact force vector clipped to values between −1 and 1. The episode is terminated when z_body < 0.2 or when z_body > 1.0.
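A sketch of the Ant's contact penalty and reward as described, with the clipping applied before the squared norm; the function names are ours and all observation handling is omitted.

```python
import numpy as np

def contact_cost(contact_forces, coeff=5e-4, clip=1.0):
    """C_contact: clip the contact force vector to [-clip, clip], then scale its squared norm."""
    f = np.clip(np.asarray(contact_forces, dtype=float), -clip, clip)
    return coeff * float(np.sum(f ** 2))

def ant_reward(v_x, action, contact_forces):
    return v_x - 0.005 * float(np.sum(np.square(action))) - contact_cost(contact_forces) + 0.05

print(ant_reward(1.0, np.zeros(8), np.array([0.0, 3.0, -2.0])))
```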
1604.06778#66
1604.06778#68
1604.06778
[ "1506.02438" ]
1604.06778#68
Benchmarking Deep Reinforcement Learning for Continuous Control
Simple Humanoid: This is a simplified humanoid model with 13 rigid links, including the head, body, arms, and legs, along with 10 actuated joints. The increased difficulty comes from the increased degrees of freedom as well as the need to maintain balance. The 102-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward is given by r(s, a) := v_x − 5 × 10⁻⁴‖a‖² − C_contact − C_deviation + 0.2, where C_contact = 5 × 10⁻⁶ · ‖F_contact‖² and C_deviation = 5 × 10⁻³ · (v_y² + v_z²) penalizes deviation from the forward direction. The episode is terminated when z_body < 0.8 or when z_body > 2.0.
Full Humanoid: This is a humanoid model with 19 rigid links and 28 actuated joints. It has more degrees of freedom below the knees and elbows, which makes the system higher-dimensional and harder for learning. The 142-dim observation includes the joint angles, joint velocities, vector of contact forces, and the coordinates of the center of mass. The reward and termination condition are the same as in the Simple Humanoid model.
# 1.3. Partially Observable Tasks
Limited Sensors:
1604.06778#67
1604.06778#69
1604.06778
[ "1506.02438" ]
1604.06778#69
Benchmarking Deep Reinforcement Learning for Continuous Control
The full description is included in the main text.
Noisy Observations and Delayed Actions: For all tasks, we use Gaussian noise with σ = 0.1. The time delay is as follows: Cart-Pole Balancing 0.15 sec, Cart-Pole Swing Up 0.15 sec, Mountain Car 0.15 sec, Acrobot Swing Up 0.06 sec, and Double Inverted Pendulum Balancing 0.06 sec. This corresponds to 3 discretization frames for each task.
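One possible way to realize these two modifications is an environment wrapper that perturbs observations with Gaussian noise and feeds actions through a fixed-length queue; the reset/step interface below is an assumed, illustrative one rather than the benchmark's actual classes.

```python
import collections
import numpy as np

class NoisyDelayedWrapper:
    """Sketch: Gaussian observation noise (sigma = 0.1) plus a fixed action delay
    of `delay_steps` control steps. The wrapped `env` is assumed to expose
    reset() -> obs and step(action) -> (obs, reward, done)."""
    def __init__(self, env, sigma=0.1, delay_steps=3, action_dim=1, rng=None):
        self.env, self.sigma = env, sigma
        self.rng = rng or np.random.default_rng(0)
        self.queue = collections.deque([np.zeros(action_dim)] * delay_steps)

    def reset(self):
        obs = self.env.reset()
        return obs + self.rng.normal(0.0, self.sigma, size=np.shape(obs))

    def step(self, action):
        self.queue.append(np.asarray(action))
        delayed = self.queue.popleft()            # act with a stale action
        obs, reward, done = self.env.step(delayed)
        noisy = obs + self.rng.normal(0.0, self.sigma, size=np.shape(obs))
        return noisy, reward, done
```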
1604.06778#68
1604.06778#70
1604.06778
[ "1506.02438" ]
1604.06778#70
Benchmarking Deep Reinforcement Learning for Continuous Control
System Identifications: For Cart-Pole Balancing and Cart-Pole Swing Up, the pole length is varied uniformly between 50% and 150%. For Mountain Car, the width of the valley varies uniformly between 75% and 125%. For Acrobot Swing Up, each of the pole lengths varies uniformly between 50% and 150%. For Double Inverted Pendulum Balancing, each of the pole lengths varies uniformly between 83% and 167%.
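For illustration, resampling a physical parameter uniformly within a percentage band around its nominal value can be done as follows (the nominal pole length used here is only an example value, not the benchmark's):

```python
import numpy as np

def sample_scaled(nominal, low=0.5, high=1.5, rng=np.random.default_rng(0)):
    """Draw a parameter uniformly between low*nominal and high*nominal,
    e.g. a pole length at 50%-150% of its reference value."""
    return nominal * rng.uniform(low, high)

print(sample_scaled(0.6))   # one sampled pole length per episode
```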
1604.06778#69
1604.06778#71
1604.06778
[ "1506.02438" ]
1604.06778#71
Benchmarking Deep Reinforcement Learning for Continuous Control
Please refer to the benchmark source code for reference values.
# 1.4. Hierarchical Tasks
Locomotion + Food Collection: During each episode, 8 food units and 8 bombs are placed in the environment. Collecting a food unit gives +1 reward, and collecting a bomb gives −1 reward. Hence the best cumulative reward for a given episode is 8.
Locomotion + Maze: During each episode, a +1 reward is given when the robot reaches the goal. Otherwise, the robot receives zero reward throughout the episode.
1604.06778#70
1604.06778#72
1604.06778
[ "1506.02438" ]
1604.06778#72
Benchmarking Deep Reinforcement Learning for Continuous Control
# 2. Experiment Parameters
For all batch gradient-based algorithms, we use the same time-varying feature encoding for the linear baseline: φ(s, t) = concat(s, s ⊙ s, 0.01t, (0.01t)², (0.01t)³, 1), where s is the state vector and ⊙ represents the element-wise product. Table 2 shows the experiment parameters for all four categories. We then detail the hyperparameter search range for the selected tasks and report the best hyperparameters, shown in Tables 3, 4, 5, 6, 7, and 8.

Table 2. Experiment Setup (Basic & Locomotion / Partially Observable / Hierarchical): batch size 50,000 / 50,000 / 50,000; discount 0.99 / 0.99 / 0.99; horizon 500 / 100 / 500; number of iterations 500 / 300 / 500.

Table 3. Learning Rate α for REINFORCE: search range [1 × 10⁻⁴, 1 × 10⁻¹] for every task; best values: Cart-Pole Swing Up 5 × 10⁻³, Double Inverted Pendulum 5 × 10⁻³, Swimmer 1 × 10⁻², Ant 5 × 10⁻³.

Table 4. Step Size δKL for TNPG: search range [1 × 10⁻³, 5 × 10⁰] for every task; best values: Cart-Pole Swing Up 5 × 10⁻², Double Inverted Pendulum 3 × 10⁻², Swimmer 1 × 10⁻¹, Ant 3 × 10⁻¹.

Table 5. Step Size δKL for TRPO: search range [1 × 10⁻³, 5 × 10⁰] for every task; best values: Cart-Pole Swing Up 5 × 10⁻², Double Inverted Pendulum 1 × 10⁻³, Swimmer 5 × 10⁻², Ant 8 × 10⁻².

Table 6. Step Size δKL for REPS:
1604.06778#71
1604.06778#73
1604.06778
[ "1506.02438" ]
1604.06778#73
Benchmarking Deep Reinforcement Learning for Continuous Control
search range [1 × 10⁻³, 5 × 10⁰] for every task; best values: Cart-Pole Swing Up 1 × 10⁻², Double Inverted Pendulum 8 × 10⁻¹, Swimmer 3 × 10⁻¹, Ant 8 × 10⁻¹.

Table 7. Initial Extra Noise for CEM: search range [1 × 10⁻³, 1] for every task; best values: Cart-Pole Swing Up 1 × 10⁻², Double Inverted Pendulum 1 × 10⁻¹, Swimmer 1 × 10⁻¹, Ant 1 × 10⁻¹.

Table 8. Initial Standard Deviation for CMA-ES: search range [1 × 10⁻³, 1 × 10³] for every task; best values: Cart-Pole Swing Up 1 × 10³, Double Inverted Pendulum 3 × 10⁻¹, Swimmer 1 × 10⁻¹, Ant 1 × 10⁻¹.
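For concreteness, the time-varying feature encoding for the linear baseline defined at the beginning of this section can be sketched as a small helper; the function name is our own.

```python
import numpy as np

def baseline_features(s, t):
    """concat(s, s*s, 0.01t, (0.01t)^2, (0.01t)^3, 1) for the linear baseline."""
    s = np.asarray(s, dtype=float)
    ts = 0.01 * t
    return np.concatenate([s, s * s, [ts, ts ** 2, ts ** 3, 1.0]])

print(baseline_features([0.1, -0.2], t=5).shape)   # (2 + 2 + 4,) = (8,)
```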
1604.06778#72
1604.06778
[ "1506.02438" ]
1604.06174#0
Training Deep Nets with Sublinear Memory Cost
arXiv:1604.06174v2 [cs.LG] 22 Apr 2016
# Training Deep Nets with Sublinear Memory Cost
# Tianqi Chen 1, Bing Xu 2, Chiyuan Zhang 3, and Carlos Guestrin 1
1 University of Washington  2 Dato Inc.  3 Massachusetts Institute of Technology
# Abstract
1604.06174#1
1604.06174
[ "1512.03385" ]
1604.06174#1
Training Deep Nets with Sublinear Memory Cost
We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O(√n) memory to train an n-layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory, giving a more memory-efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(log n) with as little as O(n log n) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G on ImageNet problems.
1604.06174#0
1604.06174#2
1604.06174
[ "1512.03385" ]
1604.06174#2
Training Deep Nets with Sublinear Memory Cost
Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.
# 1 Introduction
In this paper, we propose a systematic approach to reduce the memory consumption of deep neural network training. We mainly focus on reducing the memory cost to store intermediate results (feature maps) and gradients, as the size of the parameters is relatively small compared to the size of the intermediate feature maps in many common deep architectures. We use a computation graph analysis to do automatic in-place operation and memory sharing optimizations. More importantly, we propose a novel method to trade computation for memory. As a result, we give a practical algorithm that costs O(√n) memory for feature maps to train an n-layer network with only double the forward pass computational cost. Interestingly, we also show that in the extreme case, it is possible to use as little as O(log n) memory for the feature maps to train an n-layer network.
We have recently witnessed the success of deep neural networks in many domains [8], such as computer vision, speech recognition, natural language processing and reinforcement learning. Much of this success is brought by innovations in new architectures of deep neural networks. Convolutional neural networks [15, 14, 13, 10] model spatial patterns and give state-of-the-art results in computer vision tasks. Recurrent neural networks, such as long short-term memory [12], show inspiring results in sequence modeling and structure prediction. One common trend in these new models is to use deeper architectures [18, 14, 13, 10] to capture the complex patterns in a large amount of training data. Since the cost of storing feature maps and their gradients scales linearly with the depth of the network, our capability of exploring deeper models is limited by device (usually GPU) memory. For example, we already run out of memory in one of the current state-of-the-art models, as described in [11]. In the long run, an ideal machine learning system should be able to continuously learn from an increasing amount of training data. Since the optimal model size and complexity often grow with more training data, it is very important to have memory-efficient training algorithms.
1604.06174#1
1604.06174#3
1604.06174
[ "1512.03385" ]
1604.06174#3
Training Deep Nets with Sublinear Memory Cost
Reducing memory consumption not only allows us to train bigger models. It also enables larger batch sizes for better device utilization and stability of batch-wise operators such as batch normalization [13]. For memory-limited devices, it helps improve memory locality and potentially leads to better memory access patterns. It also enables us to switch from model parallelism to data parallelism for training deep convolutional neural networks, which can be beneficial in certain circumstances. Our solution enables us to train deeper convolutional neural networks, as well as recurrent neural networks with longer unrolling steps. We provide guidelines for deep learning frameworks to incorporate the memory optimization techniques proposed in this paper. We will also make our implementation of the memory optimization algorithm publicly available.
# 2 Related Works
We can trace the idea of computation graphs and liveness analysis back to the literature on compiler optimizations [3]. Analogies between optimizing a computer program and optimizing a deep neural network computation graph can be found. For example, memory allocation in deep networks is similar to register allocation in a compiler. The formal analysis of the computation graph allows us to save memory in a principled way. Theano [5, 4] is a pioneering framework that brought the computation graph to deep learning, joined by recently introduced frameworks such as CNTK [2], Tensorflow [1] and MXNet [6]. Theano and Tensorflow use reference-count-based recycling and runtime garbage collection to manage memory during training, while MXNet uses a static memory allocation strategy prior to the actual computation. However, most of the existing frameworks focus on graph analysis to optimize computation after the gradient graph is constructed, but do not discuss the computation and memory trade-off.
The trade-off between memory and computation has been a long-standing topic in systems research. Although not widely known, the idea of dropping intermediate results is also known as the gradient checkpointing technique in the automatic differentiation literature [9]. We bring this idea to neural network gradient graph construction for general deep neural networks. Through discussions with our colleagues [19], we know that the idea of dropping computation has been applied in some limited, specific use-cases. In this paper, we propose a general methodology that works for general deep neural networks, including both convolutional and recurrent neural networks.
1604.06174#2
1604.06174#4
1604.06174
[ "1512.03385" ]
1604.06174#4
Training Deep Nets with Sublinear Memory Cost
Our results show that it is possible to train a general deep neural network with sublinear memory cost. More importantly, we propose an automatic planning algorithm to provide a good memory plan for real use-cases. The proposed gradient graph optimization algorithm can be readily combined with all the existing memory optimizations in the computation graph to further reduce the memory consumption of deep learning frameworks.
There are other ways to train big models, such as swapping of CPU/GPU memory and use of model parallel training [7, 16]. These are orthogonal approaches and can be used together with our algorithm to train even bigger models with fewer resources. Moreover, our algorithm does not need additional communication over PCI-E and can save the bandwidth for model/data parallel training.
# 3 Memory Optimization with Computation Graph
We start by reviewing the concept of the computation graph and the memory optimization techniques. Some of these techniques are already used by existing frameworks such as Theano [5, 4], Tensorflow [1] and MXNet [6]. A computation graph consists of operational nodes and edges that represent the dependencies between the operations. Fig. 1 gives an example of the computation graph of a two-layer fully connected neural network. Here we use coarse-grained forward and backward operations to make the graph simpler. We further simplify the graph by hiding the weight nodes and the gradients of the weights. A computation graph used in practice can be more complicated and contains a mixture of fine/coarse-grained operations. The analysis presented in this paper can be directly used in those more general cases.
1604.06174#3
1604.06174#5
1604.06174
[ "1512.03385" ]
1604.06174#5
Training Deep Nets with Sublinear Memory Cost
Once the network configuration (forward graph) is given, we can construct the corresponding backward pathway for gradient calculation. A backward pathway can be constructed by traversing
1604.06174#4
1604.06174#6
1604.06174
[ "1512.03385" ]
1604.06174#6
Training Deep Nets with Sublinear Memory Cost
_] label log-loss F «4 ]] label data dependency [J Memory allocation for each output of op, same color indicates shared memory. # Network â â Figure 1: Computation graph and possible memory allocation plan of a two layer fully connected neural network training procedure. Each node represents an operation and each edge represents a dependency between the operations. The nodes with the same color share the memory to store output or back-propagated gradient in each operator. To make the graph more clearly, we omit the weights and their output gradient nodes from the graph and assume that the gradient of weights are also calculated during backward operations. We also annotate two places where the in-place and sharing strategies are used.
1604.06174#5
1604.06174#7
1604.06174
[ "1512.03385" ]
1604.06174#7
Training Deep Nets with Sublinear Memory Cost
the conï¬ guration in reverse topological order, and apply the backward operators as in normal back- propagation algorithm. The backward pathway in Fig. 1 represents the gradient calculation steps explicitly, so that the gradient calculation step in training is simpliï¬ ed to just a forward pass on the entire computation graph (including the gradient calculation pathway). Explicit gradient path also offers some other beneï¬ ts (e.g. being able to calculate higher order gradients), which is beyond our scope and will not be covered in this paper. When training a deep convolutional/recurrent network, a great proportion of the memory is usu- ally used to store the intermediate outputs and gradients. Each of these intermediate results corre- sponds to a node in the graph. A smart allocation algorithm is able to assign the least amount of memory to these nodes by sharing memory when possible. Fig. 1 shows a possible allocation plan of the example two-layer neural network. Two types of memory optimizations can be used
1604.06174#6
1604.06174#8
1604.06174
[ "1512.03385" ]
1604.06174#8
Training Deep Nets with Sublinear Memory Cost
â ¢ Inplace operation: Directly store the output values to memory of a input value. â ¢ Memory sharing: Memory used by intermediate results that are no longer needed can be recycled and used in another node. Allocation plan in Fig. 1 contains examples of both cases. The ï¬ rst sigmoid transformation is carried out using inplace operation to save memory, which is then reused by its backward operation. The storage of the softmax gradient is shared with the gradient by the ï¬ rst fully connected layer. Ad hoc application of these optimizations can leads to errors. For example, if the input of an operation is still needed by another operation, applying inplace operation on the input will lead to a wrong result. We can only share memory between the nodes whose lifetime do not overlap. There are multiple ways to solve this problem. One option is to construct the conï¬ icting graph of with each variable as node and edges between variables with overlapping lifespan and then run a graph-coloring al- gorithm. This will cost O(n2) computation time. We adopt a simpler heuristic with only O(n) time. The algorithm is demonstrated in Fig. 2. It traverses the graph in topological order, and uses a counter to indicate the liveness of each record. An inplace operation can happen when there is no other pending operations that depend on its input. Memory sharing happens when a recycled tag is used by another node. This can also serve as a dynamic runtime algorithm that traverses the graph, and use a garbage collector to recycle the outdated memory. We use this as a static memory allocation algorithm, to allocate the memory to each node before the execution starts, in order to avoid the overhead of garbage collection during runtime. Guidelines for Deep Learning Frameworks As we can see from the algorithm demonstration graph in Fig. 2. The data dependency causes longer lifespan of each output and increases the memory
1604.06174#7
1604.06174#9
1604.06174
[ "1512.03385" ]
1604.06174#9
Training Deep Nets with Sublinear Memory Cost
3 B= A a ° I bo 2 apt ° sigmoid(A) 9 2 a 2 a KgmoldtA) Ee = Pooling(B) i ot ta a MU oy, T1 ear v1 1 () () Cesigmoista) BL Ty . H 4 ' â ; i + : a é bey od a E=Pooling(c) S---- 2 a f 1 1 Grete 1 1 14 1 1 C) Initial state of step 1: Allocate step 2: Allocate tag step 3: Allocate tag step 4: Reuse the tag step 5: Re-use tag of E, allocation algorithm tag for B for C, cannot do forF, release space in the box for E This is an inplace inplace because B is ofB optimization : siil alive Final Memory Plan GH internal arrays, same color indicates shared Tag used to indicate memory sharing tT memory. a allocation Algorithm. count ef counter on dependent operations that, yetto be fullfilled Box of free tags in allocation algorithm. > data dependency, operation completed _---» data dependency, operation not completed Figure 2: Memory allocation algorithm on computation graph. Each node associated with a liveness counter to count on operations to be full-ï¬
1604.06174#8
1604.06174#10
1604.06174
[ "1512.03385" ]
1604.06174#10
Training Deep Nets with Sublinear Memory Cost
lled. A temporal tag is used to indicate memory sharing. Inplace operation can be carried out when the current operations is the only one left (input of counter equals 1). The tag of a node can be recycled when the nodeâ s counter goes to zero. consumption of big network. It is important for deep learning frameworks to â ¢ Declare the dependency requirements of gradient operators in minimum manner. â ¢ Apply liveness analysis on the dependency information and enable memory sharing. It is important to declare minimum dependencies. For example, the allocation plan in Fig. 1 wonâ t be possible if sigmoid-backward also depend on the output of the ï¬ rst fullc-forward. The dependency analysis can usually reduce the memory footprint of deep network prediction of a n layer network from O(n) to nearly O(1) because sharing can be done between each intermediate results. The technique also helps to reduce the memory footprint of training, although only up to a constant factor. # 4 Trade Computation for Memory # 4.1 General Methodology The techniques introduced in Sec. 3 can reduce the memory footprint for both training and prediction of deep neural networks. However, due to the fact that most gradient operators will depend on the intermediate results of the forward pass, we still need O(n) memory for intermediate results to train a n layer convolutional network or a recurrent neural networks with a sequence of length n. In order to further reduce the memory, we propose to drop some of the intermediate results, and recover them from an extra forward computation when needed.
1604.06174#9
1604.06174#11
1604.06174
[ "1512.03385" ]
1604.06174#11
Training Deep Nets with Sublinear Memory Cost
More speciï¬ cally, during the backpropagation phase, we can re-compute the dropped intermedi- ate results by running forward from the closest recorded results. To present the idea more clearly, we show a simpliï¬ ed algorithm for a linear chain feed-forward neural network in Alg. 1. Speciï¬ cally, the neural network is divided into several segments. The algorithm only remembers the output of each segment and drops all the intermediate results within each segment. The dropped results are recomputed at the segment level during back-propagation. As a result, we only need to pay the mem- ory cost to store the outputs of each segment plus the maximum memory cost to do backpropagation on each segment. Alg. 1 can also be generalized to common computation graphs as long as we can divide the graph into segments. However, there are two drawbacks on directly applying Alg. 1: 1) users have to manually divide the graph and write customized training loop; 2) we cannot beneï¬ t from other memory optimizations presented in Sec 3. We solve this problem by introducing a general gradient graph construction algorithm that uses essentially the same idea. The algorithm is given in Alg. 2. In this algorithm, the user specify a function m :
1604.06174#10
1604.06174#12
1604.06174
[ "1512.03385" ]
1604.06174#12
Training Deep Nets with Sublinear Memory Cost
V â N on the nodes of a computation graph 4 # on Algorithm 1: Backpropagation with Data Dropping in a Linear Chain Network v â input for k = 1 to length(segments) do temp[k] â v for i = segments[k].begin to segments[k].end â 1 do v â layer[i].f orward(v) end end g â gradient(v, label) for k = length(segments) to 1 do v â temp[k] localtemp â empty hashtable for i = segments[k].begin to segments[k].end â 1 do localtemp[i] â v v â layer[i].f orward(v) end for i = segments[k].end â 1 to segments[k].begin do g â layer[i].backward(g, localtemp[i]) end end to indicate how many times a result can be recomputed. We call m the mirror count function as the re-computation is essentially duplicating (mirroring) the nodes. When all the mirror counts are set to 0, the algorithm degenerates to normal gradient graph. To specify re-computation pattern in Alg. 2, the user only needs to set the m(v) = 1 for nodes within each segment and m(v) = 0 for the output node of each segment. The mirror count can also be larger than 1, which leads to a recursive generalization to be discussed in Sec 4.4. Fig. 3 shows an example of memory optimized gradient graph. Importantly, Alg. 2 also outputs a traversal order for the computation, so the memory usage can be optimized. Moreover, this traversal order can help introduce control ï¬ ow dependencies for frameworks that depend on runtime allocation. # 4.2 Drop the Results of Low Cost Operations One quick application of the general methodology is to drop the results of low cost operations and keep the results that are time consuming to compute.
1604.06174#11
1604.06174#13
1604.06174
[ "1512.03385" ]
1604.06174#13
Training Deep Nets with Sublinear Memory Cost
This is usually useful in a Conv-BatchNorm-Activation pipeline in convolutional neural networks. We can always keep the result of convolution, but drop the result of the batch normalization, activation function and pooling. In practice this will translate to a memory saving with little computation overhead, as the computation for both batch normalization and activation functions are cheap. â # 4.3 An O( n) Memory Cost Algorithm Alg. 2 provides a general way to trade computation for memory. It remains to ask which intermediate result we should keep and which ones to re-compute. Assume we divide the n network into k segments the memory cost to train this network is given as follows. cost-total = max cost-of-segment(é) +O(k) =O (<) + O(k) (1)
1604.06174#12
1604.06174#14
1604.06174
[ "1512.03385" ]
1604.06174#14
Training Deep Nets with Sublinear Memory Cost
The ï¬ rst part of the equation is the memory cost to run back-propagation on each of the segment. Given that the segment is equally divided, this translates into O(n/k) cost. The second part of n, we get equation is the cost to store the intermediate outputs between segments. Setting k = n). This algorithm only requires an additional forward pass during training, but the cost of O(2 5 Network Normal Memory Optimized Configuration Gradient Graph Gradient Graph input input input-grad input input-grad i conv-forward â sonw-forward tN conv-backward cony-forward it~ bneforward ! br-forward bn-backward bn forward a. relu-forward " relu-forward relu-backward â _relu-forward conv-forward conv-forward conv-backward conv-forward > bn-forward bn-forward bn-backward bn-forward conv-backward bn-backward relu-backward conv-backward bn-backward relu-forward relu-forward relu-backward _relu-forward --- > relu-backward â â * data dependency ----» control dependency [5] Memory allocation for each output of op, same color indicates shared # memory. Figure 3: Memory optimized gradient graph generation example. The forward path is mirrored to represent the re-computation happened at gradient calculation.
1604.06174#13
1604.06174#15
1604.06174
[ "1512.03385" ]
1604.06174#15
Training Deep Nets with Sublinear Memory Cost
User speciï¬ es the mirror factor to control whether a result should be dropped or kept. # Algorithm 2: Memory Optimized Gradient Graph Construction Input: G = (V, pred), input computation graph, the pred[v] gives the predecessors array of node v. Input: gradient(succ_grads, output, inputs), symbolic gradient function that creates a gradient node given successor gradients and output and inputs Input: m : V + Nt, m(v) gives how many time node v should be duplicated, m(v) = 0 means do no drop output of node v. alu] + v forv EV for k = 1 to max,cy m(v) do for v in topological-order(V) do if k < m(v) then a{v] < new node, same operator as v pred{a[v]] â U, cpredjaj{ale} end end end Vâ & topological-order(V) for v in reverse-topological-order(V) do giv] <â gradient(|g{v] for v in successor(v)], alu], [a{v] for v in pred{u]]) V' & append(Vâ , topological-order(acenstors(g[v])) â Vâ ) end Output: Gâ = (Vâ , pred) the new graph, the order in Vâ gives the logical execution order. reduces the memory cost to be sub-linear. Since the backward operation is nearly twice as time consuming as the forward one, it only slows down the computation by a small amount. In the most general case, the memory cost of each layer is not the same, so we cannot simply set n. However, the trade-off between the intermediate outputs and the cost of each stage still k = holds. In this case, we use Alg. 3 to do a greedy allocation with a given budget for the memory cost within each segment as a single parameter B. Varying B gives us various allocation plans that either assign more memory to the intermediate outputs, or to computation within each stage. When we do static memory allocation, we can get the exact memory cost given each allocation plan. We can use this information to do a heuristic search over B to ï¬ nd optimal memory plan that balances the cost of the two. The details of the searching step is presented in the supplementary material.
1604.06174#14
1604.06174#16
1604.06174
[ "1512.03385" ]
1604.06174#16
Training Deep Nets with Sublinear Memory Cost
We ï¬ nd this approach works well in practice. We can also generalize this algorithm by considering the cost to run each operation to try to keep time consuming operations when possible. 6 Algorithm 3: Memory Planning with Budget Input: G = (V, pred), input computation graph. Input: C â V , candidate stage splitting points, we will search splitting points over v â C Input: B, approximate memory budget. We can search over B to optimize the memory allocation. temp â 0, x â 0, y â 0 for v in topological-order(V ) do temp â temp + size-of-output(v) if v â C and temp > B then x â x + size-of-output(v), y â max(y, temp) m(v) = 0, temp â 0 else m(v) = 1 end end Output: x approximate cost to store inter-stage feature maps Output: y approximate memory cost for each sub stage Output: m the mirror plan to feed to Alg. 2 input input-grad Le nv-bn-relu conv-bn-relu backward forward bn-backward relu-backward conv-backward ' iconv-bn-relu conv-bn-relu backward forward + data dependency [1 Memory allocation for each output of op, same color indicates shared memory. Figure 4: Recursion view of the memory optimized allocations. The segment can be viewed as a single operator that combines all the operators within the segment. Inside each operator, a sub-graph as executed to calculate the gradient. # 4.4 More General View: Recursion and Subroutine In this section, we provide an alternative view of the memory optimization scheme described above.
1604.06174#15
1604.06174#17
1604.06174
[ "1512.03385" ]