diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzewsi" "b/data_all_eng_slimpj/shuffled/split2/finalzzewsi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzewsi" @@ -0,0 +1,5 @@ +{"text":"\\section{Nomenclature}\n\n{\\renewcommand\\arraystretch{1.0}\n\\noindent\\begin{longtable*}{@{}l @{\\quad=\\quad} l@{}}\n$altitude$ & height above WGS84 ellipsoid \\\\\n$ecef$ & Earth-centered, Earth-fixed coordinate system\\\\\n$ENU$ & East, North, Up coordinate system\\\\\n$\\Delta t_{imu}$ & time between IMU measurements, s\\\\\n$\\bm{\\theta}$ & gyroscope measurements $[\\theta_x, \\theta_y, \\theta_z]^T$, $\\frac{rad}{s}$ \\\\\n$\\omega$& $\\| \\bm{\\theta} \\| $ \\\\\n$q_{1}^{2}$ & unit quaternion describing the rotation from frame 1 to frame 2 \\\\\n$P$ & camera projection matrix \\\\\n$\\pi_{WGS84}$ & projection of a pixel coordinate to a 3D point on the surface of the WGS84 model \\\\\n$\\alpha_{max}$ & max acceptable angle between camera boresight and normal of a landmark \\\\\n$\\delta x$ & amount to shift a point by in pixel space \\\\\n$surface\\_normal()$ & function that finds normal vector at a point on the WGS84 model \\\\\n$angle\\_between()$ & function that finds the angle between a camera boresight and a vector \\\\\n\\end{longtable*}} \\clearpage\n\n\\section{Introduction}\n\\label{sec:intro}\n\nTerrain Relative Navigation (TRN) is a method for absolute pose estimation in a GPS-denied environment using a prior map of the environment and onboard sensors such as a camera. TRN \nis commonly desired %\nfor applications requiring accurate pose estimation, such as planetary landings and airdrops, where GPS \nis either unavailable or cannot be relied upon. Due to the high altitude of planetary TRN missions, acquiring non-simulation test data oftentimes proves difficult, \nand thus many datasets used to test TRN systems are from lower altitudes than what the system would actually be used at during a mission. Additionally, \nfor vision-based TRN systems, the large distance between the camera and features on the ground can make position changes of the camera \ndifficult to accurately observe due to the high ratio of meters per pixel in the image plane.\n\nThis paper presents an experimental analysis on performing TRN using a camera-based approach aided by a gyroscope for high-altitude navigation by associating mapped landmarks from satellite\nimagery to camera images. We evaluate performance of both a sideways-tilted and downward-facing camera on data collected from a World View Enterprises high-altitude balloon (\\cref{fig:balloon_launch}) \nwith\ndata beginning at an altitude of 33 km and descending to ground level with almost 1.5 hours of flight time (\\cref{fig:overview}) and on data collected at speeds up to \n880 km\/h (550 mph) from two sideways-tilted cameras mounted inside the capsule of Blue Origin's New Shepard rocket (\\cref{fig:rocket_all}), during payload mission NS-23. We also demonstrate the \nrobustness of the TRN system to rapid motions of the balloon which causes fast attitude changes (\\cref{fig:challenges_a})\nand can cause image blur (\\cref{fig:challenges_b}). Additionally, we demonstrate performance in the presence of dynamic camera obstructions \n caused by cords dangling below the balloon (\\cref{fig:challenges_c}), and clouds obstructing sections of \nthe image (\\cref{fig:challenges_d}). 
\n\nSideways-angled cameras are a common choice for TRN applications when mounting a downward camera is either infeasible due to vehicle constraints or \nwould be occluded by exhaust from an engine on vehicles such as a lander or a rocket. Additionally, for \nplanetary landings, a sideways-angled camera allows for a single camera to be used \nduring both the braking phase when the side of the lander faces the surface and during the \nfinal descent phase when the bottom of the lander faces the surface (\\cref{fig:landing}). We thus use both a \nsideways-angled camera and downward-facing camera during our high-altitude balloon flight \nto separately evaluate the performance of TRN using a camera from each orientation.\n\nWe use Draper's Image-Based Absolute Localization (IBAL) \\cite{Denver17aiaa-airdrop} software for our analysis. \nWhile our dataset has images at a rate of 20Hz, we subsample images by a factor of 10 and hence post-process images at 2Hz in real-time.\nIBAL could additionally be combined with a nonlinear estimator such as an Extended Kalman Filter (EKF) or a fixed-lag smoother through either a loosely coupled approach using IBAL's pose estimate or a tightly-coupled approach using landmark matches~\\cite{Forster17tro}. \nSince the quality of the feature matches generated by IBAL would affect all these methods, here we limit ourselves to evaluating IBAL as an independent system and also analyze the quality of the\nfeature matches. At the same time, we investigate the impact of using a gyroscope in conjunction with IBAL to aid with the challenges of our balloon dataset and show the advantage that \neven a simple sensor fusion method can provide. \nFinally, we extend IBAL to incorporate methods to \nefficiently process images when a camera views above the horizon.\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.47\\textwidth}\n \\centering\n \\includegraphics[width=.5\\textwidth]{fig\/balloon2.png}\n \\caption{Release of high-altitude balloon for data collection. \\\\\\ Image: courtesy of World View\\textregistered Enterprises}\n \\label{fig:balloon_launch}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.418\\textwidth}\n \\centering\n \\includegraphics[width=.50\\textwidth]{fig\/rocket_all.png}\n \\caption{Blue Origin's New Shepard rocket carrying Draper experimental payload in the capsule. Image: courtesy of Blue Origin}\n \\label{fig:rocket_all}\n \\end{subfigure}\n \\caption{Data collection platforms used for experimental analysis.}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/image_overview.png}\n \\caption{Example of images collected at different altitudes (32, 23, 14, and 4 km) from the balloon dataset with the downward-facing camera (top)\n and sideways-facing camera (bottom).}\n \\label{fig:overview}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_b.png}\n \\caption{Rapid rotations, here over $90^\\circ$ in 4 seconds. 
Red dots show ground reference points between top image and bottom image.}\n \\label{fig:challenges_a}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_c.png}\n \\caption{Image blur (top) due to rapid motion compared to crisp image (bottom).}\n \\label{fig:challenges_b}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_a.png}\n \\caption{Moving cords in the image. Top and bottom images showing example range of cord motion.}\n \\label{fig:challenges_c}\n \\end{subfigure}\n \\unskip\\ \\vrule\\ \n \\begin{subfigure}[t]{0.24\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/challenge_d.png}\n \\caption{images partially occluded by clouds}\n \\label{fig:challenges_d}\n \\end{subfigure}\n \\caption{Different types of TRN challenges in the balloon dataset.}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{fig\/landing.png}\n \\caption{Demonstration of a sideways-angled camera viewing the terrain and being used \n during the braking phase, pitch-up maneuver, and terminal descent phase.}\n \\label{fig:landing}\n\\end{figure}\n %\n\\newpage\n\n\\section {Related Work}\n\nWe present an overview of existing Terrain Relative Navigation approaches and experiments, noting that our primary contribution are two experiments that\nallow us to perform indepth analysis of vision-based terrain relative navigation on challenging high-altitude data and on data from a high speed vehicle. \nTRN methods primarily use either cameras, radar, or lidar as an exteroceptive sensor. The majority of \nearly TRN methods such as the Mars Science Laboratory \\cite{Katake10-landingNav} and NASA's ALHAT Project (\\cite{Brady11gnc-alhat}, \\cite{Amzajerdian12ac-lidarTRN}) \nuse radar or lidar. However, due to the high power \nand weight budget of radar and lidar, cameras have been motivated as an active area of exploration for more recent TRN systems.\n\nThe seminal work of Mourikis \\textit{et al.} \\cite{Mourikis09tro-EdlSoundingRocket} describes a visual-inertial navigation method for \nEntry, Descent, and Landing (EDL) using an Extended Kalman Filter (EKF) with matched landmarks and\ntracked feature points in an image. They use inertial navigation results from their entire sounding rocket launch with an apogee of 123 km, and leverage visual methods after the vehicle reaches altitudes below 3800m. Johnson and Montgomery~\\cite{Johnson09ac-trnReview}\npresent a survey of TRN methods that use either image or lidar to detect the location of known landmarks.\n\nSingh and Lim~\\cite{Singh12aiaa-trnEKF} \ndemonstrate a visual TRN approach leverging an EKF for lunar navigation using known crater locations as landmarks. Recently, Downes \\textit{et al.} \\cite{Downes20aiaa-lunarTRN} \npresent a deep learning method for lunar crater detection to improve TRN landmark tracking. \nThe Lander Vision System (LVS) \\cite{Johnson17-lvs} used for the Mars 2020 mission uses vision-based landmark matching starting at an altitude of 4200m above the \nmartian surface with the objective of achieving less than 40m error with respect to the landing site. 
\nOur analysis focuses on higher altitudes and on a larger span of altitudes (4.5 km to 33 km for the \nballoon dataset).\n\nDever \\textit{et al.}\\cite{Denver17aiaa-airdrop} demonstrate visual navigation for guided parachute airdrops using IBAL and a \nMulti-State Constraint Kalman Filter (MSCKF). Additionally, the work incorporates\na lost robot approach to recover from a diverged pose estimate and to initialize the system if the pose is unknown. \nSteffes \\textit{et al.} \\cite{Steffes19aiaa-trnEDL} present a theoretical analysis of three types of visual terrain navigation \napproaches, namely template matching, SIFT \\cite{lowe2004ijcv-distinctive} descriptor matching, and crater matching. \nThe work of Lorenz \\textit{et al.} \\cite{Lorenz17ac-osirisrex} demonstrates \nvision-based terrain relative navigation for a touch-and-go landing on an asteroid for the OSIRIS-REx mission. Due to extreme computation limits,\nthey used a maximum of five\n manually selected mapped template features per frame. Mario \\textit{et al.} \\cite{Mario22psj-osirisRexTesting} provide additional discussion on ground tests \n used to prepare the TRN system for the OSIRIS-REx mission. Our balloon dataset has much faster rotational motion than \n what was present during the OSIRIS-REx mission, along with camera obstructions.\n\nSteiner \\textit{et al.} \\cite{Steiner15ac-landmarkSelection} present a utility-based approach for optimal landmark selection and demonstrate performance \non a rocket testbed flight up to 500 m. As shadows and variable lighting conditions are a well-known challenge for TRN, \nSmith \\textit{et al.} \\cite{Smith22aiaa-blenderTRN} demonstrate the ability to use Blender to enhance a satellite database for different lighting conditions. %\n\n\\section{Data Collection}\n\nThe collection of both datasets used in this paper was supported by the NASA Flight Opportunities Program. The high-altitude balloon dataset \nwas designed to test TRN on a wide range of high-altitude data and occurred in April of 2019. The New Shepard dataset was intended to \ntest TRN on a high-speed vehicle with a flight profile similar to that of a precision landing and occurred in August of 2022.\n\n\\subsection{Balloon Flight}\n\nWe captured downward and sideways camera images along with data from a GPS and an inertial measurement unit (IMU) on board a World View \nEnterprises high-altitude balloon shown in \\cref{fig:balloon_launch}, \nwith data recorded up to an altitude of 33 km.\nWe used FLIR Blackfly S Color 3.2 MP cameras for both the downward and sideways facing views, with a 12 mm EFL lens and a 4.5 mm EFL lens, respectively. \nThe field of view (FOV) for the downward and sideways camera with their respective lenses is $32^{\\circ}$ and $76^{\\circ}$.\nBoth cameras, along with the IMU (Analog Devices ADIS16448) \nand data logging computer, are self-contained inside the Draper Multi-Environment Navigator (DMEN) package, shown in \\cref{fig:hardware}. \nBoth cameras generated images at 20 Hz with a resolution of $1024 \\times 768$. The IMU logged data at 820 Hz. \n\nAs mentioned in \\cref{sec:intro}, some TRN applications ---such as \nplanetary landing--- might prefer using a sideways-angled camera, while other applications \n---such as \nhigh-altitude drone flights--- may prefer a downward-facing camera. Therefore, we collect data from \nboth a downward and a sideways-angled camera to allow IBAL to be evaluated at both of these camera \nangles. 
Some planetary landings may also desire a downward-facing camera since it allows the boresight of the camera \nto be normal to the surface during the terminal descent phase, \nsuch as was done for OSIRIS-REx \\cite{Lorenz17ac-osirisrex}.\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=.4\\textwidth]{fig\/hardware.png}\n \\caption{Draper Multi-Environment Navigator (DMEN) package: data collection package containing sideways and downward facing cameras, IMU, and logging computer.}\n \\label{fig:hardware}\n\\end{figure}\n\n\\subsection{Blue Origin New Shepard Flight}\n\nWe captured images from two sideways-angled cameras with 12.5 mm lens on opposite sides inside the New Shepard capsule which \nlook out the capsule windows. Having two cameras was intended to allow us to study the effects of different cloud cover, terrain, and angle to the sun. \nWe will refer to these cameras as camera 1 and camera 2. \nWe additionally log IMU data from a Analog Devices ADIS16448, and \ntelemetry from the capsule which served as ground truth for our experiment. Data was logged with a NUC mounted inside a payload locker in the capsule.\nBoth cameras generated images at 20 Hz with a resolution of $1024 \\times 768$ and FOV of $31^{\\circ}$. \nThe IMU logged data at 820 Hz. The rocket reached speeds up to 880 km\/h and an altitude of 8.5 km before an anomaly occurred during the NS-23 flight \nwhich triggered the capsule escape system.\n\n\\Cref{fig:payload_blue} shows our payload locker containing the NUC, IMU, and a power converter which is \nmounted inside the New Shepard capsule. An ethernet cable and two USB cables transfer \ntelemetry data from the capsule and data from the cameras to the NUC, respectively.\n\n\\Cref{fig:cam_mount_a} shows camera 2 mounted inside the capsule with a sideways-angle and \n\\cref{fig:cam_mount_b} shows the location of both cameras inside the capsule on opposite sides while New Shepard \nis on the launch pad. Both cameras are mounted at the same tilt angle such that they can view the terrain while not \nhaving their FOVs obstructed by components on the rocket. Additionally, a mounting angle was selected to reduce \nthe effects of distortion caused by the windows, and to ensure the cameras did not come in direct \ncontact with the windows.\n\nDistortion effects from the windows were addressed by calibrating the instrinsic parameters \nof the camera while the camera was mounted in the capsule (i.e., a calibration board was positioned outside \nthe capsule window). We used the Brown-Conrady model \\cite{Brown66-brownConrady} which helps account for decentralized distortion caused by the window \nin addition to distortion from the camera lens. Further evaluation on the effects of distortion caused \nby the window of the capsule is left as a topic for future work.\n\n\\begin{figure}[hbt!]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/payload1.png}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/payload2.JPG}\n \\end{subfigure}\n \\caption{Payload locker inside the New Shepard capsule containing a NUC, IMU, and DC\/DC Converter. 
Images courtesy of Blue Origin.}\n \\label{fig:payload_blue}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam_capsule.png}\n \\caption{}\n \\label{fig:cam_mount_a}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/rocket_cams.png}\n \\caption{}\n \\label{fig:cam_mount_b}\n \\end{subfigure}\n \\caption{Cameras 1 and 2 mounted inside the New Shepard capsule looking out the capsule windows. Images courtesy of Blue Origin.}\n \\label{fig:cameras_in_window}\n\\end{figure} %\n\n\\section{Terrain Relative Navigation Method}\n\\label{sec:method}\n\nWe use Draper's IBAL software~\\cite{Denver17aiaa-airdrop} to perform TRN \nfor our datasets. A database of image templates is created in advance from satellite imagery and stored using known \npixel correspondence with the world frame. Using satellite images and elevation maps from USGS~\\cite{usgs}, we automatically select patches of interest \nfrom the satellite images and create a collection of templates that serve as 3D landmarks. For each camera image processed by IBAL, IBAL uses an initial guess of \nthe camera pose to predict \nwhich templates from the database are in the field of view (FOV) of the camera using a projection from \nthe image plane to an ellipsoidal model of the planet. The templates are then matched to the camera \nimage using cross correlation. The resulting match locations are passed to a 3-point RANSAC \\cite{Fischler81} (using a Perspective-Three-Point method as a minimal solver) to reject outliers. \nThe output is a list of the inlier matches, their pixel location in the image, and their known location \nin the world frame that can be passed to a nonlinear estimator or fixed-lag smoother for tightly-coupled pose estimation. \nA secondary output of RANSAC is an absolute pose estimate found by using the Perspective-n-Point (PnP) \nalgorithm on the set of inliers. \n\nInstead of a tightly-coupled approach, we will use a simpler method to evaluate performance on the balloon and New Shepard datasets. \nFor the balloon dataset, we take the PnP absolute pose estimate directly from IBAL, \n forward propagate it with the gyroscope measurements, and use it at the next time step as a pose guess for IBAL. \n We do not use accelerometer data since \nin the image frame most scene changes for the balloon dataset \nover a short time span will be due to rotations. This\nis due to the high altitude and hence large distance between the camera and the Earth's surface. Using the gyroscope to propagate the rotation also allows for\nreduced computation since we are able to down-sample our camera data by a factor of 10 (2Hz image input to IBAL). \nAdditionally, the gyro allows for robust handling of rapid motions of the balloon and images that have large obstruction\nfrom cords which makes generating landmark matches unreliable. An ablation study on incorporating the gyroscope with IBAL is provided in \\cref{sec:gyro_ablation}.\n Since the New Shepard capsule does not experience rapid rotations like the balloon, we did not find it \nnecessary to use the gryoscope to forward propagate the pose estimate for the New Shepard dataset.\n\nWe propagate the rotation estimate of the vehicle, $q^{cam_T}_{ecef}$ (i.e., the orientation of the earth-centered, earth-fixed frame \nw.r.t. 
the camera frame at time $T$, represented as a unit quaternion), to the time of the next processed image ($T+1$) \nwith the gyro using second order strapdown quaternion expansion \\cite{Mckern68mit-transforms}. \nUsing 3-axis gyro measurements $\\bm{\\theta}$ and their magnitude $\\omega = \\|\\bm{\\theta}\\|$, we compute the orientation $q_{IMU_{t+1}}^{IMU_t}$ between gyro measurements \nusing the following equation\n\\begin{equation}\n \\label{eq:quat_gyro1}\n q_{IMU_{t+1}}^{IMU_t}= [1 - \\frac{\\omega^2 \\Delta t_{IMU}^2}{8}, \\frac{\\bm{\\theta}^T \\Delta t_{IMU}}{2}]\n\\end{equation}\nwhere $t+1$ and $t$ represent the time of consecutive IMU measurements occuring $\\Delta t_{IMU}$ seconds apart.\n\n\nUsing the rotations $q_{IMU_{t+1}}^{IMU_t}$ between consecutive IMU timestamps, we \ncan compute the relative rotation $q_{cam_{T+1}}^{cam_T}$ between the camera pose between consecutive images collected at time $T$ and $T+1$:\n\\begin{equation}\n \\label{eq:quat_gyro2}\n q_{cam_{T+1}}^{cam_T} = \\prod_{t = T}^{T+1} q_{IMU}^{cam} \\otimes q_{IMU_{t+1}}^{IMU_t} \\otimes (q_{IMU}^{cam})^{-1}\n\\end{equation}\nwhere $\\otimes$ is the quaternion product and $q_{IMU}^{cam}$ is the static transform from the IMU frame to the camera frame:\n\nFinally, we can compute the rotation estimate $q^{cam_{T+1}}_{ecef}$ of the vehicle at time $T+1$:\n\\begin{equation}\n \\label{eq:quat_gyro3}\n q^{cam_{T+1}}_{ecef} = (q_{cam_{T+1}}^{cam_T})^{-1} \\otimes q^{cam_{T}}_{ecef}\n\\end{equation}\n\nWe use a simple yet effective logic for handling short segments in our datasets when PnP is unable to produce a reliable pose, which can be caused by image obstructions \nor blurry images caused by rapid vehicle motion. If PnP RANSAC selects a small set of inliers (i.e., less than 8) or if the pose is clearly infeasible (i.e., an altitude change between \nprocessed images greater than 450 m for the balloon dataset), we reject the \npose estimate, keep forward propagating the pose using gyroscope data, and run IBAL with the next available image, ignoring the down-sampling rate. %\n\n\\section{Addressing Challenges of High-Altitude Images}\n\nWe apply simple and effective methods to address two common challenges we encountered with high-altitude images, namely determining the projection\nto the ellipsoid when the camera views the horizon, and reducing the number of potential landmarks from the database that have a lower probability of \ngenerating good matches when there is a large number of landmarks in view of the camera. \n\nWhen the horizon is in view of the camera, as is true for the higher altitude images from the sideways camera for the balloon dataset \n(\\cref{fig:overview}), our \nbaseline method of determining the camera's viewing bounds of the planet's surface is insufficient. Our baseline method is to use an \ninitial estimate of the camera's pose to project each corner of the image to the ellipsoid model. From this, we can create a bounding box on the \nellipsoid defined by a minimum and maximum latitude and longitude. However, this is ill-defined if at least one corner of the image falls \nabove the horizon. \nTo resolve this case, if the projection of a corner point does not intersect the ellipsoid we incrementally \nmove the point (in the image space) towards the opposite corner of the image until it intersects the ellipsoid (\\cref{fig:works}). This process is summarized in \\cref{alg:horizon_detection}. 
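Both \\cref{alg:horizon_detection} and the landmark selection described next depend on the quality of the initial pose estimate, which in our processing comes from the gyro propagation of \\crefrange{eq:quat_gyro1}{eq:quat_gyro3}. As a concrete illustration, a minimal sketch of that propagation is given below; it assumes quaternions stored as $[w, x, y, z]$ NumPy arrays and a calibrated IMU-to-camera rotation, and the helper names are illustrative rather than IBAL's API.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef quat_mult(a, b):\n    # Hamilton product of quaternions stored as [w, x, y, z].\n    w1, x1, y1, z1 = a\n    w2, x2, y2, z2 = b\n    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,\n                     w1*x2 + x1*w2 + y1*z2 - z1*y2,\n                     w1*y2 - x1*z2 + y1*w2 + z1*x2,\n                     w1*z2 + x1*y2 - y1*x2 + z1*w2])\n\ndef quat_conj(q):\n    return np.array([q[0], -q[1], -q[2], -q[3]])\n\ndef propagate_camera_rotation(q_cam_ecef, gyro, dt_imu, q_imu_cam):\n    # Propagate the camera-to-ECEF orientation across the gyro samples logged\n    # between two processed images.  gyro is an (N, 3) array of angular rates\n    # theta in rad/s; q_imu_cam is the static IMU-to-camera rotation.\n    q_rel = np.array([1.0, 0.0, 0.0, 0.0])   # accumulates the camera-frame increment\n    for theta in gyro:\n        w = np.linalg.norm(theta)\n        # Second-order strapdown increment between consecutive IMU samples.\n        dq = np.concatenate(([1.0 - (w**2 * dt_imu**2) / 8.0], theta * dt_imu / 2.0))\n        dq /= np.linalg.norm(dq)              # re-normalization guard (not part of the equation)\n        # Express the increment in the camera frame and accumulate.\n        dq_cam = quat_mult(quat_mult(q_imu_cam, dq), quat_conj(q_imu_cam))\n        q_rel = quat_mult(q_rel, dq_cam)\n    # Rotate the previous absolute orientation by the inverse of the accumulated increment.\n    return quat_mult(quat_conj(q_rel), q_cam_ecef)\n\\end{verbatim}\n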
\nThis process is shown to be effective for our dataset, despite the fact that the approach could fail (see line 15 in~\\cref{alg:horizon_detection}) when the projection of the ellipsoid does not intersect the main diagonals of the image (e.g., when the camera is too far away from Earth or has a large tilt angle).\n\n\n\n\\begin{figure}[H]\n %\n \\centering\n \\includegraphics[width=.3\\textwidth]{fig\/works.png}\n \\caption{Example of our horizon detection method finding the horizon of an ellipsoidal body. Each corner point of the image \n is incremented towards the opposite corner until the ellipsoid body is intersected.}\n \\label{fig:works}\n %\n %\n %\n %\n %\n %\n %\n %\n %\n %\n\\end{figure}\n\n\\begin{algorithm}\n \\caption{Horizon Detection} \n \\label{alg:horizon_detection}\n \\small\n \\begin{algorithmic}[1]\n \\State \\textbf{Inputs:} \n \\State \\indent \\indent P \\Comment{estimate of camera projection matrix (containing intrinsic and extrinsic parameters)}\n \\State \\indent \\indent $\\pi_{WGS84}$ \\Comment{projection of a pixel coordinate to a 3D point on the surface of the WGS84 model}\n \\State \\indent \\indent $\\delta x$ \\Comment{amount to shift a point by in pixel space (default 10 pixels)}\n\n \\State \\textbf{Output:} $image\\_corners$ \\Comment{set of four pixel coordinates bounding image}\n \n \\For{$x_{corner}, \\in image\\_corners$}\n \\While{True}\n \\State X $\\gets \\pi_{WGS84}(P, x_{corner})$\n \\If{X intersects ellipsoid} \n %\n \\State break \\Comment{found valid image boundary}\n \\Else\n \\State increment $x_{corner}$ towards opposite corner by $\\delta x$\n \\EndIf\n \\If{$x_{corner}$ outside image}\n \\State \\textbf{return} error \\Comment{failed to find horizon boundary}\n \\EndIf\n \\EndWhile\n \\EndFor\n \\State \\textbf{return} $image\\_corners$\n \\end{algorithmic}\n\\end{algorithm}\n\nSince we select a maximum number of landmarks based on the landmarks in our satellite database that are in view of the camera, we need additional logic to\navoid the possibility of selecting landmarks that mostly fall near the horizon, since these are unlikely to lead to good matches. \nThe ratio\nof meters per pixels grows rapidly as we approach the horizon, and image matching becomes difficult or impossible near\nthe horizon line due to glare or heavy warping needed to match a shallow surface angle. Additionally, there \nis significant atmospheric distortion. \nRemoving those landmarks helps avoid unnecessary computation and reduces the number of outliers we pass to RANSAC. Towards this goal,\n we set a maximum acceptable angle between the boresight of the \ncamera and the surface normal of \na landmark and reject landmarks that fail to meet this threshold. To increase the number of potential landmarks that \nmeet our angle requirement, we filter out sections of the camera's FOV projection to the ellipsoid that are unlikely \nto produce landmarks that meet the angle threshold. \nThis filtering method follows our prior method for intersecting the ellipsoid and uses \nsimilar logic. Starting at the first point near each image corner that views the ellipsoid, we find the surface normal by projecting \nfrom the image plane to the ellipsoid and move towards the oppostite corner of the image\nuntil the angle requirement is met. This process is summarized in \\cref{alg:landmark_angle} and a corresponding ablation \nis shown in \\cref{fig:angle_ablation}. 
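Since \\cref{alg:horizon_detection} and \\cref{alg:landmark_angle} share the same corner-shifting loop and differ only in their stopping condition, both can be implemented around a single helper. A minimal sketch of this shared loop is shown below; the function and argument names are illustrative and not part of IBAL.\n\n\\begin{verbatim}\nimport numpy as np\n\ndef shift_corner_until_valid(corner_px, opposite_px, is_valid, step_px=10.0):\n    # Shared corner-shifting loop of Algorithms 1 and 2.\n    # is_valid(p) returns True when the stopping condition holds at pixel p:\n    #   Algorithm 1: the projection of p intersects the WGS84 ellipsoid;\n    #   Algorithm 2: the boresight/surface-normal angle is at most alpha_max.\n    p = np.asarray(corner_px, dtype=float)\n    q = np.asarray(opposite_px, dtype=float)\n    total = np.linalg.norm(q - p)\n    direction = (q - p) / total\n    travelled = 0.0\n    while travelled <= total:\n        if is_valid(p):\n            return p                      # valid image boundary found\n        p = p + step_px * direction       # move towards the opposite corner\n        travelled += step_px\n    raise RuntimeError('corner left the image without meeting the condition')\n\\end{verbatim}\n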
Notice that \nwithout \\cref{alg:landmark_angle}, more landmarks are selected near the horizon (\\cref{fig:landmark_angle_no_angle}) \nwhere template matching is more difficult resulting in more outliers. Using \\cref{alg:landmark_angle} allows IBAL to target \nregions of the image with more distinguishable features for matching which results in a higher concentration of inliers \n(\\cref{fig:landmark_angle_with_angle}).\n\n\n\n\\begin{algorithm}\n \\small\n \\caption{Landmark Angle Filter} \n \\label{alg:landmark_angle}\n \\begin{algorithmic}[1]\n \\State \\textbf{Inputs:} \n \\State \\indent \\indent P \\Comment{estimate of camera projection matrix (containing intrinsic and extrinsic parameters)}\n \\State \\indent \\indent $\\pi_{WGS84}$ \\Comment{projection of a pixel coordinate to a 3D point on the surface of the WGS84 model}\n \\State \\indent \\indent $\\alpha_{max}$ \\Comment{max acceptable angle between camera boresight and normal of a landmark}\n \\State \\indent \\indent $\\delta x$ \\Comment{amount to shift a point by in pixel space (default 10 pixels)}\n \n \\State \\textbf{Output:} $image\\_corners$ \\Comment{set of four pixel coordinates bounding image}\n\n \\State surface\\_normal() $\\gets$ function that finds normal vector at a point on the WGS84 model\n \\State angle\\_between() $\\gets$ function that finds the angle between a camera boresight and a vector \n \\For{$x_{corner}, \\in image\\_corners$}\n \\While{True}\n \\State X $\\gets \\pi_{WGS84}(P, x_{corner})$\n \\State $x_n \\gets surface\\_normal(X)$\n \\State $\\alpha \\gets angle\\_between(P, x_n)$\n \\If{$\\alpha \\leq \\alpha_{max}$}\n \\State break \\Comment{found valid image bounary}\n \\Else\n \\State increment $x_{corner}$ towards opposite corner by $\\delta x$\n \\EndIf\n \\If{$x_{corner}$ outside image}\n \\State \\textbf{return} error \\Comment{failed to meet landmark angle requirement}\n \\EndIf\n \\EndWhile\n \\EndFor\n \\State \\textbf{return} $image\\_corners$\n \\end{algorithmic}\n\\end{algorithm}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/without_angle.png}\n \\caption{Higher concentration of outliers near the horizon without using landmark angle filter. Ratio of inliers to outliers: 0.3}\n \\label{fig:landmark_angle_no_angle}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/with_angle.png}\n \\caption{Higher concentration of inliers using landmark angle filter. Ratio of inliers to outliers: 1.3}\n \\label{fig:landmark_angle_with_angle}\n \\end{subfigure}\n \\caption{Ablation study for \\cref{alg:landmark_angle}, which filters regions of the image for landmark matching \n based on the angle between the surface and the camera boresight. This leads to a higher ratio of inliers to outliers, reducing computation and improving accuracy. \n Inliers matches are shown in green and outlier are shown in red. \n Blue shows initial estimate of landmark location based on initial pose estimate before utilizing cross correlation. Images are from sideways camera \n from balloon dataset.}\n \\label{fig:angle_ablation}\n\\end{figure} %\n\n\\section{Experiment Results}\n\n\\subsection{Balloon Flight}\n\nWe present results from running IBAL with both a sideways-tilted and downward-facing camera aided by gyroscope measurements on altitudes ranging from 33km to 4.5km. \nNote that we use the term altitude to mean height above the WGS84 ellipsoid. 
\nDuring this time, the system is descending under a parachute. \nWe split our data into \n7 segments, each about 15 minutes long, and evaluate our estimated TRN position by comparing with GPS. We manually reseed IBAL at the start of each segment. \nResults are defined with respect to an East North Up (ENU) frame centered at the landing site of the balloon. \n\\Cref{fig:all_trajectory} shows the ground truth trajectory from GPS compared to the trajectory estimates from IBAL with a downward and sideways \nfacing camera. The corresponding plot of absolute position \nerror is shown in \\cref{fig:all_trajectory_error} for each of the East, North, and Up axes. \nIBAL is able to achieve an average position error along the up axis of 78 m and 66 m for the entire trajectory with the downward-facing and sideways-tilted camera, \nrespectively, while the balloon travels almost 30 km in elevation.\nIBAL achieves 207 m and 124 m of average position error for the east and north axis across the entire trajectory of the \ndownward-facing camera, and likewise an average error of 177 m and 164 m along the east and north axis for the sideways camera \nwhile the balloon transverses well over 100 km laterally. \n\\Cref{fig:all_trajectory_total_error} shows total absolute error (defined as the Euclidean distance between the estimate and the GPS position) with respect to flight time and with respect to height above ground level. \nAverage absolute position error for the entire trajectory is 287 m and 284 m for the downward and sideways-tilted camera, respectively. \nSpikes in position estimates could be diminished \nusing filtering methods such as coupling with an accelerometer or with visual odometry as mentioned in \\cref{sec:method}. \nWe run IBAL in real-time on a laptop with an Intel Xeon 10885M CPU. \nWhile IBAL is designed to run in real-time \non flight hardware, we do not make showcasing run-time performance a focus of this paper.\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.40\\textheight]{fig\/traj_all.png}\n \\caption{IBAL+gyro trajectory estimate vs. GPS for altitude range of 33 km to 4.5 km on balloon dataset. \n Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory}\n\\end{figure}\n\n\\newpage\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.40\\textheight]{fig\/error_all.png}\n \\caption{IBAL+gyro absolute position error for altitude range of 33 km to 4.5 km on balloon dataset. Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory_error}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=\\textwidth, height=0.37\\textheight]{fig\/total_error.png}\n \\caption{IBAL+gyro total trajectory error vs. time and vs. height above ground level on balloon dataset. Error tends to \n show slight decrease in magnitude at lower altitudes. Vertical lines show start of each new data segment.}\n \\label{fig:all_trajectory_total_error}\n\\end{figure}\n\n\\newpage\n\n\nWe also provide an analysis of the match correlation for both cameras for the entire balloon dataset. \n\\Cref{fig:all_matches_down} and \\cref{fig:all_matches_side} show number of inliers and outliers \nfor the downward and sideways facing cameras. After estimating the location of a landmark in the image \nwith cross correlation and peak finding, inliers and outliers are labeled using PnP and RANSAC. 
\nThere are generally more inliers than outliers, which shows the effectiveness of the correlation approach and\nthat IBAL is able to perform well in the presence of outliers. \nWe observe a greater number of inliers with the \ndownward-facing camera than with the sideways-tilted camera.\n\nAdditionally, \\cref{fig:histograms} shows a histogram of the pixel error for the inliers and outliers \ndetermined by PnP and RANSAC for both the downward and sideways-tilted cameras. Inlier pixel error is distributed such that \nmost inliers have between 0 and 1 pixel of error as determined by PnP and RANSAC, which shows the effectiveness of IBAL's correlation approach. \nThere is an increase in the ratio of outliers to inliers at lower altitudes. This is due in part to shadows, lack of distinct texture on the ground, and \nregions with sparse landmark coverage in our database. \nDepending on mission requirements, this issue can be greatly reduced \nduring the landmark database creation process, such as by optimizing the landmark template size, ensuring sufficient landmark coverage at low altitudes \nfor all phases of a flight, and baking shadows into the database as was demonstrated in \\cite{Smith22aiaa-blenderTRN}. \nHowever, for the purposes of the balloon experiment in this paper, we determined our database to be sufficient.\n\nLastly, we provide visual examples of IBAL matches on a selected subset of frames from the downward and sideways-facing cameras. \n\\Cref{fig:down_match_135} shows landmark matches for the downward camera at 13.5 km with inliers shown in green and \noutliers shown in red. Blue dots show the initial estimate of the landmark locations in the image, computed from IBAL's \nprior pose estimate propagated with the gyro, before matching with cross correlation. \n\\Cref{fig:down_match_23} shows matches for the downward camera at 23 km. \nCords from the high-altitude balloon are partially in view, but incorrect matches caused by the cords are correctly \nrejected as outliers. \\Cref{fig:side_match_135} and \\cref{fig:side_match_23} show results for the sideways-tilted camera \nat 13.5 km and 23 km.\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{1.0\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.195\\textheight]{fig\/matches_all_down.png}\n \\caption{IBAL landmark matching results for downward-facing camera}\n \\label{fig:all_matches_down}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{1.0\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.195\\textheight]{fig\/matches_all_side.png}\n \\caption{IBAL landmark matching results for sideways-tilted camera}\n \\label{fig:all_matches_side}\n \\end{subfigure}\n \\caption{IBAL+gyro number of inliers and outliers for sideways-tilted and downward-facing cameras on balloon dataset for altitude range of 33 km to 4.5 km \n as determined by PnP and RANSAC. Vertical lines show start of each new data segment. 
The downward camera tends to have more matches than the sideways-tilted camera.}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\textbf{Downward Camera}\\par\\medskip\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_1.png}\n \\caption{altitude range: 33 km to 32.5 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\textbf{Sideways Camera}\\par\\medskip\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_1.png}\n \\caption{altitude range: 33 km to 32.5 km}\n \\end{subfigure}\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_2.png}\n \\caption{altitude range: 32.5 km to 29 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_2.png}\n \\caption{altitude range: 32.5 km to 29 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_3.png}\n \\caption{altitude range: 29 km to 23 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_3.png}\n \\caption{altitude range: 29 km to 23 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_4.png}\n \\caption{altitude range: 23 km to 18 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_4.png}\n \\caption{altitude range: 23 km to 18 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_5.png}\n \\caption{altitude range: 18 km to 14 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_5.png}\n \\caption{altitude range: 18 km to 14 km}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_6.png}\n \\caption{altitude range: 14 km to 9 km}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_6.png}\n \\caption{altitude range: 14 km to 9 km}\n \\end{subfigure}\n\n\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_down_7.png}\n \\caption{altitude range: 9 km to 4.5 km}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/hist_side_7.png}\n \\caption{altitude range: 9 km to 4.5 km}\n \\end{subfigure}\n\n\\caption{Inlier and outlier pixel error for each segment of balloon dataset. Error is the reprojection error determined by PnP and RANSAC. \nLeft Column: downward camera, Right Column: sideways camera. 
\nRows correspond to different altitude ranges.\n}\n\\label{fig:histograms}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/chords_outlier_rejection_13_5.png}\n \\caption{Downward Camera, altitude 13.5 km}\n \\label{fig:down_match_135}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/chord_outlier_rejection_23.png}\n \\caption{Downward Camera, altitude 23 km}\n \\label{fig:down_match_23}\n \\end{subfigure}\n\n \n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/side_cam_matches_13_5.png}\n \\caption{Sideways Camera, altitude 13.5 km}\n \\label{fig:side_match_135}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/side_cam_matches_23.png}\n \\caption{Sideways Camera, altitude 23 km}\n \\label{fig:side_match_23}\n \\end{subfigure}\n\n \\caption{IBAL landmark match analysis on balloon dataset. Inliers matches are shown in green and outlier are shown in red. \n Points in blue show initial estimate of landmark location based on initial pose estimate before utilizing cross correlation. Lines connect blue estimate \n to calculated match location. Landmarks locations covered \n by the cords are correctly rejected as outliers (top row).}\n\\end{figure}\n\n\\subsection{Blue Origin New Shepard Flight}\n\nWe present results from running IBAL with two cameras (referred to as camera 1 and camera 2) mounted inside the Blue Origin New Shepard capsule. \nWe only show results up to an altitude of approximately 8.5 km \nsince there was an anomaly that occurred during flight NS-23 which triggered the capsule escape system. \nNevertheless, we are still able to show IBAL working while the rocket achieves \nnominal speeds up to 880 km\/h (550 mph). We seed the initial input image to IBAL using telemetry from New Shepard and then use the previous IBAL pose estimate \nas the initial pose guess for the next timestep. Unlike the balloon experiment, we do not incorporate the gyroscope measurement to forward propagate the \npose estimate since the capsule does not experience significant rotations during its ascent.\n\nWe show a similar series of analysis of trajectory error and landmark matches as was presented for the high-altitude balloon experiment. \nResults are defined with respect to a ENU frame centered at the launch pad. \n\\Cref{fig:blue_error} shows absolute error for each of the East, North, and Up axes by comparing the position estimate of IBAL with GPS. \n\\Cref{fig:blue_total_error} shows total absolute error with respect to flight time and with respect to height above ground level. IBAL's total position \nerror estimate is below 120 m for the duration of the dataset, and that error with camera 2 is as low as 10 m when the rocket is at an altitude of 3.5 km. \nAverage absolute position error for the entire trajectory is 54 m and 34 m for camera 1 and camera 2, respectively. Both cameras show similar performance with IBAL, and \nslight differences in performance can be explained by the cameras being located on opposite sides of the capsule (and thus viewing different terrain) \nand by potential unaccounted distortion effects in the camera calibration. 
\n\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/blue_error.png}\n \\caption{IBAL absolute position error on New Shepard dataset: altitude range of 3.5 km to 8.5 km.}\n \\label{fig:blue_error}\n\\end{figure}\n\n\\begin{figure}[hbt!]\n \\centering\n \\includegraphics[width=1.0\\textwidth]{fig\/blue_error_time_alt}\n \\caption{IBAL total trajectory error vs. time and height above ground level on New Shepard dataset. Total error is less than 120 m while reaching \n speeds up to 880 km\/h and a peak altitude of 8.5 km.}\n \\label{fig:blue_total_error}\n\\end{figure}\n\nWe also provide an analysis of match correlation for both cameras. Since each processed frame only had at most 2 matches identified as outliers \nby PnP and RANSAC, we do not include match analysis for outliers in our results. \n\\cref{fig:blue_inliers_outliers_1} and \\cref{fig:blue_inliers_outliers_2}\nshow number of inliers for both cameras. \n\\cref{fig:blue_histogram} shows a histogram of the amount of pixel error for the inliers determined by PnP\nRANSAC for both cameras. Similarly to the results from the balloon flight, pixel error for a majority of the inliers is less than two pixels.\n\nWe provide visual examples of IBAL matches on a frame from both cameras in \\cref{fig:blue_match_visualize}. \nMatches labeled as inliers are shown in green, while outliers are shown in red. There is only one outlier present in the processed image from \ncamera 1 (\\cref{fig:blue_match_visualize_1}) and no outliers in the image from camera 2 (\\cref{fig:blue_match_visualize_2}).\n\nLastly, we remark on one difficulty of the New Shepard dataset.\nA mountain range is in view of camera 2 which makes landmark matching more difficult near the latter portion of the dataset as the mountain comes into the camera's \nFOV (\\cref{fig:mountain}). This is due to the presence of shadows in the mountain that may not be consistent with shadows present in the \ntime of day the database imagery was collected. Additionally, the 2D-2D homography assumption which we use to warp landmark templates into the image for \ncorrelation begins to break down when 3D structures \nsuch as mountains are viewed from low altitudes. Work with database creation such as \\cite{Smith22aiaa-blenderTRN} along with advances in IBAL \nnot mentioned in the paper can be used to reduce these issue for low altitude navigation over mountains. \n\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam1_blue_matches.png}\n \\caption{IBAL landmarking matching results for camera 1}\n \\label{fig:blue_inliers_outliers_1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam2_blue_matches.png}\n \\caption{IBAL landmarking matching results for camera 2}\n \\label{fig:blue_inliers_outliers_2}\n \\end{subfigure}\n \\caption{IBAL number of inliers and outliers for cameras 1 and 2 on New Shepard dataset as determined by PnP and RANSAC. 
\n The data corresponds to an altitude range between 3.5 km and 8.5 km.}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/blue_hist_cam1.png}\n \\caption{Camera 1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth, height=0.1\\textheight]{fig\/blue_hist_cam2.png}\n \\caption{Camera 2}\n \\end{subfigure}\n \\caption{Inlier pixel error distribution for Cameras 1 and 2 on New Shepard dataset. \n %\n }\n \\label{fig:blue_histogram}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[t]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam1_54.png}\n \\caption{IBAL inlier and outlier matches for camera 1 on New Shepard dataset at an altitude of 6.4 km}\n \\label{fig:blue_match_visualize_1}\n \\end{subfigure}\n \\hfil\n \\begin{subfigure}[t]{0.476\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/cam2_54.png}\n \\caption{IBAL inlier and outlier matches for camera 2 on New Shepard dataset at an altitude of 6.4 km}\n \\label{fig:blue_match_visualize_2}\n \\end{subfigure}\n \\caption{IBAL inlier and outlier matches for cameras 1 and 2 on New Shepard dataset. Inliers matches are shown in green and outlier are shown in red. \n Blue shows initial estimate of landmark location based on initial pose estimate before utilizing cross correlation. Lines connect blue estimate \n to calculated match location. Images have been rotated by \n 180 $^{\\circ}$ for visual appeal.}\n \\label{fig:blue_match_visualize}\n\\end{figure}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{fig\/mountain.png}\n \\caption{IBAL Camera 2 viewing a mountain range on New Shepard dataset. Inliers matches are shown in green. \n Blue shows initial estimate of landmark location based on initial pose estimate before utilizing cross correlation. Lines connect blue estimate to calculated match location. \n Image has been rotated by \n 180 $^{\\circ}$ for visual appeal.}\n \\label{fig:mountain}\n\\end{figure} %\n\n\\section{Gyroscope Incorporation Ablation Study}\n\\label{sec:gyro_ablation}\n\nWe provide an ablation study of forward propagating the IBAL pose estimate with a gyroscope for the high-altitude balloon dataset \nas mentioned in \\cref{sec:method}. The benefits of incorporating the gyroscope data is two-fold. Firstly, since the balloon experiences \nrapid rotations, in some cases exceeding $20^\\circ$ per second, the gyro provides a more accurate initial guess of the balloon's pose for IBAL, which \nreduces the frequency at which images must be to used to estimate the pose, hence reducing computation. Additionally, if landmark match quality is temporarily insufficient \n(typically on the order of 1 to 3 seconds) for PnP and RANSAC, which can be caused for example by significant obstruction by the cords below the balloon, the gyro allows the pose estimate to be carried over until good landmark matches can be found.\n\n\\Cref{table:gyro_ablation} shows the benefits of using the gyro with our balloon dataset. Using the downward-facing camera, we show the percentage \nof each of the seven data segments IBAL is able to successfully complete with and without incorporating the gyroscope. 
We also test on two different rates of \nimage processing, noting that while one could partially compensate the lack of gyroscope measurements by increasing the rate of image processing, that strategy is only effective at high altitudes in our dataset.\n\n\n\\begin{table}[H]\n \\begin{tabular}{ | l | l | l | l | l | l | l | l |}\n \\hline\n & 33-32.5 km & 32.5-29 km & 29-23 km & 23-18 km & 18-14 km & 14-9 km & 9-4.5 km\\\\ \\hline\n 4 Hz w\/ gyro & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\\\ \\hline\n 2 Hz w\/ gyro & 100 & 100 & 100 & 100 & 100 & 100 & 100 \\\\ \\hline\n 4 Hz w\/o gyro & 100 & 100 & 96 & 3 & 3 & 1 & 1 \\\\ \\hline\n 2 Hz w\/o gyro & 100 & 100 & 63 & 0 & 0 & 1 & 1 \\\\ \n \\hline\n \\end{tabular}\n \\caption{Ablation study showing the benefit of incorporating gyroscope measurements with IBAL on each of the seven altitude segments of the balloon dataset \n for different rates of image processing. \n Results show the percent of each dataset segment IBAL successfully processes using images from the downward camera.}\n \\label{table:gyro_ablation}\n\\end{table} %\n\n\\section{Conclusion}\n\nThis paper reports on the performance of a vision-based terrain relative navigation method on data ranging from 4.5 km to 33 km on a high-altitude \nballoon dataset and on data collected onboard Blue Origin's New Shepard rocket. We evaluate \nperformance \nof both a sideways-tilted and downward-facing camera for the balloon dataset and two sideways-tilted \ncameras on the New Shepard dataset. We observe less than 290 meters of \naverage position error on the balloon data over a trajectory of 150 kilometers and \nwith the presence of rapid motions and dynamic obstructions in the field of view of the camera. Additionally, we report less than 55 m of \naverage position error on the \nNew Shepard dataset while reaching an altitude of 8.5 km and a max nominal speed of 880 km\/h. As future work, we plan to fly again onboard the New Shepard \nrocket and capture camera data from ground level to an altitude of over 100 km. \n\\section*{Acknowledgments}\nWe would like to gratefully acknowledge Andrew Olguin, Carlos Cruz, Alanna Ferri, Laura Henderson, and\neveryone else at Draper who supported IBAL and data collection for the balloon flight and New Shepard flight. This\nwork was authored by employees of The Charles Stark Draper Laboratory, Inc. under Contract No. 80NSSC21K0348\nwith the National Aeronautics and Space Administration. The United States Government retains and the publisher, by\naccepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up,\nirrevocable, worldwide license to reproduce, prepare derivative works, distribute copies to the public, and perform\npublicly and display publicly, or allow others to do so, for United States Government purposes. All other rights are\nreserved by the copyright owner.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nStochastic or simulation models are only approximations to the reality. A conjectured model may not align with the true system because of unobserved complexity. Moreover, some highly accurate models, even if formulable, may not be implementable due to computational barriers and time constraints, in which case a simpler, lower-fidelity model is adopted. In all these cases, there is a discrepancy between the model and the reality, which we call \\emph{model discrepancy}. 
This article describes a data-processing framework to integrate data from both a simulated response and the real system of interest, under the presence of model discrepancy, to reliably predict stochastic outputs of interest.\n\nOur objective is motivated from everyday practice of simulation analysis. For example, this article describes a major manufacturer that is interested in assessing the impact of the staffing level of support workers on a production line via discrete-event simulation. Twelve weeks were spent carefully designing and tuning the simulation model and the final report included seventy-five realizations of the simulation model at each potential staffing level. The limited amount of realizations gives rise to a simulation error (also termed a Monte Carlo error). In addition, when data at the current staffing level was compared to the simulation model realizations, it was clear the simulation model was inaccurate. Yet, given the resources already invested, the manufacturer was interested if the simulation model could still be used to guide the staffing level decision. An approach that can account for both sources of errors can save significant costs and improve the decisions in situations like these.\n\nDifferences between a simulation and real data is traditionally addressed during the important practice of \\emph{model validation} and \\emph{calibration} in the simulation literature, which refers to the joint task of checking whether a developed stochastic model sufficiently reflects the reality (validation), and if not, re-developing the model until it matches (calibration) (e.g., \\cite{Sargent2013}, \\cite{banks2000dm} Chapter 10, \\cite{kelton2000simulation} Chapter 5). Conventional validation methods compare relevant outputs from simulation models and real-world data via statistical or Turing tests (e.g. \\cite{schruben1980establishing} and \\cite{balci1982some}). In the case of a mismatch, guided expert opinions, together with possibly more data collection, are used to re-calibrate the model recursively until acceptable accuracy \\citep{sargent1998verification}. While these tools are fundamentally critical to the practice of simulation, there can be two deficiencies when using calibration in an ad-hoc way:\n\\begin{enumerate}\n\\item It necessitates building increasingly sophisticated models after unsatisfactory conclusions. This process potentially places a heavy burden on a simulation modeler\/software, consumes time and, moreover, may end up in non-convergence to an acceptable ultimate model.\n\\item The recursive refinement of the model to align it with the real data along the development process involves hidden parameter choices and simultaneous estimations. These details, which are often overlooked and unaccounted for, complicate statistically justified uncertainty quantification alongside prediction.\n\\end{enumerate}\n\n\nOur goal is thus to investigate a framework that systematically offers predictive bounds using a simulation model without the traditionally encountered recursive efforts. Our framework is a stochastic version of model calibration that is similar in spirit to deterministic model calibration \\citep{kennedy2001bayesian}. The basic idea is to view potential model discrepancy as an object that can be inferred statistically, or plainly put, to ``model\" this potential error. 
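For example, in the deterministic calibration setting of \\cite{kennedy2001bayesian}, this idea is commonly written as\n\\begin{equation}\ny^{r}(x) = y^{m}(x, \\theta^{*}) + \\delta(x) + \\varepsilon,\n\\end{equation}\nwhere $y^{r}$ is the real response, $y^{m}(\\cdot, \\theta^{*})$ is the model output at the calibration parameter, $\\delta(\\cdot)$ is the model discrepancy endowed with a prior (typically a Gaussian process), and $\\varepsilon$ is observation noise.\n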
To conduct feasible inference, often the model discrepancy is assumed to have some structure decided a priori of observing data, and data are used to update the uncertainty on predictions of the true system. Since \\cite{kennedy2001bayesian}, this idea has been extended and widely applied in various scientific areas, e.g., \\cite{tuo2015efficient,higdon2004combining,plumlee2016bayesian}. In the stochastic simulation literature, similar machinery has appeared under the heading of stochastic kriging (\\cite{ankenman2010stochastic,Staum2009,chen2013enhancing,chen2012effects,chen2014stochastic,chen2016efficient}). In the stochastic kriging literature, the oracle benchmark is the simulation model and stochastic kriging is used to reduce simulation effort by borrowing information from the simulation outputs at a collection of design values. In the model discrepancy setting, the oracle benchmark is the real system's probabilistic generating mechanism and our goal is to improve the prediction accuracy and quantification of uncertainties associated with the simulation model.\n\nOne challenge in bringing the deterministic model discrepancy machinery to stochastic simulation is that in the latter case, the inference objects are themselves embedded in probability spaces. The stochastic simulation model and the real system are naturally represented as probability distributions (think of the output distributions of a queueing or a stochastic inventory model), which constitute the basis of calculation in many decision-making tasks (for example, computing the chance that the outcome is in some region that indicates poor performance). Consequently, the learning and the uncertainty quantification of the discrepancies need to take into account the resulting probabilistic constraints. This is beyond the scope of the established inference and computation tools in the deterministic model discrepancy literature.\n\nAs our main contribution, we develop a framework to infer stochastic model discrepancies that is statistically justified and computationally tractable under the constraints discussed above. On the statistical aspect, we build a Bayesian learning framework that operates on the space of likelihood ratios as the representation of model discrepancies between simulation and reality. We study how this representation satisfies the constraints necessarily imposed in capturing stochastic model discrepancies and leads to desirable asymptotic behavior. On the computational aspect, we propose an optimization approach to obtain prediction bounds. Though sampling techniques such as Markov chain Monte Carlo \\cite[Chapters 11 and 12]{gelman2014bayesian} are widely used in Bayesian computation, they encounter difficulties in our setting due to the constraints and high-dimensionality. Our approach, inspired from the recent literature in robust optimization (\\cite{ben2002robust,ben2009robust,bertsimas2011theory}), alleviates this issue via the imposition of suitable optimization formulations over posterior high probability regions. We study the statistical properties of these formulations and demonstrate that they are equally tight in terms of asymptotic guarantees to traditional Bayesian inference.\n\nWe close this introduction by briefly reviewing two other lines of related work. First, in stochastic simulation, the majority of work in handling model uncertainty focuses on input uncertainty; see, e.g. 
the surveys \\cite{barton2002panel,henderson2003input,chick2006bayesian,barton2012tutorial,song2014advanced,lam2016advancedtutorial}, \\cite{nelson2013foundations} Chapter 7. They quantify the impacts on simulation outputs due to the statistical uncertainty in specifying the input models (distributions, stochastic assumptions etc.), assuming input data are available. Approaches include the delta method (\\cite{cheng1997sensitivity}) and its variants such as the two-point method (\\cite{cheng1998two,cheng2004calculation}), the bootstrap (\\cite{barton1993uniform,barton2001resampling,cheng1997sensitivity}) which can be assisted with stochastic kriging-based meta-models (\\cite{barton2013quantifying,xie2014bayesian}), and Bayesian methods (\\cite{chick2001input,zouaoui2003accounting,zouaoui2004accounting,xie2014bayesian,biller2011accounting}). Added to these approaches are recent perspectives of model risks and robust optimization that do not necessarily directly utilize data (\\cite{glasserman2014robust,lam2013robust,lam2011sensitivity,ghosh2015computing}). The second line of related work is queueing inference that investigates the calibration of input processes and system performances from partially observed queueing outputs such as congestion or transaction data (e.g., the queue inference engine; \\cite{larson1990queue}). This literature utilizes specific queueing structures that can be approximated either analytically or via diffusion limits, and as such allow tractable inference. Techniques include maximum likelihood estimation (\\cite{basawa1996maximum,pickands1997estimation}), nonparametric approaches (\\cite{bingham1999non,hall2004nonparametric}) and point processes (\\cite{whitt1981approximating}).\nRecently, \\cite{goeva2014reconstructing} study calibration of input distributions under more general simulation models. Like the input uncertainty literature, however, these studies assume correctly specified system logics that imply perfect matches of the simulation models with real-world outputs.\n\n\n\\section{Stochastic Model Discrepancy: Setting and Notations} \\label{sec:setting}\nThis section describes our setting and notations throughout this paper. We consider a system of interest that outputs a discrete random response over the space $\\mathcal{Y}$ with cardinality $m$. For notational simplicity, we will use the space $\\mathcal Y=\\{1,\\ldots,m\\}$. This response depends on a vector of design variables, denoted $x$, which can be broadly defined to include input variables that are not necessarily controllable. We presume a finite set of design points or design values $x_j$, $j = 1,\\ldots,s$. The probability mass function $\\pi_j=\\{\\pi_j(i)\\}_{i=1,\\ldots,m}$ describes the distribution of the response of the real system on $\\mathcal Y$ under $x_j$. Examples of the response include the waiting times in call centers \\citep{brown2005statistical} and hospitals \\citep{helm2014design}. In the first example, design variables could be the number of servers, the system capacity, and the arrival rate. In the second example, the design variable could be the rate of elective admissions.\n\nThe objective is to draw conclusions about $\\pi_j$ for several $j$'s. These distributions form the basis in evaluating quantities of interest used for decision-making. 
When responses are independently observed from the real system (e.g., from a designed experiment \citep{li2015value}), $n_j(i)\/n_j$ is a reasonable estimate of $\pi_j(i)$, where $n_j(i)$ counts the number of outcomes equal to $i$ and $n_j$ is the total number of recorded responses at $x_j$. In the setting of simulation modeling, however, these empirical estimates are often inadequate because typical decision-making tasks, like feasibility or sensitivity tests, are applied to system configurations that are sparsely sampled or even never observed. This means that accurate empirical estimates for the $x_j$ values of interest are not available. In fact, for these $j$'s, $n_j$ can oftentimes be $0$.\n\nIn contrast, using state-of-the-art understanding of the system, possibly simplified for computational concerns, an operations researcher builds a simulation model (typically based on discrete-event simulation) to estimate $\tilde{\pi}_{j}=\{\tilde\pi_j(i)\}_{i=1,\ldots,m}$, the simulated distribution of the response at the design point $x_j$. In parallel to the real responses, we denote $\tilde{n}_j(i)$ as the count of outcome $i$ and $\tilde n_j$ as the total number of replications in a simulation experiment at $x_j$, and $\tilde{n}_j(i)\/\tilde{n}_j$ is hence an estimate of $\tilde\pi_j(i)$. Unlike the real responses, it is often affordable to generate a more abundant number of replications $\tilde n_j$ and hence a more accurate estimate of $\tilde\pi_j$. However, the difference between $\tilde{n}_j(i)\/\tilde{n}_j$ and $\tilde\pi_j(i)$ remains a source of uncertainty.\n\nOur premise is that the real response distribution $\pi_j$ and the simulated distribution $\tilde\pi_j$ differ. Thus, in order to make conclusions about $\pi_j$, we must conjecture about the potential gap between $\pi_j$ and $\tilde\pi_j$ with the limited simulation and real-world data. The remainder of this section describes our framework for defining the discrepancy between $\pi_j$ and $\tilde\pi_j$.\n\nFirst note that both $\pi_j$ and $\tilde\pi_j$ obviously must satisfy the criteria of a probability distribution:\n\begin{defination} \label{def:valid_distribution}\nAny mapping $p: \mathcal{Y} \rightarrow \mathbb{R}$ is a \emph{valid distribution} if\n\begin{enumerate}[(i)]\n\item $p(i)\geq0$ for all $i = 1,\ldots,m$ and\n\item $\sum_{i=1}^m p(i) = 1$.\n\end{enumerate}\n\end{defination}\n\nWe define the discrepancy between $\pi_j$ and $\tilde\pi_j$ as $\delta_j=\{\delta_j(i)\}_{i=1,\ldots,m}$ where\n\begin{equation}\n\delta_j(i) = \frac{\pi_j(i)}{\tilde{\pi}_j(i)}. \label{def discrepancy}\n\end{equation}\nIn other words, $\delta_j$ reflects the ratio between the probabilities of the true responses and simulated responses. If $\delta_j (i) = 1$ for all $i$, the simulation model is correctly specified. The definition in \eqref{def discrepancy} is analogous to that of a likelihood ratio in the context of importance sampling (e.g., \cite{mcbook}, Chapter 9; \cite{asmussen2007stochastic}, Chapter IV; \cite{glasserman2003monte}, Chapter 4). In the model risk literature, a similar object to \eqref{def discrepancy} also appears as a decision variable in worst-case optimization problems used to bound performance measures subject to the uncertainty on the true model relative to a conjectured stochastic model (often known as the baseline model).
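To make the objects just defined concrete, the following is a minimal numerical sketch (in Python, with purely hypothetical counts) of the plug-in estimates $n_j(i)\/n_j$ and $\tilde n_j(i)\/\tilde n_j$ and of the ratio in \eqref{def discrepancy}; the final check anticipates the notion of a valid discrepancy formalized in Definition \ref{def:valid} below.\n\begin{verbatim}\nimport numpy as np\n\n# Hypothetical counts at a single design point x_j with m = 4 outcome categories:\n# n_j(i) from the real system and tilde_n_j(i) from the simulation model.\nn_j = np.array([12.0, 30.0, 40.0, 18.0])         # real-system counts\ntilde_n_j = np.array([50.0, 120.0, 60.0, 20.0])  # simulation counts\n\npi_hat = n_j \/ n_j.sum()                    # plug-in estimate of pi_j\ntilde_pi_hat = tilde_n_j \/ tilde_n_j.sum()  # plug-in estimate of tilde_pi_j\n\n# Plug-in version of the multiplicative discrepancy delta_j(i) = pi_j(i) \/ tilde_pi_j(i).\ndelta_hat = pi_hat \/ tilde_pi_hat\n\n# A valid distribution is nonnegative and sums to one; a valid discrepancy is\n# nonnegative and averages to one against the (estimated) simulated distribution.\nassert np.all(pi_hat >= 0.0) and abs(pi_hat.sum() - 1.0) < 1e-12\nassert np.all(delta_hat >= 0.0)\nassert abs(np.sum(delta_hat * tilde_pi_hat) - 1.0) < 1e-12\n\end{verbatim}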
Examples of such worst-case formulations include Gaussian models with mean and covariance uncertainty represented by linear matrix inequalities (\cite{hu2012robust}), and nonparametric uncertainty measured by Kullback-Leibler divergence (e.g., \cite{glasserman2014robust,lam2013robust}). Our definition in \eqref{def discrepancy} is in a similar vein to these works, but rather than using it as a tool to speed up simulation (in importance sampling) or an optimization decision variable (in model risk), our $\delta_j$ is an object to be \emph{inferred} from data.\n\nNote that \eqref{def discrepancy} is not the only way to define stochastic model discrepancy. Another natural choice, which more closely mimics the established deterministic counterpart \citep{kennedy2001bayesian}, is via\n\[\pi_j(i)-\tilde{\pi}_j(i).\]\nThe choice of which version of discrepancy to use relates to the convenience in statistical modeling. We adopt the multiplicative version in \eqref{def discrepancy} based on its analogy with the likelihood ratio, which facilitates our inference.\n\n\n\nSince $\pi_j$ and $\tilde\pi_j$ are valid distributions, the model discrepancy $\delta_j$ defined in \eqref{def discrepancy} must satisfy the following criteria with respect to $\tilde\pi_j$:\n\begin{defination} \label{def:valid}\nSay $p$ is a valid distribution with $p(i)> 0$ for all $i$. $d$ is a \emph{valid discrepancy} with respect to $p$ if\n\begin{enumerate}[(i)]\n\item $d(i)\geq0$ for all $i = 1,\ldots,m$ and\n\item $\sum_{i=1}^m d(i) p(i) = 1$.\n\end{enumerate}\n\end{defination}\nClearly, if $d$ is a valid discrepancy and $p$ is a valid distribution then $\{d(i) p(i)\}_{i=1,\ldots,m}$ will also be a valid distribution.\n\nDefinition \ref{def:valid} plays a vital role in our subsequent analysis as it characterizes the properties of our inference targets. Unlike deterministic model discrepancies, these conditions come from the probabilistic structure that arises uniquely in stochastic model discrepancies. Note that Definition \ref{def:valid} coincides with that of a likelihood ratio (e.g., \cite{asmussen2007stochastic}).\n\nLastly, in addition to model discrepancy, simulation noise and experimental noise also contribute to the uncertainty in estimating $\pi_j$, i.e., the noise of the estimator $n_j(i)\/n_j$ for $\pi_j(i)$ and $\tilde n_j(i)\/\tilde n_j$ for $\tilde\pi_j(i)$. Our analysis will also incorporate these sources of uncertainty.\n\section{A Bayesian Framework} \label{sec:learn}\nWe propose a Bayesian framework to infer the discrepancy $\delta_j$. The framework has the capability to quantify uncertainty under limited data environments (common in our setting where observed responses from the real system may be sparse or absent for some design points), and to incorporate prior information that anticipates similar discrepancies for similar design points, where the similarity is measured by the distance between the design values.
We will also see how the framework can account for the notion of a valid discrepancy provided in Definition \\ref{def:valid}.\n\n\nThe term \\emph{data} substitutes for the collection of all observed responses from the real system and the simulation model, which is sufficiently represented as\n\\[\\text{data} = \\left\\{n_j(i), i = 1,\\ldots,m, j = 1,\\ldots, s \\text{ and } \\tilde{n}_j(i), i = 1,\\ldots,m, j = 1,\\ldots, s \\right\\}.\\]\nOur main inference procedure is the Bayes rule summarized as\n\\begin{equation}\n\\operatorname{post}\\left(d,\\tilde{p},\\text{data}\\right) \\propto \\operatorname{likelihood}(d,\\tilde{p},\\text{data} ) \\operatorname{prior}(d,\\tilde{p}), \\label{Bayesian update}\n\\end{equation}\nwhere $d$ and $\\tilde{p}$ are the locations at which the density is evaluated for $\\delta=(\\delta_j)_{j=1,\\ldots,s}$ and $\\tilde{\\pi}=(\\tilde\\pi_j)_{j=1,\\ldots,s}$. The notations ``$\\operatorname{post}$\", ``$\\operatorname{likelihood}$\" and ``$\\operatorname{prior}$\" stand for the posterior, likelihood and prior distribution of $(\\delta,\\tilde\\pi)$. Note that we have defined $\\tilde\\pi$ as an inference target in addition to the discrepancy $\\delta$, in order to handle the simulation noise (as we will describe momentarily). The relationship\n$$p_j(i) = d_j(i) \\tilde{p}_j(i) $$\ncan be used to define the posterior distribution of $\\pi_j(i)$ at $p_j(i) $.\n\nThe likelihood for \\eqref{Bayesian update} is straightforward to compute as\n\\begin{align}\n\\operatorname{likelihood}(d,\\tilde{p},\\text{data}) \\propto \\exp &\\left( \\sum_{j=1}^{s} \\sum_{i=1}^{m} n_{j}(i) \\log\\left( d_{j}(i) \\tilde{p}_j(i) \\right) + \\sum_{j=1}^{s} \\sum_{i=1}^{m} \\tilde{n}_j(i) \\log \\tilde{p}_j(i)\\right). \\label{eq:likelihood}\n \\end{align}\n\n\nWe now discuss the prior for \\eqref{Bayesian update}. We restrict ourselves to independent priors for the discrepancy and the simulation model. The prior on the simulation model needs to exhibit the properties of a valid distribution. These properties can be enforced by conditioning an arbitrary prior distribution on a vector which takes real values in a space $\\mathbb{R}^{sm}$ on the constrained region associated with Definition \\ref{def:valid_distribution}. Similarly, the properties of a valid discrepancy can be enforced by conditioning an arbitrary prior distribution on the constrained region associated with Definition \\ref{def:valid}. More precisely, let the logarithm of this arbitrary prior mass function for the simulation model be denoted with $f$ and the discrepancy with $g$. Our construction leads to\n\\begin{equation}\n\\text{prior}(d,\\tilde{p}) \\propto \\begin{cases} \\exp\\left(f(\\tilde{p}) + g(d) \\right) &\\text{ if } \\begin{cases} \\tilde{p}_j(i) \\geq 0, & 1\\leq i \\leq m,1\\leq j \\leq s \\\\\n d_j(i) \\geq 0, & 1\\leq i \\leq m,1\\leq j \\leq s \\\\\n\\sum_{i=1}^m \\tilde{p}_j(i) =1, & 1\\leq j \\leq s\\\\\n\\sum_{i=1}^m \\tilde{p}_j(i) d_j(i) =1, & 1\\leq j \\leq s\n\\end{cases} \\\\\n0 &\\text{otherwise}. \\end{cases} \\label{eq:prior} \\end{equation}\n\nThe choices of $f(\\cdot)$ and $g(\\cdot)$ are open to the investigator. For computational reasons that will be detailed in Section \\ref{sec:optim}, we prefer $g(\\cdot)$ that is concave. One widely used option that exhibits this property will be a multivariate Gaussian with a mean $\\mu$ and correlation matrix $R$ that borrows information across design points and observation points. 
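Before turning to concrete choices of $f$ and $g$, the following is a minimal sketch (in Python, with hypothetical counts and array shapes) of how the log-likelihood \eqref{eq:likelihood} can be evaluated, up to an additive constant, at a candidate pair $(d,\tilde p)$; it assumes every entry of $d_j(i)\tilde p_j(i)$ and $\tilde p_j(i)$ is strictly positive.\n\begin{verbatim}\nimport numpy as np\n\ndef log_likelihood(d, p_tilde, n, n_tilde):\n    # d, p_tilde, n, n_tilde are (s, m) arrays: design points by outcome categories.\n    # Returns the log-likelihood of the counts up to an additive constant.\n    return np.sum(n * np.log(d * p_tilde)) + np.sum(n_tilde * np.log(p_tilde))\n\n# Hypothetical data with s = 2 design points and m = 3 categories;\n# no real-system responses were recorded at the second design point.\nn = np.array([[3.0, 5.0, 2.0], [0.0, 0.0, 0.0]])\nn_tilde = np.array([[40.0, 35.0, 25.0], [60.0, 30.0, 10.0]])\n\nd0 = np.ones((2, 3))                               # 'no discrepancy' candidate\np0 = n_tilde \/ n_tilde.sum(axis=1, keepdims=True)  # candidate simulated distributions\nprint(log_likelihood(d0, p0, n, n_tilde))\n\end{verbatim}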
It is recommended that one uses a vector of $1$s as the prior mean for $\\delta$ and $(1\/m)$s for $\\tilde{\\pi}$. $R$ should be built with domain specific logic, e.g., similar design points and\/or similar responses should have similar discrepancies. For more detailed ideas toward constructing correlation structures for responses, see \\cite{ankenman2010stochastic} on the topic of stochastic kriging. In general, this approach leads to\n\\begin{align}\n \\exp\\left(f(\\tilde{p}) + g(d) \\right) \\propto \\exp & \\left( -\\lambda_{\\tilde{p}} (\\tilde{p} - 1\/m)^\\mathsf{T} R_{\\tilde{p}}^{-1} (\\tilde{p} - 1\/m) -\\lambda_d (d - 1)^\\mathsf{T} R_d^{-1} (d - 1) \\right), \\label{eq:Gaussian}\n\\end{align}\nwhere the $\\tilde{p}$ and the $d$ are understood to be vectorizations of the probability masses represented by themselves, and $\\lambda$s are positive constants that scale the correlation matrices $R$s.\n\nNote that, in the settings where simulation is cheap and $\\tilde\\pi$ is estimated with negligible error, one can drop the parameter $\\tilde p$ in the likelihood \\eqref{Bayesian update} and correspondingly the second terms in the likelihood \\eqref{eq:likelihood} and the prior \\eqref{eq:prior}.\n\n\\section{Optimization-based Procedure for Bayesian Inference} \\label{sec:optim}\nThis section presents our computation procedure to make conclusions about $\\pi_j$ based on \\eqref{Bayesian update}. In particular, we propose an optimization-based approach. There are two reasons for considering this inference package in place of the more traditional Markov Chain Monte Carlo. First, a typical decision-making in simulation analysis often boils down to the estimation of expectation-type quantities of interest evaluated at $\\pi_j$. The optimization we study will provide efficiently computable bounds on these expectations. Second, because of the constrained structure of the prior distribution \\eqref{eq:prior}, standard sampling-based Bayesian computation tools are deemed to be inefficient, and optimization serves as a competitive alternative.\n\n\nTo elaborate the second rationale, note that common solution mechanisms in Bayesian inference consist of drawing samples from the posterior of the parameters of interest. However, because the posterior is often not a standard distribution like Normal (and that there is an unknown proportionality constant), direct Monte Carlo sampling is not possible. Sophisticated Markov chain Monte Carlo samplers were designed explicitly for this purpose \\cite[Chapters 11 and 12]{gelman2014bayesian}. Popular samplers include the classic Metropolis Hastings algorithm with a symmetric proposal \\cite[pp 278-280]{gelman2014bayesian}, and other useful methods such as Hamilton Monte Carlo \\citep{duane1987hybrid} and slice sampling \\citep{neal2003slice}. The latter two methods are specifically designed to alleviate the problems faced by classical samplers. But there are still many practical issues for these new samplers regarding their execution and choices of parameters in constrained and high dimensional spaces, which is the setting we encounter in the posterior induced from \\eqref{eq:prior} (probabilistically constrained and with dimension $sm$). See, for example, \\cite{betancourt2017conceptual} for an intuitive history and theoretical summary of these conclusions. It should be acknowledged that theoretical results do not always reveal these practical issues; see, for example, the positive results from \\cite{dyer1991random}. 
However, numerical tests in \\cite{plumlee2016learning} demonstrate these issues in a closely related setting.\n\n\nIn the following subsections, we will present our optimization formulation, the statistical guarantees, and discussion on computational tractability. The summaries of the sections are: 1) We use an uncertainty set in place of a typical Bayesian integration; 2) The method is guaranteed to produce tight bounds that will contain the truth with the typical desired confidence; and 3) Given we simulate enough, the optimization problem can be reformulated into a convex problem.\n\n\\subsection{Optimization Formulation}\\label{sec:formulation}\nSuppose we are interested in estimating quantities of interest in the form $E[z(Y_j)]$ where $Y_j\\sim\\pi_j$ and $z:\\mathcal Y\\to\\mathbb R$ is some function. We can write this in terms of $\\delta$ and $\\tilde\\pi$ as $$\\zeta(\\delta,\\tilde\\pi)= \\sum_{i=1}^m z(i) \\pi_j(i) =\\sum_{i=1}^mz(i)\\delta_j(i)\\tilde\\pi_j(i).$$ Our procedure consists of solving the optimization pairs\n\\begin{equation}\\begin{array}{ll}\n\\max \\text{ or } \\min_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) = \\sum_{i=1}^m z(i) d_j(i) \\tilde{p}_j(i), \\\\\n\\text{subject to}& \\operatorname{post}(d,\\tilde{p},\\text{data}) \\geq c \\end{array} \\label{obj:optim}\n\\end{equation}\nwhere $c$ is chosen such that\n\\begin{equation}\n c = \\exp \\left(-\\frac{1}{2} \\Phi^{-1}(q)^2 + \\max_{d,\\tilde{p}} \\log \\text{post}(d,\\tilde{p},\\text{data})\\right),\\label{choice}\n\\end{equation}\nand $ \\Phi^{-1}(q)$ is the standard Normal quantile at level $q$. The optimal values of these optimization problems form an approximate confidence interval for $E[z(Y_j)]$ at a confidence level in the frequentist sense, as we will describe in Section \\ref{sec:theo}.\n\nOptimization problems \\eqref{obj:optim} can be motivated from a robust optimization viewpoint. This literature uses deterministic sets, the so-called ambiguity or uncertainty sets, to represent the probabilistic uncertainty in the parameters (e.g., \\cite{ben2002robust,ben2009robust,bertsimas2011theory}). Typically, these sets are chosen as prediction sets that contains the truth with a prescribed confidence. The optimal values of the resulting robust optimizations then bound the true quantity of interest with at least the same confidence level. This approach has been applied in many contexts, such as approximating chance-constrained programs (e.g., \\cite{ben2002robust}, Chapter 2) and performance measures driven by complex stochastic models (e.g., \\cite{bandi2012tractable,bandi2014robust}). Here, we consider using a prediction set given by a posterior high probability region\n\\begin{equation}\n\\mathcal{U}(c) = \\left\\{d,\\tilde{p} \\left| \\operatorname{post}(d,\\tilde{p},\\text{data}) \\geq c \\right.\\right\\} \\label{uncertainty set}\n\\end{equation}\nas the set of points $(d,\\tilde p)$ with posterior probability higher than level $c$. From the view of robust optimization, if $c$ is chosen such that $\\mathcal U(c)$ contains $1-\\alpha$ posterior content of $(\\delta,\\tilde\\pi)$, the optimal values of \\eqref{obj:optim} will form an interval covering at least $1-\\alpha$ posterior content of $\\zeta(\\delta,\\tilde\\pi)$.\n\nInstead of looking for an exact $(1-\\alpha)$-content prediction set, we choose our $c$ based on asymptotic theory that guarantees an asymptotically exact coverage of the true value of $E[z(Y_j)]$, which in general can be different from the choice discussed above. 
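In implementation terms, once the maximal log-posterior value has been located by a numerical optimizer, the threshold in \eqref{choice} is a one-line computation. A minimal sketch follows (in Python); the numerical value of the maximized log-posterior used here is hypothetical.\n\begin{verbatim}\nfrom scipy.stats import norm\n\ndef log_threshold(log_post_max, q):\n    # log(c) for the cutoff: the maximal log-posterior minus one half of the\n    # squared standard Normal quantile at level q.\n    return log_post_max - 0.5 * norm.ppf(q) ** 2\n\n# Example: a hypothetical maximized log-posterior of -124.7 and q = 0.975.\nlog_c = log_threshold(log_post_max=-124.7, q=0.975)\n\end{verbatim}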
Our result that justifies this approach has a similar spirit to some recent studies in calibrating uncertainty sets in distributionally robust optimization, a setting in which the uncertainty is on the underlying distribution in a stochastic problem, via asymptotic analysis based on empirical likelihood (\\cite{lam2016recovering,duchi2016statistics,blanchet2016sample,lam2017empirical}) and Bayesian methods \\citep{gupta2015near}. Despite these connections, to our best knowledge, there has been no direct attempt in using robust optimization as a principled Bayesian computation tool.\n\nOur procedure essentially recovers the quantiles of the quantity of interest directly from the posterior distribution, which is the aforementioned goal of our Bayesian analysis and is conventionally obtained from sampling (e.g., Markov chain Monte Carlo). To intuitively explain the connection, consider the case when the posterior is normalized such that\n\\[\\int_{d,\\tilde p} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p = 1.\\]\nThe described quantile is defined as\n\\begin{equation}\n\\min\\left\\{a\\in\\mathbb R: \\int_{\\zeta(d,\\tilde p)\\leq a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\\right\\}\\label{quantile def}\n\\end{equation}\nAssume that for every $a$ in consideration, there exists $(d,\\tilde p)$ such that $\\zeta(d,\\tilde p)=a$. Then \\eqref{quantile def} is equal to\n\\begin{equation}\\begin{array}{ll}\n\\min_{a}& \\left\\{\\begin{array}{ll}\\max_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) \\\\\n\\text{subject to}&\\zeta(d,\\tilde{p}) \\leq a\\end{array}\\right\\}\\\\\n\\text{subject to}& \\int_{\\zeta(d,\\tilde p)\\leq a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\n\\end{array} \\label{quantile reformulation}\n\\end{equation}\nDenote $\\mathcal U_a=\\left\\{d,\\tilde{p} \\left| \\zeta(d,\\tilde{p}) \\leq a \\right. \\right\\}$. We can further rewrite \\eqref{quantile reformulation} as an optimization over the collection of sets in the form $\\mathcal U_a$, given by\n\\begin{equation}\\begin{array}{ll}\n\\min_{\\mathcal U_a}& \\left\\{\\begin{array}{ll}\\max_{d,\\tilde{p}} & \\zeta(d,\\tilde{p}) \\\\\n\\text{subject to}&(d,\\tilde{p}) \\in\\mathcal U_a\\end{array}\\right\\}\\\\\n\\text{subject to}& \\int_{\\mathcal U_a} \\text{post}(d,\\tilde p,\\text{data}) \\mathrm{d} d \\mathrm{d}\\tilde p \\geq1-\\alpha\n\\end{array} \\label{MC_sampler}\n\\end{equation}\nSuppose there exists an optimal solution $\\mathcal U^*$ to the outer optimization in \\eqref{MC_sampler}. We conclude that the $q$ quantile of $\\zeta(d,\\tilde p)$ under $\\text{post}(\\cdot,\\text{data})$ is equal to $\\max_{(d,\\tilde{p})\\in \\mathcal U^*} \\zeta(d,\\tilde{p}) $. Our chosen uncertainty set $\\mathcal U(c)$ turns out to bear a similar performance in bounding the quantity of interest as the set $\\mathcal U^*$, despite the potential vast difference in their geometries.\n\n\nTo illustrate graphically the difference between sampling quantiles and the optimization approach, suppose we are trying to find the $97.5\\%$ confidence level upper bounds for the sum of two probabilities in our system. Figure \\ref{fig:graphical_ROvSAMPLE} illustrates this with samples imposed on top of the projection of the uncertainty set in \\eqref{uncertainty set}, and it shows the similarity of the bounds provided by the two approaches. 
Clearly, $\mathcal{U}(c)$ is much smaller compared to $\mathcal{U}^*$, yet the resulting bounds are quite similar. The next subsection investigates the properties of $\mathcal U(c)$ and explains such a phenomenon.\n\begin{figure}[htb]\n{\n\centering\n\includegraphics[width=5in]{graphical_ROvSAMPLE-eps-converted-to.pdf}\n\caption{Graphical description of the differences between optimization- and sampling-based approaches where the objective is to bound the sum of the probabilities. The $1000$ dots are samples from the posterior. The upper limit of the $97.5 \%$ quantile of the sum is indicated by the dashed line, where $\mathcal{U}^*$ is the solution to the outer optimization in (\ref{MC_sampler}) given that these samples are the entirety of the posterior distribution. The region labeled $\mathcal{U}$ is the projection of $\mathcal{U}(c)$ onto this two-dimensional plane and the maximum of the optimization is determined by the solid line with $c=2$. \label{fig:graphical_ROvSAMPLE}}\n}\n\end{figure}\n\subsection{Theoretical Guarantees} \label{sec:theo}\nWe first study the asymptotic behavior of the optimal values in \eqref{obj:optim}. We will consider a more general setting in which the objective function is $ \sum_{j=1}^s\sum_{i=1}^m z_j(i) p_j(i)$ for some functions $z_j:\mathcal Y\to\mathbb R$, i.e., a linear combination of individual expectations at $x_j$. Evidently, $z_j(i)=0$ for all but one $j$ will reduce to the setting in \eqref{obj:optim}. For ease of exposition, define\n\[Z_j \stackrel{\text{dist}}{=}z_j(Y_j),\]\nwhere $Y_j\sim\pi_j$. Let $\mathcal{U}_n (c) $ be defined as in \eqref{uncertainty set}, with the subscript $n=\sum_{j=1}^sn_j$ indicating the total number of observed responses from the real system. Similarly, let $\text{post}_n(d,\tilde p)$ represent the posterior function when the data contains $n$ observations.\n\nWe have the following result (which is shown as Lemma \ref{lem:op_consistency} in the appendix):\n\begin{theorem} \label{thm:op_consistency}\nSuppose that $\tilde{\pi}_j(i)>0$ and $\pi_j(i)>0$ for all $i = 1,\ldots,m$ and $j = 1,\ldots,s$. For each observation, the design point is an independent random variable with sample space $\{x_1,\ldots,x_s\}$ and respective positive probabilities $\xi_1,\ldots, \xi_s$.\n\nLet $\text{post}_n^* = \max_{d,\tilde{p}} \text{post}_n (d,\tilde{p})$ and $\hat{\pi}_j^n (i) = n_j(i)\/n_j .$ Then for all $\ell >0$,\n\begin{equation}\n\lim_{n \rightarrow \infty} \sqrt{n} \left(\max_{(d,\tilde p)\in\mathcal{U}_n(\text{post}_n^*-\ell^2\/2)} \sum_{j=1}^s\sum_{i=1}^m z_j(i) d_j(i)\tilde p_j(i) - \sum_{j=1}^s \sum_{i=1}^m z_j(i) \hat{\pi}_j^n(i) \right) = \ell \sqrt{\sum_{j=1}^s \xi_j^{-1} \mathbb{V} Z_j} \label{eq:optim1}\n\end{equation}\nand\n\begin{equation}\n\lim_{n \rightarrow \infty} \sqrt{n} \left( \min_{(d,\tilde p)\in\mathcal{U}_n(\text{post}_n^*-\ell^2\/2)} \sum_{j=1}^s\sum_{i=1}^m z_j(i)d_j(i)\tilde p_j(i) - \sum_{j=1}^s \sum_{i=1}^m z_j(i) \hat{\pi}_j^n(i) \right) = -\ell \sqrt{\sum_{j=1}^s \xi_j^{-1} \mathbb{V} Z_j} \label{eq:optim2}\n\end{equation}\n\nalmost surely, where $\mathbb{V} $ represents the variance.\n\end{theorem}\n\nAn immediate observation of Theorem \ref{thm:op_consistency} is that the simulation replication size $\tilde n_j$ plays no role in the asymptotic behavior of the optimization output as $n$ gets large.
Thus, with enough real data, the accuracy of the simulation runs is inconsequential, as the values of the real data dominate the results.\nThe same observation also holds for the prior choices made for $f(\cdot)$ and $g(\cdot)$. This asymptotic independence of the prior resembles the classic Bernstein-von Mises theorem. In summary, our optimization approach generates bounds in tight asymptotic agreement with those obtained from the typical data-only inference approaches.\n\nIt is known that not every posterior distribution is guaranteed to have appropriate consistency properties; see the works of \cite{freedman1963asymptotic} and \cite{diaconis1986consistency}. Bayesian credible sets resembling the form of \eqref{uncertainty set} are not guaranteed to produce rational inference; for more information on the general properties of Bayesian credible sets, see \cite{cox1993analysis} or \cite{szabo2015frequentist}. In particular, two complications arise in proving Theorem \ref{thm:op_consistency}. First, the measure associated with the likelihood function only concentrates on a lower dimensional manifold (dimension $sm-s$) of the parameter space (dimension $sm$). This issue is by-and-large a technical one and is addressed in Lemmas \ref{lemma:str_consis} and \ref{lemma:weak_consis} proved in the appendix. Second, the optimization problem requires a particular shape of the uncertainty set to yield the desired asymptotic properties. As a main observation, the uncertainty set $\mathcal U_n(\text{post}_n^*-\ell^2\/2)$ can be shown to asymptotically become an ellipsoid, and optimization problem \eqref{obj:optim} therefore reduces to a quadratic program with an elliptical constraint, which can be analyzed and elicits the convergence behavior in Theorem \ref{thm:op_consistency}.\n\n\nThere are several implications of Theorem \ref{thm:op_consistency}. The first is that both the upper and lower limits provided by the optimization converge to the true value almost surely as the data gets large, as described below:\n\begin{corollary}\nUnder the same assumptions in Theorem \ref{thm:op_consistency}, we have, for all $\ell>0$,\n$$\max_{(d,\tilde p)\in\mathcal{U}_n(\text{post}_n^*-\ell)} \sum_{j=1}^s\sum_{i=1}^m z_j(i)d_j(i)\tilde p_j(i)\to\sum_{j=1}^s\sum_{i=1}^m z_j(i)\pi_j(i)$$\nand\n$$\min_{(d,\tilde p)\in\mathcal{U}_n(\text{post}_n^*-\ell)} \sum_{j=1}^s\sum_{i=1}^m z_j(i)d_j(i)\tilde p_j(i)\to\sum_{j=1}^s\sum_{i=1}^m z_j(i)\pi_j(i)$$\nalmost surely as $n\to\infty$.\label{consistency}\n\end{corollary}\n\nCorollary \ref{consistency} shows that with enough data the proposed posterior estimate is a good representation of the truth.
It is a basic property that is in line with Bayesian consistency results studied traditionally by statisticians \\citep{schwartz1965bayes}.\n\nFurthermore, Theorem \\ref{thm:op_consistency} also implies that, as $n$ gets large,\n\\begin{equation}\n\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) \\approx\\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) + \\ell \\sqrt{\\frac{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j}{n}} \\label{asymptotic max}\n\\end{equation}\nand\n\\begin{equation}\\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) \\approx\\sum_{j=1}^s \\sum_{i=1}^m z_j(i) \\hat{\\pi}_j^n(i) - \\ell \\sqrt{\\frac{\\sum_{j=1}^s \\xi_j^{-1} \\mathbb{V} Z_j}{n}} \\label{asymptotic min}\n\\end{equation}\nNote that the left hand sides of \\eqref{asymptotic max} and \\eqref{asymptotic min} are precisely the classical confidence bounds on $\\sum_{j=1}^s\\sum_{i=1}^m z_j(i) \\pi_j(i)$ generated from the central limit theorem with $n_j \\approx \\xi_j n$. This hints at a proper coverage in large samples at the level $1-\\alpha$. In fact, we have the following result:\n\\begin{corollary}\nUnder the same assumptions in Theorem \\ref{thm:op_consistency}, we have\n$$\\mathbb P\\left(\\max_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\geq\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i) \\right)\\to \\Phi(\\ell)$$\nand\n$$\\mathbb P\\left(\\min_{(d,\\tilde p)\\in\\mathcal{U}_n(\\text{post}_n^*-\\ell^2\/2)} \\sum_{j=1}^s\\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i)\\leq\\sum_{j=1}^s\\sum_{i=1}^m z_j(i)\\pi_j(i)\\right)\\to \\Phi(\\ell)$$\nas $n\\to\\infty$, where $\\mathbb P$ denotes the probability generated from a data set of size $n$.\\label{CI}\n\\end{corollary}\n\n\n\nThe above results reveal that the proposed inference differs from purely empirical estimates only when data is sparsely collected. If data from the real system is abundant, our simulation models $\\tilde{\\pi}_1(\\cdot),\\ldots,\\tilde{\\pi}_s(\\cdot)$ will have very little impact on our resulting conclusions. In a sense, the Bayesian approach automatically balances the influences from the empirical data versus the simulation model. Complement to our asymptotic result in this section, our numerical examples in Section \\ref{sec:illustration} will demonstrate that the difference in inference between our approach and one that ignores the simulation model can be sizable in sparse data environments.\n\n\nWe conclude this section by presenting a result on the consistency of a ``ranking and selection\" task:\n\\begin{corollary} \\label{corr:op_consistency}\nSuppose the conditions and definitions of Theorem \\ref{thm:op_consistency}. For all $\\ell>0$, if $j$ and $k$ are such that $\\mathbb{E} Z_j > \\mathbb{E} Z_k $, then\n\\[\\lim_{n \\rightarrow \\infty} \\mathbb P \\left( \\min_{(\\tilde{p},d) \\in \\mathcal{U}(\\text{post}_n^*-\\ell)} \\sum_{i=1}^m z_j(i)d_j(i)\\tilde p_j(i) < \\max_{(\\tilde{p},d) \\in\\mathcal{U}(\\text{post}_n^*-\\ell)} \\sum_{i=1}^m z_k(i)d_k(i)\\tilde p_k(i) \\right) = 0 .\\]\\label{rs}\n\\end{corollary}\nCorollary \\ref{rs} implies that the intervals for the quantities of interest at different design points do not overlap as the data gets large, if their values are truly different. 
Thus, in practice, a user who notes that the two intervals generated from the optimization problems do not overlap can reasonably conclude there is a difference between the two values.\n\n\subsection{Solvability of the optimization} \label{sec:optim}\nThis subsection discusses the tractability of the optimization problems imposed in Section \ref{sec:formulation}. We focus on the convexity of the problems which, in contrast to the previous section, will depend on the replication size from the simulation model.\n\n\n To begin, we write optimization (\ref{obj:optim}) in full (focusing only on the minimization problem) as\n\begin{equation}\begin{array}{ll}\n\min_{d,\tilde{p}} & \zeta(d,\tilde{p}) = \sum_{i=1}^m z_j(i) d_j(i) \tilde{p}_j(i), \\\n\text{subject to}& f(\tilde{p}) + g(d) + \sum_{j=1}^{s} \sum_{i=1}^{m} n_{j} (i) \log( d_j(i) \tilde{p}_j(i)) + \sum_{j=1}^{s} \sum_{i=1}^{m} \tilde{n}_{j}(i) \log \tilde{p}_j(i) \geq \log(c)\\\n&\tilde{p}_j(i) \geq 0 , d_j(i) \geq 0, \text{ for all } i,j \\\n&\sum_{i=1}^m \tilde{p}_j(i) =1, \sum_{i=1}^m \tilde{p}_j(i) d_j(i) =1, \text{ for all } j\n\end{array} \label{obj:optim1}\n\end{equation}\nThis formulation is generally non-convex because of the non-convex objective function and the non-convex constraint $\sum_{i=1}^m \tilde{p}_j(i) d_j(i) =1$, regardless of the sample sizes and the priors $f(\cdot)$ and $g(\cdot)$. However, noting that the program is individually convex in $d$ and $\tilde p$, one approach is to use alternating minimization, by sequentially optimizing $d$ fixing $\tilde p$ and $\tilde p$ fixing $d$ until no improvement is detected. Though it does not guarantee a global solution, this approach has been shown to be effective for certain chance-constrained programs (see, e.g., \cite{chen2010cvar,zymler2013distributionally,jiang2016data}).\n\nOn the other hand, supposing that there is no simulation error in estimating $\tilde\pi$, then the prior on $\tilde p$ and the associated calculations can be removed, resulting in\n\begin{equation}\begin{array}{ll}\n\max _{d} & \sum_{i=1}^m z(i)\tilde{\pi}_j(i)d_j(i) , \\\n\text{subject to}& g(d) + \sum_{j=1}^{s} \sum_{i=1}^{m} n_{j}(i) \log d_{j}(i) \geq \log(c) \\\n&d_j(i) \geq 0, \text{ for all } i,j \\\n &\sum_{i=1}^m \tilde{\pi}_j(i) d_j(i) =1, \text{ for all } j\n\end{array} \label{optimization no MC}\n\end{equation}\nIf in addition the function $g(\cdot)$ is a concave function, then \eqref{optimization no MC} is a convex optimization problem. We summarize this as:\n\begin{proposition}\nProblem (\ref{optimization no MC}) is a convex program if $g(\cdot)$ is a concave function on $\mathbb{R}^{sm}$.\n\end{proposition}\n\nRecalling our discussion in Section \ref{sec:learn}, one example of a concave $g(\cdot)$ corresponds to the multivariate Gaussian prior of (\ref{eq:Gaussian}).\n\nFormulation \eqref{optimization no MC} can be reasonably used in situations where simulation replications are abundant, so that the simulation outputs are very close to $\tilde\pi_j$. Our next result shows that, in the case that $\tilde n_j$ is sufficiently large and $g(\cdot)$ satisfies a slightly stronger condition, using \eqref{obj:optim1} also leads to a convex problem.
To prepare for this result, we rewrite the decision variables in \eqref{obj:optim1} to get\n\begin{equation}\begin{array}{ll}\n\max _{p,\tilde{p}} & \sum_{i=1}^m z(i) p_j(i), \\\n\text{subject to}& f(\tilde{p}) + h(p,\tilde{p}) + \sum_{j=1}^{s} \sum_{i=1}^{m} n_{j} (i) \log p_j(i) + \sum_{j=1}^{s} \sum_{i=1}^{m} \tilde{n}_{j}(i) \log \tilde{p}_j(i) \geq \log(c)\\\n&\tilde{p}_j(i) \geq 0 , p_j(i) \geq 0, \text{ for all } i,j \\\n&\sum_{i=1}^m \tilde{p}_j(i) =1, \sum_{i=1}^m p_j(i) =1, \text{ for all } j\n\end{array} \label{optimization reformulation}\n\end{equation}\nwhere $h(p,\tilde{p}) = g(p\/\tilde{p})$ with the operation $p\/\tilde p$ defined component-wise. We recall the definition that a function $r(\cdot)$ is strongly concave if for all $a$ and $b$ and $0 \leq \lambda \leq 1$,\n\[r(\lambda a+ (1-\lambda) b ) \geq \lambda r(a)+ (1-\lambda) r(b) + \beta \lambda (1-\lambda) \|a-b\|^2, \]\nwhere $\|a-b\|$ is the Euclidean norm and $\beta$ is some positive constant \cite[pp 60]{nesterov2003introductory}. Our result is:\n\n\begin{theorem} \label{thm:convex}\nAssume that $g(\cdot)$ is strongly concave and differentiable on $\mathbb R_+^{sm}$ and the derivative is bounded on all compact sets in $\mathbb R_+^{sm}$, $f(\cdot)$ is bounded from above, and $\tilde\pi_j(i)>0$ for all $i,j$.\n\nLet $\mathcal{U}_{\tilde{n}} (c_{\tilde{n}})$ be the set of feasible solutions for (\ref{optimization reformulation}) where\n\[\log c_{\tilde{n}} = -\frac{\ell^2}{2} + \max_{p,\tilde{p}}\left\{ f(\tilde{p}) + h(p,\tilde{p}) + \sum_{j=1}^{s} \sum_{i=1}^{m} n_j(i) \log p_j(i) + \sum_{j=1}^{s} \sum_{i=1}^{m} \tilde{n}_{j}(i) \log \tilde{p}_j(i)\right\},\]\nfor some constant $\ell > 0$. Then as $\tilde{n} \rightarrow \infty$, $\mathbb{P} \left( \mathcal{U}_{\tilde{n}} (c_{\tilde{n}}) \text{ is convex}\right) \rightarrow 1. $\n\end{theorem}\n\nThus, given access to sufficient computing resources and properly choosing $g(\cdot)$, one can use a convex optimization solver to carry out our proposed approach, no matter how few or many data were collected from the real system. Note that this observation holds even when $f(\cdot)$ is not concave. Theorem \ref{thm:convex} hinges on a joint convexity argument with respect to $(d,\tilde{p})$ in the asymptotic regime as $\tilde n$ grows but $n$ is fixed.\n\n\section{Numerical Illustrations } \label{sec:illustration}\nWe demonstrate our approach with two real-data examples. The first is a proof-of-concept investigation in modeling a call center. The second supports a staffing decision for the manufacturing production line discussed in the introduction of this article.\n\n\subsection*{Call center example}\label{sec:call center}\nConsider the call center data originally analyzed in \cite{brown2005statistical}. This dataset is associated with a call center where a customer calls in and is placed in a queue until one of $x$ servers is available. From these data, the sample mean of the waiting time (from entry to service for a customer) from 9:00 to 10:00 am is calculated. In this narrow time period, the arrival rate and service rate, which are time inhomogeneous according to \cite{brown2005statistical}, should be approximately homogeneous. Here, we also account for the number of servers operating in the system at any given time, which appears to differ between days (see the appendix for details).
To our reading, this subset of the dataset was by-and-large ignored in \cite{brown2005statistical}'s original analysis.\n\nOur model for this call center will be an $x$-server first-come-first-serve queue. Following common practice, both the interarrival and service times are modeled as exponentially distributed. After a warm-up period, the sample average of the waiting time is measured over the course of a one-hour window. This, in principle, agrees with \cite{brown2005statistical}. In the spirit of ad-hoc calibration, two additional features were added: (i) the arrival rate is randomly generated each day from a log-normal distribution with associated mean $1.8$ and variance $0.4$, and (ii) a customer will abandon the queue if the waiting time is longer than an exponential random variable with mean $5$. Adding both of these features resulted in a simulation model that was closer to the observed data.\n\nThe response is discretized into the four categories $<1$, $1-2$, $2-3$, and $> 3$ minutes ($m=4$) and we study $5-9$ servers ($s = 5$). No data from the real system is observed at either $5$ or $9$ servers. The simulation model was evaluated 250 times at each design point.\n\begin{figure}[htb]\n{\n\centering\n\includegraphics[width=\textwidth]{orig_data_calibration-eps-converted-to.pdf}\n\caption{The predictive intervals for $\pi$ used for the call center study described in Section \ref{sec:illustration}. The subplots from left to right show the results with 5 to 9 servers, respectively. The solid line is the observed frequency from the data when it is available at the $6-8$ server levels. The dashed line is the observed frequency from the simulation model. The rectangles represent interval predictions obtained either from traditional sampling (left rectangles) or from the optimization (right rectangles). } \label{fig:orig_data_calibration}\n}\n\end{figure}\n\nFigure \ref{fig:orig_data_calibration} shows the intervals implied by the proposed posterior distribution using a sampling-based approach and our optimization approach using \eqref{obj:optim} and \eqref{choice}. The functions $f$ and $g$ were of Gaussian form (\ref{eq:Gaussian}), with correlation $0.75^{|x_i-x_j| } \cdot 0.75^{|k-l| }$ between the $i$th and $j$th staffing levels and the $k$th and $l$th outputs with $\lambda_d = 1\/4$ and $\lambda_p = 1\/100$. The key is the ability to answer questions such as: how likely is it that the average waiting time when there are $9$ servers is between $2$ and $3$ minutes? There is no data, but the simulation model combined with the observed responses and our prior information gives us an estimate of somewhere less than $15 \%$. This accounts for the discrepancy that we observed based on the recorded responses at $6-8$ servers as well as the potential Monte Carlo error from running a finite number of simulations. Overall, the ranges at other staffing levels appear to agree with both the data and the simulation outputs. We are not confident, for example, that staffing $5$ servers will produce the same results as the simulation model, which has average waiting times over $3$ minutes about $15 \%$ of the time. Based on the recorded responses, this could be $30 \%$, but it could also be as low as about $1\%$.\n\nThe above discussion offers some preliminary validity check on the practical implementation. Next we illustrate the theoretical discussions in Section \ref{sec:theo}.
For this purpose, consider an example where the true model is specified by us, some data is generated from this model, and an inexact simulation model is specified.\n\nWe use the same simulation model. In the dataset we found that waiting times are underestimated by the simulation model when many servers are present. To replicate this, we add onto our ``true\" model an event (according to a Poisson process, average 5 min between events) in which if there are $5$ idle servers, all idle servers will take a break (average 30 minutes, exponentially distributed) and if there are more than $7$ idle servers, these additional servers will stop servicing for the remainder of the hour. This will naturally inflate the waiting times. While not exactly mimicking the real system, this reflects the general phenomenon that all operators may not be working at all times in a call center. Thus even though $8$ servers may be ``working'', because of miscellaneous personnel reasons, the queue behaves differently from the simulation model. All other features of the true model are exactly the same as the simulation model, including the arrival rates and departure rates.\n\nIn this numerical experiment there are either $5$, $10$, $20$, $200$ or $2000$ total observations. Two observation schemes are examined: in the first, each observation comes from one of the staffing levels 6, 7 and 8 with equal probabilities of $1\/3$; in the second, each observation comes from one of the staffing levels 5, 6, 7, 8 and 9 with equal probabilities of $1\/5$.\n\n Figure \ref{fig:ROvSAMPLE_p} shows the prediction of the probability that the average waiting time will be less than $1$ minute. We compare to a data-only approach which consists of bounds based on the classic confidence interval with binomial responses (either less than one minute or not). All approaches behave similarly when the amount of data is large, agreeing with Theorem \ref{thm:op_consistency}. But there are differences in the data-poor performances. Consider the first observation scheme, where no data is collected at $5$ and $9$ servers. The proposed approach correctly predicts the chance of a short average waiting time with $9$ servers to be large, while the data-only approach does not have access to the simulation model and thus predicts the chance of a short average waiting time with $9$ servers to be possibly small (the prediction covers all possibilities). Moreover, the data-only approach can be quite poor when only a few data points exist. The conclusions reached from using the posterior with either traditional sampling or our optimization approach are comparable in the large data cases, but do differ in the small data cases. The computation time of the optimization approach was orders of magnitude smaller for this example compared to the sampling.\n\begin{figure}[htb]\n{\n\centering\n\includegraphics[width=\textwidth]{ROvSAMPLE_p-eps-converted-to.pdf}\n\caption{The bounds produced in the call center example in Section \ref{sec:illustration}. The rectangles in the six panels are decided by the data-only approach (left), the sampling-based calibration approach with $2.5\%$ and $97.5 \%$ quantiles (middle), and the proposed optimization-based calibration approach (right). The set of $5$ rectangles for each number of servers represent $5$, $10$, $20$, $200$ and $2000$ observations, respectively. The long horizontal line represents the true value.
The top set of panels refers to the case where data is observed only at 6, 7, and 8 servers and the bottom set of panels refers to the case where data is observed at 5, 6, 7, 8 and 9 servers. \label{fig:ROvSAMPLE_p}}\n}\n\end{figure}\n\n\n\subsection*{Manufacturing line example}\nThis subsection uses our calibration framework to assist a decision process for staffing a real production line. A major manufacturer of automobiles has two parallel production lines, labeled box and closure, that suffer from frequent failures. These failures are predominantly handled by a group of workers trained to quickly identify and resolve small issues. Due to the time needed to traverse the line combined with the relative frequency of failures, four workers are currently staffed in this support position. The manufacturer is interested in the impact of this staffing level on the throughput of the line, measured in units per hour. The lines' behavior is classified into thirteen categories from 46 to 74 in $2$ units per hour increments.\n\nThe two lines have different criteria for poor performance. The box line will starve the next line if the throughput drops below 60. The closure line will starve the next line if the throughput drops below 56. The goal is thus to ensure that the chance of starving the next line remains near the current level when there are $4$ workers. Since experiments on the real system would be extremely costly and potentially dangerous, an outside company was hired to design a discrete-event simulation model to investigate potential staffing reconfigurations for this group of workers. Additionally, a two-person internal team was tasked with refining and adjusting the simulation model via ad-hoc calibration, including detailed input analysis that broke down failure rates by stations along the line. Despite these extensive and costly efforts, the simulation model did not perfectly agree with the data collected in the current four-worker configuration (see Figure \ref{fig:illustration}) due to several assumptions made during the model development process. These included typical input assumptions like independent and exponentially distributed inter-failure times as well as more complicated structural assumptions such as workers returning to their station in between maintenance calls. Roughly $75$ realizations from the simulation were completed at each design point, a number judged sufficient toward the end of the project.\n\n\begin{figure}[t]\n{\n\centering\n\includegraphics[width=6.5in]{manufacturing_example-eps-converted-to.pdf}\n\caption{Illustration for the manufacturing case study presented in Section \ref{sec:illustration}. Subplots far left and middle right are for the box and closure lines, comparing the model histogram and the frequency histogram from the data. Subplots middle left and far right show the predictive intervals from the case study presented in Section \ref{sec:illustration} for the box and closure lines, respectively. \label{fig:illustration}}\n}\n\end{figure}\n\nKnowing this simulation model is not perfect, what would be a reasonable estimate for the mean throughput of the line at each staffing level from $1$ worker to $6$ workers? If we can define the priors $f$ and $g$, then this becomes an answerable question using the method described in this article; a small sketch of one such prior construction is given below.
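As a minimal sketch of such a construction (in Python), the correlation matrix can couple staffing levels and throughput categories through a product kernel in the spirit of the Gaussian form \eqref{eq:Gaussian}; the decay factors mirror the choices reported in the next paragraph, while the function and variable names are purely illustrative.\n\begin{verbatim}\nimport numpy as np\n\ndef product_correlation(levels, outputs, rho_x=0.75, rho_y=0.9):\n    # Correlation rho_x^{|x_i - x_j|} * rho_y^{|k - l|} between (staffing level, output)\n    # pairs, assembled as a Kronecker product; rho_x and rho_y are design choices in (0, 1).\n    Rx = rho_x ** np.abs(np.subtract.outer(levels, levels))\n    Ry = rho_y ** np.abs(np.subtract.outer(outputs, outputs))\n    return np.kron(Rx, Ry)  # an (s*m) x (s*m) correlation matrix\n\nlevels = np.arange(1, 7)   # staffing levels 1 through 6 workers\noutputs = np.arange(13)    # indices of the throughput categories\nR_d = product_correlation(levels, outputs)\n\n# log g(d) up to a constant, with prior mean 1 and lambda_d = 1\/4, as in the Gaussian prior.\ndef log_g(d):\n    r = d - 1.0\n    return -0.25 * r @ np.linalg.solve(R_d, r)\n\end{verbatim}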
Like the previous example, the functions $f$ and $g$ were of the Gaussian form (\ref{eq:Gaussian}), with correlation $0.75^{|x_i-x_j| } \cdot 0.9^{|k-l| }$ between the $i$th and $j$th staffing levels and the $k$th and $l$th possible throughputs, with $\lambda_d = 1\/4$ and $\lambda_p = 1\/100$. This agreed, as well as possible, with the expectations of the builders of the simulation model, who think that there is a large correlation across outputs (i.e. a similar likelihood ratio at places close in the sample space) and smaller amounts of correlation across the inputs (i.e. the builders are unsure of the behavior of the likelihood ratio across the input variables, but generally anticipate it is close for similar staffing levels).\n\nFigure \ref{fig:illustration} displays the lower and upper bounds on the probabilities of low production for each line constructed from our method. As we move away from our observations at a staffing level of $4$, the predictive bounds on the mean throughput get larger and become closer to the simulation model. This expansion of predictive intervals and the regression to the simulation model mimic what is seen in calibration of deterministic models \citep{kennedy2001bayesian} and stochastic kriging \citep{ankenman2010stochastic}. Around $4$ workers, the assumption is that there is some correlation between staffing levels that decays as we move away from a staffing level of $4$, thus expanding our predictive intervals.\n\n In terms of comparison to a data-only alternative, there is clearly no ability to distinguish between different staffing levels using data alone. In terms of an answer to the fundamental question posed by the manufacturer, a few things can be gleaned from these bounds. For example, it becomes clear from this analysis that staffing a single worker would with high likelihood starve the next lines, which is the core problem the manufacturer would like to avoid. The ultimate decision from the manufacturer was to do a field study of the three-worker staffing level. This was based on both the feasibility assurance provided by the simulation model and the potential benefit of redeploying a worker into a different position.\n\n\n\section*{Acknowledgements}\nWe thank Ilan Guedj for the data organization and Avi Mandelbaum for continuing to place the data on the website \href{http:\/\/ie.technion.ac.il\/serveng\/}{http:\/\/ie.technion.ac.il\/serveng\/}. Additional thanks are due to the\nTauber Institute for Global Operations at the University of Michigan, Anthony Sciuto, Anusuya Ramdass and Brian Talbot. We also gratefully acknowledge support from the National Science Foundation under grants CMMI-1542020, CMMI-1523453 and CAREER CMMI-1653339.\n\n\n\n\bibliographystyle{informs2014}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Submission of conference papers to ICLR 2021}\n\nICLR requires electronic submissions, processed by\n\url{https:\/\/openreview.net\/}.
See ICLR's website for more instructions.\n\nIf your paper is ultimately accepted, the statement {{ {\\bm{\\theta}}_{t} }\n {\\textbackslash}iclrfinalcopy} should be inserted to adjust the\nformat to the camera ready requirements.\n\nThe format for the submissions is a variant of the NeurIPS format.\nPlease read carefully the instructions below, and follow them\nfaithfully.\n\n\\subsection{Style}\n\nPapers to be submitted to ICLR 2021 must be prepared according to the\ninstructions presented here.\n\n\nAuthors are required to use the ICLR \\LaTeX{} style files obtainable at the\nICLR website. Please make sure you use the current files and\nnot previous versions. Tweaking the style files may be grounds for rejection.\n\n\\subsection{Retrieval of style files}\n\nThe style files for ICLR and other conference information are available online at:\n\\begin{center}\n \\url{http:\/\/www.iclr.cc\/}\n\\end{center}\nThe file \\verb+iclr2021_conference.pdf+ contains these\ninstructions and illustrates the\nvarious formatting requirements your ICLR paper must satisfy.\nSubmissions must be made using \\LaTeX{} and the style files\n\\verb+iclr2021_conference.sty+ and \\verb+iclr2021_conference.bst+ (to be used with \\LaTeX{}2e). The file\n\\verb+iclr2021_conference.tex+ may be used as a ``shell'' for writing your paper. All you\nhave to do is replace the author, title, abstract, and text of the paper with\nyour own.\n\nThe formatting instructions contained in these style files are summarized in\nsections \\ref{gen_inst}, \\ref{headings}, and \\ref{others} below.\n\n\\section{General formatting instructions}\n\\label{gen_inst}\n\nThe text must be confined within a rectangle 5.5~inches (33~picas) wide and\n9~inches (54~picas) long. The left margin is 1.5~inch (9~picas).\nUse 10~point type with a vertical spacing of 11~points. Times New Roman is the\npreferred typeface throughout. Paragraphs are separated by 1\/2~line space,\nwith no indentation.\n\nPaper title is 17~point, in small caps and left-aligned.\nAll pages should start at 1~inch (6~picas) from the top of the page.\n\nAuthors' names are\nset in boldface, and each name is placed above its corresponding\naddress. The lead author's name is to be listed first, and\nthe co-authors' names are set to follow. Authors sharing the\nsame address can be on the same line.\n\nPlease pay special attention to the instructions in section \\ref{others}\nregarding figures, tables, acknowledgments, and references.\n\n\nThere will be a strict upper limit of 8 pages for the main text of the initial submission, with unlimited additional pages for citations. Note that the upper page limit differs from last year!Authors may use as many pages of appendices (after the bibliography) as they wish, but reviewers are not required to read these. During the rebuttal phase and for the camera ready version, authors are allowed one additional page for the main text, for a strict upper limit of 9 pages.\n\n\\section{Headings: first level}\n\\label{headings}\n\nFirst level headings are in small caps,\nflush left and in point size 12. One line space before the first level\nheading and 1\/2~line space after the first level heading.\n\n\\subsection{Headings: second level}\n\nSecond level headings are in small caps,\nflush left and in point size 10. One line space before the second level\nheading and 1\/2~line space after the second level heading.\n\n\\subsubsection{Headings: third level}\n\nThird level headings are in small caps,\nflush left and in point size 10. 
One line space before the third level\nheading and 1\/2~line space after the third level heading.\n\n\\section{Citations, figures, tables, references}\n\\label{others}\n\nThese instructions apply to everyone, regardless of the formatter being used.\n\n\\subsection{Citations within the text}\n\nCitations within the text should be based on the \\texttt{natbib} package\nand include the authors' last names and year (with the ``et~al.'' construct\nfor more than two authors). When the authors or the publication are\nincluded in the sentence, the citation should not be in parenthesis using \\verb|\\citet{}| (as\nin ``See \\citet{Hinton06} for more information.''). Otherwise, the citation\nshould be in parenthesis using \\verb|\\citep{}| (as in ``Deep learning shows promise to make progress\ntowards AI~\\citep{Bengio+chapter2007}.'').\n\nThe corresponding references are to be listed in alphabetical order of\nauthors, in the \\textsc{References} section. As to the format of the\nreferences themselves, any style is acceptable as long as it is used\nconsistently.\n\n\\subsection{Footnotes}\n\nIndicate footnotes with a number\\footnote{Sample of the first footnote} in the\ntext. Place the footnotes at the bottom of the page on which they appear.\nPrecede the footnote with a horizontal rule of 2~inches\n(12~picas).\\footnote{Sample of the second footnote}\n\n\\subsection{Figures}\n\nAll artwork must be neat, clean, and legible. Lines should be dark\nenough for purposes of reproduction; art work should not be\nhand-drawn. The figure number and caption always appear after the\nfigure. Place one line space before the figure caption, and one line\nspace after the figure. The figure caption is lower case (except for\nfirst word and proper nouns); figures are numbered consecutively.\n\nMake sure the figure caption does not get separated from the figure.\nLeave sufficient space to avoid splitting the figure and figure caption.\n\nYou may use color figures.\nHowever, it is best for the\nfigure captions and the paper body to make sense if the paper is printed\neither in black\/white or in color.\n\\begin{figure}[h]\n\\begin{center}\n\\fbox{\\rule[-.5cm]{0cm}{4cm} \\rule[-.5cm]{4cm}{0cm}}\n\\end{center}\n\\caption{Sample figure caption.}\n\\end{figure}\n\n\\subsection{Tables}\n\nAll tables must be centered, neat, clean and legible. Do not use hand-drawn\ntables. The table number and title always appear before the table. See\nTable~\\ref{sample-table}.\n\nPlace one line space before the table title, one line space after the table\ntitle, and one line space after the table. The table title must be lower case\n(except for first word and proper nouns); tables are numbered consecutively.\n\n\\begin{table}[t]\n\\caption{Sample table title}\n\\label{sample-table}\n\\begin{center}\n\\begin{tabular}{ll}\n\\multicolumn{1}{c}{\\bf PART} &\\multicolumn{1}{c}{\\bf DESCRIPTION}\n\\\\ \\hline \\\\\nDendrite &Input terminal \\\\\nAxon &Output terminal \\\\\nSoma &Cell body (contains cell nucleus) \\\\\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{Default Notation}\n\nIn an attempt to encourage standardized notation, we have included the\nnotation file from the textbook, \\textit{Deep Learning}\n\\cite{goodfellow2016deep} available at\n\\url{https:\/\/github.com\/goodfeli\/dlbook_notation\/}. 
Use of this style\nis not required and can be disabled by commenting out\n\\texttt{math\\_commands.tex}.\n\n\n\\centerline{\\bf Numbers and Arrays}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1in}p{3.25in}}\n$\\displaystyle a$ & A scalar (integer or real)\\\\\n$\\displaystyle {\\bm{a}}$ & A vector\\\\\n$\\displaystyle {\\bm{A}}$ & A matrix\\\\\n$\\displaystyle {\\tens{A}}$ & A tensor\\\\\n$\\displaystyle {\\bm{I}}_n$ & Identity matrix with $n$ rows and $n$ columns\\\\\n$\\displaystyle {\\bm{I}}$ & Identity matrix with dimensionality implied by context\\\\\n$\\displaystyle {\\bm{e}}^{(i)}$ & Standard basis vector $[0,\\dots,0,1,0,\\dots,0]$ with a 1 at position $i$\\\\\n$\\displaystyle \\text{diag}({\\bm{a}})$ & A square, diagonal matrix with diagonal entries given by ${\\bm{a}}$\\\\\n$\\displaystyle {\\textnormal{a}}$ & A scalar random variable\\\\\n$\\displaystyle {\\mathbf{a}}$ & A vector-valued random variable\\\\\n$\\displaystyle {\\mathbf{A}}$ & A matrix-valued random variable\\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Sets and Graphs}\n\\bgroup\n\\def1.5{1.5}\n\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle {\\mathbb{A}}$ & A set\\\\\n$\\displaystyle \\mathbb{R}$ & The set of real numbers \\\\\n$\\displaystyle \\{0, 1\\}$ & The set containing 0 and 1 \\\\\n$\\displaystyle \\{0, 1, \\dots, n \\}$ & The set of all integers between $0$ and $n$\\\\\n$\\displaystyle [a, b]$ & The real interval including $a$ and $b$\\\\\n$\\displaystyle (a, b]$ & The real interval excluding $a$ but including $b$\\\\\n$\\displaystyle {\\mathbb{A}} \\backslash {\\mathbb{B}}$ & Set subtraction, i.e., the set containing the elements of ${\\mathbb{A}}$ that are not in ${\\mathbb{B}}$\\\\\n$\\displaystyle {\\mathcal{G}}$ & A graph\\\\\n$\\displaystyle \\parents_{\\mathcal{G}}({\\textnormal{x}}_i)$ & The parents of ${\\textnormal{x}}_i$ in ${\\mathcal{G}}$\n\\end{tabular}\n\\vspace{0.25cm}\n\n\n\\centerline{\\bf Indexing}\n\\bgroup\n\\def1.5{1.5}\n\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle {a}_i$ & Element $i$ of vector ${\\bm{a}}$, with indexing starting at 1 \\\\\n$\\displaystyle {a}_{-i}$ & All elements of vector ${\\bm{a}}$ except for element $i$ \\\\\n$\\displaystyle {A}_{i,j}$ & Element $i, j$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\bm{A}}_{i, :}$ & Row $i$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\bm{A}}_{:, i}$ & Column $i$ of matrix ${\\bm{A}}$ \\\\\n$\\displaystyle {\\etens{A}}_{i, j, k}$ & Element $(i, j, k)$ of a 3-D tensor ${\\tens{A}}$\\\\\n$\\displaystyle {\\tens{A}}_{:, :, i}$ & 2-D slice of a 3-D tensor\\\\\n$\\displaystyle {\\textnormal{a}}_i$ & Element $i$ of the random vector ${\\mathbf{a}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\n\\centerline{\\bf Calculus}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle\\frac{d y} {d x}$ & Derivative of $y$ with respect to $x$\\\\ [2ex]\n$\\displaystyle \\frac{\\partial y} {\\partial x} $ & Partial derivative of $y$ with respect to $x$ \\\\\n$\\displaystyle \\nabla_{\\bm{x}} y $ & Gradient of $y$ with respect to ${\\bm{x}}$ \\\\\n$\\displaystyle \\nabla_{\\bm{X}} y $ & Matrix derivatives of $y$ with respect to ${\\bm{X}}$ \\\\\n$\\displaystyle \\nabla_{\\tens{X}} y $ & Tensor containing derivatives of $y$ with respect to ${\\tens{X}}$ \\\\\n$\\displaystyle \\frac{\\partial f}{\\partial {\\bm{x}}} $ & Jacobian matrix ${\\bm{J}} \\in \\mathbb{R}^{m\\times n}$ of $f: \\mathbb{R}^n \\rightarrow \\mathbb{R}^m$\\\\\n$\\displaystyle \\nabla_{\\bm{x}}^2 
f({\\bm{x}})\\text{ or }{\\bm{H}}( f)({\\bm{x}})$ & The Hessian matrix of $f$ at input point ${\\bm{x}}$\\\\\n$\\displaystyle \\int f({\\bm{x}}) d{\\bm{x}} $ & Definite integral over the entire domain of ${\\bm{x}}$ \\\\\n$\\displaystyle \\int_{\\mathbb{S}} f({\\bm{x}}) d{\\bm{x}}$ & Definite integral with respect to ${\\bm{x}}$ over the set ${\\mathbb{S}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Probability and Information Theory}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle P({\\textnormal{a}})$ & A probability distribution over a discrete variable\\\\\n$\\displaystyle p({\\textnormal{a}})$ & A probability distribution over a continuous variable, or over\na variable whose type has not been specified\\\\\n$\\displaystyle {\\textnormal{a}} \\sim P$ & Random variable ${\\textnormal{a}}$ has distribution $P$\\\\% so thing on left of \\sim should always be a random variable, with name beginning with \\r\n$\\displaystyle \\mathbb{E}_{{\\textnormal{x}}\\sim P} [ f(x) ]\\text{ or } \\mathbb{E} f(x)$ & Expectation of $f(x)$ with respect to $P({\\textnormal{x}})$ \\\\\n$\\displaystyle \\mathrm{Var}(f(x)) $ & Variance of $f(x)$ under $P({\\textnormal{x}})$ \\\\\n$\\displaystyle \\mathrm{Cov}(f(x),g(x)) $ & Covariance of $f(x)$ and $g(x)$ under $P({\\textnormal{x}})$\\\\\n$\\displaystyle H({\\textnormal{x}}) $ & Shannon entropy of the random variable ${\\textnormal{x}}$\\\\\n$\\displaystyle D_{\\mathrm{KL}} ( P \\Vert Q ) $ & Kullback-Leibler divergence of P and Q \\\\\n$\\displaystyle \\mathcal{N} ( {\\bm{x}} ; {\\bm{\\mu}} , {\\bm{\\Sigma}})$ & Gaussian distribution %\nover ${\\bm{x}}$ with mean ${\\bm{\\mu}}$ and covariance ${\\bm{\\Sigma}}$ \\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\\centerline{\\bf Functions}\n\\bgroup\n\\def1.5{1.5}\n\\begin{tabular}{p{1.25in}p{3.25in}}\n$\\displaystyle f: {\\mathbb{A}} \\rightarrow {\\mathbb{B}}$ & The function $f$ with domain ${\\mathbb{A}}$ and range ${\\mathbb{B}}$\\\\\n$\\displaystyle f \\circ g $ & Composition of the functions $f$ and $g$ \\\\\n $\\displaystyle f({\\bm{x}} ; {\\bm{\\theta}}) $ & A function of ${\\bm{x}}$ parametrized by ${\\bm{\\theta}}$.\n (Sometimes we write $f({\\bm{x}})$ and omit the argument ${\\bm{\\theta}}$ to lighten notation) \\\\\n$\\displaystyle \\log x$ & Natural logarithm of $x$ \\\\\n$\\displaystyle \\sigma(x)$ & Logistic sigmoid, $\\displaystyle \\frac{1} {1 + \\exp(-x)}$ \\\\\n$\\displaystyle \\zeta(x)$ & Softplus, $\\log(1 + \\exp(x))$ \\\\\n$\\displaystyle || {\\bm{x}} ||_p $ & $L^p$ norm of ${\\bm{x}}$ \\\\\n$\\displaystyle || {\\bm{x}} || $ & $L^2$ norm of ${\\bm{x}}$ \\\\\n$\\displaystyle x^+$ & Positive part of $x$, i.e., $\\max(0,x)$\\\\\n$\\displaystyle \\bm{1}_\\mathrm{condition}$ & is 1 if the condition is true, 0 otherwise\\\\\n\\end{tabular}\n\\egroup\n\\vspace{0.25cm}\n\n\n\n\\section{Final instructions}\nDo not change any aspects of the formatting parameters in the style files.\nIn particular, do not modify the width or length of the rectangle the text\nshould fit into, and do not change font sizes (except perhaps in the\n\\textsc{References} section; see below). Please note that pages should be\nnumbered.\n\n\\section{Preparing PostScript or PDF files}\n\nPlease prepare PostScript or PDF files with paper size ``US Letter'', and\nnot, for example, ``A4''. 
The -t\nletter option on dvips will produce US Letter files.\n\nConsider directly generating PDF files using \\verb+pdflatex+\n(especially if you are a MiKTeX user).\nPDF figures must be substituted for EPS figures, however.\n\nOtherwise, please generate your PostScript and PDF files with the following commands:\n\\begin{verbatim}\ndvips mypaper.dvi -t letter -Ppdf -G0 -o mypaper.ps\nps2pdf mypaper.ps mypaper.pdf\n\\end{verbatim}\n\n\\subsection{Margins in LaTeX}\n\nMost of the margin problems come from figures positioned by hand using\n\\verb+\\special+ or other commands. We suggest using the command\n\\verb+\\includegraphics+\nfrom the graphicx package. Always specify the figure width as a multiple of\nthe line width as in the example below using .eps graphics\n\\begin{verbatim}\n \\usepackage[dvips]{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]{myfile.eps}\n\\end{verbatim}\nor %\n\\begin{verbatim}\n \\usepackage[pdftex]{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]{myfile.pdf}\n\\end{verbatim}\nfor .pdf graphics.\nSee section~4.4 in the graphics bundle documentation (\\url{http:\/\/www.ctan.org\/tex-archive\/macros\/latex\/required\/graphics\/grfguide.ps})\n\nA number of width problems arise when LaTeX cannot properly hyphenate a\nline. Please give LaTeX hyphenation hints using the \\verb+\\-+ command.\n\n\\subsubsection*{Author Contributions}\nIf you'd like to, you may include a section for author contributions as is done\nin many journals. This is optional and at the discretion of the authors.\n\n\\subsubsection*{Acknowledgments}\nUse unnumbered third level headings for the acknowledgments. All\nacknowledgments, including those to funding agencies, go at the end of the paper.\n\n\n\n\\section{Introduction}\n\\input{subtex\/intro.tex}\n\n\\section{Preliminaries} \\label{sec:prel}\n\\input{subtex\/preliminary.tex}\n\n\n\\section{Differential Dynamic Programming Neural Optimizer} \\label{sec:ddp-dnn}\n\\input{subtex\/ddp.tex}\n\n\n\\section{The Role of Feedback Policies} \\label{sec:dnn-trajopt}\n\\input{subtex\/role-of-feedback.tex}\n\n\\section{Experiments} \\label{sec:experiment}\n\\input{subtex\/experiment.tex}\n\n\\section{Conclusion} \\label{sec:conclusion}\nIn this work, we introduce DDPNOpt, a new class of optimizer arising from a novel perspective by bridging DNN training to optimal control and trajectory optimization.\nDDPNOpt features {layer-wise feedback policies} which improve convergence and robustness to hyper-parameters\nover existing optimizers.\nIt outperforms other OCP-inspired methods in both training performance and scalability.\nThis work provides a new algorithmic insight and bridges between deep learning and optimal control.\n\n\\newpage\n\n\n\\section*{Acknowledgments}\nThe authors would like to thank Chen-Hsuan Lin, Yunpeng Pan, Yen-Cheng Liu, and Chia-Wen Kuo for many helpful discussions on the paper.\nThis research was supported by NSF Award Number 1932288.\n\n\n\n\\subsection{Connection between Pontryagin Maximum Principle and DNNs Training} \\label{app:pmp-dev}\n\nDevelopment of the optimality conditions to OCP can be dated back to 1960s,\ncharacterized by both the Pontryagin\\textquotesingle s Maximum Principle (PMP)\nand the Dynamic Programming (DP).\nHere we review Theorem of PMP and its connection to training DNNs.\n\\begin{theorem}[Discrete-time PMP \\citep{pontryagin1962mathematical}] \\label{the:pmp} %\nLet $\\bar{{\\bm{u}}}^*$ be the optimal control trajectory for OCP and\n$\\bar{{\\bm{x}}}^*$ be the corresponding state 
trajectory.\nThen, there exists a co-state trajectory $\\bar{{\\bm{p}}}^* \\triangleq \\{{\\bm{p}}_t^*\\}_{t=1}^{T}$,\nsuch that\n\\begin{subequations} \\label{eq:mf-pmp}\n\\begin{align}\n{{\\bm{x}}}_{t+1}^{*}&= \\nabla_{{\\bm{p}}_{}} H_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{u}}_{t}^{*}\\right) { \\text{ ,} }\n\\text{ } {\\bm{x}}_{0}^{*}={\\bm{x}}_{0}\n{ \\text{ ,} } \\label{eq:pmp-forward} \\\\\n{{\\bm{p}}}_{t}^{*}&=\\nabla_{{\\bm{x}}} H_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{u}}_{t}^{*}\\right) { \\text{ ,} }\n\\text{ } {\\bm{p}}_{T}^{*}= \\nabla_{{\\bm{x}}} \\phi\\left({\\bm{x}}_{T}^{*}\\right)\n{ \\text{ ,} } \\label{eq:pmp-backward} \\\\\n{\\bm{u}}_{t}^{*} &= \\argmin_{v\\in {\\mathbb{R}^{m}}}\nH_t\\left({\\bm{x}}_{t}^{*}, {\\bm{p}}_{t+1}^{*}, {\\bm{v}} \\right)\n{ \\text{ .} } \\label{eq:pmp-max-h}\n\\end{align}\n\\end{subequations}\nwhere $H_t: {\\mathbb{R}^{n}} \\times {\\mathbb{R}^{n}} \\times {\\mathbb{R}^{m}} \\mapsto \\mathbb{R} $\nis the discrete-time Hamiltonian given by\n\\begin{align}\nH_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t \\right) \\triangleq \\ell_t({\\bm{x}}_t, {\\bm{u}}_t) + {\\bm{p}}_{t+1}^{\\mathsf{T}} f_t({\\bm{x}}_t, {\\bm{u}}_t) { \\text{ ,} }\n\\end{align}\nand \\eq{\\ref{eq:pmp-backward}} is called the \\textit{adjoint equation}.\n\n\\end{theorem}\nThe discrete-time PMP theorem can be derived using KKT conditions,\nin which the co-state ${\\bm{p}}_t$ is equivalent to the Lagrange multiplier.\nNote that the solution to \\eq{\\ref{eq:pmp-max-h}} admits an open-loop process in the sense that it does not depend on state variables.\nThis is in contrast to the Dynamic Programming principle,\nin which a feedback policy is considered.\n\n\nIt is natural to ask whether the necessary condition in the PMP theorem relates to first-order optimization methods in DNN training.\nThis is indeed the case as pointed out in \\citet{li2017maximum}: %\n\\begin{lemma}[\\cite{li2017maximum}] \\label{lm:bp-gd}\nBack-propagation satisfies \\eq{\\ref{eq:pmp-backward}} and gradient descent iteratively solves \\eq{\\ref{eq:pmp-max-h}}.\n\\end{lemma}\nLemma \\ref{lm:bp-gd} follows by first expanding the derivative of Hamiltonian w.r.t. ${\\bm{x}}_t$,\n\\begin{align}\n \\nabla_{{\\bm{x}}_t} H_t({\\bm{x}}_{t}, {\\bm{p}}_{t+1}, {\\bm{u}}_{t}) &= \\nabla_{{\\bm{x}}_t} \\ell_t({\\bm{x}}_{t}, {\\bm{u}}_{t}) + \\nabla_{{\\bm{x}}_t} f_t({\\bm{x}}_{t}, {\\bm{u}}_{t})^{\\mathsf{T}} {\\bm{p}}_{t+1} \\text{ } = \\nabla_{{\\bm{x}}_t} J({\\bar{{\\bm{u}}}}; {\\bm{x}}_0) { \\text{ .} }\n\\end{align}\nThus, \\eq{\\ref{eq:pmp-backward}} is simply the chain rule used in the Back-propagation.\nWhen $H_t$ is differentiable w.r.t. 
${\\bm{u}}_t$, one can attempt to solve \\eq{\\ref{eq:pmp-max-h}} by iteratively taking the gradient descent.\nThis will lead to\n\\begin{align} %\n{\\bm{u}}^{(k+1)}_t\n= {\\bm{u}}^{(k)}_t - \\eta \\nabla_{{\\bm{u}}_t} H_t({\\bm{x}}_{t}, {\\bm{p}}_{t+1}, {\\bm{u}}_{t})\n= {\\bm{u}}^{(k)}_t - \\eta \\nabla_{{\\bm{u}}_t} J({\\bar{{\\bm{u}}}};{\\bm{x}}_0) { \\text{ ,} }\n\\end{align}\nwhere $k$ and $\\eta$ denote the update iteration and step size.\nThus, existing optimization methods can be interpreted as iterative processes to match the PMP optimality conditions.\n\n\nInspired from Lemma \\ref{lm:bp-gd}, \\citet{li2017maximum} proposed\na PMP-inspired method, named Extended Method of Successive Approximations (E-MSA),\nwhich solves the following augmented Hamiltonian\n\\begin{equation}\n\\begin{split}\n\\tilde{H}_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t, {\\bm{x}}_{t+1}, {\\bm{p}}_{t} \\right)\n&\\triangleq\nH_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t \\right) \\\\ &\\quad +\n\\frac{1}{2}\\rho \\norm{{\\bm{x}}_{t+1} - f_t({\\bm{x}}_t,{\\bm{u}}_t)} +\n\\frac{1}{2}\\rho \\norm{{\\bm{p}}_t - \\nabla_{{\\bm{x}}_t} H_t}\n{ \\text{ .} }\n\\end{split} \\label{eq:emsa}\n\\end{equation}\n$\\tilde{H}_t$ is the original Hamiltonian augmented with the feasibility constraints on both forward states and backward co-states.\nE-MSA solves the minimization\n\\begin{align}\n {\\bm{u}}_t^* = \\argmin_{{\\bm{u}}_t \\in \\mathbf{R}^{m_t}} \\tilde{H}_t\\left({\\bm{x}}_t, {\\bm{p}}_{t+1}, {\\bm{u}}_t, {\\bm{x}}_{t+1}, {\\bm{p}}_{t} \\right)\n\\end{align}\nwith L-BFGS per layer and per training iteration.\nAs a result, we consider E-MSA also as second-order method.\n\n\n\\subsection{Proof of Proposition \\ref{prop:bp2ddp}} \\label{app:c1}\n\\begin{proof}\nWe first prove the following lemma which connects the backward pass between two frameworks in the degenerate case.\n\\begin{lemma}\nAssume $Q_{{\\bm{u}} {\\bm{x}}}^t=\\mathbf{0}$ at all stages,\nthen we have\n\\begin{align} \\label{eq:v-dyn-degenerate}\nV_{\\bm{x}}^t = \\nabla_{{\\bm{x}}_t} J { \\text{ ,} } \\text{ and } \\quad\nV_{{\\bm{x}}\\vx}^t = \\nabla_{{\\bm{x}}_t}^2 J { \\text{ ,} } \\quad \\forall t\n{ \\text{ .} }\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nIt is obvious to see that \\eq{\\ref{eq:v-dyn-degenerate}} holds at $t=T$.\nNow, assume the relation holds at $t+1$ and observe that at the time $t$, the backward passes take the form of\n\\begin{align*} %\n V_{\\bm{x}}^t\n &= Q_{\\bm{x}}^t - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }{\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} Q^t_{{\\bm{u}}}\n = \\ell^t_{{\\bm{x}}} + {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J\n = \\nabla_{{\\bm{x}}_t} J { \\text{ ,} } \\\\\n V_{{\\bm{x}}\\vx}^t &= Q_{{\\bm{x}}\\vx}^t - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }{\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} Q^t_{{\\bm{u}} {\\bm{x}}}\n = \\nabla_{{\\bm{x}}_t} \\{ \\ell^t_{{\\bm{x}}} + {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J \\}\n = \\nabla_{{\\bm{x}}_{t}}^2 J\n{ \\text{ ,} }\n\\end{align*}\nwhere we recall $J_t = \\ell_t + J_{t+1}(f_t) $ in \\eq{\\ref{eq:Jt}}.\n\\end{proof}\nNow, \\eq{\\ref{eq:newton}} follows by substituting \\eq{\\ref{eq:v-dyn-degenerate}} to the definition of $Q_{{\\bm{u}}}^t$ and $Q_{{\\bm{u}}\\vu}^t$\n\\begin{align*} %\n Q_{{\\bm{u}}}^t\n &= \\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} V_{\\bm{x}}^{t+1}\n = \\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J\n = \\nabla_{{\\bm{u}}_t} 
J\n{ \\text{ ,} } \\\\\n Q_{{\\bm{u}}\\vu}^t\n &= \\ell^t_{{\\bm{u}}\\vu} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} V_{{\\bm{x}}\\vx}^{t+1} {f}_{{\\bm{u}}}^t + V_{\\bm{x}}^{t+1} \\cdot {f}_{{\\bm{u}}\\vu}^t \\\\\n &= \\ell^t_{{\\bm{u}}\\vu} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} (\\nabla^2_{{\\bm{x}}_{t+1}} J) {f}_{{\\bm{u}}}^t + \\nabla_{{\\bm{x}}_{t+1}} J \\cdot {f}_{{\\bm{u}}\\vu}^t \\\\\n &= \\nabla_{{\\bm{u}}_t} \\{ \\ell^t_{{\\bm{u}}} + {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} \\nabla_{{\\bm{x}}_{t+1}} J \\}\n = \\nabla_{{\\bm{u}}_t}^2 J\n{ \\text{ .} }\n\\end{align*}\nConsequently, the DDP feedback policy degenerates to layer-wise Newton update.\n\\end{proof}\n\n\n\n\\subsection{Proof of Proposition \\ref{prop:gn-ddp}} \\label{app:prop:gn-ddp}\n\\begin{proof}\nWe will prove Proposition \\ref{prop:gn-ddp} by backward induction.\nSuppose at layer $t+1$, we have ${V^{t+1}_{{\\bm{x}}\\vx}} = {\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}$ and $\\ell_t\\equiv\\ell_t({\\bm{u}}_t)$,\nthen \\eq{\\ref{eq:Qt}} becomes\n\\begin{align*} %\n{Q^t_{{\\bm{x}} {\\bm{x}}}} &= {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} {V^{t+1}_{{\\bm{x}}\\vx}} {{f}^t_{{\\bm{x}}}} = {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} ({\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}) {{f}^t_{{\\bm{x}}}} = ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\otimes ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\\\\n{Q^t_{{\\bm{u}} {\\bm{x}}}} &= {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} {V^{t+1}_{{\\bm{x}}\\vx}} {{f}^t_{{\\bm{x}}}} = {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} ({\\bm{z}}_{\\bm{x}}^{t+1} \\otimes {\\bm{z}}_{\\bm{x}}^{t+1}) {{f}^t_{{\\bm{x}}}} = ({{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}) \\otimes ({{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1})\n{ \\text{ .} }\n\\end{align*}\nSetting ${\\bm{q}}_{\\bm{x}}^t:={{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}$ and ${\\bm{q}}_{\\bm{u}}^t:={{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}}{\\bm{z}}_{\\bm{x}}^{t+1}$ will give the first part of Proposition \\ref{prop:gn-ddp}.\n\nNext, to show the same factorization structure preserves through the preceding layer, it is sufficient to show\n$V_{{\\bm{x}}\\vx}^t = {\\bm{z}}_{\\bm{x}}^{t} \\otimes {\\bm{z}}_{\\bm{x}}^{t}$ for some vector ${\\bm{z}}_{\\bm{x}}^{t}$. 
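As a quick numerical sanity check (ours, outside the formal argument), the rank-one propagation is easy to verify under the assumptions of the proposition, namely linearized dynamics and $\\ell_t\\equiv\\ell_t({\\bm{u}}_t)$. In the NumPy sketch below, the dimensions, the random Jacobians, and the weight-decay-style $\\ell^t_{{\\bm{u}}\\vu}$ are our own illustrative choices:
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 5                 # dims of x_t, u_t, x_{t+1} (illustrative)
z_next = rng.standard_normal(p)   # induction hypothesis: V^{t+1}_{xx} = z z^T
V_next = np.outer(z_next, z_next)
fx = rng.standard_normal((p, n))  # linearized layer dynamics f^t_x
fu = rng.standard_normal((p, m))  # linearized layer dynamics f^t_u
luu = 0.1 * np.eye(m)             # l_t depends on u_t only (weight-decay-like)

q_x, q_u = fx.T @ z_next, fu.T @ z_next
Q_xx = fx.T @ V_next @ fx               # equals outer(q_x, q_x)
Q_ux = fu.T @ V_next @ fx               # equals outer(q_u, q_x)
Q_uu = luu + fu.T @ V_next @ fu         # equals luu + outer(q_u, q_u)

V_xx = Q_xx - Q_ux.T @ np.linalg.solve(Q_uu, Q_ux)
s = q_u @ np.linalg.solve(Q_uu, q_u)    # scalar, strictly below one
z_t = np.sqrt(1.0 - s) * q_x            # candidate vector from the proposition
assert np.allclose(V_xx, np.outer(z_t, z_t))   # V^t_{xx} stays rank-one
\\end{verbatim}
The closed form of ${\\bm{z}}_{\\bm{x}}^t$ used in the last two lines is exactly the one derived in the remainder of the proof.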
This is indeed the case.\n\\begin{align*} %\nV_{{\\bm{x}}\\vx}^t\n&= {Q^t_{{\\bm{x}} {\\bm{x}}}} - Q_{{\\bm{u}} {\\bm{x}}}^{t\\text{ }\\text{ } {\\mathsf{T}}} ({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1} {Q^t_{{\\bm{u}} {\\bm{x}}}} \\\\\n&= {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t - ({\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t)^{\\mathsf{T}} ({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1} ({\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t) \\\\\n&= {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t - ({\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t) ({\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t) { \\text{ ,} }\n\\end{align*}\nwhere the last equality follows by observing ${\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t$ is a scalar.\n\nSet ${\\bm{z}}_{\\bm{x}}^t = \\mathpalette\\DHLhksqrt{1-{\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t } \\text{ }{\\bm{q}}_{\\bm{x}}^t$ will give the desired factorization.\n\n\\end{proof}\n\n\n\n\\subsection{Derivation of \\eq{\\ref{eq:q-fc}}}\nFor notational simplicity, we drop the superscript $t$ and denote\n$V^{\\prime}_{{\\bm{x}}^{\\prime}} \\triangleq \\nabla_{{\\bm{x}}} V_{t+1}({\\bm{x}}_{t+1})$\nas the derivative of the value function at the next state.\n\\begin{align*}\nQ_{{\\bm{u}}}\n&=\\ell_{{\\bm{u}}}+f_{{\\bm{u}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^{\\prime}}^{\\prime}\n=\\ell_{{\\bm{u}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^{\\prime}}^{\\prime} { \\text{ ,} } \\\\\n Q_{{\\bm{u}} {\\bm{u}}} &=\\ell_{{\\bm{u}} {\\bm{u}}} + \\frac{\\partial}{\\partial {\\bm{u}}} \\{g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}} + g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} \\frac{\\partial}{\\partial {\\bm{u}}} \\{V_{{\\bm{x}}^\\prime}^{\\prime} \\}\n + g_{{\\bm{u}}}^{{\\mathsf{T}}}(\\frac{\\partial}{\\partial {\\bm{u}}} \\{ \\sigma_{{\\bm{h}}} \\})^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} + (\\frac{\\partial}{\\partial {\\bm{u}}} \\{ g_{{\\bm{u}}}\\})^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}} + g_{{\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} V^\\prime_{{\\bm{x}}^\\prime {\\bm{x}}^\\prime} \\sigma_{{\\bm{h}}} g_{{\\bm{u}}}\n + g_{{\\bm{u}}}^{{\\mathsf{T}}}(V_{{\\bm{x}}^\\prime}^{\\prime {\\mathsf{T}}} \\sigma_{{\\bm{h}} {\\bm{h}}} g_{{\\bm{u}}})\n + g_{{\\bm{u}} {\\bm{u}}}^{{\\mathsf{T}}}\\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime} \\\\\n &=\\ell_{{\\bm{u}} {\\bm{u}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} {\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{u}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{u}} {\\bm{u}}}\n\\end{align*}\nThe last equation follows by recalling\n$V_{{\\bm{h}}} \\triangleq \\sigma_{{\\bm{h}}}^{{\\mathsf{T}}}V_{{\\bm{x}}^\\prime}^{\\prime}$ and\n$V_{{\\bm{h}}\\vh} \\triangleq \\sigma_{{\\bm{h}}}^{{\\mathsf{T}}} V^\\prime_{{\\bm{x}}^\\prime {\\bm{x}}^\\prime} \\sigma_{{\\bm{h}}}$.\nFollow similar derivation, we have\n\\begin{equation}\n\\begin{split} \\label{eq:q-fc2}\n Q_{{\\bm{x}}} &=\\ell_{{\\bm{x}}}+g_{{\\bm{x}}}^{{\\mathsf{T}}}V_{{\\bm{h}}}\\\\\n Q_{{\\bm{x}} {\\bm{x}}} &=\\ell_{{\\bm{x}} {\\bm{x}}}+g_{{\\bm{x}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} 
{\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{x}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{x}} {\\bm{x}}} \\\\\n Q_{{\\bm{u}} {\\bm{x}}} &=\\ell_{{\\bm{u}} {\\bm{x}}}+g_{{\\bm{u}}}^{{\\mathsf{T}}} (V_{{\\bm{h}} {\\bm{h}}} + V_{{\\bm{x}}^{\\prime}}^{\\prime} \\cdot \\sigma_{{\\bm{h}} {\\bm{h}}}) g_{{\\bm{x}}}+V_{{\\bm{h}}} \\cdot g_{{\\bm{u}} {\\bm{x}}}\n\\end{split}\n\\end{equation}\n\n\\textbf{Remarks.}\nFor feedforward networks, the computational overhead in Eq.~\\ref{eq:q-fc} and \\ref{eq:q-fc2} can be mitigated by leveraging its affine structure.\nSince $g$ is bilinear in ${\\bm{x}}_t$ and ${\\bm{u}}_t$, the terms ${{g}^t_{{\\bm{x}} {\\bm{x}}}}$ and ${{g}^t_{{\\bm{u}} {\\bm{u}}}}$ vanish.\nThe tensor ${{g}^t_{{\\bm{u}} {\\bm{x}}}}$ admits a sparse structure, whose computation can be simplified to\n\\begin{equation} \\begin{split}\n [{{g}^t_{{\\bm{u}} {\\bm{x}}}}&]_{(i,j,k)} = 1 \\quad \\text{iff} \\quad j = (k-1)n_{t+1} + i { \\text{ ,} }\n\\\\ [{V^{t}_{{\\bm{h}}}} \\cdot {{g}^t_{{\\bm{u}} {\\bm{x}}}}&]_{((k-1)n_{t+1}:kn_{t+1},k)} = {V^{t}_{{\\bm{h}}}}\n{ \\text{ .} } \\label{eq:gux-compute}\n\\end{split}\n\\end{equation}\nFor the coordinate-wise nonlinear transform, $\\sigma^t_{{\\bm{h}}}$ and $\\sigma^t_{{\\bm{h}}\\vh}$ are diagonal matrix and tensor.\nIn most learning instances, stage-wise losses typically involved with weight decay alone; thus the terms ${{\\ell}^t_{{\\bm{x}}}}, {{\\ell}^t_{{\\bm{x}} {\\bm{x}}}}, {{\\ell}^t_{{\\bm{u}} {\\bm{x}}}}$ also vanish.\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Derivation of \\eq{\\ref{eq:qux-math}}} \\label{app:c2}\n\\eq{\\ref{eq:qux-math}} follows by an observation that the feedback policy $\\mathbf{K}_t {\\delta{\\bm{x}}}_t = -(Q^t_{{\\bm{u}} {\\bm{u}}})^{-1} Q^t_{{\\bm{u}} {\\bm{x}}} \\delta {\\bm{x}}_t$\nstands as the minimizer of the following objective\n\\begin{align} \\label{eq:Kdx-interpret}\n \\mathbf{K}_t {\\delta{\\bm{x}}}_t =\n \\argmin_{\\delta {\\bm{u}}_t({\\delta{\\bm{x}}}_t) \\in \\Gamma^\\prime(\\delta {\\bm{x}}_t )} \\norm{\\nabla_{{\\bm{u}}_t} Q({\\bm{x}}_t + \\delta {\\bm{x}}_t,{\\bm{u}}_t + \\delta {\\bm{u}}_t({\\delta{\\bm{x}}}_t)) - \\nabla_{{\\bm{u}}_t} Q({\\bm{x}}_t,{\\bm{u}}_t)}\n { \\text{ ,} }\n\\end{align}\nwhere $\\Gamma^\\prime(\\delta {\\bm{x}}_t )$ denotes all affine mappings from $\\delta {\\bm{x}}_t$ to $\\delta {\\bm{u}}_t$ and\n$\\norm{\\cdot}$ can be any proper norm in the Euclidean space.\n\\eq{\\ref{eq:Kdx-interpret}}\nfollows by the Taylor expansion of $Q({\\bm{x}}_t + \\delta {\\bm{x}}_t,{\\bm{u}}_t + \\delta {\\bm{u}}_t)$ to its first order,\n\\begin{align*} %\n \\nabla_{{\\bm{u}}_t} Q({\\bm{x}}_t + \\delta {\\bm{x}}_t,{\\bm{u}}_t + \\delta {\\bm{u}}_t)\n =\n \\nabla_{{\\bm{u}}_t} Q({\\bm{x}}_t,{\\bm{u}}_t) + Q^t_{{\\bm{u}} {\\bm{x}}} \\delta {\\bm{x}}_t + Q^t_{{\\bm{u}} {\\bm{u}}} \\delta {\\bm{u}}_t\n { \\text{ .} }\n\\end{align*}\nWhen $Q = J$, we will arrive at \\eq{\\ref{eq:qux-math}}.\nFrom Proposition \\ref{prop:bp2ddp}, we know the equality holds when all $Q^s_{{\\bm{x}}{\\bm{u}}}$ vanish for $s>t$.\nIn other words, the approximation in \\eq{\\ref{eq:qux-math}} becomes equality\nwhen all aferward layer-wise objectives $s>t$ are expanded only w.r.t. 
${\\bm{u}}_s$.\n\n\n\n\\subsection{Performance on Classification Dataset}\n\\textbf{Networks \\& Baselines Setup.}\nWe first validate the performance of training fully-connected (FCN) and convolution networks (CNN) using DDPNOpt on classification datasets.\n{\nFCN consists of $5$ fully-connected layers with the hidden dimension ranging from $10$ to $32$, depending on the size of the dataset.\nCNN consists of $4$ convolution layers (with $3{\\times}3$ kernel, $32$ channels), followed by $2$ fully-connected layers.\nWe use ReLU activation on all datasets except Tanh for WINE and DIGITS to better distinguish the differences between optimizers.\nThe batch size is set to $8$-$32$ for datasets trained with FCN, and $128$ for datasets trained with CNN.}\nAs DDPNOpt combines strengths from both standard training methods and OCP framework, we select baselines from both sides.\nThis includes first-order methods, \\emph{i.e.} SGD (with tuned momentum), RMSprop, Adam,\nand second-order method EKFAC \\citep{george2018fast}, which is a recent extension of the popular KFAC \\citep{martens2015optimizing}.\nFor OCP-inspired methods,\nwe compare DDPNOpt with vanilla DDP and E-MSA \\citep{li2017maximum},\nwhich is also a second-order method\nyet built upon the PMP framework.\nRegarding the curvature approximation used in DDPNOpt (${\\bm{M}}_t$ in Table \\ref{table:update-rule}),\nwe found that using adaptive diagonal and GN matrices respectively for FCNs and CNNs\ngive the best performance in practice.\nWe leave the complete experiment setup and additional results in Appendix \\ref{app:experiment}.\n\n\\input{subtex\/training_table2.tex}\n\n\\begin{figure}[t]\n\\vskip -0.1in\n\\centering\n\\begin{minipage}{0.42\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{fig\/complexity.pdf}\n\\vskip -0.15in\n\\caption{Runtime comparison on MNIST.}\\label{fig:runtime}\n\\end{minipage}\n\\begin{minipage}{0.57\\textwidth}\n\\centering\n \\captionsetup{type=table}\n \\captionsetup{justification=centering}\n \\caption{Computational complexity {in backward pass}. \\\\ ($B$: batch size, $X$: hidden state dim., $L$: \\# of layers)}\n \\vskip -0.1in\n \\begin{small}\n \\begin{tabular}{c|cc|c}\n \\toprule\n Method & Adam & Vanilla DDP & \\textbf{DDPNOpt} \\\\\n \\midrule\n {\\small Memory} & {\\small $\\mathcal{O}(X^2L)$} & {\\small $\\mathcal{O}(BX^3L)$} & {\\small$\\mathcal{O}(X^2L+BX)$} \\\\\n {\\small Speed} & {\\small $\\mathcal{O}(BX^2L)$} & {\\small $\\mathcal{O}(B^3X^3L)$} & {\\small$\\mathcal{O}(BX^2L)$} \\\\\n \\bottomrule\n \\end{tabular} \\label{table:complexity}\n \\end{small}\n\\end{minipage}\n\\vskip -0.2in\n\\end{figure}\n\n\\textbf{Training Results.}\nTable \\ref{table:training} presents the results over $10$ random trials.\nIt is clear\nthat DDPNOpt outperforms two OCP baselines on \\emph{all datasets and network types}.\nIn practice, both baselines suffer from unstable training and require careful tuning on the hyper-parameters.\nIn fact, we are not able to obtain results for vanilla DDP with any reasonable amount of computational resources when the problem size goes beyond FC networks.\nThis is in contrast to DDPNOpt which adapts amortized curvature estimation from widely-used methods;\nthus exhibits much stabler training dynamics with superior convergence.\nIn Table~\\ref{table:complexity}, we provide the {analytic} runtime and memory complexity among different methods.\nWhile vanilla DDP grows cubic w.r.t. 
$BX$,\nDDPNOpt reduces the computation by orders of magnitude with the efficient approximation presented in Sec.~\\ref{sec:ddp-dnn}.\nAs a result,\n{when measuring the actual computational performance with GPU parallelism,}\nDDPNOpt runs nearly as fast as standard methods and outperforms E-MSA by a large margin.\nThe additional memory complexity, when comparing DDP-inspired methods with Back-propagation methods,\ncomes from the layer-wise feedback policies.\nHowever, DDPNOpt is much more memory-efficient than vanilla DDP by exploiting the factorization in Proposition~\\ref{prop:gn-ddp}.\n\n\n\\textbf{Ablation Analysis.}\nOn the other hand, the performance gain between DDPNOpt and standard methods appears comparatively small.\nWe conjecture this is due to the inevitable use of similar curvature adaptation,\nas the local geometry of the landscape directly affects the convergence behavior.\nTo identify scenarios where DDPNOpt best shows its effectiveness,\nwe conduct an ablation analysis on the feedback mechanism.\nThis is done by recalling Proposition~\\ref{prop:bp2ddp}: when ${Q^t_{{\\bm{u}} {\\bm{x}}}}$ vanishes,\nDDPNOpt degenerates to the method associated with each precondition matrix.\nFor instance, DDPNOpt with identity (\\emph{resp.} adaptive diagonal and GN) precondition ${\\bm{M}}_t$ will generate the same updates as SGD (\\emph{resp.} RMSprop and EKFAC) when all ${Q^t_{{\\bm{u}} {\\bm{x}}}}$ are zeroed out.\nIn other words, these DDPNOpt variants can be viewed as the \\emph{DDP-extension} to existing baselines.\n\n\nIn Fig.~\\ref{fig:exp-grid} we report the performance difference between each baseline and its associated DDPNOpt variant.\nEach grid corresponds to a distinct training configuration that is averaged over $10$ random trials,\nand we keep all hyper-parameters (\\emph{e.g.} learning rate and weight decay) the same between baselines and their DDPNOpt variants.\nThus, the performance gap only comes from the feedback policies,\nor equivalently the update directions in Table~\\ref{table:update-rule}.\nBlue (\\emph{resp.} red) indicates an improvement (\\emph{resp.} degradation) when the feedback policies are present.\nClearly, the improvement over baselines\nremains consistent across most hyper-parameter setups, and\nthe performance gap tends to become more pronounced as the learning rate increases.\nThis aligns with the previous study on numerical stability \\citep{liao1992advantages},\n{\nwhich suggests the feedback can stabilize the optimization when \\emph{e.g.} larger control updates are taken.\nSince a larger control update corresponds to a larger step size in DNN training, one should expect DDPNOpt to show its robustness as the learning rate increases.\n}\nAs shown in Fig.~\\ref{fig:exp-comp},\nsuch a stabilization can also lead to smaller variance and faster convergence.\nThis sheds light on\nthe benefit gained by bridging two seemingly disconnected methodologies, DNN training and trajectory optimization.\n\n\n\\begin{figure}[t]\n\\vskip -0.34in\n\\subfloat{\\includegraphics[width=0.78\\columnwidth]{fig\/grid-exp3.pdf} \\label{fig:exp-grid} }\n\\subfloat{\\includegraphics[width=0.21\\columnwidth]{fig\/comp2-fix.pdf} \\label{fig:exp-comp}}\n\\vskip -0.1in\n\\caption{ %\n(a) Performance difference between DDPNOpt and baselines on DIGITS across the hyper-parameter grid.\nBlue (\\emph{resp.} red) indicates an improvement (\\emph{resp.} degradation) over baselines.\nWe observe similar behaviors on other datasets.\n(b) Examples of the actual training 
dynamics.\n}\n\\label{fig:grid-exp}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\vskip -0.1in\n\\centering\n\\begin{minipage}{0.29\\textwidth}\n \\centering\n \\includegraphics[width=0.7\\linewidth]{fig\/K-vis.png}\n \\vskip -0.1in\n \\caption{Visualization of the feedback policies on MNIST.}\\label{fig:K-vis}\n\\end{minipage}\n\\begin{minipage}{0.05\\textwidth}\n\\end{minipage}\n\\begin{minipage}{0.65\\textwidth}\n \\centering\n \\subfloat{\n \\includegraphics[width=0.95\\linewidth]{fig\/vg-update2.pdf}\n \\label{fig:vanish-grad-a}%\n }\n \\subfloat{%\n \\textcolor{white}{\\rule{1pt}{1pt}}\n \\label{fig:vanish-grad-b}%\n }%\n \\vskip -0.1in\n \\caption{Training a $9$-layer sigmoid-activated {FCN} on \\\\ DIGITS using MMC loss.\n DDPNOpt2nd denotes when the layer dynamics is fully expanded to the second order.\n }\n \\label{fig:vanish-grad}\n\\end{minipage}\n\\vskip -0.2in\n\\end{figure}\n\n\n\\subsection{Discussion on Feedback Policies}\n\n\\textbf{Visualization of Feedback Policies.}\nTo understand the effect of feedback policies more perceptually,\n{\nin Fig.~\\ref{fig:K-vis} we visualize the feedback policy when training CNNs.\nThis is done by first conducting singular-value decomposition on the feedback matrices ${{\\bm{K}}_t}$,\nthen projecting the leading right-singular vector back to image space\n(see Alg.~\\ref{alg:K-vis} and Fig.~\\ref{fig:K-vis-app} in Appendix for the pseudo-code).\nThese feature maps, denoted $\\delta x_{\\max}$ in Fig.~\\ref{fig:K-vis}, correspond to the dominating differential image that the policy shall respond with during weight update.\nFig.~\\ref{fig:K-vis} shows that the feedback policies indeed capture\nnon-trivial visual features related to the pixel-wise difference between spatially similar classes, \\emph{e.g.} $(8,3)$ or $(7,1)$.\nThese differential maps differ from adversarial perturbation \\citep{goodfellow2014explaining}\nas the former directly links the parameter update to the change in activation;\nthus being more interpretable.\n\n\n\\textbf{Vanishing Gradient.}\nLastly, we present an interesting finding on how the feedback policies help mitigate vanishing gradient (VG),\na notorious effect when DNNs become impossible to train as gradients vanish along Back-propagation.\nFig.~\\ref{fig:vanish-grad-a} reports results on training a sigmoid-activated DNN on DIGITS.\nWe select SGD-VGR, which imposes a specific regularization to mitigate VG \\citep{pascanu2013difficulty}, and EKFAC as our baselines.\nWhile both baselines suffer to make any progress,\nDDPNOpt continues to generate non-trivial updates as the state-dependent feedback, \\emph{i.e.} $\\mathbf{K}_t {\\delta{\\bm{x}}}_t$, remains active.\nThe effect becomes significant when dynamics is fully expanded to the second order.\nAs shown in Fig.~\\ref{fig:vanish-grad-b}, the update norm from DDPNOpt is typically $5$-$10$ times larger.\nWe note that in this experiment, we replace the cross-entropy (CE) with Max-Mahalanobis center (MMC) loss,\na new classification objective that improves robustness on standard vision datasets \\citep{pang2019rethinking}.\nMMC casts classification to distributional regression, providing denser Hessian and making problems similar to original trajectory optimization.\nNone of the algorithms escape from VG using CE.\nWe highlight that while VG is typically mitigated on the \\textit{architecture} basis, by having either unbounded activation function or residual blocks,\nDDPNOpt provides an alternative from the \\textit{algorithmic} 
perspective.\n\n\n\\subsection{Experiment Details} \\label{app:experiment}\n\\subsubsection{Setup} \\label{app:exp-set}\n\n\\begin{minipage}[t]{0.56\\textwidth}\n\\textbf{Classification Datasets.}\nAll networks in the classification experiments are composed of $5$-$6$ layers.\nFor the intermediate layers, we use ReLU activation on all datasets, except Tanh on WINE and DIGITS.\nWe use identity mapping at the last prediction layer on all datasets except WINE,\nwhere we use sigmoid instead to help distinguish the performance among optimizers.\nFor feedforward networks, the dimension of the hidden state is set to $10$-$32$.\nOn the other hand, we use standard $3\\text{ }\\times\\text{ }3$ convolution\n\\end{minipage}\n\\begin{minipage}[t]{0.43\\textwidth}\n \\captionof{table}{Hyper-parameter search}\n \\vskip -0.5in\n \\begin{center}\n \\begin{small}\n \\begin{tabular}{c|c}\n \\toprule\n Methods & Learning Rate \\\\\n \\midrule\n SGD & $(7\\mathrm{e}\\text{-}2,5\\mathrm{e}\\text{-}1)$ \\\\\n Adam \\& RMSprop & $(7\\mathrm{e}\\text{-}4,1\\mathrm{e}\\text{-}2)$ \\\\\n EKFAC & $(1\\mathrm{e}\\text{-}2,3\\mathrm{e}\\text{-}1)$ \\\\\n \\bottomrule\n \\end{tabular} \\label{table:hyper}\n \\end{small}\n \\end{center}\n \\vskip 0.5in\n\\end{minipage}\nkernels for all CNNs.\nThe batch size is set to $8$-$32$ for datasets trained with feedforward networks, and $128$ for datasets trained with convolution networks.\nFor each baseline we select its own hyper-parameters from an appropriate search space, which we detail in Table~\\ref{table:hyper}.\nWe use the implementation in \\url{https:\/\/github.com\/Thrandis\/EKFAC-pytorch} for EKFAC\nand implement our own E-MSA in PyTorch since the official code released by \\citet{li2017maximum} does not support GPU execution.\nWe impose the GN factorization presented in Proposition \\ref{prop:gn-ddp} for all CNN training.\nRegarding the machine information,\nwe conduct our experiments on a GTX 1080 TI, an RTX TITAN, and four Tesla V100 SXM2 16GB GPUs.\n\n\n\\textbf{Procedure to Generate Fig.~\\ref{fig:K-vis}.}\nFirst, we perform standard DDPNOpt steps to compute layer-wise policies. Next, we conduct singular-value decomposition on the feedback matrix ${{\\bm{K}}_t}$.\nIn this way, the leading right-singular vector corresponds to the dominating differential direction that the feedback policy shall respond with.\nSince this vector has the same dimension as the hidden state, which is most likely not the same as the image space, we project the vector back to image space using the techniques proposed in \\citep{zeiler2014visualizing}. 
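As a rough illustration of the singular-value-decomposition step (a sketch of ours with hypothetical shapes; the deconvolution-based projection of \\citep{zeiler2014visualizing} is omitted), the dominant direction can be extracted as follows:
\\begin{verbatim}
import numpy as np

def leading_feedback_direction(K_t):
    # K_t maps a hidden-state perturbation (dim n_t) to a weight update
    # (dim m_t); the rows of Vt are the right-singular vectors of K_t.
    _, _, Vt = np.linalg.svd(K_t, full_matrices=False)
    return Vt[0]   # unit vector living in the hidden-state space

K_t = np.random.randn(256, 64)            # stand-in for a layer's feedback matrix
v_max = leading_feedback_direction(K_t)   # direction later projected to pixels
\\end{verbatim}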
The pseudo code and computation diagram are included in Alg.~\\ref{alg:K-vis} and Fig.~\\ref{fig:K-vis-app}.\n\n\n\\vspace{-10pt}\n\n\\begin{minipage}[t]{0.52\\textwidth}\n\\vskip -0.in\n\\begin{algorithm}[H]\n\\small\n \\caption{\\small Visualizing the Feedback Policies}\n \\label{alg:K-vis}\n \\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:}\n Image ${{\\bm{x}}}$ (we drop the time subscript for notational simplicity, {\\emph{i.e.}} ${\\bm{x}} \\equiv {\\bm{x}}_0$)\n \\STATE Perform backward pass of DDPNOpt. Compute $({{\\bm{k}}_t},{{\\bm{K}}_t})$ backward\n \\STATE Perform SVD on ${{\\bm{K}}_t}$\n \\STATE Extract the right-singular vector corresponding to the largest singular value, denoted $v_{\\max} \\in \\mathbb{R}^{n_t} $\n \\STATE Project $v_{\\max}$ back to the image space using deconvolution procedures introduced in \\citep{zeiler2014visualizing}\n \\end{algorithmic}\n\\end{algorithm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.42\\textwidth}\n\\vskip -0.1in\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=\\linewidth]{fig\/K-viz-procedure.png}\n\\caption{Pictorial illustration for Alg. \\ref{alg:K-vis}.}\n\\label{fig:K-vis-app}\n\\end{figure}\n\\end{minipage}\n\n\n\n\\subsubsection{Additional Experiment and Discussion} \\label{app:exp-more}\n\n\n\n\n\\textbf{Batch trajectory optimization on synthetic dataset.}\nOne of the difference between DNN training and trajectory optimization is that for the former,\nwe aim to find an ultimate control law that can drive every data point in the training set, or sampled batch, to its designed target.\nDespite seemly trivial from the ML perspective, this is a distinct formulation to OCP since the optimal policy typically varies at different initial state.\nAs such, we validate performance of DDPNOpt in batch trajectories optimization on a synthetic dataset,\nwhere we sample data from $k\\in\\{5,8,12,15\\}$ Gaussian clusters in $\\mathbb{R}^{30}$.\nSince conceptually\na DNN classifier can be thought of as a dynamical system guiding\ntrajectories of samples toward the target regions belong to their classes,\nwe hypothesize that\nfor the DDPNOpt to show its effectiveness on batch training,\nthe feedback policy must act as an ensemble policy that combines the locally optimal policy of each class.\nFig.~\\ref{fig:feedback-spectrum} shows the spectrum distribution, sorted in a descending order, of the feedback policy in the prediction layer.\nThe result shows that the number of nontrivial eigenvalues matches exactly the number of classes in each setup (indicated by the vertical dashed line).\nAs the distribution in the prediction layer\nconcentrates to $k$ bulks through training,\nthe eigenvalues also increase, providing stronger feedback to the weight update.\n\n\n\\begin{figure}[H]\n\\vskip -0.1in\n\\begin{center}\n\\centerline{\\includegraphics[width=\\columnwidth]{fig\/feedback-policy-spectrum.pdf}}\n\\vskip -0.1in\n\\caption{\nSpectrum distribution on synthetic dataset.\n}\n\\label{fig:feedback-spectrum}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\n\n\n\n\\textbf{Ablation analysis on Adam.}\nFig.~\\ref{fig:grid-exp-adam} reports the ablation analysis on Adam using the same setup as in Fig. 
\\ref{fig:exp-grid}, \\emph{i.e.}\nwe keep all hyper-parameters the same for each experiment so that the performance difference only comes from the existence of feedback policies.\nIt is clear that the improvements from the feedback policies remain consistent for Adam optimizer.\n\n\n\\begin{figure}[H]\n\\vskip -0.1in\n\\begin{center}\n\\centerline{\\includegraphics[width=0.6\\columnwidth]{fig\/grid-exp-adam.pdf}}\n\\vskip -0.1in\n\\caption{\nAdditional experiment for Fig.~\\ref{fig:exp-grid} where we compare the performance difference between DDPNOpt and Adam.\nAgain, all grids report values averaged over $10$ random seeds.\n}\n\\label{fig:grid-exp-adam}\n\\end{center}\n\\vskip -0.2in\n\\end{figure}\n\n\n\n\n\n\\textbf{Ablation analysis on DIGITS compared with best-tuned baselines.}\nFig.~\\ref{fig:grid-exp} reports the performance difference between baselines and DDPNOpt under different hyperparameter setupts.\nHere, we report the numerical values when each baseline uses its best-tuned learning rate (which is the values we report in Table 3) and compare with its DDPNOpt counterpart using the same learning rate. As shown in Tables \\ref{table:6}, \\ref{table:7}, and \\ref{table:8}, for most cases extending the baseline to accept the Bellman framework improves the performance.\n\n\\providecommand{\\e}[1]{\\ensuremath{\\times 10^{#1}}}\n\n\\bgroup\n\\setlength\\tabcolsep{0.07in}\n\\begin{table}[H]\n\\caption{ Learning rate $=0.1$}\n\\label{table:6}\n\\vspace{-3pt}\n\\begin{center}\n\\begin{small}\n\\vskip -0.1in\n\\begin{tabular}{c?cc}\n\\toprule\n& {SGD} & {DDPNOpt with {${\\bm{M}}_t = {\\bm{I}}_t$} } \\\\[2pt]\n\\midrule\nTrain Loss &\n$0.035$ & $\\textbf{0.032}$ \\\\\nAccuracy (\\%) &\n$95.36$ & $\\textbf{95.52}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{small}\n\\end{center}\n\\end{table}\n\n\\vspace{-20pt}\n\n\\begin{table}[H]\n\\caption{ Learning rate $=0.001$}\n\\label{table:7}\n\\vspace{-3pt}\n\\begin{center}\n\\begin{small}\n\\vskip -0.1in\n\\begin{tabular}{c?cc}\n\\toprule\n& {RMSprop} & { DDPNOpt with {${\\bm{M}}_t = \\diag(\\mathpalette\\DHLhksqrt{\\mathbb{E}[{Q^t_{{\\bm{u}}}} \\odot {Q^t_{{\\bm{u}}}}]} +\\epsilon)$} } \\\\[2pt]\n\\midrule\nTrain Loss &\n$0.058$ & $\\textbf{0.052}$ \\\\\nAccuracy (\\%) &\n$94.33$ & $\\textbf{94.63}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{small}\n\\end{center}\n\\end{table}\n\n\\vspace{-20pt}\n\n\\begin{table}[H]\n\\caption{ Learning rate $=0.03$}\n\\label{table:8}\n\\vspace{-3pt}\n\\begin{center}\n\\begin{small}\n\\vskip -0.1in\n\\begin{tabular}{c?cc}\n\\toprule\n& {EKFAC} & {DDPNOpt with {${\\bm{M}}_t = \\mathbb{E}{[{\\bm{x}}_t{\\bm{x}}_t^{\\mathsf{T}}]} \\otimes \\mathbb{E}{[V_{{\\bm{h}}}^tV_{{\\bm{h}}}^{t\\text{ }{\\mathsf{T}}}]}$} } \\\\[2pt]\n\\midrule\nTrain Loss &\n$0.074$ & $\\textbf{0.067}$ \\\\\nAccuracy (\\%) &\n$\\textbf{95.24}$ & ${95.19}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{small}\n\\end{center}\n\\end{table}\n\n\\egroup\n\n\n\\input{subtex\/figure_4_absolute_value}\n\n\\textbf{More experiments on vanishing gradient.}\nRecall that Fig.~\\ref{fig:vanish-grad} reports the training performance using MMC loss on Sigmoid-activated networks.\nIn Fig.~\\ref{fig:vg-more1-a}, we report the result when training the same networks but using CE loss (notice the numerical differences in the $y$ axis for different objectives).\nNone of the presented optimizers were able to escape from vanishing gradient, as evidenced by the vanishing update magnitude.\nOn the other hands, changing the networks to ReLU-activated networks eliminates the 
vanishing gradient,\nas shown in Fig.~\\ref{fig:vg-more1-b}.\n\nFig.~\\ref{fig:vg-more1-c} reports the performance with other first-order adaptive optimizers including Adam and RMSprop.\nIn general, adaptive first-order optimizers are more likely to escape from vanishing gradient since the diagonal precondition matrix (recall ${\\bm{M}}_t = \\mathbb{E}[{J}_{{\\bm{u}}_t} \\odot {J}_{{\\bm{u}}_t}]$ in Table~\\ref{table:update-rule}) rescales the vanishing update to a fixed norm. However, as shown in Fig.~\\ref{fig:vg-more1-c}, DDPNOpt* (the variant of DDPNOpt that\nutilizes similar adaptive first-order precondition matrix) converges faster compared with these adaptive baselines.\n\nFig.~\\ref{fig:vg-more2} illustrates the selecting process on the learing-rate tuning when we report Fig.~\\ref{fig:vanish-grad}.\nThe training performance for both SGD-VGR and EKFAC remains unchanged when tuning the learning rate. In practice, we observe unstable training with SGD-VGR when the learing rate goes too large.\nOn the other hands, DDPNOpt and DDPNOpt2nd are able to escape from VG with all tested learning rates.\nHence, Fig.~\\ref{fig:vanish-grad} combines Fig.~\\ref{fig:vg-more2-a} (SGD-VGR-lr$0.1$) and Fig.~\\ref{fig:vg-more2-c} (EKFAC-lr$0.03$, DDPNOpt-lr$0.03$, DDPNOpt2nd-lr$0.03$) for best visualization.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}[H]%\n \\centering\n \\vskip -0.1in\n \\subfloat{\n \\textcolor{white}{\\rule{1pt}{1pt}}\n \\label{fig:vg-more1-a}%\n }\n \\subfloat{\n \\includegraphics[width=0.85\\linewidth]{fig\/vg-1115.pdf}\n \\label{fig:vg-more1-b}%\n }\n \\vskip -0.1in\n \\caption{\n Vanishing gradient experiment for different losses and nonlinear activation functions.\n }\n \\label{fig:weight-update}%\n \\vskip -0.1in\n\\end{figure}\n\n\n\\begin{figure}[H]%\n \\centering\n \\subfloat{\n \\includegraphics[width=0.8\\linewidth]{fig\/vg-1115-2.pdf}\n }\n \\vskip -0.1in\n \\caption{\n Vanishing gradient experiment for other optimizers.\n The legend ``DDPNOpt*''\n denotes DDPNOpt with adaptive diagonal matrix.\n }\n \\label{fig:vg-more1-c}%\n \\vskip -0.1in\n\\end{figure}\n\n\\begin{figure}[H]%\n \\centering\n \\subfloat{\n \\includegraphics[width=0.8\\linewidth]{fig\/vg-lrs-1115.pdf}\n \\label{fig:vg-more2-a}%\n }\n \\subfloat{\n \\textcolor{white}{\\rule{1pt}{1pt}}\n \\label{fig:vg-more2-b}%\n }\n \\subfloat{\n \\textcolor{white}{\\rule{1pt}{1pt}}\n \\label{fig:vg-more2-c}%\n }\n \\subfloat{\n \\textcolor{white}{\\rule{1pt}{1pt}}\n \\label{fig:vg-more2-d}%\n }\n \\vskip -0.1in\n \\caption{\n Vanishing gradient experiment for different learning rate setups.\n }\n \\label{fig:vg-more2}%\n \\vskip -0.1in\n\\end{figure}\n\n\n\n\\subsection{Training DNNs as Trajectory Optimization}\n\nRecall that DNNs can be interpreted as dynamical systems where each layer is viewed as a distinct time step.\nConsider \\emph{e.g.} the propagation rule in feedforward layers,\n\\begin{align} \\label{eq:dnn-dyn}\n{\\bm{x}}_{t+1} = \\sigma_t ({\\bm{h}}_t) { \\text{ ,} } \\quad\n{\\bm{h}}_t = g_t({\\bm{x}}_{t},{\\bm{u}}_t) = {\\bm{W}}_t {\\bm{x}}_t + {\\bm{b}}_t { \\text{ .} }\n\\end{align}\n${\\bm{x}}_t \\in \\mathbb{R}^{n_t}$ and ${\\bm{x}}_{t+1} \\in \\mathbb{R}^{n_{t+1}}$ represent the activation vector at layer $t$ and $t+1$, with\n${\\bm{h}}_t \\in \\mathbb{R}^{n_{t+1}}$ being the pre-activation vector.\n$\\sigma_t$ and $g_t$ respectively denote\nthe nonlinear activation function and\nthe affine transform parametrized by the vectorized weight ${\\bm{u}}_t \\triangleq [\\mathrm{vec}({\\bm{W}}_t), 
{\\bm{b}}_t]^{\\mathsf{T}}$.\n\\eq{\\ref{eq:dnn-dyn}} can be seen as a dynamical system (by setting $f_t \\equiv \\sigma_t \\circ g_t$ in OCP)\npropagating the activation vector ${\\bm{x}}_t$ using ${\\bm{u}}_t$.\n\n\nNext, notice that the gradient descent (GD) update, denoted\n$\\delta {\\bar{{\\bm{u}}}}^* \\equiv -\\eta \\nabla_{{\\bar{{\\bm{u}}}}} J$ with $\\eta$ being the learning rate,\ncan be break down into each layer, \\textit{i.e.}\n$\\delta {\\bar{{\\bm{u}}}}^* \\triangleq \\{\\delta {\\bm{u}}_t^*\\}_{t=0}^{T-1}$,\nand computed backward by\n\\begin{align}\n{\\delta{\\bm{u}}}_t^*\n&= \\argmin_{{\\delta{\\bm{u}}}_t \\in \\mathbb{R}^{m_t}}\\{\n J_t + \\nabla_{{\\bm{u}}_t} J_t^{\\mathsf{T}} {\\delta{\\bm{u}}}_t + \\textstyle \\frac{1}{2} {\\delta{\\bm{u}}}_t^{\\mathsf{T}} (\\textstyle \\frac{1}{\\eta}{\\bm{I}}_t) {\\delta{\\bm{u}}}_t\n\\} { \\text{ ,} } \\label{eq:du-star} \\\\\n\\text{where } J_t({\\bm{x}}_t,{\\bm{u}}_t) &\\triangleq \\ell_t({\\bm{u}}_t) + J_{t+1}(f_t({\\bm{x}}_t,{\\bm{u}}_t),{\\bm{u}}_{t+1})\n{ \\text{ ,} } \\quad J_T({\\bm{x}}_T)\\triangleq\\phi({\\bm{x}}_T)\n\\label{eq:Jt}\n\\end{align}\nis the per-layer objective\\footnote{\n Hereafter we drop ${\\bm{x}}_t$ in all $\\ell_t(\\cdot)$\n as the layer-wise loss typically involves weight regularization alone.\n} at layer $t$.\nIt can be readily verified that ${\\bm{p}}_t \\equiv \\nabla_{{\\bm{x}}_t}J_t$ gives the exact Back-propagation dynamics.\n\\eq{\\ref{eq:Jt}} suggests that\nGD minimizes the quadratic expansion of $J_t$ with the Hessian $\\nabla^2_{{\\bm{u}}_t}J_t$ replaced by $\\frac{1}{\\eta}{\\bm{I}}_t$.\nSimilarly, adaptive first-order methods, such as RMSprop and Adam,\napproximate the Hessian\nwith the diagonal of the covariance matrix.\nSecond-order methods, such as KFAC and EKFAC \\citep{martens2015optimizing,george2018fast},\ncompute full matrices using Gauss-Newton (GN) approximation:\n\\begin{align} \\label{eq:kfac}\n \\nabla^2_{{\\bm{u}}}J_t \\approx\n \\mathbb{E}{[J_{{\\bm{u}}_t} J_{{\\bm{u}}_t}^{\\mathsf{T}}]}\n = \\mathbb{E}{[({\\bm{x}}_t \\otimes J_{{\\bm{h}}_t}) ({\\bm{x}}_t \\otimes J_{{\\bm{h}}_t})^{\\mathsf{T}}]}\n \\approx \\mathbb{E}{[({\\bm{x}}_t {\\bm{x}}_t^{\\mathsf{T}})]} \\otimes \\mathbb{E}{[( J_{{\\bm{h}}_t} J_{{\\bm{h}}_t}^{\\mathsf{T}})]}\n { \\text{ .} }\n\\end{align}\n\n\n\nWe now draw a novel connection between the training procedure of DNNs and DDP.\nLet us first summarize the Back-propagation (BP) {with gradient descent} in Alg. \\ref{alg:bp} and compare it with DDP (Alg. 
\\ref{alg:ddp}).\nAt each training iteration, we treat the current weight as the {control} ${\\bar{{\\bm{u}}}}$ that simulates the activation sequence ${\\bar{{\\bm{x}}}}$.\nStarting from this {nominal} trajectory $({\\bar{{\\bm{x}}}},{\\bar{{\\bm{u}}}})$,\nboth algorithms\nrecursively define some layer-wise objectives ($J_t$ in \\eq{\\ref{eq:Jt}} vs $V_t$ in \\eq{\\ref{eq:bellman}}),\ncompute the weight\/control update from the quadratic expansions (\\eq{\\ref{eq:du-star}} vs \\eq{\\ref{eq:du-star-ddp}}),\nand then carry certain information ($\\nabla_{{\\bm{x}}_t}J_t$ vs $(V_{\\bm{x}}^t,V_{{\\bm{x}}\\vx}^t)$)\nbackward to the preceding layer.\nThe computation graph between the two approaches is summarized in Fig.~\\ref{fig:1}.\nIn the following proposition,\nwe make this connection formally and provide conditions when the two algorithms become equivalent.\n\n\\begin{minipage}[t]{0.49\\textwidth}\n\\vskip -0.1in\n\\begin{proposition} \\label{prop:bp2ddp}\nAssume $Q_{{\\bm{u}} {\\bm{x}}}^t=\\mathbf{0}$ at all stages,\nthen the backward dynamics of the value derivative can be described by the Back-propagation,\n\\begin{align}\n\\begin{split}\n\\forall t { \\text{ ,} }\nV_{{\\bm{x}}}^t = \\nabla_{{\\bm{x}}_t} J { \\text{ ,} } \\text{ } Q_{{\\bm{u}}}^t = \\nabla_{{\\bm{u}}_t} J\n{ \\text{ ,} } \\text{ }\nQ_{{\\bm{u}}\\vu}^t = \\nabla^2_{{\\bm{u}}_t}J { \\text{ .} }\n\\end{split}\n\\end{align}\nIn this case, the DDP policy is equivalent to the stage-wise Newton,\nin which the gradient is preconditioned by the block-wise inverse Hessian at each layer:\n\\begin{align}\n{{\\bm{k}}_t} + {{\\bm{K}}_t} {\\delta{\\bm{x}}}_t\n= - (\\nabla_{{\\bm{u}}_t}^2 J)^{{-1}}\\nabla_{{\\bm{u}}_t}J { \\text{ .} } \\label{eq:newton}\n\\end{align}\nIf further we have ${Q_{{\\bm{u}} {\\bm{u}}}^{t}} \\approx \\frac{1}{\\eta}{\\mathbf{I}}$,\nthen DDP degenerates to Back-propagation with gradient descent.\n\\end{proposition}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.48\\textwidth}\n \\vskip -0.27in\n \\begin{figure}[H]\n \\vspace{-10pt}\n \\includegraphics[width=\\linewidth]{fig\/compute-graph4.png}\n \\caption{\n DDP backward propagates the value derivatives $(V_{\\bm{x}},V_{{\\bm{x}}\\vx})$ instead of $\\nabla_{{\\bm{x}}_t}J$\n and updates weight using layer-wise feedback policy, $\\delta {\\bm{u}}^{*}_t(\\delta {\\bm{x}}_t)$, with additional forward propagation.}\n \\label{fig:1}\n \\end{figure}\n\\end{minipage}\n\nProof is left in Appendix \\ref{app:c1}.\nProposition \\ref{prop:bp2ddp} states that the backward pass in DDP collapses to BP when $Q_{{\\bm{u}} {\\bm{x}}}$ vanishes at all stages.\nIn other words, existing training methods can be seen as special cases of DDP\nwhen the mixed derivatives (\\emph{i.e.} $\\nabla_{{\\bm{x}}_t{\\bm{u}}_t}$) of the layer-wise objective are discarded.\n\n\\subsection{Efficient Approximation and Factorization} \\label{sec:eff-approx}\n\nMotivated by Proposition \\ref{prop:bp2ddp},\nwe now present a new class of optimizer, DDP Neural Optimizer (DDPNOpt), on training feedforward and convolution networks.\nDDPNOpt follows the same procedure in vanilla DDP (Alg.~\\ref{alg:ddp})\nyet adapts several key traits arising from DNN training,\nwhich we highlight below.\n\n\\textbf{Evaluate derivatives of $Q_t$ with layer dynamics.}\nThe primary computation in DDPNOpt comes from constructing the derivatives of $Q_t$ at each layer.\nWhen the dynamics is represented by the layer propagation (recall \\eq{\\ref{eq:dnn-dyn}} where we set $f_t \\equiv \\sigma_t \\circ g_t$), we can rewrite 
\\eq{\\ref{eq:Qt}} as:\n\\begin{equation}\n\\begin{split}\n {Q^t_{{\\bm{x}}}} = {{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{x}}}} {V^{t}_{{\\bm{h}}}} { \\text{ ,} } \\quad\n {Q^t_{{\\bm{u}}}} = {{\\ell}^t_{{\\bm{u}}}} + {{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{u}}}} {V^{t}_{{\\bm{h}}}} { \\text{ ,} } \\quad\n {Q^t_{{\\bm{x}} {\\bm{x}}}} = {{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{x}}}} {V^{t}_{{\\bm{h}}\\vh}} g^t_{{\\bm{x}}} { \\text{ ,} } \\quad\n {Q^t_{{\\bm{u}} {\\bm{x}}}} = {{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{u}}}} {V^{t}_{{\\bm{h}}\\vh}} g^t_{{\\bm{x}}} { \\text{ ,} }\n \\label{eq:q-fc}\n\\end{split}\n\\end{equation}\nwhere\n${V^{t}_{{\\bm{h}}}} \\triangleq {\\sigma^{t\\text{ } {\\mathsf{T}}}_{{\\bm{h}}}} {V^{t+1}_{{\\bm{x}}}}$ and\n${V^{t}_{{\\bm{h}}\\vh}} \\triangleq {\\sigma^{t\\text{ } {\\mathsf{T}}}_{{\\bm{h}}}} V^{t+1}_{{\\bm{x}} {\\bm{x}}} \\sigma^t_{{\\bm{h}}}$\nabsorb the computation of the non-parametrized activation function $\\sigma$.\n{\n Note that \\eq{\\ref{eq:q-fc}} expands the dynamics only up to first order,\n \\emph{i.e.} we omitt the tensor products which involves second-order expansions on dynamics,\n as the stability obtained by keeping only the linearized dynamics is thoroughly discussed and widely adapted in practical DDP usages \\citep{todorov2005generalized}.\n}\nThe matrix-vector product with the Jacobian of the affine transform (\\emph{i.e.} ${{g}^{t}_{{\\bm{u}}}},{{g}^{t}_{{\\bm{x}}}}$) can be evaluated efficiently for both feedforward (FF) and convolution (Conv) layers:\n\\begin{alignat}{4}\n{\\bm{h}}_t &\\FCeq {\\bm{W}}_t{\\bm{x}}_t+{\\bm{b}}_t\n&&\\Rightarrow\n{g^t_{{\\bm{x}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{W}}_t^{\\mathsf{T}} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\quad\n{g^t_{{\\bm{u}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{x}}_t \\otimes V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\\\\n{\\bm{h}}_t &\\CONVeq \\omega_t * {\\bm{x}}_t\n&&\\Rightarrow\n{g^t_{{\\bm{x}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= \\omega_t^{\\mathsf{T}} {\\text{ } \\hat{*} \\text{ }} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} } \\quad\n{g^t_{{\\bm{u}}}}^{\\mathsf{T}} V^{t}_{{\\bm{h}}} &&= {\\bm{x}}_t {\\text{ } \\hat{*} \\text{ }} V^{t}_{{\\bm{h}}}\n{ \\text{ ,} }\n\\end{alignat}\nwhere $\\otimes$, ${\\text{ } \\hat{*} \\text{ }}$, and $*$ respectively denote the Kronecker product and (de-)convolution operator.\n\n\n\n\n\n\n\n\n\n\\input{subtex\/table-optimizer-relation}\n\n\n\\textbf{Curvature approximation.}\n{\n Next, since DNNs are highly over-parametrized models, ${\\bm{u}}_t$ (\\emph{i.e.} the layer weight) will be in high-dimensional space.}\nThis makes ${Q^t_{{\\bm{u}} {\\bm{u}}}}$ and $({Q^t_{{\\bm{u}} {\\bm{u}}}})^{-1}$ computationally intractable to solve; thus requires approximation.\nRecall the interpretation we draw in \\eq{\\ref{eq:Jt}}\nwhere existing optimizers differ in approximating the Hessian $\\nabla_{{\\bm{u}}_t}^2J_t$.\nDDPNOpt adapts the same curvature approximation to ${Q^t_{{\\bm{u}} {\\bm{u}}}}$.\nFor instance, we can approximate ${Q^t_{{\\bm{u}} {\\bm{u}}}}$ simply with\nan identity matrix ${\\bm{I}}_t$, adaptive diagonal matrix $\\diag(\\mathpalette\\DHLhksqrt{\\mathbb{E}[{Q^t_{{\\bm{u}}}} \\odot {Q^t_{{\\bm{u}}}}]})$, or the GN matrix:\n\\begin{align} \\label{eq:ekfac-ddp}\n {Q}^t_{{\\bm{u}} {\\bm{u}}} \\approx\n \\mathbb{E}{[Q^t_{{\\bm{u}}} {Q^t_{{\\bm{u}}}}^{\\mathsf{T}}]}\n =\\mathbb{E}{[({\\bm{x}}_t \\otimes {V^{t}_{{\\bm{h}}}}) ({\\bm{x}}_t \\otimes {V^{t}_{{\\bm{h}}}})^{\\mathsf{T}}]}\n \\approx \\mathbb{E}{[{\\bm{x}}_t {\\bm{x}}_t^{\\mathsf{T}}]} \\otimes 
\\mathbb{E}{[{V^{t}_{{\\bm{h}}}} {V^{t}_{{\\bm{h}}}}^{\\mathsf{T}}]}\n { \\text{ .} }\n\\end{align}\n\nTable \\ref{table:update-rule} summarizes the difference in curvature approximation\n(\\emph{i.e.} the precondition ${\\bm{M}}_t$ ) for different methods.\nNote that DDPNOpt constructs these approximations using $(V,Q)$ rather than $J$\nsince they consider different layer-wise objectives. %\nAs a direct implication from Proposition $\\ref{prop:bp2ddp}$,\nDDPNOpt will degenerate to the optimizer it adapts for curvature approximation\nwhenever all ${Q^t_{{\\bm{u}} {\\bm{x}}}}$ vanish.\n\n\n\\textbf{Outer-product factorization.}\nWhen the memory efficiency becomes nonnegligible as the problem scales,\nwe make GN approximation to $\\nabla^2\\phi$,\nsince the low-rank structure at the prediction layer has been observed\nfor problems concerned in this work \\citep{nar2019cross,lezama2018ole}.\nIn the following proposition, we show that\nfor a specific type of OCP, which happens to be the case of DNN training,\nsuch a low-rank structure preserves throughout the DDP backward pass.\n\\begin{proposition}[Outer-product factorization in DDPNOpt] \\label{prop:gn-ddp}\n Consider the OCP where $\\ell_t \\equiv \\ell_t({\\bm{u}}_t)$ is independent of ${\\bm{x}}_t$,\n If the terminal-stage Hessian can be expressed by the outer product of vector ${\\bm{z}}_{\\bm{x}}^T$,\n $\\nabla^2\\phi({\\bm{x}}_{T}) = {\\bm{z}}_{\\bm{x}}^T \\otimes {\\bm{z}}_{\\bm{x}}^T$ (for instance, ${\\bm{z}}_{\\bm{x}}^T=\\nabla \\phi$ for GN), then we have the factorization for all $t$:\n \\begin{equation}\n \\begin{split}\n {Q^t_{{\\bm{u}} {\\bm{x}}}} = {\\bm{q}}_{\\bm{u}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t { \\text{ ,} } \\quad\n {Q^t_{{\\bm{x}} {\\bm{x}}}} = {\\bm{q}}_{\\bm{x}}^t \\otimes {\\bm{q}}_{\\bm{x}}^t { \\text{ ,} } \\quad\n V_{{\\bm{x}}\\vx}^t = {\\bm{z}}_{\\bm{x}}^t \\otimes {\\bm{z}}_{\\bm{x}}^t { \\text{ .} }\n \\end{split} \\label{eq:qxqu}\n \\end{equation}\n ${\\bm{q}}_{\\bm{u}}^t$, ${\\bm{q}}_{\\bm{x}}^t$, and ${\\bm{z}}_{\\bm{x}}^t$\n are outer-product vectors\n which are also computed along the backward pass.\n \\begin{align}\n {\\bm{q}}_{\\bm{u}}^t = {{{f}^t_{{\\bm{u}}}}^{\\mathsf{T}}} {\\bm{z}}_{\\bm{x}}^{t+1} { \\text{ ,} } \\quad\n {\\bm{q}}_{\\bm{x}}^t = {{{f}^t_{{\\bm{x}}}}^{\\mathsf{T}}} {\\bm{z}}_{\\bm{x}}^{t+1} { \\text{ ,} } \\quad\n {\\bm{z}}_{\\bm{x}}^t = \\mathpalette\\DHLhksqrt{1-{\\bm{q}}_{\\bm{u}}^{t \\text{ } {\\mathsf{T}}} {({Q^{t}_{{\\bm{u}} {\\bm{u}}})}^{-1}} {\\bm{q}}_{\\bm{u}}^t } \\text{ }{\\bm{q}}_{\\bm{x}}^t\n { \\text{ .} }\n \\label{eq:vxvx}\n \\end{align}\n\\end{proposition}\n\\vskip -0.08in\nThe derivation is left in Appendix \\ref{app:prop:gn-ddp}.\nIn other words,\nthe outer-product factorization at the final layer can be backward propagated to all proceeding layers.\nThus, large matrices, such as ${Q^t_{{\\bm{u}} {\\bm{x}}}}$, ${Q^t_{{\\bm{x}} {\\bm{x}}}}$, $V_{{\\bm{x}}\\vx}^t$, and even feedback policies ${{\\bm{K}}_t}$,\ncan be factorized accordingly, greatly reducing the complexity. 
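To make the memory savings concrete, the following short sketch (ours, written in NumPy-style Python with purely illustrative names; it is not part of any released implementation) carries out one step of the factorized backward pass of \eq{\ref{eq:vxvx}}: only the vectors $({\bm{q}}_{\bm{u}}^t,{\bm{q}}_{\bm{x}}^t,{\bm{z}}_{\bm{x}}^t)$ are stored, while ${Q^t_{{\bm{u}} {\bm{x}}}}$, ${Q^t_{{\bm{x}} {\bm{x}}}}$ and $V_{{\bm{x}}\vx}^t$ are never materialized. Here \texttt{Quu\_inv} stands for the inverse of whichever approximation of ${Q^t_{{\bm{u}} {\bm{u}}}}$ is in use, and the clipping at zero is our own numerical safeguard.
\begin{verbatim}
import numpy as np

def factorized_backward_step(f_u_T, f_x_T, z_next, Quu_inv):
    # One step of the outer-product factorized DDP backward pass.
    # f_u_T, f_x_T : transposed Jacobians of the layer dynamics f_t
    # z_next       : factor z_x^{t+1}, i.e. V_xx^{t+1} = z_next z_next^T
    # Quu_inv      : (approximate) inverse curvature (Q_uu^t)^{-1}
    q_u = f_u_T @ z_next               # Q_ux^t = q_u q_x^T is never formed
    q_x = f_x_T @ z_next               # Q_xx^t = q_x q_x^T is never formed
    s = 1.0 - q_u @ (Quu_inv @ q_u)    # scalar under the square root
    z_x = np.sqrt(max(s, 0.0)) * q_x   # V_xx^t = z_x z_x^T
    return q_u, q_x, z_x
\end{verbatim}
In this form the per-layer storage grows with the lengths of ${\bm{q}}_{\bm{u}}^t$ and ${\bm{q}}_{\bm{x}}^t$ rather than with the sizes of their outer products.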
%\n\n\n\n\n\\vspace{-8pt}\n\\input{subtex\/pseudo-code}\n\\vspace{-10pt}\n\n{\n \\textbf{Regularization on $V_{{\\bm{x}}\\vx}$.}\n Finally,\n we apply Tikhonov regularization to the value Hessian $V^t_{{\\bm{x}}\\vx}$ (line $12$ in Alg.~\\ref{alg:ddpnopt}).\n This can be seen as placing a quadratic state-cost and has been shown to improve stability on optimizing complex humanoid behavior \\citep{tassa2012synthesis}.\n For the application of DNN where the dimension of the state ({\\emph{i.e.}} the vectorized activation) varies during forward\/backward pass,\n the Tikhonov regularization prevents the value Hessian from low rank (throught ${{g}^{t \\text{ }{\\mathsf{T}}}_{{\\bm{u}}}} {V^{t}_{{\\bm{h}}\\vh}} g^t_{{\\bm{x}}}$);\n hence we also observe similar stabilization effect in practice.\n}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nAlthough standard four-dimensional (4D) General Relativity (GR) is\nbelieved to be the correct description of gravity at the classical level,\nits quantization faces many well-known problems. Therefore,\nthree-dimensional (3D) gravity has gained much interest, since classically\nit is much simpler and thus one can investigate more efficiently its\nquantization. Amongst others, in 3D gravity one obtains the\nBanados-Teitelboim-Zanelli (BTZ) black hole~\\cite{BTZ}, which is a\nsolution to the Einstein equations with a negative cosmological constant.\nThis black-hole solution presents interesting properties at both classical\nand quantum levels, and it shares several features of the Kerr black hole\nof 4D GR~\\cite{Carlip}. \n\nFurthermore, remarkable attention was addressed recently to topologically\nmassive gravity, which is a generalization of 3D GR that amounts to\naugment the Einstein-Hilbert action by adding a Chern-Simons gravitational\nterm, and thus the propagating degree of freedom is a massive\ngraviton, which amongst others also admits BTZ black-hole as exact\nsolutions~\\cite{deser}. The renewed interest on topologically\nmassive gravity relies on the possibility of\nconstructing a chiral theory of gravity at a special point of the\nparameter-space, as it was suggested in~\\cite{Li:2008dq}. This idea has\nbeen extensively analyzed in the last years~\\cite{Strominger:2008dp}, leading to a fruitful\ndiscussion that ultimately led to a significantly better understanding of\nthe model~\\cite{Maloney:2009ck}. Moreover, another\n3D massive gravity theory known as new massive gravity \\cite{Bergshoeff:2009hq, Bergshoeff:2009tb} (where the action is given by the\nEinstein-Hilbert term plus a specific\ncombination of square-curvature terms which gives rise to\nfield equations with a second order trace) have attracted considerable attention, this theory also admits interesting solutions, see for instance \\cite{Clement:2009gq, Clement:2009ka, Oliva:2009ip}. Furthermore, 3D gravity with torsion has been extensively studied in~\\cite{3dgravitywithtorsion} and references therein.\n\n\n\nOn the other hand, hairy black holes are interesting solutions of Einstein's Theory\nof Gravity and also of certain types of Modified Gravity Theories. The first attempts to couple a scalar field to gravity was done in\nan asymptotically flat spacetime. 
Then, hairy black hole solutions\nwere found \\cite{BBMB} but these\nsolutions were not examples of hairy black hole configurations\nviolating the no-hair theorems because they were not physically\nacceptable as the scalar field was divergent on the horizon and\nstability analysis showed that they were unstable\n\\cite{bronnikov}. To remedy this a regularization procedure has to\nbe used to make the scalar field finite on the horizon. Hairy black hole solutions have been extensively studied over the years mainly in connection\n with the no-hair theorems. The recent developments in string theory and\nspecially the application of the AdS\/CFT principle to condense\nmatter phenomena like superconductivity (for a review see\n\\cite{Hartnoll:2009sz}), triggered the interest of further study\nof the behavior of matter fields outside the black hole horizon\n\\cite{Gubser:2005ih,Gubser:2008px}. There are also very\ninteresting recent developments in Observational Astronomy. High\nprecision astronomical observations of the supermassive black\nholes may pave the way to experimentally test the no-hair\nconjecture\n \\cite{Sadeghian:2011ub}. Also, there are numerical investigations\n of single and binary black holes in the presence of scalar fields\n\\cite{Berti:2013gfa}. The aforementioned is a small part on the relevance that has taken the study of hairy black holes currently in the field of physics, for more details see for instance \\cite{Gonzalez:2013aca, Gonzalez:2014tga} and references therein. Also, we refer the reader to references \\cite{Martinez:1996gn, Henneaux:2002wm, Zhao:2013isa, Xu:2014uka, Cardenas:2014kaa} and references therein, where black holes solutions in three space-time dimensions with a scalar field (minimally and\/or confomally) coupled to gravity have been investigated.\n\n\n\n\n\nIn the present work we are interested in investigating the existence of 3D hairy black holes solutions for theories based on torsion. In particular, the so-called ``teleparallel\nequivalent of General Relativity\" (TEGR) \\cite{ein28,Hayashi79} is an\nequivalent formulation of gravity but instead of using the curvature\ndefined via the Levi-Civita connection, it uses the Weitzenb{\\\"o}ck\nconnection that has no curvature but only torsion. So, we consider a scalar field non-minimally coupled with the torsion scalar, with a self-interacting potential in TEGR, and we find three-dimensional asymptotically AdS, hairy black holes. It is worth mentioning, that this kind of theory (known as scalar-torsion theory), has been studied in the cosmological context, where the dark energy sector is attributed to the scalar field. It was shown that the minimal case is equivalent to standard quintessence. However, the nonminimal case has a richer structure, exhibiting quintessence-like or phantom-like behavior, or experiencing the phantom-divide crossing \\cite{Geng:2011aj, Geng:2011ka, Gu:2012ww}, see also \\cite{Horvat:2014xwa} for aplications of this theory (with a complex scalar field) to boson stars.\n\nIt is also worth to mention that a natural extension of TEGR is the so called $f(T)$ gravity, which is represented by a function of the scalar torsion $T$ as Lagrangian density \\cite{Ferraro:2006jd, Ferraro:2008ey, Bengochea:2008gz,Linder:2010py}.\nThe $f(T)$ theories picks up preferred referential frames which constitute the autoparallel curves of the given manifold. 
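For orientation (the explicit expression below is ours and is only meant to fix notation), such theories are obtained by promoting the torsion scalar in the Lagrangian density to a function of itself, schematically
\[
S_{f(T)}=\frac{1}{2\kappa}\int f(T)\,\star 1~,
\]
which reduces to TEGR with a cosmological constant for the linear choice $f(T)=T-2\Lambda$.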
\nA genuine advantage of $f(T)$ gravity compared with other deformed gravitational schemes is that the differential equations for the vielbein components are second order differential equations. However, the effects of the additional degrees of freedom that certainly exist in $f(T)$ theories is a consequence of breaking the local Lorentz invariance that these theories exhibit. Despite this, it was found that on the flat FRW background with a scalar field, up to second order linear perturbations does not reveal any extra degree of freedom at all \\cite{Izumi:2012qj}. As such, it is fair to say that the nature of these additional degrees of freedom remains unknown. Remarkably, it is possible to modify $f(T)$ theory in order to make it manifestly Lorentz invariant. However, it will generically have different dynamics and will reduce to $f(T)$ gravity in some local Lorentz frames \\cite{Li:2010cg, Weinberg, Arcos:2010gi}.\nClearly, in extending this\ngeometry sector, one of the goals is to solve the puzzle of dark energy and\ndark matter without asking for new material ingredients that have\nnot yet been detected by experiments \\cite{Capozziello:2007ec,Ghosh:2012pg}. For instance, a Born-Infeld $f(T)$ gravity Lagrangian was used to address the physically inadmissible divergencies occurring in the standard cosmological Big Bang model, rendering the spacetime geodesically complete and powering an inflationary stage without the introduction of an inflaton field \\cite{Ferraro:2008ey}. Also, it is believed that $f(T)$ gravity could be a reliable approach to address the shortcomings of general relativity at high energy scales \\cite{Capozziello:2011et}. Furthermore, both inflation and the dark energy dominated stage can be realized in Kaluza-Klein and Randall-Sundrum models, respectively \\cite{Bamba:2013fta}.\nIn this way, $f(T)$ gravity has gained attention and\nhas been proven to exhibit interesting cosmological implications. On the other hand, the search for black hole solutions in $f(T)$ gravity is not a trivial problem, and there are only few exact solutions, see for instance \\cite{G1, solutions,Rodrigues:2013ifa}. Remarkably, it is possible to construct other generalizations, as Teleparallel Equivalent of Gauss-Bonnet Gravity \\cite{Kofinas:2014owa, Kofinas:2014daa}, Kaluza-Klein theory for teleparallel gravity \\cite{Geng:2014nfa} and scalar-torsion gravity theories \\cite{Geng:2011aj, Kofinas:2015hla}. \n\n\n\n\n\n\n\n\n\n\n\n\nThe paper is organized as follows. In Section II we give a brief review of three-dimensional Teleparallel Gravity. Then, in Section III we find asymptotically AdS black holes with scalar hair, and we conclude in Section IV with final remarks.\n\n\\section{3D Teleparallel Gravity}\n\\label{Tel3D}\n\nIn 1928, Einstein proposed the idea of teleparallelism to unify gravity and electromagnetism into a unified field theory; this corresponds to an equivalent formulation of General Relativity (GR), nowadays known as Teleparallel Equivalent to General Relativity (TEGR) \\cite{ein28, Hayashi79}, where the Weitzenb\\\"{o}ck connection is used to define the covariant derivative (instead of the Levi-Civita connection which is used to define the covariant derivative in the context of GR). The first \n investigations on teleparallel 3D gravity were\nperformed by Kawai almost twenty years ago \\cite{Kawai1,Kawai2,Kawai3}. The Weitzenb\\\"{o}ck connection mentioned above has not null torsion. 
However, it is curvatureless, which implies that this formulation of gravity exhibits only torsion. The Lagrangian density $T$ is constructed from the torsion tensor.\nTo clarify, the torsion scalar $T$ is the result of a very specific quadratic combination of irreducible representations of the torsion tensor under the Lorentz group $SO(1,3)$ \\cite{Hehl:1994ue}. In this way, the torsion tensor in TEGR \nincludes all the information concerning the\ngravitational field.\nThe theory is called ``Teleparallel\nEquivalent to General Relativity'' since the field equations are exactly the same as those of GR for every geometry choice.\n\nThe Lagrangian of teleparallel 3D gravity corresponds to\nthe more general quadratic Lagrangian for torsion, under the\nassumption of zero spin-connection. So, the action can be written as \\cite{Muench:1998ay,Itin:1999wi}\n\\begin{equation} \\label{action2}\nS=\\frac{1}{2 \\kappa}\\int \\left( \\rho_{0} \\mathcal{L}_{0}+ \\rho_{1}\n\\mathcal{L%\n}_{1}+ \\rho_{2} \\mathcal{L}_{2}+\\rho_{3} \\mathcal{L}_{3}+ \\rho_{4}\n\\mathcal{L%\n}_{4}\\right)~,\n\\end{equation}%\nwhere $\\kappa$ is the three-dimensional gravitational constant,\n$\\rho_i$ are parameters, and\n\\begin{equation}\n\\mathcal{L}_{0}= \\frac{1}{4}e^{a} \\wedge \\star e_a~,\\quad \\mathcal{L}_{1}=de^{a}\n\\wedge \\star de_{a}~,\\quad \\mathcal{L}_{2}= (de_{a} \\wedge \\star e^a)\n\\wedge \\star (de_b \\wedge e^b)~,\\nonumber\n\\end{equation}\n\\begin{equation}\n\\mathcal{L}_{3}=(de^{a} \\wedge e^{b}) \\wedge \\star (de_{a} \\wedge\ne_{b})~,\\quad \\mathcal{L}_{4}= (de_{a} \\wedge \\star e^b) \\wedge\n\\star (de_b \\wedge e^a)~,\n\\end{equation}\nwhere $e^a$ denotes the vielbein, $d$ is the exterior derivative, $\\star $ denotes the Hodge dual operator and $\\wedge$ the wedge\nproduct. The coupling constant $\\rho_{0}=-\\frac{8}{3} \\Lambda$ represents\nthe cosmological constant term. Moreover, since $\\mathcal{L}_{3}$ can be\nwritten completely in terms of $\\mathcal{L}%\n_{1}$, in the following we set $\\rho_{3}=0$ \\cite{Muench:1998ay}. Action (\\ref{action2}) can be written in a more convenient form as\n\\begin{equation}\n\\label{actiontele0}\nS=\\frac{1}{2\\kappa} \\int \\left (T -2\\Lambda \\right )\\star 1~,\n\\end{equation}\nwhere $\\star1=e^{0} \\wedge e^{1} \\wedge e^{2}$, and the torsion\nscalar $T$ is given by\n\\begin{equation} \\label{scalartorsion}\nT= \\star \\left[\\rho_{1}(de^{a} \\wedge \\star de_{a})+\\rho_{2}(de_{a} \\wedge\ne^a) \\wedge \\star (de_b \\wedge e^b)+\\rho_{4}(de_{a} \\wedge\ne^b) \\wedge \\star (de_b \\wedge e^a) \\right]~.\n\\end{equation}\nExpanding this expression in terms of its components, the torsion scalar yields\n\\begin{equation}\n\\label{scalartorsionrho}\nT=\\frac{1}{2} (\\rho_{1}+\\rho_{2}+\\rho_{4})T^{abc}T_{abc}+%\n\\rho_{2}T^{abc}T_{bca}-\\rho_{4}T_{a}^{ac}T^{b}_{bc}~,\n\\end{equation}\nnote that for TEGR $\\rho_{1}=0$, $\\rho_{2}=-\\frac{1}{2}$ and $\\rho_{4}=1$. 
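For concreteness (the substitution below is ours and simply spells out the values just quoted), inserting $\rho_{1}=0$, $\rho_{2}=-\frac{1}{2}$ and $\rho_{4}=1$ into (\ref{scalartorsionrho}) gives the TEGR torsion scalar in the conventions adopted here,
\[
T_{\rm TEGR}=\frac{1}{4}T^{abc}T_{abc}-\frac{1}{2}T^{abc}T_{bca}-T_{a}^{ac}T^{b}_{bc}~.
\]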
\nA variation of action (\\ref{actiontele0}) with respect to the\nvielbein provides the following field equations:\n\\begin{eqnarray}\\label{fieldequations}\n&&\\delta \\mathcal{L} =\\delta e^{a}\\wedge \\left\\{\\left\\{\\rho\n_{1}\\left[2d\\star de_{a}+i_{a}(de^{b}\\wedge \\star\nde_{b})-2i_{a}(de^{b})\\wedge\n\\star de_{b}\\right]\n\\right.\\right.\\nonumber\n\\\\ && \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n+\\rho _{2}\\left\\{-2e_{a}\\wedge d\\star (de^{b}\\wedge\ne_{b})\n+2de_{a}\\wedge \\star (de^{b}\\wedge e_{b})+i_{a}\\left[de^{c}\\wedge\ne_{c}\\wedge\n\\star (de^{b}\\wedge e_{b})\\right]\\right.\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, \\ \\ \\ \\ \\ \\left.\n-2i_{a}(de^{b})\\wedge e_{b}\\wedge \\star\n(de^{c}\\wedge e_{c})\\right\\}\n \\nonumber\\\\\n&& \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ +\\rho_{4}\\left\\{-2e_{b}\\wedge\nd\\star (e_{a}\\wedge de^{b})+2de_{b}\\wedge \\star\n(e_{a}\\wedge de^{b})+i_{a}\\left[e_{c}\\wedge de^{b}\\wedge \\star\n(de^{c}\\wedge\ne_{b})\\right]\\right.\\nonumber \\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, \\ \\ \\ \\ \\ \\left.\\left.\n -2i_{a}(de^{b})\\wedge e_{c}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right\\}\\right\\}\\nonumber\\\\\n&&\\ \\left. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n-2\\Lambda \\star e_a \\right\\}\n=0~,\n\\end{eqnarray}\nwhere $i_a$ is the interior product and for generality's sake we have kept the\ngeneral\ncoefficients $\\rho_i$, and we have used $\\epsilon^{012}=+1$. Through the following choice of the coefficients $\\rho_{1}=0$, $\\rho_{2}=-\\frac{1}{2}$ and $\\rho_{4}=1$ Teleparallel Gravity coincides with the\nusual curvature-formulation of General Relativity and therefore the following BTZ metric is solution of TEGR \n \\begin{equation}\n \\label{metric}\nds^2=N^2dt^2-N^{-2}dr^2-r^2(d\\varphi+N_{\\varphi}dt)^2~,\n\\end{equation}\nwhere the lapse $N$ and shift $N_{\\varphi}$ functions are given by,\n\\begin{equation}\nN^2= -8GM+\\frac{r^2}{l^2}+\\frac{16G^2J^2}{r^2}~,\\quad\nN_{\\varphi}=-\\frac{4GJ}{r^2},\n\\label{BTZ0}\n\\end{equation}\nand the two constants of integration $M$ and $J$ are the usual conserved\ncharges associated with the asymptotic invariance under time displacements\n(mass) and rotational invariance (angular momentum)\nrespectively, given by flux integrals\nthrough a large circle at spacelike infinity, and $\\Lambda=-1\/l^2$ is the\ncosmological constant \\cite{BTZ}.\nFinally, note\nthat \nthe torsion scalar can be \ncalculated, leading to\nthe constant value\n\\begin{equation}\n\\label{Tteleresult}\nT=-2\\Lambda,\n\\end{equation}\nwhich is the cosmological constant as the sole source of torsion.\\\\\n\n\n\n\\section{3D Teleparallel Hairy Black Holes}\n\\label{Tel3DH}\n\\subsection{The Model}\n\nIn this section we will extend the above discussion considering a scalar field $\\phi$ non-minimally coupled with the torsion scalar with a self-interacting potential $V(\\phi)$, and then we will find hairy black hole solutions. \n So, the action can be written as \n\\begin{equation}\n \\label{accionHT}\nS=\\int \\left( \\frac{1}{2 \\kappa} T \\star 1 - \\xi \\phi^2 T \\star 1 + \\frac{1}{2} d\\phi \\wedge \\star d\\phi -V(\\phi)\\star 1\\right)~,\n\\end{equation}\nwhere $T$ is given by (\\ref{scalartorsion}) and $\\xi$ is the non-minimal coupling parameter. 
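As a quick illustration of the role of the non-minimal coupling (this rewriting is ours), note that on configurations with constant scalar field the kinetic term drops out and (\ref{accionHT}) can be recast as
\[
S\big|_{\phi=\mathrm{const}}=\frac{1}{2\kappa}\int\left[\left(1-2\kappa\,\xi\phi^{2}\right)T-2\kappa\,V(\phi)\right]\star 1~,
\]
so the scalar field effectively rescales the gravitational coupling through the combination $\frac{1}{2\kappa}-\xi\phi^{2}$, which is precisely the prefactor that appears in the field equations below.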
\nThus, the variation with respect to the vielbein leads to\n the following field equations:\n\\begin{eqnarray} \\label{fieldeq}\n\\delta_{e^{a}} \\mathcal{L} &=&\\delta e^{a}\\wedge\n\\left\\{\\left(\\frac{1}{2\\kappa}-\\xi\\phi^2\\right)\n\\left\\{\\rho\n_{1}\\left[2d\\star de_{a}+i_{a}(de^{b}\\wedge \\star\nde_{b})-2i_{a}(de^{b})\\wedge\n\\star de_{b}\\right]\\right.\\right.\\nonumber\n\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ +\\rho _{2}\\left\\{-2e_{a}\\wedge d\\star (de^{b}\\wedge\ne_{b})+2de_{a}\\wedge \\star\n(de^{b}\\wedge e_{b})+i_{a}\\left[de^{c}\\wedge e_{c}\\wedge\n\\star (de^{b}\\wedge e_{b})\\right] \\right.\n\\nonumber\\\\\n&&\\left. \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\, -2i_{a}(de^{b})\\wedge\ne_{b}\\wedge \\star\n(de^{c}\\wedge e_{c})\\right\\}\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ \n+\\rho_{4}\\left\\{-2e_{b}\\wedge d\\star (e_{a}\\wedge\nde^{b})+2de_{b}\\wedge \\star\n(e_{a}\\wedge de^{b})\\right.\n\\nonumber \\\\\n&&\\left.\\left.\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\,\n+i_{a}\\left[e_{c}\\wedge de^{b}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right] -2i_{a}(de^{b})\\wedge e_{c}\\wedge \\star (de^{c}\\wedge\ne_{b})\\right\\}\n\\right\\}\\nonumber\\\\\n&&\\ \\ \\ \\ \\ \\ \\ \\ -4\\xi\\left[\\rho_1\\phi d\\phi \\wedge \\star de_a+\\rho_2 \\phi d\\phi\\wedge\ne_a\\wedge\n\\star(de_b\\wedge e^b)+\\rho_4\\phi d\\phi \\wedge e_b\\wedge \\star(de^b \\wedge\ne_a)\\right]\\nonumber\\\\\n&&\\left. \\ \\ \\ \\ \\ \\ \\ \\ - V(\\phi) i_a(\\star 1)-\\frac{1}{2}d\\phi\\wedge i_a(\\star d\\phi)-\\frac{1}{2} i_a(d\\phi)\\wedge\\star d\\phi \\right\\}=0~,\n\\label{fieldeq000}\n\\end{eqnarray}\nand the variation with respect to the scalar field leads to the Klein-Gordon equation \n\\begin{equation}\n\\label{fieldeq001}\n\\delta_{\\phi} \\mathcal{L}=\\delta \\phi \\left(-2\\xi\\phi T \\star 1-d\\star d\\phi - \\frac{dV}{d\\phi}\\star 1 \\right)=0~.\n\\end{equation}\n\n\n\\subsection{Circularly Symmetric Hairy Solutions}\n\\label{circsymmsol}\nLet us now investigate hairy black hole solutions of the theory. 
In order to\nanalyze static solutions we consider\nthe metric form as\n\\begin{equation}\\label{metric}\nds^{2}=A\\left( r\\right) ^{2}dt^{2}-\\frac{1}{B\\left( r\\right) ^{2}}\ndr^{2}-r^{2}d\\varphi^{2}~,\n\\end{equation}\nwhich arises from the triad \ndiagonal ansatz \n\\begin{equation}\\label{diagonal}\ne^{0}=A\\left( r\\right) dt~,\\text{ \\ }e^{1}=\\frac{1}{B\\left( r\\right) }dr~,\n\\text{ \\ }e^{2}=rd\\varphi~.\n\\end{equation}\nThen, inserting this vielbein in the field equations (\\ref{fieldeq000}), (\\ref{fieldeq001}) yields\n\\begin{equation}\n-\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dB^2}{dr}}+\\frac{4}{r}\\xi B(r)^2\\phi(r)\\frac{d\\phi}{dr}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2-V(\\phi)=0~,\n\\label{q1}\n\\end{equation}\n\\begin{equation}\n\\frac{B(r)^2}{rA(r)^2}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2+V(\\phi)=0~, \n\\label{q2}\n\\end{equation}\n\\begin{eqnarray}\n\\notag&& 2 \\xi\\phi(r) \\frac{d\\phi}{dr}\\frac{B(r)^2}{A(r)^2}\\frac{dA^2}{dr}-\\frac{1}{2}B(r)^2(\\frac{d\\phi}{dr})^2\\\\\n\\notag&&+\\frac{1}{2A(r)^4}(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\left(-A(r)^2\\frac{dA^2}{dr}\\frac{dB^2}{dr}+B(r)^2(\\frac{dA^2}{dr})^2-2A(r)^2 B(r)^2\\frac{d^2A^2}{dr^2}\\right)\\\\\n&&-V(\\phi)=0~,\n\\label{q3}\n\\end{eqnarray}\n\\begin{equation}\n-\\frac{2B(r)^2}{rA(r)^2}\\xi\\phi(r)\\frac{dA^2}{dr}+\\frac{1}{r}B(r)^2\\frac{d\\phi}{dr}+\\frac{1}{2}\\frac{dB(r)^2}{dr}\\frac{d\\phi}{dr}+\\frac{B(r)^2}{2A(r)^2}\\frac{dA(r)^2}{dr}\\frac{d\\phi}{dr}+B(r)^2\\frac{d^2\\phi}{dr^2}-\\frac{dV}{d\\phi}=0~.\n\\label{q4}\n\\end{equation}\n\nIt is worth mentioning that, in the case of a minimally coupled scalar field, \n the above simple, diagonal relation \n between the metric and the vielbeins (\\ref{diagonal}) is\nalways allowed, due to in this case the theory is invariant under local Lorentz transformations of the vielbein. In contrast, in the extension of a non-minimally coupled scalar field with the torsion scalar, the theory is not local Lorentz invariant, therefore, one could have in general a more complicated relation connecting the vielbein with the metric, with the vielbeins being non-diagonal even for a diagonal\nmetric \\cite{fTLorinv0}. 
However, for the three-dimensional solutions considered here, using a preferred diagonal frame is allowed, in the sense that this frame defines a global set of basis covering the whole tangent bundle, i.e., they parallelize the spacetime \\cite{Fiorini:2013hva}, \\cite{Ferraro:2011us}.\n\nIn the following, and in order to solve the above system of equations, we will consider two cases: first, we analyze the case $A(r)=B(r)$, and then we analyze the more general case $A(r) \\neq B(r)$.\n\\subsubsection{$A(r)=B(r)$}\nIn this case the field equations (\\ref{q1})-(\\ref{q4}) simplify to\n\\begin{equation}\n-\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}+\\frac{4}{r}\\xi A(r)^2\\phi(r)\\frac{d\\phi}{dr}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2-V(\\phi)=0~, \n\\label{fieldequation1}\n\\end{equation}\n\\begin{equation}\n\\frac{1}{r}{(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{dA^2}{dr}}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2+V(\\phi)=0~, \n\\label{fieldequation2}\n\\end{equation}\n\\begin{equation}\n2\\xi\\phi(r) \\frac{d\\phi}{dr}\\frac{dA^2}{dr}-\\frac{1}{2}A(r)^2(\\frac{d\\phi}{dr})^2-(\\frac{1}{2\\kappa}-\\xi\\phi(r)^2)\\frac{d^2A^2}{dr^2}-V(\\phi)=0~,\n\\label{fieldequation3}\n\\end{equation}\n\\begin{equation}\n-\\frac{2}{r}\\xi\\phi(r)\\frac{dA^2}{dr}+\\frac{1}{r}A(r)^2\\frac{d\\phi}{dr}+\\frac{dA(r)^2}{dr}\\frac{d\\phi}{dr}+A(r)^2\\frac{d^2\\phi}{dr^2}-\\frac{dV}{d\\phi}=0~.\n\\label{fieldequation4}\n\\end{equation}\nNow, by adding equations (\\ref{fieldequation1}) and (\\ref{fieldequation2}) we obtain\n\\begin{equation}\nA(r)^2\\frac{d \\phi}{dr}\\left( \\frac{4 \\xi}{r} \\phi-\\frac{d \\phi}{dr}\\right)=0~.\n\\end{equation}\nTherefore, the nontrivial solution for the scalar field is given by\n\\begin{equation}\n\\phi(r)=Br^{4\\xi} ~,\n\\end{equation}\nand by using this profile for the scalar field in the remaining equations, we obtain the solution\n\\begin{equation}\n\\label{A}\nA(r)^2=Gr^{2}+H {}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})~,\n\\end{equation}\n\\begin{eqnarray}\nV(\\phi) & = & \\frac{H}{\\kappa}\\left(\\frac{\\phi}{B}\\right)^{-\\frac{1}{2\\xi}}+2G\\left(-\\frac{1}{2\\kappa}+B^2\\xi(1+4\\xi)\\left(\\frac{\\phi}{B}\\right)^2\\right) \\\\ \\notag \n&& -2H\\left(\\frac{\\phi}{B}\\right)^{-\\frac{1}{2\\xi}}\\left(\\frac{1}{2\\kappa}-B^2\\xi(1+4\\xi)\\left(\\frac{\\phi}{B}\\right)^2\\right){}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa \\xi \\phi^2)~,\n\\end{eqnarray}\nwhere $B$, $G$, and $H$ are integration constant and ${}_{2}F_{1}$ is the Gauss hypergeometric function. In the limits $\\xi \\rightarrow 0$ or $B \\rightarrow 0$ the theory reduces to TEGR, therefore, we must hope our solution reduces to the BTZ black hole, this is indeed the case, as we show below. 
For those limits we obtain:\n\\begin{eqnarray}\n\\lim_{\\xi \\rightarrow 0} \\,\\,{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})=1~,\\\\\n\\lim_{B \\rightarrow 0} \\,\\,{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})=1~,\n\\end{eqnarray}\ntherefore,\n\\begin{equation}\n\\lim_{\\xi \\rightarrow 0 \\,\\, or \\,\\, B \\rightarrow 0} \\,\\, A(r)^2=Gr^{2}+H~,\n\\end{equation}\nwhich is the non-rotating BTZ metric.\nIn order to see the asymptotic behavior of $A(r)^2$, we expand the hypergeometric function for large $r$ and $\\xi<0$:\n\\begin{equation}\n{}_{2}F_1(1,-\\frac{1}{4\\xi}, 1-\\frac{1}{4\\xi}, 2\\kappa B^2\\xi r^{8\\xi})\\approx 1-\\frac{\\kappa B^2 r^{8 \\xi}}{2 \\left( 1-\\frac{1}{4 \\xi}\\right)}+...~.\n\\end{equation}\nThis expansion shows that the hairy black hole is asymptotically AdS.\nOn other hand, in the limit $\\phi\\rightarrow 0$ the potential goes to a constant (the effective cosmological constant) $V(\\phi)\\rightarrow -\\frac{G}{\\kappa}=\\Lambda$. In Fig.~(\\ref{function}) we plot the behavior of the metric function\n$A\\left( r\\right)^2 $ given by Eq. (\\ref{A}) \nfor a choice of parameters $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$. The metric function $A(r)^2$ changes\nsign for low values of $r$, signalling the presence of a horizon,\nwhile the scalar field is\nregular everywhere outside the event horizon (for $\\xi < 0$) and null at large\ndistances. In Fig.~(\\ref{Pot1}) we show the behavior of the potential, and we observe that it tends asymptotically ($\\phi \\rightarrow 0$) to a negative constant (the effective cosmological constant). We also plot the\nbehavior of the Ricci scalar $R(r)$, the principal quadratic invariant of the Ricci tensor $R^{\\mu\\nu}R_{\\mu\\nu}(r)$, and the Kretschmann scalar\n$R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ in Fig.~(\\ref{figuraRR}) by using the Levi-Civita connection, and we observe that there is not a\nRiemann curvature singularity outside the horizon for $\\xi=-0.25,-0.5,-1$. 
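As an elementary consistency check (ours, not part of the original discussion), in the BTZ limit $A(r)^{2}=Gr^{2}+H$ the lapse vanishes at
\[
r_{h}=\sqrt{-H/G}~,
\]
which equals $1$ for the parameter choice $H=-1$, $G=1$ used in the figures; in the hairy case the hypergeometric factor in (\ref{A}) deforms the location of this root, which is the sign change of $A(r)^{2}$ referred to above.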
Also, we observe a Riemann curvature singularity at $r=0$ for $\\xi=-0.25$ and the torsion scalar is singular at $r=0$ for $\\xi=-0.25$, see Fig.~(\\ref{figuraR}).\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{funcion.eps}\n\\end{center}\n\\caption{The behavior of $A(r)^2$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$.} \\label{function}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{Pot1.eps}\n\\end{center}\n\\caption{The Potencial $V(\\phi)$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, \nand $\\xi=-0.25,-0.5,-1$.} \\label{Pot1}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{epsilon1.eps}\n\\includegraphics[width=0.4\\textwidth]{epsilon2.eps}\n\\includegraphics[width=0.55\\textwidth]{epsilon3.eps}\n\\end{center}\n\\caption{The behavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$ and $R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, and $\\xi=-0.25$ (left figure), $\\xi=-0.5$ (right figure), and $\\xi=-1$ (bottom figure).} \\label{figuraRR}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{epsilon.eps}\n\\end{center}\n\\caption{The behavior of torsion scalar $T$ as function of $r$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, and $\\xi=-0.25, -0.5, -1$.} \\label{figuraR}\n\\end{figure}\n\n\n\\subsubsection{$A(r) \\neq B(r)$}\n\nNow, by considering the following ansatz for the scalar field\n\\begin{equation}\n\\phi(r)=Br^{\\gamma} ~,\n\\end{equation}\nwe find the following solution to the field equations\n\\begin{equation}\nA(r)^2=Gr^{2}+H {}_{2}F_1(-\\frac{1}{\\gamma},\\frac{\\gamma}{4\\xi}, 1-\\frac{1}{\\gamma}, 2\\kappa B^2\\xi r^{2\\gamma}) ~,\n\\end{equation}\n\\begin{equation}\nB(r)^2= \\left( \\frac{1}{2\\kappa} - r^{2 \\gamma} \\xi B^2\\right)^{-2+\\frac{\\gamma}{2 \\xi}} A(r)^2 ~,\n\\label{horizon}\n\\end{equation}\n\\begin{eqnarray}\n\\notag V(\\phi) & = & 2H \\left( \\frac{ \\phi}{B} \\right)^{-\\frac{2}{ \\gamma}} \\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-1+\\frac{\\gamma}{2 \\xi}} \\left( 1-2\\kappa \\xi \\phi ^2\\right)^{-\\frac{\\gamma}{4 \\xi}}\\\\\n&& -\\frac{G}{2}\\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-2+\\frac{\\gamma}{2 \\xi}}\n\\left( \\frac{2}{\\kappa}-(\\gamma^2+4\\xi) \\phi^2\\right)\\\\ \\notag \n&& -\\frac{H}{2} \\left( \\frac{ \\phi}{B} \\right)^{-\\frac{2}{ \\gamma}}\\left( \\frac{1}{2\\kappa}-\\xi \\phi ^2 \\right)^{-2+\\frac{\\gamma}{2 \\xi}}\n\\left( \\frac{2}{\\kappa}-(\\gamma^2+4\\xi) \\phi^2\\right) {}_{2}F_1(-\\frac{1}{\\gamma},\\frac{\\gamma}{4\\xi}, 1-\\frac{1}{\\gamma}, 2\\kappa\\xi \\phi ^2)~,\n\\end{eqnarray}\nwhere $B$, $G$ and $H$ are integration constants. \nThis solution is asymptotically AdS and generalizes the previous one, because if we take $\\gamma=4\\xi$ it reduces to the solution of the case $A(r)=B(r)$. Furthermore, for $\\gamma=0$ we recover the static BTZ black hole. On the other hand, in the limit $\\phi \\rightarrow 0$ the potential tends to a constant $V(\\phi) \\rightarrow -2G (2 \\kappa)^{1-\\frac{\\gamma}{2 \\xi}}=\\Lambda$. \n\nAs in the previous case, we plot the behavior of the metric function\n$B\\left( r\\right)^2 $ given by (\\ref{horizon}), in Fig.~(\\ref{functionn}) \nfor a choice of parameters $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$\nand $\\gamma=-0.25,-1,-2$. 
The metric function $B(r)^2$ changes\nsign for low values of $r$, signalling the presence of a horizon,\nwhile for $\\gamma < 0$ the scalar field is\nregular everywhere outside the event horizon and null at large\ndistances. In Fig.~(\\ref{Pot2}) we show the behavior of the potential, asymptotically ($\\phi \\rightarrow 0$) it tends to a negative constant (the effective cosmological constant) as in the previous case. Also, we plot the\nbehavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$, and \n$R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ in Fig.~(\\ref{figureinvariant}) by using the Levi-Civita connection, and we observe that there is not a\nRiemann curvature singularity outside the horizon for $\\gamma=-0.25,-1,-2$. Also, we observe a Riemann curvature singularity at $r=0$ and the torsion scalar is singular at $r=0$ for all the cases considered. Asymptotically, the torsion scalar goes to $-2\\Lambda$ since this spacetime is asymptotically AdS, see Fig.~(\\ref{torsionRR}). Therefore, we have shown that there are three-dimensional black hole solutions with scalar hair in Teleparallel Gravity.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{funcionn.eps}\n\\end{center}\n\\caption{The behavior of $B(r)^2$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{functionn}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{Pot2.eps}\n\\end{center}\n\\caption{The Potencial $V(\\phi)$, for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{Pot2}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.4\\textwidth]{epsilonn1.eps}\n\\includegraphics[width=0.4\\textwidth]{epsilonn2.eps}\n\\includegraphics[width=0.55\\textwidth]{epsilonn3.eps}\n\\end{center}\n\\caption{The behavior of $R(r)$, $R^{\\mu\\nu}R_{\\mu\\nu}(r)$ and $R^{\\mu\\nu\\lambda\\tau}R_{\\mu\\nu\\lambda\\tau}(r)$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ and $\\gamma=-0.25$ (left figure), $\\gamma=-1$ (right figure), and $\\gamma=-2$ (bottom figure).} \\label{figureinvariant}\n\\end{figure}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{epsilonn.eps}\n\\end{center}\n\\caption{The behavior of torsion scalar $T$ as function of $r$ for $H=-1$, $G=1$, $B=1$, $\\kappa=1$, $\\xi=-0.25$ \nand $\\gamma=-0.25,-1,-2$.} \\label{torsionRR}\n\\end{figure}\n\n\\section{Final Remarks}\n\\label{conclusions}\n\n\nMotivated by the search of hairy black holes solutions in theories based on torsion, we have considered an extension of three-dimensional TEGR with a scalar field non-minimally coupled to the torsion scalar along with a self-interacting potential, and we have found three-dimensional asymptotically AdS black holes with scalar hair. These hairy black holes are characterized by a scalar field with a power-law behavior and by a self-interacting potential, which tends to an effective cosmological constant at spatial infinity. We have considered two cases $A(r)=B(r)$ and $A(r)\\neq B(r)$. In the first case the scalar field depends on the non-minimal coupling parameter $\\xi$, and it is regular everywhere outside the event horizon and null at spatial infinity for $\\xi < 0$, while for $\\xi = 0$ we recover the non-rotating BTZ black hole. 
In the second case the scalar field depends on a parameter $\\gamma$, and it is regular everywhere outside the event horizon and null at spatial infinity for $\\gamma < 0$, this solution generalizes the solution of the first case, which is recovered for $\\gamma=4\\xi$. Furthermore, for $\\gamma = 0$ we recover the non-rotating BTZ black hole. Moreover, the analysis of the Riemann curvature invariants and the torsion scalar shows that they are all regular outside the event horizon. In furthering our understanding, it would be interesting to study the thermodynamics of these hairy black hole solutions in order to study the phase transitions. Work in this direction is in progress.\n\n\n\n\\acknowledgments \nThis work was funded by Comisi\\'{o}n\nNacional de Ciencias y Tecnolog\\'{i}a through FONDECYT Grants 11140674 (PAG),\n1110076 (JS) and 11121148 (YV) and by DI-PUCV Grant 123713\n(JS). P.A.G. acknowledge the hospitality of the\nUniversidad de La Serena.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\n\nIt is well known that $7$-dimensional area minimizing hypersurfaces can have isolated singularities. Using work of Hardt--Simon \\cite{HS}, Smale proved in \\cite{Smale} that in an $8$-dimensional manifold $M$ with $H_7(M; \\mathbb{Z}) \\neq 0$, there exists a smooth embedded area minimizing hypersurface for a generic choice of metric. In other words, he showed that isolated singularities of an area-minimizing $7$-dimensional hypersurface can generically be perturbed away. \n\nOne may thus seek to find a smooth embedded minimal hypersurface in all $8$-manifolds $M$ equipped with a generic metric $g$, even when $H_7(M;\\mathbb{Z})=0$. Here, we find such a hypersurface in the case of positive Ricci curvature, and give a partial answer in general. We let $\\Met^{2,\\alpha}(M)$ denote the space of Riemannian metrics of regularity $C^{2,\\alpha}$ on $M$ and $\\Met^{2,\\alpha}_{\\Ric>0}(M)\\subset\\Met^{2,\\alpha}(M)$ denote the open subset of Ricci positive metrics. \n\n\\begin{theorem}[Generic regularity with positive Ricci in dimension $8$]\\label{thm:generic_bound_ricci}\nLet $M^8$ be a compact smooth $8$-manifold. There is an open and dense set $\\mathcal G \\subset \\Met_{\\Ric>0}^{2,\\alpha}(M)$ so that for $g\\in \\mathcal G$, there exists a smooth embedded minimal hypersurface $\\Sigma \\subset M$. \n\\end{theorem}\n\nWithout the curvature condition, we have the following partial result. \n\n \\begin{theorem}[Generic almost regularity in dimension $8$]\\label{thm:generic_bound}\n \tLet $M^8$ be a compact smooth $8$-manifold. There exists a dense set $\\mathcal G \\subset \\Met^{2,\\alpha}(M)$ so that for $g \\in \\mathcal G$, there exists a smooth embedded minimal hypersurface $\\Sigma \\subset M$ with at most one singular point. \n \\end{theorem}\n \n We actually prove more general results valid in all dimensions, see Theorem \\ref{thm:generic_stratum} below. \n \n As mentioned above, the principal motivation for such results is to study generic regularity of non-minimizing, high-dimensional minimal submanifolds. This contrasts with previous works on generic regularity:\n\\begin{itemize}\n \\item Hardt--Simon \\cite{HS} (resp.\\ Smale \\cite{Smale}), cf.\\ \\cite{Liu}, show that regular singularities of (one-sided) minimizing hypersurfaces can be perturbed away by perturbing the boundary (resp.\\ metric). 
\n \\item White \\cite{White:85,white2019generic} shows that minimizing integral 2-cycles are smoothly embedded surfaces for a generic metric. \n \\item Moore \\cite{Moore,Moore:book} shows that parametrized minimal (2-dimensional) surfaces are free of branch points for a generic ambient metric. \n\\end{itemize}\n\n \n \\medskip\n\nIn fact, our work proves that generically there exists a minimal hypersurface of optimal regularity avoiding \\emph{certain} singularities in ambient dimensions beyond the singular dimension. Indeed, Theorem \\ref{thm:generic_bound} is a consequence of a more general result stated below.\n\n\\begin{theorem}[Generic removability of isolated singularities]\\label{thm:generic_stratum}\nConsider a compact smooth $(n+1)$-manifold, for $n\\geq 7$. There is a dense set $\\mathcal G\\subset \\Met^{2,\\alpha}(M)$ with the following properties:\n\\begin{itemize}\n\\item If $g\\in\\mathcal G$ then there exists a minimal hypersurface $\\Sigma$, smooth away from a closed singular set of Hausdorff dimension at most $n-7$, so that for $\\mathcal S_0 \\subset \\sing(\\Sigma)$ the set of singular points with regular tangent cones, we have $\\cH^0(S_0) \\leq 1$. \n\\item If $g \\in \\mathcal G \\cap \\Met^{2,\\alpha}_{\\Ric>0}(M)$ then the same statement holds, except we can conclude that $\\cH^0(\\mathcal S_0) = 0$. \n\\end{itemize}\n\\end{theorem}\n\n\n \\medskip\n\nIn order to remove the topological condition $H_7(M; \\mathbb{Z}) \\neq 0$ of Smale, we will use the Almgren--Pitts min-max construction \\cite{Pitts}, which guarantees the existence of a minimal hypersurface $\\Sigma^n$ in a closed Riemannian manifold $(M^{n+1},g)$. As in the area-minimizing case, when the dimension $n$ satisfies $2\\leq n \\leq 6$, the Almgren--Pitts minimal hypersurface is smooth, but for larger values of $n$ there may be an at most $(n-7)$-dimensional singular set (this follows from work of Schoen--Simon \\cite{SS}). However tangent cones to min-max hypersurfaces are \\emph{a priori} only stable, while only area-minimizing cones have complements that are foliated by smooth minimal hypersurfaces (cf.\\ \\cite{BDG, Lawlor}) and it seems that such a foliation is needed (at least on one side) to perturb the singularity away by adjusting the metric \\cite{HS}.\n\nThe key technical result of this paper is that (for one-parameter min-max) at all points---except possibly one---of the singular set with a regular tangent cone, the tangent cone is area minimizing on at least one side. Put another way, we show that tangent cones that are not area minimizing on either side ``contribute to the Morse index'' from the point of view of min-max (and these are precisely the cones that we are unable to perturb away using Hardt--Simon \\cite{HS}). \n\n\\subsection{Detailed description of results} \nLet $(M^{n+1},g)$ be a closed Riemannian manifold. 
By a \\emph{sweepout} of $M$ we will mean a family of (possibly singular) hypersurfaces $\\{ \\Phi(x)=\\partial \\Omega(x)\\}_{x \\in [0,1]}$, where each hypersurface $\\Phi(x)$ is the boundary of an open set\n$\\Omega(x)$ with $\\Omega(0) = \\emptyset$ and $\\Omega(1) = M$, and we denote the family of such sweepouts by $\\mathcal S$ (see Section \\ref{sec:min-max} for the precise definition).\nThe width, $W(M)$, is then defined by \n\\[\nW(M) = \\inf_{\\Phi\\in \\mathcal S} \\left\\{ \\sup_x \\mathbf{M}(\\Phi(x))\\right\\}\\,.\n\\]\n\n\n\n Given a stationary integral varifold $V$, with $\\supp V$ regular outside of a set of $n-7$ Hausdorff dimension, we define \n \\[\n\\mathfrak{h}_\\textnormal{nm}(V):= \\left\\{ p\\in \\supp(V)\\,:\\, \\begin{gathered}\n\\textrm{for all $r>0$ small, $\\supp V\\cap B_r(p)$ is not one-sided} \\\\\n\\text{homotopy area minimizing on either side (in $B_r(p)$).}\n\\end{gathered}\n\\right\\}\\,\n \\]\n \nIn other words, $p\\in\\mathfrak{h}_\\textrm{nm}(V)$ implies that in any small ball there are one-sided homotopies on both sides of $\\supp V$ that strictly decrease area without ever increasing area. \nLet $\\cR$ denote the set of integral varifolds, \nwhose support is a complete embedded minimal hypersurface regular away from a closed singular set of Hausdorff dimension $n-7$.\nFinally, we let $\\Index(V)$ denote the Morse index of the regular part of the support of $V$, that is\n$$\n\\Index (V)=\\Index (\\supp(\\reg(V)))\\,.\n$$\nThen the main technical estimate of this paper is the following result.\n\n\\begin{theorem}[Index plus non-area minimizing singularities bound]\\label{thm:nm+index_bound}\nFor $n\\geq 7$, let $(M^{n+1}, g)$ be a closed Riemannian manifold of class $C^2$. There exists a stationary integral varifold\n\t$V\\in \\cR$ such that $|V|(M)=W$, which satisfies \n\\begin{equation}\\label{e:bound1}\n \\cH^{0} (\\mathfrak{h}_\\textnormal{nm}(V)) +\\Index(V) \\leq 1\\,.\n\\end{equation}\nIf equality holds in \\eqref{e:bound1}, then for any point $p\\in \\supp V\\setminus \\mathfrak{h}_\\textnormal{nm}(V)$ there is $\\varepsilon>0$ so that $\\supp V$ is area-minimizing to one side in $B_\\varepsilon(p)$. Finally, we can write $V=\\sum_{i}\\kappa_i\\,|\\Sigma_i|$, where $\\Sigma_i$ are finitely many disjoint embeddded minimal hypersursufaces smooth away from finitely many points with $\\kappa_i\\leq 2$ for every $i$; if $\\Sigma_i$ is one-sided then $\\kappa_i =2$ and if $\\kappa_j=2$ for some $j$ then each $\\Sigma_i$ is stable. \n\\end{theorem}\n\nThe above bound is valid in all dimensions and can be seen as a generalization of the work of Calabi--Cao concerning min-max on surfaces \\cite{CaCa}. Indeed if we define $\\mathcal{S}_{\\textnormal{nm}}(V)$ by\\footnote{Here $\\omega$ is a modulus of continuity, and we could take it to be logarithmic, as suggested by the work of \\cite{Simon}. Notice in fact that at all isolated singularities $\\mathcal S_0$, minimal surfaces have unique tangent cone and are locally $C^{1,\\log}$ deformation of the cone itself.}\n\\[\n\\mathcal{S}_{\\textnormal{nm}}(V):= \\left\\{ p\\in \\supp(V)\\,:\\, \\begin{gathered}\nV \\text{ is locally a $C^{1,\\omega}$ graph over its \\emph{unique} tangent cone $\\mathcal C$}\\\\\n\\text{at $p$ and both sides of $\\mathcal C$ are not one-sided minimizing} \n\\end{gathered} \n\\right\\}\\,\n\\]\nthen we will see that $\\mathcal{S}_{\\textnormal{nm}}(V) \\subset \\mathfrak{h}_\\textrm{nm}(V)$ in Lemma \\ref{lem:snm-hnm}. 
In particular, \\eqref{e:bound1} implies that\n\\[\n\\cH^0(\\mathcal{S}_{\\textnormal{nm}}(V)) + \\Index(V) \\leq 1.\n\\]\nThus, if we are guaranteed to have $\\Index(V) =1$ (e.g., in positive curvature) we see that $\\mathcal{S}_{\\textnormal{nm}}(V) = \\emptyset$. This is precisely the higher dimensional analogue of the result of Calabi--Cao (cf. Figure \\ref{fig:starfish} and the remark below). \n\nSee also the more recent work of Mantoulidis \\cite{mantoulidis} which makes a more explicit connection with Morse index, using the Allen--Cahn approach (as developed by Guaraco and Gaspar \\cite{Guaraco,GasparGuaraco}) rather than Almgren--Pitts; it would be interesting to elucidate the relationship between Mantoulidis's Allen--Cahn techniques and our proof of Theorem \\ref{thm:nm+index_bound}. \n\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[scale=0.4]{starfish.png}\n\t\\caption{The figure eight geodesic $c$ is an \n\texample of a min-max closed geodesic\n\tthat is stable and has one singularity \n\twith non-area minimizing tangent cone.}\n\t\\label{fig:starfish}\n\\end{figure}\n\n\n\\begin{remark}\nBy the index bound in Theorem \\ref{thm:nm+index_bound}, any tangent cone to $V$ has stable regular part. Moreover, we note that the Simons cones \\cite{Simons} in $\\mathbb{R}^8$ (formed from products of two spheres) are all stable and area minimizing on (at least) one side (cf.\\ \\cite{Lawlor}). We particularly emphasize that the Simons cone \n\\[\n\\mathbf{C}^{1,5}: = \\{(x,y) \\in \\mathbb{R}^2\\times \\mathbb{R}^6 : 5|x|^2 = |y|^2\\}\n\\]\nis one-sided minimizing (and stable), but is not minimizing on the other side. It seems to be an open question whether or not there exists an $n$-dimensional stable cone that does not minimize area on either side, for $n\\geq 7$. \n\nEven assuming the existence of a stable minimal cone which is not area minimizing on either sides, it is hard to decide if the above bound is optimal. In dimension $n=1$, such an example is provided by the classical starfish example (cf.\\ Figure \\ref{fig:starfish}), whose tangent cone at the singular point (the union of two lines through the origin) is indeed stable non-area minimizing on either sides (and the starfish fails to be one-sided homotopy minimizing on either side).\n\nWe conjecture that if there is a regular stable minimal cone that is not area-minimizing on either side, then it can arise as the tangent cone to a min-max minimal hypersurface (possibly in a manifold geometrically similar to the starfish); note that were this to occur, Theorem \\ref{thm:nm+index_bound} would imply that the resulting hypersurface would necessarily be stable.\n\\end{remark}\n\nTheorem \\ref{thm:nm+index_bound} generalizes the index upper bound of Marques and Neves \\cite{MN16} for Riemannian manifolds $M^{n+1}$, $3\\leq n+1\\leq 7$ (see also \\cite{Zhou-reg-ind}).\nIn recent years there has been tremendous progress in the understanding of the geometry of minimal hypersurfaces constructed using min-max methods in\nthese dimensions\n(see \\cite{DL}, \\cite{MN19}, \\cite{CM}, \\cite{Zh19}, \\cite{So} and references therein).\n\nFor manifolds of dimension $n+1\\geq 8$ much less is known. \nWhen Ricci curvature is positive\nZhou obtained index and multiplicity bounds for one-parameter min-max minimal hypersurface \\cite{Z17} (see also the work of Ram\\'irez-Luna \\cite{RL} and Bellettini \\cite{bellettini}). 
Upper Morse index bounds are known to hold in arbitrary manifolds of any dimensions for hypersurfaces constructed by Allen--Cahn, as proven by Hiesmayr and Gaspar \\cite{Hiesmayr,Gaspar} (see also the recent work of Dey showing that the Almgren--Pitts and Allen--Cahn approaches are equivalent \\cite{dey}). Li proved \\cite{li2019} existence \nof infinitely many distinct minimal hypersurfaces constructed \nvia min-max methods for a generic set of metrics, using the\nWeyl law of Liokumovich--Marques--Neves \\cite{LMN}.\n\n\n\\subsection{Overview of the proof}\nThe construction of a minimal hypersurface in Almgren-Pitts min-max theory proceeds by considering a sequence of sweepouts $\\{ \\Phi_i(x) \\}$\nwith the supremum of the mass $\\sup_x \\mathbf{M}(\\Phi_i(x)) \\rightarrow W(M)$ as $i \\rightarrow \\infty$.\nIt is then proved that we can find a subsequence $\\{i_k\\}$\nand $\\{\\Phi_{i_k}(x_k) \\}$ with mass tending\nto $W$, so that $|\\Phi_{i_k}|(x_k)$ converges to some $V \\in \\cR$.\n\nWe outline the proof of Theorem \\ref{thm:nm+index_bound}.\nFor the sake of simplicity, let's focus on the non-cancellation case, i.e., when all multiplicities of $V$ are one (in the case of cancellation we must argue slightly differently but the main strategy is the same).\nThe main geometric idea is to show that there cannot be two disjoint open sets $U_1,U_2$ so that $\\Sigma=\\supp V$ fails to be one-sided homotopy minimizing on \nthe same side in both $U_1$ and $U_2$. \nThis property is reminiscent of (but different from) \nalmost minimizing \nproperty introduced by Pitts to prove regularity\nof min-max minimal hypersurfaces.\n\nGranted this fact, it is easy to deduce the bound \\eqref{e:bound1}. For example, if $\\Index(\\Sigma) = 1$ and $\\hnm(\\Sigma) = \\{p\\}$, then we can localize the index in some $U$ disjoint from $p$. Because $\\Sigma$ is unstable in $U$, we can find area decreasing homotopies to both sides there, and we can also find $B_r(p)$ disjoint from $U$ with area decreasing homotopies (by definition). This contradicts the above fact. \n\nAs such, we want to show the one-sided homotopy minimizing property in pairs by using the fact that $V$ is a min-max minimal hypersurface. However, this leads us to a major difficulty. Indeed, the approximating currents $\\Phi_{i_k}(x_k)$ might cross $\\Sigma$ many times, making it difficult to glue in one-sided homotopies to push down the mass. \n\nAt a technical level, the main tool used in this paper is that it is possible to \n simplify the one-parameter case of min-max \n theory by constructing a nested optimal sweepout $\\Phi(x)$ with $\\sup \\mathbf{M}(\\Phi(x)) = W$.\nThis allows us to work with one\nsweepout $\\Phi(x)$\ninstead of a sequence of sweepouts. The nested property allows us to directly ``glue in'' the one-sided homotopies to push down the mass. \n\n\nThe existence of a nested optimal\nsweepout follows from a monotonization\ntechnique from \\cite{CL}.\nThere Chambers and Liokumovich proved that\neach sweepout\n$\\Phi_i(x)$ can be replaced by a nested sweepout \n$\\Psi_i(x)$ with $\\sup \\mathbf{M}(\\Psi_i(x)) \\leq \\sup\n\\mathbf{M}(\\Psi_i(x)) + \\frac{1}{i}$. ``Nested'' here means \nthat $\\Psi_i(x) = \\partial \\Omega(x)$ for a family of open sets with $\\Omega(x) \\subset \\Omega(y)$ if $x0$ \\cite[Proposition 0.5]{DL}.\n\\end{definition}\n\nWe will switch freely between the equivalent notation $\\mathbf{M}(\\Phi(x))$ and $\\Per(\\Omega(x))$. 
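A standard example to keep in mind (included only for illustration; it is not taken from the original text): if $f\colon M\to[0,1]$ is a Morse function, so that in particular $\Vol(\{f=t\})=0$ for every $t$, then the open sets
\[
\Omega(x)=\{p\in M: f(p)<t(x)\}~,\qquad \Vol(\Omega(x))=x~,
\]
where $t(x)$ is chosen monotonically so that the volume constraint holds, are nested, interpolate between $\emptyset$ and $M$ (up to sets of measure zero), and have boundaries that vary continuously in the flat topology, so $\{\partial \Omega(x)\}_{x\in[0,1]}$ is a sweepout which is both nested and volume parametrized in the sense of the definition below.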
We now introduce the notion of optimal nested sweepouts and prove their existence.\n\n\n\\begin{definition}[Optimal nested volume parametrized (ONVP) sweepout]\n\tA sweepout $\\{ \\Phi(x) = \\partial \\Omega(x) \\,:\\, x\\in[0,1]\\}$ is called\n\t\\begin{itemize}\n\t\t\\item \\textit{optimal} if $\\sup_{x\\in [0,1]} \\mathbf{M}(\\Phi(x)) = W$;\n\t\t\\item \\textit{nested} if $\\Omega(x_1) \\subset \\Omega(x_2)$, for all $0\\leq x_1 \\leq x_2\\leq 1$;\n\t\t\\item \\textit{volume parametrized} if $\\Vol(\\Omega(x)) = x$, for every $x\\in [0,1]$ (recall that we have assumed $\\Vol(M,g) = 1$). \n\t\\end{itemize}\n\\end{definition}\n\nNested volume parametrized sweepouts enjoy nice compactness properties.\n\n\\begin{lemma}[Compactness for nested volume parametrized sweepouts] \\label{nested sequence}\n\tLet $\\left( \\Phi_i\\right)_i$ be a sequence of nested volume-parametrized sweepouts\n\twith mass uniformly bounded, that is\n\t\\begin{equation}\\label{e:mass_bounds}\n \\sup_{i\\in \\mathbb N} \\sup_{x\\in [0, 1]} \\mathbf{M}(\\Phi_i(x))\\leq M<\\infty\\,.\n\t\\end{equation}\n\tThen there exists a subsequence $\\left(\\Phi_{i_k}\\right)_k$ converging uniformly to a nested volume parametrized sweepout $\\Psi$ such that\n\t\\begin{equation}\\label{e:sup_optimal}\n\t\t\\sup_x \\mathbf{M}(\\Psi(x)) \\leq \\liminf_k \\left(\\sup_x \\mathbf{M}( \\Phi_{i_k}(x) )\\right).\n\t\\end{equation}\n\\end{lemma}\n\n\\begin{proof} The sequence of continuous functions $\\Phi_i\\colon [0,1] \\to \\mathcal{Z}_{n-1}(M; \\mathbb{Z}_2)$ is uniformly Lispchitz continuous, since for every $0\\leq x0$ to obtain \\emph{regular} homotopic minimizers in certain situations. \n\n\n\\begin{definition}[Homotopic inner and outer minimizers] \\label{homotopy minimizer}\n\tGiven a Caccioppoli set $\\Omega$ we say that a Caccioppoli set $L(\\Omega\\,|\\, U)\\in \\mathcal I(\\Omega\\,|\\, U)$ is a \\emph{homotopic inner minimizer for $\\Omega$ in $U$}, if\n\t\\begin{enumerate}\n\t\t\\item $\\Per(L(\\Omega\\,|\\, U)\\,|\\, U)\\leq \\Per(\\Omega'\\,|\\,U)$, for every $\\Omega'\\in \\mathcal I(\\Omega\\,|\\, U)$ and\n \\item if $E\\in \\mathcal I(\\Omega\\,|\\, U)$ satisfies (1) and $ L(\\Omega\\,|\\, U) \\subset E$ then $E= L(\\Omega\\,|\\, U)$.\n\t\\end{enumerate}\n\tSimilarly, define $R(\\Omega\\,|\\, U)\\in \\mathcal O(\\Omega\\,|\\, U)$ to be a \\emph{homotopic outer minimizer for $\\Omega$ in $U$}, if\n\t\\begin{enumerate}\n\t\t\\item $\\Per( R(\\Omega\\,|\\, U)\\,|\\,U)\\leq \\Per(\\Omega'\\,|\\,U)$, for every $\\Omega'\\in \\mathcal I(\\Omega\\,|\\, U)$;\n\\item if $E\\in \\mathcal I(\\Omega\\,|\\, U)$ satisfies (1) and $E \\subset R(\\Omega\\,|\\, U)$ then $E= R(\\Omega\\,|\\, U)$.\n\t\\end{enumerate}\n\tWe say that a Caccioppoli set $\\Omega$ is an \\emph{inner (resp. outer) homotopic minimizer in $U$} if $\\Omega$ is a homotopic inner (resp.\\ outer) minimizer relative to itself. \n\\end{definition}\n\n\nIt is easy to see that inner and outer homotopic minimizers for a fixed set $\\Omega$ always exist.\n\n\\begin{lemma}[Existence of homotopic minimizers] \\label{l:existence_homotopic}\n\tFor any Caccioppoli set $\\Omega$ and open set $U$\twe can find a homotopic inner (resp.\\ outer) minimizer $L(\\Omega\\,|\\, U)$ (resp.\\ $R(\\Omega\\,|\\, U)$) for $\\Omega$ in $U$. \n\\end{lemma}\n\n\\begin{proof} We consider only the case of inner minimizers as the outer minimizers are handled identically. \n\nThis is once again an application of Arzel\\`a--Ascoli theorem. 
Indeed, notice that $\\mathcal I(\\Omega\\,|\\, U) \\not = \\emptyset$, since $\\Omega \\in \\mathcal I(\\Omega\\,|\\, U)$, so we can consider a minimizing sequence $(E_j)_j$, that is \n\\[\n\t\\lim_{j}\\Per( E_j\\,|\\,U)=\\inf\\{ \\Per(E\\,|\\,U)\\,:\\, E\\in \\mathcal I(\\Omega\\,|\\, U)\n\t\\}\n\\]\n\tand let $\\{E_j(x): x\\in [0,1]\\} \\in \\mathcal I(\\Omega,E_j \\,|\\, U)$ be the corresponding inner volume non increasing sweepout between $E_j$ and $\\Omega$. We can assume that it is volume parametrized (being nested). Moreover $\\Per(E_j(x)\\,|\\,U)$ is uniformly bounded by $\\Per(\\Omega\\,|\\,U)$, so by Arzel\\`a--Ascoli there is a subsequence converging to $\\{E_\\infty(x)\\} \\in \\mathcal I(\\Omega, E_\\infty\\,|\\, U)$, with $E_\\infty$ satisfying the desired minimality property by lower semi-continuity of the perimeter. \n\t\n\tFinally, again by Arzel\\`a--Ascoli, we can find $L(\\Omega\\,|\\, U)\\subset \\Omega$ in the set of minimizers, which infimizes the flat distance to $\\partial \\Omega$, and so satisfies condition (2) (otherwise there would be a competitor closer to $\\Omega$ in flat norm).\n\\end{proof}\n\nWe recall the definition of one-sided minimizers, which will be useful in the sequel when we perform cut and paste arguments. \n\n\\begin{definition}[One sided minimizers]\n\tLet $E$ be a Caccioppoli set. We say that $E$ is \\emph{locally one-sided inner (resp. outer) area-minimizing} in $U$ if for every $A\\Subset U$ and $V$ with $V\\Delta E \\subset A$, we have\n\\[\n\t\\Per( E\\,|\\, A) \\leq \\Per (V \\,|\\, A)\n\\]\n\twhenever $V\\subset E$ (resp.\\ $E\\subset V$). We say that $E$ is \\emph{strictly locally one-sided inner (resp.\\ outer) area-minimizing} if the inequality holds strictly except when $E=V$ as Caccioppoli sets. \n\\end{definition}\n\nWe show that homotopic minimizers are in fact strict one sided minimizers into the region they sweep out. \n\n\\begin{lemma}[Homotopic minimizers are one sided minimizers in the swept out region] \\label{strict minimizer}\n\tSuppose $L(\\Omega\\,|\\, U)$ is an homotopic inner (resp. outer) minimizer for $\\Omega$ in $U$. Then $L(\\Omega\\,|\\, U)$ (resp. $R(\\Omega\\,|\\, U)$) is strict locally outer (resp. inner) one-sided minimizing in $U\\cap \\Omega$ (resp. $U\\setminus \\Omega$) .\n\\end{lemma}\n\n\\begin{proof}\n\tWe consider homotopic inner minimizers; the case of outer minimizers is similar. \n\t\n\tIf $L(\\Omega\\,|\\, U)$ is not a strict outer minimizer in $U\\cap \\Omega$ then there is $V'$ with $L(\\Omega\\,|\\, U)\\subset V'$ and $L(\\Omega\\,|\\, U) \\Delta V' \\subset A \\Subset U$ and\n\t\\[\n\t\\Per(V'\\,|\\,A) \\leq P(L(\\Omega\\,|\\, U)\\,|\\,A).\n\t\\]\n\tWe can minimize perimeter in $A$ among all such $V'$ to find $V$. Namely,\n\t\\begin{equation}\\label{e:confusing1}\n\t\\Per( V \\,|\\,A )\\leq \\Per(W\\,|\\,A) \n\t\\end{equation} \n\t\tfor all $W$ with $W\\Delta V\\subset A\\setminus L(\\Omega\\,|\\, U)$. Since $L(\\Omega\\,|\\, U) \\in \\mathcal I(\\Omega\\,|\\, U)$, there is $\\{U(x)\\,:\\,x \\in [0,1]\\} \\in \\mathcal I(\\Omega, L(\\Omega\\,|\\,U)\\,|\\,U)$. Set $\\Omega(x) = U(x) \\cup V$. Since $V$ satisfies \\eqref{e:confusing1},\nwe have that \n\t$$\n\t\\Per(\\Omega_t\\,|\\,A) \\leq \\Per(U_t\\,|\\,A).\n\t$$\n\tThis implies that $\\Omega(1)=V$ satisfies (1) of Definition \\ref{homotopy minimizer} and $V \\Delta L(\\Omega\\,|\\,U)\\subset A \\setminus L(\\Omega\\,|\\, U)$, therefore by (2) of Definition \\ref{homotopy minimizer}, it follows that $V= L(\\Omega\\,|\\,U)$. 
This completes the proof. \n\\end{proof}\n\n\nWe have the following lemma that will allow us to find bounded mass homotopies in certain situations. \n\\begin{lemma}[Interpolation lemma] \\label{l:close in flat}\n\tFix $L>0$. For every $\\varepsilon>0$ there exists $\\delta>0$, such that the following holds. If $\\Omega_0,\\Omega_1 $ are two sets of finite perimeter, such that $\\Omega_0 \\subset \\Omega_1$, $\\Per(\\Omega_i) \\leq L$, $i=0,1$, and $\\Vol(\\Omega_1 \\setminus \\Omega_0)\\leq \\delta$, then there exists a nested $\\mathcal F$-continuous family $\\{\\partial\\Omega_t \\}_{t \\in [0,1]}$ with \n\\[\n\\Per(\\Omega_t)\\leq \\max\\{\\Per(\\Omega_0),\\Per(\\Omega_1) \\}+\\varepsilon\n\\]\nfor all $t\\in[0,1]$\n\\end{lemma}\n\\begin{proof}\nLet $\\Omega$ be a Caccioppoli set that minimizes\nperimeter among sets $\\Omega'$\nwith $\\Omega_0 \\subset \\Omega' \\subset \\Omega_1$.\n\nFix $r>0$ such that for every $x \\in M$ the ball $B(x,2r)$\nis $2$-bi-Lipschitz diffeomorphic to\nthe Euclidean ball of radius $2r$.\nLet $\\{B(x_i,r) \\}_{i=1}^N$ be a collection of balls covering $M$. By coarea inequality\nwe can find a radius $r_i \\in [r,2r]$,\nso that $\\mathbf{M}(\\partial B(x_i,r_i) \\cap \\Omega \\setminus \\Omega_0) \\leq \\frac{\\delta}{r}$.\n\nLet $U_1 = B(x_1,r_1) \\cap\n\\Omega \\setminus \\Omega_0$.\nBy a result of Falconer (see \\cite{Falconer},\n\\cite[Appendix 6]{Guth}) there exists a family of \nhypersurfaces sweeping out $U_1$ of area bounded by\n$c(n) \\delta^{\\frac{n}{n+1}}$. It follows (see\n\\cite[Lemma 5.3]{CL}) that\nthere exists a nested family \n$\\{ \\Xi^1(t)\\} $ of Caccioppoli sets with\n$\\Xi^1(0) = \\Omega_0$ and $\\Xi^1(1) = \\Omega_0 \\cup U_1$\nand satisfying\n$$\\Per(\\Xi^1(t)) \\leq \\Per(\\Omega_0)+2c(n)\\delta^{\\frac{n}{n+1}} \\, . $$\nLet $\\Omega^1 = \\Omega_0 \\cup U_1$. Observe, that\nthe minimality of $\\Omega$ implies that\n$$\\Per(\\Omega^1) \\leq \\Per(\\Omega_0) + \\frac{2\\delta}{r} $$\n\nInductively, we define $\\Omega^k= \\Omega^{k-1} \\cup U_k$ and\n$U_k = B(x_k,r_k) \\cap\n\\Omega \\setminus \\Omega^{k-1}$. As above we can construct a nested homotopy of Caccioppoli sets $\\Xi^k(t)$ from $\\Omega^{k-1}$ to $\\Omega^k$, satisfying\n$$\\Per(\\Xi^k(t)) \\leq \\Per(\\Omega_0)+2c(n)\\delta^{\\frac{n}{n+1}} \n+ \\frac{2N\\delta}{r}$$\n\nWe choose $\\delta>0$ so small that $\\Per(\\Xi^k(t))< \\Per(\\Omega_0) + \\varepsilon$.\nIt follows then that we have obtained a homotopy from $\\Omega_0$ to $\\Omega$ satisfying the\ndesired perimeter bound. Similarly,\nwe construct a homotopy from $\\Omega$ to $\\Omega_1$.\n\\end{proof}\n\n\n\n\n\n\n\nFinally, we have the following result. Recall that White \\cite{W} proves that strictly stable \\emph{smooth} minimal hypersurfaces are locally area-minimizing. A generalization of such a result to the case of hypersurfaces with singularities (i.e., elements of $\\cR$) would be very interesting. 
The following (weaker) result will suffice for our needs; it can be seen as a result along these lines, except ``stability'' is replaced by a stronger hypothesis: the surface is homotopic minimizing to one side.\\footnote{Note that one certainly needs a condition on the singularities rather than just a condition on the regular part like strict stability, since as we show in Proposition \\ref{p:def_thm}, the existence of (regular) non-minimizing tangent cones implies that the hypersurface is not homotopic minimizing irrespective of any stability condition that might hold on the regular part.}\n\n \\begin{proposition}[Comparing the notions of minimizing vs.\\ homotopic minimizing for minimal surfaces]\\label{prop:min-vs-htpy-min}\nSuppose that $\\Omega$ is a Caccioppoli set and for some strictly convex open set $U \\subset M$ with smooth boundary, the associated varifold $V = |\\partial \\Omega|$ satisfies $V \\in \\cR(U)$. Assume that $\\supp V \\cap U$ is connected. \n\nSuppose that $\\Omega$ is inner (resp.\\ outer) homotopy minimizing in $U$. Then, at least one of the following two situations holds:\n\\begin{enumerate}\n\\item for all $p \\in \\supp V \\cap U$, there is $\\rho_0>0$ so that for $\\rho<\\rho_0$, $B_\\rho(p) \\subset U$ and $\\Omega$ is inner (resp.\\ outer) minimizing in $B_\\rho(p)$, or\n\\item there exists a sequence of Caccioppoli sets $E_i\n\\neq \\Omega$\nwith $|\\partial E_i| \\in \\cR(U)$ so that $E \\Delta \\Omega \\subset \\Omega \\cap U$ (resp.\\ $\\Omega^c \\cap U$), $|\\partial E_i|$ has stable regular part, and $\\partial E_i \\to \\partial \\Omega$ in the flat norm. \n\\end{enumerate}\n\\end{proposition}\n\n\\begin{remark}\nIt is interesting to ask if the second possibility occurs; it seems possible that one could rule this out in the case where $V$ has regular tangent cones that are all strictly minimizing in the sense of Hardt--Simon \\cite[\\S 3]{HS}. \n\\end{remark}\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:min-vs-htpy-min}]\nWe consider the ``inner'' case, as the ``outer'' case is similar. Let $E^\\varepsilon \\in \\mathcal I_\\varepsilon(\\Omega\\, |\\, U)$ minimize perimeter among all sets in $\\mathcal I_\\varepsilon(\\Omega\\,|\\, U)$ (as usual, the existence of $E^\\varepsilon$ follows from Arzel\\`a--Ascoli). We claim that $E^\\varepsilon$ is area-minimizing to the inside of $\\Omega$ in sufficiently small balls. \n\nMore precisely, for $r>0$ sufficiently small, suppose there was a Caccioppoli set $E'$ so that $E'\\Delta E^\\varepsilon \\subset B_r(p) \\cap U \\cap \\Omega$ and $\\Per(E'\\,|\\,U) < \\Per(E^\\varepsilon\\,|\\,U)$. As long as $r$ was chosen sufficiently small, Lemma \\ref{l:close in flat} guarantees\nthat $E' \\in \\mathcal I_\\varepsilon(\\Omega\\, |\\,U)$. This is a contradiction. \n\nNow, consider $p \\in \\reg V \\cap U$. We note that $E^\\varepsilon$ is almost minimizing (with no constraint coming from $\\Omega$) in the sense of \\cite{Tam}, and thus has $C^{1,\\alpha}$ boundary in $B_r(p) \\cap U$, thanks to standard results on the obstacle problem; see \\cite[\\S 1.9, \\S1.14(iv)]{Tam}. As such, away from $\\sing V$ (which has Hausdorff dimension at most $n-7$) we can thus conclude that $\\partial^*E^\\varepsilon$ is regular, stationary and stable.\\footnote{Cf.\\ the proof of \\cite[Proposition 2.1]{Liu} for the proof of stability.} A capacity argument then implies that $|\\partial E^\\varepsilon| \\in \\cR(U)$ and $\\partial^*E^\\varepsilon$ is stable. 
Therefore, the maximum principle for (possibly singular) hypersurfaces \\cite{Ilm} implies that either $E^\\varepsilon = \\Omega$ or $\\partial^*E^\\varepsilon \\cap \\supp V = \\emptyset$. In the first case, we can conclude that $\\Omega$ is inner minimizing in small balls (since $E^\\varepsilon$ is). \n\nWe can thus assume that the latter possibility holds for all $\\varepsilon>0$ sufficiently small. Taking $\\varepsilon_j\\to 0$, there is $E \\in \\mathcal I_0(\\Omega | U)$ so that $E^{\\varepsilon_j}\\to E$ with respect to the flat norm. If $E=\\Omega$, then the second possibility in the conclusion of the proposition holds for $E_j = E^{\\varepsilon_j}$. \n\nThe final case to consider is $E\\neq \\Omega$. By curvature estimates for stable minimal hypersurfaces \\cite{SS}, $|\\partial E| \\in \\cR(\\Omega)$ and thus $\\partial^*E \\cap \\supp V = \\emptyset$ again by the maximum principle.\n\nWe know that $\\Per(E_i| U) \\leq \\Per(\\Omega| U)$, so in the limit we get\n$\\Per(E| U) \\leq \\Per(\\Omega| U)$\nBy Arzel\\`a-Ascoli nested homotopies from $\\Omega$ to $E_i$ will converge to\na nested homotopy $E(t)$ from $\\Omega$ to $E$ that does not increase volume.\nBy the inner homotopy minimizing property of $\\Omega$ we have\n\\[\n\\Per(E\\,|\\,U) = \\Per(\\Omega\\, |\\, U).\n\\]\n\nSuppose we minimize perimeter among Caccioppoli sets $A'$ sandwiched between $E$ and $\\Omega$, $E \\subset A' \\subset \\Omega$. We claim that the\nminimizer $A$ has perimeter equal to that of $\\Omega$. Indeed, if $A$ has strictly\nsmaller perimeter, then family $E(t) \\cup A$\nis an area decreasing nested homotopy between $\\Omega$ and $A$, contradicting that\n$\\Omega$ is inner homotopic minimizing.\n\nWe thus see that $\\Omega$ is minimizing in $\\Omega \\cap E^c \\cap U$, which implies that it is inner minimizing in small balls, as asserted. \n\\end{proof}\n\n\n\\section{Non-excessive sweepouts}\\label{sec:min-max}\n\nIn this section we introduce the concept of excessive intervals and excessive points for a sweepout and prove that there is a sweepout, such that every point in the critical domain is not excessive.\n\n\\begin{definition}[Excessive points and intervals] \\label{def:excessive_interval}\n\tSuppose $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ is a sweepout. Given a connected interval $I$ (we allow $I$ to be open, closed, or half-open)\n\twe will say that $\\{\\Phi^I(x) = \\partial \\Omega^I(x)\\}_{x\\in \\bar I}$ is an \\emph{$I$-replacement family for $\\Phi$} if\n\t$\\Omega^I(a) = \\Omega(a)$, $\\Omega^I(b)=\\Omega(b)$ and for all $x \\in I$, \n\\[\n\\limsup_{I \\ni y\\to x} \\mathbf{M}(\\Phi^I(y)) < W.\n\\]\nWe say that a connected interval $I$ is an \\emph{excessive interval for $\\Phi$} if there is an $I$-replacement family for $\\Phi$. We say that a point $x$ is \\emph{left (resp.\\ right) excessive for $\\Phi$} if there is an excessive interval $I$ for $\\Phi$ so that $(x-\\varepsilon,x]\\subset I$ (resp.\\ $[x,x+\\varepsilon)\\subset I$) for some $\\varepsilon>0$. \n\\end{definition}\n\n \n The goal of this section is to prove the following result.\n\n\\begin{theorem}[Existence of non-excessive min-max hypersurface]\\label{c:non-excessive_minmax}\nThere exists a (ONVP) sweepout $\\Psi$ such that every $x\\in {\\bf m}_L(\\Psi)$ is not left excessive and every $x \\in {\\bf m}_R(\\Psi)$ is not right excessive. \n\\end{theorem}\n\n\\subsection{Preliminary results} We establish several results that will be used in the proof of Theorem \\ref{c:non-excessive_minmax}. 
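Before turning to the extension lemmas, we record an elementary observation which may help fix intuition for Definition \\ref{def:excessive_interval}; throughout, $a \\leq b$ denote the endpoints of $\\bar I$.\n\n\\begin{remark}\nIf $I$ is a connected interval with $\\sup_{x\\in \\bar I} \\mathbf{M}(\\Phi(x)) < W$, then $I$ is excessive for $\\Phi$: the restriction $\\Phi^I := \\Phi|_{\\bar I}$ is itself an $I$-replacement family, since $\\Omega^I(a) = \\Omega(a)$, $\\Omega^I(b) = \\Omega(b)$ and\n\\[\n\\limsup_{I \\ni y\\to x} \\mathbf{M}(\\Phi^I(y)) \\leq \\sup_{y \\in \\bar I} \\mathbf{M}(\\Phi(y)) < W \\qquad \\text{for all } x \\in I.\n\\]\nIn particular, if $\\sup_{y\\in(x-\\varepsilon,x+\\varepsilon)} \\mathbf{M}(\\Phi(y)) < W$ for some $\\varepsilon>0$, then the interval $(x-\\varepsilon/2,x+\\varepsilon/2)$ is excessive and contains both $(x-\\varepsilon/2,x]$ and $[x,x+\\varepsilon/2)$, so $x$ is both left and right excessive. Thus excessiveness is only a nontrivial condition at points where the mass of $\\Phi$ comes arbitrarily close to $W$.\n\\end{remark}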
\n\n\n\n\n\n\n\\begin{lemma}[Extension lemma I]\\label{l:union-excessive}\nIf $I,J$ are excessive for $\\Phi$ and $I \\cap J \\not = \\emptyset$, then $I \\cup J$ is excessive for $\\Phi$. \n\\end{lemma}\n\\begin{proof}\nLet $\\{\\partial \\Omega^I(x) \\}_{x \\in I}$\nand $\\{\\partial \\Omega^J(x) \\}_{x \\in J}$\nbe $I$ and $J$ replacement families\nfor $\\Phi$.\n\nLet $a_1 = \\inf\\{x \\in I\\} $, $a_2 = \\inf\\{x \\in J\\}$ and $b_1 = \\sup\\{x \\in I\\} $, $b_2 = \\sup\\{x \\in J\\}$.\nAssume without any loss of generality that\n$a_1\\leq a_2$ and $b_1 \\leq b_2$\nand at least one of the two inequalities\nis strict.\n\nLet $K = I \\cap J$; let $a,b$ denote, respectively, left and right\nboundary points of $K$ and $c = \\frac{a+b}{2}\\in K$.\nLet $\\tilde{\\Omega}$ be a Caccioppoli set minimizing perimeter among all $\\Omega'$\nwith $\\Omega(a) \\subset \\Omega' \\subset \\Omega(b)$.\nDefine $\\phi_1:[a_1,c] \\rightarrow [a_1,b_1]$\nand $\\phi_2:[c,b_2] \\rightarrow [a_2,b_2]$\ngiven by $\\phi_1(x)=a_1+\\frac{b_1-a_1}{c-a_1}(x-a_1)$\nand $\\phi_2(x)=a_2 + \\frac{b_2-a_2}{b_2-c}(x-c)$.\nWe define an $I\\cup J$ replacement family for\n$\\Phi$ by setting\n\\[\n\\Phi^{I\\cup J}(x) = \\begin{cases} \n\\partial (\\Omega^I(\\phi_1(x)) \\cap \\tilde{\\Omega}) & x \\in [a_1,c]\\\\\n\\partial (\\Omega^J(\\phi_2(x)) \\cup \\tilde{\\Omega}) & x \\in [c,b_2]\n\\end{cases}\n\\]\nObserve that $\\Phi^{I\\cup J}$\nis continuous since $\\Phi^{I\\cup J}(c) = \\partial \\tilde{\\Omega}$.\nIt follows from our choice of $\\tilde{\\Omega}$ that\n$\\mathbf{M}(\\Phi^{I\\cup J}(x))\\leq \\mathbf{M}(\\Phi^I(\\phi_1^{-1}(x)))\\max\\{n_i,i+1\\}$ so that we can construct $J_n$-replacements $\\{\\Phi^n_{i+1}(x)=\\partial \\Omega^n_{i+1}(x)\\}$ for $n\\geq n_{i+1}$ with \n\\[\n\\Per(\\Omega^n_{i+1}(x)) \\leq A_j\n\\]\nfor $x \\in [a_j',b_j']$ and $1 \\leq j \\leq i+1$. Granted this, we can easily (inductively) complete the proof by passing $\\Phi^{n_{i+1}}_{i+1}$ to a subsequential limit (using Arzel\\`a--Ascoli).\n\n It is useful to introduce the following notation, used in the construction of $\\Phi^n_{i+1}$. Given two nested sets of finite perimeter $V \\subset W$, we\n\tlet \n\t\\begin{itemize}\n\t\t\\item $\\mathcal M_{V,W}$ an outermost Caccioppoli set minimizing perimeter among all the Caccioppoli sets $\\Omega$ with $V \\subset \\Omega \\subset W$;\n\t\t\\item $\\{\\mathcal V_{(V,W)}(x)\\}_x$ the optimal nested homotopy from $V$ to $W$.\n\t\\end{itemize} \nFor $n \\geq n_i$, we set\n\\[\nL_n : = \\mathcal M_{\\Omega(a_n),\\Omega^{n_i}_i(a_{i+1}')}, \\qquad U_n: = \\mathcal M_{\\Omega^{n_i}_i(b_{i+1}'),\\Omega(b_n)}\n\\]\nNote that for $n \\leq m$, $ L_m \\subset L_n$ and $U_n \\subset U_m$. Hence, $L_n$ and $U_n$ have $\\mathcal F$-limits as $n\\to\\infty$. For $\\varepsilon > 0$ fixed so that \n\\[\n\\max\\left\\{\\Per\\left(\\Omega^{n_{i}}(a_{i+1}')\\right),\\Per\\left(\\Omega^{n_i}_i(b_{i+1}')\\right)\\right\\}+\\varepsilon < W,\n\\]\nLemma \\ref{l:close in flat} thus guarantees that there is $n_{i+1}\\geq i+1$ sufficiently large so that for $n\\geq n_{i+1}$, \n\\[\n\\sup_t \\Per\\left(\\mathcal V_{(L_n,L_{n_{i+1}})}(t)\\right) < W, \\qquad \\sup_t \\Per\\left(\\mathcal V_{(U_{n_{i+1}},U_n)}(t)\\right) < W. 
\n\\]\nFor $n\\geq n_{i+1}$, we define\n\\[\n\\tilde \\Phi^n_{i+1}(x) = \\begin{cases}\n\\partial\\left(\\Omega^n_i(x+1) \\cap L_n\\right) & x \\in [a_n-1,b_n-1]\\\\\n\\partial \\tilde \\mathcal V_{L_n,L_{n_{i+1}}}(x) & x \\in [b_n-1,a_{n_{i+1}}]\\\\\n\\partial\\left(\\Omega^{n_{i+1}}_i(x) \\cup L_{n_{i+1}} \\cap U_{n_{i+1}} \\right)& x \\in [a_{n_{i+1}},b_{n_{i+1}}]\\\\\n\\partial \\tilde \\mathcal V_{U_{n_{i+1}},U_n}(x) & x \\in [b_{n_{i+1}},a_n+1]\\\\\n\\partial\\left(\\Omega^n_i(x-1) \\cup U_n \\right)& x \\in[a_n+1,b_n+1]. \n\\end{cases}\n\\]\nHere, the $\\tilde \\mathcal V$ are the homotopies $\\mathcal V$ reparametrized to be defined on the given intervals (the exact parametrization is immaterial). It is easy to check that $\\tilde\\Phi^n_{i+1}$ is continuous. \n\nLet $\\Phi_{i+1}^n$ denote the reparametrization of $\\tilde\\Phi_{i+1}^n$ by volume. We have arranged that $\\Phi_{i+1}^n$ is a $J_{n}$-replacement. Moreover, for $x \\in [a_{i+1}',b_{i+1}']$, we have that $\\Phi^n_{i+1}(x) = \\Phi^{n_{i+1}}_i(x)$, so \n\\[\n\\mathbf{M}(\\Phi^n_{i+1}(x)) \\leq A_j\n\\]\nfor $x \\in [a_j',b_j']$ and $1\\leq j\\leq i$. Finally, we can set \n\\[\nA_{i+1} : = \\sup_{x\\in[a_{i+1}',b_{i+1}']} \\mathbf{M}(\\Phi^{n_{i+1}}_{i}(x)) < W\n\\]\n(which is independent of $n$). This completes the proof. \n\\end{proof}\n\n\n\\subsection{Proof Theorem \\ref{c:non-excessive_minmax}}\nWe are now able to complete proof of Theorem \\ref{c:non-excessive_minmax}\n\n\t\tLet $\\Phi$ be a nested optimal sweepout. Consider the collection $\\mathcal A$ of the maximal (with respect to inclusion) excessive intervals for $\\Phi$, that is $I\\in \\mathcal A$ if for every excessive interval $I'$ such that $I'\\cap I\\neq \\emptyset$, we have $I\\supset I'$. The existence of maximal intervals follows from Proposition \\ref{p:max_int} proven above. \n\t\n\t Notice that by definition $I \\neq J\\in \\mathcal A$ implies that $I\\cap J=\\emptyset$, so we can define a new sweepout $\\Psi$ in the following way\n\\[\n\t \\Psi(x)=\\begin{cases}\n\t \\Phi^I(x) & \\quad \\text{if }x \\in I \\in \\mathcal A\\\\\n\t \\Phi(x) & \\quad \\text{otherwise}\\,.\n\t \\end{cases}\n\\]\nNote that $\\Psi$ is a nested optimal sweepout, so up to reparametrization we can assume it is (ONVP), and moreover by construction ${\\bf m}(\\Psi)\\subset {\\bf m}(\\Phi)$. Suppose that $x \\in {\\bf m}_L(\\Psi)$ is left excessive. Then, there is a $\\Psi$-excessive interval $J$ with $(x-\\varepsilon,x]\\subset J$. We claim that there is $I \\in \\mathcal A$ with $J \\subset I$. Indeed, if $J \\cap I = \\emptyset$ for all $I \\in \\mathcal A$, then $J$ is a $\\Phi$-excessive interval, contradicting the definition of $\\mathcal A$. On the other hand, if there is $I \\in \\mathcal A$ with $J \\cap I \\not =\\emptyset$, then $J \\cup I$ is excessive by Lemma \\ref{l:union-excessive-repeated}. Thus, $J \\subset I$ by definition of $\\mathcal A$ again. Thus, for $y \\in (x-\\varepsilon,x]\\subset I$, $\\Psi(y) = \\Psi^I(y)$. By the definition of replacement family, we know that if $x_i \\in (x-\\varepsilon,x]$ has $x_i\\to x$, then\n\\[\n\\limsup_{i\\to\\infty} \\mathbf{M}(\\Psi^I(x_i)) < W. \n\\]\nHowever, this contradicts the assumption that $x \\in{\\bf m}_L(\\Psi)$. The same proof works to prove that $x \\in {\\bf m}_R(\\Psi)$ is not right excessive. This finishes the proof. 
\\qed\n\n\n\\section{Deformation Theorems and Proof of Theorem \\ref{thm:nm+index_bound}}\\label{ss:deformation_thm}\nIn this section we conclude the proof of Theorem \\ref{thm:nm+index_bound}. By Theorem \\ref{c:non-excessive_minmax}, there exists an (ONVP) sweepout $\\Phi$ so that every $x \\in {\\bf m}_L(\\Phi)$ is not left excessive and every $x\\in{\\bf m}_R(\\Phi)$ is not right excessive. By Almgren--Pitts pull-tight and regularity theory \\cite{Pitts}, we find that for some $x_0\\in{\\bf m}(\\Phi)$, there is a min-max sequence $x_i\\to x_0$ so that $|\\Phi(x_i)|$ converges to some $V \\in \\cR$. Indeed, we can pull-tight $\\Phi$ to find a sweepout (in the sense of Almgren--Pitts, not in the (ONVP) sense considered in this paper) $\\tilde\\Phi$; we have that ${\\bf C}(\\tilde\\Phi) \\subset {\\bf C}(\\Phi)$ and some $V\\in{\\bf C}(\\tilde\\Phi)$ is in $\\cR$. By replacing\n$\\Phi(x)$ by $\\Phi(1-x)$ if necessary,\nwe can then assume for the rest of this section that:\n\\begin{equation}\\label{e:no_can}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that}\\\\\n\\text{$|\\Phi(x_i)| \\to V\\in\\cR$ and $\\Phi$ is not left excessive at $x_0$}\n\\end{gathered}\n\\end{equation}\n\nWe then consider two cases: $\\mathbf{M}(\\Phi(x_0)) = W$ (no cancellation) and $\\mathbf{M}(\\Phi(x_0)) < W$ (cancellation). We analyze the geometric properties of $V$ in both cases separately,\nproving deformation theorems reminiscent of those in \\cite{MN16}. \n\t\n\\subsection{No cancellation} \nThroughout this subsection we will assume the no cancellation condition\n\\[\n\\mathbf{M}(\\Phi(x_0)) = W \\,.\n\\] \nIn this case we have that $|\\Phi(x_i)| \\to |\\partial \\Omega|$, see for instance \\cite[Proposition A.1]{DL}, so we can rephrase our assumption \\eqref{e:no_can} as\n\\begin{equation}\\label{e:no_canc2}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that} \\\\\n\\text{$|\\Phi(x_i)| \\to |\\Sigma|:=|\\partial \\Omega|\\in\\cR$ and $\\Phi$ is not left excessive at $x_0$.}\n\\end{gathered}\n\\end{equation}\nIn particular, in this case the multiplicity bound of Theorem \\ref{thm:nm+index_bound} follows immediately.\n\n\n\n\n\n\t\n\n\n\\begin{proposition}\n\\label{p:pairs}\nLet $\\Sigma$ be as in \\eqref{e:no_canc2}. \nSuppose $\\Sigma$ is not homotopic minimizing to either side\nin some open set $U$. Then the following holds:\n\\begin{enumerate}\n \\item for every $x \\not\\in \\overline{U}$ there exists $r>0$, such that\n$\\Sigma$ is minimizing to one side in $B_r(x)$;\n \\item for every open set $U'$ disjoint from $U$,\n we have that $\\Sigma$ is homotopic minimizing\n to one side in $U'$.\n\\end{enumerate}\n\n\\end{proposition}\t\n\t\\begin{proof} \nWe prove statement (1).\nThere is $\\delta>0$ and Caccioppoli sets $E^-_1 \\in \\mathcal I(\\Omega\\,|\\,U)$ and $ E^+_1\\in \\mathcal O(\\Omega\\,|\\,U)$ with \n\t\\begin{equation}\\label{e:vol_dec}\n\t\\Per(E^\\pm_1\\,|\\, U)\\leq \\Per(\\Omega\\,|\\, U)-\\delta\\,,\n\t\\end{equation}\n\tand nested families $\\{\\Omega^-_1(x)\\,:x\\in [0,1]\\} \\in \\mathcal I(\\Omega, E_1^-\\,|\\,U)$ and $\\{\\Omega^+_1(x)\\,:x\\in [0,1]\\} \\in \\mathcal O(\\Omega, E_1^+\\,|\\,U)$. 
Furthermore, by Lemma \\ref{l:existence_homotopic}, we can assume that $E_1^+$ are inner and $E_1^-$ are outer homotopic minimizers in $U$.\n\t\n Let $x \\in \\Sigma \\setminus \\overline{U}$ and assume, for contradiction,\n\tthat $\\Sigma$ is not area minimizing on both sides \n\tin every ball $B_r(x)$, $r< {\\rm dist}(x, U)$. Let $E^-_2 \\subset \\Omega$, \n\twith $\\Omega \\setminus E^-_2 \\subset B_r(x)$, denote a Caccioppoli set\n\tthat is a strict outer minimizer in $\\Omega \\cap B_r(x)$.\n\tSimilarly, let $ \\Omega \\subset E^+_2$, with\n\t$ E^+_2 \\setminus \\Omega \\subset B_r(x)$, denote a Caccioppoli set\n\tthat is a strict inner minimizer in $\\Omega \\cap B_r(x)$.\n We have $$\\Per(\\Omega)> \\max\\{\\Per(E^\\pm_2) \\}.$$\n\tIf we choose $r>0$ sufficiently small,\n\tthen, by Lemma \\ref{l:close in flat}, there exist nested families $\\{\\Omega^-_2(x)\\,:x\\in [0,1]\\}$ and $\\{\\Omega^+_2(x)\\,:x\\in [0,1]\\} $ that interpolate between $E^-_2$ and \n\t$\\Omega$ and between $\\Omega$ and $E^+_2$ and satisfying\n\\begin{equation}\\label{e:vol_dec2}\n\\Per(\\Omega^\\pm_2(x))\\leq \\Per(\\Omega) + \\frac{\\delta}{2}\n\\end{equation}\n\n\tLet $(x_l,x_r)\\neq \\emptyset$ be the interval (since $\\Phi$ is nested) such that\n\t$$\n\t\\Phi(x) \\cap (\\cup_i E_i^+\\setminus \\cup_i E_i^-)\\neq \\emptyset \\, .\n\t$$\n\tThen we define a family $\\bar \\Psi\\colon [x_l-2, x_r+2] \\to \\mathcal{Z}_{n}(M; \\mathbb{Z}_2)$ by setting\n\t$$\n\t\\bar \\Psi(x):=\n\t\\begin{cases}\n\t\\partial\\left(\\Omega(x+2)\\cap E_1^-\\cap E_2^-\\right) & \\text{if } x\\in (x_l-2,x_0-2]\\\\\n\t\\partial \\left(\\Omega_1^-(x-x_0+2)\\cap E_2^- \\right) & \\text{if } x\\in [x_0-2,x_0-1]\\\\\n\t\\partial \\left(\\Omega_1^+(x-x_0+1)\\cap E_2^-\\right) & \\text{if } x\\in [x_0-1,x_0]\\\\\n\t\\partial \\left(\\Omega_2^-(x-x_0)\\cup E_1^+\\right) & \\text{if } x\\in [x_0,x_0+1]\\\\\n\t\\partial \\left(\\Omega_2^+(x-x_0-1) \\cup E_1^+\\right) & \\text{if } x\\in [x_0+1,x_0+2]\\\\\n\t\\partial\\left(\\Omega(x-2)\\cup E_1^+\\cup E_2^+\\right) & \\text{if } x\\in [x_0+2,x_r+2)\n\t\\end{cases}\n\t$$\n\tIt is easy to see that $\\bar \\Psi$ is continuous, and moreover notice that, since by Lemma \\ref{strict minimizer} $E_1^+$ is a strict inner minimizer in $U$ and $E_{1}^-$ strict outer minimizers in $U$, we have that\n\t$$\n\t\\limsup_{y\\to x} \\mathbf{M} (\\bar \\Psi(y)) < \\limsup_{y\\to x} \\mathbf{M}(\\bar\\Phi(y)) \\leq W \n\t$$\n\tfor $x\\in(x_l-2,x_0-2] \\cup [x_0-2,x_r-2)$.\n\tSince the families $\\Omega_1^\\pm(x)$ do not increase the volume of $\\Sigma$ in $U_i$ and using \\eqref{e:vol_dec} and \\eqref{e:vol_dec2}, we also have\n\t$$\n\t\\mathbf{M} (\\bar \\Psi(x)) \\leq W -\\frac{\\delta}{2} \\qquad \\forall \\, x\\in [x_0-2, x_0+2]\\,.\n\t$$\n\tWe let $\\Psi$ be the volume reparametrization of the nested sweepout $\\bar \\Psi$, then $\\Psi$ is a $(x_l,x_r)$-replacement for $\\Phi$, thus giving a contradiction with the fact that $x_0\\in (x_l,x_r)$ and $x_0\\in {\\bf m}_L(\\Phi)$.\n\t\n\tThe proof of statement (2) is completely analogous.\n\\end{proof}\n\n\n\n\n\\begin{proposition}\n\\label{p:def_thm}\nLet $\\Sigma$ be as in \\eqref{e:no_canc2}, then the following holds\n\\begin{enumerate}\n \\item $\\Index(\\Sigma)\\leq 1$;\n \\item If $\\Index(\\Sigma)=1$, then \n for every point $x \\in \\Sigma$ there exists $r>0$, such that\n$\\Sigma$ is minimizing to one side in $B_r(x)$;\n \\item If $\\hnm(\\Sigma)$ is non-empty, then $\\Sigma$ is stable,\n $\\cH^0(\\hnm(\\Sigma))=1$ and for every point $x \\in 
\\Sigma\\setminus \\hnm(\\Sigma)$ there exists $r>0$, such that $\\Sigma$ is minimizing to one side in $B_r(x)$.\n\\end{enumerate}\nIn particular, Theorem \\ref{thm:nm+index_bound} holds in the case of no cancellations.\n\\end{proposition}\n\t\n\\begin{proof} \nNote that if $U \\cap \\Sigma$ is smooth and unstable, it is easy to see that $\\Sigma$ is not homotopic minimizing to either side in $U$ (just consider the normal flow generated by a compactly supported unstable variation of fixed sign). Statements (2) and (3)\nof the Proposition now immediately follow from Proposition \\ref{p:pairs}. \nThe upper bound on the index (1) follows from (2) of Proposition \\ref{p:pairs} and Lemma \\ref{l:localizing-index-two-sided} below. \n\\end{proof}\t\n\\begin{lemma}[Localizing the index]\\label{l:localizing-index-two-sided}\nSuppose that $\\Sigma \\in \\cR$ is two-sided and has $\\Index(\\Sigma) \\geq 2$. Then, there is $\\Sigma^*_1,\\Sigma^*_2 \\subset \\Sigma$ smooth hypersurfaces with boundary so that the $\\Sigma^*_i$ are both unstable (for variations fixing the boundary). \n\\end{lemma}\n\\begin{proof}\nA standard capacity argument implies that there is a subset $\\Sigma' \\subset \\Sigma$ where $\\Sigma'$ is a smooth minimal surface with smooth boundary and $\\Index(\\Sigma') \\geq 2$ (with Dirichlet boundary conditions). Let $u$ denote the second (Dirichlet) eigenfunction (with eigenvalue $\\lambda <0$) for the stability operator for $\\Sigma$. Because $u$ must change sign, there are at least two nodal domains $\\Sigma_1,\\Sigma_2\\subset \\Sigma$. One can find subsets with smooth boundary $\\Sigma^*_i\\subset \\Sigma_i$ so that $\\Sigma^*_i$ are unstable. This follows from the argument in \\cite[p.\\ 21]{Chavel} (namely, by considering $(u|_{\\Sigma_i} - \\varepsilon)_+$ in the stability operator for $\\varepsilon\\to 0$ chosen so that $\\{u|_{\\Sigma_i} > \\varepsilon\\}$ has smooth boundary). \n\\end{proof}\n\n\\begin{lemma}\\label{lem:snm-hnm}\n$\\mathcal{S}_{\\textnormal{nm}}(\\Sigma)\\subset \\mathfrak{h}_\\textrm{nm}(\\Sigma)$.\n\\end{lemma}\n\\begin{proof}\nSuppose that $p \\in \\mathcal{S}_{\\textnormal{nm}}(V)$, we claim that $\\Sigma$ is not homotopic minimizing to either side in $B_\\varepsilon(p)$ for any $\\varepsilon>0$ sufficiently small. Indeed, by assumption, the unique tangent cone $\\mathbf{C} = \\partial\\Omega_\\mathbf{C}$ to $\\Sigma$ at $p$ is not minimizing to either side. This implies that there are Caccioppoli sets $E_\\mathbf{C}^- \\subset \\Omega_\\mathbf{C} \\subset E_\\mathbf{C}^+$ so that $E_\\mathbf{C}^\\pm \\Delta\\Omega_\\mathbf{C} \\subset B_1 \\subset \\mathbb{R}^{n+1}$ and so that\n\\[\n\\Per_{\\mathbb{R}^{n+1}}(E_\\mathbf{C}^\\pm\\,|\\,B_1) \\leq \\Per_{\\mathbb{R}^{n+1}}(\\Omega_\\mathbf{C}\\,|\\,B_1) - \\delta.\n\\]\nChoose $C^{1,\\omega}$ coordinates on $M$ around $p$ so that $\\Omega = \\Omega_\\mathbf{C}$ in $B_\\varepsilon(p)$ and so that $g_{ij}(p) = \\delta_{ij}$, which we can do since $g\\in C^2$ and $\\Sigma$ is a $C^{1,\\omega}$ deformation of $\\mathbf{C}$ near $p$ by assumption. 
Then, set \n\\[\nE(x) := \\begin{cases}\n (\\Omega\\setminus B_\\varepsilon) \\cup (|x| E_\\mathbf{C}^- \\cap B_\\varepsilon) & x < 0\\\\\n \\Omega & x = 0\\\\\n (\\Omega\\setminus B_\\varepsilon) \\cup (|x| E_\\mathbf{C}^+ \\cap B_\\varepsilon) & x > 0\\\\\n \\end{cases}\n\\]\nWe have that \n\\[\n\\Per_g(E(x)) - \\Per_g(\\Omega) = - |x|^n \\delta (1+o(1))\n\\]\nas $x\\to 0$ (since the metric $g_{ij}$ converges to the flat metric $\\delta_{ij}$ after rescaling $|x| \\to 1$, by the $C^{1,\\omega}$ regularity of the chart). This shows that $\\Sigma$ is not homotopic minimizing to either side in $B_\\varepsilon(p)$, so $p\\in\\mathfrak{h}_\\textrm{nm}(\\Sigma)$ as claimed. \n\\end{proof}\n\n\\subsection{Cancellation} We will assume the cancellation condition\n\\[\n\\mathbf{M}(\\Phi(x_0)) < W\n\\]\nthroughout this subsection. In particular, we can find $q \\in \\reg V$ so that for all $\\varepsilon>0$ sufficiently small,\n\\[\n\\Per(\\Omega\\,|\\,B_\\varepsilon(q)) < | V |(B_\\varepsilon(q)) \n\\]\nwhere $\\partial \\Omega = \\Phi(x_0)$. Like in the previous section we set $\\Sigma := \\supp V$. \n\nFurthermore we set $V=\\sum_{i}\\kappa_i\\,|\\Sigma_i|$, where each $\\Sigma_i$ is a minimal hypersurface with optimal regularity and $\\kappa_i\\in \\mathbb{N}$ are constant multiplicities, by the constancy theorem \\cite[Theorem 41.1]{Sim}. So \\eqref{e:no_can} becomes\n\n\\begin{equation}\n\\label{e:canc}\n\\begin{gathered}\n\\text{there is a (ONVP) sweepout $\\{\\Phi(x)=\\partial \\Omega(x)\\}$ and $x_i \\nearrow x_0 \\in {\\bf m}_L(\\Phi)$, so that} \\\\\n\\text{$|\\Phi(x_i)| \\to V=\\sum_{i} \\kappa_i\\,|\\Sigma_i|\\in\\cR$, $\\Phi$ is not left excessive at $x_0$ and} \\\\\n\\text{there is $q\\in \\Sigma$ such that }\\, \\Per(\\Omega\\,|\\,B_\\varepsilon(q)) \\leq | V |(B_\\varepsilon(q))-\\delta(\\varepsilon) \\quad \\text{for all } \\varepsilon>0\\,.\n\\end{gathered}\n\\end{equation}\n\nWe write $\\Omega=\\Omega(x_0)$ and observe that $\\Sigma\\subset \\overline\\Omega$. We would like to claim that $\\Sigma$ is homotopically minimizing, but this condition might not make sense if $\\Sigma$ is one-sided. However, thanks to the cancellation we can actually prove \nthat $\\Sigma$ is area-minimizing in its neighborhood in $\\Omega$\naway from a small ball around $q$.\n\n\n\\begin{definition} \\label{def:nbhd minimizing}\nWe will call a set $\\Omega'$ a $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\nif $$\\big(\\Omega \\setminus B_\\tau(\\Sigma)\\big) \\cup \\big(B_\\varepsilon(q) \\setminus \\Sigma \\big) \n\\subset \\Omega' \\subsetneqq \n\\Omega \\setminus \\Sigma \\, .$$\nAn $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor $\\Omega'$ will be called a\nminimizing competitor if \nits perimeter is strictly less than perimeter of any \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor $\\Omega''$ \nwith $\\Omega' \\subset \\Omega''$.\n(Note that we do not require $\\Per (\\Omega')$ to \nbe less that the perimeter of all competitors, but\nonly those that contain $\\Omega'$).\n\\end{definition}\n\n\n\n\\begin{proposition} \\label{thm:no_competitor}\n Suppose \\eqref{e:canc} holds, then \n for every $\\varepsilon>0$ there is $\\tau>0$,\n such that \n minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\n does not exist.\n\\end{proposition}\n\n\\begin{proof} \nFor contradiction suppose \nthere exists a minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor\n$U$. 
Observe that by the cancellation\nassumption for every $\\eta>0$ \nwe can find $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors \n$\\Omega'$\nwith $\\Per(\\Omega') \\leq W+ \\eta - \\delta(\\epsilon)$,\nwhere $\\delta(\\epsilon)$ is from (\\ref{e:canc}). \nIt follows that \n$$\\Per(U) \\leq \\Per(\\Omega')0$ sufficiently small,\nthen by Lemma \\ref{l:close in flat}\nthere exists a nested family $\\{E(x)\\,:\\,x\\in [0,1]\\}$\nwith $E(0)= U$, $E(1) = \\Omega$ and\n$$\\Per(E(x))< W \\, .$$\n\nLet $(x_l,x_0]$ be the connected interval such that $\\Omega(x)\\setminus U\\neq \\emptyset$, where $\\{\\Phi(x)=\\partial \\Omega(x)\\}$, and define \nfamily $\\Psi \\colon (x_l, x_0+1] \\to \\mathcal Z_n(M, \\mathbb Z_2)$ by\n$$\n\\Psi(x):=\n\\begin{cases}\n\\partial(\\Omega(x)\\cap U) & \\text{if }x\\in (x_l,x_0] \\\\\n\\partial E(x-x_0)\n& \\text{if }x\\in [x_0, x_0+1]\n\\end{cases}\n$$\nClearly $\\Psi$ is continuous, since $\\Omega=\\Omega(x_0)$ and moreover we have that\n$$\n\\limsup_{y\\to x} \\mathbf{M}(\\Psi(y)) < \\limsup_{y\\to x} \\mathbf{M}(\\Phi(x)) \\leq W\n$$\nfor every $x\\in (x_l,x_0)$ by\nstrict minimality condition in Definition \\ref{def:nbhd minimizing}. For every $x\\in [x_0, x_0+1]$\nwe also have $\\mathbf{M}(\\Psi(x)) = \\mathbf{M}(\\partial E(x))0$, such that \nthe support of $V$ is minimizing to one side in $B_r(x)$\n\\end{proposition}\n\n\n\n\\begin{proof} \nFirst we observe that we can find two points $q_1$ and $q_2$ in $\\reg V$,\nsuch that for all $\\varepsilon>0$ sufficiently small,\n\\[\n\\Per(\\Omega\\,|\\,B_\\varepsilon(q_j)) < | V |(B_\\varepsilon(q)) \\, .\n\\]\nBy Proposition \\ref{thm:no_competitor} we have non-existence\nof minimizing $(q_j,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors\nfor $j=1,2$. This implies that \n $\\Sigma_i$ is area minimizing to one side\nin a small ball around every point of $V$. In particular,\nwe have $\\cH^{0}(\\hnm(V)))=0$.\n\nThe stability of the regular part of each $\\Sigma_i$ also follows from the\nnon-existence of minimizing $(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitors.\nIndeed, if a component $\\Sigma_i$ has index $\\geq 1$,\nthen for $\\varepsilon>0$ sufficiently small, the minimal\nhypersurface $\\Sigma_i \\setminus B_{\\varepsilon}(q)$ with fixed boundary\nwill be unstable by a standard capacity argument.\nIf $\\Sigma_i$ is two-sided, then by considering a minimization problem\nto one side of $\\Sigma_i$ \nin $B_\\tau(\\Sigma_i) \\setminus B_\\varepsilon(q)$\nwe can find open set $U \\subset \\Omega$,\nsuch that $\\Omega \\setminus U$ is a minimizing \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor.\n\nSuppose $\\Sigma_i$ is one-sided.\nSince $\\Sigma_i \\subset \\overline{\\Omega}$\nwe have that $B_\\tau(\\Sigma_i)\\setminus \\Sigma_i \\subset \\Omega$\nfor all sufficiently small $\\tau>0$.\nIn particular, for small $\\tau< \\varepsilon$ \nwe can minimize in the class of hypersurfaces\n $\\{S\\subset B_\\tau(\\Sigma_i) : S \\cap B_\\varepsilon(q) = \\Sigma_i \\cap B_\\varepsilon(q)\\}$\n to obtain a minimizer $\\Sigma_i'$ in the same homology class\n and open set $U \\subset \\Omega$ with $\\partial U = \\Sigma_i \\cup \\Sigma_i'$.\n Then $\\Omega \\setminus U$ is a minimizing \n$(q,\\varepsilon, \\tau, \\Sigma, \\Omega)$-competitor.\n\\end{proof}\n\n\\subsection{Multiplicity $2$ bound} In this subsection we show that if $\\kappa_i> 2$ for some $i$, then $x_0$ is excessive, by using simple comparisons with disks. 
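Heuristically, and to leading order, the comparison with disks is the following; the precise constants appear in the proof of Lemma \\ref{l:mult_bound} below, and the point $p$, radius $r$ and thickness $\\tau$ here are auxiliary. Near a regular point $p$ of a component $\\Sigma_i$ carried with multiplicity $\\kappa_i$, the slices $\\partial\\Omega(x)$ for $x$ slightly below $x_0$ have mass roughly $\\kappa_i\\,\\omega_n r^n$ inside $B_r(p)$, where $\\omega_n r^n$ is approximately the area of the disk $\\Sigma_i \\cap B_r(p)$, whereas the boundary of the thin slab $B_r(p) \\cap B_\\tau(\\Sigma_i)$ has area roughly $2\\,\\omega_n r^n + O(\\tau r^{n-1})$. Replacing the sheets inside the slab by part of its boundary therefore changes the mass by about\n\\[\n2\\,\\omega_n r^n - \\kappa_i\\,\\omega_n r^n \\leq -\\,\\omega_n r^n \\qquad \\text{when } \\kappa_i \\geq 3,\n\\]\na definite decrease, which is what is ultimately used to show that $x_0$ is excessive.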
Notice that if any multiplicity satisfies $\\kappa_i\\geq 2$ then we must be in the cancellation case considered above. \n\n\\begin{lemma}[Multiplicity $2$ bound]\\label{l:mult_bound}\nLet $V=\\sum_{i}\\kappa_i\\,| \\Sigma_i|$ be as in \\eqref{e:no_can}. Then $\\kappa_i\\leq 2$ for every $i$.\n\\end{lemma}\n\t\n\\begin{proof} Suppose by contradiction $\\kappa_i\\geq 3$ for some $i$. Then let $p\\in \\reg(\\Sigma_i)$, $p\\neq q$ (where $q$ is the cancellation point considered above). Consider a ball $B_r(p)$, $r < \\frac{1}{2}dist(p,q)$, sufficiently small so that $\\Sigma_i\\cap B_r(p)$ is two-sided. Let $\\tau(r)>0$ be a small constant\nto be chosen later\nand set $U = B_r(p) \\cap B_\\tau(\\Sigma_i)$.\n\nConsider sequence $x_j \\nearrow x_0$\nwith $|\\partial \\Omega(x_j)| \\rightarrow V$.\nWe can assume that the radius $r$ was\n chosen sufficiently small, so that\n\\begin{equation}\\label{e:mult_bound1}\n \\mathbf{M}(\\partial \\Omega(x_j) \\cap U)\\geq \\left(\\kappa_i - \\frac{1}{10}\\right) \\omega_n r^n\\,,\n\\end{equation}\nfor all $j$ large enough, where $\\omega_n$ denotes the measure of the $n$-dimensional ball of radius one.\n\nLet $\\Omega_j' \\subset \\Omega(x_j)$, $\\Omega_j' \\setminus U = \\Omega(x_j)\n\\setminus U$, be a strict one-sided outer area minimizer\nin $\\Omega(x_j)\\cap U$.\nObserve that if $\\Omega'_j$ does not converge to $\\Omega(x_0)$,\nthen $\\lim \\Omega_j'$ is a $(q,\\frac{1}{2}dist(p,q), \\tau, \\Sigma, \\Omega(x_0))$-competitor,\nwhich contradicts Proposition \\ref{thm:no_competitor}.\n\nWe conclude that $\\lim \\Omega_j' = \\Omega(x_0)$.\nOn the other hand, \nby comparing $\\Omega(x_j) \\setminus U$ to $\\Omega_j'$ and\nassuming that $\\tau(r)$\nwas chosen sufficiently small, we have that one-sided \narea minimizing property of $\\Omega_j'$ implies \n\\begin{equation*}\n \\mathbf{M}(\\partial \\Omega_j' \\cap U)\\leq \\Per(U) \\leq \\left(2 + \\frac{1}{10}\\right) \\omega_n r^n\\,,\n\\end{equation*}\nFor $\\tau(r)$\n sufficiently small and $j$ large we can apply\nLemma \\ref{l:close in flat} to find a nested family $E(x)$\ninterpolating between $\\Omega_j'$ and $\\Omega$, such that\n\\begin{align*}\\Per( E(x) ) &\\leq \\max \\{\\mathbf{M}(\\partial \\Omega_j' \\setminus U),\n\\mathbf{M}(\\partial \\Omega(x_0) \\setminus U) \\} + \\left(2 + \\frac{2}{10}\\right) \\omega_n r^n\\\\\n& \\leq W - \\left( 1-\\frac{3}{10} \\right) \\omega_n r^n.\n\\end{align*}\nBy combining families $\\Omega(x) \\cap \\Omega_j'$\nand $E(x)$ we obtain that $x_0$ is left-excessive.\n\\end{proof}\n\t\n\t\n\t\n\t\n\t\n\t\\subsection{Proof of Theorem \\ref{thm:nm+index_bound}} The result follow immediately by combining Corollary \\ref{c:non-excessive_minmax} with Propositions \\ref{p:pairs}, \\ref{p:def_thm}, \\ref{p:def-thm-cancel} and Lemma \\ref{l:mult_bound}. \\qed\n\t\t\n\t\\section{Proof of Theorems \\ref{thm:generic_bound_ricci}, \\ref{thm:generic_bound}, and \\ref{thm:generic_stratum}}\n In this section we prove Theorem \\ref{thm:generic_stratum} (Theorems \\ref{thm:generic_bound_ricci} and \\ref{thm:generic_bound} follow immediately from Theorem \\ref{thm:generic_stratum} when combined with the facts that when $n=8$ all singularities are regular and that the set of bumpy metrics is open and dense \\cite{White:bumpy,White:bumpy2}). 
Theorem \\ref{thm:generic_stratum} will follow from Theorem \\ref{thm:nm+index_bound} and Proposition \\ref{prop:min-vs-htpy-min}, together with a simple surgery procedure.\n \n \n \\subsection{Surgery procedure} We show here how to regularize minimal hypersurfaces with regular singularities under the assumption that the hypersurface minimizes area in a small ball around each singularity. \n \\begin{proposition}[Perturbing away regular singularities of locally area minimizing surfaces]\\label{p:surgery}\n\tFor $(M^{n+1},g)$ a compact $C^{2,\\alpha}$-Riemannian metric and $\\Sigma\\in\\cR$ a minimal hypersurface, recall that $\\mathcal S_0(\\Sigma)\\subset \\sing \\Sigma$ is defined to be the set of singular points with a regular tangent cone. There is $\\tilde g\\in \\Met^{2,\\alpha}(M)$ arbitrarily close to $g$ and $\\tilde \\Sigma$ arbitrarily close in the Hausdorff sense to $\\Sigma$ so that $\\tilde\\Sigma$ is minimal with respect to $\\Sigma$ and $\\mathcal S_0(\\tilde\\Sigma) \\subset \\hnm(\\tilde\\Sigma) = \\hnm(\\Sigma)$. \n\\end{proposition} \n\\begin{proof}\nFor every $p \\in \\mathcal S_0(\\Sigma)\\setminus \\hnm(\\Sigma)$, and $\\varepsilon_0=\\varepsilon_0(p)$ so that $\\Sigma \\cap (B_{\\varepsilon_0}(p)\\setminus p)$ is regular, we will show how to perturb $g$ and $\\Sigma$ so that $p$ becomes regular. We will do this by making an arbitrarily small change to $g$, $\\Sigma$ supported in $B_{\\varepsilon_0}(p)$. Because $\\mathcal S_0$ is discrete (but not necessarily closed when $n\\geq 9$) it is easy to enumerate the elements of $\\mathcal S_0(\\Sigma)\\setminus \\hnm(\\Sigma)$ and make a summably small change around each point. As such, it suffices to consider just the perturbation near $p$.\n\nBy definition, taking $\\varepsilon<\\varepsilon_0$ sufficiently small, $\\Sigma \\cap B_\\varepsilon(p)$ is one-sided homotopy area-minimizing. For concreteness write $\\Sigma \\cap B_\\varepsilon(p) = \\partial \\Omega$ in $B_\\varepsilon(p)$ and assume that $\\Omega$ is inner homotopy minimizing. By Lemma \\ref{lem:snm-hnm}, the tangent cone at $p$ is area-minimizing (to the same side). \n\nWe claim that (after taking $\\varepsilon>0$ smaller if necessary) there is a sequence of $\\Sigma_i\\in\\cR(B_\\varepsilon(p))$ with stable regular part, with $\\Sigma_i \\subset \\Omega$, $\\Sigma_i$ disjoint from $\\Sigma$, and $\\Sigma_i\\to \\Sigma$. Indeed, we can apply Proposition \\ref{prop:min-vs-htpy-min} to conclude that either (after shrinking $\\varepsilon>0$), $\\Omega$ is area-minimizing to the inside, or there are $\\Sigma_i$ as asserted.\n\nIn the case that $\\Omega$ is area-minimizing to the inside, we can still construct the $\\Sigma_i$ by shrinking $\\varepsilon>0$ even further so that $\\Omega$ is strictly area-minimizing to the inside and then minimizing area with respect to a boundary\n$\\Sigma \\cap \\partial B_\\varepsilon(p) + \\delta_i$, for a sequence $\\delta_i\\to 0$;\ni.e., the boundary of $\\Sigma\\cap B_\\varepsilon(p)$ pushed slightly into $\\Omega$. By the unique minimizing property, the minimizers will converge back to $\\Sigma$ in $B_\\varepsilon(p)$. \n\nFor $i$ sufficiently large we can write the intersection of $\\Sigma_i$\nwith the annulus $A(p,\\varepsilon\/5,\\varepsilon)$ as a graph of function $u_i$\nover $\\Sigma$.\n\nReasoning as in Hardt--Simon \\cite[Theorem 5.6]{HS} (cf. \\cite[Theorem 3.1]{Liu}), for $i$ sufficiently large, $\\Sigma_i$ will be regular in $B_{\\varepsilon\/2}(p)$. 
We now set \n\[\n\tilde\Sigma_i =(\Sigma_i \cap B_{\varepsilon\/5}) \cup (\Sigma\setminus B_\varepsilon(p)) \cup ((\Sigma + \chi u_i)\cap A(p,\varepsilon\/5,\varepsilon))\n\]\nwhere $\chi$ is a smooth cutoff function with $\chi\equiv1$ on $B_{\varepsilon\/5}$ and $\chi\equiv 0$ on $B_{3\varepsilon\/5}$. Note that\n\[\nH_g(\tilde\Sigma_i) \quad \text{is supported in $B_{4\varepsilon\/5}(p) \setminus B_{\varepsilon\/5}(p)$}\n\]\nand $\Vert H_g(\tilde \Sigma_i) \Vert_{C^{2,\alpha}} = o(1)$ as $i \to \infty$.\n\n\tNow, define $\tilde g = e^{2f} g$; in this new metric, since $\tilde \Sigma$ is smooth, we have the transformation\n\t$$\n\tH_{\tilde g}(\tilde \Sigma)= e^{-f}\left(H_g(\tilde\Sigma)+\frac{\partial f}{\partial \nu} \right)\,,\n\t$$\n\twhere $\nu$ is the normal direction to $\tilde\Sigma$. \n\tSetting $H_{\tilde g}(\tilde \Sigma)=0$, this reduces to the equation\n\t$$\n\tH_g(\tilde\Sigma)+\frac{\partial f}{\partial \nu}=0\n\t$$\n\twhich implies that $f=-H_g(\tilde \Sigma) \zeta(\nu)$, for a function $\zeta(t)$ such that $\zeta'(0)=1$ and $\zeta\equiv0$ for $|t|\geq \varepsilon\/100$, is a solution. Since, as observed, $H_g(\tilde\Sigma)$ is supported in $A(p,\varepsilon\/5,4\varepsilon\/5)$, so is the metric change, and since $\|u_i\|_{C^{4,\alpha}} = o(1)$ and $\chi$ is smooth, we have\n\t$$\n\t\|g-\tilde g\|_{C^{2,\alpha}}=\|e^f-1\|_{C^{2,\alpha}} \|g\|_{C^{2,\alpha}} \leq C\, \|u_i\|_{C^{4,\alpha}}\, \|g\|_{C^{2,\alpha}} = o(1)\n\t$$ \n\tas $i\to \infty$. This completes the proof. \n\\end{proof}\n\t\n\t\\subsection{Proof of Theorem \\ref{thm:generic_stratum}} For $g \\in \\Met^{2,\\alpha}(M)$, apply Theorem \\ref{thm:nm+index_bound} to find $V\\in\\cR$ with\n\t\\[\n\t\\cH^0(\\hnm(V)) + \\Index(V) \\leq 1.\n\t\\]\n\tWe can apply Proposition \\ref{p:surgery} to $\\Sigma = \\supp V$ to find a metric $\\tilde g$ that is arbitrarily $C^{2,\\alpha}$-close to $g$ and a $\\tilde g$-minimal hypersurface $\\tilde\\Sigma \\in \\cR$ so that $\\mathcal S_0(\\tilde\\Sigma) \\subset \\hnm(\\tilde\\Sigma) = \\hnm(\\Sigma)$. (Note that if $\\Index(V) = 1$, then $\\hnm(\\Sigma) = \\emptyset$, so $\\mathcal S_0(\\tilde\\Sigma) = \\emptyset$.) This completes the first part of the proof. \n\t\nWe now consider $g \\in \\Met^{2,\\alpha}_{\\Ric>0}(M)$.\\footnote{The idea is that positive Ricci curvature rules out stable hypersurfaces, but this requires the hypersurface to be two-sided. As such, we must consider two cases, depending on whether $\\Sigma$ is one- or two-sided.} If $\\Sigma$ is two-sided, then $\\Index(\\Sigma)\\geq1$, so we can argue as above. On the other hand, if $\\Sigma$ is one-sided, then $[\\Sigma]\\neq 0 \\in H_{n}(M,\\mathbb{Z}_{2})$. We can then find $\\hat\\Sigma \\in [\\Sigma]$ by minimizing area in the homology class. The surface $\\hat\\Sigma$ may have singularities, but they are all locally area minimizing. Thus, we can apply Proposition \\ref{p:surgery} to $\\hat\\Sigma$, yielding $\\tilde\\Sigma$ and $\\tilde g$ with $\\mathcal S_0(\\tilde\\Sigma) = \\emptyset$. \n\\qed\n