\\section{Introduction}\n\nExceptionally bright quasars with redshifts up to $z\\sim7$ have been discovered \\citep[e.g.][]{Fan2001,Fan2003,Mortlock2011}. These quasars are thought to be powered by the thin disk accretion of gas onto super-massive black-holes at the centres of galaxies. Their maximum (Eddington) luminosity depends on the mass of the black-hole, and the brighter quasars are inferred to have black-holes with masses of more than a few billion solar masses. Since their discovery \\citep[][]{Fan2001,Fan2003}, the existence of such massive black-holes at $z\\ga6$ has posed a challenge to models for their formation. This is because a $\\sim10^9$M$_\\odot$ black-hole accreting at the Eddington rate with a radiative efficiency of 10\\% requires almost the full age of the Universe at $z\\sim6$ to grow from a stellar mass seed. Many authors have therefore discussed solutions to this apparent mystery by including a significant build-up of mass through mergers \\citep[][]{Haiman2001}, collapse of low-spin systems \\citep[][]{Eisenstein1995}, and suppression of molecular line cooling via a large Lyman-Werner flux \\citep[][]{Dijkstra2008}. Other authors have also attempted to explain the fast growth of black-holes at high redshifts based on super-massive stars \\citep[][]{Loeb1994,Bromm2003} and more recently, quasi-stars \\citep[][]{Volonteri2010,Begelman2010}. \n\n\\citet[][]{Volonteri2005} discussed the possibility of super-Eddington accretion of black-hole seeds in high redshift galaxies. They pointed out that seed black-holes are located at the centres of isothermal disks where the conditions for quasi-spherical Bondi accretion should be prevalent. 
At high redshift these disk centres are sufficiently dense that the Bondi accretion rate greatly exceeds the Eddington rate. \\citet[][]{Volonteri2005} point out that this super-Eddington accretion provides a route by which a large fraction of the mass $e$-foldings needed to grow a super-massive black-hole by redshift six could be accommodated within a small fraction of the age of the Universe. However, the calculations in \\citet[][]{Volonteri2005} ignored feedback effects like gas heating, which may raise the sound speed, and hence lower the density and therefore the Bondi accretion rate. For example, \\citet[][]{Milosavljev2009} show that photoheating and radiation pressure from photoionization significantly reduce the steady-state accretion rate and potentially render a quasi-radial accretion flow unsteady and inefficient. They find that the time-averaged accretion rate is always a small fraction of the Bondi accretion rate. Thus, the very high accretion rates implied by the Bondi accretion in the centre of a high redshift isothermal disk might never be reached. On the other hand, if the accretion rate is sufficiently high that the emergent photons are trapped within the accretion flow, then these feedback effects cannot operate \\citep[][]{Begelman1979}, and so the accretion rate can reach arbitrarily high levels. \n\nIn this paper we find that at sufficiently high redshift, the central densities of galaxies imply Bondi accretion rates that exceed the rate required to trap radiation and advect it into a black-hole \\citep[][]{Begelman1979}. Thus, we find that there were periods in the growth of black-holes at high redshift where the growth was super-Eddington and feedback mechanisms could not halt the accretion flow. Our goal is not to make a self-consistent model for both the transport of material from large galactic radii and central black-hole accretion. 
Rather, we note that there is a significant literature looking at the problem of super-Eddington accretion assuming large mass-delivery rates to the region of the black-hole, and identify the cosmological conditions that would provide sufficient accretion rates to allow this by trapping radiation. We begin in \\S~\\ref{model} with a description of our simple model, before presenting our results in \\S~\\ref{results}. We finish with a discussion in \\S~\\ref{discussion}, and conclusions in \\S~\\ref{conclusion}. In our numerical examples, we adopt the standard set of cosmological parameters \\citep[][]{Komatsu2011}, with values of $\\Omega_{\\rm b}=0.04$, $\\Omega_{\\rm m}=0.24$ and $\\Omega_\\Lambda=0.76$ for the baryon, matter, and dark energy fractional densities respectively, $h=0.73$ for the dimensionless Hubble constant, and $\\sigma_8=0.82$.\n\n\n\n\\section{Model}\n\n\\label{model}\nThe basis of this paper is a comparison between the Bondi accretion rate, and the accretion rate required to trap photons within the accretion flow. We discuss these in turn.\n\n\\subsection{The Bondi accretion rate}\n\nWe begin with the expression for the Bondi accretion rate \\citep[][]{Bondi1952} onto a central black-hole of mass $M_{\\rm bh}$ \n\\begin{equation}\n\\label{bondi}\n\\dot{M}_{\\rm Bondi} = 4\\pi\\rho_0 r_{\\rm Bondi}^2 v_{\\rm ff} = 4\\pi n_0 \\mu m_{\\rm p} r_{\\rm Bondi}^2 \\sqrt{\\frac{G M_{\\rm bh}}{r_{\\rm Bondi}}},\n\\end{equation}\nwhere $v_{\\rm ff}$ is the free-fall velocity at the Bondi radius\n\\begin{equation}\nr_{\\rm Bondi} = \\frac{G M_{\\rm bh}}{c_{\\rm s}^2},\n\\end{equation}\nand $c_{\\rm s}$ is the sound speed, which for an isothermal gas we assume corresponds to a temperature of $10^4$K.\nTo evaluate $\\dot{M}_{\\rm Bondi}$ we specify the central number density of a self gravitating disk \\citep[][]{Schaye2004} \n\\begin{equation}\n\\label{n0}\nn_0 = \\frac{G M_{\\rm disk}^2}{12 \\pi c_{\\rm s}^2 R_{\\rm d}^4 \\mu m_{\\rm p}}. 
\n\\end{equation}\nHere $R_{\\rm d}$ is the characteristic radius of an exponential disk of surface density profile\n\\begin{equation}\n\\Sigma(r) = \\Sigma_0 \\exp(-r\/R_{\\rm d}),\n\\label{eq:expdensprof}\n\\end{equation}\nwith $\\Sigma_0 = M_{\\rm disk}\/ 2\\pi R_{\\rm d}^2$, and the disk scale length is given by\n\\begin{equation}\nR_{\\rm d} = {\\lambda \\over \\sqrt{2}} r_{\\rm vir},\n\\end{equation}\nwhere $\\lambda$ is the dimensionless spin parameter of the halo.\nIn equation~(\\ref{n0}), $M_{\\rm disk}=m_{\\rm d}M_{\\rm halo}$ is the disk mass, $m_{\\rm p}$ is the proton mass, and $\\mu=1.22$ is the mean molecular weight of primordial neutral gas. At the high redshifts of interest, most of the virialized galactic gas is expected to cool rapidly and assemble into the disk. We therefore assume $m_{\\rm d}=0.17$. The corresponding mass density is $\\rho_{\\rm 0} = \\mu m_{\\rm p} n_0$. The virial radius of a halo with mass $M_{\\rm halo}$ is given by the expression\n\\begin{equation}\n\\label{eps}\n\\nonumber r_{\\rm vir}= 0.784 h^{-1}\\,\\mbox{kpc} \\left(\\frac{M_{\\rm halo}}{10^{8}h^{-1}M_{\\odot}}\\right)^{\\frac{1}{3}}\n[\\zeta(z)]^{-\\frac{1}{3}}\\left(\\frac{1+z}{10}\\right)^{-1},\n\\end{equation}\nwhere $\\zeta(z)$ is close to unity and\ndefined as $\\zeta\\equiv [(\\Omega_{\\rm m}\/\\Omega_{\\rm m}^z)(\\Delta_c\/18\\pi^2)]$,\n$\\Omega_{\\rm m}^z \\equiv [1+(\\Omega_\\Lambda\/\\Omega_{\\rm m})(1+z)^{-3}]^{-1}$,\n$\\Delta_c=18\\pi^2+82d-39d^2$, and $d=\\Omega_{\\rm m}^z-1$ \\citep[see equations~22--25 in][for more details]{Barkana2001}. From equations~(\\ref{n0}) and (\\ref{eps}) we see that the central density $n_0$, and hence the Bondi accretion rate $\\dot{M}_{\\rm Bondi}$, scales as $(1+z)^4$, and that as a result accretion rates are expected to be much larger at high redshift. \n\nDormant central black-holes are ubiquitous in local galaxies \\citep[][]{Magorrian1998}. 
The\nmasses of these super-massive black-holes scale with physical properties of their hosts\n\\citep[e.g.][]{Magorrian1998,Merritt2001,Tremaine2002}. However, at high redshift the relations observed in the local Universe may not be in place. We therefore do not impose a model for the relation between black-hole and halo mass in this paper, and instead explore a range of values. Indeed, our results indicate that feedback, which is thought to drive the black-hole -- halo-mass relation, would not be effective at early times. The grey curves in Figure~\\ref{fig1} show the Bondi accretion rate as a function of redshift for different values of halo and black-hole mass. Here we assume $\\lambda=0.05$ corresponding to the mean spin parameter for dark-matter halos \\citep[][]{Mo1998}.\n\n\n\n \\begin{figure*}\n\\begin{center}\n\\vspace{3mm}\n\\includegraphics[width=17.5cm]{fig1.pdf}\n\\caption{\\label{fig1} The grey curves show the Bondi accretion rate as a function of redshift. The dark lines show the critical accretion rate for which the photon diffusion speed is smaller than the gravitational free-fall speed. In cases where the Bondi accretion rate is larger than the critical accretion rate, photons are trapped and the AGN is obscured. Three cases are considered for the vertical structure of the disk, which is assumed to be set by the sound speed of the gas ({\\em Left Panels}), the turbulent velocity associated with a $Q=1$ disk ({\\em Central Panels}), and the virial velocity ({\\em Right Panels}) respectively. In each case the {\\em Upper} and {\\em Lower} panels correspond to cases with halo masses of $M=10^{9}$M$_\\odot$ and $M=10^{10}$M$_\\odot$. Three curves in each case correspond to black-hole masses of $10^3$M$_\\odot$, $10^4$M$_\\odot$ and $10^5$M$_\\odot$. 
The cross-section was assumed to be $F_{\\rm sig}=100$ times larger than the Thomson cross-section, the minimum radius was $F_{\\rm min}=10^3$ times $r_{\\rm g}$, and the spin parameter $\\lambda=0.05$.}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Photon trapping by the accretion flow}\n\nIf the diffusion velocity of photons at a radius $r$ is smaller than the free-fall velocity of the material at radius $r$, then photons become trapped in the accretion flow \\citep[][]{Begelman1979}. In such cases, the black-hole would be obscured. In this section we estimate the accretion rate required to achieve this photon trapping at radius $r$. \n\nThe free-fall time from radius $r$ to a smaller radius $r_{\\rm min}$ is \n\\begin{equation}\n\\label{tff}\nt_{\\rm ff}\\sim \\frac{(r-r_{\\rm min})}{v_{\\rm ff}},\n\\end{equation}\nwhere $v_{\\rm ff}\\sim \\sqrt{G M_{\\rm bh}\/r}$ within the Bondi radius. This should be compared with the diffusion time of \n\\begin{equation}\n\\label{tdiff}\nt_{\\rm diff}\\sim \\tau \\frac{(r-r_{\\rm min})}{c}.\n\\end{equation}\nHere the optical depth is given by \n\\begin{equation}\n\\tau \\sim \\bar{\\rho} \\frac{F_{\\rm sig}\\sigma_{\\rm T}}{m_{\\rm p}}(r-r_{\\rm min}),\n\\end{equation}\nwhere $\\sigma_{\\rm T}$ is the Thomson cross-section, $F_{\\rm sig}$ is the scattering cross-section of the accreting gas to the emergent radiation, in units of $\\sigma_{\\rm T}$, and $\\bar{\\rho}$ is the line-of-sight averaged density between radii $r_{\\rm min}$ and $r$. 
The density within the Bondi radius scales as $\\rho(r)=\\rho_0 (r\/r_{\\rm Bondi})^{-1.5}$, yielding\n\\begin{eqnarray}\n\\nonumber\n\\bar{\\rho}&=&2\\rho_0 \\frac{r_{\\rm Bondi}}{r-r_{\\rm min}} [(\\frac{r_{\\rm min}}{r_{\\rm Bondi}})^{-0.5} - (\\frac{r}{r_{\\rm Bondi}})^{-0.5}]\\\\\n\\nonumber\n&=&2\\rho \\frac{r}{r-r_{\\rm min}} [(\\frac{r}{r_{\\rm min}})^{0.5} - 1]\\hspace{5mm}\\mbox{for}\\hspace{5mm}r<r_{\\rm Bondi}.\n\\end{eqnarray}\nThe condition $t_{\\rm diff}>t_{\\rm ff}$ at radius $r$ is satisfied for accretion rates $\\dot{M}>\\dot{M}_{\\rm lim}$ where\n\\begin{equation}\n\\dot{M}_{\\rm lim}=\\frac{\\rho}{\\bar{\\rho}}\\left(\\frac{4\\pi m_{\\rm p} c}{F_{\\rm sig}\\sigma_{\\rm T}}\\right)\\frac{r^2}{r-r_{\\rm min}},\n\\end{equation}\nin which we have used the relation $\\dot{M}_{\\rm Bondi} = 4\\pi\\rho r^2 v_{\\rm ff}$. We find that the accretion rate required for photon trapping has a minimum value (i.e. $d\\dot{M}_{\\rm lim}\/dr=0$) at a radius of $r=4r_{\\rm min}$, yielding\n a minimum required accretion rate for photon trapping of\n\\begin{equation}\n\\label{lim}\n\\dot{M}_{\\rm lim}=\\left(\\frac{8\\pi m_{\\rm p} c}{F_{\\rm sig}\\sigma_{\\rm T}}\\right)r_{\\rm min}.\n\\end{equation}\nSince the Eddington accretion rate at an efficiency $\\epsilon$ is $\\dot{M}_{\\rm Edd}=(4\\pi G M_{\\rm bh} m_{\\rm p})\/(c\\sigma_{\\rm T}\\epsilon)$, we find \n\\begin{equation}\n\\label{lim2}\n\\dot{M}_{\\rm lim}=2\\left(\\frac{\\epsilon}{F_{\\rm sig}}\\right) \\left(\\frac{r_{\\rm min}}{r_{\\rm g}}\\right) \\dot{M}_{\\rm Edd},\n\\end{equation} \nwhere $r_{\\rm g}=GM_{\\rm bh}\/c^2$. \n\nThe limiting accretion rate depends on the emergent radiation spectrum. As described below in \\S~\\ref{xray}, we find that the condition for photon trapping is more readily achieved for X-ray photons than for optical\/UV photons. We therefore frame our discussion around trapping of optical\/UV photons. 
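The rates derived above can be cross-checked numerically. The following Python sketch is ours, not part of the analysis in the text: it adopts cgs constants, sets $\zeta(z)=1$, and uses the fiducial parameter values quoted above; the function and variable names are illustrative inventions.

```python
import math

# cgs constants
G = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
M_P = 1.673e-24        # proton mass [g]
C = 2.998e10           # speed of light [cm/s]
SIGMA_T = 6.652e-25    # Thomson cross-section [cm^2]
M_SUN = 1.989e33       # solar mass [g]
KPC = 3.086e21         # kiloparsec [cm]
MU = 1.22              # mean molecular weight of neutral primordial gas

def r_vir(M_halo, z, h=0.73):
    """Virial radius [cm] for a halo of mass M_halo [M_sun], taking zeta(z)=1."""
    return 0.784 / h * (M_halo * h / 1e8) ** (1.0 / 3.0) * 10.0 / (1.0 + z) * KPC

def mdot_bondi(M_bh, M_halo, z, lam=0.05, m_d=0.17, c_s=1.0e6):
    """Bondi rate [g/s] onto a hole of mass M_bh [M_sun] at the centre of a
    self-gravitating disk, combining the expressions for M_Bondi and n_0."""
    R_d = lam / math.sqrt(2.0) * r_vir(M_halo, z)      # disk scale length
    M_disk = m_d * M_halo * M_SUN
    n0 = G * M_disk**2 / (12.0 * math.pi * c_s**2 * R_d**4 * MU * M_P)
    r_b = G * M_bh * M_SUN / c_s**2                    # Bondi radius
    return 4.0 * math.pi * n0 * MU * M_P * r_b**2 * math.sqrt(G * M_bh * M_SUN / r_b)

def mdot_lim(r, r_min, F_sig=100.0):
    """Accretion rate [g/s] needed to trap photons at radius r [cm],
    using rho/rho_bar = (r - r_min) / (2 r [(r/r_min)^0.5 - 1])."""
    rho_over_rhobar = (r - r_min) / (2.0 * r * (math.sqrt(r / r_min) - 1.0))
    return rho_over_rhobar * (4.0 * math.pi * M_P * C / (F_sig * SIGMA_T)) * r**2 / (r - r_min)

# The minimum of mdot_lim sits at r = 4 r_min, where it equals
# 8 pi m_p c r_min / (F_sig sigma_T), as in the text.
r_min = 1.0e13                                         # illustrative value [cm]
grid = [r_min * (1.01 + 0.01 * i) for i in range(2000)]
r_star = min(grid, key=lambda r: mdot_lim(r, r_min))
```

Because $n_0\propto R_{\rm d}^{-4}\propto(1+z)^4$, `mdot_bondi` grows steeply with redshift at fixed masses, while `mdot_lim` is redshift independent; the crossover of the two is what Figure~1 plots.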
In the case of optical\/UV photons the cross-section would be dominated by dust at radii larger than a sublimation radius $r_{\\rm min} = r_{\\rm sub}$, beyond which the opacity of the gas to optical\/UV photons could be larger than the Thomson opacity by as much as two or three orders of magnitude due to the presence of dust \\citep[][]{Laor1993}. The existence of dust can be questioned for high-redshift galaxies \\citep[e.g.][]{Bouwens2010d}; however, high metallicities are inferred from the broad emission lines of all quasars out to $z=7.1$, and so metal enrichment (due to star formation) is known to precede the growth of the black-hole in the galactic nuclei of interest \\citep[][]{Hamann2010}. On the other hand, the diffusion time may be lessened if the gas is clumpy, corresponding to a lower effective opacity. We therefore take a typical value for the cross-section that is $F_{\\rm sig}=100$ times larger than the Thomson cross-section (additional cases are presented below in \\S~\\ref{structure}). Defining $r_{\\rm min}=F_{\\rm min} r_{\\rm g}$, we assume a typical value of $F_{\\rm min}=10^3$ to describe the sublimation radius \\citep[][]{Netzer1993}. \n\nLimiting accretion rates corresponding to these default values are plotted in the left hand panels of Figure~\\ref{fig1} (dark lines) for the case where the vertical structure of the disk is set by the sound speed of the gas. The {\\em Upper} and {\\em Lower} panels correspond to cases of halo masses of $M_{\\rm halo}=10^{9}$M$_\\odot$ and $M_{\\rm halo}=10^{10}$M$_\\odot$. The larger value corresponds to the lower end of the inferred halo masses of the Lyman-break population at $z\\ga6$ \\citep[][]{trenti2010}. \n\n\\subsection{Disk structure}\n \\label{structure}\n\nBefore proceeding we first note that we have so far assumed the thickness of the gaseous disk at the centre of the galaxy to be set by the sound speed of gas at $10^4$K. 
However from equation~(\\ref{ratio}) we see a strong dependence on the value for the effective sound speed. For example, in turbulent disks, the turbulent velocity replaces the isothermal velocity in determining the effective sound speed. Recently, \\citet[][]{Genzel2010} inferred a Toomre parameter of $Q=1$ in ULIRGs, implying a turbulent velocity of $c_{\\rm T}\\sim G \\Sigma\/\\Omega=\\sqrt{G\\Sigma_0 r\/\\pi}$, where the circular velocity satisfies $v^2 = G\\Sigma_0 \\pi r$, so that $\\Omega = v\/r = \\sqrt{G\\Sigma_0\\pi\/r}$. Evaluating at the Bondi radius, we get\n\\begin{equation}\nc_{\\rm T} = \\left(\\frac{G^2\\Sigma_0 M_{\\rm bh}}{\\pi}\\right)^{0.25}.\n\\end{equation}\nThis value of $c_{\\rm T}$ is the maximum value possible for a disk at large radius (as a higher $c_{\\rm T}$ would imply an unphysical disk with $h>r$). Therefore, for a $Q=1$ disk the turbulent velocity should decrease towards small $r$. At sufficiently small radii, $c_{\\rm T}<c_{\\rm s}$, and the thermal sound speed again sets the disk thickness (we find this to be the case at the Bondi radius for the halo masses, black-hole masses and redshifts of interest at $z>4$). \n\nThere may also be sources of energy injection that heat the gas to temperatures that correspond to velocities much larger than $c_{\\rm T}$ (or $c_{\\rm s}$), although in order to remain bound the velocity of the gas must be smaller than the virial velocity of the halo. We therefore show two cases in addition to $c_{\\rm s}\\sim10$km\/s, namely $c_{\\rm T}$ and $c_{\\rm v}\\sim f v_{\\rm vir}$ where $f\\sim0.5$ in order to bracket the range of interest, with corresponding limiting accretion rates plotted in the central and right hand panels of Figure~\\ref{fig1} (dark lines). The results are almost unchanged where $c_{\\rm T}$ is used to set the disk height. This is because we find that for the halo masses, black-hole masses and redshifts of interest, the turbulent velocity is smaller than the sound speed when evaluated at the Bondi radius. 
However, if the disk height is set by the maximum velocity $c_{\\rm v}$, then larger black-holes and smaller halos are needed in order for the Bondi accretion rate to exceed the limiting rate, and at $z\\sim6$ black-hole accretion would not be obscured in this case. In the remainder of this paper we restrict our attention to the case of an isothermal disk.\n\n\n\\section{Photon Trapping}\n\\label{results}\n\nIn cases where the Bondi accretion rate is larger than the critical accretion rate, rest frame optical\/UV photons are trapped and the active galactic nucleus (AGN) can be obscured. Thus the cross-over of the limiting and Bondi accretion rate curves in Figure~\\ref{fig1} represents the redshift beyond which accretion traps radiation for an AGN powered by the particular black-hole masses shown. If the vertical disk structure is set by the sound speed, we find that a $10^3$M$_\\odot$ black-hole in a $10^{10}$M$_\\odot$ halo will result in trapped radiation at $z>4$. In this section we discuss the range of halo and black-hole masses that result in photon trapping.\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=17.5cm]{fig2.pdf}\n\\caption{\\label{fig2} Contours of the redshift at which the Bondi accretion rate becomes larger than the critical accretion rate as a function of black-hole and halo mass. We show contours for cases where the vertical structure of the disk is assumed to be set by the sound speed of the gas. In the {\\em Left}, {\\em Central} and {\\em Right} panels we show contours for parameter combinations $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=0.3$ and $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=3$. For illustration, the grey regions show the portion of parameter space at $z\\sim6$ that does not result in photon trapping accretion flows. 
The example assumes $\\lambda=0.05$ and $m_{\\rm d}=0.17$.}\n\\end{center}\n\\end{figure*}\n\n\\subsection{Scaling relations}\n\nThe conditions for the halo mass, black-hole mass, and redshift that conspire to provide accretion rates that trap the optical\/UV radiation can be obtained by combining equations~(\\ref{bondi}) and (\\ref{lim}). Evaluating we find\n\\begin{eqnarray}\n\\label{ratio}\n\\nonumber\n&&\\hspace{-7mm}\\frac{\\dot{M}_{\\rm Bondi}}{\\dot{M}_{\\rm lim}} \\sim 37.5\\left(\\frac{M_{\\rm bh}}{10^5\\mbox{M}_\\odot}\\right) \\left(\\frac{M_{\\rm halo}}{10^{10}\\mbox{M}_\\odot}\\right)^{2\/3} \\left(\\frac{1+z}{7}\\right)^4 \\left(\\frac{m_{\\rm d}}{0.17}\\right)^2 \\\\\n&&\\hspace{2mm} \\times \\left(\\frac{F_{\\rm min}}{10^3}\\right)^{-1} \\left(\\frac{c_{\\rm s}}{10\\,\\mbox{km\/s}}\\right)^{-5}\\left(\\frac{F_{\\rm sig}}{100}\\right)\\left(\\frac{\\lambda}{0.05}\\right)^{-4}.\n\\end{eqnarray}\nThis expression makes explicit the dependencies that lead to photons being more easily trapped (i.e. $\\dot{M}_{\\rm Bondi}\/\\dot{M}_{\\rm lim}>1$), namely larger black-holes in larger halos, and at higher redshift. We note that the condition for photon trapping has a much steeper dependence on redshift than obscuration via dust absorption, which scales as $(1+z)^2$.\n\nThe critical redshift at which the Bondi accretion rate exceeds the critical rate for photon trapping is shown graphically in Figure~\\ref{fig2}. Here we plot contours of the redshift at which the Bondi accretion rate becomes larger than the critical accretion rate, as a function of halo and black-hole mass. This figure shows the mass combinations that give a cross over from AGN in which radiation can escape to those in which the radiation is trapped. 
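Since the ratio above scales as $(1+z)^4$ with everything else fixed, the crossover redshift plotted in Figure~\ref{fig2} can be obtained by direct inversion. The Python sketch below is illustrative only: the normalisation is taken directly from equation~(\ref{ratio}) and the function names are ours.

```python
def bondi_over_lim(M_bh, M_halo, z, m_d=0.17, F_min=1e3, c_s=10.0,
                   F_sig=100.0, lam=0.05):
    """Dimensionless ratio M_Bondi / M_lim of equation (ratio);
    masses in M_sun, c_s in km/s."""
    return (37.5 * (M_bh / 1e5) * (M_halo / 1e10) ** (2.0 / 3.0)
            * ((1.0 + z) / 7.0) ** 4 * (m_d / 0.17) ** 2
            / (F_min / 1e3) / (c_s / 10.0) ** 5
            * (F_sig / 100.0) / (lam / 0.05) ** 4)

def z_trap(M_bh, M_halo, **kw):
    """Redshift above which the Bondi rate exceeds the trapping rate.
    The ratio varies only as (1+z)^4, so invert analytically."""
    ratio_at_7 = bondi_over_lim(M_bh, M_halo, 6.0, **kw)  # evaluated at 1+z=7
    return 7.0 * ratio_at_7 ** -0.25 - 1.0
```

For the fiducial combination ($M_{\rm bh}=10^5\,$M$_\odot$ in a $10^{10}\,$M$_\odot$ halo) the ratio is already $\sim37$ at $z=6$, so the crossover sits well below $z=6$; smaller black-holes push it to higher redshift.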
Based on equation~(\\ref{ratio}) we plot contours for the parameter combinations $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=0.3$ and $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=3$ in the {\\em Left}, {\\em Central} and {\\em Right} panels of Figure~\\ref{fig2}. Smaller (larger) values of this combination lead to larger (smaller) inflow rates being needed to trap the radiation, and so larger (smaller) black-holes are required at fixed halo mass and redshift (see equation~\\ref{ratio}). \n\nFrom equation~(\\ref{lim2}), the relation between the limiting rate for photon trapping and the Eddington rate can be illustrated via the condition \n\\begin{equation}\n\\label{ratio2}\n\\frac{\\dot{M}_{\\rm lim}}{\\dot{M}_{\\rm Edd}} \\sim 2 \\left(\\frac{\\epsilon}{0.1}\\right)\\left(\\frac{F_{\\rm sig}}{10^2}\\right)^{-1} \\left(\\frac{F_{\\rm min}}{10^3}\\right).\n\\end{equation}\nAn accretion rate that leads to photon trapping is likely to be in excess of the Eddington rate, and hence associated with a very rapid build-up of black-hole mass. \n\n \n\n\\subsection{Trapping of X-rays}\n\\label{xray}\n\nUp until now we have found that the critical rate at which the accretion flow traps optical\/UV photons can be reached in the centres of high redshift galaxies owing to gas with large opacity (i.e. $F_{\\rm sig}\\gg1$) beyond a radius $F_{\\rm min}r_{\\rm g}$. The results of \\citet[][]{Laor1993} indicate that while $F_{\\rm sig}\\gg1$ is expected for UV photons, the X-ray component of the spectrum will see an opacity to the inflowing gas that is set by the Thomson cross-section (i.e. $F_{\\rm sig,X}=1$). Moreover, if the X-ray photon component of the spectrum were not trapped, it could halt the accretion flow via radiation pressure if it exceeded the Eddington rate by itself, thus preventing trapping of the optical\/UV radiation. 
On the other hand, X-rays see this Thomson opacity at radii much smaller than the sublimation radius, with $F_{\\rm min,X}\\sim10$. \n\nTo illustrate the importance of X-rays in this context we modify equation~(\\ref{ratio2}) to describe the relation between the limiting rate for optical\/UV photon trapping and the Eddington rate specific to the X-ray portion of the spectrum ($\\dot{M}_{\\rm Edd,X}$). We define $F_{\\rm X}$ to be the fraction of the luminosity in X-rays, which based on the spectrum in \\citet[][]{Elvis1994} takes a value of $F_{\\rm X}\\sim10\\%$. We therefore find \n\\begin{equation}\n\\label{ratio3}\n\\frac{\\dot{M}_{\\rm lim}}{\\dot{M}_{\\rm Edd,X}} \\sim 0.2 \\left(\\frac{\\epsilon}{0.1}\\right)\\left(\\frac{F_{\\rm sig}}{10^2}\\right)^{-1} \\left(\\frac{F_{\\rm min}}{10^3}\\right)\\left(\\frac{F_{\\rm X}}{0.1}\\right).\n\\end{equation}\nHere $F_{\\rm sig}\\gg1$ and $F_{\\rm min}\\gg1$ correspond to the values for UV photon trapping. Equation~(\\ref{ratio3}) shows that in order for X-rays to exceed the Eddington limit, the parameters corresponding to optical\/UV trapping need to be $(F_{\\rm min}\/10^3)(F_{\\rm sig}\/10^2)^{-1}\\ga5$. At these large accretion rates we find that when optical\/UV photons are trapped, so too are the X-rays unless the minimum radius at which the X-rays encounter opacity ($r_{\\rm min,X}=F_{\\rm min,X}\\,r_{\\rm g}$) is described by a value of $F_{\\rm min,X}>50$. To see this we note that the ratio of the Bondi accretion rate to the rate needed to trap X-rays is \n\\begin{equation}\n\\frac{\\dot{M}_{\\rm Bondi}}{\\dot{M}_{\\rm lim,X}}=\\left(\\frac{F_{\\rm min}}{F_{\\rm min,X}}\\right)\\left(\\frac{1}{F_{\\rm sig}}\\right)\\frac{\\dot{M}_{\\rm Bondi}}{\\dot{M}_{\\rm lim}}.\n\\end{equation}\nSince we expect $F_{\\rm min,X}\\sim6$, corresponding to the innermost stable circular orbit, we expect that an accretion flow which traps the optical\/UV radiation will also trap the X-rays. 
As a result we do not consider the effect of X-rays on the accretion flow for the remainder of this paper.\n\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=17.5cm]{fig3.pdf}\n\\caption{\\label{fig3_new} Regions of black-hole -- halo mass parameter space which result in photon trapping. {\\em Left:} Contours of $\\lambda_{\\rm c}$ as a function of black-hole and halo mass. The grey shading shows the region of parameter space which does not result in photon trapping for the mean disk, because the critical spin parameter is smaller than $\\lambda=0.05$. {\\em Center:} Contours of $\\lambda_{\\rm c,B}$ as a function of black-hole and halo mass. The grey shading shows the region of parameter space which does not result in photon trapping for the mean disk, because the Bondi radius is larger than the scale height at the disk centre. {\\em Right:} Contours of the probability that accretion will result in photon trapping, corresponding to the spin parameter $\\lambda$ lying between the two critical values, i.e. $\\lambda_{\\rm c,B}<\\lambda<\\lambda_{\\rm c}$. To calculate the probability we assume the distribution is Gaussian in the natural logarithm $\\ln{\\lambda}$, with dispersion $\\sigma_\\lambda=0.5$ and a mean at $\\lambda=0.05$. Contours are shown for $P= 1\\%$, $P= 10\\%$ and $P= 50\\%$. The examples assume $z=6$, $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, and $m_{\\rm d}=0.17$. \n}\n\\end{center}\n\\end{figure*}\n\n\n\\subsection{Critical spin parameter}\n\n\nEquation~(\\ref{ratio}) shows that the trapping of photons is very sensitive to the value of the spin parameter $\\lambda$ which governs the density of the galactic disk. In particular, photons may be more easily trapped within disks of low spin parameter. The assembly of dark matter halos leads to a distribution of spin parameters. To illustrate the parameter space we recast equation~(\\ref{ratio}). 
Photon trapping at all wavelengths occurs for spin parameters $\\lambda<\\lambda_{\\rm c}$ where\n\\begin{eqnarray}\n\\label{ratio4}\n\\nonumber\n&&\\hspace{-7mm}\\lambda_{\\rm c} \\sim 0.12 \\left(\\frac{M_{\\rm bh}}{10^5\\mbox{M}_\\odot}\\right)^{1\/4} \\left(\\frac{M_{\\rm halo}}{10^{10}\\mbox{M}_\\odot}\\right)^{1\/6} \\left(\\frac{1+z}{7}\\right) \\left(\\frac{m_{\\rm d}}{0.17}\\right)^{1\/2} \\\\\n&&\\hspace{7mm} \\times \\left(\\frac{F_{\\rm min}}{10^3}\\right)^{-1\/4} \\left(\\frac{c_{\\rm s}}{10\\,\\mbox{km\/s}}\\right)^{-5\/4} \\left(\\frac{F_{\\rm sig}}{10^2}\\right)^{1\/4}.\n\\end{eqnarray}\n\nIn the left panel of Figure~\\ref{fig3_new} we plot contours of $\\lambda_{\\rm c}$ as a function of black-hole and halo mass at $z=6$. The example assumes $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, and $m_{\\rm d}=0.17$. The mean spin parameter $\\lambda=0.05$ is shown in black, and the grey shaded region illustrates the region of parameter space where trapping is not possible for a mean disk. Figure~\\ref{fig3_new} shows that emergent photons from black-holes with $M_{\\rm bh}\\ga10^{3-4}$M$_\\odot$ will be trapped at all wavelengths in the centres of high redshift galaxies (halo masses $\\sim10^{8-10}$M$_\\odot$) that are at the mean of the spin-parameter distribution. \n\n\n\n\\subsection{Breakout of the Bondi radius}\n\\label{breakout}\n\nIn the previous sections we have illustrated that the large densities expected at the centres of high redshift galaxies lead to conditions where accretion rates may be sufficient to trap photons within the accretion flow. These calculations are based on the density at the centre of a pressure supported self-gravitating disk. In this section, we point out that the calculation applies only to black-hole masses for which the Bondi radius is smaller than the scale-height at the disk centre. 
Moreover, we note that the trapping of radiation cannot be realised once the black-hole grows sufficiently that its Bondi radius exceeds the scale height. \n\nWe again utilise the self-gravitating disk model. The scale height at the disk centre is \n\\begin{equation}\n\\label{height}\nz_0 = \\frac{c_{\\rm s}^2}{\\pi G \\Sigma_0}.\n\\end{equation}\nThis expression ignores the gravitational contribution from the black-hole which would serve to reduce the scale height, and is valid when $r_{\\rm Bondi}< z_0$. \nPhoton trapping and obscuration are only possible in this regime. \nWe therefore calculate the ratio of Bondi radius to the central disk scale height as\n\\begin{equation}\n\\frac{r_{\\rm Bondi}}{z_0} = \\frac{G^2}{c_{\\rm s}^4}\\frac{M_{\\rm bh}\\,m_{\\rm d}M_{\\rm halo}}{\\lambda^2 r_{\\rm vir}^2}.\n\\end{equation}\nPutting in characteristic values we obtain\n\\begin{eqnarray}\n\\label{ratio5}\n\\nonumber\\frac{r_{\\rm Bondi}}{z_0} &=& 2.6 \\left(\\frac{M_{\\rm bh}}{10^5\\mbox{M}_\\odot}\\right) \\left(\\frac{M_{\\rm halo}}{10^{10}\\mbox{M}_\\odot}\\right)^{1\/3} \\left(\\frac{1+z}{7}\\right)^2 \\\\\n&&\\hspace{9mm}\\times\\left(\\frac{m_{\\rm d}}{0.17}\\right) \\left(\\frac{c_{\\rm s}}{10\\,\\mbox{km\/s}}\\right)^{-4}\\left(\\frac{\\lambda}{0.05}\\right)^{-2}.\n\\end{eqnarray}\n\nThus, within $M\\sim10^{10}$M$_\\odot$ halos at $z\\sim6$, black-holes in excess of $M_{\\rm bh}\\sim10^5$M$_\\odot$ have Bondi radii which are larger than the disk scale-height and so may not be obscured. We note that the ratio is very sensitive to the value of the sound speed, with a value larger than the fiducial $10\\,$km$\\,$s$^{-1}$ significantly reducing the ratio, allowing larger black-holes to accrete in the photon trapping mode. A small gas fraction also reduces the ratio. 
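Because equation~(\ref{ratio5}) is linear in $M_{\rm bh}$, the "breakout" mass at which $r_{\rm Bondi}=z_0$ follows by direct inversion. A short Python sketch of this (our own function names; the normalisation is read off the scaling form above):

```python
def rbondi_over_z0(M_bh, M_halo, z, m_d=0.17, c_s=10.0, lam=0.05):
    """Ratio of Bondi radius to central disk scale height, following the
    scaling form of equation (ratio5); masses in M_sun, c_s in km/s."""
    return (2.6 * (M_bh / 1e5) * (M_halo / 1e10) ** (1.0 / 3.0)
            * ((1.0 + z) / 7.0) ** 2 * (m_d / 0.17)
            / (c_s / 10.0) ** 4 / (lam / 0.05) ** 2)

def breakout_mass(M_halo, z, **kw):
    """Black-hole mass [M_sun] at which r_Bondi = z_0; above this mass
    the Bondi radius breaks out of the disk and trapping cannot operate."""
    return 1e5 / rbondi_over_z0(1e5, M_halo, z, **kw)
```

Consistent with the discussion above, raising `c_s` increases the breakout mass steeply (as $c_{\rm s}^4$ in this form), so warmer or more turbulent disk centres allow larger black-holes to remain in the photon trapping regime.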
However the breakout of the Bondi radius implies that high mass black-holes such as those observed within the SDSS quasars \\citep[e.g.][]{Fan2001,Fan2003} could not have their emergent radiation trapped.\n\nTo better understand the constraints imposed on black-hole masses where photon trapping can occur, we evaluate the critical value of spin parameter $\\lambda_{\\rm c,B}$ at which $r_{\\rm Bondi}=z_0$\n\\begin{eqnarray}\n\\label{lambda_critB}\n\\nonumber\n&&\\hspace{-7mm}\\lambda_{\\rm c,B} \\sim 0.08 \\left(\\frac{M_{\\rm bh}}{10^5\\mbox{M}_\\odot}\\right)^{\\frac{1}{2}} \\left(\\frac{M_{\\rm halo}}{10^{10}\\mbox{M}_\\odot}\\right)^{\\frac{1}{6}} \\left(\\frac{1+z}{7}\\right) \\left(\\frac{m_{\\rm d}}{0.17}\\right)^{\\frac{1}{2}} \\\\\n&&\\hspace{10mm} \\times \\left(\\frac{c_{\\rm s}}{10\\,\\mbox{km\/s}}\\right)^{-2}.\n\\end{eqnarray}\nIn the central panel of Figure~\\ref{fig3_new} we plot contours of $\\lambda_{\\rm c,B}$ as a function of black-hole and halo mass at $z=6$. The example again assumes $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, and $m_{\\rm d}=0.17$. The mean spin parameter $\\lambda=0.05$ is shown in black, and the grey shading illustrates the region of parameter space where photon trapping is not possible for the mean disk because the Bondi radius is larger than the scale-height. Figure~\\ref{fig3_new} shows that black-holes with $M_{\\rm bh}\\ga10^{4.5}$M$_\\odot$ within high redshift galaxies ($\\sim10^{10}$M$_\\odot$) with the mean spin parameter cannot form a photon trapping accretion flow. \n\n\n\\subsection{When and where could photon trapping occur?}\n\nThe right panel of Figure~\\ref{fig3_new} shows the probabilities that the spin parameter $\\lambda$ lies between the two critical values needed for the conditions of $i)$ photon trapping, and $ii)$ a Bondi radius that is contained within the disk scale height (i.e. $\\lambda_{\\rm c,B}<\\lambda<\\lambda_{\\rm c}$). 
To calculate this probability\n\\begin{equation}\nP = \\frac{1}{\\sqrt{2\\pi}\\sigma_\\lambda}\\int_{\\lambda_{\\rm c,B}}^{\\lambda_{\\rm c}} \\exp{\\left[-\\frac{(\\ln{\\lambda}-\\ln{\\bar{\\lambda}})^2}{2\\sigma_\\lambda^2}\\right]}\\,\\frac{d\\lambda}{\\lambda}\n\\end{equation}\n we assume the distribution of spin parameters is Gaussian in the natural logarithm $\\ln{\\lambda}$, with dispersion $\\sigma_\\lambda=0.5$ and a mean at $\\bar{\\lambda}=0.05$ \\citep[][]{Mo1998}. The probability is $P=0$ if $\\lambda_{\\rm c}<\\lambda_{\\rm c,B}$, indicating a black-hole -- halo mass combination that cannot produce a photon trapping accretion flow. Contours are shown that represent black-hole -- halo mass combinations for which $P= 1\\%$, $P= 10\\%$ and $P= 50\\%$ of disks would have densities that result in photon trapping. As before, the example assumes $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$, and $m_{\\rm d}=0.17$. The mean disk at $z\\sim6$ has a central density that leads to photon trapping for black-hole masses up to $M_{\\rm bh}\\sim10^5$M$_\\odot$ within halos up to $M_{\\rm halo}\\sim10^{9}$M$_\\odot$. However $\\sim10\\%$ of disks are dense enough that photon trapping will occur for black-hole masses up to $M_{\\rm bh}\\sim10^5$M$_\\odot$ within larger halos up to $M_{\\rm halo}\\sim10^{11}$M$_\\odot$. Thus, we would expect photon trapping to be common in galaxies hosting $M_{\\rm bh}\\la10^5$M$_\\odot$ black-holes at $z\\sim6$. 
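Written in terms of $\ln\lambda$, the probability above reduces to a difference of error functions. A minimal Python sketch, taking $\lambda_{\rm c}$ and $\lambda_{\rm c,B}$ from their scaling forms at fiducial sound speed (function names are ours, and the $c_{\rm s}$ dependence is omitted for brevity):

```python
import math

def lambda_c(M_bh, M_halo, z=6.0, m_d=0.17, F_min=1e3, F_sig=100.0):
    """Spin below which photons are trapped, at fiducial c_s."""
    return (0.12 * (M_bh / 1e5) ** 0.25 * (M_halo / 1e10) ** (1.0 / 6.0)
            * ((1.0 + z) / 7.0) * (m_d / 0.17) ** 0.5
            / (F_min / 1e3) ** 0.25 * (F_sig / 100.0) ** 0.25)

def lambda_cB(M_bh, M_halo, z=6.0, m_d=0.17):
    """Spin below which the Bondi radius exceeds the central scale height."""
    return (0.08 * (M_bh / 1e5) ** 0.5 * (M_halo / 1e10) ** (1.0 / 6.0)
            * ((1.0 + z) / 7.0) * (m_d / 0.17) ** 0.5)

def p_trap(M_bh, M_halo, z=6.0, lam_bar=0.05, sigma=0.5):
    """P(lambda_cB < lambda < lambda_c) for a log-normal spin distribution."""
    lo, hi = lambda_cB(M_bh, M_halo, z), lambda_c(M_bh, M_halo, z)
    if hi <= lo:
        return 0.0  # no window: this mass pair cannot trap photons
    # CDF of a Gaussian in ln(lambda) with mean ln(lam_bar), dispersion sigma
    cdf = lambda x: 0.5 * (1.0 + math.erf(
        (math.log(x) - math.log(lam_bar)) / (math.sqrt(2.0) * sigma)))
    return cdf(hi) - cdf(lo)
```

The `hi <= lo` branch reproduces the $P=0$ case quoted above, where $\lambda_{\rm c}<\lambda_{\rm c,B}$ leaves no allowed window of spin parameters.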
Moreover, since from equation~(\\ref{ratio}) we see that this growth is super-Eddington, we find that photon trapping provides a mechanism by which rapid black-hole growth could proceed at high redshift, helping to explain how super-massive black-holes grew less than a billion years after the Big-Bang.\n\nIn Figure~\\ref{fig4_new} we explore how the conclusions regarding the black-hole and halo mass-ranges that produce photon trapping accretion flows are affected by redshift and the parameter combination $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}$. We choose three values of redshift, $z=1$, $z=6$ and $z=10$. For $z=6$ and $z=10$ we choose $m_{\\rm d}=0.17$. However, at later times we expect that gas is less plentiful and so assume $m_{\\rm d}=0.025$ at $z\\sim1$, corresponding to a comparison with low redshift disks \\citep[][]{Mo1998}. These cases are shown in the upper, central and lower rows, respectively. In each case we show examples with $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=0.3$, $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$ and $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=3$ ({\\em Left}, {\\em Central} and {\\em Right} panels, respectively). In all panels we show contours for the probabilities that the spin parameter $\\lambda$ lies between the two critical values needed for the conditions of $i)$ photon trapping, and $ii)$ a Bondi radius that is contained within the disk scale height (i.e. $\\lambda_{\\rm c,B}<\\lambda<\\lambda_{\\rm c}$). 
As before, contours are shown that represent black-hole -- halo mass combinations for which $P= 1\\%$, $P= 10\\%$ and $P= 50\\%$ of disks would have central densities that lead to photon trapping.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=17.5cm]{fig4.pdf}\n\\caption{\\label{fig4_new} Contours of the probability in black-hole -- halo mass parameter space that accretion will result in photon trapping, corresponding to the spin parameter $\\lambda$ lying between the two critical values, i.e. $\\lambda_{\\rm c,B}<\\lambda<\\lambda_{\\rm c}$. Contours are shown for $P= 1\\%$, $P= 10\\%$ and $P= 50\\%$. Examples are shown for three values of redshift, $z=1$, $z=6$ and $z=10$ ({\\em Top} to {\\em Bottom}). For $z=6$ and $z=10$ we choose $m_{\\rm d}=0.17$. We assume $m_{\\rm d}=0.025$ at $z\\sim1$. In each case we show examples with $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=0.3$, $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=1$ and $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}=3$ (from {\\em Left} to {\\em Right}). The thick grey curve shows the predicted black-hole mass -- halo mass relation (equation~\\ref{relation}).}\n\\end{center}\n\\end{figure*}\n\n\nWe find that larger values of $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}$ lead to more massive black-holes in smaller halos having their emergent radiation trapped. At redshift $z\\sim6-10$ we find that black-holes with masses up to $M_{\\rm bh}\\sim10^5$M$_\\odot$ have their radiation trapped in 10\\%-50\\% of cases. At lower redshift, black-holes with masses of $M_{\\rm bh}\\sim10^{5.5}$M$_\\odot$ could have radiation trapped in up to 10\\% of cases, but only if the halo mass and $(F_{\\rm sig}\/10^2)(F_{\\rm min}\/10^3)^{-1}$ are large. 
Thus, photon trapping is likely to be a phenomenon dominated by low mass ($M_{\\rm bh}\\la10^5$M$_\\odot$) black-holes in high redshift ($z\\ga6$) galaxies ($M_{\\rm halo}\\sim10^{10}$M$_\\odot$).\n\n\\subsection{Comparison with $M_{\\rm bh}-M_{\\rm halo}$ models}\n\nAs noted in the introduction, the relations observed between black-hole mass and galaxy properties in the local Universe may not be in place at high redshift. For this reason we have not imposed a model for the relation between black-hole and halo mass in this paper, and instead have explored a range of values. However it is interesting to compare the range of black-hole masses found to be accreting in the photon trapping mode with expectations of the simple models relating black-hole and halo masses that have been successful in describing some of the properties of high redshift quasars. \n\nMotivated by local observations \\citep[][]{Ferrarese2002}, we consider a model in which the central black-hole mass is correlated with the halo\ncircular velocity. This scenario is supported by the results of \\citet[][]{Shields2003} who studied quasars out to $z\\sim3$ and demonstrated that the\nrelation between black-hole masses and the stellar velocity dispersion does not\nevolve with redshift. This is expected if the mass of the black-hole is\ndetermined by the depth of the gravitational potential well in which it\nresides, as would be the case if growth is regulated by feedback from\nquasar outflows \\cite[e.g.][]{Silk1998,Wyithe2003b}. 
Expressing\nthe halo virial velocity, $v_c$, in terms of the halo mass, $M_{\\rm halo}$, and\nredshift, $z$, the redshift dependent relation between the super-massive black-hole and halo\nmasses may be written as\n\\begin{equation}\n\\label{relation}\nM_{\\rm bh} = \\epsilon_{\\rm bh} M_{\\rm halo} \\left( \\frac{M_{\\rm halo}}{10^{10}\\mbox{M}_\\odot}\\right)^{2\/3}[\\zeta(z)]^{5\/6} \\left(\\frac{1+z}{7}\\right)^{5\/2}.\n\\end{equation}\nThe normalising constant in this relation has an\nobserved\\footnote{ We have used the normalization derived by\n\\citet[][]{Ferrarese2002} under the simplifying assumption\nthat the virial velocity of the halo\nrepresents its circular velocity.} value of $\\epsilon_{\\rm bh}\\approx 10^{-4.3}$ based on calibration of equation~(\\ref{relation}) at $z=0$ \\citep[][]{Ferrarese2002}, where we adopt the underlying\nassumption that the halo mass profile resembles a Singular Isothermal Sphere. In \\citet[][]{Wyithe2005} this model was shown to be consistent with the clustering and luminosity function data of the 2dF quasar redshift survey \\citep[][]{Croom2002}. \n\nInterestingly, the predicted value for super-massive black-hole masses in galaxy mass halos (as shown by the thick grey curve in Figure~\\ref{fig4_new}) is comparable to the range at which we would expect photon trapping in galaxies with $z\\ga6$. Thus, prior to obtaining masses where the self regulating mechanisms thought to be responsible for the relations between black-hole mass and halo properties take effect, we would expect black-holes in high redshift galaxies to reach the level where super-Eddington accretion would become a natural part of their evolution. However, at low redshift, Figure~\\ref{fig4_new} shows that photon trapping requires black-hole masses that are greater than the observed black-hole -- halo mass relation. Thus, we do not expect photon trapping within low redshift galaxies. 
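As a rough numerical illustration of this relation (our own sketch, not from the paper: we set $\zeta(z)\approx1$ for simplicity and adopt the normalization $\epsilon_{\rm bh}=10^{-4.3}$ quoted above):

```python
def mbh_from_halo(m_halo, z, eps_bh=10.0**-4.3, zeta=1.0):
    # Predicted black-hole mass (solar masses) for a halo of mass m_halo
    # (solar masses) at redshift z, following the relation in the text.
    # zeta(z) ~ 1 is an assumed simplification here.
    return (eps_bh * m_halo * (m_halo / 1e10) ** (2.0 / 3.0)
            * zeta ** (5.0 / 6.0) * ((1.0 + z) / 7.0) ** 2.5)
```

Under these assumptions a $10^{10}\,$M$_\odot$ halo at $z=6$ hosts a black-hole of $10^{5.7}\approx5\times10^5\,$M$_\odot$, comparable to the photon-trapping mass range discussed above.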
\n\n\n\\section{Discussion}\n\\label{discussion}\n\nThe findings in this paper point to some potentially important implications for the growth of high redshift super-massive black-holes. Recent simulations by \\citet[][]{Li2011} show that self gravity overcomes radiative feedback and that accretion onto intermediate mass black-holes reaches the Eddington rate. However, at high enough accretion rates, a spherical inflow is not subject to the Eddington limit (though the emergent radiation is), and the Bondi accretion rate can take arbitrarily large values in high density galactic centres \\citep[][]{Begelman1979}. The conditions for spherical accretion may only rarely be realised at the centres of high redshift galaxies. On the other hand, there is a class of slim disk models \\citep[e.g.][]{Abramowicz1988,Chen1995,Ohsuga2005,Watarai2006,Ohsuga2011}, for which super-Eddington accretion flows are possible. These models include a large viscosity, and so rapid transport of material through the disk within a small multiple of the free fall time \\citep[e.g.][]{Watarai2006}, and are optically thick and advection dominated \\citep[][]{Chen1995}. Photon trapping plays an important role within these disks, even those with complex flows, leading to very inefficient energy conversion and a luminosity that is independent of accretion rate \\citep[e.g.][]{Ohsuga2005}. Simulations indicate that at sufficient densities, the mass accretion rate can reach hundreds of Eddington, although the photon luminosity does not \\citep[e.g.][]{Ohsuga2011}. These super-Eddington accretion disks will generate outflows \\citep[which cannot be generated in spherical accretion,][]{Begelman1979} that are likely to be channelled along the poles, and so will not suppress the accretion along the equatorial plane. The calculations presented in this paper provide the boundary condition for super-Eddington flows. 
These rapid accretion events may provide the seeds for super-massive black-hole growth.\n\n\nA plausible scenario therefore includes two episodes of black-hole growth. Initially the accretion may have been spherical or via a slim disk with high viscosity, so that the Eddington rate was greatly exceeded by the accretion rate. We find that radiation would have been trapped and advected into the central black-hole by the very large Bondi accretion rates at high redshift. This phase of growth would be obscured at both optical\/UV and X-ray wavelengths. Once the Bondi radius became larger than the scale-height of the galaxy disk, the accretion rate dropped, and higher\nangular momentum gas would have settled into a thin disk accretion mode, in which the\naccretion time was much longer than the free-fall time, allowing the radiation to escape. For a pressure supported disk, we have found that the Bondi radius is expected to exceed the scale-height of the disk for black-holes in excess of $\\sim10^{5}$M$_\\odot$.\nThe luminous quasars discovered at $z\\ga6$ with black-hole masses in excess of $10^8$M$_\\odot$ are therefore thought to be shining in this later mode, un-obscured in the optical and with accretion rates close to, but smaller than, Eddington. However, a portion of their prior growth, when the black-hole mass was $\\sim10^{4-5}$M$_\\odot$, would have been in the photon trapping mode.\n\nSuper-Eddington accretion rates arising from the large densities in the cores of high redshift galaxies were previously considered by \\citet[][]{Volonteri2005}. As noted in the introduction, the calculations in \\citet[][]{Volonteri2005} neglected feedback effects like gas heating, which may lower the Bondi accretion rate \\citep[e.g.][]{Milosavljev2009}. In this paper we note that if the accretion rate is sufficiently high that the emergent photons are trapped within the accretion flow, then feedback effects cannot operate \\citep[][]{Begelman1979}. 
Our model also makes two different assumptions which modify our conclusions relative to \\citet[][]{Volonteri2005}. Firstly, \\citet[][]{Volonteri2005} assume that once the gas is enriched, metal line cooling allows the gas to cool to temperatures much lower than $10^4$K, so that it fragments to form stars and the super-Eddington accretion episode is ended. This assumption was necessary in order that super-Eddington accretion not lead to black-hole densities in excess of those observed. However, we expect that fragmentation and associated star-formation will reheat the gas via radiative and supernova feedback so that the pressure support at the centre of the disk is maintained and accretion can continue. This scenario is supported by the observations of \\citet[][]{Genzel2010} which imply significant turbulent velocities in high redshift galaxies. Rather than make an arbitrary assumption that growth stops when the size of the accretion disk grows by a factor of five, we instead assume that the super-Eddington photon trapping accretion mode would be regulated by the time when the Bondi radius exceeds the scale height of the pressure supported disk. We find that this condition prevents super-Eddington accretion onto high mass black-holes. \n\n\n\n\\subsection{Obscured accretion}\n\nOur results have some relevance to the recent discussion surrounding obscured accretion in high redshift galaxies. While luminous optical quasars in the most massive halos ($\\ga10^{12}$M$_\\odot$) dominate the observations of high redshift super-massive black-holes, \\citet[][]{Treister2011} recently presented evidence that most of the black-hole accretion at $z\\ga6$ is actually optically obscured, and in galaxies below halo masses of $\\sim10^{10-11}$M$_\\odot$. Since high redshift Lyman-break galaxies are thought not to be dusty \\cite[e.g.][]{Bouwens2010d}, our results might have provided a mechanism by which the AGN can be obscured even in the absence of a large dusty component. 
However, our results do not support a photon trapping explanation for this result. Firstly, X-ray photons would be more strongly trapped by the accretion flow than the UV photons, indicating that we would not expect to observe X-rays without optical detection. In addition, we also find that photon trapping is only expected in a fraction of galaxies rather than in all galaxies as implied by \\citet[][]{Treister2011}. \n\nWhile other mechanisms may lead to buried accretion, the findings of \\citet[][]{Treister2011} have also been disputed by a number of authors \\citep[][]{Fiore2011,Cowie2011,Willott2011}, who do not observe the same level of X-ray emission. These authors find limits on the average X-ray luminosity in the rest frame 0.5-2\\,keV band of $L_{0.5-2}<4\\times10^{41}\\,$erg\/s for $z\\sim6.5$ dropout galaxies. This luminosity can be related to black-hole mass as\n\\begin{equation}\nL_{0.5-2}\\sim 3\\times10^{41}\\left(\\frac{M_{\\rm bh}}{10^5\\mbox{M}_\\odot}\\right)\\eta\\,\\mbox{erg}\\,\\mbox{s}^{-1},\n\\end{equation}\nwhere $\\eta$ is the fraction of the Eddington accretion rate (so that $\\eta=1$ corresponds to accretion at the Eddington limit), and we have assumed the spectral energy distribution of \\citet[][]{Elvis1994}. Observed luminosities must have $\\eta<1$ even if the accretion rate is super-Eddington \\citep[][]{Begelman1979}. The observed luminosity limit therefore corresponds to observed black-hole masses of $M_{\\rm bh}\\la1.3\\times 10^5$M$_\\odot$. We do not find that black-holes with masses $M_{\\rm bh}\\ga10^5$M$_\\odot$ produce photon trapping accretion flows, and so photon trapping does not explain the lack of observed X-ray sources among the $z\\sim6.5$ dropouts \\citep[][]{Fiore2011,Cowie2011,Willott2011}. Since the stacked observations in these studies are based on only $\\sim10^2$ galaxies, this lack of detection could follow from the low duty cycle of AGN \\citep[which is likely a few percent,][]{Wyithe2003b}. 
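The mass limit quoted above follows by inverting this luminosity relation. A minimal sketch (our own; the function names are illustrative), using the scaling $L_{0.5-2}\sim3\times10^{41}(M_{\rm bh}/10^5{\rm M}_\odot)\,\eta$ erg/s with $\eta\leq1$:

```python
def xray_luminosity(m_bh, eta=1.0):
    # Rest-frame 0.5-2 keV luminosity (erg/s) of a black-hole of mass
    # m_bh (solar masses) radiating at a fraction eta of Eddington.
    return 3e41 * (m_bh / 1e5) * eta

def mass_limit(l_limit, eta=1.0):
    # Largest black-hole mass consistent with a luminosity upper limit.
    return 1e5 * l_limit / (3e41 * eta)
```

The stacking limit $L_{0.5-2}<4\times10^{41}\,$erg/s then gives `mass_limit(4e41)` $\approx1.3\times10^5\,$M$_\odot$, as stated in the text.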
However, our results do suggest that 90\\%-100\\% of disks with black-holes below this mass would be in the photon trapping mode. Thus, we would expect deeper and wider field X-ray observations using future X-ray observatories to display a cut-off in the X-ray luminosity function at about $L_{0.5-2}\\sim 3\\times10^{41}$erg$\\,$s$^{-1}$. Conversely, the discovery of X-ray AGN with luminosities an order of magnitude lower than current limits would rule out photon trapping accretion as a mechanism for rapid growth of early black-holes. Finally, we note that since super-Eddington accretion at high redshift is obscured at both optical and X-ray wavelengths, rapid growth of seed black-holes could not provide a significant source of X-rays for reionization of the IGM \\citep[][]{Volonteri2005}.\n\n\n\\subsection{Seed black-hole growth}\n\nThe photon trapping mode is likely to be important for the rapid growth of seed super-massive black-holes with masses of $\\sim10^{4-5}$M$_\\odot$. Because the Bondi accretion rates in these high redshift galaxies could be orders of magnitude larger than the Eddington rate, the photon trapping mechanism helps alleviate the difficulty of growing super-massive black-holes of more than a billion solar masses \\citep[corresponding to the most distant known quasars,][]{Mortlock2011} within the first billion years of the Universe's age. This point was made in detail in \\citet[][]{Volonteri2005}. To illustrate, we note that accretion at the Eddington rate (with $\\epsilon=0.1$) leads to an $e$-folding time of $t=4\\times10^7$ years. Assuming that a black-hole accretes with a duty cycle of unity, the number of $e$-folding times available by $z\\sim7$ is therefore $\\sim20$. This should be compared with the 20 $e$-folds needed to grow a $1$M$_\\odot$ black-hole seed up to a mass of $10^9$M$_\\odot$. 
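The growth-time arithmetic above can be checked directly. This is our own sketch; the $e$-folding time $4\times10^7\,$yr (for $\epsilon=0.1$) is from the text, while the age of the Universe at $z\sim7$, taken here as $\approx8\times10^8\,$yr, is an assumed round value.

```python
import math

T_EFOLD_YR = 4e7      # Eddington e-folding time for epsilon = 0.1 (from text)
AGE_AT_Z7_YR = 8e8    # approximate age of the Universe at z ~ 7 (assumed)

# e-folds available assuming a duty cycle of unity
available = AGE_AT_Z7_YR / T_EFOLD_YR   # = 20.0

# e-folds needed to grow a 1 Msun seed to 1e9 Msun
needed = math.log(1e9)                  # ~ 20.7
```

The two numbers nearly coincide, which is the "only just enough time" statement that follows.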
Thus there is only just enough time during the age of the Universe at $z\\ga6$ for a stellar black-hole seed to grow to a super-massive black-hole. We suggest that the period of obscured growth in some galaxies would provide a path toward growing these super-massive black-holes. \n\n\n\n\n\\section{Conclusion}\n\\label{conclusion}\n\nIn this paper we have determined the cosmological regime in which photons produced through accretion onto a central black-hole are trapped by infalling material, so that $i)$ radiation feedback on the infall of gas outside the dust sublimation radius is suppressed, allowing accretion rates far in excess of the Eddington limit, $ii)$ AGN appear obscured, and $iii)$ the black-hole growth time is short. Specifically, we find that a large fraction of galaxies at $z\\ga6$ with masses up to those of the observed Lyman-break population (halo masses of $\\sim10^{9-11}$M$_\\odot$) exhibit Bondi-accretion rates onto $M_{\\rm bh}\\sim10^{3-5}$M$_\\odot$ black-holes that are sufficiently high to trap the resulting rest frame optical\/UV\/X-ray radiation. The obscuration due to photon trapping is found only to occur for black-holes with masses up to $\\sim10^5$M$_\\odot$ because larger black-holes have Bondi radii that exceed the scale height of the disk from which they accrete gas, so that a photon trapping mechanism cannot operate. As a result, we find a natural distinction between obscured, photon trapping accretion onto $\\sim10^5$M$_\\odot$ black-holes in galaxies of halo mass $\\la10^{10}$M$_\\odot$, and the luminous accretion seen in the brightest quasars with black-hole masses of $\\sim10^{8-9}$M$_\\odot$ within halos of mass $\\sim10^{11-12}$M$_\\odot$. At lower redshift, photon trapping requires black-holes that are larger than expected from the black-hole -- halo mass relation, and so is not expected to be observed. 
Our results indicate that super-Eddington accretion of mass to form seed black-holes of $\\sim10^5$M$_\\odot$ provided a mechanism by which super-massive black-holes were able to form prior to $z\\sim6$. \n\n\\vspace{5mm}\n\n{\\bf Acknowledgments} JSBW acknowledges\nthe support of the Australian Research Council. AL was supported in\npart by NSF grant AST-0907890 and NASA grants NNX08AL43G and\nNNA09DB30A.\n\n\n\n\\newcommand{\\noopsort}[1]{}\n\n\\bibliographystyle{mn2e}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn \\cite{k1937}, Kolmogorov posed the problem of characterizing the subset of the complex plane, denoted by $\\Theta_n$, that consists of the individual eigenvalues of all $n$-by-$n$ stochastic matrices. \n\nOne can easily verify that for each $n \\geq 2$, the region $\\Theta_n$ is closed, inscribed in the unit-disc, star-convex (with star-centers at zero and one), and symmetric with respect to the real-axis. Furthermore, it is clear that $\\Theta_n \\subseteq \\Theta_{n+1}$, $\\forall n \\in \\bb{N}$. In view of these properties, $\\partial \\Theta_n = \\{ \\lambda \\in \\Theta_n : \\alpha \\lambda \\not \\in \\Theta_n,\\forall \\alpha > 1\\}$, and each region is determined by its boundary. \n\nDmitriev and Dynkin \\cite{dd1946} obtained a partial solution to Kolmogorov's problem, and Karpelevi{\\v{c}}~\\cite[Theorem B]{k1951}, expanding on the work of \\cite{dd1946}, resolved it by showing that the boundary of $\\Theta_n$ consists of curvilinear arcs (herein, \\emph{Karpelevi{\\v{c}}~arcs} or K-arcs), whose points satisfy a polynomial equation that is determined by the endpoints of the arc (which are consecutive roots of unity). {\\DJ}okovi{\\'c} \\cite[Theorem 4.5]{d1990} and Ito \\cite[Theorem 2]{i1997} each provide a simplification of this result. 
However, noticeably absent in the Karpelevi{\\v{c}}~Theorem (and the above-mentioned works) are \\emph{realizing-matrices} (i.e., matrices whose spectra contain a given point) for points on these arcs. \n\nThis problem has been addressed previously in the literature. Dmitriev and Dynkin \\cite[Basic Theorem]{dd1946} give a schematic description of such matrices for points on the boundary of $\\Theta_n \\backslash \\Theta_{n-1}$ and Swift \\cite[\\S 2.2.2]{s1972} provides such matrices for $3\\leq n \\leq 5$.\n\nOur main result is to provide, for every $n$ and for each arc, a single parametric matrix that realizes the entire K-arc as the parameter runs from 0 to 1. Aside from the theoretical importance -- after all, the original problem posed by Kolmogorov is intrinsically matricial -- possession of such matrices is instrumental in the study of nonreal \\emph{Perron similarities} in the longstanding \\emph{nonnegative inverse eigenvalue problem} \\cite{jp2017}, and provides a framework for resolving Conjecture 1 \\cite{lpk2015} vis-\\`{a}-vis the results in \\cite{j1981}. \n\nIn addition, we provide some partial results on the differentiability of the Karpelevi{\\v{c}}~arcs. We demonstrate that some powers of certain realizing-matrices realize other arcs. Finally, we pose several problems that appeal to a wide variety of mathematical interests.\n\n\\section{Notation \\& Background}\n\nThe algebra of complex (real) $n$-by-$n$ matrices is denoted by $\\mat{n}{\\bb{C}}$ ($\\mat{n}{\\bb{R}}$). A real matrix is called \\emph{nonnegative} (\\emph{positive}) if it is an entrywise nonnegative (positive) matrix. If $A$ is nonnegative (positive), then we write $A \\geq 0$ ($A > 0$). 
\n\nAn $n$-by-$n$ nonnegative matrix $A$ is called \\emph{(row) stochastic} if every row sums to unity; \\emph{column stochastic} if every column sums to unity; and \\emph{doubly stochastic} if it is row stochastic and column stochastic.\n\nGiven $n \\in \\bb{N}$, the set ${F}_n := \\{ p\/q : 0\\leq p < q \\leq n,~\\gcd(p,q)=1 \\}$ is called the \\emph{set of Farey fractions of order n}. If $p\/q$, $r\/s$ are elements of ${F}_n$ such that $p\/q < r\/s$, then $(p\/q,r\/s)$ is called a \\emph{Farey pair (of order $n$)} if $x \\not\\in {F}_n$ whenever $p\/q < x < r\/s$. The Farey fractions $p\/q$ and $r\/s$ are called \\emph{Farey neighbors} if $(p\/q,r\/s)$ or $(r\/s, p\/q)$ is a Farey pair.\n\nThe following is the celebrated Karpelevi{\\v{c}}~Theorem in a form due to Ito \\cite{i1997}. \n\n\\begin{thm}[Karpelevi{\\v{c}}] \n\\label{thm:karpito}\nThe region $\\Theta_n$ is symmetric with respect to the real axis, is included in the unit-disc $\\{ z \\in \\bb{C} : |z| \\leq 1\\}$, and intersects the unit-circle $\\{ z \\in \\bb{C} : |z| = 1\\}$ at the points $\\{ e^{2\\pi\\ii p\/q} : p\/q \\in {F}_n \\}$. The boundary of $\\Theta_n$ consists of these points and of curvilinear arcs connecting them in circular order. \n\nLet the endpoints of an arc be $e^{2\\pi\\ii p\/q}$ and $e^{2\\pi\\ii r\/s}$ ($q < s$). 
Each of these arcs is given by the following parametric equation: \n\\begin{equation}\nt^{s} \\left( t^{q} - \\beta \\right)^{\\floor{n\/q}} = \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q}},~\\alpha \\in [0,1], ~\\beta:=1-\\alpha \\label{ito_eq}.\n\\end{equation} \n\\end{thm}\n\n\\hyp{Figure}{fig:karpregions} contains the regions $\\Theta_3$, $\\Theta_4$, and $\\Theta_5$.\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}{.32\\textwidth}\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\naxis lines=none,\naxis equal image,\nscale=0.32,\nxlabel={$\\Re{(\\lambda)}$},\nylabel={$\\Im{(\\lambda)}$},\nylabel style={rotate=-90,, anchor=north},\nxmin=-1,\nxmax=1,\nymin=-1.0,\nymax=1.0,\nxtick={-1,1},\nytick={-1,1}\n]\n\n\\addplot[thick,black] coordinates{\n(1,0) \n(-.5,.866025403784439) \n(-.5,-.866025403784439)\n(1,0)}; \n\\addplot[thick,black] coordinates{(-1,0) (-.5,0)}; \t\t\t\n\n\\draw[color=gray] (axis cs:0,0) circle (1);\n\\end{axis}\n\\end{tikzpicture}\n\\caption{$\\Theta_3$}\n\\label{fig:karpregion3}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}{.32\\textwidth}\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\naxis lines=none,\naxis equal image,\nscale=0.32,\nxlabel={$\\Re{(\\lambda)}$},\nylabel={$\\Im{(\\lambda)}$},\nylabel style={rotate=-90, anchor=north},\nxmin=-1,\nxmax=1,\nymin=-1.0,\nymax=1.0,\nxtick={-1,1},\nytick={-1,1}\n]\n\n\\addplot[thick,black] coordinates{(1,0) (0,1)}; \n\\addplot[thick,black] coordinates{(1,0) (0,-1)}; \n\\addplot[thick,black] table {aarc1a.dat}; \n\\addplot[thick,black] table {aarc1b.dat}; \n\\addplot[thick,black] table {aarc2a.dat}; \n\\addplot[thick,black] table {aarc2b.dat}; \n\n\\draw[color=gray] (axis cs:0,0) circle (1);\n\\end{axis}\n\\end{tikzpicture}\n\\caption{$\\Theta_4$}\n\\label{fig:karpregion4}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}{.32\\textwidth}\\centering\n\\begin{tikzpicture}\n\\begin{axis}[\naxis lines=none,\naxis equal image,\nscale=0.32,\nxlabel={$\\Re{(\\lambda)}$},\nylabel={$\\Im{(\\lambda)}$},\nylabel 
style={rotate=-90, anchor=north},\nxmin=-1,\nxmax=1,\nymin=-1.0,\nymax=1.0,\nxtick={-1,1},\nytick={-1,1}\n]\n\n\\addplot[thick,black] coordinates{(1,0) (.309016994374947,.951056516295154)}; \n\\addplot[thick,black] coordinates{(1,0) (.309016994374947,-.951056516295154)}; \n\\addplot[thick,black] table {fivearcs1.dat}; \t\t\t\t\t\n\\addplot[thick,black] table {fivearcs2.dat}; \n\\addplot[thick,black] table {fivearcs3.dat}; \n\\addplot[thick,black] table {fivearcs4.dat}; \n\\addplot[thick,black] table {fivearcs5.dat}; \n\\addplot[thick,black] table {fivearcs6.dat}; \n\\addplot[thick,black] table {aarc1a.dat}; \n\\addplot[thick,black] table {aarc1b.dat}; \n\n\\draw[color=gray] (axis cs:0,0) circle (1);\n\\end{axis}\n\\end{tikzpicture}\n\\caption{$\\Theta_5$}\n\\label{fig:karpregion5}\n\\end{subfigure}\n\\caption{$\\Theta_n$, $3 \\leq n \\leq 5$}\n\\label{fig:karpregions}\n\\end{figure}\n\nFor $n \\in \\bb{N}$, we call the collection of such arcs \\emph{the K-arcs (of order $n$)} and we denote by $K(p\/q,r\/s) = K_n(p\/q,r\/s)$ the arc connecting $e^{2 \\pi \\ii p \/q}$ and $e^{2 \\pi \\ii r \/s}$, when $p\/q$ and $r\/s$ are Farey neighbors. Notice that the number of K-arcs equals $| F_n| = \\sum_{k=1}^n \\phi(k)$, where $\\phi$ denotes \\emph{Euler's totient function} (the $k=1$ term accounts for the fraction $0\/1$). \n\nFor Farey neighbors $p\/q$ and $r\/s$, $q < s$, we call the collection of equations \\eqref{ito_eq} the \\emph{Ito equations (with respect to $\\{p\/q,r\/s\\}$)} and the collection of polynomials \n\\[ f_\\alpha (t) := t^{s} \\left( t^{q} - \\beta \\right)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q}},~\\alpha \\in [0,1]\\] \nthe \\emph{Ito polynomials (with respect to $\\{p\/q,r\/s\\}$)}. \n\nA \\emph{directed graph} (or simply \\emph{digraph}) $\\Gamma = (V,E)$ consists of a finite, nonempty set $V$ of \\emph{vertices}, together with a set $E \\subseteq V \\times V$ of \\emph{arcs}. 
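The Farey machinery above is easy to make concrete. The following sketch (our own; not part of the paper) generates $F_n$ in increasing order, verifies the classical neighbor identity $qr - ps = 1$ for consecutive fractions $p/q < r/s$, and counts the elements via Euler's totient, with $\phi(1)=1$ accounting for $0/1$.

```python
from math import gcd

def farey(n):
    # Farey fractions of order n in [0, 1): reduced fractions p/q with
    # 0 <= p < q <= n, returned as (p, q) pairs in increasing order.
    fracs = {(p, q) for q in range(1, n + 1)
             for p in range(q) if gcd(p, q) == 1}
    return sorted(fracs, key=lambda f: f[0] / f[1])

def phi(k):
    # Euler's totient function (naive implementation).
    return sum(1 for j in range(1, k + 1) if gcd(j, k) == 1)
```

For example, `farey(5)` lists the 10 endpoint fractions of the K-arcs of order 5, beginning with `(0, 1)` and ending with `(4, 5)`.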
For $A \\in \\mat{n}{\\bb{C}}$, the \\emph{directed graph} (or simply \\emph{digraph}) of $A$, denoted by $\\Gamma = \\dg{A}$, has vertex set $V = \\{ 1, \\dots, n \\}$ and arc set $E = \\{ (i, j) \\in V \\times V : a_{ij} \\neq 0\\}$. \n\nA digraph $\\Gamma$ is called \\emph{strongly connected} if for any two distinct vertices $i$ and $j$ of $\\Gamma$, there is a path in $\\Gamma$ from $i$ to $j$. Following \\cite{br1991}, we consider every vertex of $V$ as strongly connected to itself. A strong digraph is \\emph{primitive} if the greatest common divisor of all its cycle-lengths is one, otherwise it is \\emph{imprimitive}. \n\nFor $n \\geq 2$, an $n$-by-$n$ matrix $A$ is called \\emph{reducible} if there exists a permutation matrix $P$ such that\n\\begin{align*}\nP^\\top A P =\n\\begin{bmatrix}\nA_{11} & A_{12} \\\\\n0 & A_{22}\n\\end{bmatrix},\n\\end{align*}\nwhere $A_{11}$ and $A_{22}$ are nonempty square matrices. If $A$ is not reducible, then A is called \\emph{irreducible}. It is well-known that a matrix $A$ is irreducible if and only if $\\dg{A}$ is strongly connected (see, e.g., \\cite[Theorem 3.2.1]{br1991} or \\cite[Theorem 6.2.24]{hj2013}). \n\nAn irreducible nonnegative matrix is called \\emph{primitive} if, in its digraph, the set of cycle-lengths is relatively prime; otherwise it is \\emph{imprimitive}. \n\nFor $n \\in \\bb{N}$, denote by $C_n$ the \\emph{basic circulant}, i.e., \n\\[ C_n =\n\\left[ \n\\begin{array}{cc}\n0 & I_{n-1} \\\\\n1 & 0\n\\end{array} \\right]. \\]\nNote that the digraph of $C_n$ is a cycle of length $n$.\n\nGiven an $n$-by-$n$ matrix $A$, the \\emph{characteristic polynomial of $A$}, denoted by $\\chi_A$, is defined by $\\chi_A = \\det{(tI - A)}$. The \\emph{companion matrix} $C = C_f$ of a monic polynomial $f(t) = t^n + \\sum_{k=1}^{n} c_{k} t^{n - k}$ is the $n$-by-$n$ matrix defined by\n\\[ C = \n\\left[\\begin{array}{cc}\n0 & I_{n-1} \\\\\n-c_n & -c\n\\end{array} \\right], \\]\nwhere $c = [c_{n-1}~\\cdots~c_1]$. 
It is well-known that $\\chi_C = f$. Notice that $C$ is irreducible if and only if $c_n \\neq 0$. \n\n\n\\section{Realizing-matrices}\n\n\\begin{lem}\\label{lem:det}\nLet $A \\in \\mat{n}{\\bb{C}}$. If $B = A + \\alpha e_k e_\\ell^\\top$, then $\\det(B) = \\det(A) + (-1)^{k + \\ell} \\alpha \\det(A_{k\\ell})$.\n\\end{lem}\n\n\\begin{proof}\nTake either a Laplace expansion along the $k$-th row or the $k$-th column of $B$. \n\\end{proof} \n\n\\begin{thm}\\label{thm:main}\nFor each K-arc $K_n (p\/q,r\/s)$, there is a parametric, stochastic matrix $M = M(\\alpha)$, $0\\leq \\alpha \\leq 1$, such that each point $\\lambda = \\lambda(\\alpha)$ of the arc is an eigenvalue of $M$. Furthermore, if $\\alpha \\in (0,1)$, then $M$ is primitive.\n\\end{thm}\n\n\\begin{proof}\nLet $p\/q$ and $r\/s$ be Farey neighbors, where $q < s$. Note that $s \\neq q\\floor{n\/q}$ since $q$ and $s$ are relatively prime. \n\nFirst, we consider the case in which $p\/q = 0$ and $r\/s = 1\/n$ (which we call the \\emph{Type 0 arc}). Then \\eqref{ito_eq} reduces to $(t - \\beta)^n - \\alpha^n = 0$. If \n\\[ M = M(\\alpha) := \\alpha C_n + \\beta I \n= \n\\begin{bmatrix}\n\\beta & \\alpha & \t \t\\\\\n & \\beta & \\alpha \t\t\\\\\n & & \\ddots & \\ddots \t\\\\\n & & & \\beta & \\alpha \t\\\\\n\\alpha & & & & \\beta\n\\end{bmatrix} \\in \\mat{n}{\\bb{R}}, \\]\nthen \n\\begin{align*} \n\\chi_M (t) \n&= \\det{(tI - (\\alpha C_n + \\beta I))} \\\\\n&= \\det{((t - \\beta)I - \\alpha C_n)} \\\\\n& = \\chi_{\\alpha C_n} (t - \\beta) \\\\\n& = (t - \\beta)^n - \\alpha^n.\n\\end{align*} \nIf $\\alpha \\in (0,1)$, then $\\dg{M}$ contains directed-cycles of length one and $n$. Hence, $M$ is irreducible and, since the greatest common divisor of all cycle-lengths of $\\dg{M}$ is obviously one, $M$ is primitive.\n\nNext, we consider the case in which $\\floor{n\/q}= 1$ (herein referred to as a \\emph{Type I arc}). Then \\eqref{ito_eq} reduces to $t^{s} - \\beta t^{s-q} - \\alpha = 0$. 
If \n\\begin{align} \nM = M(\\alpha) := \n\\begin{bmatrix}\n0 & I \\\\\n\\alpha & \\beta e_{s-q}^\\top \n\\end{bmatrix} \\in \\mat{s}{\\bb{R}}, \\label{typeonemats}\n\\end{align} \nthen $M \\geq 0$ and $\\chi_{M} (t) = t^{s} - \\beta t^{s-q} - \\alpha$. If $\\alpha \\in (0,1)$, then $\\dg{M}$ contains $\\dg{C_s}$. Hence, $M$ is irreducible, and, since $\\gcd{(s-(s-q),s)} = \\gcd{(q,s)}=1$, it must be primitive.\n\nNext, we consider the case in which $\\floor{n\/q}> 1$ and $s < q\\floor{n\/q}$ (which we call a \\emph{Type II arc}). Then \\eqref{ito_eq} reduces to \n\\begin{align*}\n(t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q} - s} = 0.\n\\end{align*}\nConsider the nonnegative matrix $M = M(\\alpha) := \\alpha X + \\beta Y$, where $X$ is the nonnegative companion matrix of the polynomial $t^{q\\floor{n\/q}} - t^{q\\floor{n\/q} - s}$, and\n\\begin{align*} \nY := \\bigoplus_{k=1}^{\\floor{n\/q}} C_q =\n\\begin{bmatrix}\nC_q & \\\\\n & \\ddots & \\\\\n & & C_q\n\\end{bmatrix} \\in \\mat{q\\floor{n\/q}}{\\bb{R}}.\n\\end{align*} \nSince $1 < q\\floor{n\/q} - s + 1 \\leq n - s + 1 < q + 1$, it follows that \n\\begin{align*} \nM =\n\\left[ \n\\begin{array}{*{14}{c}}\n& 1 & & & \\vline & & & & & \\vline \t\t\t\t\t\t\t\t\t\t\t\\\\\n& & \\ddots & & \\vline & & & & & \\vline \t\t\t\t\t\t\t\t\t\t\\\\\n& & & 1 & \\vline & & & & & \\vline \t\t\t\t\t\t\t\t\t\t\t\\\\\n\\beta & & & & \\vline & \\alpha & & & & \\vline \t\t\t\t\t\t\t\t\t\\\\\n\\hline \n& & & & \\vline & \\multicolumn{4}{c}{\\multirow{4}{*}{\\Large $\\ddots$}} & \\vline & & & \t\\\\\n& & & & \\vline & & & & & \\vline & & & \t\t\t\t\t\t\t\t\t\t\\\\\n& & & & \\vline & & & & &\\vline & & & \t\t\t\t\t\t\t\t\t\t\\\\\n& & & & \\vline & & & & & \\vline & \\alpha \t\t\t\t\t\t\t\t\t\t\\\\\n\\hline\n& & & & \\vline & & & & & \\vline & & 1 \t\t\t\t\t\t\t\t\t\t\\\\\n& & & & \\vline & & & & & \\vline & & & \\ddots \t\t\t\t\t\t\t\t\t\\\\\n& & & & \\vline & & & & & \\vline & & & & 1 
\t\t\t\t\t\t\t\t\t\\\\\n\\multicolumn{4}{c}{\\alpha e_{q\\floor{n\/q} - s + 1}^\\top} & \\vline & & & & & \\vline & \\beta & & \n\\end{array} \n\\right],\n\\end{align*}\nwhere $e_{q\\floor{n\/q} - s + 1} \\in \\bb{R}^q$. Because $M - \\alpha e_{q\\floor{n\/q}} e_{q\\floor{n\/q} - s + 1}^\\top$ is block upper-triangular, it follows from \\hyp{Lemma}{lem:det} that \n\\begin{align*}\n\\chi_M (t) \n&= (t^q - \\beta)^{\\floor{n\/q}} + \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n&\\quad (-1)^{2 q\\floor{n\/q} -s + 1} (-\\alpha) t^{q\\floor{n\/q} - s} (-\\alpha)^{\\floor{n\/q} -1} (-1)^{q\\floor{n\/q} - 1 - (q\\floor{n\/q} - s) - (\\floor{n\/q} -1)}\t\\\\\n&= (t^q - \\beta)^{\\floor{n\/q}} + (-1)^{2 q\\floor{n\/q}+ 1} \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q} - s} \t\t\t\t\t\t\t\t\t\t\t\\\\\n&= (t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q} - s}.\n\\end{align*}\nIf $\\alpha \\in (0,1)$, then the directed graph contains $\\floor{n\/q}$ strongly connected components and the graph on these components, determined by which off-diagonal blocks are nonzero, is also strongly connected; hence, the entire graph is strongly connected, i.e., $M$ is irreducible. Furthermore, since $\\dg{M}$ contains cycles of length $q$ and $q\\floor{n\/q} - (q\\floor{n\/q} - s + 1) + 1 = s$, it follows that $M$ is primitive.\n\nFinally, we consider the case when $\\floor{n\/q}> 1$ and $s > q\\floor{n\/q}$ (herein referred to as a \\emph{Type III arc}). For convenience, let $d = s - q\\floor{n\/q}$. 
Then \\eqref{ito_eq} reduces to \n\\begin{align*}\nt^{d} (t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} = 0.\n\\end{align*}\nConsider the nonnegative matrix $M = M(\\alpha) := \\alpha C_s + \\beta Y$, where\n\\begin{align*} \nY =\n\\begin{bmatrix}\n\\jord{d}{0} & \t\t\t\t\\\\\n\t\t& C_q & & \t\t\t\\\\\n\t\t& & \\ddots & \t\t\\\\\n\t\t& \t&\t & C_q\n\\end{bmatrix} + e_d e_{d+1}^\\top \\in \\mat{s}{\\bb{R}}.\n\\end{align*}\nThen \n\\[ \nM =\n\\kbordermatrix{& & & & & d & & \t\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n & 0 & 1 & & & & \\vline & & & & & \\vline & & & & & \\vline & & & & \t\t\t\t\t\t\t\t\\\\\n & & 0 & 1 & & & \\vline & & & & & \\vline & & & & & \\vline & & & &\t\t\t\t\t\t\t\t\\\\\n & & & \\ddots & \\ddots & & \\vline & & & & & \\vline & & & & & \\vline & & & &\t\t\t\t\t\t\\\\\n & & & & 0 & 1 & \\vline & & & & & \\vline & & & & & \\vline & & & & \t\t\t\t\t\t\t\t\\\\\nd & & & & & 0 & \\vline & 1 & & & & \\vline & & & & & \\vline& & & & \t\t\t\t\t\t\t\t\\\\\n\\cline{2-20}\n& & & & & & \\vline & & 1 & & & \\vline & & & & & \\vline & & & & \t\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & \\ddots & & \\vline & & & & & \\vline & & & &\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & & 1 & \\vline & & & & & \\vline & & & &\t\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & \\beta & & & & \\vline & \\alpha & & & & \\vline & & & & \t\t\t\t\t\t\t\\\\\n\\cline{2-20}\n& & & & & & \\vline & & & & & \\vline & \\multicolumn{4}{c}{\\multirow{4}{*}{\\Large $\\ddots$}} & \\vline & & & \t\\\\\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline\t\t\t\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline\t\t\t\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline & \\alpha\t\t\t\t\t\t\t\t\t\\\\\n\\cline{2-20}\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline & & 1 & & \t\t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline & & & \\ddots & \t\t\t\t\t\t\t\\\\\n& & & & & & \\vline & & & & & \\vline & & & & & \\vline & 
& & & 1 \t\t\t\t\t\t\t\t\\\\\n& \\alpha & & & & & \\vline & & & & & \\vline & & & & & \\vline & \\beta & & & }.\n\\]\nSince $M - \\alpha e_s e_1^\\top$ is block upper-triangular, following \\hyp{Lemma}{lem:det}, \n\\begin{align*}\n\\chi_M (t) \n&= t^d (t^q - \\beta)^{\\floor{n\/q}} + (-1)^{s+1}(-\\alpha)(-\\alpha)^{\\floor{n\/q}-1}(-1)^{s - 1 -(\\floor{n\/q}-1)} \t\\\\\n&= t^d (t^q - \\beta)^{\\floor{n\/q}} + (-1)^{2s+1}\\alpha^{\\floor{n\/q}}\t\t\t\t\t\t\t \\\\\n&= t^d (t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}}.\n\\end{align*}\nIf $\\alpha \\in (0,1)$, then $\\dg{M}$ contains $\\dg{C_s}$ as a subgraph. Hence, $M$ is irreducible, and since $\\dg{M}$ clearly contains cycles of length $q$ and $s$, $M$ is primitive.\n\\end{proof}\n\n\\begin{rem}\nNotice that the realizing matrices for arcs of Type I, II, and III all have trace zero. \n\\end{rem}\n\n\\begin{ex}\n\\hyp{Table}{tabone} contains realizing matrices illustrating each type of arc when $n=9$ (the smallest order for which each arc-type appears). 
\n\n\\begin{table}[H]\n\\centering\n\\begin{tabular}{ccc}\n$K\\left(\\frac{p}{q},\\frac{r}{s} \\right)$ & \\emph{Type} & $M(\\alpha)$, $\\beta := 1 - \\alpha$\t\\vspace*{5pt} \\\\\n$K\\left(\\frac{1}{9},\\frac{1}{8} \\right)$ & I & \n$\\begin{bmatrix}\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n\\alpha & \\beta & 0 & 0 & 0 & 0 & 0 & 0 & 0 \n\\end{bmatrix}$ \\vspace*{5pt}\t\t\t\t\t\t\t\t\\\\ \n$K\\left(\\frac{2}{7},\\frac{1}{3} \\right)$ & II & \n$ \n\\left[ \\begin{array}{*{11}{c}}\n0 & 1 & 0 & \\vline & 0 & 0 & 0 & \\vline & 0 & 0 & 0 \\\\\n0 & 0 & 1 & \\vline &0 & 0 & 0 & \\vline & 0 & 0 & 0 \\\\\n\\beta & 0 & 0 & \\vline & \\alpha & 0 & 0 & \\vline & 0 & 0 & 0 \\\\\n\\cline{1-11}\n0 & 0 & 0 & \\vline & 0 & 1 & 0 & \\vline & 0 & 0 & 0 \\\\\n0 & 0 & 0 & \\vline &0 & 0 & 1 & \\vline & 0 & 0 & 0 \\\\\n0 & 0 & 0 & \\vline &\\beta & 0 & 0 & \\vline & \\alpha & 0 & 0 \\\\\n\\cline{1-11}\n0 & 0 & 0 & \\vline &0 & 0 & 0 & \\vline & 0 & 1 & 0 \\\\\n0 & 0 & 0 & \\vline &0 & 0 & 0 & \\vline & 0 & 0 & 1 \\\\\n0 & 0 & \\alpha & \\vline & 0 & 0 & 0 & \\vline & \\beta & 0 & 0\n\\end{array} \\right]$ \\vspace*{5pt} \t\t\t\t\t\t\t\\\\ \n$K\\left(\\frac{2}{9},\\frac{1}{4} \\right)$ & III & \n$\n\\left[ \\begin{array}{*{11}{c}}\n0 & \\vline & 1 & 0 & 0 & 0 & \\vline & 0 & 0 & 0 & 0 \\\\\n\\cline{1-11}\n0 & \\vline & 0 & 1 & 0 & 0 & \\vline & 0 & 0 & 0 & 0 \\\\\n0 & \\vline & 0 & 0 & 1 & 0 & \\vline & 0 & 0 & 0 & 0 \\\\\n0 & \\vline & 0 & 0 & 0 & 1 & \\vline & 0 & 0 & 0 & 0 \\\\\n0 & \\vline & \\beta & 0 & 0 & 0 & \\vline & \\alpha & 0 & 0 & 0 \\\\\n\\cline{1-11}\n0 & \\vline & 0 & 0 & 0 & 0 & \\vline & 0 & 1 & 0 & 0 \\\\\n0 & \\vline & 0 & 0 & 0 & 0 & \\vline & 0 & 0 & 1 & 0 \\\\\n0 & \\vline & 0 & 0 & 0 & 0 & 
\\vline & 0 & 0 & 0 & 1 \\\\\n\\alpha & \\vline & 0 & 0 & 0 & 0 & \\vline & \\beta & 0 & 0 & 0\n\\end{array} \\right]$\n\\end{tabular}\n\\caption{Realizing matrices for arcs of Type I, II, and III when $n=9$.}\n\\label{tabone}\n\\end{table}\n\nLet $\\mathcal{M} := \\{ M(\\alpha) : \\alpha \\in [0,1] \\}$ be the set of realizing matrices for the arc $K(1\/9,1\/8)$. For $d \\in \\bb{N}$, let $\\mathcal{M}^d = \\{ M(\\alpha)^d : M(\\alpha) \\in \\mathcal{M} \\}$. \\hyp{Theorem}{thm:arcpowers} shows that certain powers of the realizing matrices for the arc realize other arcs: in particular, $\\mathcal{M}^2$, $\\mathcal{M}^3$, and $\\mathcal{M}^4$ form a set of realizing matrices for the arcs $K(2\/9,1\/4)$, $K(1\/3,3\/8)$, and $K(4\/9,1\/2)$, respectively. \n\\end{ex}\n\n\\section{Differentiability of the Arcs}\nWe investigate here the smoothness of the K-arcs, a natural question not previously addressed. \n\nTo that end, let $f$ and $g$ be monic polynomials of degree $n$. For $\\alpha \\in [0,1]$, let $c_\\alpha := \\alpha f + (1 - \\alpha) g$. Since the roots of a polynomial vary continuously with respect to its coefficients, it follows that the locus $L(f,g) := \\left\\{ t \\in \\bb{C}: c_\\alpha(t) = 0,~\\alpha \\in [0,1] \\right\\}$ consists of $n$ continuous paths (counting multiplicities), each of which connects a root of $g$ to a root of $f$, whose points depend continuously on the parameter $\\alpha$ (if $f$ and $g$ share a root, then there is a degenerate path at this root). \n\n\nDenote by $P(\\mu, \\lambda)$ the path that starts at the root $\\mu$ of $g$ and terminates at the root $\\lambda$ of $f$ ($\\mu \\neq \\lambda$). 
If $r = r(\\alpha) \\in P(\\mu, \\lambda)$, $\\alpha \\in (0,1)$, then \n\\begin{align*}\n0 = \\alpha f(r) + (1 - \\alpha) g(r).\n\\end{align*}\nDifferentiating with respect to $\\alpha$ yields\n\\begin{align*}\n0 = f(r) + \\alpha f'(r) r' - g(r) + (1-\\alpha)g'(r)r' = f(r) - g(r) + r' c'_\\alpha (r).\n\\end{align*}\nIf $c_\\alpha'(r) \\neq 0$ (i.e., if $r$ is not a multiple root of $c_\\alpha$), then \n\\begin{equation*}\nr' = \\frac{g(r) - f(r)}{c_\\alpha'(r)}.\n\\end{equation*} \nThus, the path $P(\\mu, \\lambda)$ is differentiable at $r$ if $r$ is not a multiple root of $c_\\alpha$ \\cite{i2011}.\n\n\n\\begin{prop}\\label{distincteigs}\nFor $n \\geq 4$, let \n\\begin{equation}\nf_\\alpha (t) := t^n - \\beta t - \\alpha, ~\\alpha \\in [0,1], ~\\beta := 1 - \\alpha. \\label{polyalpha}\n\\end{equation} \n\\begin{enumerate}\n[label=(\\roman*)]\n\\item If $n$ is even, then $f_\\alpha$ has $n$ distinct roots.\n\\item If $n$ is odd and $\\alpha \\geq \\beta$, then $f_\\alpha$ has $n$ distinct roots.\n\\item If $n$ is odd and $\\alpha < \\beta$, then $f_\\alpha$ has a multiple root if and only if\n\\[ n^n \\alpha^{n-1} - (n-1)^{n-1} \\beta^n = n^n \\alpha^{n-1} + (n-1)^{n-1} (\\alpha -1)^n = 0. \\] \n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof} Notice that $f_\\alpha(1) = 0$, and, since $C_{f_\\alpha}$ is primitive, if $f_\\alpha(\\lambda) = 0$, $\\lambda \\neq 1$, then \n\\begin{equation}\n|\\lambda| < 1. \\label{dominate}\n\\end{equation} \n\nIt is well-known that a polynomial has a multiple root if and only if it shares a root with its formal derivative. 
Thus, $f_\\alpha$ has a multiple root $\\lambda \\in \\bb{C}$ if and only if $f_\\alpha (\\lambda) = f_\\alpha'(\\lambda) = 0$, i.e., if and only if \n\\begin{align}\n\\lambda^n - \\beta \\lambda - \\alpha &= 0 \t\\label{poly}\t\\\\\nn\\lambda^{n-1} - \\beta &= 0\t\t\t\\label{deriv}.\n\\end{align}\nSolving for $\\beta$ in \\eqref{deriv} and substituting the result in \\eqref{poly} yields\n\\begin{equation}\n\\lambda^n = -\\frac{\\alpha}{n-1}\t.\t\t\\label{lambdan}\n\\end{equation}\nSubstituting for $\\lambda^n$ in \\eqref{poly} yields\n\\begin{equation}\n\\lambda = -\\frac{\\alpha n}{\\beta(n-1)} < 0. \t\\label{lambdafrac}\n\\end{equation} \n\nWe now consider each part separately:\n\n\\begin{enumerate}\n[label=(\\roman*)]\n\\item Suppose that $n$ is even. For contradiction, if $f_\\alpha$ has a multiple root, then it must be negative by \\eqref{lambdafrac}; however, because $f_\\alpha(-t) = t^n + \\beta t - \\alpha$, Descartes' Rule of Signs ensures that $f_\\alpha$ has at most one negative root, counted with multiplicity, a contradiction.\n \n\\item Suppose that $n$ is odd and $\\alpha \\geq \\beta$. For contradiction, if $f_\\alpha$ has a multiple root, then, following \\eqref{lambdafrac}, \n\\begin{equation*}\n|\\lambda| = \\frac{\\alpha n}{\\beta(n-1)} \\geq \\frac{n}{n-1} > 1,\n\\end{equation*}\ncontradicting \\eqref{dominate}.\n\n\\item It is well-known that a polynomial $f$ has a multiple root if and only if its \\emph{resultant} $R(f,f')$ vanishes. 
If \n\\[ \nS(f_\\alpha,f_\\alpha') =\n\\kbordermatrix{\n & 1 & \\cdots & n-1 & & n & n+1 & & 2n-1 \t\t\\\\\n1 & 1 & & & \\vrule & -\\beta & -\\alpha \t\\\\\n\\vdots & & \\ddots & & \\vrule & & \\ddots & \\ddots \t\\\\\nn-1 & & & 1 & \\vrule & & & -\\beta & -\\alpha \t\t\\\\\n\\cline{2-9}\nn & n & & & \\vrule & -\\beta \t\t\t\t \\\\\n\\vdots & & \\ddots & & \\vrule & & \\ddots \t\t\\\\\n2n-2 & & & n & \\vrule & & & -\\beta \\\\\n2n-1 & 0 & \\cdots & 0 & \\vrule & n & & & -\\beta\n}, \\] \nthen $R(f_\\alpha,f_\\alpha') = | S(f_\\alpha,f_\\alpha')| = | D - C B|$, where $B$, $C$, and $D$ denote the upper-right, lower-left, and lower-right blocks of $S(f_\\alpha,f_\\alpha')$. Since \n\\[ D - CB =\n\\begin{bmatrix}\n(n-1) \\beta & n\\alpha & & \\\\\n& \\ddots & \\ddots & \\\\\n& & (n-1) \\beta & n\\alpha \\\\\nn & & & -\\beta\n\\end{bmatrix}, \\]\nand $n$ is odd, it follows that $R(f_\\alpha,f_\\alpha') = n^n \\alpha^{n-1} - (n-1)^{n-1} \\beta^n$ and the result is established.\n\\qedhere\n\\end{enumerate}\n\\end{proof}\n\n\\begin{rem}\\label{rem:negmult}\nIf $n$ is odd and $f_\\alpha$ has a multiple root $\\lambda$ (which, following \\eqref{lambdafrac}, must be negative), then Descartes' Rule of Signs applied to $f_\\alpha (-t) = -t^n +\\beta t - \\alpha$ forces the multiplicity of $\\lambda$ as a root of $f_\\alpha$ to be exactly two. \n\\end{rem}\n\n\\begin{rem}\n\\label{polypi}\nUnder the hypotheses of part (iii) of \\hyp{Proposition}{distincteigs}, the resultant $R(f_\\alpha,f'_\\alpha) = \\pi(\\alpha) = n^n \\alpha^{n-1} + (n-1)^{n-1} (\\alpha -1)^n$, for the polynomial $f_\\alpha$ defined in \\eqref{polyalpha}, is a univariate polynomial in $\\alpha$. Since $\\pi(0) = -(n-1)^{n-1} < 0$ and $\\pi(1) = n^n > 0$, it follows that $\\pi$ must have a root in $(0,1)$. Moreover, $\\pi'(\\alpha) = n^n (n-1) \\alpha^{n-2} + n(n-1)^{n-1} (\\alpha -1)^{n-1}$ and because $n$ is odd, we have $\\pi'(\\alpha) \\geq 0$ for all $\\alpha \\geq 0$. 
Thus, $\\pi$ is strictly increasing on $(0,\\infty)$ and hence has exactly one root in $(0,1)$. \n\\end{rem}\n \n\\begin{cor} \\label{cor:diffarcs}\nLet $n \\geq 4$ be a positive integer.\n\\begin{enumerate}\n[label=(\\roman*)]\n\\item If $n$ is even and $\\floor{n\/2} \\leq m \\leq n$, then the K-arc $K_n \\left( {1}\/{m},{1}\/{m-1} \\right)$ is differentiable. \n\\item If $n$ is odd and $\\floor{n\/2}+1 \\leq m \\leq n$, then the K-arc $K_n \\left( {1}\/{m},{1}\/{m-1} \\right)$ is differentiable. \n\\end{enumerate}\n\\end{cor}\n\n\\begin{proof}\nIn view of \\hyp{Proposition}{distincteigs}, it suffices to consider the case when $n$ is odd and $\\alpha < \\beta$, where $f_\\alpha$ is defined as in \\eqref{polyalpha}; however, this case is clear as well since \\hyp{Remark}{rem:negmult} ensures that if $f_\\alpha$ has a multiple root $\\lambda$, then $\\lambda$ is real. \n\\end{proof}\n\n\\section{Powers of Realizing-matrices}\n\nFor each of the arc types listed in the proof of \\hyp{Theorem}{thm:main}, we refer to the collection of polynomials \n\\begin{align}\nf_\\alpha(t) &= (t - \\beta)^n - \\alpha^n \t\t\t\t\t\t\t\t\\tag{Type 0}\t\t\\\\\nf_\\alpha(t) &= t^{s} - \\beta t^{s-q} - \\alpha \t\t\t\t\t\t\t\\tag{Type I}\t\t\\\\\nf_\\alpha(t) &= (t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} t^{q\\floor{n\/q} - s}\t\\tag{Type II} \t\\\\\nf_\\alpha(t) &= t^{s - q\\floor{n\/q}} (t^q - \\beta)^{\\floor{n\/q}} - \\alpha^{\\floor{n\/q}} \t\t\t\\tag{Type III}\t\n\\end{align}\nas the \\emph{reduced Ito polynomials}. \n\nThe following result is readily deduced from several well-known theorems concerning Farey pairs (see, e.g., \\cite[pp. 28--29]{hw2008}).\n\n\\begin{lem} \n\\label{lem:farey_pair}\nIf $p\/q$, $r\/s$ are elements of ${F}_n$, then $(p\/q,r\/s)$ is a Farey pair of order $n$ if and only if $qr- ps= 1$ and $q + s > n$. 
\n\\end{lem}\n\n\\begin{lem}\n\\label{lem:divisor}\nIf $d$ is a positive integer such that $1 < d < n$, then $(d\/n,d\/n-1)$ is a Farey pair of order $n$ if and only if $d$ divides $n$ or $d$ divides $n-1$. \n\\end{lem}\n\n\\begin{proof}\nIf there is a positive integer $k$ such that $n = dk$, then $(d\/n,d\/n-1) = (1\/k,d\/n-1)$. Since $dk - (n-1) = 1$, it follows that $d\/n-1 \\in {F}_n$. Because $k > 1$, it follows that $k + n - 1 > n$. Following \\hyp{Lemma}{lem:farey_pair}, $(1\/k,d\/n-1)$ is a Farey pair. A similar argument demonstrates that $(d\/n,1\/k)$ is a Farey pair if $d$ divides $n-1$.\n\nConversely, if $d$ is not a divisor of either $n$ or $n-1$, then $dn - d(n-1) = d \\neq 1$. The result now follows from \\hyp{Lemma}{lem:farey_pair}. \n\\end{proof}\n\n\\begin{cor}\n\\label{cor:divisor}\nLet $d$, $m$, and $n$ be positive integers such that $d < m \\leq n$, and suppose that $(1\/m,1\/m-1)$ is a Farey pair of order $n$. \n\\begin{enumerate}[label=(\\roman*)]\n\\item If $d$ divides $m$ and $k:=m\/d$, then $(1\/k, d\/m-1)$ is a Farey pair of order $n$ if and only if $k + m - 1 > n$. \n\\item If $d$ divides $m-1$ and $k:=(m-1)\/d$, then $(d\/m,1\/k)$ is a Farey pair of order $n$ if and only if $m + k > n$.\n\\end{enumerate}\n\\end{cor}\n\n\n\\begin{thm} \\label{thm:arcpowers}\nLet $d$, $m$, and $n$ be positive integers such that $1< d < m \\leq n$. Suppose that $(1\/m,1\/m-1)$ and $(d\/m,d\/m-1)$ are Farey pairs of order $n$. \nWe distinguish the following cases:\n\\begin{enumerate}[label=(\\roman*)]\n\\item $d$ divides $m$: For $f_\\alpha(t) = t^m - \\beta t - \\alpha$, let $M(\\alpha)$ be defined as in \\eqref{typeonemats}. If $\\mathcal{M} := \\{ M(\\alpha) : \\alpha \\in [0,1] \\}$, then $\\mathcal{M}^d := \\{ M(\\alpha)^d : \\alpha \\in [0,1]\\}$ forms a set of realizing-matrices for $K_n(1\/k,d\/m-1)$, where $k = m\/d$. 
\n\n\\item $d$ divides $m-1$ and $m > k\\floor{n\/k}$, where $k = (m-1)\/d$: For $f_\\alpha(t) = t^m - \\beta t - \\alpha$, let $M(\\alpha)$ be defined as in \\eqref{typeonemats}. If $\\mathcal{M} := \\{ M(\\alpha) : \\alpha \\in [0,1] \\}$, then $\\mathcal{M}^d := \\{ M(\\alpha)^d : \\alpha \\in [0,1]\\}$ forms a set of realizing-matrices for $K_n(d\/m,1\/k)$, where $k = (m-1)\/d$.\n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof} Part (i): Since $(1\/k,d\/m-1)$ is a Farey pair, following \\hyp{Corollary}{cor:divisor}, $n < m + k - 1$; consequently, \n\\begin{align*}\nd = \\frac{m}{k} \\leq \\frac{n}{k} < \\frac{m + k - 1}{k} = d + 1 - \\frac{1}{k} < d + 1\n\\end{align*}\nand hence $\\floor{n\/k} = d$. The Ito equations for $(1\/k,d\/m-1)$ are given by \n\\begin{align*}\nt^{m-1} \\left( t^k - \\beta \\right)^{\\floor{n\/k}} = \\alpha^{\\floor{n\/k}} t^{k\\floor{n\/k}},~\\alpha \\in [0,1],~\\beta:=1-\\alpha,\n\\end{align*}\nand the reduced Ito polynomials for this arc are given by \n\\begin{align}\nq_\\alpha (t) = (t^k - \\beta)^d - \\alpha^d t,~\\alpha \\in [0,1],~\\beta:=1-\\alpha. \n\\end{align}\nNotice that $\\deg{(q_\\alpha)} = m$, for every $\\alpha \\in [0,1]$. \n\nLet $\\lambda = \\lambda(\\alpha) \\in K(1\/k,d\/m-1)$. Consider the reduced Ito polynomial $p_\\beta (t) = t^m - \\alpha t - \\beta$ and its nonnegative companion matrix $M = M(\\beta)$. The Cayley-Hamilton theorem (see, e.g., \\cite[p.~109]{hj2013}) ensures that $M^m - \\beta I = \\alpha M$; hence \n\\begin{align*}\nq_\\alpha(M^d) = (M^{dk} - \\beta I)^d - \\alpha^d M^d = (M^m - \\beta I)^d - (\\alpha M)^d = 0,\t\t\n\\end{align*}\ni.e., $q_\\alpha$ is an \\emph{annihilating polynomial} for $M^d$. \n\nDenote by $\\psi_M$ the \\emph{minimal polynomial} of $M$, i.e., $\\psi_M$ is the unique monic polynomial of minimum degree that annihilates $M$ (see, e.g., \\cite[p.~192]{hj2013}). Since $M$ is a companion matrix, $\\psi_M = \\chi_M$ (\\cite[Theorem 3.3.14]{hj2013}). 
Hence, if $J = \\inv{S} M S$ is a Jordan canonical form of $M$, then $J$ is \\emph{nonderogatory} (\\cite[Theorem 3.3.15]{hj2013}), i.e., $J$ contains exactly one \\emph{Jordan block} corresponding to every distinct eigenvalue. Since $M^d = S J^d \\inv{S}$, it follows that any Jordan canonical form of $J^d$ is nonderogatory -- indeed, if $f(x) = x^d$, then $f'(x) = d x^{d-1}$ and $f'(x) = 0$ if and only if $x=0$; since zero is not a repeated root \\eqref{lambdafrac} (and hence not associated with a nontrivial Jordan block), the claim follows from \\cite[p.~424, Theorem 6.2.25]{hj1994} -- thus, $M^d$ is nonderogatory and, following \\cite[Theorem 3.3.15]{hj2013}, $\\psi_{M^d} = \\chi_{M^d}$ and $\\deg{\\left(\\psi_{M^d}\\right)} = m$. Since $\\psi_{M^d}$ is the unique polynomial of minimum degree that annihilates $M^d$, and since $\\deg{(q_\\alpha)} = m$, it must be the case that $\\chi_{M^d} = \\psi_{M^d} = q_\\alpha$. Hence, $M^d$ is a realizing-matrix for $\\lambda$.\n\nPart (ii): By hypothesis, \n\\begin{align*}\nd = \\frac{m-1}{k} < \\frac{m}{k} \\leq \\frac{n}{k}, \n\\end{align*}\nhence $d \\leq \\floor{n\/k}$. Since $m > k\\floor{n\/k}$, it follows that $m - k\\floor{n\/k} \\geq 1$ and $\\floor{n\/k} \\leq (m-1)\/k = d$. Hence, $d = \\floor{n\/k}$. \n\nThe Ito equations for $(d\/m,1\/k)$ are given by \n\\begin{align*}\nt^m \\left( t^k - \\beta \\right)^d = \\alpha^d t^{m-1},~\\alpha \\in [0,1],~\\beta:=1-\\alpha, \n\\end{align*}\nand the reduced Ito polynomials for this arc are given by\n\\begin{align*}\nq_\\alpha (t) = t(t^k - \\beta)^d - \\alpha^d,~\\alpha \\in [0,1],~\\beta:=1-\\alpha.\n\\end{align*}\nNotice that $\\deg{(q_\\alpha)} = m$, for every $\\alpha \\in [0,1]$. \n\nLet $\\lambda = \\lambda(\\alpha) \\in K(d\/m,1\/k)$. Consider the reduced Ito polynomial $f_\\alpha (t) = t^m - \\beta t - \\alpha$ and its nonnegative companion matrix $M = M(\\alpha)$. 
The Cayley-Hamilton theorem ensures that $M(M^{m-1} - \\beta I) = M^m - \\beta M = \\alpha I$; hence \n\\begin{align*}\nq_\\alpha(M^d) = M^d (M^{m-1} - \\beta I)^d - \\alpha^d I = (M^m - \\beta M)^d - (\\alpha I)^d = 0,\t\t\n\\end{align*}\ni.e., $q_\\alpha$ is an \\emph{annihilating polynomial} for $M^d$. \n\nUsing exactly the same argument as in part (i), it can be shown that $\\chi_{M^d} = \\psi_{M^d} = q_\\alpha$. Hence, $M^d$ is a realizing-matrix for $\\lambda$. \n\\end{proof}\n\n\\section{Additional Questions}\n\nIn this section, we pose several problems and conjectures for further inquiry.\n\n\\subsection{Karpelevi{\\v{c}}~Arcs}\n\n\\hyp{Theorem}{thm:main} establishes the existence of parametric realizing-matrices for the K-arcs. Suppose that $M$ is a realizing-matrix for a given point on a given arc, and let $M_k$ be the irreducible component that realizes the arc. Clearly, $M_k^\\top$ and $P M_k P^\\top$, where $P$ is any permutation matrix, are also realizing-matrices. With the aforementioned in mind, we offer the following. \n\n\\begin{prob}\nTo what extent are the realizing-matrices unique? \n\\end{prob}\n\n\\hyp{Corollary}{cor:diffarcs} and \\hyp{Theorem}{thm:arcpowers} show that many, but not yet all, arcs are differentiable. Given the empirical evidence, we pose the following.\n\n\\begin{conj}\nAll K-arcs of order $n$ are differentiable for every $n$.\n\\end{conj} \n\nFor $S\\subseteq\\bb{C}$, let $S^d := \\{ \\lambda^d : \\lambda \\in S \\}$. \\hyp{Theorem}{thm:arcpowers} demonstrates that $\\sig{M}^d = \\sig{M^d}$. Although the evidence is ample, a demonstration that the powered K-arc $K_n^d(1\/m,1\/m-1)$ corresponds to $K_n(1\/k,d\/m-1)$ ($d$ divides $m$) or $K_n(d\/m,1\/k)$ ($d$ divides $m-1$ and $m>k\\floor{n\/k}$) has proven elusive. Thus, we offer the following.\n\n\\begin{conj}\nLet $d$, $m$, and $n$ be positive integers such that $1< d < m \\leq n$. Suppose that $(1\/m,1\/m-1)$ and $(d\/m,d\/m-1)$ are Farey pairs of order $n$. 
\n\\begin{enumerate}[label=(\\roman*)]\n\\item If $d$ divides $m$, then $K_n^d (1\/m,1\/m-1) = K_n(1\/k,d\/m-1)$, where $k = m\/d$. \n\n\\item If $d$ divides $m-1$ and $m > k\\floor{n\/k}$, then $K_n^d (1\/m,1\/m-1) = K_n(d\/m,1\/k)$, where $k = (m-1)\/d$. \n\\end{enumerate}\n\\end{conj}\n\nLet $K$ be a K-arc and let $d_K : [0,1] \\longrightarrow \\mathbb{R}_0^+$ be the function defined by $\\alpha \\longmapsto |\\lambda|$, where $\\lambda = \\lambda(\\alpha)$ is the point on $K$ corresponding to $\\alpha \\in [0,1]$. From \\hyp{Figure}{fig:karpregions}, we pose the following. \n\n\\begin{conj}\nIf $K$ is any K-arc, then the function $d_K$ is strictly convex. \n\\end{conj}\n\n\\subsection{The Levick-Pereira-Kribs Conjecture}\n \nFor a natural number $n$, denote by $\\Pi_n$ the convex-hull of the $n$\\textsuperscript{th} roots-of-unity, i.e., \n\\[ \\Pi_n = \\left\\{ \\sum_{k=0}^{n-1} \\alpha_k \\exp{(2\\pi\\ii k\/n)} : \\alpha_k \\geq 0,~\\sum_{k=0}^{n-1} \\alpha_k =1 \\right\\}. \\]\nDenote by $\\Omega_n$ the subset of the complex-plane containing all single eigenvalues of all $n$-by-$n$ doubly stochastic matrices. Perfect and Mirsky \\cite{pm1965} conjectured that $\\Omega_n = \\bigcup_{k=1}^n \\Pi_k$ and proved their conjecture when $1 \\leq n \\leq 3$. Levick et al.~\\cite{lpk2015} proved the Perfect--Mirsky conjecture when $n=4$, but a counterexample for $n=5$ was given by Mashreghi and Rivard \\cite{mr2007}. Levick et al.~conjectured that $\\Omega_n = \\Theta_{n-1} \\cup \\Pi_n$ (\\cite[Conjecture 1]{lpk2015}). \n\nIn \\cite{j1981}, necessary and sufficient conditions were found for a stochastic matrix to be similar to a doubly stochastic matrix. Thus, it is possible to investigate the Levick-Pereira-Kribs Conjecture via the realizing matrices given in \\hyp{Theorem}{thm:main} vis-\\`{a}-vis the results in \\cite{j1981}. 
In particular, if $M$ is a realizing matrix for $\\lambda$ on the boundary of $\\Theta_n$ excluding the unit-circle (this case is clear), and $M \\oplus 1$ is similar to a doubly stochastic matrix $D$, then $\\Theta_{n-1} \\cup \\Pi_n \\subseteq \\Omega_n$.\n\n\\section{Acknowledgment}\n\nWe would like to thank University of Washington Bothell undergraduate student Amber R.~Thrall for proving that the polynomial $\\pi$ in \\hyp{Remark}{polypi} has only one root in $(0,1)$.\n\n\n\\bibliographystyle{abbrv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThere has been a long-standing interest in the question of the amenability of Richard Thompson's group $F$, introduced in Thompson's notes of 1965 (see the survey \\cite{CFP} for a general background on the three Thompson groups $F\\subset T\\subset V$). Let $\\varepsilon>0$ and let $g$ and $c_1,\\ldots, c_n\\in C_T(g)$ be such that \n$$\\| \\sum_{s\\in E} \\sum_{i=1}^n\\lambda((sg)^{c_i}) \\|\\leq \\varepsilon n.$$\n\nNote that the element $b=\\lambda(g-\\frac{1}{3}[e+g_1+g_2]g+\\frac{1}{3} \\sum_{s\\in E} sg)$ is in $J$, thus $\\frac{1}{n}\\sum_{i=1}^n\\lambda(c_i)b\\lambda({c_i}^{-1}) \\in J$. The distance between the element $\\frac{1}{n}\\sum_{i=1}^n\\lambda(c_i)b\\lambda({c_i}^{-1})$ and $\\lambda(g)$ is strictly smaller than $1$ for large $n$. 
Indeed,\n\n\\begin{align*}\n\\|\\lambda(g)-\\frac{1}{n}\\sum_{i=1}^n\\lambda(c_i)b\\lambda({c_i}^{-1})\\|\\leq& \\frac{1}{3}\\|1+\\lambda(g_1)+\\lambda(g_2)\\| + \\frac{1}{3n}\\|\\sum_{s\\in E} \\sum_{i=1}^n\\lambda((sg)^{c_i}) \\|\\leq C+\\varepsilon.\n\\end{align*}\nThus we have found an invertible element in $J$, and therefore $J=C^*_\\lambda(T)$.\n\\end{proof}\n\n\\section{Powers' test}\\label{tests}\nIn \\cite{PowersFreeAlgebraSimplicity} Powers gives the following test for the simplicity of the algebra $\\cstar{G}$ over a group $G$.\n\n\\begin{theorem}\\label{sumTest}\n If for every non-empty finite subset $H\\subset G$ with $e\\not\\in H$ and for all positive integers $n$ there is a set $\\{c_1,c_2,\\ldots,c_n\\}\\subset G$ so that \n\\[\n\\lim_{n\\to\\infty}\\frac{1}{n}||\\Sigma_{i=1}^n \\lambda(c_ih{c_i}^{-1})||=0,\\,\\forall h\\in H,\n\\]\nthen $\\cstar{G}$ is simple.\n\\end{theorem}\n\nIf $G$ is a group generated by a finite set $S$ with $S=S^{-1}$, then $\\frac{1}{|S|}||\\Sigma_{h\\in S} \\lambda(h)||$ is equal to the spectral radius of the simple random walk on the Cayley graph of $G$ with respect to $S$, denoted by $\\rho(G,S)$. The spectral radius of the simple random walk has been computed for many groups. 
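For free groups this quantity can be approximated directly. The sketch below (Python with numpy; an illustration of the preceding paragraph, not part of any argument here) estimates the spectral radius of the simple random walk on the free group of rank $2$ by iterating the distribution of the word length, and compares the estimate with Kesten's value $\sqrt{2n-1}\/n = \sqrt{3}\/2$ for $n=2$:

```python
import numpy as np

# Estimate the spectral radius of the simple random walk on the free group
# F_2 (generating set of size 4, counting inverses) by tracking word length:
# from a nonempty reduced word exactly one of the 4 generators shortens it.
T = 4000                                # number of steps (even)
p = np.zeros(T + 2)                     # p[k] = P(word length = k)
p[0] = 1.0
prev_return = curr_return = 0.0
for t in range(1, T + 1):
    q = np.zeros_like(p)
    q[1] += p[0]                        # from the identity the length grows
    q[:-1] += 0.25 * p[1:]              # one generator cancels a letter
    q[2:] += 0.75 * p[1:-1]             # three generators append a letter
    p = q
    if t % 2 == 0:
        prev_return, curr_return = curr_return, p[0]
# p_{2m} ~ C * rho^{2m} * m^(-3/2), so consecutive ratios converge to rho^2.
rho_est = (curr_return / prev_return) ** 0.5
rho_kesten = np.sqrt(2 * 2 - 1) / 2     # Kesten's sqrt(2n-1)/n with n = 2
assert abs(rho_est - rho_kesten) < 0.005
```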
Kesten \\cite{kesten} showed that if $S=\\{g_1,\\ldots,g_n\\}$ is {\\it a free set}, i.e., $g_1,\\ldots,g_n$ are standard generators of the free group of rank $n$, then the spectral radius is\n$$\\rho(G,S)=\\frac{\\sqrt{2n-1}}{n}.$$\n\nThus the following condition implies the hypothesis of Theorem \\ref{sumTest}.\n\\begin{condition}\\label{kesten-test}\nFor all finite subsets $H\\subset G$ with $e\\not\\in H$ and for all positive integers $n$ there is a set $\\{c_1,c_2,\\ldots,c_n\\}\\subset G$ so that \n\\[\n\\langle h^{c_1},h^{c_2}, \\ldots, h^{c_n}\\rangle\n\\]\nis a free subgroup of $G$ of rank $n$ for all $h\\in H$.\n\\end{condition}\n\n\nIf $g$ is a bijection from a set $X$ to itself, denote by $Supp(g):=\\left\\{x\\in X\\mid g\\cdot x\\neq x\\right\\}$ and $Fix(g):=X\\backslash Supp(g)$, the support and the set of points fixed by $g$, respectively.\n\nThe following remark holds true for groups of permutations of a set $X$.\n\\begin{remark}\\label{disjointSupportsAndConjugation}\nLet $X$ be a set, and $G$ the group of bijections from $X$ to $X$. \nSuppose $h_1$, $h_2\\in G\\backslash\\left\\{1\\right\\}$ so that $Supp(h_1)\\cap Supp(h_2)=\\emptyset$. If $c_1$, $c_2\\in G$ so that $Supp(h_1^{c_1})\\cup Supp( h_1^{c_2})=X$ then $Supp(h_2^{c_1})\\cap Supp(h_2^{c_2})=\\emptyset$.\n\\end{remark}\n\n\\begin{proof}\nSuppose \n\\[\nX=Supp(h_1^{c_1})\\cup Supp(h_1^{c_2}) (= c_1\\cdot Supp(h_1)\\cup c_2\\cdot Supp(h_1)).\n\\]\nIf there is $x\\in X$ so that $x\\in Supp(h_2^{c_1})\\cap Supp(h_2^{c_2})$, then $x=c_1\\cdot y$ and $x=c_2\\cdot z$, where $y$ and $z$ are in $Fix(h_1)$. In particular, $x\\in c_1\\cdot Fix(h_1)\\cap c_2\\cdot Fix(h_1)$. 
This implies that $c_1\\cdot Supp(h_1)\\cup c_2\\cdot Supp(h_1)\\neq X$.\n\\end{proof}\n\nRemark \\ref{disjointSupportsAndConjugation} immediately implies that we cannot use Condition \\ref{kesten-test} when approaching the question of the simplicity of the algebra $\\cstar{T}$.\n\n\\begin{corollary}\\label{negative}\nSuppose that $H\\subset T$ admits elements $h_1$ and $h_2$ so that $Supp(h_1)\\cap Supp(h_2)=\\emptyset$. Then for $n\\geq 2$ there is no set of elements $\\left\\{c_1,c_2,\\ldots,c_n\\right\\}$ so that $\\langle h^{c_1},h^{c_2}, \\ldots,h^{c_n}\\rangle$ is a free group on $n$ generators for all $h\\in H$.\n\\end{corollary}\n\\begin{proof}\nSuppose $H:=\\left\\{h_1,h_2,\\ldots,h_k\\right\\}$ is a finite set with cardinality at least two, and $h_1$ and $h_2$ are in $H$ so that $Supp(h_1)\\cap Supp(h_2)=\\emptyset.$ Further suppose that $n\\geq 2$ is fixed and $c_1$, $c_2$,$\\ldots$, $c_n$ are chosen so that for all $h\\in H$, we have $\\langle h^{c_1},h^{c_2},\\ldots, h^{c_n}\\rangle$ is free on $n$ generators. As proven in Brin and Squier's paper \\cite{BSPLR}, the group of piecewise linear homeomorphisms of the unit interval has no non-abelian free subgroups, so we see immediately that $Supp(h_1^{c_1})\\cup Supp(h_1^{c_2})=S^1$. Now by Remark \\ref{disjointSupportsAndConjugation} we know that $Supp(h_2^{c_1})\\cap Supp(h_2^{c_2})=\\emptyset$. Therefore $\\langle h_2^{c_1},h_2^{c_2}\\rangle\\cong \\mathbb{Z}\\times\\mathbb{Z}$.\n\\end{proof}\n\nWe now offer an apparently weaker version of Condition \\ref{kesten-test} which will be used throughout the remainder of this article. 
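Remark \ref{disjointSupportsAndConjugation} is purely set-theoretic, so it can be illustrated on a finite set. The toy sketch below (Python; the permutations are ad hoc choices, with the convention $h^c := chc^{-1}$, so that $Supp(h^c) = c\cdot Supp(h)$) realizes the hypothesis and checks the conclusion:

```python
# Finite illustration of the remark on disjoint supports: bijections of
# X = {0,...,7} rather than homeomorphisms of the circle.
X = range(8)

def compose(f, g):            # (f o g)(x) = f(g(x)); permutations as dicts
    return {x: f[g[x]] for x in X}

def inverse(f):
    return {v: k for k, v in f.items()}

def conj(h, c):               # h^c := c h c^{-1}, so Supp(h^c) = c.Supp(h)
    return compose(compose(c, h), inverse(c))

def supp(f):
    return {x for x in X if f[x] != x}

ident = {x: x for x in X}
h1 = {**ident, 0: 1, 1: 0, 2: 3, 3: 2}   # supported on {0,1,2,3}
h2 = {**ident, 4: 5, 5: 4, 6: 7, 7: 6}   # supported on {4,5,6,7}
c1 = ident
c2 = {x: (x + 4) % 8 for x in X}         # swaps the two blocks
assert supp(h1) & supp(h2) == set()                       # disjoint supports
assert supp(conj(h1, c1)) | supp(conj(h1, c2)) == set(X)  # h1-conjugates cover X
# Conclusion of the remark: the h2-conjugates have disjoint supports.
assert supp(conj(h2, c1)) & supp(conj(h2, c2)) == set()
```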
First, we need a supporting theorem.\n\nBelow, let $C_G(g)$ be the centralizer of an element $g$ in $G$.\n\n\\begin{theorem}\nLet $H\\subset G$ be a finite set and suppose there is an element $w\\in H$ such that for every positive integer $n$ there are a set $\\{c_1,c_2,\\ldots,c_n\\}\\subset G$ and $r,s\\in G$ such that $c_i \\in C_G(swr)$ for all $i$ and \n\\[\n\\lim_{n\\to\\infty}\\frac{1}{n}||\\Sigma_{i=1}^n \\lambda(c_isgr{c_i}^{-1})||=0,\\,\\text{ for all } g\\in H\\backslash \\{w\\}.\n\\]\nThen for all coefficients $\\beta_g$ indexed by $H$ with $\\beta_w\\neq 0$, the ideal generated by $\\sum_{g\\in H}\\beta_g \\lambda(g)$ is equal to $\\cstar{G}$.\n\n\n\\end{theorem}\n\\begin{proof}\nLet $I$ be an ideal in $C^*_{\\lambda}(G)$ generated by $b:=\\sum_{g\\in H}\\beta_g \\lambda(g)$. Assume that $I$ is proper. The closure of $I$ is proper, thus we can assume $I$ is closed. Note that $\\Sigma_{i=1}^n \\lambda(c_is)b\\lambda(rc_i^{-1})\\in I$. Since $c_i \\in C_G(swr)$ we have \n\\begin{align*}\n\\|\\lambda (swr)-\\frac{1}{\\beta_w n}\\Sigma_{i=1}^n \\lambda(c_is)b\\lambda(rc_i^{-1})\\|=&\\frac{1}{|\\beta_w| n} \\|\\Sigma_{g\\in H\\backslash \\{w\\}}\\Sigma_{i=1}^n \\beta_g\\lambda(c_isgrc_i^{-1})\\| \\\\\n\\leq& \\frac{1}{|\\beta_w|} \\Sigma_{g\\in H\\backslash \\{w\\}}|\\beta_g|\\frac{1}{n}\\|\\Sigma_{i=1}^n \\lambda(c_isgrc_i^{-1})\\|\\\\\n\\leq& (|H|-1)\\max(|\\beta_g|\/|\\beta_w|:g\\in H)\\cdot \\max(\\frac{1}{n}\\|\\Sigma_{i=1}^n \\lambda(c_isgrc_i^{-1})\\|:g\\in H\\backslash \\{w\\}).\n\\end{align*}\nBy our assumptions, the last quantity can be arbitrarily small for large $n$. 
Thus there is an element in $I$ which is at distance less than $1$ from a unitary operator; this implies that it is invertible and $I=\\cstar{G}$.\n\\end{proof}\n\nApplying the theorem above to the set $H\\cup \\{e\\}$ shows that the following condition implies simplicity of $C^*_{\\lambda}(G)$:\n\n\\begin{condition}\\label{weak-kesten}\nFor all finite non-empty subsets $H\\subset G$, $e\\not\\in H$ and for all positive integers $n$ there are $r$, $s\\in G$ and a set $\\{c_1,c_2,\\ldots,c_n\\}\\subset C_G(sr)$ such that the set $\\{c_k(sgr)c_k^{-1}: k=1,\\ldots,n\\}$ is free for all $g\\in H$.\n\\end{condition}\n\n\nCondition \\ref{kesten-test} implies Condition \\ref{weak-kesten}, while the converse implication appears to be false. However, Condition \\ref{weak-kesten} is still inadequate for showing that $\\cstar{T}$ is simple.\n\\begin{lemma}\nThere are $g_1$, $g_2\\in T\\backslash \\{e\\}$ so that for any $r$, $s\\in T$ there are no elements $c_1$, $c_2$, $c_3$, and $c_4\\in C_T(sr)$ with both $G_1=\\langle (sg_1r)^{c_1},(sg_1r)^{c_2}, (sg_1r)^{c_3},(sg_1r)^{c_4}\\rangle$ and $G_2=\\langle (sg_2r)^{c_1},(sg_2r)^{c_2}, (sg_2r)^{c_3},(sg_2r)^{c_4}\\rangle$ free on four generators.\n\\end{lemma}\n\\begin{proof}\nLet $g_1$, $g_2\\in T$ so that $Supp(g_1)=(0,1\/2)$ and $Supp(g_2)=(1\/2,1)$. Let $r$ and $s\\in T$ and suppose $c_1$, $c_2$, $c_3$ and $c_4\\in C_T(sr)$. Set $k_{ij}=(sg_ir)^{c_j}$, for $i$, $j\\in \\{1,2,3,4\\}$, and suppose that $c_1$, $c_2$, $c_3$ and $c_4$ were chosen so that $G_i=\\langle k_{i1},k_{i2}, k_{i3}, k_{i4}\\rangle$ is free on four generators for $1\\leq i\\leq 2$.\n\nConsider the intervals $X_{i1}=(c_1r^{-1})\\cdot Fix(g_i)$, $X_{i2}=(c_2r^{-1})\\cdot Fix(g_i)$, $X_{i3}=(c_3r^{-1})\\cdot Fix(g_i)$, and $X_{i4}=(c_4r^{-1})\\cdot Fix(g_i)$. 
If $x_{ij}\\in X_{ij}$, then $k_{ij}\\cdot x_{ij} = c_jsg_irc_j^{-1}\\cdot x_{ij} =(c_jsg_i)\\cdot y_{ij}=(c_js)\\cdot y_{ij} = (c_jsrc_j^{-1})\\cdot x_{ij} = (sr)\\cdot x_{ij}$, as for all $i$, $j$ we have $y_{ij}\\in Fix(g_i)$. That is, $k_{ij}$ acts as $sr$ over $X_{ij}$.\n\nFurther, consider the elements $f_{i,ab}= k_{ia}^{-1}k_{ib}$, where $i\\in \\{1,2\\}$ and $a\\neq b \\in \\{1,2,3,4\\}$. It is immediate that $\\langle f_{i,ab},f_{i,cd}\\rangle$ is free on two generators if either $b\\neq c$ or $d\\neq a$. Therefore, by Brin and Squier's result (from \\cite{BSPLR}) that $PL_o(I)$ has no non-abelian free subgroups, we know that $Fix(f_{i,ab})\\cap Fix(f_{i,cd})=\\emptyset$ for $i\\in\\{1,2\\}$ and either $b\\neq c$ or $d\\neq a$. Now, for instance, if there is an index $i$ and some point $p\\in X_{i1}\\cap X_{i2}\\cap X_{i3}$, then both $f_{i,12}$ and $f_{i,13}$ must fix $p$, which is a contradiction. Therefore we see that $X_{i1}$, $X_{i2}$, and $X_{i3}$ cannot share a common point for any index $i$. By the same argument, for any valid indices $i$, $a$, $b$, and $c$ (where $i\\in \\{1,2\\}$ and $a\\neq b\\neq c\\neq a$) we see that $X_{ia}\\cap X_{ib}\\cap X_{ic}=\\emptyset$.\n\nOne now sees immediately that for any valid indices $i$, $a$,$b$, and $c$ (where $i\\in \\{1,2\\}$ and $a\\neq b\\neq c\\neq a$) we must also have that $X_{ia}\\cup X_{ib}\\cup X_{ic}=S^1$. This follows as otherwise there is some point $p$ in the intersection $X_{ja}\\cap X_{jb}\\cap X_{jc}$ for the index $j\\neq i$ (since $X_{1*} = \\overline{S^1\\backslash X_{2*}}$ for any index $*$). \n\nSuppose that for some indices $i$, $a\\neq b$ we have that $X_{ia}\\subset X_{ib}$, and let $c$ and $d$ be the two remaining distinct indices of $\\{1,2,3,4\\}\\backslash\\{a,b\\}$. Let $p$ be an endpoint of $X_{ib}$. 
We have that $p$ must be in both $X_{ic}$ and $X_{id}$, since otherwise there would be some point $q \in S^1\backslash X_{ib}$ near $p$ so that $q$ is not in either of $X_{ia}\cup X_{ib}\cup X_{ic}= X_{ib}\cup X_{ic}$ or $X_{ia}\cup X_{ib}\cup X_{id}= X_{ib}\cup X_{id}$. But this contradicts the fact that $X_{ib}\cap X_{ic}\cap X_{id}= \emptyset$.\n\nIt now immediately follows that for any index $i$ and two distinct indices $a$ and $b$, we have that $X_{ia}\cap X_{ib}$ is a non-empty closed interval (possibly a single point) while $X_{ia}\cup X_{ib}$ is also a closed interval which misses some points in $S^1$.\n\nBut now we are done as follows. For any index $i$ the intervals $X_{i1}$, $X_{i2}$, and $X_{i3}$ cover the circle, and have the properties that each pair of sets intersects in an interval, and no pair covers the whole circle. Now consider $X_{i4}$. It must likewise intersect both $X_{i1}$ and $X_{i2}$ non-trivially, and the union of $X_{i1}$, $X_{i2}$ and $X_{i4}$ also covers the whole circle. Therefore the endpoint of $X_{i1}$ which is not in $X_{i2}$ is in both $X_{i3}$ and $X_{i4}$.
Hence $X_{i1}\\cap X_{i3}\\cap X_{i4}\\neq \\emptyset$, which implies that the group $G_i$ cannot be free on four generators, as $f_{i,13}$ and $f_{i,14}$ share a common fixed point and will not generate a free subgroup of $G_i$.\n\\end{proof}\n\n\\begin{remark}We observe that it is still plausible that even with $g_1$ and $g_2$ as in the proof above (supports over $(0,1\/2)$ and $(1\/2,1)$, respectively), one could plausibly find $r$, $s$, and $c_1$, $c_2$, and $c_3\\in C_T(sr)$ so that setting $k_{ij}= c_jsg_irc_j^{-1}$ as above we would have $H_r=\\langle k_{r1}, k_{r2}, k_{r3}\\rangle$ free on three generators for both $r=1$ and $r=2$, where the related claim for even two generator free groups could not be conceived of under Condition \\ref{kesten-test}.\n \\end{remark}\n \n\\section{A Ping-Pong Lemma for orientation preserving homeomorphisms of $S^1$}\\label{p-pong}\n\nIn this section, we prove a version of the Ping-Pong Lemma which we are using in our main argument. In the notations below we write all actions as left actions, in keeping with the tradition in the $C^*$ literature, although much Thompson groups literature uses right action. In particular, if $x\\in S^1$ and $s$,$t\\in T$, we write $tx$ for the image of $x$ under $t$, and the conjugation $s^t:= sts^{-1}$, which means, apply $s^{-1}$ first, then $t$, and then $s$. We consider finite sets with repetitions. \n\nIn support of that lemma we ask the reader to recall an ordinary statement of Fricke and Klein's Ping-Pong Lemma (first proven in \\cite{FrickeKlein}, but we give a different statement), and two further facts, one quite classical.\n\n\n\n\\begin{lemma}(Ping-Pong Lemma)\\label{ping-pong} Let $G$ be a group of permutations on a set $X$, and let $a$, $b\\in G$, where $b^2\\neq 1$. 
If $X_a$ and $X_b$ are two subsets of $X$ so that neither is contained in the other, and for all integers $n$ we have $b^n\cdot X_a\subset X_b$ whenever $b^n\neq 1$, and $a^n\cdot X_b\subset X_a$ whenever $a^n\neq 1$, then $\langle a,b\rangle$ factors naturally as the free product of $\langle a\rangle$ and $\langle b\rangle$. In particular, $\langle a,b\rangle\cong \langle a\rangle *\langle b\rangle$.\n\end{lemma}\n\nIf $f:S^1\rightarrow S^1$ is an orientation preserving homeomorphism of the circle $S^1=\mathbb{R}\/\mathbb{Z}$, then $f$ may be lifted to a homeomorphism $F$ of $\mathbb{R}$ satisfying $F(x+m)=F(x)+m$ for every $x\in\mathbb{R}$ and $m\in\mathbb{Z}$. The rotation number of $f$ is defined to be $Rot(f)=\lim_{n\rightarrow \infty} (F^n(x)-x)\/{n}$; modulo $\mathbb{Z}$, this value is independent of the choice of $x$ and of the lift $F$.\nThe following theorem is generally relevant to the arguments in the final section of this paper, and appears first in \cite{GhysSergiescu}, although there now exist many different proofs, the shortest of which appears to be in \cite{BKMStructure}.\n\begin{theorem}\nEvery element of Thompson's group $T$ has rational rotation number.\n\end{theorem}\nThe last tool we need in order to establish our own version of the Ping-Pong Lemma is the following classical result of Poincar\'e.\n\begin{lemma}(Poincar\'e's Lemma, circa 1905)\nIf $f$ is an orientation preserving homeomorphism of $S^1$ and $f$ has rotation number $p\/q$ in lowest terms, then there is an orbit in $S^1$ of size exactly $q$ under the action of $\langle f\rangle$.\n\end{lemma}\n\n\n\nWe are now in a good position to state and prove our main technical tool.\n\n\n\begin{lemma}\label{free-powering-one}\nSuppose $a$ and $b$ are orientation preserving homeomorphisms of the circle $S^1$ with rational rotation numbers $Rot(a)=p\/q$ and $Rot(b)=r\/s$ in lowest non-negative terms such that\n\begin{enumerate}\n\item $b$ is not torsion, and \n\item if $x\in Fix(b^s)$ and $j\in\mathbb{Z}$ with $a^j\neq 1_T$, then we have $a^jx\not\in
Fix(b^s)$.\n\end{enumerate}\nThen there is a positive integer $k$ so that $a$ and $b^k$ form a free basis for the group $\langle a, b^k\rangle$.\n\end{lemma}\n\begin{proof}\nIn the proof below, let us take $a$, $b\in T$ and $p$, $q$, $r$, $s\in \mathbb{N}$ as in the statement of the lemma. \nSet $b_0:=b$. We will occasionally update to a new version of $b$, which will be given by a new index. The new $b$ will always be an integral power of the previously indexed $b$.\n\nSet $b_1:=b_0^s$. The element $b_1$ will have rotation number $0\/1$ in lowest non-negative terms. The set $Fix(b_1)$ is not empty, and is also not the whole circle (otherwise $b$ would have been a torsion element of $T$).\n\nLet $\mathscr{I}\subset S^1$ be such that for each component $C$ of $Supp(b_1)$, we have $|C\cap \mathscr{I}|=1$, and associate each such $C$ with its unique point in $\mathscr{I}$, so that $\mathscr{I}$ becomes an index set for the components of $Supp(b_1)$. We observe that $\mathscr{I}$ comes with an inherent circular order as a subset of $S^1$. Let $L_b$ represent the set of limit points of $\mathscr{I}$ which are not in $\mathscr{I}$, and observe that $L_b\subset Fix(b_1)$.\n\nFor each positive integer $d$, set $\Delta_d:=[-d,d]\cap(\mathbb{Z}\backslash\{0\})$, the set of non-zero integers at distance $d$ or less from zero. Now for all positive integers $d$ we can set $\epsilon_d$ to be one half of the distance from $Fix(b_1)$ to the set $\cup _{i\in\Delta_d} a^i\cdot Fix(b_1)$.
Note that these $\epsilon_d$ are all well defined and non-zero (unless $a$ is torsion and $d$ is at least its order): the sets involved are compact, and $a^m\cdot Fix(b_1)\cap a^n\cdot Fix(b_1) \neq \emptyset$ implies that either $m=n$ or that $a$ is torsion and $n-m$ is divisible by the order of $a$.\n\n\n Our analysis now splits, depending on whether or not $a$ is torsion. In the case that $a$ is torsion, our proof is somewhat easier, so we will carry out that proof first.\n\n\vspace{.1 in}\n{\flushleft {\it \underline{Case}: $a$ is torsion with order $q$.}}\n\nIn this case, the value $\epsilon_{q-1}$ explicitly measures one half of the distance between $Fix(b_1)$ and the union of the images of $Fix(b_1)$ under the action of non-trivial powers of $a$. Set $\mathcal{U}$ to be the open $\epsilon_{q-1}$ neighbourhood of $Fix(b_1)$, and observe that for each integer $i\in \{1,2,\ldots ,q-1\}$ we have $a^i\cdot Fix(b_1)\cap \mathcal{U}=\emptyset$. For each $i\in \left\{1,2,\ldots, q-1\right\}$ set $\mathcal{U}_i$ to be the $\epsilon_{q-1}$ neighbourhood of $a^i\cdot Fix(b_1)$. Again, for all such indices $i$, $\mathcal{U}\cap\mathcal{U}_i=\emptyset$. Set \n\[\nX_b:= \mathcal{U}\bigcap_{1\leq i0$, we shall use the notation $N_\epsilon(X)$ to denote the open $\epsilon$-neighbourhood of $X$, that is, all points in $S^1$ at distance less than $\epsilon$ from some point in $X$.\n\nIn this case with $a$ not torsion, we must specify the set $F_a:= Fix(a^q)$, which is a closed non-empty subset of the circle which is disjoint from $Fix(b_1)$. Choose a specific $\epsilon>0$ so that $N_\epsilon(Fix(b_1))\cap N_\epsilon(F_a)= \emptyset$; such an $\epsilon$ exists since $Fix(b_1)$ and $F_a$ are disjoint compact subsets of $S^1$.\n\nLet $m$ be a positive integer so that both $a^{mq}\cdot Fix(b_1)\subset N_\epsilon(F_a)$ and $a^{-mq}\cdot Fix(b_1)\subset N_\epsilon(F_a)$.
This $m$ exists for the following reasons: $a^{q}$ acts as a strictly increasing, or as a strictly decreasing, function over each component of its support; the limit of any point in a component of support of $a^q$ under increasing powers of $a^q$ must be a fixed point of $a^q$ (and similarly under negative powers of $a^q$); and $Fix(b_1)$ is a compact set and hence is contained in a union of finitely many components of support of $a^q$.\n\nWe now observe that for $n$ an integer with $|n|>m$, we have that $a^{nq}\cdot Fix(b_1)\subset N_\epsilon(F_a)$ as well. We would now like to argue the stronger result that there is a positive integer $N$ so that for all $j>N$ we have $a^{j}\cdot Fix(b_1)\subset N_\epsilon(F_a)$ and $a^{-j}\cdot Fix(b_1)\subset N_\epsilon(F_a)$.\n\nTo make this argument, the main point to observe is that there is an induced action of $\langle a\rangle$ on the set of components of support of $a^q$ which partitions these components into (possibly infinitely many) orbits of size $q$. Further, as $a$ commutes with $a^q$ and $a$ is orientation preserving, it is easy to see that each such orbit consists of components of support where the action of $a^q$ is increasing on all components of the orbit, or decreasing on all components of the orbit.\n\nIt is also the case that there are only finitely many components of support of $a^q$ which are not already wholly contained in $N_\epsilon(F_a)$. Let $C_1$, $C_2$, $\ldots$, $C_w$ represent these components, and observe that $Fix(b_1)$ is contained in the union $K$ of the closures of these components. For each component $C_j$, let $I_j$ be the closed interval $C_j\backslash N_\epsilon(F_a)$. Now each of these components $C_j$ is in an orbit of length $q$ amongst the components of support of $a^q$, and in each such orbit the action of $a^q$ on each component is in the same direction.
Hence there is a finite number $N$ so that for all $j>N$ and all intervals $I_l$, we have that $a^j\cdot I_l\subset N_\epsilon(F_a)$, and also $a^{-j}\cdot I_l\subset N_\epsilon(F_a)$.\n\nNow define $J$ as below:\n\[\nJ:=\cup_{i\in\Delta_N} ((a^i\cdot Fix(b_1))\cap K),\n\]\nwhere we recall that $\Delta_n=[-n,n]\cap(\mathbb{Z}\backslash\{0\})$ for any particular $n\in\mathbb{N}$.\n\nIt is immediate that $J$ is a compact set which is disjoint from $Fix(b_1)$. As such, there is a $\delta>0$ so that $\delta<\epsilon$ and the $\delta$-neighbourhood $N_\delta(Fix(b_1))$ of $Fix(b_1)$ is disjoint from the set $V_\delta$ defined as\n\[\nV_\delta:=\cup_{i\in\Delta_N} (a^i\cdot N_\delta(Fix(b_1))).\n\]\nNote that, as $\delta<\epsilon$, we also have that $N_\delta(Fix(b_1))$ is disjoint from $N_\epsilon(F_a)$.\n\nNow set $X_b:=N_\delta(Fix(b_1))$ and $X_a:=N_\epsilon(F_a)\cup V_\delta$.\n\nBy construction, there is an integer $z>0$ so that every non-trivial power of $b_1^z$ takes the complement of $X_b$ (and so, $X_a$) into $X_b$, while all non-trivial powers of $a$ take $X_b$ into $X_a$.
Hence the integer $k=s\\cdot z$ has the property that $a$ and $b^k$ freely generate a free group of rank $2$.\n\\end{proof}\n\n\\section{Applying Condition \\ref{weak-kesten}, and variants, in $T$}\\label{trying-conditions}\nHere we list lemmas, where the Condition \\ref{weak-kesten} can be used.\n\n\\begin{lemma}\\label{fixed-point}\nLet $H$ be a finite set of nontrivial elements in $T$ so that there is some point $p\\in \\cap_{h\\in H} Supp(h).$ Then, for any positive integer $n$ there is an element $g\\in T$ and $\\{c_1,c_2,\\ldots,c_n\\}$ so that $c_i\\in C_T(g)$ for all $i$, and so that for all $h\\in H$ we have the set\n\\[\nG_h:=\\left\\{ (gh)^{c_i}\\mid i\\in \\left\\{1,2,\\ldots, n\\right\\}\\right\\}\n\\]\nis a free basis for a free group of rank $n$.\n\\end{lemma}\n\\begin{proof}\nLet $H$ and $p$ as in the statement of the lemma, and let $n\\in \\mathbb{N}$ be given. For each $h\\in H$, let $Rot(h):=r_h\/s_h$ written in lowest terms (NB, any finite periodic orbit under the action of $\\langle h\\rangle$ is of length $s_h$). By the definition of $p$, we see there is a non-empty interval $(a,b)$ with $p\\in (a,b)$ so that for all $h\\in H$ we have $(a,b)\\cdot h^j\\cap (a,b)=\\emptyset$ for all $1\\leq j0$. One of the advantages of \\eqref{eqn:a10} over \\eqref{eqn:a10p} is that it offers direct control over estimators' sparsity via the discrete parameter $k$, as opposed to the Lagrangian form \\eqref{eqn:a10p} for which the correspondence between the continuous parameter $\\mu$ and the resulting sparsity of estimators obtained is not entirely clear. 
For further discussion, see \cite{convspen}.\n\n\nAnother class of problems that has received considerable attention in the statistics and machine learning literature is the following:\n\begin{equation}\label{eqn:a1}\n\min_{\bs\beta} \frac{1}{2}\|\mb y-{\mb X}{\bs\beta}\|_2^2 + R({\bs\beta}),\n\end{equation}\nwhere $R({\bs\beta})$ is a choice of regularizer which encourages sparsity in ${\bs\beta}$.\nFor example, the popularly used Lasso \cite{tibshirani} takes the form of problem \eqref{eqn:a1} with $R({\bs\beta})=\mu\|{\bs\beta}\|_1$, where $\|\cdot\|_1$ is the $\ell_1$ norm; in doing so, the Lasso simultaneously selects variables and performs shrinkage. \nThe Lasso has seen widespread success across a variety of applications.\n\nIn contrast to the convex approach of the Lasso, there has also been growing interest in considering richer classes of regularizers $R$ which include nonconvex functions. Examples of such penalties include the $\ell_{q}$-penalty (for $q\in [0,1]$), the minimax concave penalty (MCP) \cite{mcp}, and the smoothly clipped absolute deviation (SCAD) \cite{scad}, among others. Many of the nonconvex penalty functions considered are \emph{coordinate-wise separable}; in other words, $R$ can be decomposed as\n$$R({\bs\beta}) = \sum_{i=1}^p \rho(|\beta_i|),$$\nwhere $\rho(\cdot)$ is a real-valued function \cite{zhangzhang}. There is a variety of evidence suggesting the promise of such nonconvex approaches in overcoming certain shortcomings of Lasso-like approaches.\n\nOne of the central ideas of nonconvex penalty methods used in sparse modeling is that of creating a continuum of estimation problems which bridges the gap between convex methods for sparse estimation (such as the Lasso) and subset selection in the form \eqref{eqn:a10}.
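Coordinate-wise separable penalties of the form above are cheap to evaluate entry by entry. As a concrete illustration, the sketch below (our own rendering in Python; the function names and the particular piecewise form of MCP with parameters `lam` and `gamma` are illustrative, not code from the references) evaluates a separable penalty built from MCP:

```python
def mcp_penalty(t, lam, gamma):
    """MCP rho(|t|): lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    and the constant gamma*lam^2/2 beyond that threshold."""
    t = abs(t)
    if t <= gamma * lam:
        return lam * t - t * t / (2.0 * gamma)
    return 0.5 * gamma * lam * lam

def separable_penalty(beta, rho):
    """Evaluate R(beta) = sum_i rho(|beta_i|) coordinate by coordinate."""
    return sum(rho(b) for b in beta)
```

Note how the penalty saturates at the constant `gamma * lam**2 / 2`, so large entries incur no additional shrinkage, in contrast with the Lasso's linear growth.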
However, as noted above, such a connection does not necessarily offer direct control over the desired level of sparsity of estimators.\n\n\subsection*{The trimmed Lasso}\n\nIn contrast with coordinate-wise separable penalties as considered above, we consider a family of penalties that are not separable across coordinates. One such penalty which forms a principal object of our study herein is \n$$\tk{{\bs\beta}} := \min_{\substack{\|\bs \phi\|_0\leq k}} \|\bs \phi-{\bs\beta}\|_1.$$\nThe penalty $T_k$ measures the distance of ${\bs\beta}$ from the set of $k$-sparse estimators via the $\ell_1$ norm. In other words, when used in problem \eqref{eqn:a1}, the penalty $R=T_k$ controls the amount of shrinkage towards sparse models. \n\nThe penalty $T_k$ can equivalently be written as \n$$\tk{{\bs\beta}} = \sum_{i=k+1}^p |\beta_{(i)}|,$$\nwhere $|\beta_{(1)}|\geq |\beta_{(2)}|\geq \cdots\geq |\beta_{(p)}|$ are the sorted entries of ${\bs\beta}$. In words, $\tk{{\bs\beta}}$ is the sum of the absolute values of the $p-k$ smallest magnitude entries of ${\bs\beta}$. The penalty was first introduced in \cite{thiao,hempel,gotoh1,gotoh2}. We refer to this family of penalty functions (over choices of $k$) as the \emph{trimmed Lasso}.\footnote{The choice of name is our own and is motivated by the least trimmed squares regression estimator, described below.} The case of $k=0$ recovers the usual Lasso, as one would suspect. The distinction, of course, is that for general $k$, $T_k$ no longer shrinks, or biases towards zero, the $k$ largest entries of ${\bs\beta}$.\n\nLet us consider the least squares loss regularized via the trimmed Lasso penalty---this leads to the following optimization criterion:\n\begin{equation}\label{eqn:rmaux1}\n\displaystyle\min_{{\bs\beta}} \frac{1}{2}\|\mb y-{\mb X}{\bs\beta}\|_2^2 + \lambda \tk{{\bs\beta}},\n\end{equation}\nwhere $\lambda>0$ is the regularization parameter.
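The sorted-entries form of $T_k$ makes both the penalty and the criterion \eqref{eqn:rmaux1} straightforward to evaluate; a minimal sketch (our own illustration):

```python
import numpy as np

def trimmed_lasso(beta, k):
    """T_k(beta): sum of the p - k smallest-magnitude entries of beta."""
    mags = np.sort(np.abs(beta))[::-1]  # |beta_(1)| >= ... >= |beta_(p)|
    return float(mags[k:].sum())

def objective(beta, X, y, k, lam):
    """The trimmed Lasso criterion: least squares loss plus lam * T_k."""
    r = y - X @ beta
    return 0.5 * float(r @ r) + lam * trimmed_lasso(beta, k)
```

For instance, `trimmed_lasso(np.array([3.0, -1.0, 0.5]), 1)` drops the largest magnitude and returns `1.5`, while `k = 0` recovers the full $\ell_1$ norm.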
The penalty term shrinks the smallest $p-k$ entries of ${\bs\beta}$ and does not impose any penalty on the largest $k$ entries of ${\bs\beta}$. As $\lambda$ becomes larger, the smallest $p-k$ entries of ${\bs\beta}$ are shrunk further; after a certain threshold---as soon as $\lambda \geq \lambda_0$ for some finite $\lambda_0$---the smallest $p-k$ entries are set to zero.\nThe existence of a finite $\lambda_0$ (as stated above) is an attractive feature of the trimmed Lasso and is known as its \emph{exactness} property, namely, for $\lambda$ sufficiently large, the problem \eqref{eqn:rmaux1} exactly solves constrained best subset selection as in problem \eqref{eqn:a10} (\emph{cf.} \cite{gotoh1}). Note here the contrast with the separable penalty functions which correspond instead with problem \eqref{eqn:a10p}; as such, the trimmed Lasso is distinctive in that it offers precise control over the desired level of sparsity vis-\`a-vis the discrete parameter $k$. Further, it is also notable that many algorithms developed for separable-penalty estimation problems can be directly adapted for the trimmed Lasso.\n\nOur objective in studying the trimmed Lasso is distinct from that of previous approaches. In particular, while previous work on the penalty $T_k$ has focused primarily on its use as a tool for reformulating sparse optimization problems \cite{thiao,hempel} and on how such reformulations can be solved computationally \cite{gotoh1,gotoh2}, we instead aim to explore the trimmed Lasso's structural properties and its relation to existing sparse modeling techniques.\n\nIn particular, a natural question we seek to explore is the following: what is the connection between the trimmed Lasso penalty and existing separable penalties commonly used in sparse statistical learning?
For example, the trimmed Lasso bears a close resemblance to the clipped (or capped) Lasso penalty \cite{cl}, namely,\n$$\sum_{i=1}^p \mu\min\{\gamma|\beta_i|,1\},$$\nwhere $\mu,\gamma>0$ are parameters (when $\gamma$ is large, the clipped Lasso approximates $\mu\|{\bs\beta}\|_0$).\n\n\n\n\n\n\subsection*{Robustness: robust statistics and robust optimization}\n\nA significant thread woven throughout the consideration of penalty methods for sparse modeling is the notion of robustness---in short, the ability of a method to perform in the face of noise. Not surprisingly, the notion of robustness has myriad distinct meanings depending on the context. Indeed, as Huber, a pioneer in the area of robust statistics, aptly noted:\n\begin{quote}\n``The word `robust' is loaded with many---sometimes inconsistent---connotations.'' \cite[p. 2]{huber}\n\end{quote}\nFor this reason, we consider robustness from several perspectives---both the robust statistics~\cite{huber} and robust optimization~\cite{RObook} viewpoints.\n\nA common premise of the various approaches is as follows: that a robust model should perform well even under small deviations from its underlying assumptions; and that to achieve such behavior, some efficiency under the assumed model should be sacrificed. Not surprisingly in light of Huber's prescient observation, the exact manifestation of this idea can take many different forms, even if the initial premise is ostensibly the same.\n\n\subsubsection*{Robust statistics and the ``min-min'' approach}\n\nOne such approach is in the field of robust statistics \cite{huber,robRegBook,robStatsSurvey}. In this context, the primary assumptions are often probabilistic, i.e. distributional, in nature, and the deviations to be ``protected against'' include possibly gross, or arbitrarily bad, errors. Put simply, robust statistics is primarily focused on analyzing and mitigating the influence of outliers on estimation methods.
\n\n\n\nThere have been a variety of proposals of different estimators to achieve this. One that is particularly relevant for our purposes is that of \emph{least trimmed squares} (``LTS'') \cite{robRegBook}. For fixed $j\in\{1,\ldots,n\}$, the LTS problem is defined as\n\begin{equation}\label{eqn:introlts}\n\min_{\bs\beta} \sum_{i=j+1}^n |r_{(i)}({\bs\beta})|^2,\n\end{equation}\nwhere $r_i({\bs\beta}) = y_i-\mb x_i'{\bs\beta}$ are the residuals and $r_{(i)}({\bs\beta})$ are the sorted residuals given ${\bs\beta}$ with $|r_{(1)}({\bs\beta})|\geq |r_{(2)}({\bs\beta})|\geq\cdots\geq |r_{(n)}({\bs\beta})|$. In words, the LTS estimator performs ordinary least squares on the $n-j$ smallest residuals (discarding the $j$ largest or worst residuals). \n\n\n\nFurthermore, it is particularly instructive to express \eqref{eqn:introlts} in the equivalent form (\emph{cf.} \cite{bmlqs})\n\begin{equation}\label{eqn:introltsalt}\n\min_{\bs\beta}\min_{\substack{I\subseteq \{1,\ldots,n\}:\\|I|=n-j }} \sum_{i\in I} |r_i ({\bs\beta})|^2.\n\end{equation}\nIn light of this representation, we refer to LTS as a form of ``min-min'' robustness. One could also interpret this min-min robustness as \emph{optimistic} in the sense that the estimation problems \eqref{eqn:introltsalt} and, \emph{a fortiori}, \eqref{eqn:introlts} allow the modeler to also choose observations to discard. \n\n\n\n\subsubsection*{Other min-min models of robustness}\n\nAnother approach to robustness which also takes a min-min form like LTS is the classical technique known as \emph{total least squares} \cite{tls,tlsoverview}.
For our purposes, we consider total least squares in the form\n\begin{equation}\label{eqn:introeiv}\n\min_{{\bs\beta}}\min_{\bs\Delta} \frac{1}{2}\|\mb y-({\mb X}+\bs\Delta){\bs\beta}\|_2^2 + \eta\|\bs\Delta\|_{2}^2,\n\end{equation}\nwhere $\|\bs\Delta\|_2$ is the usual Frobenius norm of the matrix $\bs\Delta$ and $\eta>0$ is a scalar parameter. In this framework, one again has an optimistic view on error: find the best possible ``correction'' of the data matrix ${\mb X}$ as ${\mb X}+\bs\Delta^*$ and perform least squares using this corrected data (with $\eta$ controlling the flexibility in the choice of $\bs\Delta$).\n\nIn contrast with the penalized form of \eqref{eqn:introeiv}, one could also consider the problem in a constrained form such as\n\begin{equation}\label{eqn:introeivcon}\n\min_{\bs\beta} \min_{\bs\Delta\in\mathcal{V}} \frac{1}{2}\|\mb y-({\mb X}+\bs\Delta){\bs\beta}\|_2^2,\n\end{equation}\nwhere $\mathcal{V}\subseteq\mathbb{R}^{n\times p}$ is defined as $\mathcal{V}= \{\bs\Delta: \|\bs\Delta\|_2\leq \eta'\}$ for some $\eta'>0$. \nThis problem again has the min-min form, although now with the perturbations $\bs\Delta$ restricted to the set $\mathcal{V}$.\n\n\n\n\n\n\subsubsection*{Robust optimization and the ``min-max'' approach}\n\nWe now turn our attention to a different approach to the notion of robustness known as robust optimization \cite{RObook,ROsurvey}. In contrast with robust statistics, robust optimization typically replaces distributional assumptions with a new primitive, namely, the deterministic notion of an \emph{uncertainty set}.
Further, in robust optimization one considers a worst-case or pessimistic perspective and the focus is on perturbations from the nominal model (as opposed to possible gross corruptions as in robust statistics).\n\nTo be precise, one possible robust optimization model for linear regression takes the form \cite{xu,RObook,bcrobreg}\n\begin{equation}\label{eqn:roprimitive}\n\min_{\bs\beta}\max_{\bs\Delta\in\mathcal{U}} \frac{1}{2}\|\mb y-({\mb X}+\bs\Delta){\bs\beta}\|_2^2,\n\end{equation}\nwhere $\mathcal{U}\subseteq\mathbb{R}^{n\times p}$ is a (deterministic) uncertainty set that captures the possible deviations of the model (from the nominal data ${\mb X}$). \nNote the immediate contrast with the robust models considered earlier (LTS and total least squares in \eqref{eqn:introlts} and \eqref{eqn:introeiv}, respectively) that take the min-min form; instead, robust optimization focuses on ``min-max'' robustness. For a related discussion contrasting the min-min approach with min-max, see \cite{worstbest,optimisticrobust} and references therein.\n\nOne of the attractive features of the min-max formulation is that it gives a re-interpretation of several statistical regularization methods. For example, the usual Lasso (problem \eqref{eqn:a1} with $R=\mu\ell_1$) can be expressed in the form \eqref{eqn:roprimitive} for a specific choice of uncertainty set:\n\begin{proposition}[e.g.
\\cite{xu,RObook}]\\label{prop:lasso}\nProblem \\eqref{eqn:roprimitive} with uncertainty set $\\mathcal{U} = \\{\\bs\\Delta: \\|\\bs\\Delta_i\\|_2\\leq \\mu\\;\\forall i\\}$\nis equivalent to the Lasso, i.e., problem \\eqref{eqn:a1} with $R({\\bs\\beta})=\\mu\\|{\\bs\\beta}\\|_1$, where $\\bs\\Delta_i$ denotes the $i$th column of $\\bs\\Delta$.\n\\end{proposition}\n\\noindent For further discussion of the robust optimization approach as applied to statistical problems, see \\cite{bcrobreg} and references therein.\n\n\n\\subsubsection*{Other min-max models of robustness}\n\nWe close our discussion of robustness by considering another example of min-max robustness that is of particular relevance to the trimmed Lasso. In particular, we consider problem \\eqref{eqn:a1} with the SLOPE (or OWL) penalty \\cite{slope,owl}, namely,\n$$R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) = \\sum_{i=1}^p w_i |\\beta_{(i)}|, $$\nwhere $\\mb w$ is a (fixed) vector of weights with $w_1\\geq w_2\\geq \\cdots\\geq w_p\\geq 0$ and $w_1>0$. In its simplest form, the SLOPE penalty has weight vector $\\tilde{\\mb w}$, where $\\tilde{w}_1=\\cdots=\\tilde{w}_k=1$, $\\tilde{w}_{k+1}=\\cdots=\\tilde{w}_p=0$, in which case we have the identity\n$$R_{\\textsc{SLOPE}(\\tilde{\\mb w})}({\\bs\\beta}) =\\| {\\bs\\beta} \\|_{1} - T_{k}({{\\bs\\beta}}).$$\n\n\n\nThere are some apparent similarities but also subtle differences between the SLOPE penalty and the trimmed Lasso. From a high level, while the trimmed Lasso focuses on the smallest magnitude entries of ${\\bs\\beta}$, the SLOPE penalty in its simplest form focuses on the \\emph{largest} magnitude entries of ${\\bs\\beta}$. 
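The identity above relating SLOPE and the trimmed Lasso is easy to check numerically; a small sketch (the data values below are our own, chosen for illustration):

```python
import numpy as np

def slope_penalty(beta, w):
    """R_SLOPE(w): decreasing weights applied to sorted magnitudes."""
    mags = np.sort(np.abs(beta))[::-1]
    return float(np.dot(w, mags))

def trimmed_lasso(beta, k):
    """T_k(beta): sum of the p - k smallest magnitudes."""
    mags = np.sort(np.abs(beta))[::-1]
    return float(mags[k:].sum())

# With w~ = (1,...,1,0,...,0) (k ones), SLOPE keeps the k largest magnitudes,
# so ||beta||_1 = R_SLOPE(w~)(beta) + T_k(beta).
beta = np.array([0.3, -2.0, 1.0, 0.1, -0.5])
w_tilde = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
lhs = float(np.abs(beta).sum())
rhs = slope_penalty(beta, w_tilde) + trimmed_lasso(beta, 2)
```

Here `lhs` and `rhs` agree, reflecting that the two penalties split the $\ell_1$ norm into its $k$ largest and $p-k$ smallest parts.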
As such, the trimmed Lasso is generally nonconvex, while the SLOPE penalty is always convex; consequently, the techniques for solving the related estimation problems will necessarily be different.\n\nFinally, we note that the SLOPE penalty can be considered as a min-max model of robustness for a particular choice of uncertainty set:\n\n\\begin{proposition}\\label{prop:slope}\nProblem \\eqref{eqn:roprimitive} with uncertainty set\n$$\\mathcal{U} =\\left\\{\\bs\\Delta :\n\\begin{array}{c}\n\\bs\\Delta \\text{ has at most $k$ nonzero}\\\\\n\\text{columns and } \\|\\bs\\Delta_i\\|_2\\leq \\mu\\;\\forall i\n\\end{array}\\right\\}\n$$\nis equivalent to problem \\eqref{eqn:a1} with $R({\\bs\\beta})=\\mu R_{\\textsc{SLOPE}(\\tilde{\\mb w})}({\\bs\\beta}) $, where $\\tilde{w}_1=\\cdots=\\tilde{w}_k=1$ and $\\tilde{w}_{k+1}=\\cdots=\\tilde{w}_p=0$.\n\\end{proposition}\n\\noindent We return to this particular choice of uncertainty set later. (For completeness, we include a more general min-max representation of SLOPE in Appendix \\ref{app:slope}.)\n\n\n\\subsection*{Computation and Algorithms}\n\nBroadly speaking, there are numerous distinct approaches to algorithms for solving problems of the form \\eqref{eqn:a10}--\\eqref{eqn:a1} for various choices of $R$. We do not attempt to provide a comprehensive list of such approaches for general $R$, but we will discuss existing approaches for the trimmed Lasso and closely related problems. Approaches typically take one of two forms: heuristic or exact.\n\n\\subsubsection*{Heuristic techniques}\n\nHeuristic approaches to solving problems \\eqref{eqn:a10}--\\eqref{eqn:a1} often use techniques from convex optimization \\cite{BV2004}, such as proximal gradient descent or coordinate descent (see \\cite{scad,sparsenet}). Typically these techniques are coupled with an analysis of local or global behavior of the algorithm. 
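As one concrete instance of such a heuristic for the trimmed Lasso criterion \eqref{eqn:rmaux1}, the variational form $T_k({\bs\beta})=\min_{\|\bs\phi\|_0\leq k}\|\bs\phi-{\bs\beta}\|_1$ suggests alternating a hard-thresholding step in $\bs\phi$ with a proximal-gradient step in ${\bs\beta}$. The sketch below is our own illustration of this idea, not an algorithm taken from the works cited:

```python
import numpy as np

def prox_weighted_l1(v, t):
    # Soft-thresholding: the proximal operator of t*||.||_1 at v.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def alt_min_trimmed_lasso(X, y, k, lam, iters=200):
    """Alternate (i) phi <- closest k-sparse vector to beta, and
    (ii) one proximal-gradient step on 0.5*||y - X b||^2 + lam*||phi - b||_1."""
    p = X.shape[1]
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth gradient
    for _ in range(iters):
        phi = np.zeros(p)
        idx = np.argsort(-np.abs(beta))[:k]  # keep the k largest magnitudes
        phi[idx] = beta[idx]
        g = X.T @ (X @ beta - y)
        v = beta - g / L
        # prox of (lam/L)*||phi - .||_1 at v: soft-threshold v toward phi
        beta = phi + prox_weighted_l1(v - phi, lam / L)
    return beta
```

Since $T_k({\bs\beta})\leq\|\bs\phi-{\bs\beta}\|_1$ for every feasible $\bs\phi$, each iteration is nonincreasing in the objective, though only a locally optimal point can be expected in general.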
For example, global behavior is often considered under additional restrictive assumptions on the underlying data; unfortunately, verifying such assumptions can be as difficult as solving the original nonconvex problem. (For example, consider the analogy with compressed sensing \cite{crt,donoho1,gitta} and the hardness of verifying whether underlying assumptions hold \cite{tillman,bandeira}.)\n\n\nThere is also extensive work studying the local behavior (e.g. stationarity) of heuristic approaches to these problems. For the specific problems \eqref{eqn:a10} and \eqref{eqn:a10p}, the behavior of augmented Lagrangian methods \cite{admmsilva,admmteng} and complementarity constraint techniques \cite{mpccportfolio,burdakov,compconl0,asc} has been considered. For other local approaches, see \cite{folded}.\n\n\n\subsubsection*{Exact techniques}\n\nOne of the primary drawbacks of heuristic techniques is that it can often be difficult to verify the degree of suboptimality of the estimators obtained. For this reason, there has been an increasing interest in studying the behavior of exact algorithms for providing certifiably optimal solutions to problems of the form \eqref{eqn:a10}--\eqref{eqn:a1} \cite{bkm,bmlqs,mipgo,discretedantzig}. Often these approaches make use of techniques from \emph{mixed integer optimization} (``MIO'') \cite{bonami} which are implemented in a variety of software, e.g. Gurobi \cite{gurobi}. The tradeoff with such approaches is that they typically carry a heavier computational burden than convex approaches. For a discussion of the application of MIO in statistics, see \cite{bkm,bmlqs,mipgo,discretedantzig}.\n\n\n\n\n\subsection*{What this paper is about}\n\nIn this paper, we focus on a detailed analysis of the trimmed Lasso, especially with regard to its properties and its relation to existing methods. In particular, we explore the trimmed Lasso from two perspectives: that of sparsity as well as that of robustness.
We summarize our contributions as follows:\n\n\\begin{enumerate}\n\n\n\\item We study the robustness of the trimmed Lasso penalty. In particular, we provide several min-min robustness representations of it. We first show that the same choice of uncertainty set that leads to the SLOPE penalty in the min-max robust model \\eqref{eqn:roprimitive} gives rise to the trimmed Lasso in the corresponding min-min robust problem \\eqref{eqn:introeivcon} (with an additional regularization term). This gives an interpretation of the SLOPE and trimmed Lasso as a complementary pair of penalties, one under a pessimistic (min-max) model and the other under an optimistic (min-min) model.\n\nMoreover, we show another min-min robustness interpretation of the trimmed Lasso by comparison with the ordinary Lasso. In doing so, we further highlight the nature of the trimmed Lasso and its relation to the LTS problem \\eqref{eqn:introlts}. \n\n\n\\item We provide a detailed analysis on the connection between estimation approaches using the trimmed Lasso and separable penalty functions. In doing so, we show directly how penalties such as the trimmed Lasso can be viewed as a generalization of such existing approaches in certain cases. In particular, a trimmed-Lasso-like approach always subsumes its separable analogue, and the containment is strict in general. We also focus on the specific case of the clipped (or capped) Lasso \\cite{cl}; for this we precisely characterize the relationship and provide a necessary and sufficient condition for the two approaches to be equivalent. In doing so, we highlight some of the limitations of an approach using a separable penalty function.\n\n\\item Finally, we describe a variety of algorithms, both existing and new, for trimmed Lasso estimation problems. 
We contrast two heuristic approaches for finding locally optimal solutions with exact techniques from mixed integer optimization that can be used to produce certificates of optimality for solutions found via the convex approaches. We also show that the convex envelope \\cite{rockafeller} of the trimmed Lasso takes the form\n$$\\left(\\|{\\bs\\beta}\\|_1 - k\\right)_+,$$\nwhere $(a)_+:=\\max\\{0,a\\}$, a ``soft-thresholded'' variant of the ordinary Lasso. Throughout this section, we emphasize how techniques from convex optimization can be used to find high-quality solutions to the trimmed Lasso estimation problem. An implementation of the various algorithms presented herein can be found at\n\\begin{center}\n\\url{https:\/\/github.com\/copenhaver\/trimmedlasso}.\n\\end{center}\n\n\n\\end{enumerate}\n\n\n\n\\subsubsection*{Paper structure}\n \n\nThe structure of the paper is as follows. In Section \\ref{sec:basic}, we study several properties of the trimmed Lasso, provide a few distinct interpretations, and highlight possible generalizations. In Section \\ref{sec:rob}, we explore the trimmed Lasso in the context of robustness. Then, in Section \\ref{sec:ncpm}, we study the relationship between the trimmed Lasso and other nonconvex penalties. In Section \\ref{sec:algs}, we study the algorithmic implications of the trimmed Lasso. Finally, in Section \\ref{sec:conc} we share our concluding thoughts and highlight future directions.\n\n\n\n\n\n\\section{Structural properties and interpretations}\\label{sec:basic}\n\nIn this section, we provide further background on the trimmed Lasso: its motivations, interpretations, and generalizations. 
Our remarks in this section are broadly grouped as follows: in Section \\ref{ssec:defn} we summarize the trimmed Lasso's basic properties as detailed in \\cite{thiao,hempel,gotoh1,gotoh2}; we then turn our attention to an interpretation of the trimmed Lasso as a relaxation of complementarity constraints problems from optimization (Section \\ref{ssec:compcon}) and as a variable decomposition method (Section \\ref{ssec:vardecomp}); finally, in Sections \\ref{ssec:gens} and \\ref{ssec:dantzig} we highlight the key structural features of the trimmed Lasso by identifying possible generalizations of its definition and its application. These results augment the existing literature by giving a deeper understanding of the trimmed Lasso and provide a basis for further results in Sections \\ref{sec:rob} and \\ref{sec:ncpm}.\n\n\n\\subsection{Basic observations}\\label{ssec:defn}\n\nWe begin with a summary of some of the basic properties of the trimmed Lasso as studied in \\cite{thiao,hempel,gotoh1}.\nFirst of all, let us also include another representation of $T_k$:\n\\begin{lemma}\\label{lemma:miprep}\nFor any ${\\bs\\beta}$,\n$$\\begin{array}{lll}\n\\tk{{\\bs\\beta}} = \\smash{\\displaystyle\\min_{\\substack{I\\subseteq\\{1,\\ldots,p\\}:\\\\|I| = p-k}} \\sum_{i\\in I} |\\beta_i| } = &\\displaystyle\\min_{\\mb z} & \\langle\\mb z,|{\\bs\\beta}|\\rangle\\\\\n & \\operatorname{s.t.}& \\displaystyle\\sum_{i} z_i =p- k\\\\ & &\\displaystyle \\mb z\\in\\{0,1\\}^p,\n\\end{array}$$\nwhere $|{\\bs\\beta}|$ denotes the vector whose entries are the absolute values of the entries of ${\\bs\\beta}$.\n\\end{lemma}\n\\noindent In other words, the trimmed Lasso can be represented using auxiliary binary variables.\n\n\nNow let us consider the problem\n\\begin{equation*}\n\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2+\\lambda \\tk{{\\bs\\beta}},\\tag{$\\textsc{TL}_{\\lambda,k}$}\n\\end{equation*}\nwhere $\\lambda>0$ and $k\\in\\{0,1,\\ldots,p\\}$ are parameters. 
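The two representations of $T_k$ in Lemma \\ref{lemma:miprep}, as the sum of the $p-k$ smallest absolute entries and as a minimization over binary $\mb z$, can be checked numerically. The following Python sketch (function names are our own illustrative choices, not from the literature) enumerates the binary representation by brute force and compares it against the sorting-based formula:

```python
from itertools import combinations

def trimmed_lasso_penalty(beta, k):
    # T_k(beta): sum of the p - k smallest absolute entries of beta
    return sum(sorted(abs(b) for b in beta)[:len(beta) - k])

def trimmed_lasso_binary_rep(beta, k):
    # Lemma miprep: minimize <z, |beta|> over z in {0,1}^p with sum(z) = p - k,
    # enumerated here by brute force over the support of z
    p = len(beta)
    return min(sum(abs(beta[i]) for i in I)
               for I in combinations(range(p), p - k))

beta = [3.0, -0.5, 2.0, 0.1, -4.0]
for k in range(len(beta) + 1):
    assert abs(trimmed_lasso_penalty(beta, k)
               - trimmed_lasso_binary_rep(beta, k)) < 1e-12
```

Since the brute-force enumeration is exponential in $p$, this is only a sanity check on small instances; the sorting-based formula is the one to use in practice.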
Based on the definition of $T_k$, we have the following:\n\\begin{lemma}\\label{lemma:vdrep}\nThe problem $\\tla{\\lambda,k}$ can be rewritten exactly in several equivalent forms:\n\\begin{align*}\n\\tla{\\lambda,k} &=\\min_{\\substack{{\\bs\\beta},\\bs \\phi:\\\\\\|\\bs \\phi\\|_0\\leq k}}\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|^2 + \\lambda \\|{\\bs\\beta}-\\bs \\phi\\|_1\\nonumber\\\\\n&= \\min_{\\substack{{\\bs\\beta},\\bs \\phi,\\bs\\epsilon:\\\\{\\bs\\beta}=\\bs \\phi+\\bs\\epsilon\\\\\\|\\bs \\phi\\|_0\\leq k}}\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|^2 + \\lambda \\|\\bs\\epsilon\\|_1\\nonumber\\\\\n& =\\min_{\\substack{\\bs \\phi,\\bs\\epsilon:\\\\\\|\\bs \\phi\\|_0\\leq k}}\\frac{1}{2}\\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|^2 + \\lambda \\|\\bs\\epsilon\\|_1.\n\\end{align*}\n\\end{lemma}\n\n\n\n\n\\subsubsection*{Exact penalization}\n\nBased on the definition of $T_k$, it follows that $T_k({\\bs\\beta})=0$ if and only if $\\|{\\bs\\beta}\\|_0\\leq k$. Therefore, one can rewrite problem \\eqref{eqn:a10} as\n\\begin{equation*}\n\\min_{T_k({\\bs\\beta})=0} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2.\n\\end{equation*}\nIn Lagrangian form, this would suggest an approximation for \\eqref{eqn:a10} of the form\n\\begin{equation*}\n\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda T_k({\\bs\\beta}),\n\\end{equation*}\nwhere $\\lambda>0$. As noted in the introduction, this approximation is in fact exact (in the sense of \\cite{bert76,bertexact}), summarized in the following theorem; for completeness, we include in Appendix \\ref{app:proof} a full proof that is distinct from that in \\cite{gotoh1}.\\footnote{The presence of the additional regularizer $\\eta\\|{\\bs\\beta}\\|_1$ can be interpreted in many ways. 
For our purposes, it serves to make the problems well-posed.}\n\n\\begin{theorem}[\\emph{c.f.} \\cite{gotoh1}]\\label{thm:exactEquiv}\nFor any fixed $k\\in\\{0,1,2,\\ldots,p\\}$, $\\eta>0$, and problem data $\\mb y$ and ${\\mb X}$, there exists some $\\overline{\\lambda}=\\overline{\\lambda}(\\mb y,{\\mb X})>0$ so that for all $\\lambda>\\overline{\\lambda}$, the problems \n$$\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda \\tk{{\\bs\\beta}} + \\eta \\|{\\bs\\beta}\\|_1 $$\nand\n$$\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta}}& \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1\\\\\n\\operatorname{s.t.}& \\|{\\bs\\beta}\\|_0\\leq k\n\\end{array}$$\nhave the same optimal objective value and the same set of optimal solutions.\n\\end{theorem}\n\n\n\n\n\nThe direct implication is that the trimmed Lasso leads to a continuum (over $\\lambda$) of relaxations to the best subset selection problem starting from ordinary least squares estimation; further, best subset selection lies on this continuum for $\\lambda$ sufficiently large. \n\n\n\n\n\n\n\\subsection{A complementarity constraints viewpoint}\\label{ssec:compcon}\n\n\nWe now turn our attention to a new perspective on the trimmed Lasso as considered via mathematical programming with complementarity constraints (``MPCCs'') \\cite{scholtes,mpcclin,kanzow0,kanzow1,kanzow2,burdakov}, sometimes also referred to as mathematical programs with equilibrium constraints \\cite{bilevel}. By studying this connection, we will show that a penalized form of a common relaxation scheme for MPCCs leads directly to the trimmed Lasso penalty. 
This gives a distinctly different optimization perspective on the trimmed Lasso penalty.\n\n\nAs detailed in \\cite{mpccportfolio,burdakov,compconl0}, the problem \\eqref{eqn:a10} can be exactly rewritten as\n\\begin{equation}\\label{eqn:BSSr}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z}&\\displaystyle \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 \\\\%+ R({\\bs\\beta})\\\\\n\\operatorname{s.t.}& \\sum_iz_i=p-k\\\\\n& \\mb z\\in[0,1]^p\\\\\n& z_i\\beta_i = 0,\n\\end{array}\n\\end{equation}\nby the inclusion of auxiliary variables $\\mb z\\in[0,1]^p$. In essence, the auxiliary variables replace the combinatorial constraint $\\|{\\bs\\beta}\\|_0\\leq k$ with \\emph{complementarity} constraints of the form $z_i\\beta_i=0$. Of course, the problem as represented in \\eqref{eqn:BSSr} is still not directly amenable to convex optimization techniques.\n\nAs such, relaxation schemes can be applied to \\eqref{eqn:BSSr}. One popular method from the MPCC literature is the Scholtes-type relaxation \\cite{kanzow1}; applied to \\eqref{eqn:BSSr} as in \\cite{burdakov,compconl0}, this takes the form\n\\begin{equation}\\label{eqn:BSSr2}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z}& \\displaystyle\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\\\\% + R({\\bs\\beta})\\\\\n\\operatorname{s.t.}& \\sum_iz_i=p-k\\\\\n& \\mb z\\in[0,1]^p\\\\\n& |z_i\\beta_i|\\leq t,\n\\end{array}\n\\end{equation}\nwhere $t>0$ is some fixed numerical parameter which controls the strength of the relaxation, with $t=0$ exactly recovering \\eqref{eqn:BSSr}. In the traditional MPCC context, it is standard to study local optimality and stationarity behavior of solutions to \\eqref{eqn:BSSr2} as they relate to the original problem \\eqref{eqn:a10}, \\emph{c.f.} \\cite{compconl0}.\n\nInstead, let us consider a different approach. 
In particular, consider a penalized, or Lagrangian, form of the Scholtes relaxation \\eqref{eqn:BSSr2}, namely,\n\\begin{equation}\\label{eqn:BSSr3}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z}& \\displaystyle\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\sum_i (|z_i\\beta_i|-t)\\\\% + R({\\bs\\beta})\\\\\n\\operatorname{s.t.}& \\sum_iz_i=p-k\\\\\n& \\mb z\\in[0,1]^p\n\\end{array}\n\\end{equation}\nfor some fixed $\\lambda\\geq0$.\\footnote{To be precise, this is a \\emph{weaker} relaxation than if we had separate dual variables $\\lambda_i$ for each constraint $|z_i\\beta_i|\\leq t$, at least in theory.} Observe that we can minimize \\eqref{eqn:BSSr3} with respect to $\\mb z$ to obtain the equivalent problem\n$$\\min_{{\\bs\\beta}} \\displaystyle \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda T_k({\\bs\\beta}) - p\\lambda t,$$\nwhich is precisely problem $(\\textsc{TL}_{\\lambda,k})$ (up to the fixed additive constant). In other words, the trimmed Lasso can also be viewed as arising directly from a penalized form of the MPCC relaxation, with auxiliary variables eliminated. This gives another view on Lemma \\ref{lemma:miprep} which gave a representation of $T_k$ using auxiliary binary variables.\n\n\n\n\n\n\\subsection{Variable decomposition}\\label{ssec:vardecomp}\n\n\nTo better understand the relation of the trimmed Lasso to existing methods, it is also useful to consider alternative representations. Here we focus on representations which connect it to variable decomposition methods. 
Our discussion here is an extended form of related discussions in \\cite{hempel,gotoh1,gotoh2}.\n\nTo begin, we return to the final representation of the trimmed Lasso problem as shown in Lemma \\ref{lemma:vdrep}, viz.,\n\\begin{equation}\\label{eqn:dtl}\n\\tla{\\lambda,k}=\\min_{\\substack{\\bs \\phi,\\bs\\epsilon:\\\\\\|\\bs \\phi\\|_0\\leq k}}\\frac{1}{2}\\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|^2 + \\lambda \\|\\bs\\epsilon\\|_1\n\\end{equation}\nWe will refer to $\\tla{\\lambda,k}$ in the form \\eqref{eqn:dtl}\nas the \\emph{split} or \\emph{decomposed} representation of the problem. This is because in this form it is clear that we can think about estimators ${\\bs\\beta}$ found via $\\tla{\\lambda,k}$ as being decomposed into two different estimators: a sparse component $\\bs \\phi$ and another component $\\bs\\epsilon$ with small $\\ell_1$ norm (as controlled via $\\lambda$).\n\nSeveral remarks are in order. First, the decomposition of ${\\bs\\beta}$ into ${\\bs\\beta}=\\bs \\phi+\\bs\\epsilon$ is truly a decomposition in that if ${\\bs\\beta}^*$ is an optimal solution to $\\tla{\\lambda,k}$ with $(\\bs \\phi^*,\\bs\\epsilon^*)$ a corresponding optimal solution to the split representation of the problem \\eqref{eqn:dtl}, then one must have that $\\phi_i^*\\epsilon_i^*=0$ for all $i\\in\\{1,\\ldots,p\\}$. In other words, the supports of $\\bs \\phi$ and $\\bs\\epsilon$ do not overlap; therefore, ${\\bs\\beta}^*=\\bs \\phi^*+\\bs\\epsilon^*$ is a genuine decomposition.\n\nSecondly, the variable decomposition \\eqref{eqn:dtl} suggests that the problem of finding the $k$ largest entries of ${\\bs\\beta}$ (i.e., finding $\\bs \\phi$) can be solved as a best subset selection problem with a (possibly different) convex loss function (without $\\bs\\epsilon$). 
To see this, observe that the problem of finding $\\bs \\phi$ in \\eqref{eqn:dtl} can be written as the problem\n$$\\min_{\\|\\bs \\phi\\|_0\\leq k} \\widetilde{L}(\\bs \\phi),$$\nwhere\n$$\\widetilde{L}(\\bs \\phi) = \\min_{\\bs\\epsilon} \\frac{1}{2}\\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2 + \\lambda \\|\\bs\\epsilon\\|_1.$$\nUsing theory on duality for the Lasso problem \\cite{lassodual}, one can argue that $\\widetilde{L}$ is itself a convex loss function. Hence, the variable decomposition gives some insight into how the largest $k$ loadings for the trimmed Lasso relate to solving a related sparse estimation problem.\n\n\\subsubsection*{A view towards matrix estimation}\n\nFinally, we contend that the variable decomposition of ${\\bs\\beta}$ as a sparse component $\\bs \\phi$ plus a ``noise'' component $\\bs\\epsilon$ with small norm is a natural and useful analogue of corresponding decompositions in the matrix estimation literature, such as in factor analysis \\cite{mardia,anderson2003,barth} and robust Principal Component Analysis \\cite{candesrpca}. For the purposes of this paper, we will focus on the analogy with factor analysis.\n\nFactor analysis is a classical multivariate statistical method for decomposing the covariance structure of random variables; see \\cite{fabcm} for an overview of modern approaches to factor analysis. 
Given a covariance matrix $\\bs\\Sigma\\in\\mathbb{R}^{p\\times p}$, one is interested in describing it as the sum of two distinct components: a low-rank component $\\bs\\Theta$ (corresponding to a low-dimensional covariance structure common across the variables) and a diagonal component $\\bs\\Phi$ (corresponding to individual variances unique to each variable)---in symbols, $\\bs\\Sigma=\\bs\\Theta+\\bs\\Phi$.\n\nIn reality, this \\emph{noiseless} decomposition is often too restrictive (see e.g.\\cite{guttman1958extent,shapirorankred,ten1998some}), and therefore it is often better to focus on finding a decomposition $\\bs\\Sigma=\\bs\\Theta+\\bs\\Phi+\\mathcal{N}$, where $\\mathcal{N}$ is a noise component with small norm. As in \\cite{fabcm}, a corresponding estimation procedure can take the form\n\\begin{equation}\\label{eqn:fa}\n\\begin{array}{ll}\n\\displaystyle\\min_{\\bs\\Theta,\\bs\\Phi}& \\|\\bs\\Sigma-(\\bs\\Theta+\\bs\\Phi)\\|\\\\\n\\operatorname{s.t.}& \\operatorname{rank}(\\bs\\Theta)\\leq k\\\\\n& \\bs\\Phi = \\operatorname{diag}(\\Phi_{11},\\ldots,\\Phi_{pp}) \\cg\\mb0\\\\\n& \\bs\\Theta\\cg\\mb0,\n\\end{array}\n\\end{equation}\nwhere the constraint $\\mb A\\cg\\mb0$ denotes that $\\mb A$ is symmetric, positive semidefinite, and $\\|\\cdot\\|$ is some norm. One of the attractive features of the estimation procedure \\eqref{eqn:fa} is that for common choices of $\\|\\cdot\\|$, it is possible to completely eliminate the combinatorial rank constraint and the variable $\\bs\\Theta$ to yield a smooth (nonconvex) optimization problem with compact, convex constraints (see \\cite{fabcm} for details).\n\nThis exact same argument can be used to motivate the appearance of the trimmed Lasso penalty. 
Indeed, instead of considering estimators ${\\bs\\beta}$ which are exactly $k$-sparse (i.e., $\\|{\\bs\\beta}\\|_0\\leq k$), we instead consider estimators which are approximately $k$-sparse, i.e., ${\\bs\\beta}=\\bs \\phi+\\bs\\epsilon$, where $\\|\\bs \\phi\\|_0\\leq k$ and $\\bs\\epsilon$ has small norm. Given fixed ${\\bs\\beta}$, such a procedure is precisely\n$$\\min_{\\|\\bs \\phi\\|_0\\leq k} \\|{\\bs\\beta}-\\bs \\phi\\|.$$\nJust as the rank constraint is eliminated from \\eqref{eqn:fa}, the sparsity constraint can be eliminated from this to yield a continuous penalty which precisely captures the quality of the approximation ${\\bs\\beta}\\approx\\bs \\phi$. The trimmed Lasso uses the choice $\\|\\cdot\\|=\\ell_1$, although other choices are possible; see Section \\ref{ssec:gens}.\n\nThis analogy with factor analysis is also useful in highlighting additional benefits of the trimmed Lasso. One of particular note is that it enables the direct application of existing convex optimization techniques to find high-quality solutions to $\\tla{\\lambda,k}$.\n\n\n\n\\subsection{Generalizations}\\label{ssec:gens}\n\nWe close this section by considering some generalizations of the trimmed Lasso. These are particularly useful for connecting the trimmed Lasso to other penalties, as we will see later in Section \\ref{sec:ncpm}.\n\nAs noted earlier, the trimmed Lasso measures the distance (in $\\ell_1$ norm) from the set of $k$-sparse vectors; therefore, it is natural to inquire what properties other measures of distance might carry. In light of this, we begin with a definition:\n\\begin{definition}\nLet $k\\in\\{0,1,\\ldots,p\\}$ and $g:\\mathbb{R}_+\\to\\mathbb{R}_+$ be any unbounded, continuous, and strictly increasing function with $g(0)=0$. 
Define the corresponding $k$th projected penalty function, denoted $\\pi_k^g$, as\n$$\\pi_k^g({\\bs\\beta}) = \\min_{\\|\\bs \\phi\\|_0\\leq k} \\sum_i g(|\\phi_i-\\beta_i|).$$\n\\end{definition}\n\n\\noindent It is not difficult to argue that $\\pi_k^g$ has as an equivalent definition\n$$\\pi_k^g({\\bs\\beta}) = \\sum_{i>k} g(|\\beta_{(i)}|).$$\nAs an example, $\\pi_k^g$ is the trimmed Lasso penalty when $g$ is the absolute value, viz. $g(x)=|x|$, and so it is a special case of the projected penalties. Alternatively, suppose $g(x) = x^2\/2$. In this case, we get a trimmed version of the ridge regression penalty: $\\sum_{i>k} |\\beta_{(i)}|^2\/2$.\n\nThis class of penalty functions has one notable feature, summarized in the following result:\\footnote{An extended statement of the convergence claim is included in Appendix \\ref{app:proof}.}\n\n\\begin{proposition}\\label{prop:asymp}\nIf $g:\\mathbb{R}_+\\to\\mathbb{R}_+$ is an unbounded, continuous, and strictly increasing function with $g(0)=0$, then for any ${\\bs\\beta}$, \n$\\pi_k^g({\\bs\\beta}) = 0$ if and only if $\\|{\\bs\\beta}\\|_0\\leq k$. \nHence, the problem $\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\pi_k^g({\\bs\\beta})$ converges in objective value to $\\displaystyle\\min_{\\|{\\bs\\beta}\\|_0\\leq k} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2$ as $\\lambda\\to\\infty$.\n\\end{proposition}\n\n\n\n\nTherefore, any projected penalty $\\pi_k^g$ results in the best subset selection problem \\eqref{eqn:a10} asymptotically. While the choice of $g$ as the absolute value gives the trimmed Lasso penalty and leads to exact sparsity in the non-asymptotic regime (\\emph{c.f.} Theorem \\ref{thm:exactEquiv}), Proposition \\ref{prop:asymp} suggests that the projected penalty functions have potential utility in attaining approximately sparse estimators. 
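The equivalence between the projected form and the sorted form of $\pi_k^g$ can likewise be verified numerically on small instances. The Python sketch below (our own illustrative code, not from the literature) does so for the trimmed Lasso choice $g(x)=x$ and the trimmed ridge choice $g(x)=x^2\/2$, using the fact that, since $g$ is increasing with $g(0)=0$, an optimal $\bs \phi$ copies ${\bs\beta}$ on its support:

```python
from itertools import combinations

def projected_penalty_sorted(beta, k, g):
    # pi_k^g(beta) via the sorted form: apply g to the p - k smallest |beta_i|
    return sum(g(b) for b in sorted(abs(x) for x in beta)[:len(beta) - k])

def projected_penalty_direct(beta, k, g):
    # min over k-sparse phi of sum_i g(|phi_i - beta_i|); since g is increasing
    # with g(0) = 0, an optimal phi copies beta on its support, so it suffices
    # to enumerate supports S of size k
    p = len(beta)
    return min(sum(g(abs(beta[i])) for i in range(p) if i not in S)
               for S in combinations(range(p), k))

beta = [1.5, -2.0, 0.3, 0.0, 4.0]
for g in (lambda x: x, lambda x: x * x / 2.0):  # trimmed Lasso, trimmed ridge
    for k in range(len(beta) + 1):
        assert abs(projected_penalty_sorted(beta, k, g)
                   - projected_penalty_direct(beta, k, g)) < 1e-12
```

As before, the enumeration over supports is only feasible for small $p$; the sorted form requires a single sort.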
We will return to the penalties $\\pi_k^g$ again in Section \\ref{sec:ncpm} to connect the trimmed Lasso to nonconvex penalty methods.\n\nBefore concluding this section, we briefly consider a projected penalty function that is different than the trimmed Lasso. As noted above, if $g(x) = x^2\/2$, then the corresponding penalty function is the trimmed ridge penalty $\\sum_{i>k} |\\beta_{(i)}|^2\/2$.\nThe estimation procedure is then\n$$\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 +\\frac{\\lambda}{2} \\sum_{i>k}|\\beta_{(i)}|^2,$$\nor equivalently in decomposed form (\\emph{c.f.} Section \\ref{ssec:vardecomp}),\\footnote{Interestingly, if one considers this trimmed ridge regression problem and uses convex envelope techniques \\cite{rockafeller,BV2004} to relax the constraint $\\|\\bs \\phi\\|_0\\leq k$, the resulting problem takes the form $\\min_{{ \\bs \\phi ,\\bs\\epsilon }} \\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2\/2 + \\lambda \\|\\bs\\epsilon\\|_2^2 + \\tau\\|\\bs \\phi\\|_1$, a sort of ``split'' variant of the usual elastic net \\cite{zhel}, another popular convex method for sparse modeling.}\n$$\\min_{\\substack{\\bs \\phi,\\bs\\epsilon:\\\\\\|\\bs \\phi\\|_0\\leq k}}\\frac{1}{2}\\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2 +\\frac{ \\lambda}{2} \\|\\bs\\epsilon\\|_2^2.$$\nIt is not difficult to see that the variable $\\bs\\epsilon$ can be eliminated to yield\n\\begin{equation}\\label{eqn:ridge}\n\\min_{\\|\\bs \\phi\\|_0\\leq k} \\frac{1}{2}\\left\\|\\mb A(\\mb y-{\\mb X}\\bs \\phi)\\right\\|_2^2,\n\\end{equation}\nwhere $\\mb A = (\\mb I-{\\mb X}({\\mb X}'{\\mb X}+\\lambda\\mb I)^{-1}{\\mb X}')^{1\/2}$. It follows that the largest $k$ loadings are found via a modified best subset selection problem under a different loss function---precisely a variant of the $\\ell_2$ norm. 
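As a sanity check on the elimination of $\bs\epsilon$ above, consider the scalar case $n=p=1$ with ${\mb X}=x$, where $\mb A^2 = 1 - x^2\/(x^2+\lambda)$. The following Python sketch (a minimal example under these scalar assumptions; variable names are ours) verifies that the inner minimization over $\bs\epsilon$ matches the eliminated objective:

```python
def inner_objective(y, x, phi, lam, eps):
    # 0.5*(y - x*(phi + eps))**2 + (lam/2)*eps**2, the decomposed problem (p = n = 1)
    return 0.5 * (y - x * (phi + eps)) ** 2 + 0.5 * lam * eps ** 2

def eliminated_objective(y, x, phi, lam):
    # 0.5 * A^2 * (y - x*phi)**2 with A^2 = 1 - x^2/(x^2 + lam)
    a2 = 1.0 - x * x / (x * x + lam)
    return 0.5 * a2 * (y - x * phi) ** 2

y, x, phi, lam = 2.0, 1.5, 0.4, 0.7
# closed-form minimizer of the inner problem: eps* = x*(y - x*phi)/(x^2 + lam)
eps_star = x * (y - x * phi) / (x * x + lam)
assert abs(inner_objective(y, x, phi, lam, eps_star)
           - eliminated_objective(y, x, phi, lam)) < 1e-12
# eps_star is indeed a minimizer: perturbing it can only increase the objective
for d in (-1e-3, 1e-3):
    assert inner_objective(y, x, phi, lam, eps_star + d) \
           >= inner_objective(y, x, phi, lam, eps_star)
```

The same computation in matrix form recovers the expression for $\mb A$ given above.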
This is in the same spirit of observations made in Section \\ref{ssec:vardecomp}.\n\n\n\\begin{obs}\\label{obs:ridge}\nAn obvious question is whether the norm in \\eqref{eqn:ridge} is genuinely different. Observe that this loss function is the same as the usual $\\ell_2^2$ loss if and only if $\\mb A'\\mb A$ is a non-negative multiple of the identity matrix. It is not difficult to see that this is true iff ${\\mb X}'{\\mb X}$ is a non-negative multiple of the identity. In other words, the loss function in \\eqref{eqn:ridge} is the same as the usual ridge regression loss if and only if ${\\mb X}$ is (a scalar multiple of) an orthogonal design matrix.\n\\end{obs}\n\n\n\n\\subsection{Other applications of the trimmed Lasso: the (Discrete) Dantzig Selector}\\label{ssec:dantzig}\n\n\n\nThe above discussion which pertains to the least squares loss data-fidelity term can be generalized to other loss functions as well.\nFor example, let us consider a data-fidelity term given by the \nmaximal absolute inner product between the features and residuals, given by $\\|{\\mb X}'(\\mb y-{\\mb X}{\\bs\\beta})\\|_\\infty$. An $\\ell_{1}$-penalized version of this data-fidelity term, \npopularly known as the Dantzig Selector~\\cite{dantzig2,dasso}, is given by the following linear optimization problem:\n\\begin{equation}\\label{eqn-DS}\n\\min_{\\bs\\beta} \\|{\\mb X}'(\\mb y-{\\mb X}{\\bs\\beta})\\|_\\infty + \\mu\\|{\\bs\\beta}\\|_1.\n\\end{equation}\nEstimators found via \\eqref{eqn-DS} have statistical properties similar to the Lasso.\nFurther, problem \\eqref{eqn-DS} may be interpreted as an $\\ell_{1}$-approximation to the cardinality constrained version: \n\\begin{equation}\\label{eqn-DS-L0}\n\\min_{\\|{\\bs\\beta}\\|_0\\leq k} \\|{\\mb X}'(\\mb y-{\\mb X}{\\bs\\beta})\\|_\\infty,\n\\end{equation}\nthat is, the Discrete Dantzig Selector, recently proposed and studied in~\\cite{discretedantzig}. 
The statistical properties of~\\eqref{eqn-DS-L0} are similar to those of the best-subset selection problem \\eqref{eqn:a10}, but \\eqref{eqn-DS-L0} may be more attractive from a computational viewpoint as it relies on mixed integer \\emph{linear} optimization as opposed to mixed integer \\emph{conic} optimization (see \\cite{discretedantzig}).\n\n\nThe trimmed Lasso penalty can also be applied to the data-fidelity term $\\|{\\mb X}'(\\mb y-{\\mb X}{\\bs\\beta})\\|_\\infty$, leading to the following estimator:\n$$\\min_{\\bs\\beta} \\|{\\mb X}'(\\mb y-{\\mb X}{\\bs\\beta})\\|_\\infty+\\lambda\\tk{{\\bs\\beta}}+\\mu\\|{\\bs\\beta}\\|_1.$$\nSimilar to the case of the least squares loss function, the above estimator yields $k$-sparse solutions for any $\\mu>0$ and for $\\lambda>0$ sufficiently large.\\footnote{For the same reason, but instead with the usual Lasso objective, the proof of Theorem \\ref{thm:exactEquiv} (see Appendix \\ref{app:proof}) could be entirely omitted; yet, it is instructive to see in the proof there that the trimmed Lasso truly does set the \\emph{smallest} entries to zero, and not simply all entries (when $\\lambda$ is large) like the Lasso.} While this claim follows \\emph{a fortiori} by appealing to properties of the Dantzig selector, it nevertheless highlights how any exact penalty method with a separable penalty function can be turned into a trimmed-style problem which offers direct control over the sparsity level.\n\n\n\n\\section{A perspective on robustness}\\label{sec:rob}\n\n\nWe now turn our attention to a deeper exploration of the robustness properties of the trimmed Lasso. We begin by studying the min-min robust analogue of the min-max robust SLOPE penalty; in doing so, we show under which circumstances this analogue is the trimmed Lasso problem. Indeed, in such a regime, the trimmed Lasso can be viewed as an optimistic counterpart to the robust optimization view of the SLOPE penalty. 
Finally, we turn our attention to an additional min-min robust interpretation of the trimmed Lasso in direct correspondence with the least trimmed squares estimator shown in \\eqref{eqn:introlts}, using the ordinary Lasso as our starting point.\n\n\n\n\\subsection{The trimmed Lasso as a min-min robust analogue of SLOPE}\n\nWe begin by reconsidering the uncertainty set that gave rise to the SLOPE penalty via the min-max view of robustness as considered in robust optimization:\n$$\\mathcal{U}_k^\\lambda:=\\left\\{\\bs\\Delta :\n\\begin{array}{c}\n\\bs\\Delta \\text{ has at most $k$ nonzero}\\\\\n\\text{columns and } \\|\\bs\\Delta_i\\|_2\\leq \\lambda\\;\\forall i\n\\end{array}\\right\\}.\n$$\nAs per Proposition \\ref{prop:slope}, the min-max problem \\eqref{eqn:roprimitive}, viz., \n\\begin{equation*}\n\\min_{\\bs\\beta}\\max_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda} \\frac{1}{2}\\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2^2,\n\\end{equation*}\nis equivalent to the SLOPE-penalized problem\n\\begin{equation}\\label{eqn:slopepen}\n\\min_{\\bs\\beta} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda R_{\\textsc{SLOPE}(\\tilde{\\mb w})} ({\\bs\\beta})\n\\end{equation}\nfor the specific choice of $\\tilde{\\mb w}$ with $\\tilde w_1=\\cdots=\\tilde w_k=1$ and $\\tilde w_{k+1}=\\cdots=\\tilde w_{p}=0$.\n\nLet us now consider the form of the min-min robust analogue of the problem \\eqref{eqn:roprimitive} for this specific choice of uncertainty set. As per the discussion in Section \\ref{sec:intro}, the min-min analogue takes the form of problem \\eqref{eqn:introeivcon}, i.e., a variant of total least squares:\n\\begin{equation*}\n\\min_{\\bs\\beta} \\min_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda} \\frac{1}{2}\\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2^2,\n\\end{equation*}\nor equivalently as the linearly homogeneous problem\\footnote{In what follows, the linear homogeneity is useful primarily for simplicity of analysis, \\emph{c.f.} \\cite[ch. 
12]{RObook}. Indeed, the conversion to linear homogeneous functions is often hidden in equivalence results like Proposition \\ref{prop:slope}.}\n\\begin{equation}\\label{eqn:eivconhom}\n\\min_{\\bs\\beta} \\min_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda} \\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2.\n\\end{equation}\n\n\n\n\\noindent It is useful to consider problem \\eqref{eqn:eivconhom} with an explicit penalization (or regularization) on ${\\bs\\beta}$\n\\begin{equation}\\label{eqn:eivconhompen}\n\\min_{\\bs\\beta} \\min_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda} \\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2 + r({\\bs\\beta}),\n\\end{equation}\nwhere $r(\\cdot)$ is, say, a norm (the use of lowercase is to distinguish from the function $R$ in Section \\ref{sec:intro}).\n\nAs described in the following theorem, this min-min robustness problem \\eqref{eqn:eivconhompen} is equivalent to the trimmed Lasso problem for specific choices of $r$. The proof is contained in Appendix \\ref{app:proof}.\n\n\\begin{theorem}\\label{thm:robeivInterp}\n\nFor any $k$, $\\lambda>0$, and norm $r$, the problem \\eqref{eqn:eivconhompen} can be rewritten exactly as\n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta} } &\\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + r({\\bs\\beta}) - \\lambda \\sum_{i=1}^k|\\beta_{(i)}| \\\\\n\\operatorname{s.t.} & \\displaystyle\\lambda\\sum_{i=1}^k|\\beta_{(i)}| \\leq \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2.\n\\end{array}\n\\end{equation*}\n\\end{theorem}\n\nWe have the following as an immediate corollary:\n\n\\begin{corollary}\\label{cor:slope}\nFor the choice of $r({\\bs\\beta}) = \\tau \\|{\\bs\\beta}\\|_1$, where $\\tau > \\lambda$, the problem \\eqref{eqn:eivconhompen} is precisely\n\\begin{equation}\\label{eqn:cor1}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta} } &\\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + (\\tau-\\lambda)\\|{\\bs\\beta}\\|_1+ \\lambda \\tk{{\\bs\\beta}} \\\\\n\\operatorname{s.t.} & 
\\displaystyle\\lambda\\sum_{i=1}^k|\\beta_{(i)}| \\leq \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2.\n\\end{array}\n\\end{equation}\nIn particular, when $\\lambda>0$ is small, it is approximately equal (in a precise sense)\\footnote{For a precise characterization and extended discussion, see Appendix \\ref{app:proof} and Theorem \\ref{thm:corprecise}. The informal statement here is sufficient for the purposes of our present discussion.} to the trimmed Lasso problem\n$$\\min_{{\\bs\\beta} } \\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + (\\tau-\\lambda)\\|{\\bs\\beta}\\|_1+ \\lambda \\tk{{\\bs\\beta}}.$$\n\\end{corollary}\n\nIn words, the min-min problem \\eqref{eqn:eivconhompen} (with an $\\ell_1$ regularization on ${\\bs\\beta}$) can be written as a variant of a trimmed Lasso problem, subject to an additional constraint. It is instructive to consider both the objective and the constraint of problem \\eqref{eqn:cor1}. To begin, the objective has a combined penalty on ${\\bs\\beta}$ of $(\\tau-\\lambda)\\|{\\bs\\beta}\\|_1 + \\lambda \\tk{{\\bs\\beta}}$. This can be thought of as the more general form of the penalty $T_k$. Namely, one can consider the penalty $T_{\\mb x}$ (with $0\\leq x_1\\leq x_2\\leq\\cdots\\leq x_p$ fixed) defined as\n$$T_{\\mb x} ({\\bs\\beta}) := \\sum_{i=1}^p x_i |\\beta_{(i)}|.$$\nIn this notation, the objective of \\eqref{eqn:cor1} can be rewritten as $\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + T_{\\mb x}({\\bs\\beta})$, with\n$$\\mb x=(\\underbrace{\\tau-\\lambda,\\ldots,\\tau-\\lambda}_{k \\text{ times}},\\underbrace{\\tau,\\ldots,\\tau}_{p-k \\text{ times}}).$$\nIn terms of the constraint of problem \\eqref{eqn:cor1}, note that it takes the form of a model-fitting constraint: namely, $\\lambda$ controls a trade-off between model fit $\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2$ and model complexity measured via the SLOPE norm $\\sum_{i=1}^k |\\beta_{(i)}|$.\n\nHaving described the structure of problem \\eqref{eqn:cor1}, a few remarks are in order. 
First of all, the trimmed Lasso problem (with an additional $\\ell_1$ penalty on ${\\bs\\beta}$) can be interpreted as (a close approximation to) a min-min robust problem, at least in the regime when $\\lambda$ is small; this provides an interesting contrast to the sparse-modeling regime when $\\lambda$ is large (\\emph{c.f.} Theorem \\ref{thm:exactEquiv}). Moreover, the trimmed Lasso is a min-min robust problem in a way that is the \\emph{optimistic} analogue of its min-max counterpart, namely, the SLOPE-penalized problem \\eqref{eqn:slopepen}. Finally, Theorem \\ref{thm:robeivInterp} gives a natural representation of the trimmed Lasso problem in a way that directly suggests why methods from difference-of-convex optimization \\cite{dcSummary} are relevant (see Section \\ref{sec:algs}).\n\n\n\n\\subsubsection*{The general SLOPE penalty}\n\n\n\nLet us briefly remark upon SLOPE in its most general form (with general $\\mb w$); again we will see that this leads to a more general trimmed Lasso as its (approximate) min-min counterpart. In its most general form, the SLOPE-penalized problem \\eqref{eqn:slopepen} can be written as the min-max robust problem \\eqref{eqn:roprimitive} with choice of uncertainty set\n$$\\mathcal{U}_\\mb w^\\lambda =\\left\\{\\bs\\Delta : \\|\\bs\\Delta\\bs \\phi\\|_2 \\leq \\lambda \\sum_i w_i|\\phi_{(i)}|\\;\\forall\\bs \\phi\\right\\}\n$$\n(see Appendix \\ref{app:slope}). 
In this case, the penalized, homogenized min-min robust counterpart, analogous to problem \\eqref{eqn:eivconhompen}, can be written as follows:\n\n\\begin{proposition}\\label{prop:robeivslope}\nFor any $k$, $\\lambda>0$, and norm $r$, the problem\n\\begin{equation}\\label{eqn:auxslopepf1}\n\\min_{\\bs\\beta} \\min_{\\bs\\Delta\\in\\mathcal{U}_\\mb w^\\lambda} \\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2 + r({\\bs\\beta})\n\\end{equation}\ncan be rewritten exactly as\n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta} } &\\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + r({\\bs\\beta}) - \\lambda R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) \\\\\n\\operatorname{s.t.} & \\displaystyle\\lambda R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) \\leq \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2.\n\\end{array}\n\\end{equation*}\nFor the choice of $r({\\bs\\beta}) = \\tau \\|{\\bs\\beta}\\|_1$, where $\\tau > \\lambda w_1$, the problem \\eqref{eqn:auxslopepf1} is\n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta} } &\\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + T_{\\tau\\mb 1 - \\lambda \\mb w}({\\bs\\beta}) \\\\\n\\operatorname{s.t.} & \\displaystyle\\lambda R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) \\leq \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2.\n\\end{array}\n\\end{equation*}\nIn particular, when $\\lambda>0$ is sufficiently small, problem \\eqref{eqn:auxslopepf1} is approximately equal to the generalized trimmed Lasso problem\n$$\\min_{{\\bs\\beta} } \\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + T_{\\tau\\mb 1 - \\lambda \\mb w}({\\bs\\beta}).$$\n\\end{proposition}\n\nPut plainly, the general form of the SLOPE penalty leads to a generalized form of the trimmed Lasso, precisely as was true for the simplified version considered in Theorem \\ref{thm:robeivInterp}.\n\n\\subsection{Another min-min interpretation}\n\nWe close our discussion of robustness by considering another min-min representation of the trimmed Lasso. 
We use the ordinary Lasso problem as our starting point and show how a modification in the same spirit as the min-min robust least trimmed squares estimator in \\eqref{eqn:introlts} leads directly to the trimmed Lasso.\n\nTo proceed, we begin with the usual Lasso problem\n\\begin{equation}\\label{eqn:lasso}\n\\min_{\\bs\\beta} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\|{\\bs\\beta}\\|_1.\n\\end{equation}\nAs per Proposition \\ref{prop:lasso}, this problem is equivalent to the min-max robust problem \\eqref{eqn:roprimitive} with uncertainty set\n$\\mathcal{U} = \\mathcal{L}^\\lambda = \\{\\bs\\Delta:\\|\\bs\\Delta_i\\|_2 \\leq \\lambda\\;\\forall i\\}$:\n\\begin{equation}\\label{eqn:lassorob}\n\\min_{\\bs\\beta}\\max_{\\bs\\Delta\\in\\mathcal{L}^\\lambda} \\frac{1}{2}\\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2^2.\n\\end{equation}\nIn this view, the usual Lasso \\eqref{eqn:lasso} can be thought of as a least squares method which takes into account certain feature-wise adversarial perturbations of the matrix ${\\mb X}$. 
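To make the adversarial picture concrete, note that for the unsquared loss $\|\mb y-({\mb X}+\bs\Delta){\bs\beta}\|_2$ (as in problem \eqref{eqn:roprimitive}), the inner maximum over $\mathcal{L}^\lambda$ has a closed form: it equals $\|\mb y-{\mb X}{\bs\beta}\|_2+\lambda\|{\bs\beta}\|_1$, attained by setting each column to $\bs\Delta_i = -\lambda\,\mathrm{sign}(\beta_i)\,\mb r/\|\mb r\|_2$ with $\mb r = \mb y-{\mb X}{\bs\beta}$. The following pure-Python sketch (our own illustration; the function names are not from any package) verifies this on a toy instance:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def residual(X, y, beta):
    return [yi - sum(row[j] * beta[j] for j in range(len(beta)))
            for row, yi in zip(X, y)]

def worst_case_loss(X, y, beta, lam):
    """Closed-form max of ||y - (X + Delta) beta||_2 over {Delta : ||Delta_i||_2 <= lam}."""
    r = residual(X, y, beta)
    return norm(r) + lam * sum(abs(b) for b in beta)

def worst_case_delta(X, y, beta, lam):
    """A maximizing perturbation: column i equals -lam * sign(beta_i) * r / ||r||_2."""
    r = residual(X, y, beta)
    nr = norm(r)
    sign = lambda b: (b > 0) - (b < 0)
    return [[-lam * sign(beta[i]) * (r[row] / nr) for i in range(len(beta))]
            for row in range(len(r))]

# toy instance: the achieved worst case matches the closed form exactly
X = [[1.0, -1.0], [-1.0, 2.0]]
y = [1.0, 1.0]
beta, lam = [0.5, -0.25], 0.3
delta = worst_case_delta(X, y, beta, lam)
Xpert = [[X[i][j] + delta[i][j] for j in range(2)] for i in range(2)]
achieved = norm(residual(Xpert, y, beta))
print(abs(achieved - worst_case_loss(X, y, beta, lam)) < 1e-12)  # True
```

Since the worst-case columns each have norm exactly $\lambda$ and align against the residual, the adversary's total contribution to the loss is exactly $\lambda\|{\bs\beta}\|_1$.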
The net result is that the adversarial approach penalizes all loadings equally (with coefficient $\\lambda$).\n\n\nUsing this setup and Theorem \\ref{thm:exactEquiv},\nwe can re-express the trimmed Lasso problem $\\tla{\\lambda,k}$ in the equivalent min-min form\n\\begin{equation}\\label{eqn:a2}\n\\min_{\\bs\\beta}\\min_{\\substack{I\\subseteq\\{1,\\ldots,p\\}:\\\\|I|=p-k}} \\max_{\\bs\\Delta\\in\\mathcal{L}^\\lambda_I} \\frac{1}{2}\\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2^2,\n\\end{equation}\nwhere $\\mathcal{L}^\\lambda_I\\subseteq \\mathcal{L}^\\lambda$ requires that the columns of $\\bs\\Delta\\in\\mathcal{L}^\\lambda_I$ are supported on $I$:\n$$\\mathcal{L}^\\lambda_I = \\{\\bs\\Delta: \\|\\bs\\Delta_i\\|_2\\leq \\lambda \\;\\forall i, \\; \\bs\\Delta_i = \\mb 0\\;\\forall i\\notin I\\}.$$\nWhile the adversarial min-max approach in problem \\eqref{eqn:lassorob} would attempt to ``corrupt'' all $p$ columns of ${\\mb X}$, in estimating ${\\bs\\beta}$ we have the power to optimally discard $k$ out of the $p$ corruptions to the columns (corresponding to $I^c$). In this sense, the trimmed Lasso in the min-min robust form \\eqref{eqn:a2} acts in a similar spirit to the min-min, robust-statistical least trimmed squares estimator shown in problem \\eqref{eqn:introltsalt}.\n\n\n\n\n\n\n\\section{Connection to nonconvex penalty methods}\\label{sec:ncpm}\n\n\nIn this section, we explore the connection between the trimmed Lasso and existing, popular nonconvex (component-wise separable) penalty functions used for sparse modeling. We begin in Section \\ref{ssec:ncpoverview} with a brief overview of existing approaches. In Section \\ref{ssec:ncpreform} we then highlight how these relate to the trimmed Lasso, making the connection more concrete with examples in Section \\ref{ssec:ncpeg}. Then in Section \\ref{ssec:ncpgenerality} we exactly characterize the connection between the trimmed Lasso and the clipped Lasso \\cite{cl}. 
In doing so, we show that the trimmed Lasso subsumes the clipped Lasso; further, we provide a necessary and sufficient condition for when the containment is strict. Finally, in Section \\ref{ssec:ncpunbounded} we comment on the special case of unbounded penalty functions.\n\n\n\n\n\\subsection{Setup and Overview}\\label{ssec:ncpoverview}\n\nOur focus throughout will be the penalized $M$-estimation problem of the form\n\\begin{equation}\\label{eqn:ncpm}\n\\min_{{\\bs\\beta}} L({\\bs\\beta}) + \\sum_{i=1}^p \\rho(|\\beta_i|;\\mu,\\gamma),\n\\end{equation}\nwhere $\\mu$ represents a (continuous) parameter controlling the desired level of sparsity of ${\\bs\\beta}$ and $\\gamma$ is a parameter controlling the quality of the approximation of the indicator function $I\\{|\\beta|>0\\}$. A variety of nonconvex penalty functions and their descriptions in this format are shown in Table \\ref{tab:ncp} (for a general discussion, see \\cite{zhangzhang}). In particular, for each of these functions we observe that\n$$\\lim_{\\gamma\\to\\infty} \\rho(|\\beta|;\\mu,\\gamma) = \\mu \\cdot I\\{|\\beta|>0\\}.$$\nIt is particularly important to note the \\emph{separable} nature of the penalty functions appearing in \\eqref{eqn:ncpm}---namely, each coordinate $\\beta_i$ is penalized (via $\\rho$) independently of the other coordinates.\n\nOur primary focus will be on the bounded penalty functions (clipped Lasso, MCP, and SCAD), all of which take the form\n\\begin{equation}\\label{eqn:pff}\n\\rho(|\\beta|;\\mu,\\gamma) = \\mu \\min\\{g(|\\beta|;\\mu,\\gamma),1\\}\n\\end{equation}\nwhere $g$ is an increasing function of $|\\beta|$. 
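For concreteness, these bounded penalties are straightforward to implement; the sketch below (our own illustrative code, using the clipped Lasso and MCP forms from Table \ref{tab:ncp}) also checks numerically that $\rho(|\beta|;\mu,\gamma)\to\mu\cdot I\{|\beta|>0\}$ as $\gamma\to\infty$:

```python
def clipped_lasso(beta, mu, gamma):
    # rho(|beta|; mu, gamma) = mu * min{gamma * |beta|, 1}
    return mu * min(gamma * abs(beta), 1.0)

def mcp(beta, mu, gamma):
    # rho = mu * min{g_1(|beta|), 1}, with g_1 quadratic on [0, 1/gamma] and 1 beyond
    b = abs(beta)
    g1 = 2 * gamma * b - (gamma * b) ** 2 if b <= 1 / gamma else 1.0
    return mu * min(g1, 1.0)

mu = 0.7
for rho in (clipped_lasso, mcp):
    # rho vanishes at 0 and tends to mu at any fixed beta != 0 as gamma grows
    print(rho(0.0, mu, 10.0), round(rho(0.3, mu, 1e6), 6))  # 0.0 0.7
```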
We will show that in this case, the problem \\eqref{eqn:ncpm} can be rewritten exactly as an estimation problem with a (non-separable) trimmed penalty function:\n\\begin{equation}\\label{eqn:ncpt}\n\\min_{{\\bs\\beta}} L({\\bs\\beta}) + \\mu\\sum_{i=\\ell+1}^p g(|\\beta_{(i)} |)\n\\end{equation}\nfor some $\\ell\\in\\{0,1,\\ldots,p\\}$ (note the appearance of the projected penalties $\\pi_k^g$ as considered in Section \\ref{ssec:gens}). In the process of doing so, we will also show that, in general, \\eqref{eqn:ncpt} cannot be solved via the separable-penalty estimation approach of \\eqref{eqn:ncpm}, and so the trimmed estimation problem leads to a richer class of models. Throughout we will often refer to \\eqref{eqn:ncpt} (taken generically over all choices of $\\ell$) as the \\emph{trimmed counterpart} of the separable estimation problem \\eqref{eqn:ncpm}.\n\n\n\\begin{table*}\n\\centering\n \\begin{tabular}{| c | c | c |}\n\\hline \nName & Definition & Auxiliary Functions\\\\\\hline\\hline\nClipped Lasso &\\multirow{2}{*}{$ \\mu\\min\\{\\gamma|\\beta|,1\\} $ } &\\multirow{4}{*}{ \\small $g_1(|\\beta|) = \\left\\{\\begin{array}{rc} 2\\gamma|\\beta|-\\gamma^2\\beta^2, & |\\beta|\\leq 1\/\\gamma,\\\\1 ,& |\\beta|>1\/\\gamma.\\end{array}\\right.$ }\\\\\n\\cite{cl} & & \\\\\\cline{1-2}\nMCP & \\multirow{2}{*}{$\\mu\\min\\{g_1(|\\beta|),1\\}$ } & \\\\\n\\cite{mcp} & & \\\\\\cline{1-2}\nSCAD & \\multirow{2}{*}{$\\mu\\min\\{g_2(|\\beta|),1\\}$} & \\multirow{6}{*}{ \\small $ g_2(|\\beta|) = \\left\\{\n\\begin{array}{rc}\n|\\beta|\/(\\gamma\\mu),& |\\beta| \\leq 1\/\\gamma,\\\\\n \\frac{\\beta^2 +(2\/\\gamma-4\\mu\\gamma)|\\beta| +1\/\\gamma^2}{ 4\\mu - 4\\mu^2\\gamma^2} , & 1\/\\gamma < |\\beta| \\leq 2\\mu\\gamma-1\/\\gamma,\\\\1, & |\\beta| > 2\\mu\\gamma-1\/\\gamma.\n\\end{array}\\right. $ } \\\\\n\\cite{scad} & & \\\\\\cline{1-2}\n$\\ell_q$ ($0<q<1$) & \\multirow{2}{*}{$\\mu|\\beta|^{1\/\\gamma}$, where $\\gamma=1\/q$} & \\\\\n & & \\\\\\cline{1-2}\nLog & \\multirow{2}{*}{$\\mu\\log(\\gamma|\\beta|+1)\/\\log(\\gamma+1)$} & \\\\\n & & \\\\\\hline\n\\end{tabular}\n\\caption{Nonconvex penalty functions $\\rho(|\\beta|;\\mu,\\gamma)$ and their auxiliary functions; each satisfies $\\lim_{\\gamma\\to\\infty}\\rho(|\\beta|;\\mu,\\gamma) = \\mu \\cdot I\\{|\\beta|>0\\}$. 
For SCAD, it is usually recommended to set $2\\mu>3\/\\gamma^2$.\n}\n \\label{tab:ncp}\n\\end{table*}\n\n\n\n\n\n\n\\subsection{Reformulating the problem \\eqref{eqn:ncpm}}\\label{ssec:ncpreform}\n\nLet us begin by considering penalty functions $\\rho$ of the form \\eqref{eqn:pff} with $g$ a non-negative, increasing function of $|\\beta|$. Observe that for any ${\\bs\\beta}$ we can rewrite $\\sum_{i=1}^p \\min\\{g(|\\beta_i|),1\\}$ as\n\\begin{align*}\n&\\min\\left\\{\\sum_{i=1}^p g(|\\beta_{(i)}|),1 + \\sum_{i=2}^p g(|\\beta_{(i)}|),\\ldots, p-1 + g(|\\beta_{(p)}|),p \\right\\}\\\\\n&=\\min_{\\ell\\in\\{0,\\ldots,p\\}} \\left\\{ \\ell + \\sum_{i>\\ell} g(|\\beta_{(i)}|) \\right\\}.\n\\end{align*}\nIt follows that \\eqref{eqn:ncpm}\n can be rewritten \\emph{exactly} as\n\\begin{equation}\\label{eqn:ncptna}\n\\min_{\\substack{{\\bs\\beta},\\\\\\ell\\in\\{0,\\ldots,p\\}}} \\left(L({\\bs\\beta}) + \\mu\\sum_{i>\\ell} g(|\\beta_{(i)}|) + \\mu\\ell\\right).\n\\end{equation}\nAn immediate consequence is the following theorem:\n\n\\begin{theorem}\\label{thm:MasT}\nIf ${\\bs\\beta}^*$ is an optimal solution to \\eqref{eqn:ncpm}, where $\\rho(|\\beta|;\\mu,\\gamma) = \\mu\\min\\{g(|\\beta|;\\mu,\\gamma),1\\}$, then there exists some $\\ell^*\\in\\{0,\\ldots,p\\}$ so that ${\\bs\\beta}^*$ is optimal to its trimmed counterpart\n\\begin{equation*}\n\\min_{\\bs\\beta} L({\\bs\\beta}) + \\mu\\sum_{i>\\ell^*} g(|\\beta_{(i)}|).\n\\end{equation*}\nIn particular, the choice of $\\ell^* = |\\{i: g(|\\beta_i^*|) \\geq1 \\}|$ suffices.\n Conversely, if ${\\bs\\beta}^*$ is an optimal solution to \\eqref{eqn:ncptna}, then ${\\bs\\beta}^*$ is an optimal solution to \\eqref{eqn:ncpm}.\n\\end{theorem}\n\n\nIt follows that the estimation problem \\eqref{eqn:ncpm}, which decouples each loading $\\beta_i$ in the penalty function, can be solved using ``trimmed'' estimation problems of the form \\eqref{eqn:ncpt} with a trimmed penalty function that couples the loadings and only penalizes the 
$p-\\ell^*$ smallest. Because the trimmed penalty function is generally nonconvex by nature, we will focus on comparing it with other nonconvex penalties for the remainder of the section.\n\n\\subsection{Trimmed reformulation examples}\\label{ssec:ncpeg}\n\nWe now consider the structure of the estimation problem \\eqref{eqn:ncpm} and the corresponding trimmed estimation problem for the clipped Lasso and MCP penalties. We use the $\\ell_2^2$ loss throughout.\n\n\\subsubsection*{Clipped Lasso}\n\nThe clipped (or capped, or truncated) Lasso penalty \\cite{cl,shen12} takes the component-wise form\n$$\\rho(|\\beta|;\\mu,\\gamma) = \\mu \\min\\{\\gamma|\\beta|,1\\}.$$\nTherefore, in our notation, $g$ is a multiple of the absolute value function. A plot of $\\rho$ is shown in Figure \\ref{fig:ncpa}. In this case, the estimation problem with $\\ell_2^2$ loss is\n\\begin{equation}\\label{eqn:clm}\n\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\mu\\sum_i \\min\\{\\gamma|\\beta_i|,1\\}.\n\\end{equation}\nIt follows that the corresponding trimmed estimation problem (\\emph{c.f.} Theorem \\ref{thm:MasT}) is exactly the trimmed Lasso problem studied earlier, namely,\n\\begin{equation}\\label{eqn:clt}\n\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\mu\\gamma \\tk{{\\bs\\beta}}.\n\\end{equation}\nA distinct advantage of the trimmed Lasso formulation \\eqref{eqn:clt} over the traditional clipped Lasso formulation \\eqref{eqn:clm} is that it offers direct control over the desired level of sparsity vis-\\`a-vis the discrete parameter $k$. 
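The exact rewriting that links \eqref{eqn:clm} and \eqref{eqn:clt} via Theorem \ref{thm:MasT} is easy to verify numerically. A small pure-Python sketch (our own illustration) checks the identity $\sum_i\min\{\gamma|\beta_i|,1\} = \min_{\ell}\,\{\ell+\gamma\sum_{i>\ell}|\beta_{(i)}|\}$ on random inputs:

```python
import random

def separable_penalty(beta, mu, gamma):
    # sum_i  mu * min{gamma * |beta_i|, 1}
    return mu * sum(min(gamma * abs(b), 1.0) for b in beta)

def trimmed_penalty_form(beta, mu, gamma):
    # mu * min_l ( l + gamma * (sum of the p - l smallest |beta_(i)|) )
    a = sorted((abs(b) for b in beta), reverse=True)
    return mu * min(l + gamma * sum(a[l:]) for l in range(len(a) + 1))

random.seed(0)
max_dev = max(
    abs(separable_penalty(b, 0.8, 1.3) - trimmed_penalty_form(b, 0.8, 1.3))
    for b in ([random.uniform(-2, 2) for _ in range(6)] for _ in range(200))
)
print(max_dev < 1e-9)  # True: the two forms agree
```

The inner minimum over $\ell$ simply decides how many of the largest loadings are "clipped" at the constant $1$, which is exactly the counting argument used in the derivation of \eqref{eqn:ncptna}.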
We perform a deeper analysis of the two problems in Section \\ref{ssec:ncpgenerality}.\n\n\n\\subsubsection*{MCP}\n\nThe MCP penalty takes the component-wise form\n$$\\rho(|\\beta|;\\mu,\\gamma) = \\mu \\min\\{g(|\\beta|),1\\}$$\nwhere $g$ is any function with $g(|\\beta|) = 2\\gamma|\\beta|-\\gamma^2\\beta^2$ whenever $|\\beta| \\leq1\/\\gamma$ and $g(|\\beta|)\\geq1$ whenever $|\\beta| > 1\/\\gamma$. An example of one such $g$ is shown in Table \\ref{tab:ncp}.\nA plot of $\\rho$ is shown in Figure \\ref{fig:ncpa}. Another valid choice of $g$ is $g(|\\beta|) = \\max\\{2\\gamma|\\beta|-\\gamma^2\\beta^2,\\gamma|\\beta|\\}$. In this case, the trimmed counterpart is\n\\begin{equation*}\n\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\mu\\gamma \\sum_{i>\\ell}\\max\\left\\{ 2|\\beta_{(i)}| -\\gamma\\beta_{(i)}^2,|\\beta_{(i)}|\\right\\}.\n\\end{equation*}\n\nNote that this problem is amenable to the same class of techniques as applied to the trimmed Lasso problem in the form \\eqref{eqn:clt} because of the increasing nature of $g$, although the subproblems with respect to ${\\bs\\beta}$ are no longer convex (each such subproblem is, however, a usual MCP estimation problem, which is well-suited to convex optimization approaches; see \\cite{sparsenet}).\nAlso observe that we can separate the penalty function into a trimmed Lasso component and another component:\n$$\\sum_{i>\\ell} |\\beta_{(i)}|\\text{\\quad and \\quad} \\sum_{i>\\ell} \\left(|\\beta_{(i)}|-\\gamma\\beta_{(i)}^2\\right)_+.$$\nObserve that the second component is uniformly bounded above by $(p-\\ell)\/(4\\gamma)$, and so as $\\gamma\\to\\infty$,\nthe trimmed Lasso penalty dominates.\n\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}{0.49\\linewidth}\n \\centering\n\\begin{tikzpicture}\n \\begin{axis}[ \n xlabel=$|\\beta|$,\n \n xmin = 0,\n xmax = 2,\n ymin = 0,\n ymax = 2,\n scale = .8,\n minor y tick num=0,\n yticklabels={,,,,$\\mu$},\n minor x tick num=0,\n xticklabels={,0,,,$1\/\\gamma$},\n 
legend entries = {$\\rho_\\text{CL}$,$\\rho_\\text{MCP}$},\n legend style = {at={(.93,.07)}, anchor = south east},\n ] \n \\addplot[samples=\\ns, color=blue, thick, dashed] { (x<1.5)*(x) + (x>1.5)*1.5};\n \\addplot[samples=\\ns, color=red, thick] { (x<1.5) * (2*x-x^2\/1.5) + (x>1.5)*(1.5) };\n \\end{axis}\n\\end{tikzpicture}\n \\caption{Clipped Lasso and MCP}\n \\label{fig:ncpa}\n \\end{subfigure}\n\\begin{subfigure}{0.49\\linewidth}\n \\centering\n \\begin{tikzpicture}\n \\begin{axis}[ \n xlabel=$|\\beta|$,\n \n xmin = 0,\n xmax = 4,\n ymin = 0,\n ymax = 4,\n scale = .8,\n minor y tick num=0,\n yticklabels={,,,,$\\mu$},\n minor x tick num=0,\n xticklabels={,0,,,$1$},\n legend entries = {$\\rho_\\text{log}$,$\\rho_\\text{$\\ell_q$}$},\n legend style = {at={(.93,.07)}, anchor = south east},\n ] \n \\addplot[samples=\\ns, color=blue, thick, dashed] { 3*ln(2*x\/3+1)\/ln(3) };\n \\addplot[samples=\\ns, color=red, thick] { 3*(x\/3)^(1\/2) };\n\\end{axis}\n\\end{tikzpicture}\n\\caption{Log and $\\ell_q$}\n\\label{fig:ncpb}\n\\end{subfigure}\\\\[1ex]\n\\caption{Plots of $\\rho(|\\beta|;\\mu,\\gamma)$ for some of the penalty functions in Table \\ref{tab:ncp}.}\n\\label{fig:ncp}\n\\end{figure*}\n\n\n\n\\subsection{The generality of trimmed estimation}\\label{ssec:ncpgenerality}\n\nWe now turn our focus to more closely studying the relationship between the separable-penalty estimation problem \\eqref{eqn:ncpm} and its trimmed estimation counterpart. The central problems of interest are the clipped Lasso and its trimmed counterpart, viz., the trimmed Lasso:\\footnote{One may be concerned about the well-definedness of such problems (e.g. as guaranteed vis-\\`a-vis coercivity of the objective, \\emph{c.f.}\\cite{rockafeller}). 
In all the results of Section \\ref{ssec:ncpgenerality}, it is possible to add a regularizer $\\eta\\|{\\bs\\beta}\\|_1$ for some fixed $\\eta>0$ to both $(\\textsc{CL}_{\\mu,\\gamma})$ and $(\\textsc{TL}_{\\lambda,\\ell})$ and the results remain valid, \\emph{mutatis mutandis}. The addition of this regularizer implies coercivity of the objective functions and, consequently, that the minimum is indeed well-defined. For completeness, we note a technical reason for a choice of $\\eta\\|{\\bs\\beta}\\|_1$ is its positive homogeneity; thus, the proof technique of Lemma \\ref{lem:key} easily adapts to this modification.}\n\\begin{align*}\n&\\displaystyle \\min_{\\bs\\beta} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|^2_2 + \\mu \\sum_i \\min\\{\\gamma|\\beta_i|,1\\}\\tag{$\\textsc{CL}_{\\mu,\\gamma}$}\\\\\n&\\displaystyle\\min_{\\bs\\beta} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|^2_2 + \\lambda \\tka{\\ell}{{\\bs\\beta}}.\\tag{$\\textsc{TL}_{\\lambda,\\ell}$}\n\\end{align*}\nAs per Theorem \\ref{thm:MasT}, if ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{CL}_{\\mu,\\gamma})$, then ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{TL}_{\\lambda,\\ell})$, where $\\lambda=\\mu\\gamma$ and $\\ell=|\\{i:|\\beta_i^*|\\geq1\/\\gamma\\}|$. We now consider the converse: given some $\\lambda>0$ and $\\ell\\in\\{0,1,\\ldots,p\\}$ and a solution ${\\bs\\beta}^*$ to $(\\textsc{TL}_{\\lambda,\\ell})$, when does there exist some $\\mu,\\gamma>0$ so that ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{CL}_{\\mu,\\gamma})$? As the following theorem suggests, the existence of such a $\\gamma$ is closely connected to an underlying discrete form of ``convexity'' of the sequence of problems $\\tla{\\lambda,k}$ for $k\\in\\{0,1,\\ldots,p\\}$. 
We will focus on the case when $\\lambda=\\mu\\gamma$, as this is the natural correspondence of parameters in light of Theorem \\ref{thm:MasT}.\n\n\n\\begin{theorem}\\label{thm:clconv}\nIf $\\lambda>0$, $\\ell\\in\\{0,\\ldots,p\\}$, and ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{TL}_{\\lambda,\\ell})$, then there exist $\\mu,\\gamma>0$ with $\\mu\\gamma=\\lambda$ such that ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{CL}_{\\mu,\\gamma})$ if and only if\n\\begin{equation}\\label{eqn:clconv}\nZ\\tla{\\lambda,\\ell_e} < \\frac{j-\\ell_e}{j-i} Z\\tla{\\lambda,i} + \\frac{\\ell_e-i}{j-i} Z\\tla{\\lambda,j}\n\\end{equation}\nfor all $0\\leq i< \\ell_e < j \\leq p$, where $Z(\\textsc{P})$ denotes the optimal objective value to optimization problem $\\textsc{(P)}$ and $\\ell_e = \\min\\{\\ell,\\|{\\bs\\beta}^*\\|_0\\}$.\n\\end{theorem}\n\nLet us note why we refer to the condition in \\eqref{eqn:clconv} as a discrete analogue of convexity of the sequence $\\{z_k :=Z\\tla{\\lambda,k},\\; k=0,\\ldots,p\\}$. 
In particular, observe that this sequence satisfies the condition of Theorem \\ref{thm:clconv} if and only if the function defined as the linear interpolation between the points $(0,z_0)$, $(1,z_1)$, \\ldots, and $(p,z_p)$ is strictly convex about the point $(\\ell_e,z_{\\ell_e})$.\\footnote{To be precise, we mean that the real-valued function that is a linear interpolation of the points has a subdifferential at the point $(\\ell_e,z_{\\ell_e})$ which is an interval of strictly positive width.}\n\nBefore proceeding with the proof of the theorem, we state and prove a technical lemma about the structure of $(\\textsc{TL}_{\\lambda,\\ell})$.\n\n\\begin{lemma}\\label{lem:key}\nFix $\\lambda>0$ and suppose that ${\\bs\\beta}^*$ is optimal to $(\\textsc{TL}_{\\lambda,\\ell})$.\n\\begin{enumerate}[(a)]\n\\item The optimal objective value of $(\\textsc{TL}_{\\lambda,\\ell})$ is $Z(\\textsc{TL}_{\\lambda,\\ell}) = (\\|\\mb y\\|_2^2-\\|{\\mb X}{\\bs\\beta}^*\\|_2^2)\/2$.\n\\item If ${\\bs\\beta}^*$ is also optimal to $\\tla{\\lambda,\\ell'}$, where $\\ell<\\ell'$, then $\\|{\\bs\\beta}^*\\|_0\\leq\\ell$ and ${\\bs\\beta}^*$ is optimal to $\\tla{\\lambda,j}$ for all integral $j$ with $\\ell\\leq j\\leq\\ell'$.\n\\item If $\\|{\\bs\\beta}^*\\|_0<\\ell$, then ${\\bs\\beta}^*$ is optimal to $\\tla{\\lambda,j}$ for all integral $j$ with $\\|{\\bs\\beta}^*\\|_0\\leq j\\leq \\ell$; further, ${\\bs\\beta}^*$ is not optimal to $\\tla{\\lambda,j}$ for any $j<\\|{\\bs\\beta}^*\\|_0$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nPart (a) follows from a scaling argument: minimizing the objective of $(\\textsc{TL}_{\\lambda,\\ell})$ over the ray $\\{t{\\bs\\beta}^* : t\\geq0\\}$ and using the positive homogeneity of $\\tka{\\ell}{\\cdot}$ yields $\\langle \\mb y-{\\mb X}{\\bs\\beta}^*,{\\mb X}{\\bs\\beta}^*\\rangle = \\lambda\\tka{\\ell}{{\\bs\\beta}^*}$; expanding the objective at ${\\bs\\beta}^*$ then gives the claim.\n\nFor part (b), suppose that ${\\bs\\beta}^*$ is also optimal to $\\tla{\\lambda,\\ell'}$ with $\\ell'>\\ell$. By part (a), one must necessarily have that $Z(\\textsc{TL}_{\\lambda,\\ell})=Z\\tla{\\lambda,\\ell'} = (\\|\\mb y\\|_2^2-\\|{\\mb X}{\\bs\\beta}^*\\|_2^2)\/2$. Inspecting $Z(\\textsc{TL}_{\\lambda,\\ell})-Z\\tla{\\lambda,\\ell'}$, we see that\n$$0=Z(\\textsc{TL}_{\\lambda,\\ell})-Z\\tla{\\lambda,\\ell'} = \\lambda \\sum_{i=\\ell+1}^{\\ell'} |\\beta_{(i)}^*|.$$\nHence, $|\\beta_{(\\ell+1)}^*|=0$ and therefore $\\|{\\bs\\beta}^*\\|_0\\leq \\ell$.\n\nFinally, for any integral $j$ with $\\ell\\leq j\\leq \\ell'$, one always has that $Z(\\textsc{TL}_{\\lambda,\\ell}) \\geq Z\\tla{\\lambda,j} \\geq Z\\tla{\\lambda,\\ell'}$. 
As per the preceding argument, $Z(\\textsc{TL}_{\\lambda,\\ell}) = Z\\tla{\\lambda,\\ell'}$ and so $Z(\\textsc{TL}_{\\lambda,\\ell}) = Z\\tla{\\lambda,j}$, and therefore ${\\bs\\beta}^*$ must also be optimal to $\\tla{\\lambda,j}$ by applying part (a). This completes part (b).\n\nPart (c) follows from a straightforward inspection of objective functions and using the fact that $Z\\tla{\\lambda,j}\\geq Z\\tla{\\lambda,\\ell}$ whenever $j\\leq\\ell$.\n\\end{proof}\n\nUsing this lemma, we can now proceed with the proof of the theorem.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:clconv}]\nLet $z_k = Z\\tla{\\lambda,k}$ for $k\\in\\{0,1,\\ldots,p\\}$. Suppose that $\\mu,\\gamma>0$ are such that $\\lambda = \\mu\\gamma$ and ${\\bs\\beta}^*$ is an optimal solution to $(\\textsc{CL}_{\\mu,\\gamma})$. Let $\\ell_e = \\min\\{\\ell,\\|{\\bs\\beta}^*\\|_0\\}$. Per equation \\eqref{eqn:ncptna}, ${\\bs\\beta}^*$ must be optimal to\n\\begin{equation}\\label{eqn:pf1}\n\\min_{\\bs\\beta} \\min_{k\\in\\{0,\\ldots,p\\}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}}.\n\\end{equation}\nObserve that this implies that if $k$ is a minimizer of ${\\min}_k \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}$, then ${\\bs\\beta}^*$ must be optimal to $\\tla{\\lambda,k}$.\n\nWe claim that this observation, combined with Lemma \\ref{lem:key}, implies that\n$$\\ell_e=\\underset{ {k\\in\\{0,\\ldots,p\\}} }{\\argmin} \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}.$$\nThis can be shown as follows:\n\\begin{enumerate}[(a)]\n\\item Suppose $\\ell\\leq \\|{\\bs\\beta}^*\\|_0$ and so $\\ell_e = \\min\\{\\ell,\\|{\\bs\\beta}^*\\|_0\\} = \\ell$. 
Therefore, by Lemma \\ref{lem:key}(b), ${\\bs\\beta}^*$ is not optimal to $\\tla{\\lambda,j}$ for any $j< \\ell$, and thus\n$$\\underset{ {k\\in\\{0,\\ldots,p\\}} }{\\min} \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*} = \\underset{ {k\\in\\{\\ell,\\ldots,p\\}} }{\\min} \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}.$$\n\n\n\nIf $k>\\ell$ is such that $k$ is a minimizer of ${\\min}_k \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}$, then ${\\bs\\beta}^*$ must be optimal to $\\tla{\\lambda,k}$ (using the observation), and hence by Lemma \\ref{lem:key}(b), $\\|{\\bs\\beta}^*\\|_0\\leq \\ell$. Combined with $\\ell\\leq \\|{\\bs\\beta}^*\\|_0$, this implies that $\\|{\\bs\\beta}^*\\|_0=\\ell$. Yet then,\n$\\mu\\ell = \\mu\\ell + \\mu\\gamma \\tka{\\ell}{{\\bs\\beta}^*} < \\mu k + \\mu\\gamma\\tk{{\\bs\\beta}^*}$, contradicting the optimality of $k$. Therefore, we conclude that $\\ell_e=\\ell$ is the \\emph{only} minimizer of $\\min_k \\mu k + \\mu\\gamma\\tk{{\\bs\\beta}^*}$. \n\n\\item Now instead suppose that $\\ell_e = \\|{\\bs\\beta}^*\\|_0 < \\ell$. Lemma \\ref{lem:key}(c) implies that any optimal solution $k$ to $\\min_k \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}$ must satisfy $k\\geq\\|{\\bs\\beta}^*\\|_0$ (by the second part combined with the observation). As before, if $k>\\|{\\bs\\beta}^*\\|_0=\\ell_e$, then $\\mu k > \\mu \\ell_e$, and so $k$ cannot be optimal. As a result, $k=\\ell_e=\\|{\\bs\\beta}^*\\|_0$ is the unique minimum.\n\n\n\\end{enumerate}\nIn either case, we have that $\\ell_e$ is the unique minimizer to $\\min_k \\mu k + \\mu\\gamma \\tk{{\\bs\\beta}^*}$.\n\nIt then follows that $Z(\\text{problem }\\eqref{eqn:pf1}) = z_{\\ell_e} + \\mu \\ell_e$. Further, by optimality of ${\\bs\\beta}^*$, $z_{\\ell_e} + \\mu \\ell_e < z_i + \\mu i$ for all $0\\leq i\\leq p$ with $i\\neq\\ell_e$. For $0\\leq i < \\ell_e$, this implies $\\mu < (z_i-z_{\\ell_e})\/(\\ell_e-i) $ and for $j>\\ell_e$, $\\mu > (z_{\\ell_e}-z_j)\/(j-\\ell_e)$. 
In other words, for $0\\leq i < \\ell_e < j\\leq p$,\n$$\\frac{z_{\\ell_e}-z_j}{j-\\ell_e} < \\frac{z_i - z_{\\ell_e}}{\\ell_e-i}, \\quad \\text{i.e., }\\; z_{\\ell_e} <\\frac{j-\\ell_e}{j-i}z_i + \\frac{\\ell_e-i}{j-i} z_j.$$\nThis completes the forward direction. The reverse follows in the same way by taking any $\\mu$ with\n\\begin{equation*}\n\\mu\\in \\left( \\max_{j> \\ell_e} \\frac{z_{\\ell_e} -z_j}{j-\\ell_e}, \\min_{i<\\ell_e} \\frac{z_i-z_{\\ell_e}}{\\ell_e-i} \\right).\n\\end{equation*}\n\\end{proof}\n\n\nWe briefly remark upon one implication of the proof of Theorem \\ref{thm:clconv}. In particular, if ${\\bs\\beta}^*$ is a solution to $(\\textsc{TL}_{\\lambda,\\ell})$ and $\\ell <\\|{\\bs\\beta}^*\\|_0$, then ${\\bs\\beta}^*$ is not the solution to $\\tla{\\lambda,k}$ for any $k\\neq \\ell$.\n\nAn immediate question is whether the convexity condition \\eqref{eqn:clconv} of Theorem \\ref{thm:clconv} always holds. While the sequence $\\{Z\\tla{\\lambda,k} : k=0,1,\\ldots,p\\}$ is always non-increasing, the following example shows that the convexity condition need not hold in general; as a result, there exist instances of the trimmed Lasso problem whose solutions \\emph{cannot} be found by solving a clipped Lasso problem.\n\n\\begin{example}\\label{eg:cl}\nConsider the case when $p=n=2$ with\n$$\\mb y = \\begin{pmatrix}1\\\\1\\end{pmatrix} \\text{ \\quad and \\quad } \\mb X = \\begin{pmatrix} 1 & -1\\\\-1&2\\end{pmatrix}.$$\nLet $\\lambda =1\/2$ and $ \\ell = 1$, and consider $\\min_{{\\bs\\beta}} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\/2 + |\\beta_{(2)}|\/2 = \\min_{\\beta_1,\\beta_2} (1-\\beta_1+\\beta_2)^2\/2 + (1+\\beta_1-2\\beta_2)^2\/2 + |\\beta_{(2)}|\/2.$\nThis has unique optimal solution ${\\bs\\beta}^* = (3\/2,1)$ with corresponding objective value $ z_1 = 3\/4$. One can also compute $z_0 = Z\\tla{1\/2,0} = 39\/40$ and $z_2 = Z\\tla{1\/2,2} = 0$. 
Note that $z_1 = 3\/4 > (39\/40)\/2 + (0)\/2 = z_0\/2+z_2\/2$, and so there do not exist any $\\mu,\\gamma>0$ with $\\mu\\gamma=1\/2$ so that ${\\bs\\beta}^*$ is an optimal solution to $\\cla{\\mu,\\gamma}$ by Theorem \\ref{thm:clconv}. Further, it is possible to show that ${\\bs\\beta}^*$ is not an optimal solution to $(\\textsc{CL}_{\\mu,\\gamma})$ for \\emph{any} choice of $\\mu,\\gamma\\geq0$. (See Appendix \\ref{app:proof}.)\n\\end{example}\n\nAn immediate corollary of this example, combined with Theorem \\ref{thm:MasT}, is that the class of trimmed Lasso models contains the class of clipped Lasso models as a \\emph{proper} subset, regardless of whether we restrict our attention to $\\lambda=\\mu\\gamma$. In this sense, the trimmed Lasso models comprise a richer set of models. The relationship is depicted in stylized form in Figure \\ref{fig:mc}.\n\n\\subsubsection*{Limit analysis}\n\nIt is important to contextualize the results of this section as $\\lambda\\to\\infty$. This corresponds to $\\gamma\\to\\infty$ for the clipped Lasso problem, in which case $(\\textsc{CL}_{\\mu,\\gamma})$ converges to the penalized form of subset selection:\n\\begin{equation*}\n\\min_{\\bs\\beta} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\mu\\|{\\bs\\beta}\\|_0.\\tag{$\\textsc{CL}_{\\mu,\\infty}$}\n\\end{equation*}\nNote that penalized problems for all of the penalties listed in Table \\ref{tab:ncp} have this as their limit as $\\gamma\\to\\infty$. On the other hand, $(\\textsc{TL}_{\\lambda,\\ell})$ converges to constrained best subset selection:\n\\begin{equation*}\n\\min_{\\|{\\bs\\beta}\\|_0\\leq \\ell} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2.\\tag{$\\textsc{TL}_{\\infty,k}$}\n\\end{equation*}\nIndeed, from this comparison it now becomes clear why a convexity condition of the form in Theorem \\ref{thm:clconv} appears in describing when the clipped Lasso solves the trimmed Lasso problem. 
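Both condition \eqref{eqn:clconv} and the interval of admissible $\mu$ appearing at the end of the proof of Theorem \ref{thm:clconv} are mechanical to check once the values $z_k$ are known. A small pure-Python sketch (our own, for illustration), applied to the values $z_0=39/40$, $z_1=3/4$, $z_2=0$ of Example \ref{eg:cl}:

```python
def satisfies_convexity_condition(z, l):
    """Check condition (eqn:clconv): z_l < ((j-l) z_i + (l-i) z_j) / (j-i)
    for all 0 <= i < l < j <= p."""
    p = len(z) - 1
    return all(z[l] < ((j - l) * z[i] + (l - i) * z[j]) / (j - i)
               for i in range(l) for j in range(l + 1, p + 1))

def mu_interval(z, l):
    """Candidate interval for mu from the proof of Theorem (thm:clconv)."""
    lo = max((z[l] - z[j]) / (j - l) for j in range(l + 1, len(z)))
    hi = min((z[i] - z[l]) / (l - i) for i in range(l))
    return lo, hi

# values from Example (eg:cl): z_0 = 39/40, z_1 = 3/4, z_2 = 0
z = [39 / 40, 3 / 4, 0.0]
print(satisfies_convexity_condition(z, 1))  # False: no clipped Lasso recovers beta*
lo, hi = mu_interval(z, 1)
print(lo < hi)                              # False: the mu-interval is empty
```

By contrast, any strictly "discretely convex" sequence (e.g. $z=(1,0.4,0)$) passes the check and yields a nonempty interval of valid $\mu$.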
In particular, the conditions under which $\\cla{\\mu,\\infty}$ solves the constrained best subset selection problem $\\tla{\\infty,k}$ are precisely those in Theorem \\ref{thm:clconv}.\n\n\n\\begin{figure}\n\\centering\n\\begin{tikzpicture}\n  \\coordinate (c1) at (0,0);\n  \\pgfmathsetmacro{\\colmixer}{mod(10*1,100)}%\n  \\path [draw, fill=blue!\\colmixer, postaction=decorate] (c1) ellipse (2 and 1.5);\n  \\pgfmathsetmacro{\\colmixer}{mod(10*2,100)}%\n  \\path [dashed, draw, fill=blue!\\colmixer, postaction=decorate] (c1) ellipse (1.5 and .9) ;\n  \\path [ decoration={text along path, text={Clipped Lasso}, reverse path, text align={align=center}}, postaction=decorate] (2.5,-2.1) arc (0:180:2.5);\n  \\path [ decoration={text along path, text={Trimmed Lasso}, reverse path, text align={align=center}}, postaction=decorate] (2.6,-1.5) arc (0:180:2.6);\n\\end{tikzpicture}\n\\caption{Stylized relation of clipped Lasso and trimmed Lasso models. Every clipped Lasso model can be written as a trimmed Lasso model, but the reverse does not hold in general.}\n\\label{fig:mc}\n\\end{figure}\n\n\n\n\n\\subsection{Unbounded penalty functions}\\label{ssec:ncpunbounded}\n\nWe close this section by considering nonconvex penalty functions which are unbounded and therefore do not take the form $\\mu\\min\\{g(|\\beta|),1\\}$. Two such examples are the $\\ell_q$ penalty ($0<q<1$) and the log penalty, as shown in Table \\ref{tab:ncp}. In this setting, we consider the penalized estimation problem\n\\begin{equation}\\label{eqn:unbpm}\n\\min_{\\bs \\phi} \\frac{1}{2} \\|\\mb y-{\\mb X}\\bs \\phi\\|_2^2 + \\mu \\sum_{i=1}^p g(|\\phi_i|;\\gamma),\n\\end{equation}\nwhere $\\mu,\\gamma>0$ are parameters, $g$ is an unbounded and strictly increasing function, and $ g(|\\phi_i|;\\gamma) \\xrightarrow{\\gamma\\to\\infty} I\\{|\\phi_i|>0\\}$. 
The change of variables in \\eqref{eqn:unbpm} is intentional and its purpose will become clear shortly.\n\nObserve that because $g$ is now unbounded, there exists some $\\overline{\\lambda} = \\overline{\\lambda}(\\mb y,{\\mb X},\\mu,\\gamma)>0$ so that for all $\\lambda>\\overline{\\lambda}$ any optimal solution $(\\bs \\phi^*,\\bs\\epsilon^*)$ to the problem\n\\begin{equation}\\label{eqn:unbpm-aux}\n\\min_{\\bs \\phi,\\bs\\epsilon} \\frac{1}{2} \\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2 + \\lambda\\|\\bs\\epsilon\\|_1+\\mu \\sum_{i=1}^p g(|\\phi_i|;\\gamma)\n\\end{equation}\nhas $\\bs\\epsilon^*=\\mb0$.\\footnote{The proof involves a straightforward modification of an argument along the lines of that given in Theorem \\ref{thm:exactEquiv}. Also note that we can choose $\\overline{\\lambda}$ so that it is decreasing in $\\gamma$, \\emph{ceteris paribus}.} Therefore, \\eqref{eqn:unbpm} is a special case of \\eqref{eqn:unbpm-aux}. We claim that in the limit as $\\gamma\\to\\infty$ (all else fixed), \\eqref{eqn:unbpm-aux} can be written exactly as a trimmed Lasso problem $\\tla{\\lambda,k}$ for some choice of $k$ and with the identification of variables ${\\bs\\beta} = \\bs \\phi+\\bs\\epsilon$.\n\nWe summarize this as follows:\n\n\\begin{proposition}\nAs $\\gamma\\to\\infty$, the penalized estimation problem \\eqref{eqn:unbpm} is a special case of the trimmed Lasso problem.\n\\end{proposition}\n\\begin{proof}\nThis can be shown in a straightforward manner: namely, as $\\gamma\\to\\infty$, \\eqref{eqn:unbpm-aux} becomes\n\\begin{equation*}\n\\min_{\\bs \\phi,\\bs\\epsilon} \\frac{1}{2} \\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2 + \\lambda\\|\\bs\\epsilon\\|_1+\\mu \\|\\bs \\phi\\|_0\n\\end{equation*}\nwhich can in turn be written as\n\\begin{equation*}\n\\min_{\\substack{\\bs \\phi,\\bs\\epsilon:\\\\\\|\\bs \\phi\\|_0\\leq k}} \\frac{1}{2} \\|\\mb y-{\\mb X}(\\bs \\phi+\\bs\\epsilon)\\|_2^2 + \\lambda\\|\\bs\\epsilon\\|_1\n\\end{equation*}\nfor 
some $k\\in\\{0,1,\\ldots,p\\}$. But as per the observations of Section \\ref{ssec:vardecomp}, this is exactly $\\tla{\\lambda,k}$ using a change of variables ${\\bs\\beta}=\\bs \\phi+\\bs\\epsilon$. In the case when $\\lambda$ is sufficiently large, we necessarily have ${\\bs\\beta}=\\bs \\phi$ at optimality.\n\\end{proof}\n\nWhile this result is not surprising (given that as $\\gamma\\to\\infty$ problem \\eqref{eqn:unbpm} is precisely penalized best subset selection), it is useful for illustrating the connection between \\eqref{eqn:unbpm} and the trimmed Lasso problem even when the trimmed Lasso parameter $\\lambda$ is not necessarily large: in particular, $\\tla{\\lambda,k}$ can be viewed as estimating ${\\bs\\beta}$ as the sum of two components---a sparse component $\\bs \\phi$ and a small-norm (``noise'') component $\\bs\\epsilon$. Indeed, in this setup, $\\lambda$ precisely controls the allowed level of ``noise'' in ${\\bs\\beta}$. From this intuitive perspective, it becomes clearer why the trimmed Lasso type approach represents a continuous connection between best subset selection ($\\lambda$ large) and ordinary least squares ($\\lambda$ small).\n\nWe close this section by making the following observation regarding problem \\eqref{eqn:unbpm-aux}. In particular, observe that regardless of $\\lambda$, we can rewrite this as\n\\begin{equation*}\n\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\sum_{i=1}^p \\widetilde{\\rho}(|\\beta_i|)\n\\end{equation*}\nwhere $\\widetilde{\\rho}(|\\beta_i|)$ is the new penalty function defined as\n$$\\widetilde{\\rho}(|\\beta_i|) = \\min_{\\phi+\\epsilon = \\beta_i} \\lambda|\\epsilon| + \\mu g(|\\phi|;\\gamma).$$\nFor the unbounded and concave penalty functions shown in Table \\ref{tab:ncp}, this new penalty function is quasi-concave and can be rewritten easily in closed form. 
For example, for the $\\ell_q$ penalty $\\rho(|\\beta_i|) = \\mu|\\beta_i|^{1\/\\gamma}$ (where $\\gamma>1$), the new penalty function is\n$$\\widetilde{\\rho}(|\\beta_i|) = \\min\\{\\mu|\\beta_i|^{1\/\\gamma},\\lambda|\\beta_i|\\}.$$\n\n\n\n\n\n\\section{Algorithmic Approaches}\\label{sec:algs}\n\n\n\nWe now turn our attention to algorithms for estimation with the trimmed Lasso penalty. Our principal focus throughout will be the same problem considered in Theorem \\ref{thm:exactEquiv}, namely\n\\begin{equation}\\label{eqn:alg}\n\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda \\tk{{\\bs\\beta}} + \\eta \\|{\\bs\\beta}\\|_1.\n\\end{equation}\nWe present three possible approaches to finding potential solutions to \\eqref{eqn:alg}: a first-order-based alternating minimization scheme that has accompanying local optimality guarantees and was first studied in \\cite{gotoh1,gotoh2}; an augmented Lagrangian approach that appears to perform noticeably better, despite lacking optimality guarantees; and a convex envelope approach. We contrast these methods with approaches for certifying global optimality of solutions to \\eqref{eqn:alg} (described in \\cite{thiao}) and include an illustrative computational example.\nImplementations of the various algorithms presented can be found at\n\\begin{center}\n\\url{https:\/\/github.com\/copenhaver\/trimmedlasso}.\n\\end{center}\n\n\n\n\n\n\\subsection{Upper bounds via convex methods}\\label{ssec:ub}\n\nWe start by focusing on the application of convex optimization methods to finding potential solutions to \\eqref{eqn:alg}. Technical details are contained in Appendix \\ref{app:algsupp}.\n\n\n\\subsubsection*{Alternating minimization scheme}\n\n\nWe begin with a first-order-based approach for obtaining a locally optimal solution of \\eqref{eqn:alg} as described in \\cite{gotoh1,gotoh2}. 
The key tool in this approach is the theory of difference of convex optimization (``DCO'') \\cite{anThesis,taoan97,dcSummary}. Set the following notation:\n$$\\begin{array}{lll}\nf({\\bs\\beta}) &=& \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\/2 + \\lambda \\tk{{\\bs\\beta}}+ \\eta \\|{\\bs\\beta}\\|_1,\\\\\nf_1({\\bs\\beta}) &=& \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\/2 + (\\eta+\\lambda) \\|{\\bs\\beta}\\|_1,\\\\\nf_2({\\bs\\beta}) &=& \\lambda \\sum_{i=1}^k |\\beta_{(i)}|.\n\\end{array}\n$$\nLet us make a few simple observations:\n\\begin{enumerate}[(a)]\n\\item Problem \\eqref{eqn:alg} can be written as $\\displaystyle\\min_{\\bs\\beta} f({\\bs\\beta})$.\n\n\\item For all ${\\bs\\beta}$, $f({\\bs\\beta}) = f_1({\\bs\\beta})-f_2({\\bs\\beta})$.\n\n\\item The functions $f_1$ and $f_2$ are convex.\n\\end{enumerate}\n\nWhile simple, these observations enable one to apply the theory of DCO, which focuses precisely on problems of the form\n$$\\min_{{\\bs\\beta}} f_1({\\bs\\beta})-f_2({\\bs\\beta}),$$\nwhere $f_1$ and $f_2$ are convex. In particular, the optimality conditions for such a problem have been studied extensively \\cite{dcSummary}. 
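Since $\tk{{\bs\beta}} = \|{\bs\beta}\|_1 - \sum_{i=1}^k |\beta_{(i)}|$, the identity $f = f_1 - f_2$ can be verified numerically. The following Python\/NumPy sketch (on arbitrary synthetic data; the helper names are ours and not from the accompanying implementation) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, lam, eta = 30, 10, 3, 0.5, 0.1
X = rng.standard_normal((n, p))   # synthetic design matrix
y = rng.standard_normal(n)        # synthetic response

def T_k(beta, k):
    # trimmed Lasso penalty: sum of the p - k smallest absolute entries
    a = np.sort(np.abs(beta))[::-1]
    return a[k:].sum()

def f(beta):
    return 0.5 * np.sum((y - X @ beta) ** 2) + lam * T_k(beta, k) + eta * np.abs(beta).sum()

def f1(beta):
    # convex part: least squares plus an (eta + lam)-weighted ell_1 term
    return 0.5 * np.sum((y - X @ beta) ** 2) + (eta + lam) * np.abs(beta).sum()

def f2(beta):
    # convex part subtracted off: lam times the sum of the k largest magnitudes
    return lam * np.sort(np.abs(beta))[::-1][:k].sum()

beta = rng.standard_normal(p)
assert np.isclose(f(beta), f1(beta) - f2(beta))  # f = f_1 - f_2
```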
While the representation of the objective $f$ as $f_1-f_2$ may seem like an artificial algebraic manipulation, the min-min representation in Theorem \\ref{thm:robeivInterp} shows how such a difference-of-convex representation can arise naturally.\n\nWe now discuss an associated alternating minimization scheme (or equivalently, a sequential linearization scheme), shown in Algorithm \\ref{alg:1}, for finding local optima of \\eqref{eqn:alg}.\nThe convergence properties of Algorithm \\ref{alg:1} can be summarized as follows:\\footnote{To be entirely correct, this result holds for Algorithm \\ref{alg:1} with a minor technical modification---see details in Appendix \\ref{app:algsupp}.}\n\n\\begin{theorem}[\\cite{gotoh1}, Convergence of Algorithm \\ref{alg:1}]\\label{thm:altConvProp}\n\\begin{enumerate}[(a)]\n\\item The sequence $\\{f({\\bs\\beta}^\\ell):\\ell=0,1,\\ldots\\}$, where ${\\bs\\beta}^\\ell$ are as found in Algorithm \\ref{alg:1}, is non-increasing.\n\n\\item The sequence $\\{\\bs\\gamma^\\ell: \\ell=0,1,\\ldots\\}$ takes only finitely many values and is eventually periodic.\n\n\\item Algorithm \\ref{alg:1} converges in a finite number of iterations to a local minimum of \\eqref{eqn:alg}.\n\n\\item The rate of convergence of $f({\\bs\\beta}^\\ell)$ is linear.\n\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{algorithm}[ht]\n\\begin{enumerate}\n\\item Initialize with any ${\\bs\\beta}^{0}\\in\\mathbb{R}^p$ ($\\ell = 0$); for $\\ell \\geq 0$, repeat Steps 2--3 until $f({\\bs\\beta}^\\ell) = f({\\bs\\beta}^{\\ell+1})$.\n\n\\item Compute $\\bs\\gamma^\\ell$ as\n\\begin{equation}\\label{eqn:wrtbg}\n\\bs\\gamma^\\ell \\in\\begin{array}{ll}\n\\underset{\\bs\\gamma}{\\operatorname{argmax}} & \\langle \\bs\\gamma,{\\bs\\beta}^\\ell\\rangle\\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i |\\gamma_i| \\leq \\lambda k\\\\\n& \\displaystyle|\\gamma_i|\\leq \\lambda\\;\\forall i.\n\\end{array}\n\\end{equation}\n\n\\item Compute ${\\bs\\beta}^{\\ell+1}$ as 
\n\\begin{equation}\n{\\bs\\beta}^{\\ell+1} \\in\\underset{{\\bs\\beta}}{\\operatorname{argmin}} \\; \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 +(\\eta+\\lambda)\\|{\\bs\\beta}\\|_1 - \\langle{\\bs\\beta},\\bs\\gamma^\\ell\\rangle.\\label{eqn:wrtbb}\n\\end{equation}\n\n\n\n\n\\end{enumerate}\n\\caption{An alternating scheme for computing a local optimum to \\eqref{eqn:alg}}\\label{alg:1}\n\\end{algorithm}\n\n\n\n\n\n\\begin{obs}\nLet us return to a remark that preceded Algorithm \\ref{alg:1}. In particular, we noted that Algorithm \\ref{alg:1} can also be viewed as a sequential linearization approach to solving \\eqref{eqn:alg}. Namely, this corresponds to sequentially performing a linearization of $f_2$ (and leaving $f_1$ as is), and then solving the new convex linearized problem.\n\nFurther, let us note why we refer to Algorithm \\ref{alg:1} as an alternating minimization scheme. In particular, in light of the reformulation \\eqref{eqn:mainReform} of \\eqref{eqn:alg}, we can rewrite \\eqref{eqn:alg} exactly as \n$$\\eqref{eqn:alg} = \\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\bs\\gamma} & f_1({\\bs\\beta}) - \\langle\\bs\\gamma,{\\bs\\beta}\\rangle\\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i |\\gamma_i| \\leq \\lambda k\\\\\n& \\displaystyle|\\gamma_i|\\leq \\lambda\\;\\forall i.\n\\end{array}$$\nIn this sense, alternating minimization in ${\\bs\\beta}$ (with $\\bs\\gamma$ fixed) and in $\\bs\\gamma$ (with ${\\bs\\beta}$ fixed), performed with the care taken in Algorithm \\ref{alg:1}, is guaranteed to yield a locally optimal solution.\n\\end{obs}\n\nWe now turn to how to actually apply Algorithm \\ref{alg:1}. Observe that the algorithm is quite simple; in particular, it only requires solving two types of well-structured convex optimization problems. The first such problem, for a fixed ${\\bs\\beta}$, is shown in \\eqref{eqn:wrtbg}. 
This can be solved in closed form by simply sorting the entries of $|{\\bs\\beta}|$, i.e., by finding $|\\beta_{(1)}|,\\ldots,|\\beta_{(p)}|$. \nThe second subproblem, shown in \\eqref{eqn:wrtbb} for a fixed $\\bs\\gamma$, is precisely the usual Lasso problem and is amenable to any of the possible algorithms for the Lasso \\cite{tibshirani,lars,hastie}. \n\n\n\n\n\\subsubsection*{Augmented Lagrangian approach}\n\nWe briefly mention another technique for finding potential solutions to \\eqref{eqn:alg} using an Alternating Direction Method of Multipliers (ADMM) \\cite{admm} approach. To our knowledge, the application of ADMM to the trimmed Lasso problem is novel, although it appears closely related to \\cite{admmteng}. \nWe begin by observing that \\eqref{eqn:alg} can be written exactly as\n$$\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\bs\\gamma}& \\frac{1}{2}\\left\\|\\mb y-{\\mb X}{\\bs\\beta}\\right\\|_2^2 + \\eta\\left\\|{\\bs\\beta}\\right\\|_1 + \\lambda \\tk{\\bs\\gamma} \\\\\n\\operatorname{s.t.} & {\\bs\\beta}=\\bs\\gamma,\n\\end{array}$$\nwhich makes use of the canonical variable splitting. Introducing dual variable $\\mb q\\in\\mathbb{R}^p$ and parameter $\\sigma>0$, this becomes in augmented Lagrangian form\n\\begin{align}\n\\displaystyle\\min_{{\\bs\\beta},\\bs\\gamma} \\max_{\\mb q}\\;&\\frac{1}{2}\\left\\|\\mb y-{\\mb X}{\\bs\\beta}\\right\\|_2^2 + \\eta\\left\\|{\\bs\\beta}\\right\\|_1 +\\lambda \\tk{\\bs\\gamma} + \\nonumber\\\\\n& \\langle \\mb q, {\\bs\\beta}-\\bs\\gamma\\rangle + \\frac{\\sigma}{2}\\left\\|{\\bs\\beta}-\\bs\\gamma\\right\\|_2^2.\\label{eqn:admm}\n\\end{align}\n\nThe utility of such a reformulation is that it is directly amenable to ADMM, as detailed in Algorithm \\ref{alg:admm}. 
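Looking ahead to Algorithm \ref{alg:admm}, its $\bs\gamma$-update is the proximal map of a scaled $T_k$, whose closed form via sorting is noted below: keep the $k$ largest-magnitude entries and soft-threshold the rest. The following sketch (our own illustrative helper, assuming the standard tie-breaking convention, not the reference implementation) makes this concrete:

```python
import numpy as np

def prox_trimmed_lasso(v, k, t):
    """Sketch of argmin_g  t * T_k(g) + 0.5 * ||g - v||_2^2:
    the k largest-magnitude entries of v are kept unchanged and
    the remaining entries are soft-thresholded at level t."""
    v = np.asarray(v, dtype=float)
    out = np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # soft-threshold every entry
    keep = np.argsort(np.abs(v))[::-1][:k]             # indices of the k largest |v_i|
    out[keep] = v[keep]                                # restore the exempt entries
    return out

# e.g. a gamma-update with v = beta + q / sigma and t = lambda / sigma:
gamma = prox_trimmed_lasso(np.array([3.0, -0.2, 0.5, -1.5]), k=2, t=0.4)
# the two largest magnitudes (3.0 and -1.5) are kept; -0.2 -> 0.0, 0.5 -> 0.1
```

With $k=0$ this reduces to ordinary soft-thresholding, consistent with $T_0 = \|\cdot\|_1$.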
While the problem is nonconvex and therefore the ADMM is not guaranteed to converge, numerical experiments suggest that this approach has superior performance to the DCO-inspired method considered in Algorithm \\ref{alg:1}.\n\nWe close by commenting on the subproblems that must be solved in Algorithm \\ref{alg:admm}. Step 2 can be carried out using ``hot'' starts. Step 3 is the solution of the trimmed Lasso problem in the orthogonal design case and can be solved by sorting $p$ numbers; see Appendix \\ref{app:algsupp}.\n\n\\begin{algorithm}[ht]\n\\begin{enumerate}\n\\item Initialize with any ${\\bs\\beta}^0,\\bs\\gamma^0,\\mb q^0 \\in\\mathbb{R}^p$ and $\\sigma>0$. Repeat, for $\\ell\\geq 0$, Steps 2, 3, and 4 until a desired numerical\nconvergence tolerance is satisfied.\n\n\\item Set\n\\begin{align*}\n{\\bs\\beta}^{\\ell+1} \\in \\displaystyle\\underset{{\\bs\\beta}}{\\operatorname{argmin}}\\; &\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2+\\eta\\|{\\bs\\beta}\\|_1\\;+\\\\\n&\\langle \\mb q^\\ell, {\\bs\\beta}\\rangle + \\frac{\\sigma}{2}\\|{\\bs\\beta}-\\bs\\gamma^\\ell\\|_2^2.\n\\end{align*}\n\n\\item Set\n$$\\bs\\gamma^{\\ell+1} \\in\\underset{\\bs\\gamma}{\\operatorname{argmin}} \\;\\lambda \\tk{\\bs\\gamma} + \\frac{\\sigma}{2}\\|{\\bs\\beta}^{\\ell+1}-\\bs\\gamma\\|_2^2 - \\langle \\mb q^\\ell,\\bs\\gamma\\rangle.$$\n\n\\item Set $\\mb q^{\\ell+1} = \\mb q^{\\ell} + \\sigma\\left({\\bs\\beta}^{\\ell+1} - \\bs\\gamma^{\\ell+1}\\right)$. \n\n\\end{enumerate}\n\\caption{ADMM algorithm for \\eqref{eqn:admm}}\\label{alg:admm}\n\\end{algorithm}\n\n\n\n\n\n\n\n\\subsubsection*{Convexification approach}\\label{ssec:convenv}\n\n\nWe briefly consider the convex relaxation of the problem \\eqref{eqn:alg}. We begin by computing the convex envelope \\cite{rockafeller,BV2004} of $T_k$ on $[-1,1]^p$ (here the choice of $[-1,1]^p$ is standard, such as in the convexification of $\\ell_0$ over this set which leads to $\\ell_1$). 
The proof follows standard techniques (e.g. computing the biconjugate \\cite{rockafeller}) and is omitted.\n\n\\begin{lemma}\\label{lem:convenv}\nThe convex envelope of $T_k$ on $[-1,1]^p$ is the function $\\overline{T_k}$ defined as\n$$\\overline{T_k}({\\bs\\beta}) = \\left(\\|{\\bs\\beta}\\|_1-k\\right)_+.$$\n\\end{lemma}\n\nIn words, the convex envelope of $T_k$ is a ``soft thresholded'' version of the Lasso penalty (thresholded at level $k$). This can be thought of as an alternative way of interpreting the name ``trimmed Lasso.''\n\nAs a result of Lemma \\ref{lem:convenv}, it follows that the convex analogue of \\eqref{eqn:alg}, as taken over $[-1,1]^p$, is precisely\n\\begin{equation}\\label{eqn:convenv}\n\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1 + \\lambda\\left(\\|{\\bs\\beta}\\|_1-k\\right)_+.\n\\end{equation}\nProblem \\eqref{eqn:convenv} is amenable to a variety of convex optimization techniques such as subgradient descent \\cite{BV2004}.\n\n\n\n\\subsection{Certificates of optimality for \\eqref{eqn:alg}}\\label{ssec:mio}\n\nWe close our discussion of the algorithmic implications of the trimmed Lasso by discussing techniques for finding certifiably optimal solutions to \\eqref{eqn:alg}. All approaches presented in the preceding section find potential candidates for solutions to \\eqref{eqn:alg}, but none is necessarily globally optimal. 
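One reason the convexified problem can fall short is that the envelope of Lemma \ref{lem:convenv} only underestimates $T_k$ on $[-1,1]^p$; this inequality is easy to confirm numerically. The following sketch (random points, illustrative only and not part of the formal development) checks it:

```python
import numpy as np

rng = np.random.default_rng(1)

def T_k(beta, k):
    # trimmed Lasso: sum of the p - k smallest absolute entries
    a = np.sort(np.abs(beta))[::-1]
    return a[k:].sum()

def envelope(beta, k):
    # convex envelope on [-1, 1]^p: (||beta||_1 - k)_+
    return max(np.abs(beta).sum() - k, 0.0)

p, k = 12, 4
for _ in range(1000):
    beta = rng.uniform(-1.0, 1.0, size=p)  # a point of [-1, 1]^p
    assert envelope(beta, k) <= T_k(beta, k) + 1e-12
```

At the vertices of $[-1,1]^p$ the two functions agree, as one expects of a convex envelope.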
Let us return to a representation of \\eqref{eqn:alg} that makes use of Lemma \\ref{lemma:miprep}:\n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z} & \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1 + \\lambda\\langle\\mb z,|{\\bs\\beta}|\\rangle\\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i z_i=p-k\\\\\n&\\mb z\\in\\{0,1\\}^p.\n\\end{array}\n\\end{equation*}\nAs noted in \\cite{gotoh1}, this representation of \\eqref{eqn:alg} is amenable to mixed integer optimization (``MIO'') methods \\cite{bonami} for finding globally optimal solutions to \\eqref{eqn:alg}, in the same spirit as other MIO-based approaches to statistical problems \\cite{bmlqs,bkm}.\n\n\n\nOne approach, as described in \\cite{thiao}, uses the notion of ``big $M$.'' In particular, for $M>0$ sufficiently large, problem \\eqref{eqn:alg} can be written exactly as the following linear MIO problem:\n\n\\begin{equation}\\label{eqn:pf7}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z,\\mb a} &\\displaystyle \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1 + \\lambda\\sum_i a_i\\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i z_i=p-k\\\\\n&\\mb z\\in\\{0,1\\}^p\\\\\n& \\mb a \\geq {\\bs\\beta} + M\\mb z- M\\mb 1\\\\\n& \\mb a \\geq -{\\bs\\beta} + M\\mb z- M\\mb 1\\\\\n& \\mb a\\geq \\mb0.\n\\end{array}\n\\end{equation}\nThis representation as a linear MIO problem enables the direct application of numerous existing MIO algorithms (such as \\cite{gurobi}).\\footnote{There are certainly other possible representations of \\eqref{eqn:mainReform}, such as using special ordered set (SOS) constraints, see e.g. \\cite{bkm}. Without more sophisticated tuning of $M$ as in \\cite{bkm}, the SOS formulations appear to be vastly superior in terms of time required to prove optimality. The precise formulation essentially takes the form of problem \\eqref{eqn:BSSr}. 
An SOS-based implementation is provided in the supplementary code as the default method of certifying optimality.} Also, let us note that the linear relaxation of \\eqref{eqn:pf7}, i.e., problem \\eqref{eqn:pf7} with the constraint $\\mb z\\in\\{0,1\\}^p$ replaced with $\\mb z\\in[0,1]^p$, is the problem\n\\begin{equation*}\n\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1 + \\lambda \\left(\\|{\\bs\\beta}\\|_1-Mk\\right)_+,\n\\end{equation*}\nwhere we see the convex envelope penalty appear directly. As such, when $M$ is large, the linear relaxation of \\eqref{eqn:pf7} is the ordinary Lasso problem $\\min_{{\\bs\\beta}} \\frac{1}{2} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1$.\n\n\n\n\\subsection{Computational example}\n\nBecause a rigorous computational comparison is not the primary focus of this paper, we provide a limited demonstration that describes the behavior of solutions to \\eqref{eqn:alg} as computed via the different approaches. Precise computational details are contained in Appendix \\ref{app:compdetail}. We will focus on two different aspects: sparsity and approximation quality.\n\n\n\\subsubsection*{Sparsity properties}\n\nAs the motivation for the trimmed Lasso is ostensibly sparse modeling, its sparsity properties are of particular interest. We consider a problem instance with $p=20$, $n=100$, $k=2$, and signal-to-noise ratio 10 (the sparsity of the ground truth model ${\\bs\\beta}_\\text{true}$ is $10$). The relevant coefficient profiles as a function of $\\lambda$ are shown in Figure \\ref{fig:coeffpath}. In this example none of the convex approaches finds the optimal two-variable solution computed using mixed integer optimization. Further, as one would expect \\emph{a priori}, the optimal coefficient profiles (as well as the ADMM profiles) are not continuous in $\\lambda$. 
Finally, note that by design of the algorithms, the alternating minimization and ADMM approaches yield solutions with sparsity at most $k$ for $\\lambda$ sufficiently large.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.8\\textwidth]{coeffpath.pdf}\n\\caption{Coefficient profiles for \\eqref{eqn:alg} as a function of $\\lambda$, as computed via the different approaches.}\n\\label{fig:coeffpath}\n\\end{figure}\n\n\n\\subsubsection*{Optimality gap}\n\nAnother critical question is the degree of suboptimality of solutions found via the convex approaches. We average optimality gaps across 100 problem instances with $p=20$, $n=100$, and $k=2$; the relevant results are shown in Figure \\ref{fig:optgap}. The results are entirely as one might expect. When $\\lambda$ is small and the problem is convex or nearly convex, the heuristics perform well. However, this breaks down as $\\lambda$ increases and the sparsity-inducing nature of the trimmed Lasso penalty comes into play. Further, we see that the convex envelope approach tends to perform the worst, with the ADMM performing the best of the three heuristics. This is perhaps not surprising, as any solution found via Algorithm \\ref{alg:admm} can be guaranteed to be locally optimal by subsequently applying the alternating minimization scheme of Algorithm \\ref{alg:1} to it.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.8\\textwidth]{optgap.pdf}\n\\caption{Average optimality gap of the heuristic approaches to \\eqref{eqn:alg} as a function of $\\lambda$.}\n\\label{fig:optgap}\n\\end{figure}\n\n\n\\subsubsection*{Computational burden}\n\nLoosely speaking, the heuristic approaches all carry a similar computational cost per iteration, namely, solving a Lasso-like problem. In contrast, the MIO approach can take significantly more computational resources. However, by design, the MIO approach maintains a suboptimality gap throughout computation and can therefore be terminated, before optimality is certified, with a certificate of suboptimality. 
We do not consider any empirical analysis of runtime here.\n\n\n\\subsubsection*{Other considerations}\n\nThere are additional computational considerations that are potentially of interest, but they are beyond the scope of the present work. For example, instead of considering optimality purely in terms of objective values in \\eqref{eqn:alg}, there are other critical notions from a statistical perspective (e.g. ability to recover true sparse models and performance on out-of-sample data) that would also be necessary to consider across the multiple approaches.\n\n\n\n\n\n\n\\section{Conclusions}\\label{sec:conc}\n\nIn this work, we have studied the trimmed Lasso, a nonconvex adaptation of the Lasso that acts as an exact penalty method for best subset selection. Unlike some other approaches to exact penalization which use coordinate-wise separable functions, the trimmed Lasso offers direct control of the desired sparsity $k$. Further, we emphasized the interpretation of the trimmed Lasso from the perspective of robustness. In doing so, we provided contrasts with the SLOPE penalty as well as comparisons with estimators from the robust statistics and total least squares literature.\n\nWe have also taken care to contextualize the trimmed Lasso within the literature on nonconvex penalized estimation approaches to sparse modeling, showing that penalties like the trimmed Lasso can be viewed as a generalization of such approaches in the case when the penalty function is bounded. 
In doing so, we also highlighted how precisely the problems were related, with a complete characterization given in the case of the clipped Lasso.\n\nFinally, we have shown how modern developments in optimization can be brought to bear for the trimmed Lasso to create convex optimization algorithms that can take advantage of the significant developments in algorithms for Lasso-like problems in recent years.\n\nOur work here raises many interesting questions about further properties of the trimmed Lasso and the application of similar ideas in other settings. We see two particularly noteworthy directions of focus: algorithms and statistical properties. For the former, we anticipate that an approach like the trimmed Lasso, which leads to relatively straightforward algorithms that use close analogues from convex optimization, is simple to interpret and to implement. At the same time, the heuristic approaches to the trimmed Lasso presented herein carry no more of a computational burden than solving convex, Lasso-like problems. On the latter front, we anticipate that a deeper analysis of the statistical properties of estimators attained using the trimmed Lasso would help to illuminate it in its own right while also further connecting it to existing approaches in the statistical estimation literature.\n\n\n\\begin{appendices}\n\n\\section{General min-max representation of SLOPE}\\label{app:slope}\n\nFor completeness, in this appendix we include the more general representation of the SLOPE penalty $R_{\\textsc{SLOPE}(\\mb w)}$ in the same spirit as Proposition \\ref{prop:slope}. Here we work with SLOPE in its most general form, namely,\n$$R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) = \\sum_{i=1}^p w_i |\\beta_{(i)}|, $$\nwhere $\\mb w$ is a (fixed) vector of weights with $w_1\\geq w_2\\geq \\cdots\\geq w_p\\geq 0$ and $w_1>0$.\n\nTo describe the general min-max representation, we first set some notation. 
For a matrix $\\bs\\Delta\\in\\mathbb{R}^{n\\times p}$, we let $\\boldsymbol\\nu(\\bs\\Delta)\\in\\mathbb{R}^p$ be the vector $(\\|\\bs\\Delta_1\\|_2,\\ldots,\\|\\bs\\Delta_p\\|_2)$ with entries sorted so that $\\nu_1\\geq \\nu_2\\geq \\cdots \\geq \\nu_p$. As usual, for two vectors $\\mb x$ and $\\mb y$, we use $\\mb x\\leq\\mb y$ to denote that coordinate-wise inequality holds. With this notation, we have the following:\n\\begin{proposition}\\label{prop:slopefullgenerality}\nProblem \\eqref{eqn:roprimitive} with uncertainty set\n$$\\mathcal{U}_\\mb w =\\left\\{\\bs\\Delta : \\boldsymbol\\nu(\\bs\\Delta) \\leq \\mb w \\right\\}\n$$\nis equivalent to problem \\eqref{eqn:a1} with $R({\\bs\\beta})=R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) $. Further,\nproblem \\eqref{eqn:roprimitive} with uncertainty set\n$$\\mathcal{U}_{\\mb w} =\\left\\{\\bs\\Delta : \\|\\bs\\Delta\\bs \\phi\\|_2 \n\\leq R_{\\textsc{SLOPE}(\\mb w)}(\\bs \\phi) \\;\\forall \\bs \\phi \\right\\}\n$$\nis equivalent to problem \\eqref{eqn:a1} with $R({\\bs\\beta})=R_{\\textsc{SLOPE}(\\mb w)}({\\bs\\beta}) $.\n\\end{proposition}\n\nThe proof, like the proof of Proposition \\ref{prop:slope}, follows basic techniques described in \\cite{RObook} and is therefore omitted.\n\n\n\n\n\\section{Additional proofs}\\label{app:proof}\n\nThis appendix section contains supplemental proofs not contained in the main text.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:exactEquiv}]\nLet $\\overline{\\lambda} = \\|\\mb y\\|_2\\cdot\\left(\\max_j\\|\\mb x_j\\|_2\\right)$, where $\\mb x_j$ denotes the $j$th row of ${\\mb X}$. We fix $\\lambda>\\overline{\\lambda}$, $k$, and $\\eta>0$ throughout the entire proof. 
We begin by observing that it suffices to show that any solution ${\\bs\\beta}$ to\n\\begin{equation}\\label{eqn:thmmain}\n\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda \\tk{{\\bs\\beta}} + \\eta \\|{\\bs\\beta}\\|_1\n\\end{equation}\nsatisfies $\\tk{{\\bs\\beta}} = 0$, or equivalently, $\\|{\\bs\\beta}\\|_0\\leq k$. As per Lemma \\ref{lemma:miprep}, problem \\eqref{eqn:thmmain} can be rewritten exactly as\n\\begin{equation}\\label{eqn:mainReform}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta},\\mb z} & \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\langle\\mb z,|{\\bs\\beta}|\\rangle+ \\eta\\|{\\bs\\beta}\\|_1 \\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i z_i=p-k\\\\\n&\\mb z\\in\\{0,1\\}^p.\n\\end{array}\n\\end{equation}\nLet $({\\bs\\beta}^*,\\mb z^*)$ be any solution to \\eqref{eqn:mainReform}. Observe that necessarily ${\\bs\\beta}^*$ is also a solution to the problem\n\\begin{equation}\\label{eqn:pfsupp1}\n\\min_{{\\bs\\beta}}\\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\langle\\mb z^*,|{\\bs\\beta}|\\rangle+ \\eta\\|{\\bs\\beta}\\|_1.\n\\end{equation}\nNote that, unlike \\eqref{eqn:thmmain}, the problem in \\eqref{eqn:pfsupp1} is readily amenable to an analysis using the theory of proximal gradient methods \\cite{combetteswasj,bauschke}. In particular, we must have for any $\\gamma>0$ that \n\\begin{equation}\\label{eqn:pfsupp2}\n{\\bs\\beta}^* = \\operatorname{prox}_{\\gamma R} \\left({\\bs\\beta}^* - \\gamma({\\mb X}'{\\mb X}{\\bs\\beta}^* - {\\mb X}'\\mb y)\\right),\n\\end{equation}\nwhere $\\displaystyle R({\\bs\\beta}) = \\eta\\|{\\bs\\beta}\\|_1 + \\lambda \\sum_{i\\;:\\:z_i^*=1} |\\beta_i|$. Suppose that $\\tk{{\\bs\\beta}^*}>0$. In particular, for some $j\\in\\{1,\\ldots,p\\}$, we have $\\beta_j^* \\neq 0$ and $z_j^*=1$. 
Yet, as per \\eqref{eqn:pfsupp2},\\footnote{This is valid for the following reason: since $\\beta_j^*\\neq 0$ and $\\beta_j^*$ satisfies \\eqref{eqn:pfsupp2}, it must be the case that $\\left|\\beta_j^* - \\gamma \\mb x_{j}'({\\mb X}{\\bs\\beta}^* - \\mb y)\\right| > \\gamma(\\eta+\\lambda)$, for otherwise the soft-thresholding operator at level $\\gamma(\\eta+\\lambda)$ would set this quantity to zero.}\n$$\\left|\\beta_j^* - \\gamma \\langle\\mb x_{j},{\\mb X}{\\bs\\beta}^* - \\mb y\\rangle\\right| > \\gamma(\\eta+\\lambda)\\;\\quad \\text{ for all } \\gamma>0,$$\nwhere $\\mb x_j$ denotes the $j$th row of ${\\mb X}$. This implies that\n$$\\left|\\langle\\mb x_j,{\\mb X}{\\bs\\beta}^*-\\mb y\\rangle\\right| \\geq \\eta+\\lambda.$$\nNow, using the definition of $\\overline{\\lambda}$ (together with the fact that $\\|{\\mb X}{\\bs\\beta}^*-\\mb y\\|_2\\leq \\|\\mb y\\|_2$, since the objective value of ${\\bs\\beta}^*$ in \\eqref{eqn:pfsupp1} is at most that of $\\mb 0$), observe that\n\\begin{align*}\n\\eta+\\lambda\\leq \\left|\\langle\\mb x_j,{\\mb X}{\\bs\\beta}^*-\\mb y\\rangle\\right|& \\leq \\|\\mb x_j\\|_2 \\|{\\mb X}{\\bs\\beta}^*-\\mb y\\|_2\\\\\n&\\leq \\|\\mb x_j\\|_2 \\|\\mb y\\|_2 \\leq \\overline{\\lambda}<\\lambda,\n\\end{align*}\nwhich is a contradiction since $\\eta>0$. Hence, $\\tk{{\\bs\\beta}^*}=0$, completing the proof.\n\\end{proof}\n\n\n\n\n\\subsubsection*{Extended statement of Proposition \\ref{prop:asymp}}\n\nWe now include a precise version of the convergence claim in Proposition \\ref{prop:asymp}. Let us recall a standard notion: we say that ${\\bs\\beta}$ is $\\epsilon$-optimal (for $\\epsilon>0$) to an optimization problem $(\\textrm{P})$ if the optimal objective value of $(\\textrm{P})$ is within $\\epsilon$ of the objective value of ${\\bs\\beta}$. We add an additional regularizer $\\eta\\|{\\bs\\beta}\\|_1$, for $\\eta>0$ fixed, to the objective in order to ensure coercivity of the objective functions.\n\n\\begin{proposition}[Extended form of Proposition \\ref{prop:asymp}]\nLet $g:\\mathbb{R}_+\\to\\mathbb{R}_+$ be an unbounded, continuous, and strictly increasing function with $g(0)=0$. 
\nConsider the problems\n\\begin{equation}\\label{eqn:cvg1}\n\\displaystyle\\min_{{\\bs\\beta}} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\lambda\\pi_k^g({\\bs\\beta}) + \\eta\\|{\\bs\\beta}\\|_1\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:cvg2}\n\\displaystyle\\min_{\\|{\\bs\\beta}\\|_0\\leq k} \\frac{1}{2}\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + \\eta\\|{\\bs\\beta}\\|_1.\n\\end{equation}\nFor every $\\epsilon>0$, there exists some $\\underline{\\lambda}=\\underline{\\lambda}(\\epsilon)>0$ so that for all $\\lambda>\\underline{\\lambda}$,\n\\begin{enumerate}\n\\item For every optimal $\\bb^*$ to \\eqref{eqn:cvg1}, there is some $\\widehat{\\bb}$ so that $\\|\\bb^*-\\widehat{\\bb}\\|_2\\leq \\epsilon$, $\\widehat{\\bb}$ is feasible to \\eqref{eqn:cvg2}, and $\\widehat{\\bb}$ is $\\epsilon$-optimal to \\eqref{eqn:cvg2}.\n\\item Every optimal ${\\bs\\beta}^*$ to \\eqref{eqn:cvg2} is $\\epsilon$-optimal to \\eqref{eqn:cvg1}.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nThe proof follows a basic continuity argument that is simpler than the one presented below in Theorem \\ref{thm:corprecise}. For that reason, we do not include a full proof. Observe that the assumptions on $g$ imply that $g^{-1}$ is well-defined on, say, $g([0,1])$. Let $\\epsilon>0$ and suppose that $\\bb^*$ is optimal to \\eqref{eqn:cvg1}, where $\\lambda > \\underline{\\lambda} := \\|\\mb y\\|_2^2\/(2g(\\epsilon\/p))$, and define $\\widehat{\\bb}$ to be $\\bb^*$ with all but the $k$ largest magnitude entries truncated to zero (ties broken arbitrarily). Then $\\pi_k^g(\\bb^*)\\leq \\|\\mb y\\|_2^2\/(2\\lambda)$ and $\\pi_k^g (\\bb^*) = \\sum_{i=1}^p g(|\\beta_i^*-\\widehat{\\beta}_i|)$, so that $|\\beta_i^*-\\widehat{\\beta}_i| \\leq g^{-1}(\\|\\mb y\\|_2^2\/(2\\lambda)) \\leq \\epsilon\/p$ by definition of $\\underline{\\lambda}$. 
Hence, $\\|\\bb^*-\\widehat{\\bb}\\|_1\\leq \\epsilon$, and all the other claims essentially follow from this.\n\\end{proof}\n\n\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:robeivInterp}]\nWe begin by showing that for any ${\\bs\\beta}$,\n\\begin{equation*}\n\\min_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda}\\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2 = \\left(\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 - \\lambda \\sum_{i=1}^k |\\beta_{(i)}| \\right)_+\n\\end{equation*}\nwhere $(a)_+:=\\max\\{0,a\\}$. Fix ${\\bs\\beta}$ and set $\\mb r = \\mb y-{\\mb X}{\\bs\\beta}$. We assume without loss of generality that $\\mb r\\neq \\mb0$ and that ${\\bs\\beta}\\neq\\mb0$. For any $\\bs\\Delta$, note that $\\|\\mb r - \\bs\\Delta{\\bs\\beta}\\|_2 \\geq 0$ and $\\|\\mb r - \\bs\\Delta{\\bs\\beta}\\|_2 \\geq \\|\\mb r\\|_2 - \\|\\bs\\Delta{\\bs\\beta}\\|_2$ by the reverse triangle inequality. Now observe that for $\\bs\\Delta\\in\\mathcal{U}_k^\\lambda$,\n$$\\|\\bs\\Delta{\\bs\\beta}\\|_2 \\leq \\sum_i |\\beta_i| \\|\\bs\\Delta_i\\|_2 \\leq \\sum_{i=1}^k \\lambda |\\beta_{(i)}|.$$\nTherefore, $\\|\\mb r-\\bs\\Delta{\\bs\\beta}\\|_2 \\geq \\left(\\|\\mb r\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}| \\right)_+$. Let $I\\subseteq\\{1,\\ldots,p\\}$ be a set of $k$ indices which correspond to the $k$ largest entries of ${\\bs\\beta}$ in absolute value (if $|\\beta_{(k)}|=|\\beta_{(k+1)}|$, break ties arbitrarily). Define $\\bs\\Delta\\in\\mathcal{U}_k^\\lambda$ as the matrix whose $i$th column is\n$$\\left\\{\\begin{array}{rl}\n\\underline{\\lambda}\\operatorname{sgn}(\\beta_i)\\mb r \/ \\|\\mb r\\|_2,&i\\in I\\\\\n0,&i\\notin I,\n\\end{array}\\right.$$\nwhere $\\underline{\\lambda} = \\min\\left\\{\\lambda, \\|\\mb r\\|_2\/\\left(\\sum_{i=1}^k|\\beta_{(i)}|\\right)\\right\\}$. It is easy to verify that $\\bs\\Delta\\in\\mathcal{U}_k^\\lambda$ and $\\|\\mb r-\\bs\\Delta{\\bs\\beta}\\|_2 = \\left(\\|\\mb r\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}| \\right)_+$. 
\nCombined with the lower bound, we have\n$$\\min_{\\bs\\Delta\\in\\mathcal{U}_k^\\lambda} \\|\\mb y-({\\mb X}+\\bs\\Delta){\\bs\\beta}\\|_2 = \\left(\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 -\\lambda\\sum_{i=1}^k |\\beta_{(i)}|\\right)_+$$\nwhich completes the first claim.\n\nIt follows that the problem \\eqref{eqn:eivconhompen} can be rewritten exactly as\n\\begin{equation}\\label{eqn:pfsupp3}\n\\min_{{\\bs\\beta}} \\left(\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 - \\lambda \\sum_{i=1}^k |\\beta_{(i)}| \\right)_+ + r({\\bs\\beta}).\n\\end{equation}\n\nTo finish the proof of the theorem, it suffices to show that if ${\\bs\\beta}^*$ is a solution to \\eqref{eqn:pfsupp3}, then\n$$\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}^*| \\geq 0.$$\nIf this is not true, then $\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}^*| <0$ and so ${\\bs\\beta}^*\\neq \\mb 0$. However, this implies that for $1>\\epsilon>0$ sufficiently small, ${\\bs\\beta}_\\epsilon:=(1-\\epsilon){\\bs\\beta}^*$ satisfies $\\|\\mb y-{\\mb X}{\\bs\\beta}_\\epsilon\\|_2 - \\lambda\\sum_{i=1}^k |(\\beta_\\epsilon)_{(i)}| <0$. This in turn implies that\n$$\\begin{array}{l}\n\\left(\\|\\mb y-{\\mb X}{\\bs\\beta}_\\epsilon\\|_2 - \\lambda\\sum_{i=1}^k |(\\beta_\\epsilon)_{(i)}| \\right)_+ + r({\\bs\\beta}_\\epsilon)\\\\\n< \\left(\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}^*| \\right)_+ +r({\\bs\\beta}^*),\n\\end{array}$$\nwhich contradicts the optimality of ${\\bs\\beta}^*$. (We have used the absolute homogeneity of the norm $r$ and that ${\\bs\\beta}^*\\neq\\mb0$.) 
Hence, any optimal ${\\bs\\beta}^*$ to \\eqref{eqn:pfsupp3} necessarily satisfies $\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2 - \\lambda\\sum_{i=1}^k |\\beta_{(i)}^*| \\geq 0$, and so the desired result follows.\n\\end{proof}\n\n\\emph{N.B.} The assumption that $r$ is a norm can be relaxed somewhat (as is clear in the proof), although the full generality is not necessary for our purposes.\n\n\n\\subsection*{Corollary \\ref{cor:slope} and related discussions}\n\nHere we include a precise statement of the ``approximate'' claim in Corollary \\ref{cor:slope}. After the proof, we include a discussion of related technical issues.\n\n\\begin{theorem}[Precise statement of Corollary \\ref{cor:slope}]\\label{thm:corprecise}\nFor $\\tau>\\lambda>0$, consider the problems\n\\begin{equation}\\label{eqn:ced1}\n\\begin{array}{ll}\n\\displaystyle\\min_{{\\bs\\beta} } &\\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + (\\tau-\\lambda)\\|{\\bs\\beta}\\|_1+ \\lambda \\tk{{\\bs\\beta}} \\\\\n\\operatorname{s.t.} & \\displaystyle\\lambda\\sum_{i=1}^k|\\beta_{(i)}| \\leq \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2\n\\end{array}\n\\end{equation}\nand\n\\begin{equation}\\label{eqn:ced2}\n\\min_{{\\bs\\beta} } \\displaystyle\\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2 + (\\tau-\\lambda)\\|{\\bs\\beta}\\|_1+ \\lambda \\tk{{\\bs\\beta}}.\n\\end{equation}\nFor all $\\epsilon>0$, there exists $\\overline{\\lambda}=\\overline{\\lambda}(\\epsilon)>0$ so that whenever $\\lambda\\in(0,\\overline{\\lambda})$,\n\\begin{enumerate}\n\\item Every optimal ${\\bs\\beta}^*$ to \\eqref{eqn:ced1} is $\\epsilon$-optimal to \\eqref{eqn:ced2}.\n\\item For every optimal ${\\bs\\beta}^*$ to \\eqref{eqn:ced2}, there is some $\\widehat{\\bb}$ so that $\\|{\\bs\\beta}^*-\\widehat{\\bb}\\|_2\\leq \\epsilon$, $\\widehat{\\bb}$ is feasible to \\eqref{eqn:ced1}, and $\\widehat{\\bb}$ is $\\epsilon$-optimal to \\eqref{eqn:ced1}. \n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nFix $\\tau>0$ throughout. 
We assume without loss of generality that $\\mb y\\neq\\mb0$, as otherwise the claim is obvious. We will prove the second claim first, as it essentially implies the first.\n\nLet us consider two situations. In particular, we consider whether there exists a nonzero optimal solution to\n\\[\\label{eqn:ced3}\n\\min_{\\bs\\beta} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2+\\tau\\|{\\bs\\beta}\\|_1.\n\\]\n\n\\subsubsection*{Case 1---existence of nonzero optimal solution to \\eqref{eqn:ced3}}\nWe first consider the case when there exists a nonzero solution to problem \\eqref{eqn:ced3}. We show a few lemmata:\n\n\\begin{enumerate}\n\\item We first show that the norms of solutions to \\eqref{eqn:ced2} are uniformly bounded away from zero, independent of $\\lambda$. To proceed,\nlet $\\widehat{\\bb}$ be any nonzero optimal solution to \\eqref{eqn:ced3}. Observe that if $\\bb^*$ is optimal to \\eqref{eqn:ced2}, then\n\\begin{align*}\n\\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)\\|\\bb^*\\|_1 + \\lambda T_k({\\bb^*}) & \n\\leq \\|\\mb y-{\\mb X}\\widehat{\\bb}\\|_2 + (\\tau-\\lambda)\\|\\widehat{\\bb}\\|_1 + \\lambda T_k(\\widehat{\\bb})\\\\\n&\\leq \\|\\mb y-{\\mb X}\\bb^*\\|_2 + \\tau\\|\\bb^*\\|_1 - \\lambda \\|\\widehat{\\bb}\\|_1 + \\lambda T_k(\\widehat{\\bb}),\n\\end{align*}\nimplying that $\\|\\widehat{\\bb}\\|_1 - T_k(\\widehat{\\bb}) \\leq \\|\\bb^*\\|_1 - T_k(\\bb^*)$. In other words, $\\sum_{i=1}^k|\\widehat{\\beta}_{(i)}| \\leq \\sum_{i=1}^k |{\\beta}_{(i)}^*|\\leq \\|\\bb^*\\|_1$. Using the fact that $\\widehat{\\bb}\\neq\\mb0$, we have that any solution $\\bb^*$ to \\eqref{eqn:ced2} has strictly positive norm:\n$$\\|\\bb^*\\|_1 \\geq C>0,$$\nwhere $C:=\\sum_{i=1}^k|\\widehat{\\beta}_{(i)}|$ is a universal constant depending only on $\\tau$ (and not $\\lambda$).\n\n\\item We now upper bound the norm of solutions to \\eqref{eqn:ced2}. 
In particular, if $\\bb^*$ is optimal to \\eqref{eqn:ced2}, then\n$$\\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)\\|\\bb^*\\|_1+ \\lambda T_k({\\bb^*}) \\leq \\|\\mb y\\|_2 + 0 + 0 = \\|\\mb y\\|_2,$$\nand so $\\|\\bb^*\\|_1\\leq \\|\\mb y\\|_2\/(\\tau-\\lambda)$. (This bound is not uniform in $\\lambda$, but if we restrict our attention to, say, $\\lambda\\leq \\tau\/2$, it is.)\n\n\\item We now lower bound the loss for scaled versions of optimal solutions. In particular, if $\\sigma\\in[0,1]$ and $\\bb^*$ is optimal to \\eqref{eqn:ced2}, then by optimality we have that\n$$\\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)\\|\\bb^*\\|_1 + \\lambda T_k(\\bb^*) \\leq \\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)\\sigma\\|\\bb^*\\|_1 + \\lambda \\sigma T_k(\\bb^*),$$\nwhich in turn implies that\n\\begin{align*}\n\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 &\\geq \\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)(1-\\sigma) \\|\\bb^*\\|_1 + \\lambda (1-\\sigma) T_k(\\bb^*)\\\\\n& \\geq \\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)(1-\\sigma) C\\geq (\\tau-\\lambda)(1-\\sigma)C\n\\end{align*}\nby combining with the first observation.\n\n\\end{enumerate}\n\n\nUsing these, we are now ready to proceed. Let $\\epsilon>0$; we assume without loss of generality that $\\epsilon<2\\|\\mb y\\|_2\/\\tau$. Let\n$$\\overline{\\lambda}:= \\min\\left\\{\\frac{\\epsilon\\tau^3 C}{4\\|\\mb y\\|_2(2\\|\\mb y\\|_2 - \\epsilon\\tau)},\\frac{\\tau}{2}\\right\\}.$$\nFix $\\lambda\\in(0,\\overline{\\lambda})$ and let $\\bb^*$ be any optimal solution to \\eqref{eqn:ced2}. Define \n$$\\sigma:= \\left(1-\\frac{\\epsilon\\tau}{2\\|\\mb y\\|_2} \\right) \\text{\\quad and \\quad}\\widehat{\\bb} := \\sigma\\bb^*.$$\nWe claim that $\\widehat{\\bb}$ satisfies the desired requirements of the theorem:\n\n\\begin{enumerate}\n\\item We first argue that $\\|\\bb^*-\\widehat{\\bb}\\|_2 \\leq \\epsilon$. 
Observe that\n$$\\|\\bb^*-\\widehat{\\bb}\\|_2 = \\epsilon\\tau\\|\\bb^*\\|_2\/({2\\|\\mb y\\|_2} ) \\leq {\\epsilon\\tau}\\|\\bb^*\\|_1\/({2\\|\\mb y\\|_2} ) \\leq {\\epsilon\\tau} \\|\\mb y\\|_2\/({2\\|\\mb y\\|_2}( \\tau-\\lambda)) \\leq\\epsilon.\n$$\n\n\\item We now show that $\\widehat{\\bb}$ is feasible to \\eqref{eqn:ced1}. This requires us to argue that\n$\\lambda \\sum_{i=1}^k |\\widehat{\\beta}_{(i)}| \\leq \\|\\mb y-{\\mb X}\\widehat{\\bb}\\|_2$. Yet,\n\\begin{align*}\n\\lambda\\sum_{i=1}^k |\\widehat{\\beta}_{(i)}| &\\leq \\lambda \\|\\widehat{\\bb}\\|_1 = \\lambda \\sigma\\|\\bb^*\\|_1\\leq 2\\lambda\\sigma\\|\\mb y\\|_2\/\\tau\\leq \\frac{\\tau}{2} (1-\\sigma) C\\\\\n& \\leq (\\tau-\\lambda) (1-\\sigma)C \\leq \\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 = \\|\\mb y-{\\mb X}\\widehat{\\bb}\\|_2,\n\\end{align*}\nas desired. The only non-obvious step is the inequality $2\\lambda\\sigma\\|\\mb y\\|_2\/\\tau\\leq \\tau(1-\\sigma)C\/2$, which follows from algebraic manipulations using the definitions of $\\sigma$ and $\\overline{\\lambda}$.\n\n\\item Finally, we show that $\\widehat{\\bb}$ is $\\left(\\epsilon\\|{\\mb X}\\|_2\\right)$-optimal to \\eqref{eqn:ced1}. 
Indeed, because $\\bb^*$ is optimal to \\eqref{eqn:ced2}, whose optimal value necessarily lower bounds that of problem \\eqref{eqn:ced1}, we have that the objective value gap between $\\widehat{\\bb}$ and an optimal solution to \\eqref{eqn:ced1} is at most\n\\begin{align*}\n&\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 -\\|\\mb y-{\\mb X}\\bb^*\\|_2 + (\\tau-\\lambda)(\\sigma-1)\\|\\bb^*\\|_1 + \\lambda(\\sigma-1)T_k(\\bb^*)\\\\\n&\\leq (1-\\sigma)\\|{\\mb X}\\bb^*\\|_2 + 0 + 0 \\leq (1-\\sigma)\\|{\\mb X}\\|_2\\|\\bb^*\\|_2 \\leq 2(1-\\sigma) \\|{\\mb X}\\|_2\\|\\mb y\\|_2\/\\tau \\\\\n&=2\\epsilon\\tau\/(2\\|\\mb y\\|_2)\\|{\\mb X}\\|_2\\|\\mb y\\|_2\/\\tau = \\epsilon\\|{\\mb X}\\|_2.\n\\end{align*}\n\\end{enumerate}\nAs the choice of $\\epsilon>0$ was arbitrary, this completes the proof of claim 2 in the theorem in the case when \\eqref{eqn:ced3} admits a nonzero optimal solution.\n\n\\subsubsection*{Case 2---no nonzero optimal solution to \\eqref{eqn:ced3}}\n\nIn the case when there is no nonzero optimal solution to \\eqref{eqn:ced3}, $\\mb 0$ is optimal and it is the only optimal point. Our analysis will be similar to the previous approach, with the key difference being in how we lower bound the quantity $\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2$ where $\\bb^*$ is optimal to \\eqref{eqn:ced2}. Again, we have several lemmata:\n\n\\begin{enumerate}\n\\item As before, if $\\bb^*$ is optimal to \\eqref{eqn:ced2}, then $\\|\\bb^*\\|_1\\leq \\|\\mb y\\|_2\/(\\tau-\\lambda)$.\n\n\\item We now lower bound the quantity $\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2$, where $\\bb^*$ is optimal to \\eqref{eqn:ced2} and $\\sigma\\in[0,1]$. 
As such, consider the function\n$$f(\\sigma):= \\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 + \\sigma\\tau\\|\\bb^*\\|_1.$$\nBecause $f$ is convex in $\\sigma$ and the unique optimal solution to \\eqref{eqn:ced3} is $\\mb0$, we have that\n$$f(\\sigma)\\geq f(0) + \\sigma f'(0)\\;\\;\\;\\forall\\sigma\\in[0,1]\\text{\\quad and \\quad} f'(0)\\geq0$$\n(It is not difficult to argue that $f$ is differentiable at $0$.) An elementary computation shows that $f'(0) = \\tau\\|\\bb^*\\|_1 -\\langle\\mb y,{\\mb X}\\bb^*\\rangle\/\\|\\mb y\\|_2$. Therefore, we have that\n$$\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 + \\sigma\\tau\\|\\bb^*\\|_1 \\geq \\|\\mb y\\|_2 + \\sigma \\left(\\tau\\|\\bb^*\\|_1 - \\langle \\mb y,{\\mb X}\\bb^*\\rangle\/\\|\\mb y\\|_2\\right),$$\nimplying that\n$$\\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 \\geq \\|\\mb y\\|_2 - \\sigma \\langle \\mb y,{\\mb X}\\bb^*\\rangle\/\\|\\mb y\\|_2 \\geq \\|\\mb y\\|_2 - \\sigma \\tau\\|\\bb^*\\|_1\\geq \\|\\mb y\\|_2 - \\sigma\\tau\\|\\mb y\\|_2\/(\\tau-\\lambda),$$\nwith the final step following by an application of the previous lemma.\n\n\\end{enumerate}\n\n\n\nWe are now ready to proceed. Let $\\epsilon>0$; we assume without loss of generality that $\\epsilon<2\\|\\mb y\\|_2\/\\tau$. Let\n$$\\overline{\\lambda}:= \\min\\left\\{\\frac{\\epsilon\\tau^2}{4\\|\\mb y\\|_2 - \\epsilon\\tau},\\frac{\\tau}{2}\\right\\}.$$\nFix $\\lambda\\in(0,\\overline{\\lambda})$ and let $\\bb^*$ be any optimal solution to \\eqref{eqn:ced2}. Define \n$$\\sigma:= \\left(1-\\frac{\\epsilon\\tau}{2\\|\\mb y\\|_2} \\right) \\text{\\quad and \\quad}\\widehat{\\bb} := \\sigma\\bb^*.$$\nWe claim that $\\widehat{\\bb}$ satisfies the desired requirements:\n\n\\begin{enumerate}\n\\item The proof of the claim that $\\|\\bb^*-\\widehat{\\bb}\\|_2 \\leq \\epsilon$ is exactly as before.\n\n\\item We now show that $\\widehat{\\bb}$ is feasible to \\eqref{eqn:ced1}, which requires a different proof. 
Again this requires us to argue that\n$\\lambda \\sum_{i=1}^k |\\widehat{\\beta}_{(i)}| \\leq \\|\\mb y-{\\mb X}\\widehat{\\bb}\\|_2$. Yet,\n\\begin{align*}\n\\lambda\\sum_{i=1}^k |\\widehat{\\beta}_{(i)}| &\\leq \\lambda \\|\\widehat{\\bb}\\|_1 = \\lambda \\sigma\\|\\bb^*\\|_1\\leq \\lambda\\sigma\\|\\mb y\\|_2\/(\\tau-\\lambda)\\leq \\|\\mb y\\|_2 - \\sigma\\tau\\|\\mb y\\|_2\/(\\tau-\\lambda)\\\\\n&\\leq \\|\\mb y-\\sigma{\\mb X}\\bb^*\\|_2 = \\|\\mb y-{\\mb X}\\widehat{\\bb}\\|_2,\n\\end{align*}\nas desired. The only non-obvious step is the inequality $\\lambda \\sigma\\|\\mb y\\|_2\/(\\tau-\\lambda) \\leq \\|\\mb y\\|_2 - \\sigma\\tau\\|\\mb y\\|_2\/(\\tau-\\lambda)$, which follows from algebraic manipulations using the definitions of $\\sigma$ and $\\overline{\\lambda}$.\n\n\\item Finally, the proof that $\\widehat{\\bb}$ is $\\left(\\epsilon\\|{\\mb X}\\|_2\\right)$-optimal to \\eqref{eqn:ced1} follows in the same way as before.\n\\end{enumerate}\n\nTherefore, in the case when $\\mb 0$ is the unique optimal solution to \\eqref{eqn:ced3}, claim 2 of the theorem again holds.\n\n\n\nFinally, we show that claim 1 holds: any solution $\\bb^*$ to \\eqref{eqn:ced1} is $\\epsilon$-optimal to \\eqref{eqn:ced2}. This follows by letting $\\overline{{\\bs\\beta}}$ be any optimal solution to \\eqref{eqn:ced2}. By applying the entire argument above, we know that the objective value of some $\\widehat{\\bb}$, feasible to \\eqref{eqn:ced1} and close to $\\overline{{\\bs\\beta}}$, is within $\\epsilon$ of the optimal objective value of \\eqref{eqn:ced1}, i.e., the objective value of $\\bb^*$, and within $\\epsilon$ of the objective value of \\eqref{eqn:ced2}, i.e., the objective value of $\\overline{{\\bs\\beta}}$. 
This completes the proof.\n\\end{proof}\n\n\nIn short, the key complication is that the quantity $\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2$ does not need to be uniformly bounded away from zero for solutions ${\\bs\\beta}^*$ to problem \\eqref{eqn:ced2}. This is part of the complication of working with the homogeneous form of the trimmed Lasso problem. For a concrete example, if one considers the case $p=n=1$, $\\mb y=(1)$, and ${\\mb X}=(1)$, then the homogeneous Lasso problem $\\min_{\\bs\\beta} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2+\\eta\\|{\\bs\\beta}\\|_1$ is\n$$\\min_{\\beta} |1-\\beta| + \\eta|\\beta|.$$\nFor $\\eta\\in[0,1]$, $\\beta^*=1$ is an optimal solution to this problem with corresponding error $\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2=0$. If we were instead to assume that $\\|\\mb y-{\\mb X}{\\bs\\beta}^*\\|_2$ is bounded away from zero, then the case analysis above would not be needed. \n\n\n\n\\begin{proof}[Proof of Proposition \\ref{prop:robeivslope}]\nThe proof is entirely analogous to that of Theorems \\ref{thm:robeivInterp} and \\ref{thm:corprecise} and is therefore omitted.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of validity of Example \\ref{eg:cl}]\n\nLet us consider the problem instance where $p=n=2$ with\n$$\\mb y = \\begin{pmatrix}1\\\\1\\end{pmatrix} \\text{ \\quad and \\quad } \\mb X = \\begin{pmatrix} 1 & -1\\\\-1&2\\end{pmatrix}.$$\nLet $\\lambda =1\/2$ and $ \\ell = 1$, and consider the problem\n\\begin{equation}\\label{eqn:as1}\n\\min_{{\\bs\\beta}} \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2 + |\\beta_{(2)}| = \\min_{\\beta_1,\\beta_2} (1-\\beta_1+\\beta_2)^2 + (1+\\beta_1-2\\beta_2)^2 + |\\beta_{(2)}|.\n\\end{equation}\nWe have omitted the factor of $1\/2$ that appears in the example in the main text in order to avoid unnecessary complications.\n\nProblem \\eqref{eqn:as1} and its related counterparts (for $\\ell\\in\\{0,2\\}$) can be solved via convex analysis because we can simply enumerate all possible scenarios. 
In particular, the solution to \\eqref{eqn:as1} is ${\\bs\\beta}^*=(3\/2,1)$ based on an analysis of two related problems:\n\\begin{align*}\n\\min_{\\beta_1,\\beta_2} (1-\\beta_1+\\beta_2)^2 + (1+\\beta_1-2\\beta_2)^2 + |\\beta_1|.\\\\\n\\min_{\\beta_1,\\beta_2} (1-\\beta_1+\\beta_2)^2 + (1+\\beta_1-2\\beta_2)^2 + |\\beta_2|.\n\\end{align*}\n(We should be careful to impose the additional constraints $|\\beta_1|\\leq |\\beta_2|$ and $|\\beta_1|\\geq|\\beta_2|$, respectively, although a simple argument shows that these constraints are not required in this example.) A standard convex analysis, as for the Lasso (e.g., directly via subdifferentials), shows that the problems have respective solutions $(1\/2,1\/2)$ and $(3\/2,1)$, with the latter having the better objective value in \\eqref{eqn:as1}. As such, ${\\bs\\beta}^*$ is indeed optimal. The solution in the cases of $\\ell\\in\\{0,2\\}$ follows a similarly standard analysis.\n\nIt is perhaps more interesting to study the general case where $\\mu,\\gamma\\geq0$. In particular, we will show that ${\\bs\\beta}^*=(3\/2,1)$ is not an optimal solution to the clipped Lasso problem\n\\begin{equation}\\label{eqn:as2}\n\\min_{\\beta_1,\\beta_2} (1-\\beta_1+\\beta_2)^2 + (1+\\beta_1-2\\beta_2)^2 + \\mu\\min\\{\\gamma|\\beta_1|,1\\}+\\mu\\min\\{\\gamma|\\beta_2|,1\\}\n\\end{equation}\nfor any choices of $\\mu$ and $\\gamma$. While in general such a problem may be difficult to fully analyze, we can again rely on a localized convex analysis. To proceed, let\n$$f(\\beta_1,\\beta_2) = (1-\\beta_1+\\beta_2)^2 + (1+\\beta_1-2\\beta_2)^2 + \\mu\\min\\{\\gamma|\\beta_1|,1\\}+\\mu\\min\\{\\gamma|\\beta_2|,1\\},$$\nwith the parameters $\\mu$ and $\\gamma$ implicit. We consider the following exhaustive cases:\n\n\\begin{enumerate}\n\\item $\\boxed{\\gamma>1}$ : In this case, $f$ is convex and differentiable in a neighborhood of ${\\bs\\beta}^*$. 
Its gradient at ${\\bs\\beta}^*$ is $\\nabla f({\\bs\\beta}^*)=(0,-1)$, and therefore ${\\bs\\beta}^*$ is neither locally optimal nor globally optimal to problem \\eqref{eqn:as2}.\n\n\\item $\\boxed{\\gamma<2\/3}$ : In this case, $f$ is again convex and differentiable in a neighborhood of ${\\bs\\beta}^*$. Its gradient at ${\\bs\\beta}^*$ is $\\nabla f({\\bs\\beta}^*)=(\\mu\\gamma,\\mu\\gamma-1)$. Again, this cannot equal $(0,0)$ and therefore ${\\bs\\beta}^*$ is neither locally nor globally optimal to problem \\eqref{eqn:as2}.\n\n\\item $\\boxed{2\/3<\\gamma<1}$ : In this case, $f$ is again convex and differentiable in a neighborhood of ${\\bs\\beta}^*$. Its gradient at ${\\bs\\beta}^*$ is $\\nabla f({\\bs\\beta}^*)=(0,\\mu\\gamma-1)$. As a necessary condition for local optimality, we must have that $\\mu\\gamma=1$, implying that $\\mu>1$. Further, if ${\\bs\\beta}^*$ is optimal to \\eqref{eqn:as2}, then $f({\\bs\\beta}^*)\\leq f(0,0)$. Yet,\n\\begin{align*}\nf({\\bs\\beta}^*) &= 1\/2 + \\mu + \\mu\\gamma = 3\/2 + \\mu\\\\\nf(0,0) & = 2,\n\\end{align*}\nimplying that $\\mu\\leq 1\/2$, contradicting $\\mu>1$. Hence, ${\\bs\\beta}^*$ cannot be optimal to \\eqref{eqn:as2}.\n\n\n\\item $\\boxed{\\gamma=2\/3}$ : In this case, we make two comparisons, using the points ${\\bs\\beta}^*$, $(0,0)$, and $(3,2)$:\n\\begin{align*}\nf({\\bs\\beta}^*) &= 1\/2 + \\mu + 2\\mu\/3 = 1\/2 + 5\\mu\/3\\\\\nf(0,0) & = 2\\\\\nf(3,2)&= 2\\mu.\n\\end{align*}\nAssuming optimality of ${\\bs\\beta}^*$, we have that $f({\\bs\\beta}^*)\\leq f(0,0)$, i.e., $\\mu\\leq 9\/10$; similarly, $f({\\bs\\beta}^*)\\leq f(3,2)$, i.e., $\\mu\\geq3\/2$. Clearly both cannot hold, and therefore ${\\bs\\beta}^*$ cannot be optimal.\n\n\\item $\\boxed{\\gamma=1}$ : Finally, we see that $f({\\bs\\beta}^*)\\leq f(3,2)$ would imply that $1\/2+2\\mu\\leq 2\\mu$, which is impossible; hence, ${\\bs\\beta}^*$ is not optimal to \\eqref{eqn:as2}. 
(This argument can clearly also be used in the case when $\\gamma>1$, although it is instructive to see the argument given above in that case.)\n\n\\end{enumerate}\n\n\\noindent In any case, we have that ${\\bs\\beta}^*$ cannot be a solution to the clipped Lasso problem \\eqref{eqn:as2}. This completes the proof of validity of Example \\ref{eg:cl}.\n\\end{proof}\n\n\n\n\n\\section{Supplementary details for Algorithms}\\label{app:algsupp}\n\nThis appendix contains further details on algorithms as discussed in Section \\ref{sec:algs}. The presentation here is primarily self-contained. Note that the alternating minimization scheme based on difference-of-convex optimization can be found in \\cite{gotoh1}.\n\n\\subsection{Alternating minimization scheme}\n \nLet us set the following notation\n$$\\begin{array}{lll}\nf({\\bs\\beta}) &=& \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\/2 + \\lambda \\tk{{\\bs\\beta}}+ \\eta \\|{\\bs\\beta}\\|_1,\\\\\nf_1({\\bs\\beta}) &=& \\|\\mb y-{\\mb X}{\\bs\\beta}\\|_2^2\/2 + (\\eta+\\lambda) \\|{\\bs\\beta}\\|_1,\\\\\nf_2({\\bs\\beta}) &=& \\lambda \\sum_{i=1}^k |\\beta_{(i)}|.\n\\end{array}\n$$\n\n\\begin{definition}\nFor any function $F:\\mathbb{R}^p\\to\\mathbb{R}$ and $\\epsilon\\geq0$, we define the $\\epsilon$-subdifferential of $F$ at ${\\bs\\beta}_0\\in\\mathbb{R}^p$ to be the set $\\partial_\\epsilon F({\\bs\\beta}_0)$ defined as\n$$\\left\\{ \\bs\\gamma\\in\\mathbb{R}^p\\; : \\; F({\\bs\\beta}) - F({\\bs\\beta}_0) \\geq \\langle\\bs\\gamma,{\\bs\\beta}-{\\bs\\beta}_0\\rangle - \\epsilon \\;\\forall \\;{\\bs\\beta}\\in\\mathbb{R}^p\\right\\}.$$\nIn particular, when $\\epsilon=0$, we refer to $\\partial_0 F({\\bs\\beta}_0)$ as the subdifferential of $F$ at ${\\bs\\beta}_0$, and we will denote this as $\\partial F({\\bs\\beta}_0)$.\n\\end{definition}\n\n\n\nUsing this definition, we have the following result precisely characterizing local and global optima of 
\\eqref{eqn:alg}.\n\n\\begin{theorem}\\label{thm:optCharac}\n\\begin{enumerate}[(a)]\n\\item A point ${\\bs\\beta}^*$ is a local minimum of $f$ if and only if $\\partial f_2({\\bs\\beta}^*) \\subseteq \\partial f_1({\\bs\\beta}^*)$.\n\n\\item A point ${\\bs\\beta}^*$ is a global minimum of $f$ if and only if $\\partial_\\epsilon f_2({\\bs\\beta}^*) \\subseteq \\partial_\\epsilon f_1({\\bs\\beta}^*)$ for all $\\epsilon\\geq0$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nThis is a direct application of results in \\cite[Thm. 1]{taoan97}. Part (b) is immediate. The forward implication of part (a) is immediate as well; the converse implication follows by observing that $f_2$ is a \\emph{polyhedral} convex function \\cite[Thm. 1(ii)]{dcSummary} (see definition therein).\n\\end{proof}\n\n\nLet us note that $\\partial f_1$ and $\\partial f_2$ are both easily computable, and hence, local optimality can be verified given some candidate ${\\bs\\beta}^*$ per Theorem \\ref{thm:optCharac}.\\footnote{For the specific functions of interest, verifying local optimality of a candidate ${\\bs\\beta}^*$ can be performed in $O(p\\min\\{n,p\\}+p\\log p)$ operations; the first component relates to the computation of ${\\mb X}'{\\mb X}{\\bs\\beta}^*$, while the second captures the sorting of the entries of ${\\bs\\beta}^*$. See Appendix \\ref{app:alg1supp} for details.\n}\nWe now discuss the associated alternating minimization scheme (or, equivalently, sequential linearization scheme), shown in Algorithm \\ref{alg:1} for finding local optima of \\eqref{eqn:alg} by making use of Theorem \\ref{thm:optCharac}. 
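Ignoring the tie-breaking refinements discussed below (and detailed in Appendix \ref{app:alg1supp}), the subgradient selection underlying Step 2 of the alternating scheme reduces to a sort: an extreme-point element of $\partial f_2({\bs\beta})$ places $\lambda\operatorname{sgn}(\beta_i)$ on the $k$ largest-magnitude entries and $0$ elsewhere. A minimal Python sketch, where the function and variable names are ours and not part of the algorithm statement:

```python
def gamma_step(beta, lam, k):
    # Hedged sketch (names ours): pick one extreme-point subgradient of
    # f2(beta) = lam * (sum of the k largest |beta_i|), ignoring ties:
    # lam * sgn(beta_i) on the k largest-magnitude entries, 0 elsewhere.
    order = sorted(range(len(beta)), key=lambda i: -abs(beta[i]))
    gamma = [0.0] * len(beta)
    for i in order[:k]:
        gamma[i] = lam * (1.0 if beta[i] > 0 else -1.0 if beta[i] < 0 else 0.0)
    return gamma

# maximizes <beta, gamma> subject to sum_i |gamma_i| <= lam*k and |gamma_i| <= lam
g = gamma_step([3.0, -1.0, 0.5], lam=2.0, k=2)  # -> [2.0, -2.0, 0.0]
```

When several entries tie with $|\beta_{(k)}|$, the choice among extreme points matters for the convergence argument; the appendix describes how to select one outside $\partial f_1({\bs\beta})$ whenever possible.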
Throughout what follows, we make use of the standard notion of a conjugate function, defined as follows:\n\n\\begin{definition}\nFor any function $F:\\mathbb{R}^p\\to\\mathbb{R}$, we define its conjugate function $F^*:\\mathbb{R}^p\\to\\mathbb{R}$ to be the function\n$$F^*(\\bs\\gamma) = \\sup_{{\\bs\\beta}}\\; \\langle\\bs\\gamma,{\\bs\\beta}\\rangle - F({\\bs\\beta}).$$\n\\end{definition}\n\n\nWe will make the following minor technical assumption: in step 2) of Algorithm \\ref{alg:1}, we assume without loss of generality that the $\\bs\\gamma^\\ell$ so computed satisfies the additional criteria:\n\\begin{enumerate}\n\\item it is an extreme point of the relevant feasible region, \n\\item and that if $\\partial f_2({\\bs\\beta}^\\ell) \\not\\subseteq \\partial f_1({\\bs\\beta}^\\ell)$, then $\\bs\\gamma^\\ell$ is chosen such that $\\bs\\gamma^\\ell \\in \\partial f_2({\\bs\\beta}^\\ell) \\setminus\\partial f_1({\\bs\\beta}^\\ell)$.\n\\end{enumerate}\nWith these additional assumptions, \\eqref{eqn:wrtbg} can still nearly be solved in closed form by simply sorting the entries of $|{\\bs\\beta}|$, i.e., by finding $|\\beta_{(1)}|,\\ldots,|\\beta_{(p)}|$.\nWe must take some care to ensure that the second without loss of generality condition on $\\bs\\gamma$ is satisfied. This is straightforward but tedious; the details are shown in Appendix \\ref{app:alg1supp}.\n\n\nUsing this modification, the convergence properties of Algorithm \\ref{alg:1} can be proven as follows:\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:altConvProp}]\nThis is an application of \\cite[Thms. 3-5]{taoan97}. The only modification is in requiring that $\\bs\\gamma^\\ell$ is chosen so that $\\bs\\gamma^\\ell \\in \\partial f_2({\\bs\\beta}^\\ell) \\setminus\\partial f_1({\\bs\\beta}^\\ell)$ if ${\\bs\\beta}^\\ell$ is not a local minimum of $f$---see \\cite[\\textsection3.3]{taoan97} for a motivation and justification for such a modification. 
Finally, the correspondence between $\\bs\\gamma^\\ell\\in \\partial f_2({\\bs\\beta}^\\ell)$ and \\eqref{eqn:wrtbg}, and between ${\\bs\\beta}^{\\ell+1}\\in \\partial f_1^*(\\bs\\gamma^\\ell)$ and \\eqref{eqn:wrtbb}, is clear from an elementary argument applied to subdifferentials of variational formulations of functions.\n\\end{proof}\n\n\\subsection{Algorithm \\ref{alg:1}, Step 2}\\label{app:alg1supp}\n\nHere we present the details of solving \\eqref{eqn:wrtbg} in Algorithm \\ref{alg:1} in a way that ensures that the associated without loss of generality claims hold. In doing so, we also implicitly study how to verify the conditions for local optimality (\\emph{cf.} Theorem \\ref{thm:optCharac}). \nThroughout, we use the $\\operatorname{sgn}$ function defined as \n$$\\operatorname{sgn}(x) = \\left\\{\\begin{array}{rl}\n1,& x>0\\\\\n-1,&x<0\\\\\n0,&x=0.\n\\end{array}\\right.$$\n\nFor fixed ${\\bs\\beta}$, the problem of interest is\n\\begin{equation*}\n\\begin{array}{ll}\n\\displaystyle\\max_{\\bs\\gamma} & \\langle{\\bs\\beta},\\bs\\gamma\\rangle\\\\\n\\operatorname{s.t.}& \\displaystyle\\sum_i |\\gamma_i| \\leq \\lambda k\\\\\n& \\displaystyle|\\gamma_i|\\leq \\lambda\\;\\forall i.\n\\end{array}\n\\end{equation*}\nWe wish to find a maximizer $\\bs\\gamma$ for which the following hold:\n\\begin{enumerate}\n\\item $\\bs\\gamma$ is an extreme point of the relevant feasible region, \n\\item and that if $\\partial f_2({\\bs\\beta}) \\not\\subseteq \\partial f_1({\\bs\\beta})$, then $\\bs\\gamma$ is such that $\\bs\\gamma \\in \\partial f_2({\\bs\\beta}) \\setminus\\partial f_1({\\bs\\beta})$.\n\\end{enumerate}\nAs the problem on its own can be solved by sorting the entries of ${\\bs\\beta}$, the crux of the problem is ensuring that 2) holds.\n\nGiven the highly structured nature of $f_1$ and $f_2$ in our setup, it is simple, albeit tedious, to ensure that such a condition is satisfied. Let $I = \\{i: |\\beta_i|=|\\beta_{(k)}|\\}$. 
If $|I|=1$, the optimal solution is unique, and there is nothing to show. Therefore, we will assume that $|I|\\geq2$. We will construct an optimal solution $\\bs\\gamma$ which satisfies the desired conditions. First observe that we necessarily must have that 1) $\\gamma_i = \\lambda\\operatorname{sgn}(\\beta_i)$ if $|\\beta_i|> |\\beta_{(k)}|$ and 2) $\\gamma_i=0$ if $|\\beta_i|<|\\beta_{(k)}|$. We now proceed to define the rest of the entries of $\\bs\\gamma$. We consider two cases:\n\n\\begin{enumerate}\n\\item First consider the case when $|\\beta_{(k)}|>0$. We claim that $\\partial f_2({\\bs\\beta}) \\not\\subseteq \\partial f_1({\\bs\\beta})$. To do so, we will inspect the $i$th entries of $\\partial f_1({\\bs\\beta})$ for $i\\in I$; as such, let $P_i^j =\\{\\delta_i: \\bs\\delta\\in \\partial f_j({\\bs\\beta})\\}$ for $j\\in\\{1,2\\}$ and $i\\in I$ (a projection). For each $i\\in I$, we have using basic convex analysis that $P_i^1$ is a singleton: $P_i^1 = \\{\\langle{\\mb X}_i,{\\mb X}{\\bs\\beta}-\\mb y\\rangle + (\\eta+\\lambda)\\operatorname{sgn}(\\beta_i)\\}$, where ${\\mb X}_i$ is the $i$th column of ${\\mb X}$. In contrast, because $|I|\\geq2$, the set $P_i^2$ is an interval with strictly positive length for each $i\\in I$ (it is either $[-\\lambda,0]$ or $[0,\\lambda]$, depending on whether $\\beta_i<0$ or $\\beta_i>0$, respectively). Therefore, $\\partial f_2({\\bs\\beta}) \\not\\subseteq \\partial f_1({\\bs\\beta})$, as claimed.\n\nFix an arbitrary $j\\in I$. Per the above argument, we must have that $\\langle{\\mb X}_j,{\\mb X}{\\bs\\beta}-\\mb y\\rangle+ (\\eta+\\lambda)\\operatorname{sgn}(\\beta_j)\\neq 0$ or $\\langle{\\mb X}_j,{\\mb X}{\\bs\\beta}-\\mb y\\rangle + (\\eta+\\lambda)\\operatorname{sgn}(\\beta_j)\\neq\\lambda\\operatorname{sgn}(\\beta_j)$. In the former case, set $\\gamma_j=0$, while in the latter case we define $\\gamma_j=\\lambda\\operatorname{sgn}(\\beta_j)$ (if both are true, either choice suffices). 
It is clear that it is possible to fill in the remaining entries of $\\gamma_i$ for $i\\in I\\setminus\\{j\\}$ in a straightforward manner so that $\\bs\\gamma\\in \\partial f_2({\\bs\\beta})$. Further, by construction, $\\bs\\gamma\\notin \\partial f_1({\\bs\\beta})$, as desired.\n\n\n\\item Now consider the case when $|\\beta_{(k)}|=0$. Using the preceding argument, we see that $P_i^1$ is the interval $[\\langle{\\mb X}_i,{\\mb X}{\\bs\\beta}-\\mb y\\rangle-(\\eta+\\lambda),\\langle{\\mb X}_i,{\\mb X}{\\bs\\beta}-\\mb y\\rangle+\\eta+\\lambda] $ for $i\\in I$. In contrast, $P_i^2$ is the interval $[-\\lambda,\\lambda]$ for $i\\in I$. If for all $i\\in I$ one has that $P_i^2\\subseteq P_i^1$, then the choice of $\\gamma_i$ for $i\\in I$ is obvious: any optimal extreme point $\\bs\\gamma$ of the problem will suffice. (Note here that it may or may not be that $\\partial f_2({\\bs\\beta}) \\subseteq \\partial f_1({\\bs\\beta})$. This entirely depends on $\\beta_i$ for $i\\notin I$.)\n\nTherefore, we may assume that there exists some $j\\in I$ so that $P_j^2\\not\\subseteq P_j^1$. (It follows immediately that $\\partial f_2({\\bs\\beta}) \\not\\subseteq \\partial f_1({\\bs\\beta})$.) We must have that $\\langle{\\mb X}_j,{\\mb X}{\\bs\\beta}-\\mb y\\rangle -(\\eta+\\lambda)>-\\lambda$ or $\\langle{\\mb X}_j,{\\mb X}{\\bs\\beta}-\\mb y\\rangle + (\\eta+\\lambda)<\\lambda$. In the former case, set $\\gamma_j=-\\lambda$, while in the latter case we define $\\gamma_j=\\lambda$ (if both are true, either choice suffices). It is clear that it is possible to fill in the remaining entries of $\\gamma_i$ for $i\\in I\\setminus\\{j\\}$ in a straightforward manner so that $\\bs\\gamma\\in \\partial f_2({\\bs\\beta})$. 
By construction, $\\bs\\gamma\\notin \\partial f_1({\\bs\\beta})$, as desired.\n\n\n\\end{enumerate}\n\n\nIn either case, we have that one can choose $\\bs\\gamma\\in\\partial f_2({\\bs\\beta})$ so that 1) $\\bs\\gamma$ is an extreme point of the feasible region $\\{\\bs\\gamma:\\sum_i|\\gamma_i|\\leq\\lambda k,\\; |\\gamma_i|\\leq \\lambda\\;\\forall i\\}$ and that 2) $\\bs\\gamma \\in \\partial f_2({\\bs\\beta}) \\setminus\\partial f_1({\\bs\\beta})$ whenever $\\partial f_2({\\bs\\beta}) \\not\\subseteq \\partial f_1({\\bs\\beta})$. This concludes the analysis; thus, we have shown the validity (and computational feasibility) of the without loss of generality claim present in Algorithm \\ref{alg:1}. Indeed, per our analysis, Step 2 in Algorithm \\ref{alg:1} can be solved in $O(p\\min\\{n,p\\}+p\\log p)$ operations (sorting of ${\\bs\\beta}$ in $O(p\\log p)$ followed by $O(p)$ conditionals and gradient evaluation in $O(np)$). Moreover, if we keep track of gradients in Step 3, there is no need to recompute them in Step 2, and therefore in practice Step 2 has the same complexity as sorting a list of $p$ numbers. (We assume that ${\\mb X}'\\mb y$ has been computed offline and stored throughout for simplicity.)\n\n\n\n\\subsection{Algorithm \\ref{alg:admm}, Step 3}\\label{app:admmsupp}\n\nHere we show how to solve Step 3 in Algorithm \\ref{alg:admm}, namely, solving the orthogonal design trimmed Lasso problem\n\\begin{equation}\\label{eqn:pf6}\n\\min_{\\bs\\gamma} \\lambda \\tk{\\bs\\gamma} + \\frac{\\sigma}{2}\\|{\\bs\\beta}-\\bs\\gamma\\|_2^2 - \\langle \\mb q,\\bs\\gamma\\rangle,\n\\end{equation}\nwhere ${\\bs\\beta}$ and $\\mb q$ are fixed. This is solvable in closed form. Let $\\bs\\alpha={\\bs\\beta}+\\mb q\/\\sigma$. 
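Before going through the derivation, note that the resulting closed form---keep the $k$ largest-magnitude entries of $\bs\alpha$ untouched and soft-threshold the remaining entries at level $\lambda/\sigma$---can be prototyped directly. A minimal Python sketch (function and variable names are ours; soft thresholding here means $\operatorname{sgn}(a)(|a|-\lambda/\sigma)_+$):

```python
def trimmed_lasso_prox(alpha, lam, sigma, k):
    # Sketch (names ours) of the closed-form minimizer of
    #   lam * T_k(gamma) + (sigma/2) * ||gamma - alpha||_2^2,
    # as derived in this subsection: the k largest-magnitude entries of
    # alpha are kept untouched; the rest are soft-thresholded at lam/sigma.
    t = lam / sigma
    order = sorted(range(len(alpha)), key=lambda i: -abs(alpha[i]))
    keep = set(order[:k])  # ties broken arbitrarily by the sort

    def soft(a):
        m = abs(a) - t
        if m <= 0:
            return 0.0
        return m if a > 0 else -m

    return [alpha[i] if i in keep else soft(alpha[i]) for i in range(len(alpha))]

gamma = trimmed_lasso_prox([3.0, -0.4, 1.5, 0.2], lam=1.0, sigma=2.0, k=2)
# -> [3.0, 0.0, 1.5, 0.0]  (threshold t = 0.5)
```

On small instances this can be validated by brute force over all size-$k$ supports, since the inner minimization is separable across coordinates.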
First observe that, up to an additive constant that does not depend on $\\bs\\gamma$, we can rewrite \\eqref{eqn:pf6} as\n\\begin{align*}\n\\eqref{eqn:pf6} &= \\min_{\\bs\\gamma} \\lambda \\tk{\\bs\\gamma} + \\sigma\\|\\bs\\gamma-\\bs\\alpha\\|_2^2\/2\\\\\n&= \\min_{\\substack{\\bs\\gamma,\\mb z:\\\\\\sum_i z_i=p-k\\\\\n\\mb z\\in\\{0,1\\}^p}}\\lambda\\langle \\mb z,|\\bs\\gamma|\\rangle + \\sigma\\|\\bs\\gamma-\\bs\\alpha\\|_2^2\/2\\\\\n&= \\min_{\\substack{\\bs\\gamma,\\mb z:\\\\\\sum_i z_i=p-k\\\\\n\\mb z\\in\\{0,1\\}^p}}\\sum_i \\left(\\lambda z_i|\\gamma_i| + \\sigma(\\gamma_i-\\alpha_i)^2\/2\\right).\n\\end{align*}\nThe penultimate step follows via Lemma \\ref{lemma:miprep}. Per this final representation, the solution becomes clear. In particular, let $I$ be a set of $k$ indices of $\\bs\\alpha$ corresponding to the $k$ entries largest in absolute value, $\\alpha_{(1)}$, $\\alpha_{(2)}$, \\ldots, $\\alpha_{(k)}$. (If $|\\alpha_{(k)}| = |\\alpha_{(k+1)}|$, we break ties arbitrarily.) Then a solution $\\bs\\gamma^*$ to \\eqref{eqn:pf6} is\n$$\\gamma_i^* = \\left\\{\\begin{array}{rl}\n\\alpha_i, & i\\in I\\\\\n\\operatorname{soft}_{\\lambda\/\\sigma}(\\alpha_i),&i\\notin I,\n\\end{array}\\right.$$\nwhere $\\operatorname{soft}_{\\lambda\/\\sigma}(\\alpha_i) = \\operatorname{sgn}(\\alpha_i) \\max\\left\\{|\\alpha_i|-\\lambda\/\\sigma,\\,0\\right\\}$.\n\n\n\n\\subsection{Computational details}\\label{app:compdetail}\n\nFor completeness and reproducibility, we also include all computational details. For Figure \\ref{fig:coeffpath}, the following parameters were used to generate the test instance: $n = 100$, $p = 20$, $\\text{SNR} = 10$, \\texttt{julia} seed = 1, $\\eta=0.01$, $k=2$. The example was generated from the following true model:\n\\begin{enumerate}\n\\item ${\\bs\\beta}_\\text{true}$ is a vector with ten entries equal to 1 and all others equal to zero. 
(So $\\|{\\bs\\beta}_\\text{true} \\|_0=10$.)\n\\item The covariance matrix $\\bs\\Sigma$ is generated with $\\Sigma_{ij} = .8^{|i-j|}$.\n\\item The rows of ${\\mb X}$ are drawn i.i.d. from $N(\\mb 0,\\bs\\Sigma)$.\n\\item $\\epsilon_i\\stackrel{\\text{i.i.d.}}{\\sim} N(0, {\\bs\\beta}_\\text{true}'\\bs\\Sigma{\\bs\\beta}_\\text{true}\/\\text{SNR})$.\n\\item $\\mb y$ is then defined as ${\\mb X}{\\bs\\beta}_\\text{true}+\\bs\\epsilon$.\n\n\\end{enumerate}\n\nThe 100 examples for Figure \\ref{fig:optgap} were generated using the following parameters: $n = 100$, $p = 20$, $\\text{SNR} = 10$, \\texttt{julia} seed $\\in\\{ 1,\\ldots,100\\}$, $\\eta=0.01$, $k=2$, $\\text{bigM} = 20$. The MIO problems were solved using Gurobi. Max iterations: alternating minimization---1000; ADMM (inner)---2000; ADMM (outer)---10000. ADMM parameters: $\\sigma=1$, $\\tau=0.9$. The examples themselves had the same structure as the previous example. The optimality gaps shown are relative to the objective in \\eqref{eqn:alg}. The averages are computed as geometric means across the 100 instances and are displayed relative to the optimum (100\\%).\n\n\n\\end{appendices}\n\n\\section*{Acknowledgments}\n\n\nCopenhaver was partially supported by the Department of Defense, Office of Naval Research, through the National Defense Science and Engineering Graduate (NDSEG) Fellowship. Mazumder was partially supported by ONR Grant N000141512342.\n\n\n\\bibliographystyle{IEEEtranS}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Reanalysis of CADIS data}\n\nIn two previous papers (Phleps et al. 2000, 2005) we have studied the\ndistribution of stars in the Milky Way using deep star counts based on Calar \nAlto Deep Imaging Survey data. For this purpose we analyzed five fields which \ncontained a total of 1627 faint stars (15.5 $ < R < $ 23). We do not repeat here \nany details of our analysis, but give again in Table 1 the coordinates \nof the fields. 
At the time of writing those papers we were mainly interested in \ndetermining descriptive parameters of the density distributions\nof the various components of the Milky Way \nsuch as the vertical scale heights of the thin and thick disks, respectively, \ntheir relative normalization and the parameters of the halo density law. We\nrealized, however, already then that, while the 1h, 16h, and 23h fields gave \nvery consistent results, the 9h and to a lesser degree the 13h field led \nto discrepant results in the sense that the vertical density distribution \nof the stars in these fields had, a few kpc above the midplane, a shallower slope \nthan in the three other fields. With the advent of the Data Release 5 of the\nSloan Digital Sky Survey and its subsequent analyses by Juri\\'c et al.~(2006) \nand Belokurov et al.~(2006a, b) overdensely populated parts of the density \ndistribution of the stars in the Milky Way have been revealed in such rich \ndetail that we can now, in retrospect, also interpret the results of the 13h\nfield in a consistent way, whereas the nature of the overdensity in 9h \nremains at present less clear. By pure chance two of the five \nlines--of--sight of the CADIS fields have crossed two separate such \noverdense regions! Of course so few lines--of--sight did not allow the \nidentification of the excess densities as distinct isolated overdensely \npopulated regions in the Milky Way. \n\n\\begin{table}\n\\caption[]{Pointings of the CADIS fields}\n \\label{poi}\n\\[\n\\begin{tabular}{rrrrr}\n\\hline\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n field & R.A. 
& Dec & l & b \\\\ \n & \\multicolumn{4}{c}{deg}\\\\\n \\noalign{\\smallskip}\n \\hline\n \\noalign{\\smallskip}\n 1\\,h &\t 27 &\t 2 & 150 &\t-59 \\\\\n 9\\,h & 138 &\t 46 &\t175 &\t 45 \\\\\n13\\,h &\t207 &\t 6 &\t335 &\t 60 \\\\\n16\\,h & 246 &\t 56 &\t 85 &\t 45 \\\\\n23\\,h & 349 &\t 12 &\t 90 &\t-43 \\\\\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\n \\]\n\\end{table}\n\nSuch features in the density distribution of stars have recently attracted\ngreat interest in the literature, because they almost certainly represent debris\nof satellite galaxies which have fallen into the Milky Way and were then\ndisrupted. Particularly striking are the large filaments like the Sgr stream or \nthe newly discovered `Orphan' stream (Ibata et al.~1997, Majewski et al.~2003, \nBelokurov et al.~2006a, b, Grillmair 2006) which are interpreted as tidal \ntails of dwarf galaxies presently in the process of being cannibalized. \nObviously such accretion events played an important role in the formation \nhistory of the Milky Way.\n\n\nIn this {\\em note} we present a reanalysis of the CADIS data, because in our \nview these data still contribute valuable information on the overdensities. \nIn particular we can trace the overdensities right into the disk of the Milky\nWay, which is not possible with the SDSS data. Moreover, we can provide \nestimates of the masses of the overdensities. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.7cm]{5698f1.ps}\n \\caption[]{Number density distribution of stars [stars\/kpc$^3$] \n perpendicular to the Galactic plane derived from star counts in CADIS \n fields. The five fields are coded as: 1h (squares), 9h (asterisks),\n 13h (circles), 16h (triangles), 23h (pentagons). Only data above $z$ = 2 kpc \n are shown (colour coded in the electronic version). 
The solid line is the \n vertical density profile at the position of the Sun of a smooth Galaxy model \n fitted to the data of the 1h, 16h and 23h fields.}\n \\label{cadidat}\n \\end{figure}\n\n The CADIS star counts were carried out along the lines--of--sight\nwhose coordinates are given in Table 1. In Fig.~1 we show the inferred number \ndensities of stars not as a function of heliocentric distance, but as a function of\nthe distance from the midplane, i.e.~as vertical density profiles perpendicular \nto the Galactic plane. In this way the five fields are projected onto each \nother, which allows a direct comparison of their density distributions. The \ndistributions of the 1h and\n23h fields have been flipped up. As can be seen from Fig.~1 the distribution \nof stars is traced from the outer halo into the disk of the Milky Way. Data \nbelow $z$ = 2 kpc are not shown. The density distributions derived from the \nstar counts in the 1h, 16h, and 23h fields, respectively, are consistent with\neach other within statistical uncertainties. The inclinations of the\nlines--of--sight of the CADIS fields relative to the vertical axis hardly affect\nthe shape of the density profiles at heights of more than 5 kpc above the\nmidplane. At lower heights part of the scatter among the data shown in Fig.~1\ncan be ascribed to the varying viewing directions of the CADIS fields. For the \npresent purposes the 1h, 16h and 23h fields define a vertical reference profile.\nHaving realized this, we repeated the fit of the smooth Galaxy model of\nPhleps et al.~(2005, Eqns.~3 and 5) using only these three fields. As in the\nprevious paper we adopt a vertical scale height of the thin disk of 283 pc. The\nfit to the density distribution in the three reference fields leads to a \nslightly reduced vertical scale height of the thick disk of 900 pc, but to the \nsame normalization of the local density of the thick disk at 4 percent of the \ntotal density at the midplane. 
In the case of a spherical halo model we find an \nindex of the halo density law of $\\alpha = 3.25\\,\\pm\\,0.10$ and in the case of \na halo flattened as $(c\/a)=0.6$ an index of $\\alpha= 2.69\\,\\pm\\,0.09$. The \ndensity profile perpendicular to the Galactic plane ($b=90^\\circ$) of the\nsmooth Galaxy model, which is shown as a solid line in Fig.~1, is in excellent \nagreement with the Galaxy model of Juri\\'c et al.~(2006).\n\nThe 13h field shows a statistically significant excess density relative \nto the reference profile which can be traced from $z$ = 2 to 14 kpc with the\nmaximal deviation at about $z$ = 4 kpc. The heliocentric distances range \nfrom 2.3 to 16 kpc. The coordinates of this field point towards the fringe \nof the Virgo overdensity which is discussed in detail by Juri\\'c et al.~(2006, \ncf.~their Fig.~24). Indeed, Fig.~1 can be directly compared with Fig.~22 \nof Juri\\'c et al.~(2006), where they delineate the overdensity by subtracting \ntheir smooth Galaxy model from the density distribution of stars observed in \nthe meridional section of the Milky Way which contains the Virgo overdensity.\nAbove $z$ = 5 kpc, which corresponds to a galactocentric radius of $R$ = 5.5\nkpc in the 13h field, the vertical profiles of the overdensity found in the \nSDSS DR5 and the CADIS data, respectively, are fully consistent with each \nother. However, in the CADIS data it can be traced right down into the disk of \nthe Milky Way, confirming the supposition of Juri\\'c et al.~(2006) that this might\nbe the case. \n\nMoreover, both data sets have been analyzed as Hess diagrams. In \ntheir Fig.~24 Juri\\'c et al.~(2006) show by subtracting the Hess diagram of a \ncontrol field from the Hess diagram of the Virgo field that the excess \npopulation of stars in the Virgo field is primarily found in the blue branch of\nthe Hess diagram at $(g-r) \\approx 0.4$, where halo stars are located. 
Precisely\nthe same is found in the Hess diagram of the 13h field shown in Fig.~2. There \nis a clear excess in the blue branch of the halo stars at $(b-r) \\approx 0.1$ \nwith respect to the smooth Galaxy model, which has been recalculated using the\nparameters given above. We conclude from this discussion that the excess \ndensity in the 13h CADIS field can be identified as part of the Virgo\noverdensity.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.7cm]{5698f2.ps}\n \\caption[]{Hess diagram of the 13h field. The observed distribution of stars\n in the $R - (b-r)$ plane is colour coded according to the number density of\n stars per bin of size 0.1 $\\times$ 0.1 in magnitude and colour, respectively. \n The colour table is shown at the top. A smooth model of the Galaxy, which has\n been fitted to the reference fields, is shown as isodensity contours. The \n equally spaced contour levels span the same range as the colour table.}\n \\label{hess1}\n \\end{figure}\n\nSubtracting the reference profile from the observed density profile of the 13h \nfield, both now reckoned along the line--of--sight, allows us to\ndetermine the mass of the excess population of stars. We find that 276 out of\nthe 517 stars in the 13h field seem to belong to the density excess. Phleps et\nal.~(2005) have identified, in the CNS4 catalogue (Jahrei{\\ss} \\& Wielen 1997),\nthe local-volume analogues of the blue halo stars in the CADIS fields\n(Fuchs \\& Jahrei{\\ss} 1998). They have shown that the extrapolation of the \nouter\nhalo density law towards the Galactic midplane agrees very well with the number\ndensity of stars with the same colours and absolute magnitudes in the local\nsample. The average mass of these stars is 0.66 ${\\mathcal{M}}_\\odot$\nand the average mass--to--light ratio is ${\\mathcal{M}}\/{\\mathcal{L}_{\\rm V}}\n= 2.7 {\\mathcal{M}}_\\odot\/{\\mathcal{L}}_{{\\rm V}\\odot}$. 
\nThus a number density of 10$^5$ stars per kpc$^3$ corresponds to a mass \ndensity of $6.6\\,\\cdot\\,10^{-5}$ ${\\mathcal{M}}_\\odot\\,pc^{-3}$. We estimate \nfrom the area preserving Lambert projection in Fig.~24 of Juri\\'c et al.~(2006) \nthat the Virgo overdensity subtends an area of 846 square degrees. One CADIS \nfield has a size of 121 square arcminutes. If the mass of the excess population \nof stars in the 13h field is representative for the rest of the Virgo \noverdensity the latter contains a mass of $4.6\\,\\cdot\\,10^6$ \n${\\mathcal{M}}_\\odot$. This is in our view a clear indication that the Virgo\noverdensity is the relic of a shredded dwarf galaxy. Adopting the \nmass--to--light ratio of ${2.7\\,{\\mathcal{M}}_\\odot\/\n{\\mathcal{L}}_{{\\rm V}\\odot}}$ found above the stars in the Virgo overdensity\nwould have a total luminosity of $1.7\\,\\cdot\\,10^6$ \n${\\mathcal{L}}_{{\\rm V}\\odot}$ or an absolute magnitude of M$_{\\rm V}$ =\n -- 11 mag, which is quite typical for Local Group dwarf spheroidal galaxies. \nAlso the inferred mass--to--light ratio is typical for the stellar populations \nof dwarf spheroidals (Mateo 1998).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.7cm]{5698f3.ps}\n \\caption[]{The same as Fig.~2 but for the 9h field.}\n \\label{hess2}\n \\end{figure}\n\nThe 9h field shows at distances between $z$ = 2 and 15 kpc above the midplane\nan overdensity relative to the reference profile, which is even more \npronounced than that in the 13h field. The maximal deviation is found around \n$z$ = 9 kpc. The corresponding heliocentric distances range from 3 to 21 kpc.\nHowever, this overdensity is more difficult to associate confidently with one of\nthe overdensities seen in the SDSS data (Juri\\'c et al.~2006, Belokurov et \nal.~2006a, b). 
We note that most of the density excess in the Hess diagram of\nthe 9h field shown in Fig.~3 is found in the colour range\n$(b-r) \\approx 0.5 - 1$ and falls right into the gap\nbetween the two branches of the disk and halo stars, respectively. Judging from\nthe Hess diagrams presented by Juri\\'c et al.~(2006) for the Virgo field and its\ncontrol field, stars of such intermediate colour have been eliminated by the\ncolour cut $(g - r) < 0.4$ from the sample of Belokurov et al.~(2006a). \nFig.~1 of Belokurov et al.~(2006b) also shows the distribution of stars with\ncolours $0.4 < (g - r) \\leq 0.6$, and we find the 9h field at the eastern \nfringe of the northern part of the Monoceros stream (Newberg et al.~2002) which\nin this colour range seems to be much more extended than at the bluer \n$(g - r) < 0.4$ colours (Belokurov et al.~2006a). Similarly the Monoceros \nstream is seen in the data of Juri\\'c et al.~(2006, their Fig.~9) mainly at \nslightly lower galactic latitudes than that of the 9h field. Its \nline--of--sight does pass, though, through the \ninner fringe of the overdensity in anticenter direction in the panels of\nFig.~9 which show the density distribution of stars with colours $0.1 < \n(r - i) \\leq 0.15$ at heights of 4 and 5 kpc above the midplane. Juri\\'c et\nal.~(2006) do study the distribution of redder stars, but cannot trace it beyond\na few kpc from the Sun. Pe\\~narrubia et al.~(2005) have modelled the Monoceros\nstream by numerical simulations as a tidal stream. Their simulations \nshow that part of the stream might well be seen in the direction of the 9h \nfield and at the heliocentric distances of the overdensity in the 9h field.\nThus this overdensity may be tentatively associated with the Monoceros \nstream. However, there is a further aspect of the interpretation of the\noverdensity in the 9h field. 
The 9h field is positioned exactly on the \ngalactocentric great circle on which the Orphan stream lies and which also passes \nthrough the high velocity cloud complex A (Belokurov et al.~2006a, b,\nWakker 2001). The position is roughly midway between the northern tip of\nthe Orphan stream and Complex A. Belokurov et al.~(2006b) have determined \na heliocentric distance of the northern tip of the Orphan stream of 35$\\pm$10\nkpc, whereas the distance to Complex A is 10.1$\\pm$0.9 kpc. This discrepancy \ncan be resolved in a natural way if Complex A is on a different wrap around the\nGalactic center than the Orphan stream. The density profile of the 9h field\ngives the impression that there is an extra `hump' in the density excess of\nstars relative to the reference fields, centered around $z\n\\approx 9$ kpc or a heliocentric distance of 13 kpc. It is tempting to\nspeculate that the `hump stars' are associated with the second wrap of\nthe Orphan stream. The width of the Orphan stream is estimated to be only a few\nhundred pc (Belokurov et al.~2006b). That would be consistent with a narrow feature\nin the density profile of the overdensity. The fairly long elongated \noverdensity in the density profile of the 9h field must then still be ascribed\nto the Monoceros stream.\n\nIn summary, we conclude that the Virgo overdensity has almost certainly been \nseen in the 13h CADIS field, although we did not previously recognize it as\nsuch. We have found in the 9h field another overdensity which is as significant\nas the Virgo overdensity in the 13h field, which we tentatively attribute to the\nMonoceros and Orphan streams. Interestingly both features could be traced to \ndistances less than 3 kpc from the Sun. 
Whether they are related to any of the star \nstreams identified as fine structure in the phase space distribution function \nof stars in the solar neighbourhood (Helmi et al.~1999, Chiba \\& Beers 2001, \nNavarro et al.~2004, Helmi et al.~2006, Arifyanto \\& Fuchs 2006) is at present \nunclear.\n\n\\acknowledgements{We thank Vasily Belokurov, Wyn Evans and \nHans--Walter Rix for very helpful discussions.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}