\\section{Introduction}\n\nLenticular, or S0, galaxies make up some 25\\% of large galaxies in the\nlocal Universe (Dressler 1980), so understanding how\nthey form must constitute a significant element of any explanation of\ngalaxy evolution. Their location at the crossroads between\nellipticals and spirals in Hubble's tuning-fork diagram underlines\ntheir importance in attempts to develop a unified understanding of\ngalaxy evolution, but also means that it is not even clear to which of\nthese classes of galaxy they are more closely related.\n\nOne often-cited piece of evidence comes from the fact that the\nproportion of S0s is substantially smaller in distant ($z\sim0.5$) clusters\nthan in nearby ones, while spirals show the opposite trend (Dressler\net al.\\ 1997), strongly suggesting a transformation from\none to the other. However, even if this scenario is accepted, it does\nnot answer the question as to whether S0s are more closely related to\nspirals or ellipticals, which is intimately connected to the mechanism\nof transformation. If the transformation simply involves a spiral\ngalaxy losing its gas content through ram pressure stripping (Gunn \\&\nGott 1972) or ``strangulation'' (Larson et al.\\ 1980), \nso ceasing star formation and fading into an S0, then\nclearly S0s and spirals are closely related. However, it is also\npossible that mergers can cause such a transformation: while\nequal-mass mergers between spirals create elliptical galaxies, more\nminor mergers can heat the original disk of a spiral and trigger a\nbrief burst of star formation, using up the residual gas and leaving\nan S0. 
In such a merger scenario, the mechanism for creating an S0 is\nmuch more closely related to that for the formation of ellipticals.\n\nClues to which mechanism is responsible are to be found in the\n``archaeological record'' that can be extracted from spectral\nobservations of nearby S0s. In particular, the present-day stellar\ndynamics should reflect the system's origins, with the gentle gas\nstripping of a spiral resulting in stellar dynamics very similar to\nthe progenitor spiral, while the merger process will heat the stars,\nresulting in kinematics more dominated by random motions, akin to an\nelliptical. In addition, the absorption line strengths can be\ninterpreted through stellar population synthesis to learn about the\nmetallicity and star formation histories of these systems. Even more\ninterestingly, these dynamical and stellar properties can be compared\nto see if a consistent picture can be constructed for the formation of\neach system. I present here some recent \nevidence suggesting that such a consistent picture is indeed emerging. \n\n\n\\section{Evidence from the Tully-Fisher relation}\n\nCombining published data with high-quality VLT\/FORS spectroscopy of a sample\nof Fornax S0s (Bedregal et al.\\ 2006a), we have carried out a combined\nstudy of the Tully-Fisher relation and the stellar populations of these\ngalaxies. Despite the relatively small sample and the considerable\ntechnical challenges involved in determining the true rotation velocity $V_{\\rm\nrot}$ from absorption line spectra of galaxies with significant non-rotational\nsupport (see Mathieu et al.\\ 2002), some very interesting results arise.\nS0s lie systematically below the spiral galaxy Tully-Fisher relation in both\nthe optical and near-infrared (Figure~1). If S0s are the descendants of spiral\ngalaxies, this offset can be naturally interpreted as arising from the\nluminosity evolution of spiral galaxies that have faded since ceasing star\nformation. 
Moreover, the amount of fading implied by the offset of individual\nS0s from the spiral relation seems to correlate with the luminosity-weighted\nage of their stellar population, particularly at their centres (Figure~2). \nThis correlation suggests a scenario in which the star formation clock stopped\nwhen gas was stripped out from a spiral galaxy and it began to fade into an S0.\nThe stronger correlation at small radii indicates a last-gasp burst of\nstar formation in this region. See Bedregal, Arag\\'on-Salamanca \\& Merrifield\n(2006b) for details. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=3.3in,width=4.0in,angle=0]{Aragon-Salamanca_fig1.ps}\n\\end{center}\n\\caption{$B$-band Tully-Fisher relation (TFR) for \n S0 galaxies using\n different samples from the literature (open symbols) and our VLT Fornax \n data (filled circles). \n The solid and dashed lines show two independent determinations of \n the TFR for local spirals. On average (dotted line),\n S0s are $\\sim3$ times fainter\n than spirals at similar rotation velocities \n (Bedregal, Arag\\'on-Salamanca \\& Merrifield 2006b). \n }\n\\label{fig:fig1}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=1.65in,angle=0]{Aragon-Salamanca_fig2.eps}\n\\end{center}\n\\caption{\n For our VLT Fornax data we plot the\n shift in magnitudes from the\n $B$-band spiral TFR versus the stellar population age at the galaxy\n centre (left panel), at $1\\,R_e$ (middle panel) and at $2\\,R_e$ (right\n panel). The lines show models for fading spirals. \n Note that the correlation is strongest for the central stellar \n populations of the galaxies, suggesting that the last episode of star\n formation took place there (Bedregal, Arag\\'on-Salamanca \\& Merrifield 2006b). 
\n }\n\\label{fig:fig2}\n\\end{figure}\n\n\n\n\n\\section{Evidence from the globular cluster populations}\n\nEntirely consistent and independent evidence comes from our recent\nstudies of the properties of the globular cluster (GC) systems and stellar\npopulations of S0s (Arag\\'on-Salamanca, Bedregal \\& Merrifield 2006; Barr et\nal.\\ 2007). If interactions with the intra-cluster medium are responsible for\nthe transformation of spirals into S0s, the number of globular clusters in\nthese galaxies will not be affected. That is probably not true if more violent\nmechanisms such as galaxy-galaxy interactions are the culprit (see, e.g.,\nAshman \\& Zepf 1998). If we assume that the number of globular clusters remains\nconstant, the GC specific frequency ($S_N\\propto\\,$number of GCs per unit\n$V$-band luminosity) would increase due to the fading of the galaxy. On\naverage, the GC specific frequency is a factor $\\sim 3$ larger for S0s than it\nis for spirals (Arag\\'on-Salamanca et al.\\ 2006), meaning that\nin the process S0s become, on average, $\\sim 3$ times fainter than their\nparent spiral. Furthermore, in this scenario the amount of fading (or increase\nin GC specific frequency) should grow with the time elapsed since the star\nformation ceased, i.e., with the luminosity-weighted age of the S0 stellar\npopulation. Figure~3 shows that this is indeed the case, adding considerable\nweight to the conclusions reached from our Tully-Fisher studies. \n\n\n\n\n\\section{Additional evidence from the stellar populations and dynamics} \n\nIn Bedregal et al.\\ (2007) we show that the central absorption-line indices in\nS0 galaxies correlate well with the central velocity dispersions, in accordance\nwith what previous studies found for elliptical galaxies. 
However, when these\nline indices are converted into stellar population properties, we find that the\nobserved correlations seem to be driven by systematic age and alpha-element\nabundance variations, and not changes in overall metallicity as is usually\nassumed for ellipticals. These correlations become even tighter when the\nmaximum circular velocity is used instead of the central velocity dispersion. \nThis improvement in correlations is interesting because the maximum rotation\nvelocity is a better proxy for the S0's dynamical mass than its central\nvelocity dispersion. Finally, the $\\alpha$-element over-abundance seems to be\ncorrelated with dynamical mass, while the absorption-line-derived ages also\ncorrelate with these over-abundances. These correlations imply that the most\nmassive S0s have the shortest star-formation timescales and the oldest stellar\npopulations, suggesting that mass plays a large role in dictating the life\nhistories of S0s.\n\n\n\n\n\n\n\n\\section{Conclusions}\n\n\nThe stellar populations, dynamics and globular clusters of S0s provide\nevidence consistent with these galaxies being the descendants of fading spirals\nwhose star formation ceased. However, caution is needed since significant\nproblems could still exist with this picture (see, e.g., Christlein \\&\nZabludoff 2004; Boselli \\& Gavazzi 2006). Moreover, the number of \ngalaxies studied\nhere is still small, and it would be highly desirable to extend this kind of\nstudy to much larger samples covering a broad range of galaxy masses and\nenvironments. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[height=2.9in,angle=0]{Aragon-Salamanca_fig3.eps}\n\\end{center}\n\\caption{\nLog$_{10}$ of the luminosity-weighted ages in Gyr \n vs.\\ the globular cluster specific frequency\n ($S_N$) of S0s. The line shows the evolution \n expected for a\n fading galaxy according to the stellar population models of \n Bruzual \\& Charlot\n (2003). 
The correlation between the fading of the galaxies \n (or increase in $S_N$) and the spectroscopically-determined\n age of their stellar populations is clearly consistent with the predictions of\n a simple fading model. \n Note that the $S_N$ value for NGC3115B\n is very unreliable and almost certainly \n severely overestimated due\n to contamination from the GC systems of neighbouring galaxies. \n See Barr et al.\\ (2007) for details. \n }\n\\label{fig:fig3}\n\\end{figure}\n\n\\begin{acknowledgments}\n\nI thank A.G.\\ Bedregal, M.\\ Merrifield, J.M.\\ Barr, B.\\ Milvang-Jensen, S.P.\\\nBamford and N. Cardiel for allowing me to discuss here results obtained with\ntheir help. \n\n\n\\end{acknowledgments}\n\n\n\n\\section{Introduction}\n\nSuper-resolution fluorescence microscopy has transformed many domains of biology.\nTo date, there are two far-field classes of techniques that lead to fluorescence-based microscopy with a resolution far beyond the Rayleigh diffraction limit \\cite{ref1}. The first class is generically referred to as super-resolved ensemble fluorophore microscopy and the second as super-resolved single fluorophore microscopy. \n\nThe first class of techniques can be implemented either by stimulated emission depletion (STED) of fluorescence from all molecules in a sample except those in a small region of the imaged biological sample or by structured illumination microscopy (SIM). STED builds on the deterministic transitions that either switch fluorescence on or off to reduce the emission volume \\cite{ref4,ref5,ref6,STED}. In SIM, interference patterns used in sample illumination lead to a twofold\ngain in resolution \\cite{ref2,ref3,SIM}. \n\n\nThe second class of techniques is based on the a priori knowledge that the measurements at a given time are from single fluorescent molecules that are separated from each other by distances larger than the Rayleigh diffraction limit. 
This information is used to super-localize single molecules in an image, which means finding the position of each molecule to\na precision better than the Rayleigh diffraction limit. Super-resolved single fluorophore microscopy relies on the stochastic switching of fluorophores in a time sequence to localize single molecules. It can be implemented either by photo-activated localization microscopy (PALM) \\cite{ref7,PALM} or by stochastic optical reconstruction microscopy (STORM) \\cite{ref8,ref9}. \n\n\nA major disadvantage of these two classes of techniques is that they suffer from the trade-off between the spatial and temporal resolutions, which makes live cell imaging quite challenging. On one hand, super-resolved single fluorophore microscopy techniques require hundreds of thousands of exposures. This is because in every frame, the diffraction-limited image of each\nemitter must be well separated from its neighbours, to enable the identification of its exact\nposition. This inevitably leads to a long acquisition cycle, typically on the order of several minutes. Consequently, fast dynamics cannot be captured by these techniques. On the other hand, SIM techniques require only tens of frames (and thus have a high temporal resolution), but their spatial resolution enhancement is limited to a factor of two. \n\nMost previous works on enhancing the temporal resolution focused on improving the localization accuracy in PALM\/STORM. Some of them (such as CS-STORM \\cite{CS-STORM} and SPARCOM \\cite{ref11}) used compressive sensing (CS) recovery algorithms to reduce the number of measurements, but any PALM\/STORM-based technique inevitably suffers from the trade-off challenge. The trade-off originates from the fact that stochastic single molecule switching activates only a small part of the solution in each frame. \n\nTo better explain this, let us describe the mathematical problem for PALM\/STORM. 
We activate the fluorescent solution by stochastic switching over $T$ frames. Denote by $\\rho_1,\\rho_2,\\cdots,\\rho_T$ the sparse distributions of the activated fluorescent molecules (point scatterers). Then we collect the corresponding measured images $Y_1,Y_2,\\cdots,Y_T$. Hence we have\n$$\nS \\rho_t :=h\\circledast \\rho_t = Y_t, \\quad t=1,2,\\cdots,T,\n$$\nwhere $h$ is a blurring kernel and $\\circledast$ is the convolution product. \nIn CS-STORM, we apply compressive sensing to the following deconvolution problem for reconstructing the unknown $\\rho_t$:\n\\begin{equation} \\label{l10}\n \\min_{\\rho_t} \\| \\rho_t \\|_1 \\quad \\mbox{subject to}\\quad \\rho_t \\geq 0 \\mbox{ and } \\| S \\rho_t - Y_t\\|_2\\leq \\sigma,\n\\end{equation}\nwhere $\\sigma$ is the noise level. \nThen the super-resolved image can be obtained by\n$$\n \\rho = \\sum_{t=1}^T \\rho_t.\n$$\nWhen the density of activated molecules in each single frame is small, the point scatterers are, on average, well separated and easy to localize by a deconvolution procedure. But the lower the density of molecules, the higher the number of frames $T$. This is the spatio-temporal resolution trade-off of PALM\/STORM-based approaches. \n\n\nIn \\cite{beam2022}, a novel imaging modality called Brownian Excitation Amplitude Modulation microscopy (BEAM) is introduced, which is based on speckle imaging and compressive sensing. On one hand, it significantly reduces the number of exposures by exposing most of the solution in each frame to the illumination pattern. On the other hand, it involves multiple incoherent illuminations of the biological sample and achieves super-resolution\nmicroscopy across both space and time from a sequence of diffraction-limited images and can capture fast dynamics of biological samples. Hence, BEAM outperforms the PALM\/STORM-based techniques. Its two key ingredients are spatial sparsity and temporal incoherence. 
BEAM combines the sparsity of the point scatterers and the incoherence between the illumination patterns in different frames. \n\n\nThere are some works related to BEAM. The Blind-SIM \\cite{blindSIM} and RIM \\cite{RIM} approaches use random speckle modulations, but compressive sensing was not exploited there, and so the spatial resolution enhancement is limited to a factor of two (they also require a large number of measurements). The Joint Sparse Recovery approach in \\cite{JSR-Ye} uses both random speckles and compressive sensing. But their inverse problem is formulated in MMV (multiple measurement vector) form, whose sensing matrix lacks incoherence; this is not optimal for CS and hence requires a large number of measurements. \n\nLet us now briefly describe the inverse problem in BEAM. \nSuppose we have multiple speckle patterns $I_1,I_2,\\cdots,I_T$ illuminating the sparse fluorescent solution and then collect the corresponding measured images $Y_1,Y_2,\\cdots,Y_T$. Then we have\n$$\nA_t \\rho :=h\\circledast (I_t \\rho) = Y_t, \\quad t=1,2,\\cdots,T,\n$$\nwhere, as before, $h$ is a blurring kernel. We apply compressive sensing to reconstruct the unknown $\\rho$ with estimated speckle patterns $I_t$ \\cite{candes2014towards, denoyelle2017support, duval2015exact, morgenshtern2016super, morgenshtern2020super}:\n\\begin{equation} \\label{l1}\n \\min_{\\rho} \\| \\rho \\|_1 \\quad \\mbox{subject to}\\quad \\rho \\geq 0 \\mbox{ and } \\| A \\rho - Y\\|_2\\leq \\sigma,\n\\end{equation}\nwhere $\\sigma$ is the noise level, $A =(A_t)_{t=1,\\ldots,T}$ is the sensing matrix, and $Y=(Y_1,\\ldots,Y_T)^\\top$ (with $\\top$ denoting the transpose). \nNotice that the columns of $A$ have a high degree of incoherence coming from the Brownian motion of the speckle patterns $I_t$. This incoherence in the sensing matrix is an optimal feature for compressive sensing to work properly. 
The sparsity prior in BEAM enhances the spatial resolution (beyond SIM's two-fold enhancement), and at the same time, the required number of measurements stays small since our sensing matrix satisfies the CS requirement (incoherence). \nTo the best of our knowledge, BEAM is the first compressive imaging approach satisfying the incoherence requirement, which is the key to overcoming the trade-off barrier between the spatial and temporal resolutions.\n\nBEAM can then be seen as the first experimental realization of spatio-temporal sparsity-based super-resolved imaging, where threefold resolution enhancement can be achieved by applying compressive sensing over only a few frames. \nMotivated by BEAM, our aim in this paper is to pioneer the mathematical foundation of spatio-temporal sparsity-based super-resolution. We consider mathematical models similar to (\\ref{l1}) \nbut tackle instead the sparsest solution ($l_0$ pseudo-norm minimizer) under the measurement constraints. The sparsest solution is usually the one targeted in sparsity-based imaging and also in the general compressive sensing theory (using tractable convex $l_1$-minimization). Moreover, we consider that the values of the illumination patterns may not be known. Our main results (Theorems \\ref{thm:l0normrecovery0} and \\ref{thm:twodl0normrecovery0}) consist in deriving lower bounds for the resolution enhancement in both the one- and two-dimensional cases. More precisely, we estimate the minimal separation distance for stable recovery of point scatterers from multi-illumination incoherent data. Our estimates reveal the dependence of the resolution enhancement on the cut-off frequency of the imaging system, the signal-to-noise ratio, the sparsity of the point scatterers, and more importantly on the incoherence of the illumination patterns. 
Our theory highlights the importance of incoherence in the illumination patterns and theoretically demonstrates the possibility of achieving super-resolution for sparsity-based multi-illumination imaging using very few frames. \n\n\nIt is worth emphasizing that there are many mathematical theories for estimating the stability of super-resolution in the single measurement case. To our knowledge, the first work was by Donoho \\cite{donoho1992superresolution}. He considered a grid setting where a discrete measure is supported on a lattice (spacing by $\\Delta$) and regularized by a so-called \"Rayleigh index\" $d$. He demonstrated that the minimax error for the recovery of the strength of the scatterer is bounded by $SRF^{\\alpha}\\sigma$ ($2d-1\\leq \\alpha \\leq 2d+1$) with $\\sigma$ being the noise level and the super-resolution factor $SRF = 1\/({\\Omega \\Delta})$. Here, $\\Omega$ is the cut-off frequency. Donoho's results emphasized the importance of sparsity (encoded in the Rayleigh index) in the super-resolution problem. In \\cite{demanet2015recoverability}, the authors considered $n$-sparse scatterers supported on a grid and obtained sharper bounds ($\\alpha=2n-1$) using an estimate of the minimum singular value for the measurement matrix. The multi-clump case was considered in \\cite{li2021stable, batenkov2020conditioning} and similar minimax error estimations were derived. See also other related works for an understanding of the resolution limit from the perspective of sample complexity \\cite{moitra2015super,chen2020algorithmic}. In \\cite{akinshin2015accuracy, batenkov2019super}, the authors considered the minimax error for recovering off-the-grid point scatterers. Based on an analysis of the \"Prony-type system\", they derived bounds for both strength and location reconstructions of the point scatterers. 
More precisely, they showed that for $\\sigma \\lessapprox (SRF)^{-2p+1}$, where $p$ is the number of point scatterers in a cluster, the minimax error for the strength and the location recoveries scale respectively as $(SRF)^{2p-1}\\sigma$ and $(SRF)^{2p-2} {\\sigma}\/{\\Omega}$. Moreover, for the isolated non-cluster point scatterer, the corresponding minimax error for the strength and the location recoveries scale respectively as $\\sigma$ and ${\\sigma}\/{\\Omega}$.\n\nDue to the popularity of sparse modeling and compressive sensing, many sparsity-promoting algorithms were proposed to address the super-resolution problem. In the groundbreaking work of Cand\\`es and Fernandez-Granda \\cite{candes2014towards}, it was demonstrated that off-the-grid sources can be exactly recovered from their low-pass Fourier coefficients by total variation minimization under a minimum separation condition. Other sparsity-promoting methods include the BLASSO algorithm \\cite{azais2015spike, duval2015exact, poon2019} and the atomic norm minimization method \\cite{tang2013compressed, tang2014near}. These two algorithms were proved to be able to stably recover the sources under a minimum separation condition or a non-degeneracy condition. The resolution of these convex algorithms is limited by a distance of the order of the Rayleigh diffraction limit \\cite{tang2015resolution, da2020stable} for recovering general signed point scatterers. But for the case of positive sources \\cite{morgenshtern2016super, morgenshtern2020super, denoyelle2017support}, there is no such limitation on the resolution and the performance of these algorithms could be nearly optimal. 
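The favourable behaviour of the positive-source case can be illustrated numerically. The sketch below is a minimal grid-based stand-in for the convex programs cited above (the grid size, cut-off $K$, and support threshold are illustrative choices, not taken from the cited works): it recovers two positive on-grid point sources, separated below the Rayleigh length, from noiseless low-pass Fourier data by nonnegative least squares, so that positivity alone acts as the sparsity-promoting constraint.

```python
import numpy as np
from scipy.optimize import nnls

K = 10                                   # measured frequencies: -K, ..., K
grid = np.linspace(0, 1, 400, endpoint=False)
true_pos = np.array([0.50, 0.53])        # separation 0.03 < Rayleigh length 1/(2K) = 0.05
true_amp = np.array([1.0, 1.0])          # positive amplitudes

freqs = np.arange(-K, K + 1)
def lowpass(positions):
    # columns e^{2*pi*i*k*y} of the Fourier (Vandermonde) matrix
    return np.exp(2j * np.pi * np.outer(freqs, positions))

y = lowpass(true_pos) @ true_amp         # noiseless low-pass measurements

# stack real and imaginary parts so that NNLS works over the reals
A = lowpass(grid)
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])

coef, _ = nnls(A_ri, y_ri)               # min ||A x - y||_2 subject to x >= 0
support = grid[coef > 0.1]               # recovered source locations
```

In this noiseless on-grid example, the nonnegative fit localizes both sources well below the Rayleigh length without any explicit $l_0$ or $l_1$ penalty, in line with the positive-source results cited above.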
\n\nMore recently, to analyze the resolution for recovering multiple point scatterers, in \\cite{liu2021mathematicaloned, liu2021mathematicalhighd, liu2021theorylse} the authors defined \"computational resolution limits\", which characterize the minimum required distance between point scatterers so that their number and locations can be stably resolved under a certain noise level. By developing a non-linear approximation theory in a so-called Vandermonde space, they derived bounds for computational resolution limits for a deconvolution problem \\cite{liu2021mathematicaloned} and a line spectral problem \\cite{liu2021theorylse} (equivalent to the super-resolution problem considered here). In particular, they showed in \\cite{liu2021theorylse} that the computational resolution limit for number and location recovery should be respectively $\\frac{C_{\\mathrm{num}}}{\\Omega}(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-2}}$ and $\\frac{C_{\\mathrm{supp}}}{\\Omega}(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}}$, where $C_{\\mathrm{num}}, C_{\\mathrm{supp}}$ are constants and $m_{\\min}$ is the minimum strength of the point scatterers. Their results demonstrate that when the point scatterers are separated by more than $\\frac{C_{\\mathrm{supp}}}{\\Omega}(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}}$, we can stably recover the scatterer locations. Conversely, when the point scatterers are separated by a distance less than $O(\\frac{C_{\\mathrm{supp}}}{\\Omega}(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}})$, stably recovering the scatterer locations is impossible in the worst case. This resolution limit indicates that super-resolution is possible for the single measurement case but requires a very high signal-to-noise ratio (according to the exponent $\\frac{1}{2n-1}$). This explains why it is so hard to achieve super-resolution by a single illumination. Therefore, we have to resort to multiple illuminations in order to super-resolve point scatterers. 
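The necessity of multiple illuminations argued above can also be seen directly in the conditioning of the measurement matrix. The following sketch uses illustrative values only; in particular, the $\pm 1$ mutually orthogonal modulation patterns are an idealized stand-in for incoherent speckle illuminations and are not a construction from this paper. It compares the smallest singular value of the single-frame Fourier (Vandermonde) matrix for three closely spaced point scatterers with that of the stacked multi-frame matrix whose entries are $I_t(y_j)e^{2\pi i k y_j}$:

```python
import numpy as np

K = 10
freqs = np.arange(-K, K + 1)                    # measured frequencies -K, ..., K
y = np.array([0.500, 0.505, 0.510])             # spacing ~0.1 Rayleigh lengths

# single illumination: plain Fourier/Vandermonde matrix, 21 x 3
V = np.exp(2j * np.pi * np.outer(freqs, y))
s_single = np.linalg.svd(V, compute_uv=False)[-1]

# four frames with idealized incoherent (mutually orthogonal +-1) illumination values
I = np.array([[1, 1, 1], [1, -1, 1], [1, 1, -1], [1, -1, -1]], dtype=float)
M = np.vstack([I[t][None, :] * V for t in range(I.shape[0])])   # stacked (84 x 3) matrix
s_multi = np.linalg.svd(M, compute_uv=False)[-1]
```

The three columns of $V$ are nearly collinear, so `s_single` is tiny and recovery from a single frame is severely ill-conditioned, which is why it demands a very high signal-to-noise ratio. The modulations decorrelate the columns (here they even become exactly orthogonal, so `s_multi` equals $\sqrt{4\cdot 21}\approx 9.17$), restoring a well-conditioned system from only four frames.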
\n\nAs we have seen, the mathematics behind the resolution limit for single-illumination imaging is close to being fully understood. Nevertheless, the multiple-illumination case still lacks a mathematical foundation. Thus, our paper serves as a first step towards understanding the resolution limit (or performance) of multi-illumination imaging. We consider both the one- and two-dimensional cases. Our results demonstrate that the resolution for the multiple illumination imaging problem in the one-dimensional case is less than \n\\[\n\\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}, \n\\]\nwhere $\\sigma_{\\infty, \\min}(I)$ is defined by (\\ref{sigmadef}). In two dimensions, the resolution limit is multiplied by $(n +1) (n+2)$ when the $n$ point scatterers are assumed to be in a disk of radius $n \\pi\/\\Omega$. \n\nOur paper is organized as follows.\nSection \\ref{sect2} formulates the minimization problem for recovering point scatterers from multi-illumination data. Sections \\ref{sect3} and \\ref{section:twodcase} present the main results on spatio-temporal super-resolution in the one- and two-dimensional cases, respectively, together with a detailed discussion of their significance. \nSection \\ref{section:approxinvandermonde} introduces the main technique (namely the approximation theory in Vandermonde space) that is used to prove the main results of this paper. \nIn Section \\ref{section:proofofthml0normrecover}, Theorem \\ref{thm:l0normrecovery0}\nis proved. Section \\ref{section:prooftwodl0normrecovery} is devoted to the proof of Theorem \\ref{thm:twodl0normrecovery0}.\nFinally, the appendix provides some lemmas and inequalities\nthat are used in the paper.\n\n\n\\section{Resolution in the one-dimensional case} \\label{sect3}\n\n\\subsection{Problem setting} \\label{sect2}\nLet $\\Omega >0$ be the cut-off frequency. 
For a smooth function $f$ supported in $[-\\Omega, \\Omega]$, let $$||f||_2 = \\Big(\\frac{1}{2\\Omega} \\int_{-\\Omega}^{\\Omega}|f(\\omega)|^2 d\\omega\\Big)^{\\frac{1}{2}} \\quad \\mbox{ and } \\quad ||f||_\\infty =\\max_{\\omega \\in [-\\Omega, \\Omega]}|f(\\omega)|.$$ For $\\Lambda >0$, we define the warped-around distance for $x,y\\in \\mathbb R$ by\n\\begin{equation}\\label{equ:warpedarounddistance}\n\\Big|x-y\\Big|_{\\Lambda} = \\min_{k\\in \\mathbb{Z}} \\babs{x-y-k\\Lambda}. \n\\end{equation} \n\nLet $\\displaystyle \\mu=\\sum_{j=1}^{n}a_{j}\\delta_{y_j}$ be a discrete measure, \nwhere $y_j \\in \\mathbb R,j=1,\\cdots,n$, represent the locations of the point scatterers and $a_j\\in \\mathbb C, j=1,\\cdots,n,$ their strengths. We set\n\\begin{equation} \\label{mmin}\nm_{\\min}=\\min_{j=1,\\cdots,n}|a_j|, \\quad d_{\\min}=\\min_{p\\neq j}| y_p-y_j|.\n\\end{equation}\nWe assume that the point scatterers are illuminated by some illumination pattern $I_t$ for each time step $t \\in \\mathbb{N}, 1\\leq t\\leq T$, where $T$ is the total number of frames. Then $I_t \\mu$ is given by \n\\[\nI_t \\mu = \\sum_{j=1}^n I_t(y_j)a_j\\delta_{y_j},\\ t=1, \\cdots, T.\n\\]\nThe available measurements are the noisy Fourier data of $I_t \\mu$ in a bounded interval. More precisely, they are given by \n\\begin{equation}\\label{equ:multimodelsetting1}\n\\mathbf Y_t(\\omega) = \\mathcal F [I_t \\mu] (\\omega) + \\mathbf W_t(\\omega)= \\sum_{j=1}^{n}I_t(y_j)a_j e^{i y_j \\omega} + \\mathbf W_t(\\omega), \\quad 1\\leq t\\leq T, \\ \\omega \\in [-\\Omega, \\Omega], \n\\end{equation}\nwhere $\\mathcal F[I_t \\mu]$ denotes the Fourier transform of $I_t \\mu$ and $\\vect W_t(\\omega)$ is the noise. We assume that $||\\mathbf W_t||_2<\\sigma$ with $\\sigma$ being the noise level. 
Recall that $\\pi\/\\Omega$ is the Rayleigh resolution limit.\n\n\nThe inverse problem we are concerned with is to recover the sparsest measure that could generate these diffraction-limited images $\\vect Y_t$'s under certain illuminations. In modern imaging techniques, there are three different cases of interest:\n\\begin{itemize}\n\t\\item The illumination patterns are exactly known, such as in SIM and STORM;\n\t\\item The illumination patterns are unknown but can be approximated, such as in BEAM;\n\t\\item The illumination patterns are completely unknown.\n\\end{itemize}\nIn this paper, we consider reconstructing the point scatterers as the sparsest solution under the measurement constraint for all these three cases. More specifically, when the illumination patterns are exactly known, we consider the following $l_0$-minimization problem:\n\\begin{equation}\\label{prob:l0minimization}\n\\min_{\\rho} ||\\rho||_{0} \\quad \\text{subject to} \\quad ||\\mathcal F[I_t \\rho] -Y_t||_2< \\sigma, \\quad 1\\leq t\\leq T,\n\\end{equation}\t\nwhere $||\\rho||_{0}$ is the number of Dirac masses representing the discrete measure $\\rho$. When the illumination patterns are not exactly known but could be approximated, we consider the $l_0$-minimization problem:\n\\begin{equation}\\label{prob:l0minimization1}\n\\min_{\\rho} ||\\rho||_{0} \\quad \\text{subject to} \\quad ||\\mathcal F[\\hat I_t \\rho] -Y_t||_2< \\sigma, \\quad 1\\leq t\\leq T,\n\\end{equation}\t\nwhere $\\hat I_t$ is an approximation of each $I_t$ so that the feasible set contains some discrete measures with $n$ supports. 
When the illumination patterns are completely unknown, we consider the following $l_0$-minimization problem:\n\\begin{equation}\\label{prob:l0minimization2}\n\\min_{\\rho} ||\\rho||_{0} \\quad \\text{subject to the existence of $\\hat I_t$'s such that}\\ ||\\mathcal F[\\hat I_t \\rho] -Y_t||_2< \\sigma, \\quad 1\\leq t\\leq T.\n\\end{equation} \nOur main result in the next section gives an estimate of the resolution of these sparsity recovery problems in the one-dimensional case. \n\n\n\n\\subsection{Main results}\nWe first introduce the illumination matrix as\n\\begin{equation}\\label{equ:illuminationpattern1}\nI = \\begin{pmatrix}\nI_1(y_1)&\\cdots&I_1(y_n)\\\\\n\\vdots&\\vdots&\\vdots\\\\\nI_T(y_1)&\\cdots&I_T(y_n)\\\\\n\\end{pmatrix}.\n\\end{equation}\nThen we define, for an $m\\times k$ matrix $A$, $\\sigma_{\\infty, \\min}(A)$ by\n\\begin{equation} \\label{sigmadef}\n\\sigma_{\\infty, \\min}(A) = \\min_{x\\in \\mathbb C^k, ||x||_{\\infty}\\geq 1} ||Ax||_{\\infty}.\n\\end{equation}\nIt is easy to see that $\\sigma_{\\infty, \\min}(A)$ characterizes the correlation between the columns of $A$. \n\nWe have the following result on the stability of problems (\\ref{prob:l0minimization}), (\\ref{prob:l0minimization1}), and (\\ref{prob:l0minimization2}). Its proof is given in Section \\ref{section:proofofthml0normrecover}.\n\\begin{thm}\\label{thm:l0normrecovery0}\n\tSuppose that $\\mu= \\sum_{j=1}^n a_j \\delta_{y_j}$ and the following separation condition holds:\n\t\\begin{equation}\\label{equ:sepaconditionl0normrecovery}\n\td_{\\min} := \\min_{p\\neq j}\\babs{y_p-y_j}_{\\frac{n\\pi}{\\Omega}}\\geq \\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}},\n\t\\end{equation}\n\twith $\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\leq 1$.\nHere, $m_{\\min}$ is defined in (\\ref{mmin}) and \t$\\frac{\\sigma}{m_{\\min}}$ is the noise-to-signal ratio. 
\n\t Then any solution to (\\ref{prob:l0minimization}), (\\ref{prob:l0minimization1}), or (\\ref{prob:l0minimization2}) contains exactly $n$ point scatterers. Moreover, if $\\rho = \\sum_{j=1}^n \\hat a_j \\delta_{\\hat y_j}$ is the corresponding solution, then after reordering the $\\hat y_j$'s, we have \n\t\\begin{equation}\n\t\\Big|\\hat y_j-y_j\\Big|_{\\frac{n\\pi}{\\Omega}}<\\frac{d_{\\min}}{2},\n\t\\end{equation} \n\tand \n\t\\begin{equation}\n \\Big|\\hat y_j-y_j\\Big|_{\\frac{n\\pi}{\\Omega}}< \\frac{C(n)}{\\Omega}SRF^{n-1}\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}, \\quad 1\\leq j\\leq n,\n\t\\end{equation}\n\twhere $C(n)=2\\sqrt{2 \\pi} ne^{n}$ and $SRF = \\frac{\\pi}{\\Omega d_{\\min}}$ is the super-resolution factor.\n\\end{thm}\n\n\n\\begin{remark}\n In this paper, for simplicity, we assume that the measurements are for all $\\omega \\in [-\\Omega, \\Omega]$. Nevertheless, our results can be easily extended to the discrete sampling case, for example, when the measurements are taken at $M$ evenly spaced points $\\omega_l\\in [-\\Omega, \\Omega]$ with $M\\geq n$. The minimum number of sampling points in each single frame is only $n$, which shows that the sparsity recovery can significantly reduce the number of measurements. \n Moreover, if we consider that the point scatterers (as well as the solution of (\\ref{prob:l0minimization})) are supported in an interval of length of several Rayleigh resolution limits, then the warped-around distance in Theorem \\ref{thm:l0normrecovery0} can be replaced by the Euclidean distance (with only a slight modification of the results). Under this scenario, by utilizing the projection trick introduced in \\cite{liu2021mathematicalhighd}, our results can also be extended to multi-dimensional spaces. 
\n\\end{remark}\n\n\n\\begin{remark}\nFor the case when $n=2$, the minimal separation distance in Theorem \\ref{thm:l0normrecovery0} \n\\[\n\\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{2}},\n\\]\napplies in any $k$-dimensional space. This means that, for multi-illumination imaging in a general $k$-dimensional space, the two-point resolution \\cite{shahram2004imaging, shahram2005resolvability, shahram2004statistical, chen2020algorithmic,den1997resolution} of sparsity recoveries like (\\ref{prob:l0minimization}), (\\ref{prob:l0minimization1}), or (\\ref{prob:l0minimization2}) is less than $\\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{2}}$. \n\\end{remark}\n\n\n\\begin{remark}\nNote that the stability result in Theorem \\ref{thm:l0normrecovery0} holds for any algorithm that can recover the sparsest solution (the solution with $n$ point scatterers). Thus it also helps to understand the performance of other sparsity-promoting algorithms, such as the $l_1$-minimization that is frequently used in sparsity-based super-resolution. Also, our results can be generalized to the multi-clump case, where the resolution is related to the sparsity of the point scatterers in each clump rather than to the total number of point scatterers. This explains why super-resolution imaging can be achieved even in the case where there are tens or hundreds of point scatterers. \n\\end{remark}\n\n\n\\begin{remark}\nNote also that our results can be extended to other kinds of imaging systems with different point spread functions. For example, let the point spread function be $f$. 
In the presence of an additive noise $w(t)$, \nthe measurement in the time-domain is \n$$\nf \\circledast \\mu (t) + w(t) = \\sum_{j=1}^n a_j f (t-y_j) + w(t).\n$$\nBy taking the Fourier transform, we obtain\n$$\n\\mathcal F [f \\circledast \\mu + w] (\\omega) = \\mathcal F f(\\omega) \\mathcal F \\mu (\\omega) + \\mathcal F w (\\omega) = \\mathcal F f(\\omega) \\big(\\sum_{j=1}^n a_j e^{i y_j \\omega} \\big) + \\mathcal F w (\\omega) .\n$$ \nSuppose that $| \\mathcal F f(\\omega)| >0$ at the sampling points. Then our results can be easily extended to the case when the point spread function is $f$. \n\\end{remark}\n\n\nTheorem \\ref{thm:l0normrecovery0} demonstrates that when the point scatterers are separated by the distance $d_{\\min}$ in (\\ref{equ:sepaconditionl0normrecovery}), we can stably recover the scatterer locations. Under the minimal separation condition, each of the recovered locations is in a neighborhood of the ground truth, and their deviation from the ground truth is also estimated. Thus the resolution of our sparsity-promoting algorithms for the multi-illumination data is less than\n\\[\n\\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}.\n\\]\nBased on this formula for the resolution, we demonstrate that the incoherence (encoded in $\\sigma_{\\infty, \\min}(I)$) between the illumination patterns (or the columns of the illumination matrix (\\ref{equ:illuminationpattern1})) is crucial to sparsity-based spatio-temporal super-resolution. More precisely, applying any sparsity-promoting algorithm to images from illumination patterns with a high degree of incoherence achieves the desired super-resolution, even when only a small number of frames is provided, which yields a high spatio-temporal resolution. This is the most important contribution of our paper. 
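The resolution formula above can be evaluated numerically. The following Python sketch approximates $\sigma_{\infty,\min}(I)$ and plugs it into the bound; the brute-force search over real vectors, the grid size, and the example matrix are our own illustrative assumptions (restricting to real $x$ only yields an upper bound on the minimum over complex $x$, which happens to be tight for the symmetric example used here).

```python
import numpy as np
from itertools import product

def sigma_inf_min(I, n_grid=201):
    """Approximate sigma_{inf,min}(I) = min_{||x||_inf >= 1} ||I x||_inf.

    By homogeneity the minimum is attained on the boundary ||x||_inf = 1,
    so one coordinate is fixed to 1 and the others are grid-searched.
    Restricting to real x is a simplifying assumption."""
    T, n = I.shape
    grid = np.linspace(-1.0, 1.0, n_grid)
    best = np.inf
    for i in range(n):                      # which coordinate is fixed to 1
        for free in product(grid, repeat=n - 1):
            x = np.insert(np.array(free), i, 1.0)
            best = min(best, np.max(np.abs(I @ x)))
    return best

def resolution_bound(I, sigma, m_min, Omega):
    """Right-hand side of the separation condition:
    (2.2 e pi / Omega) * ((1/sigma_{inf,min}(I)) * sigma/m_min)^(1/n)."""
    n = I.shape[1]
    ratio = sigma / (m_min * sigma_inf_min(I))
    return 2.2 * np.e * np.pi / Omega * ratio ** (1.0 / n)

# hypothetical two-pattern, two-scatterer illumination matrix
I = np.array([[1.0, 0.7],
              [0.7, 1.0]])
d_min = resolution_bound(I, sigma=1e-3, m_min=1.0, Omega=np.pi)
# d_min is roughly 0.345 times the Rayleigh limit pi/Omega
```

For this matrix the search recovers $\sigma_{\infty,\min}(I)=0.3$ (attained at $x=(1,-1)^\top$), so the bound is well below the Rayleigh limit even with only two frames.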
\n\n\n We remark that our result can even be used to explicitly estimate the resolution of multi-illumination imaging whenever the incoherence of the illumination patterns and the signal-to-noise ratio are known or can be estimated. The following simple example explicitly calculates the resolution limit of our sparsity recovery problem using the estimate (\\ref{equ:sepaconditionl0normrecovery}). We leave further detailed discussions of Theorem \\ref{thm:l0normrecovery0} to the following three subsections.\n\n\\begin{example}\nWe consider two point scatterers that are illuminated by two illumination patterns. Suppose for instance that the illumination matrix is given by\n\\[\nI = \\begin{pmatrix}\n1&0.7\\\\\n0.7&1\n\\end{pmatrix}.\n\\] \nSuppose also that the noise level is $\\sigma = 10^{-3}$ and the noise-to-signal ratio is $\\frac{\\sigma}{m_{\\min}}=10^{-3}$. By Lemma \\ref{lem:sigmainftyminestimate1}, $\\sigma_{\\infty, \\min}(I)$, defined in (\\ref{sigmadef}), is equal to $0.3$. Hence, by Theorem \\ref{thm:l0normrecovery0}, the resolution limit $d_{\\min}$ in solving problem (\\ref{prob:l0minimization}), (\\ref{prob:l0minimization1}), or (\\ref{prob:l0minimization2}) is smaller than \n\\[\n\\frac{2.2e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}} \\approx 0.34 \\frac{\\pi}{\\Omega},\n\\]\nwhere $\\frac{\\pi}{\\Omega}$, as mentioned before, is the classical Rayleigh resolution limit. 
This shows that even with only two illumination patterns with a mild degree of incoherence, there is roughly a threefold resolution improvement.\n\n\\end{example}\n\n\n\n\\subsection{Discussion of $\\sigma_{\\infty, \\min}(I)$ and the effect of multiple illumination}\n\n\n\n\\subsubsection{Adding the same illumination pattern will not enhance the resolution}\nLet \n\\[\nI = \\begin{pmatrix}\nI_1(y_1)&\\cdots&I_1(y_n)\\\\\n\\vdots&\\vdots&\\vdots\\\\\nI_{T-1}(y_1)&\\cdots&I_{T-1}(y_n)\\\\\nI_T(y_1)&\\cdots&I_T(y_n)\n\\end{pmatrix}, \\quad \\hat I = \\begin{pmatrix}\nI_1(y_1)&\\cdots&I_1(y_n)\\\\\n\\vdots&\\vdots&\\vdots\\\\\nI_T(y_1)&\\cdots&I_T(y_n)\\\\\nI_{T+1}(y_1)& \\cdots & I_{T+1}(y_n)\n\\end{pmatrix}\n\\]\nwith $I_{T+1} = I_{T}$. \nBy the definition of $\\sigma_{\\infty, \\min}$, it is clear that $\\sigma_{\\infty, \\min}(\\hat I) =\\sigma_{\\infty, \\min}(I)$. Thus, adding the same illumination pattern cannot improve the resolution in Theorem \\ref{thm:l0normrecovery0}. This is consistent with our observation that multiple illuminations with different patterns are key to spatio-temporal super-resolution. \n\n\\subsubsection{The incoherence between the illumination patterns is crucial}\nThe value of $\\sigma_{\\infty, \\min}(I)$ is related to the correlation between the columns of the illumination matrix $I$. In particular, we have the following rough estimate of $\\sigma_{\\infty, \\min}(I)$:\n\\begin{equation}\\label{equ:sigmainftyminestimate1}\n\\sigma_{\\infty, \\min}(I)\\geq \\frac{\\sigma_{\\min}(I)}{\\sqrt{T}},\n\\end{equation}\nwhere $\\sigma_{\\min}(I)$ is the minimum singular value of $I$. This clearly illustrates that the correlation between the columns of $I$ is crucial to $\\sigma_{\\infty, \\min}(I)$. The correlation between the columns of $I$ is related to the incoherence of the illumination patterns. 
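Both observations above are easy to check numerically. The following Python sketch verifies the estimate (\ref{equ:sigmainftyminestimate1}) and the invariance of $\sigma_{\infty,\min}$ under a duplicated pattern; the particular illumination matrix and the real-valued grid search are our own illustrative assumptions (the search returns an upper bound on the true minimum over complex vectors, which is enough for both checks).

```python
import numpy as np
from itertools import product

def sigma_inf_min(I, n_grid=101):
    # brute-force approximation of min_{||x||_inf = 1} ||I x||_inf over real x;
    # by homogeneity the minimum is attained on the boundary ||x||_inf = 1
    T, n = I.shape
    grid = np.linspace(-1.0, 1.0, n_grid)
    best = np.inf
    for i in range(n):                      # coordinate fixed to 1
        for free in product(grid, repeat=n - 1):
            x = np.insert(np.array(free), i, 1.0)
            best = min(best, np.max(np.abs(I @ x)))
    return best

# hypothetical 3-pattern, 2-scatterer illumination matrix
I = np.array([[1.0, 0.6],
              [0.3, 1.0],
              [0.8, 0.2]])
T = I.shape[0]

# estimate: sigma_{inf,min}(I) >= sigma_min(I) / sqrt(T)
lower = np.linalg.svd(I, compute_uv=False)[-1] / np.sqrt(T)
assert sigma_inf_min(I) >= lower

# duplicating an existing pattern leaves sigma_{inf,min} unchanged
I_dup = np.vstack([I, I[-1]])
assert np.isclose(sigma_inf_min(I_dup), sigma_inf_min(I))
```

The first assertion holds because $||Ix||_\infty \geq ||Ix||_2/\sqrt{T} \geq \sigma_{\min}(I)\,||x||_\infty/\sqrt{T}$; the second because the row-wise maximum is unchanged when a row is repeated.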
Thus, we should employ illumination patterns with a high degree of incoherence in order to increase $\\sigma_{\\infty, \\min}(I)$ and, consequently, obtain a significant resolution enhancement. \n\n\n\\subsection{Comparison with the single illumination case}\nIn this subsection, we compare the resolution in the single illumination case (i.e., the single measurement case) with that in the multiple illumination case, thereby illustrating the effect of multiple illuminations in enhancing the resolution. \n\nIn \\cite{liu2021theorylse}, the authors estimate the so-called computational resolution limit for the line spectral estimation problem in the single measurement case. The line spectral estimation problem is to estimate the locations of some line spectra from the Fourier data (in a bounded domain) of a linear combination of them. Thus the line spectral estimation problem is equivalent to the super-resolution problem considered here. The results in \\cite{liu2021theorylse} show that, for the single measurement case, when the point scatterers are separated by\n\\[\n\\tau = \\frac{c_0}{\\Omega} (\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}},\n\\]\nfor some positive constant $c_0$, there exists a discrete measure $\\mu = \\sum_{j=1}^n a_j \\delta_{y_j}$ with $n$ point scatterers located at $\\{-\\tau, -2\\tau, \\cdots, -n\\tau\\}$ and another discrete measure $\\hat \\mu = \\sum_{j=1}^n \\hat a_j \\delta_{\\hat y_j}$ with $n$ point scatterers located at $\\{0,\\tau,\\cdots, (n-1)\\tau\\}$ such that\n\\[\n||\\mathcal F[\\hat \\mu]-\\mathcal F[\\mu]||_{\\infty}< \\sigma,\n\\]\nand either $\\min_{1\\leq j\\leq n}|a_j|= m_{\\min}$ or $\\min_{1\\leq j\\leq n}|\\hat a_j|= m_{\\min}$. \n\nBy the definitions of $||\\cdot||_{\\infty}$ and $||\\cdot||_{2}$, we also have \n\\[\n||\\mathcal F[\\hat \\mu]-\\mathcal F[\\mu]||_{2}< \\sigma. 
\n\\]\nThis result demonstrates that when the point scatterers are separated by $\\frac{c_0}{\\Omega} (\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}}$, the solution of the $l_0$-minimization problem in the single measurement case,\n\\begin{equation}\\label{prob:singlel0minimization}\n\\min_{\\rho} ||\\rho||_{0} \\quad \\text{subject to} \\quad ||\\mathcal F[\\rho] -Y||_2< \\sigma, \n\\end{equation}\t\nis not stable. In particular, the point scatterers recovered by (\\ref{prob:singlel0minimization}) may be located in an interval completely disjoint from that of the ground truth. \n\nTherefore, for the single measurement case, when the scatterers are separated by $O(\\frac{(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}}}{\\Omega})$, the \n$l_0$-minimization may be unstable. However, for the multiple illumination case, when the point scatterers are separated by $O(\\frac{(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{n}}}{\\Omega})$, the $l_0$-minimization (\\ref{prob:l0minimization}) is still stable. \n\nSuppose that we have illumination patterns such that $\\frac{1}{\\sigma_{\\infty, \\min}(I)}$ is of constant order. Then the resolution is of order $O(\\frac{(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{n}}}{\\Omega})$. Compared with the resolution in the single measurement case, which is of order $O(\\frac{(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{2n-1}}}{\\Omega})$, this clearly shows a significant enhancement and illustrates the effect of multiple illuminations in improving the resolution. \n\n\\subsection{Lower bound for the resolution of multi-illumination imaging}\nBy Theorem \\ref{thm:l0normrecovery0}, when we have illumination patterns with a high degree of incoherence so that $\\sigma_{\\infty, \\min}(I)$ is of order one, the resolution of the sparsity recovery is expected to be less than $\\frac{c_0}{\\Omega}(\\frac{\\sigma}{m_{\\min}})^{\\frac{1}{n}}$ for some positive constant $c_0$. 
We next demonstrate that this resolution order is the best we can obtain if the illumination patterns are unknown. More precisely, we have the following proposition whose proof is given in Appendix \\ref{section:proofofsupportlowerbound}. \n\n\n\\begin{prop}\\label{prop:multisupportlowerboundthm1}\n\tGiven $n \\geq 2$, $\\sigma, m_{\\min}$ with $\\frac{\\sigma}{m_{\\min}}\\leq 1$, and unknown illumination patterns $I_t$ with $|I_t(y)|\\leq 1, y\\in \\mathbb R, 1\\leq t\\leq T$, let $\\tau$ be given by\n\t\\begin{equation}\\label{equ:multisupportlowerboundsepadis2}\n\t\\tau = \\frac{0.043}{\\Omega}\\Big(\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}.\n\t\\end{equation}\n Then there exist $\\mu=\\sum_{j=1}^{n}a_j\\delta_{y_j}$ with $n$ supports at $\\big\\{-\\tau, -2\\tau,\\ldots, -n\\tau \\big\\}$ and $|a_j| = m_{\\min}, 1\\leq j\\leq n$, and $\\rho=\\sum_{j=1}^{n}\\hat a_j \\delta_{\\hat y_j}$ with $n$ supports at $\\big\\{0, \\tau,\\cdots, (n-1)\\tau\\big\\}$, such that\n\t\\[\n\t\\text{there exist $\\hat I_t$'s so that } \\ ||\\mathcal F [\\hat I_t \\rho]-\\mathcal F[I_t \\mu]||_{2}< \\sigma, \\ t=1,\\cdots, T.\n\t\\]\n\\end{prop} \n\n\\section{Resolution in the two-dimensional case}\\label{section:twodcase}\n\\subsection{Problem setting}\nLet $\\Omega >0$ be the cut-off frequency. For a smooth function $f:\\mathbb R^2\\rightarrow \\mathbb R$ supported on $||\\vect \\omega ||_2 \\leq \\Omega $, let \n\\[\n ||f||_\\infty =\\max_{||\\vect \\omega||_2 \\leq \\Omega }|f(\\vect \\omega)|.\n\\] \nLet $\\displaystyle \\mu=\\sum_{j=1}^{n}a_{j}\\delta_{\\vect y_j}$ be a discrete measure, \nwhere $\\vect y_j \\in \\mathbb R^2,j=1,\\cdots,n$, represent the locations of the point scatterers and $a_j\\in \\mathbb C, j=1,\\cdots,n,$ their strengths. 
We set\n\\begin{equation} \\label{twodmmin}\nm_{\\min}=\\min_{j=1,\\cdots,n}|a_j|, \\quad d_{\\min}=\\min_{p\\neq j}||\\vect y_p - \\vect y_j||_2 .\n\\end{equation}\nAgain, we assume that the point scatterers are illuminated by some illumination pattern $I_t$ for each time step $t \\in \\mathbb{N}, 1\\leq t\\leq T$, where $T$ is the total number of illumination patterns. Then $I_t \\mu$ is \n\\[\nI_t \\mu = \\sum_{j=1}^n I_t(\\vect y_j)a_j\\delta_{\\vect y_j},\\quad t=1, \\cdots, T.\n\\]\nIn the time-domain, the measurements are\n\\[\nA_t\\mu:= h \\circledast (I_t \\mu) , \\quad t= 1, 2, \\cdots, T, \n\\]\nwhere $h$ is a blurring kernel in $\\mathbb R^2$. Thus, in the Fourier-domain, the available measurements are given by \n\\begin{equation}\\label{equ:twodmultimodelsetting1}\n\\mathbf Y_t(\\vect \\omega) = \\mathcal F [I_t \\mu] (\\vect \\omega) + \\mathbf W_t(\\vect \\omega)= \\sum_{j=1}^{n}I_t(\\vect y_j)a_j e^{i \\vect y_j \\cdot \\vect \\omega} + \\mathbf W_t(\\vect \\omega), \\ 1\\leq t\\leq T, ||\\vect \\omega ||_2 \\leq \\Omega,\n\\end{equation}\nwhere $\\mathcal F[I_t \\mu]$ denotes the Fourier transform of $I_t \\mu$ and $\\vect W_t(\\vect \\omega)$ is the noise. We assume that $||\\mathbf W_t||_{\\infty}<\\sigma$ with $\\sigma$ being the noise level. \n\nWe consider reconstructing the point scatterers as the sparsest solution (solution to the $l_0$-minimization problem) under the measurement constraints for the three cases of illumination patterns that are discussed in Section \\ref{sect2}. With a slight abuse of notation, we also denote by $\\mathcal F[\\rho]$ the function $\\mathcal F[\\rho](\\vect \\omega), ||\\vect \\omega||_2\\leq \\Omega$. In this section, we suppose that the point scatterers are located in a disk $\\mathcal O$ with radius of several Rayleigh resolution limits. Then we consider the following optimization problems. 
When the illumination patterns are exactly known, we consider the following $l_0$-minimization problem:\n\\begin{equation}\\label{prob:twodl0minimization}\n\\min_{\\rho \\ \\text{supported in $\\mathcal O$}} ||\\rho||_{0} \\quad \\text{subject to} \\ ||\\mathcal F[I_t \\rho] -\\vect Y_t||_{\\infty}< \\sigma, \\quad 1\\leq t\\leq T,\n\\end{equation}\t\nwhere $||\\rho||_{0}$ is the number of Dirac masses representing the discrete measure $\\rho$. When the illumination patterns are not exactly known but can be approximated, we consider the $l_0$-minimization problem\n\\begin{equation}\\label{prob:twodl0minimization1}\n\\min_{\\rho \\ \\text{supported in $\\mathcal O$}} ||\\rho||_{0} \\quad \\text{subject to} \\ ||\\mathcal F[\\hat I_t \\rho] - \\vect Y_t||_{\\infty}< \\sigma, \\quad 1\\leq t\\leq T,\n\\end{equation}\t\nwhere $\\hat I_t$ is an approximation of each $I_t$ so that the feasible set contains some measures with $n$ supports. When the illumination patterns are completely unknown, we consider the following $l_0$-minimization problem: \n\\begin{equation}\\label{prob:twodl0minimization2}\n\\min_{\\rho \\ \\text{supported in $\\mathcal O$}} ||\\rho||_{0} \\quad\n\\text{subject to the existence of $\\hat I_t$'s such that}\\ ||\\mathcal F[\\hat I_t \\rho] - \\vect Y_t||_{\\infty}< \\sigma,\\ 1\\leq t\\leq T.\n\\end{equation} \nOur main result in the following subsection gives an estimate of the resolution of these two-dimensional sparsity recovery problems. 
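To make the two-dimensional measurement model concrete, here is a small Python sketch that synthesizes the data (\ref{equ:twodmultimodelsetting1}); the particular scatterer configuration, the random patterns, and the frequency grid are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical configuration: n scatterers in a small disk, T random patterns
n, T, Omega = 3, 4, 50.0
y = rng.uniform(-np.pi / Omega, np.pi / Omega, size=(n, 2))  # locations y_j
a = rng.uniform(0.5, 1.0, size=n)                            # strengths a_j
I = rng.uniform(0.2, 1.0, size=(T, n))          # pattern values I_t(y_j)

def measurement(omega, t, noise=0.0):
    """Fourier-domain measurement Y_t(omega) following
    Y_t(w) = sum_j I_t(y_j) a_j exp(i y_j . w) + W_t(w)."""
    return np.sum(I[t] * a * np.exp(1j * (y @ omega))) + noise

# sample on a Cartesian grid restricted to the disk ||omega||_2 <= Omega
grid = np.linspace(-Omega, Omega, 9)
omegas = [np.array([w1, w2]) for w1 in grid for w2 in grid
          if np.hypot(w1, w2) <= Omega]
Y = np.array([[measurement(w, t) for w in omegas] for t in range(T)])
# at omega = 0 the measurement reduces to sum_j I_t(y_j) a_j
```

Each row of `Y` is one frame; the sparsity recovery problems above then search for the sparsest measure supported in the disk that reproduces all frames within the noise level.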
\n\n\\subsection{Main results for the stability of sparsity recoveries in two dimensions}\nThe illumination matrix in the two-dimensional case is\n\\begin{equation}\\label{equ:twodilluminationpattern1}\nI = \\begin{pmatrix}\nI_1(\\vect y_1)&\\cdots&I_1(\\vect y_n)\\\\\n\\vdots&\\vdots&\\vdots\\\\\nI_T(\\vect y_1)&\\cdots&I_T(\\vect y_n)\\\\\n\\end{pmatrix}.\n\\end{equation}\n We have the following theorem on the stability of problems (\\ref{prob:twodl0minimization}), (\\ref{prob:twodl0minimization1}), and (\\ref{prob:twodl0minimization2}). Its proof is given in Section \\ref{section:prooftwodl0normrecovery}.\n \n \n\\begin{thm}\\label{thm:twodl0normrecovery0}\nLet $n\\geq 2$ and let the disk $\\mathcal O$ be of radius $\\frac{c_0n\\pi}{\\Omega}$ with $c_0\\geq 1$. Let the $\\vect Y_t$'s be the measurements generated by an $n$-sparse measure $\\mu=\\sum_{j=1}^{n}a_j \\delta_{\\vect y_j}, \\vect y_j \\in \\mathcal O,$ in the two-dimensional space. Assume that\n\\begin{equation}\\label{equ:highdsupportlimithm0equ0}\nd_{\\min}:=\\min_{p\\neq j}\\Big|\\Big|\\mathbf y_p-\\mathbf y_j\\Big|\\Big|_2\\geq \\frac{2.2c_0e\\pi(n+2)(n+1)}{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}. \n\\end{equation}\nHere, $I$ is the matrix in (\\ref{equ:twodilluminationpattern1}), $m_{\\min}$ is defined in (\\ref{twodmmin}), and $\\frac{\\sigma}{m_{\\min}}$ is the noise-to-signal ratio. \nThen any solution to (\\ref{prob:twodl0minimization}), (\\ref{prob:twodl0minimization1}), or (\\ref{prob:twodl0minimization2}) contains exactly $n$ point scatterers. 
Moreover, for $\\rho = \\sum_{j=1}^n \\hat a_j \\delta_{\\hat {\\mathbf y}_j}$ being the corresponding solution, after reordering the $\\hat {\\mathbf y}_j$'s, we have \n\\begin{equation}\n\\btwonorm{\\hat {\\mathbf y}_j- \\vect y_j}<\\frac{d_{\\min}}{2},\n\\end{equation} \nand \n\\begin{equation}\n\\btwonorm{\\hat {\\mathbf y}_j-\\vect y_j} < \\frac{C(n)}{\\Omega}SRF^{n-1}\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}, \\quad 1\\leq j\\leq n,\n\\end{equation}\nwhere $C(n)=(n+1)^n(n+2)^n\\sqrt{2\\pi}nc_0^{n-1}e^{n}$ and $SRF = \\frac{\\pi}{\\Omega d_{\\min}}$ is the super-resolution factor.\n\\end{thm}\n\n\nTheorem \\ref{thm:twodl0normrecovery0} is the two-dimensional analogue of Theorem \\ref{thm:l0normrecovery0}. It reveals the dependence of the resolution of two-dimensional sparsity recoveries on the cut-off frequency of the imaging system, the signal-to-noise ratio, the sparsity of the point scatterers, and the incoherence of the illumination patterns. It highlights the importance of multiple illumination patterns with a high degree of incoherence in achieving two-dimensional spatio-temporal super-resolution. \n\n\\section{Non-linear approximation theory in Vandermonde space}\\label{section:approxinvandermonde}\nIn this section, we present the main technique that is used in the proofs of the main results of the paper, namely the approximation theory in Vandermonde space. This theory was first introduced in \\cite{liu2021mathematicaloned, liu2021theorylse}.\n Instead of considering the non-linear approximation problem there, we consider a different approximation problem, which is relevant to the stability analysis of (\\ref{prob:l0minimization}). 
More specifically, for $s \\in \\mathbb{N}, s \\geq 1,$ and $z\\in \\mathbb C$, we define the complex Vandermonde-vector\n\\begin{equation}\\label{equ:multiphiformula}\n\\phi_s(z)=(1,z,\\cdots,z^s)^\\top.\n\\end{equation}\n\nThroughout this paper, for a complex matrix $A$, we denote by $A^\\top$ its transpose and by $A^*$ its conjugate transpose. \n\nWe consider the following non-linear problem: \n\\begin{equation}\\label{equ:multinon-linearapproxproblem1}\n\\min_{\\hat \\theta_j \\in \\mathbb R, j=1,\\cdots,k}\\max_{t=1, \\cdots, T} \\min_{\\hat a_{j,t}\\in \\mathbb{C}, j=1,\\cdots,k}\\Big|\\Big|\\sum_{j=1}^k \\hat a_{j,t}\\phi_s(e^{i\\hat \\theta_j})-v_t\\Big|\\Big|_2,\n\\end{equation}\nwhere $v_t=\\sum_{j=1}^{k+1}a_{j,t}\\phi_s(e^{i\\theta_j})$ is given with the $\\theta_j$'s being real numbers. \nWe shall derive a lower bound for the optimal value of the minimization problem for the case when $s=k$. The main results are presented in Section \\ref{section:mainresultsapproxinvandermonde}. \n\n\\subsection{Notation and Preliminaries}\nWe first introduce some notation and preliminaries. We denote, for $k \\in \\mathbb{N}, k\\geq 1$, \n\\begin{equation}\\label{equ:multizetaxiformula1} \n\\zeta(k)= \\left\\{\n\\begin{array}{cc}\n\\big(\\big(\\frac{k-1}{2}\\big)!\\big)^2,& \\text{$k$ is odd,}\\\\\n(\\frac{k}{2})!(\\frac{k-2}{2})!,& \\text{$k$ is even,}\n\\end{array} \n\\right. 
\\ \\xi(k)=\\left\\{\n\\begin{array}{cc}\n1/2, & k=1,\\\\\n\\frac{(\\frac{k-1}{2})!(\\frac{k-3}{2})!}{4},& \\text{$k$ is odd,\\,\\,$ k\\geq 3$,}\\\\\n\\frac{\\big(\\big(\\frac{k-2}{2}\\big)!\\big)^2}{4},& \\text{$k$ is even}.\n\\end{array} \n\\right.\t\n\\end{equation}\nWe also define, for $p, q \\in \\mathbb{N}, p,q \\geq 1$, and $z_1, \\cdots, z_p, \\hat z_1, \\cdots, \\hat z_q \\in \\mathbb C$, the following vector in $\\mathbb{R}^p$:\n\\begin{equation}\\label{equ:multieta}\n\\eta_{p,q}(z_1,\\cdots,z_{p}, \\hat z_1,\\cdots,\\hat z_q)=\\left(\\begin{array}{c}\n|z_1-\\hat z_1|\\cdots|z_1-\\hat z_q|\\\\\n|z_2-\\hat z_1|\\cdots|z_2-\\hat z_q|\\\\\n\\vdots\\\\\n|z_{p}-\\hat z_1|\\cdots|z_{p}-\\hat z_q|\n\\end{array}\\right).\n\\end{equation}\n\nWe present two auxiliary lemmas that are helpful for deriving our main results. These lemmas are slightly different from the ones in \\cite[Section III]{liu2021theorylse}. Thus, we employ different techniques for proving them. Their proofs are presented in Appendix \\ref{section:proofofproductloweraandstable}. \n\n\\begin{lem}\\label{lem:multimultiproductlowerbound1}\n\tFor $\\theta_j \\in \\mathbb R, j=1, \\cdots, k+1$, assume that $\\min_{p\\neq j}|\\theta_p-\\theta_j|_{2\\pi}=\\theta_{\\min}$. Then, for any $\\hat \\theta_1,\\cdots, \\hat \\theta_k\\in \\mathbb R$, we have the following estimate: \t\n\t\\[\n\t||\\eta_{k+1,k}(e^{i\\theta_1},\\cdots,e^{i\\theta_{k+1}},e^{i \\hat \\theta_1},\\cdots,e^{i\\hat \\theta_k})||_{\\infty}\\geq \\xi(k)(\\frac{2 \\theta_{\\min}}{\\pi})^k. \n\t\\] \n\\end{lem}\n\n\\medskip\n\\begin{lem}\\label{lem:multistablemultiproduct0}\nLet $\\epsilon >0$. 
For $\\theta_j, \\hat \\theta_j \\in \\mathbb R, j=1, \\cdots, k$, assume that\n\t\\begin{equation}\\label{equ:stablemultiproductlemma1equ1}\n\t||\\eta_{k,k}(e^{i\\theta_1},\\cdots,e^{i\\theta_k}, e^{i\\hat \\theta_1},\\cdots, e^{i\\hat \\theta_k})||_{\\infty}< (\\frac{2}{\\pi})^{k}\\epsilon,\n\t\\end{equation}\n\twhere $\\eta_{k,k}$ is defined as in (\\ref{equ:multieta}), and that\n\t\\begin{equation}\\label{equ:stablemultiproductlemma1equ2}\n\t\\theta_{\\min} =\\min_{q\\neq j}|\\theta_q-\\theta_j|_{2\\pi}\\geq \\Big(\\frac{4\\epsilon}{\\lambda(k)}\\Big)^{\\frac{1}{k}},\n\t\\end{equation} \n\twhere \n\t\\begin{equation}\\label{equ:lambda1}\n\t\\lambda(k)=\\left\\{\n\t\\begin{array}{ll}\n\t1, & k=2,\\\\\n\t\\xi(k-2),& k\\geq 3.\n\t\\end{array} \n\t\\right.\t\n\t\\end{equation}\n\tThen, after reordering the $\\hat \\theta_j$'s, we have\n\t\\begin{equation}\\label{equ:stablemultiproductlemma1equ4}\n\t|\\hat \\theta_j -\\theta_j|_{2\\pi}< \\frac{\\theta_{\\min}}{2}, \\quad j=1,\\cdots,k,\n\t\\end{equation}\n\tand moreover,\n\t\\begin{equation}\\label{equ:stablemultiproductlemma1equ5}\n\t|\\hat \\theta_j -\\theta_j|_{2\\pi}< \\frac{2^{k-1}\\epsilon}{(k-2)!(\\theta_{\\min})^{k-1}}, \\quad j=1,\\cdots, k.\n\t\\end{equation}\n\\end{lem}\n\n\n\n\n\\subsection{Main results on the approximation theory in Vandermonde space}\\label{section:mainresultsapproxinvandermonde}\nBefore presenting a lower bound for problem (\\ref{equ:multinon-linearapproxproblem1}), we introduce a basic approximation result in Vandermonde space. This result was first derived in \\cite{liu2021theorylse}. \n\n\\begin{thm}\\label{thm:multispaceapprolowerbound0}\n\tLet $k\\geq 1$. For fixed $\\hat \\theta_1,\\cdots, \\hat \\theta_k\\in \\mathbb{R}$, denote $\\hat A= \\big(\\phi_{k}(e^{i\\hat \\theta_1}),\\cdots, \\phi_{k}(e^{i\\hat \\theta_k})\\big)$, where the $\\phi_{k}(e^{i\\hat \\theta_j})$'s are defined as in (\\ref{equ:multiphiformula}). 
Let $V$ be the $k$-dimensional complex space spanned by the column vectors of $\\hat A$ and let $V^\\perp$ be the one-dimensional orthogonal complement of $V$ in $\\mathbb{C}^{k+1}$. Denote by $P_{V^{\\perp}}$ the orthogonal projection onto $V^{\\perp}$ in $\\mathbb{C}^{k+1}$. Then, we have\n\t\\begin{equation*}\n\t\\min_{\\hat a\\in \\mathbb C^{k}}||\\hat A\\hat a-\\phi_{k}(e^{i\\theta})||_2=||P_{V^{\\perp}}\\big(\\phi_{k}(e^{i\\theta})\\big)||_2 = |v^*\\phi_{k}(e^{i\\theta}) |\\geq \\frac{1}{2^k}|\\Pi_{j=1}^k(e^{i\\theta}-e^{i\\hat \\theta_j})|,\n\t\\end{equation*}\t\n\twhere $v$ is a unit vector in $V^\\perp$ and $v^*$ is its conjugate transpose. \t\n\\end{thm}\n\n\\medskip\nWe then have the following result for non-linear approximation (\\ref{equ:multinon-linearapproxproblem1}) in Vandermonde space.\n\\begin{thm}\\label{thm:multispaceapprolowerbound1}\n\tLet $k\\geq 1$ and $\\theta_j\\in\\mathbb R, 1\\leq j\\leq k+1,$ be $k+1$ distinct points with $\\theta_{\\min}=\\min_{p\\neq j}|\\theta_p-\\theta_j|_{2\\pi}>0$. For $q\\leq k$, let $\\hat \\alpha_t(q)=(\\hat a_{1,t},\\cdots, \\hat a_{q,t})^\\top$, $\\alpha_t=(a_{1,t},\\cdots, a_{k+1, t})^\\top$ and\n\t\\[\n\t\\hat A(q)= \\big(\\phi_{k}(e^{i\\hat \\theta_1}),\\cdots, \\phi_{k}(e^{i\\hat \\theta_q})\\big), \\quad A= \\big(\\phi_{k}(e^{i\\theta_1}),\\cdots, \\phi_{k}(e^{i\\theta_{k+1}})\\big),\n\t\\]\n\twhere $\\phi_{k}(z)$ is defined as in (\\ref{equ:multiphiformula}). 
Then, for any $\\hat \\theta_1, \\cdots, \\hat \\theta_q\\in \\mathbb{R}$,\n\t\\begin{equation*}\n\t\\max_{t=1,\\cdots,T}\\min_{\\hat \\alpha_t(q)\\in \\mathbb C^q}||\\hat A(q)\\hat \\alpha_t(q)-A \\alpha_t||_2\\geq \\frac{\\sigma_{\\infty, \\min}(B)\\xi(k)(\\theta_{\\min})^{k}}{\\pi^{k}},\n\t\\end{equation*}\n\twhere \n\t\\begin{equation}\\label{equ:multispaceapprolowerbound1equ1}\n\tB=\\left(\\begin{array}{cccc}\n\ta_{1,1}&a_{2,1}&\\cdots&a_{k+1,1}\\\\\n\t\\vdots&\\vdots&\\vdots&\\vdots\\\\\n\ta_{1,T}&a_{2,T}&\\cdots&a_{k+1, T}\n\t\\end{array}\\right).\n\t\\end{equation}\n\\end{thm}\n\\begin{proof}\n\\textbf{Step 1.} \nNote that, for any $\\hat \\theta_1, \\cdots, \\hat \\theta_q, \\cdots, \\hat \\theta_k\\in \\mathbb R$, if $q<k$, then\n\\[\n\\min_{\\hat \\alpha_t(k)\\in \\mathbb C^{k}}||\\hat A(k)\\hat \\alpha_t(k)-A \\alpha_t||_2\\leq \\min_{\\hat \\alpha_t(q)\\in \\mathbb C^{q}}||\\hat A(q)\\hat \\alpha_t(q)-A \\alpha_t||_2,\n\\]\nsince we can always set $\\hat a_{q+1,t}=\\cdots=\\hat a_{k,t}=0$. Hence it suffices to prove the theorem for $q=k$.\n\n\\textbf{Step 2.} Let $V$ be the $k$-dimensional complex space spanned by the column vectors of $\\hat A(k)$ and let $V^\\perp$ be the one-dimensional orthogonal complement of $V$ in $\\mathbb C^{k+1}$. Let $v$ be a unit vector in $V^{\\perp}$ and denote by $P_{V^{\\perp}}$ the orthogonal projection onto $V^{\\perp}$. We have\n\\[\n\\min_{\\hat \\alpha_t(k)\\in \\mathbb C^k}||\\hat A(k)\\hat \\alpha_t(k)-A\\alpha_t||_2 = ||P_{V^{\\perp}}(A\\alpha_{t})||_2= |v^*A\\alpha_{t}|= \\Big|\\sum_{j=1}^{k+1}a_{j,t}v^*\\phi_{k}(e^{i\\theta_j})\\Big|=|\\beta_t|,\n\\]\nwhere $\\beta_{t}= \\sum_{j=1}^{k+1} a_{j,t}v^*\\phi_{k}(e^{i\\theta_j}), \\ t=1,\\cdots,T$. Setting $\\beta=(\\beta_{1},\\cdots, \\beta_{T})^\\top$ and $\\hat \\eta = (v^*\\phi_{k}(e^{i\\theta_1}), \\cdots, v^*\\phi_{k}(e^{i\\theta_{k+1}}))^\\top$, we have $\\beta= B \\hat \\eta$ with $B$ given by (\\ref{equ:multispaceapprolowerbound1equ1}). By the definition of $\\sigma_{\\infty, \\min}(B)$,\n\\[\n\\max_{t=1,\\cdots,T}|\\beta_t|=||\\beta||_{\\infty}\\geq \\sigma_{\\infty, \\min}(B)||\\hat \\eta||_{\\infty}.\n\\]\nOn the other hand, by Theorem \\ref{thm:multispaceapprolowerbound0} and Lemma \\ref{lem:multimultiproductlowerbound1},\n\\[\n||\\hat \\eta||_{\\infty}\\geq \\frac{1}{2^k}||\\eta_{k+1,k}(e^{i\\theta_1},\\cdots,e^{i\\theta_{k+1}},e^{i\\hat \\theta_1},\\cdots,e^{i\\hat \\theta_k})||_{\\infty}\\geq \\frac{\\xi(k)}{2^k}\\Big(\\frac{2\\theta_{\\min}}{\\pi}\\Big)^{k}=\\frac{\\xi(k)(\\theta_{\\min})^{k}}{\\pi^{k}},\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\\medskip\nWe also have the following theorem.\n\\begin{thm}\\label{thm:multispaceapproxlowerbound2}\n\tLet $\\theta_j\\in \\mathbb R, j=1,\\cdots,k,$ be $k$ distinct points with $$\\theta_{\\min}=\\min_{p\\neq j}|\\theta_p-\\theta_j|_{2\\pi}>0.$$ Assume that there are $k$ distinct points $\\hat \\theta_1,\\cdots,\\hat \\theta_k\\in \\mathbb R$ satisfying\n\t\\[ \\max_{t=1, \\cdots, T} ||\\hat A\\hat \\alpha_t-A \\alpha_t||_2< \\sigma, \\]\n\twhere\n\t$\\hat \\alpha_t=(\\hat a_{1,t},\\cdots, \\hat a_{k,t})^\\top$, $\\alpha_t=(a_{1,t},\\cdots, a_{k,t})^\\top$ and\n\t\\[\n\t\\hat A= \\big(\\phi_{k}(e^{i \\hat \\theta_1}),\\cdots, \\phi_{k}(e^{i \\hat \\theta_k})\\big), \\quad A= \\big(\\phi_{k}(e^{i \\theta_1}),\\cdots, \\phi_{k}(e^{i \\theta_{k}})\\big).\n\t\\]\n\tThen\n\t\\[\n\t||\\eta_{k,k}(e^{i \\theta_1},\\cdots,e^{i \\theta_k},e^{i \\hat \\theta_1},\\cdots,e^{i \\hat \\theta_k})||_{\\infty}<\\frac{2^{k}}{\\sigma_{\\infty, \\min}(B)}\\sigma,\n\t\\]\n\twhere\n\t\\begin{equation}\\label{equ:multispaceapproxlowerbound2equ1}\n\tB=\\left(\\begin{array}{cccc}\n\ta_{1,1}&a_{2,1}&\\cdots&a_{k,1}\\\\\n\t\\vdots&\\vdots&\\vdots&\\vdots\\\\\n\ta_{1,T}&a_{2,T}&\\cdots&a_{k, T}\n\t\\end{array}\\right).\n\t\\end{equation}\n\\end{thm}\n\\begin{proof} Let $V$ be the complex space spanned by the column vectors of $\\hat A$ and let $V^\\perp$ be the orthogonal complement of $V$ in $\\mathbb C^{k+1}$. 
Let $v$ be a unit vector in $V^{\\perp}$ and denote by $P_{V^{\\perp}}$ the orthogonal projection onto $V^{\\perp}$ in $\\mathbb C^{k+1}$. Similarly to {Step 2} in the proof of Theorem \\ref{thm:multispaceapprolowerbound1}, we obtain that\n\\begin{align}\\label{equ:multispaceapproxlowerbound2equ2}\n\\min_{\\hat \\alpha_t\\in \\mathbb C^k}||\\hat A\\hat \\alpha_t-A\\alpha_t||_2 = ||P_{V^{\\perp}}(A\\alpha_{t})||_2= |v^*A\\alpha_{t}|= |\\sum_{j=1}^{k}a_{j,t}v^*\\phi_{k}(e^{i\\theta_j})| =|\\beta_{t}|,\n\\end{align} \nwhere $\\beta_{t}= \\sum_{j=1}^{k} a_{j,t}v^*\\phi_{k}(e^{i\\theta_j}), \\ t = 1,\\cdots, T$. Setting $\\beta=(\\beta_{1}, \\beta_{2},\\cdots, \\beta_{T})^\\top$, we have $\\beta= B \\hat \\eta$,\nwhere $B$ is given by (\\ref{equ:multispaceapproxlowerbound2equ1}) and \n$\\hat \\eta = (v^*\\phi_{k}(e^{i\\theta_1}), v^*\\phi_{k}(e^{i\\theta_2}), \\cdots, v^*\\phi_{k}(e^{i\\theta_{k}}))^\\top$. By the definition of $\\sigma_{\\infty, \\min}(B)$, we arrive at\n\\[\n||\\beta||_{\\infty}\\geq \\sigma_{\\infty, \\min}(B)||\\hat \\eta||_{\\infty}.\n\\]\nOn the other hand, by Theorem \\ref{thm:multispaceapprolowerbound0}, we get\n\\begin{equation*}\\label{equ:multispaceapproxlowerbound2equ3}\n||\\hat \\eta||_{\\infty} \\geq \\frac{1}{2^k}||\\eta_{k, k}(e^{i\\theta_1}, \\cdots, e^{i\\theta_{k}}, e^{i\\hat \\theta_1}, \\cdots, e^{i\\hat \\theta_k} )||_{\\infty},\n\\end{equation*}\nand hence the theorem is proved. \\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{thm:l0normrecovery0}}\\label{section:proofofthml0normrecover}\n\nThe proof of Theorem \\ref{thm:l0normrecovery0} is divided into four steps. \n\n\\textbf{Step 1.} We only prove the theorem for problem (\\ref{prob:l0minimization}); the other two cases can be proved in the same manner. We first prove that the solution to (\\ref{prob:l0minimization}) is a discrete measure corresponding to at least $n$ point scatterers. 
For $\\rho = \\sum_{j=1}^{k}\\hat a_j \\delta_{\\hat y_j}$ and $\\mu = \\sum_{j=1}^n a_j \\delta_{y_{j}}$, we set $\\hat \\mu_t = I_t \\rho = \\sum_{j=1}^k\\hat a_{j,t} \\delta_{\\hat y_j}$ and $\\mu_{t} = \\sum_{j=1}^n I_t(y_j)a_j \\delta_{y_{j}}$. We shall prove that if $k<n$, then, for any $\\hat y_j$'s and $\\hat a_{j,t}$'s,\n\\begin{equation}\\label{equ:multinumberresultequ0}\n\\max_{t=1, \\cdots, T} ||\\mathcal F[\\hat \\mu_t]-\\mathcal F[\\mu_t]||_{2}> 2\\sigma.\n\\end{equation}\nFor ease of presentation, we fix $\\hat y_j, \\hat a_{j,t}$'s in the subsequent arguments. In view of $||\\vect W_t||_2<\\sigma$, from (\\ref{equ:multinumberresultequ0}) we further have \n\\begin{equation}\n\\max_{t=1, \\cdots, T} ||\\mathcal F[\\hat \\mu_t]-\\vect Y_t||_{2}>\\sigma,\n\\end{equation} \nwhereby any solution corresponding to only $k<n$ point scatterers would violate the constraint in (\\ref{prob:l0minimization}). Hence the solution to (\\ref{prob:l0minimization}) contains at least $n$ point scatterers.\n\nWe now prove (\\ref{equ:multinumberresultequ0}). Let $h = \\frac{2\\Omega}{n}$. It suffices to show that, for all $\\omega \\in [0, h]$,\n\\begin{equation}\\label{equ:multinumberresultequ2}\n\\max_{t=1, \\cdots, T}\\frac{1}{n}\\sum_{j=1}^n|\\mathcal F[\\hat \\mu_t](\\omega+(j-1)h-\\Omega)-\\mathcal F[\\mu_t](\\omega+(j-1)h-\\Omega)|^2>4\\sigma^2.\n\\end{equation}\n Without loss of generality, we only show (\\ref{equ:multinumberresultequ2}) for $\\omega =0$. For $k<n$, consider\n\\begin{equation*}\n\\left(\\mathcal F [\\hat \\mu_t](\\omega_1), \\cdots,\\mathcal F [\\hat \\mu_t](\\omega_{n})\\right)^\\top -\\left(\\mathcal F [\\mu_t](\\omega_1), \\cdots,\\mathcal F [\\mu_t](\\omega_{n})\\right)^\\top =\\hat \\Phi \\hat \\alpha_t- \\Phi \\alpha_t,\n\\end{equation*}\nwhere $\\omega_j = (j-1)h-\\Omega$, $\\hat \\alpha_t= (\\hat a_{1,t},\\cdots, \\hat a_{k,t})^\\top$, $\\alpha_t=(I_t(y_1)a_1, \\cdots, I_{t}(y_n)a_n)^\\top$, $\\hat \\Phi = \\big(e^{i\\hat y_q \\omega_p}\\big)_{1\\leq p \\leq n, 1\\leq q\\leq k}$, and $\\Phi = \\big(e^{i y_q \\omega_p}\\big)_{1\\leq p\\leq n, 1\\leq q \\leq n}$. We shall show that\n\\begin{equation}\\label{equ:multinumberresultequ3}\n\\max_{t=1, \\cdots, T} \\frac{1}{\\sqrt{n}}||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2> 2\\sigma,\n\\end{equation}\n and consequently arrive at (\\ref{equ:multinumberresultequ2}). \n\n\n\\textbf{Step 2.} We let $\\theta_j = y_j\\frac{2\\Omega}{n}, j=1,\\cdots,n$ and $\\hat \\theta_j = \\hat y_j\\frac{2\\Omega}{n}$. 
From the following decompositions: \n\\begin{equation}\\label{equ:multimatrixdecomposition1}\n\\begin{aligned}\n&\\hat \\Phi=\\big(\\phi_{n-1}(e^{i \\hat \\theta_1}), \\cdots,\\phi_{n-1}(e^{i\\hat \\theta_k} ) \\big)\\text{diag}(e^{-i\\hat y_1\\Omega},\\cdots,e^{-i\\hat y_k\\Omega}),\\\\\n&\\Phi=\\big(\\phi_{n-1}(e^{i \\theta_1}), \\cdots,\\phi_{n-1}(e^{i \\theta_n})\\big)\\text{diag}(e^{-i y_1\\Omega},\\cdots,e^{-iy_n\\Omega}),\n\\end{aligned}\n\\end{equation}\nwhere $\\phi_{n-1}(\\cdot)$ is defined as in (\\ref{equ:multiphiformula}), we readily obtain that\n\\begin{align} \\label{equ:multi1111}\n\\max_{t=1, \\cdots, T} ||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2= \\max_{t=1, \\cdots, T} ||\\hat D \\hat \\gamma_t-D \\gamma_t||_2,\n\\end{align}\nwhere $\\hat \\gamma_{t} = (\\hat a_{1,t}e^{-i\\hat y_1\\Omega},\\cdots, \\hat a_{k,t}e^{-i\\hat y_k\\Omega})^\\top, \\gamma_t=(I_{t}(y_1)a_1e^{-iy_1\\Omega},\\cdots, I_{t}(y_n)a_{n}e^{-iy_n\\Omega})^\\top$, \\\\\n$\\hat D=\\big(\\phi_{n-1}(e^{i \\hat \\theta_1}), \\cdots,\\phi_{n-1}(e^{i \\hat \\theta_k} ) \\big)$ and $D=\\big(\\phi_{n-1}(e^{i \\theta_1}), \\cdots,\\phi_{n-1}(e^{i \\theta_n})\\big)$.\nWe consider $I$ in (\\ref{equ:illuminationpattern1}) and denote $B= I \\text{diag}(a_1e^{-iy_1\\Omega},\\cdots, a_{n}e^{-iy_n\\Omega})$. 
Applying Theorem \\ref{thm:multispaceapprolowerbound1}, we get \n\\begin{equation*}\n\\max_{t=1, \\cdots, T} ||\\hat D \\hat \\gamma_t-D \\gamma_t||_2\\geq \\frac{\\sigma_{\\infty, \\min}(B)\\xi(n-1)(\\theta_{\\min})^{n-1}}{\\pi^{n-1}}, \n\\end{equation*}\nwhere $\\theta_{\\min}=\\min_{j\\neq p}|\\theta_j-\\theta_p|_{2\\pi}$.\nOn the other hand, by the definition of $\\sigma_{\\infty, \\min}$, we have\n\\begin{equation}\\label{equ:multinumberupperboundequ2}\n\\begin{aligned}\n&\\sigma_{\\infty, \\min}(I)m_{\\min} = \\min_{||\\alpha||_{\\infty}\\geq m_{\\min}}||I \\alpha||_{\\infty}\\\\\n \\leq & \\min_{||\\alpha||_{\\infty}\\geq 1}||I\\text{diag}(a_1e^{-iy_1\\Omega},\\cdots, a_{n}e^{-iy_n\\Omega})\\alpha||_{\\infty} \\quad (\\text{by $m_{\\min} =\\min_{1\\leq j\\leq n}|a_j|$})\\\\\n =& \\sigma_{\\infty, \\min}(B).\n\\end{aligned}\n\\end{equation}\nThus,\n\\begin{equation*}\n\\max_{t=1, \\cdots, T} ||\\hat D \\hat \\gamma_t-D \\gamma_t||_2\\geq \\frac{m_{\\min}\\sigma_{\\infty, \\min}(I)\\xi(n-1)(\\theta_{\\min})^{n-1}}{\\pi^{n-1}}. \n\\end{equation*}\nBy (\\ref{equ:multi1111}), it follows that \n\\[\n\\max_{t=1, \\cdots, T} ||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 \\geq \\frac{m_{\\min}\\sigma_{\\infty, \\min}(I)\\xi(n-1)(\\theta_{\\min})^{n-1}}{\\pi^{n-1}}. \n\\]\nOn the other hand, recall that $d_{\\min}= \\min_{j\\neq p}|y_j-y_p|_{\\frac{n\\pi}{\\Omega}}$. Using the relation $\\theta_j = y_j\\frac{2\\Omega}{n}$, we have $\\theta_{\\min}=\\frac{2\\Omega}{n}d_{\\min}$. 
Then the separation condition (\\ref{equ:sepaconditionl0normrecovery}) and $\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}} \\leq 1$ imply that\n\\begin{equation*}\n\\theta_{\\min}\\geq \\frac{4.4 e\\pi }{n}\\Big(\\frac{1}{\\sigma_{\\infty, \\min}( I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}} \\geq \\frac{4.4 e\\pi }{n}\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n-1}}> \\pi \\Big(\\frac{2\\sqrt{n}}{\\xi(n-1)\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n-1}},\n\\end{equation*}\nwhere we have used Lemma \\ref{lem:multinumbercalculate1} for deriving the last inequality. Therefore, \n\\[\n\\max_{t=1, \\cdots, T} ||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 > 2\\sqrt{n}\\sigma,\n\\]\nwhence (\\ref{equ:multinumberresultequ3}) is proved.\n\n\\textbf{Step 3.} By the above results, the solution of (\\ref{prob:l0minimization}) corresponds exactly to $n$ point scatterers. Suppose that the solution is $\\rho = \\sum_{j=1}^n \\hat a_j \\delta_{\\hat y_j}$ and $\\hat \\mu_{t} = I_t \\rho = \\sum_{j=1}^n \\hat a_{j,t} \\delta_{\\hat y_j}$. We now prove the stability of the location recovery. 
Similarly to {Step 1}, using the constraints in (\\ref{prob:l0minimization}) \n\\[\n||\\mathcal F[\\hat \\mu_t] - \\vect Y_t||_2 < \\sigma, \\quad 1\\leq t\\leq T,\n\\]\nwe can derive that \n\\[\n\\frac{1}{2\\Omega} \\int_{0}^{h}\\max_{t=1, \\cdots, T} \\sum_{j=1}^n|\\mathcal F[\\hat \\mu_t](\\omega+(j-1)h-\\Omega)-\\mathcal F[\\mu_t](\\omega+(j-1)h-\\Omega)|^2d\\omega < 4\\sigma^2.\n\\]\nHence, there exists $\\omega_0 \\in [0,h]$ ($h = \\frac{2\\Omega}{n}$) such that \n\\begin{equation}\\label{equ:multisupportresultequ2}\n\\max_{t=1, \\cdots, T}\\frac{1}{n}\\sum_{j=1}^n|\\mathcal F[\\hat \\mu_t](\\omega_0+(j-1)h-\\Omega)-\\mathcal F[\\mu_t](\\omega_0+(j-1)h-\\Omega)|^2< 4 \\sigma^2.\n\\end{equation}\nWithout loss of generality, we suppose that $\\omega_0 = 0$ and consider \n\\begin{equation*}\n\\left(\\mathcal F [\\hat \\mu_t](\\omega_1), \\mathcal F [\\hat \\mu_t](\\omega_{2}), \\cdots,\\mathcal F [\\hat \\mu_t](\\omega_{n})\\right)^\\top -\\left(\\mathcal F [\\mu_t](\\omega_1), \\mathcal F [\\mu_t](\\omega_{2}), \\cdots,\\mathcal F [\\mu_t](\\omega_{n})\\right)^\\top =\\hat \\Phi \\hat \\alpha_t- \\Phi \\alpha_t, \n\\end{equation*}\nwhere $\\omega_j = (j-1)h-\\Omega$, $\\hat \\alpha_t= (\\hat a_{1,t},\\cdots, \\hat a_{n,t})^\\top$, $\\alpha_t=(I_t(y_1)a_1, \\cdots, I_{t}(y_n)a_n)^\\top$ and \n\\begin{equation*}\n\\hat \\Phi= \\left(\n\\begin{array}{ccc}\ne^{i\\hat y_1\\omega_1}&\\cdots& e^{i\\hat y_n\\omega_1}\\\\\ne^{i\\hat y_1\\omega_{2}} &\\cdots& e^{i\\hat y_n\\omega_{2}}\\\\\n\\vdots&\\vdots&\\vdots\\\\\ne^{i\\hat y_1\\omega_{n}}&\\cdots& e^{i\\hat y_n \\omega_{n}}\\\\\n\\end{array}\n\\right), \\quad \\Phi=\\left(\n\\begin{array}{ccc}\ne^{i y_1\\omega_1}&\\cdots& e^{i y_n\\omega_1}\\\\\ne^{i y_1\\omega_{2}} &\\cdots& e^{i y_n\\omega_{2}}\\\\\n\\vdots&\\vdots&\\vdots\\\\\ne^{i y_1\\omega_{n}}&\\cdots& e^{i y_n \\omega_{n}}\\\\\n\\end{array}\n\\right).\n\\end{equation*}\nBy (\\ref{equ:multisupportresultequ2}), it is clear that \n\\[\n\\max_{t=1, \\cdots, 
T}||\\hat \\Phi \\hat \\alpha_t- \\Phi\\alpha_t||_{2}<2\\sqrt{n}\\sigma.\n\\]\nNote that \n\\begin{align}\\label{equ:multiupperboundsupportlimithm1equ1}\n\\max_{t=1, \\cdots, T}||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2= \\max_{t=1, \\cdots, T}||\\hat D \\hat \\gamma_t -D \\gamma_t||_2,\n\\end{align}\nwhere $\\hat \\gamma_{t}=(\\hat a_{1,t}e^{-i\\hat y_1\\Omega},\\cdots, \\hat a_{n,t}e^{-i\\hat y_n\\Omega})^\\top, \\gamma_t=(I_t(y_1)a_1e^{-iy_1\\Omega},\\cdots, I_t(y_n)a_{n}e^{-iy_n\\Omega})^\\top$,\\\\\n $\\hat D=\\big(\\phi_{n-1}(e^{i \\hat \\theta_1}),\\cdots,\\phi_{n-1}(e^{i \\hat \\theta_n})\\big)$ and $D=\\big(\\phi_{n-1}(e^{i \\theta_1}),\\cdots,\\phi_{n-1}(e^{i \\theta_n})\\big)$. Thus,\n\\begin{equation}\\label{equ:multisupportupperboundequ1}\n\\max_{t=1, \\cdots, T}||\\hat D \\hat \\gamma_t-D \\gamma_t||_2 < 2\\sqrt{n}\\sigma. \n\\end{equation}\nWe can apply Theorem \\ref{thm:multispaceapproxlowerbound2} to get\n\\begin{align}\\label{equ:multiupperboundsupportlimithm1equ3}\n||\\eta_{n,n}(e^{i \\theta_1},\\cdots,e^{i \\theta_n},e^{i \\hat \\theta_1},\\cdots,e^{i \\hat \\theta_n})||_{\\infty}<\\frac{2^{n+1}\\sqrt{n}\\sigma}{\\sigma_{\\infty, \\min}(B)}, \n\\end{align}\nwhere $\\eta_{n,n}$ is defined by (\\ref{equ:multieta}) and $B= I\\text{diag}(a_1e^{-iy_1\\Omega},\\cdots, a_{n}e^{-iy_n\\Omega})$. By (\\ref{equ:multinumberupperboundequ2}),\nit follows that \n\\begin{align*}\n\\sigma_{\\infty, \\min}(I)m_{\\min} \\leq \\sigma_{\\infty, \\min}(B).\n\\end{align*}\nThus, we have\n\\begin{align}\\label{equ:multisupportupperboundequ2}\n||\\eta_{n,n}(e^{i \\theta_1},\\cdots,e^{i \\theta_n},e^{i \\hat \\theta_1},\\cdots,e^{i \\hat \\theta_n})||_{\\infty}<\\frac{2^{n+1}\\sqrt{n}}{\\sigma_{\\infty, \\min} (I)}\\frac{\\sigma}{m_{\\min}}. \n\\end{align}\n\n\\textbf{Step 4.}\nWe apply Lemma \\ref{lem:multistablemultiproduct0} to estimate $|\\hat \\theta_j -\\theta_j|_{2\\pi}$'s. 
For this purpose, let $\\epsilon = \\frac{2\\sqrt{n}\\pi^{n}}{\\sigma_{\\infty, \\min}(I)} \\frac{\\sigma}{m_{\\min}}$. It is clear that \n$||\\eta_{n ,n}||_{\\infty}<(\\frac{2}{\\pi})^n\\epsilon$ and we only need to check the following condition:\n\\begin{equation}\\label{equ:multiupperboundsupportlimithm1equ4}\n\\theta_{\\min}\\geq \\Big(\\frac{4\\epsilon}{\\lambda(n)}\\Big)^{\\frac{1}{n}}, \\quad \\mbox{or equivalently}\\,\\,\\, (\\theta_{\\min})^n \\geq \\frac{4\\epsilon}{\\lambda(n)}.\n\\end{equation}\nIndeed, by $\\theta_{\\min} = \\frac{2\\Omega}{n} d_{\\min}$ and the separation condition (\\ref{equ:sepaconditionl0normrecovery}), \n\\begin{equation}\\label{equ:multiupperboundsupportlimithm1equ-1}\n\\theta_{\\min}\\geq \\frac{4.4\\pi e}{n}\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}\\geq \\Big(\\frac{8\\sqrt{n}\\pi^n}{\\lambda(n)}\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}.\n\\end{equation}\nHere, we have used Lemma \\ref{lem:multisupportcalculate1} for deriving the last inequality. Then, we get (\\ref{equ:multiupperboundsupportlimithm1equ4}). Therefore, we can apply Lemma \\ref{lem:multistablemultiproduct0} to get that, after reordering $\\hat \\theta_j$'s,\n\\begin{equation} \\label{equ:multiupperboundsupportlimithm1equ7}\n\\Big|\\hat \\theta_{j}-\\theta_j\\Big|_{2\\pi}< \\frac{\\theta_{\\min}}{2}, \\text{ and } \\Big|\\hat \\theta_{j}-\\theta_j\\Big|_{2\\pi}< \\frac{2^n\\sqrt{n}\\pi^{n}}{(n-2)!(\\theta_{\\min})^{n-1}} \\frac{1}{\\sigma_{\\infty, \\min}(I)} \\frac{\\sigma}{m_{\\min}},\\ j=1,\\cdots,n.\n\\end{equation}\nFinally, we estimate $|\\hat y_j - y_j|_{\\frac{n\\pi}{\\Omega}}$. 
Since $|\\hat \\theta_{j}-\\theta_j|_{2\\pi}< \\frac{\\theta_{\\min}}{2}$, we have after reordering the $\\hat y_j$'s,\n$$|\\hat y_j-y_j|_{\\frac{n\\pi}{\\Omega}}< \\frac{d_{\\min}}{2}.$$\nOn the other hand, $\\Big|\\hat y_j-y_j\\Big|_{\\frac{n\\pi}{\\Omega}}= \\frac{n}{2\\Omega}\\Big|\\hat \\theta_j -\\theta_j\\Big|_{2\\pi}$. \t\nCombining (\\ref{equ:multiupperboundsupportlimithm1equ7}) and (\\ref{equ:stirlingformula}), a direct calculation shows that\n\\begin{align*}\n\\Big|\\hat y_j-y_j\\Big|_{\\frac{n\\pi}{\\Omega}}< \\frac{C(n)}{\\Omega} (\\frac{\\pi}{\\Omega d_{\\min}})^{n-1} \\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}, \n\\end{align*}\nwhere $C(n)=2\\sqrt{2}ne^{n}\\sqrt{\\pi}$.\n\n\\section{Proof of Theorem \\ref{thm:twodl0normrecovery0}}\\label{section:prooftwodl0normrecovery}\n\\subsection{Number and location recoveries in one-dimensional case}\nWe first introduce some results on the number and location recoveries in the one-dimensional case, which will help us to derive the stability results for two-dimensional super-resolution. Unlike Theorem \\ref{thm:l0normrecovery0}, the stability results here consider the Euclidean distance between point scatterers. \n\nFor source $\\mu = \\sum_{j=1}^n a_j \\delta_{y_j}$ and illumination patterns $I_t$'s, the measurements are\n\\begin{equation}\\label{equ:onedmultimodelsetting1}\n\\mathbf Y_t(\\omega) = \\mathcal F [I_t \\mu] (\\omega) + \\mathbf W_t(\\omega)= \\sum_{j=1}^{n}I_t(y_j)a_j e^{i y_j \\omega} + \\mathbf W_t(\\omega), \\quad 1\\leq t\\leq T, \\ \\omega \\in [-\\Omega, \\Omega], \n\\end{equation}\nwhere $\\mathcal F[I_t \\mu]$ denotes the Fourier transform of $I_t \\mu$ and $\\vect W_t(\\omega)$ is the noise with $||\\mathbf W_t||_{\\infty}<\\sigma$. 
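For concreteness, the measurement model (\ref{equ:onedmultimodelsetting1}) can be simulated directly. All concrete values in the sketch below (illumination patterns, amplitudes, noise level, frequency grid) are illustrative assumptions, not quantities fixed by the theory.

```python
import numpy as np

# Sketch of the model: Y_t(omega) = sum_j I_t(y_j) a_j e^{i y_j omega} + W_t(omega),
# sampled on a frequency grid in [-Omega, Omega], with ||W_t||_inf < sigma.
rng = np.random.default_rng(1)
n, T, Omega, sigma = 3, 4, 2.0, 1e-3
y = np.array([-1.0, 0.2, 1.3])           # point-source locations (illustrative)
a = np.array([1.0, 0.8, 1.2])            # amplitudes, so m_min = 0.8
omega = np.linspace(-Omega, Omega, 64)   # measurement frequencies

def measurements(I_vals):
    """Return the T noisy Fourier measurements; I_vals[t, j] = I_t(y_j)."""
    clean = (I_vals * a) @ np.exp(1j * np.outer(y, omega))   # shape (T, len(omega))
    noise = sigma * (rng.uniform(-1, 1, clean.shape)
                     + 1j * rng.uniform(-1, 1, clean.shape)) / 2
    return clean + noise                 # |noise| <= sigma/sqrt(2) < sigma pointwise

I_vals = rng.uniform(0.5, 1.5, (T, n))   # illuminations evaluated at the sources
Y = measurements(I_vals)
assert Y.shape == (T, 64)
```

The theorems below quantify when such data determine $n$, the $y_j$'s and the $a_j$'s stably.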
\n\n\n\\begin{thm}\\label{thm:onednumberbound}\nSuppose the measurements $\\vect Y_t$'s in (\\ref{equ:onedmultimodelsetting1}) are generated from $\\mu= \\sum_{j=1}^n a_j \\delta_{y_j}, y_j\\in \\mathbb R$ where $y_j$'s are in an interval $\\mathcal O$ of length $\\frac{c_0n\\pi}{\\Omega}$ with $c_0\\geq 1$ and satisfy\n\\begin{equation}\\label{equ:onedsepaconditionnumber}\nd_{\\min} := \\min_{p\\neq j}\\babs{y_p-y_j}\\geq \\frac{4.4c_0e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}\n\\end{equation}\nwith $\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\leq 1$. Then there is no $k<n$, locations $\\hat y_j \\in \\mathcal O$ and amplitudes $\\hat a_{j,t}\\in \\mathbb C$ such that the measures $\\hat \\mu_t = \\sum_{j=1}^{k}\\hat a_{j,t}\\delta_{\\hat y_j}$ satisfy\n\\begin{equation}\\label{equ:twodmultinumberresultequ0}\n||\\mathcal F[\\hat \\mu_t] - \\vect Y_t||_{\\infty} < \\sigma, \\quad 1\\leq t\\leq T.\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nSuppose on the contrary that such $\\hat \\mu_t$'s exist. Since $||\\vect W_t||_{\\infty}<\\sigma$, we then have $\\max_{t=1, \\cdots, T}||\\mathcal F[\\hat \\mu_t] - \\mathcal F[\\mu_t]||_{\\infty} < 2\\sigma$. Specifically, for $k<n$, sampling at $\\omega_j = (j-1)h-\\Omega$, $1\\leq j\\leq n$, with $h = \\frac{\\Omega}{c_0n}$, and defining $\\hat \\Phi = (e^{i\\hat y_j \\omega_l})_{l,j}$, $\\Phi = (e^{i y_j \\omega_l})_{l,j}$, $\\hat \\alpha_t = (\\hat a_{1,t},\\cdots, \\hat a_{k,t})^\\top$, $\\alpha_t = (I_t(y_1)a_1,\\cdots, I_t(y_n)a_n)^\\top$, this implies\n\\begin{equation}\\label{equ:onedmultinumberresultequ0}\n\\max_{t=1, \\cdots, T}||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 < 2\\sqrt{n}\\sigma.\n\\end{equation}\nWe will show that, on the contrary,\n\\begin{equation}\\label{equ:onedmultinumberresultequ3}\n\\max_{t=1, \\cdots, T}||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 > 2\\sqrt{n}\\sigma,\n\\end{equation}\nand consequently it yields a contradiction with (\\ref{equ:twodmultinumberresultequ0}). Let $\\theta_j = y_j h = y_j\\frac{\\Omega}{c_0n}$ and $\\hat \\theta_j = \\hat y_j h = \\hat y_j\\frac{\\Omega}{c_0n}$. Similar to Step 2 in the proof of Theorem \\ref{thm:l0normrecovery0}, we can have\n\\[\n\\max_{t=1, \\cdots, T} ||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 \\geq \\frac{m_{\\min}\\sigma_{\\infty, \\min}(I)\\xi(n-1)(\\theta_{\\min})^{n-1}}{\\pi^{n-1}},\n\\]\nwhere $\\theta_{\\min} = \\min_{p\\neq j}|\\theta_j-\\theta_p|_{2\\pi}$. Because $y_j$'s are in an interval of length $\\frac{c_0n\\pi}{\\Omega}$, by $\\theta_j = y_jh$ we have $\\theta_{\\min} = \\min_{p\\neq j}|\\theta_j-\\theta_p|_{2\\pi}=d_{\\min}\\frac{\\Omega}{c_0n}$. 
Then the separation condition (\\ref{equ:onedsepaconditionnumber}) and $\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}} \\leq 1$ imply that\n\\begin{equation*}\n\\theta_{\\min}\\geq \\frac{4.4 e\\pi }{n}\\Big(\\frac{1}{\\sigma_{\\infty, \\min}( I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}} \\geq \\frac{4.4 e\\pi }{n}\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n-1}}> \\pi \\Big(\\frac{2\\sqrt{n}}{\\xi(n-1)\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n-1}},\n\\end{equation*}\nwhere we have used Lemma \\ref{lem:multinumbercalculate1} for deriving the last inequality. Therefore, \n\\[\n\\max_{t=1, \\cdots, T} ||\\hat \\Phi \\hat \\alpha_t-\\Phi \\alpha_t||_2 > 2\\sqrt{n}\\sigma,\n\\]\nwhence we prove (\\ref{equ:onedmultinumberresultequ3}).\n\\end{proof}\n\n\\begin{thm}\\label{thm:onedsupportbound}\nSuppose that the measurements $\\vect Y_t$'s in (\\ref{equ:onedmultimodelsetting1}) are generated from $\\mu= \\sum_{j=1}^n a_j \\delta_{y_j}, y_j\\in \\mathbb R$, where $y_j$'s are in an interval $\\mathcal O$ of length $\\frac{c_0n\\pi}{\\Omega}$ with $c_0\\geq 1$ and satisfy\n\\begin{equation}\\label{equ:onedsepaconditionsupport}\nd_{\\min} := \\min_{p\\neq j}\\babs{y_p-y_j}\\geq \\frac{4.4c_0e\\pi }{\\Omega }\\Big(\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\Big)^{\\frac{1}{n}}\n\\end{equation}\nwith $\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}\\leq 1$. 
Moreover, for $\\hat \\mu_t = \\sum_{j=1}^n \\hat a_{j,t} \\delta_{\\hat y_j}, \\hat y_j \\in \\mathcal O$ satisfying $||\\mathcal F[\\hat \\mu_t] - \\vect Y_t||_{\\infty}< \\sigma, t=1, \\cdots, T$, after reordering the $\\hat y_j$'s, we have \n\t\\begin{equation}\n\t\\Big|\\hat y_j-y_j\\Big|<\\frac{d_{\\min}}{2},\n\t\\end{equation} \n\tand \n\t\\begin{equation}\n\t\\Big|\\hat y_j-y_j\\Big| < \\frac{C(n)}{\\Omega}SRF^{n-1}\\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}, \\quad 1\\leq j\\leq n,\n\t\\end{equation}\n\twhere $C(n)=2^n\\sqrt{2\\pi}nc_0^{n-1}e^{n}$ and $SRF = \\frac{\\pi}{\\Omega d_{\\min}}$ is the super-resolution factor.\n\\end{thm}\n\\begin{proof}\nLet $\\mu_t = \\sum_{j=1}^n a_j I_t(y_j)\\delta_{y_j}$. Similar to the proof of Theorem \\ref{thm:l0normrecovery0}, we consider \n\\begin{equation*}\n\\left(\\mathcal F [\\hat \\mu_t](\\omega_1), \\mathcal F [\\hat \\mu_t](\\omega_{2}), \\cdots,\\mathcal F [\\hat \\mu_t](\\omega_{n})\\right)^\\top -\\left(\\mathcal F [\\mu_t](\\omega_1), \\mathcal F [\\mu_t](\\omega_{2}), \\cdots,\\mathcal F [\\mu_t](\\omega_{n})\\right)^\\top =\\hat \\Phi \\hat \\alpha_t- \\Phi \\alpha_t, \n\\end{equation*}\nwhere $\\omega_j = (j-1)h-\\Omega$ with $h = \\frac{\\Omega}{c_0n}$, $\\hat \\alpha_t= (\\hat a_{1,t},\\cdots, \\hat a_{n,t})^\\top$, $\\alpha_t=(I_t(y_1)a_1, \\cdots, I_{t}(y_n)a_n)^\\top$ and \n\\begin{equation*}\n\\hat \\Phi= \\left(\n\\begin{array}{ccc}\ne^{i\\hat y_1\\omega_1}&\\cdots& e^{i\\hat y_n\\omega_1}\\\\\ne^{i\\hat y_1\\omega_{2}} &\\cdots& e^{i\\hat y_n\\omega_{2}}\\\\\n\\vdots&\\vdots&\\vdots\\\\\ne^{i\\hat y_1\\omega_{n}}&\\cdots& e^{i\\hat y_n \\omega_{n}}\\\\\n\\end{array}\n\\right), \\quad \\Phi=\\left(\n\\begin{array}{ccc}\ne^{i y_1\\omega_1}&\\cdots& e^{i y_n\\omega_1}\\\\\ne^{i y_1\\omega_{2}} &\\cdots& e^{i y_n\\omega_{2}}\\\\\n\\vdots&\\vdots&\\vdots\\\\\ne^{i y_1\\omega_{n}}&\\cdots& e^{i y_n \\omega_{n}}\\\\\n\\end{array}\n\\right).\n\\end{equation*}\nBy the constraint on 
the noise, it is clear that \n\\[\n\\max_{t=1, \\cdots, T}||\\hat \\Phi \\hat \\alpha_t- \\Phi\\alpha_t||_{2}<2\\sqrt{n}\\sigma.\n\\]\nLet $\\theta_j = y_j h$ and $\\hat \\theta_j = \\hat y_j h.$\nSimilar to the proof of Theorem \\ref{thm:l0normrecovery0}, we can prove that, after reordering $\\hat \\theta_j$'s,\n\\begin{equation} \\label{equ:onedmultiupperboundsupportlimithm1equ7}\n\\Big|\\hat \\theta_{j}-\\theta_j\\Big|_{2\\pi}< \\frac{\\theta_{\\min}}{2}, \\text{ and } \\Big|\\hat \\theta_{j}-\\theta_j\\Big|_{2\\pi}< \\frac{2^n\\sqrt{n}\\pi^{n}}{(n-2)!(\\theta_{\\min})^{n-1}} \\frac{1}{\\sigma_{\\infty, \\min}(I)} \\frac{\\sigma}{m_{\\min}},\\ j=1,\\cdots,n.\n\\end{equation}\nFinally, we estimate $|\\hat y_j - y_j|$. Since $|\\hat \\theta_{j}-\\theta_j|_{2\\pi}< \\frac{\\theta_{\\min}}{2}$ and $\\hat y_j$'s, $y_j$'s are in $\\mathcal O$, we have after reordering the $\\hat y_j$'s,\n$$|\\hat y_j-y_j|< \\frac{d_{\\min}}{2}.$$\nOn the other hand, $\\Big|\\hat y_j-y_j\\Big| = \\frac{nc_0}{\\Omega}\\Big|\\hat \\theta_j -\\theta_j\\Big|_{2\\pi}$. \t\nCombining (\\ref{equ:onedmultiupperboundsupportlimithm1equ7}) and (\\ref{equ:stirlingformula}), a direct calculation shows that\n\\begin{align*}\n\\Big|\\hat y_j-y_j\\Big|< \\frac{C(n)}{\\Omega} (\\frac{\\pi}{\\Omega d_{\\min}})^{n-1} \\frac{1}{\\sigma_{\\infty, \\min}(I)}\\frac{\\sigma}{m_{\\min}}, \n\\end{align*}\nwhere $C(n)=2^n\\sqrt{2\\pi}nc_0^{n-1}e^{n}$.\n\\end{proof}\n\n\n\\subsection{Projection lemmas}\nNext we introduce two auxiliary lemmas whose ideas are from \\cite{liu2021mathematicalhighd}. We introduce some notation. For $0<\\theta\\leq \\frac{\\pi}{2}$ and $N=\\lfloor \\frac{\\pi}{\\theta} \\rfloor$, we denote the unit vectors in $\\mathbb R^2$ by\n\\begin{align}\\label{equ:vectorlist1}\n\\vect v(\\tau \\theta)= \\big(\\cos(\\tau \\theta), \\sin(\\tau \\theta)\\big)^T, \\quad 1\\leq \\tau \\leq N.\n\\end{align}\nIt is obvious that there are $N$ different unit vectors of the form (\\ref{equ:vectorlist1}). 
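The counting argument behind these projection lemmas can be illustrated numerically. The sketch below assumes $\Delta = \frac{\pi}{(n+2)(n+1)}$ and direction spacing $\theta = 2\Delta$ (an assumed choice, matching the angle gap asserted in the lemma that follows), and counts, for a random planar configuration, how many directions $\vect v(\tau\theta)$ keep all pairwise projected distances at least $\frac{2\Delta d_{\min}}{\pi}$:

```python
import numpy as np
from itertools import combinations

# Count "good" projection directions v(tau*theta) for random points in R^2:
# a direction is good if all pairwise projected distances stay >= 2*Delta*dmin/pi.
rng = np.random.default_rng(2)
n = 4
pts = rng.standard_normal((n, 2))
dmin = min(np.linalg.norm(p - q) for p, q in combinations(pts, 2))

Delta = np.pi / ((n + 2) * (n + 1))
theta = 2 * Delta                              # assumed spacing of the directions
N = int(np.floor(np.pi / theta + 1e-9))        # N = floor(pi/theta) candidate directions

good = 0
for tau in range(1, N + 1):
    v = np.array([np.cos(tau * theta), np.sin(tau * theta)])
    proj = pts @ v                             # 1-D projections P_v(y_j)
    sep = min(abs(p - q) for p, q in combinations(proj, 2))
    good += sep >= 2 * Delta * dmin / np.pi

assert good >= n + 1                           # at least n+1 good directions survive
```

Each difference vector $\vect u_{pj}$ can spoil only the few directions nearly perpendicular to it, so most of the $N$ candidates remain good.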
\n\nFor a vector $\\vect v \\in \\mathbb R^2$, we denote by $\\mathcal P_{\\vect v}$ the projection onto the one-dimensional space spanned by $\\vect v$. We have the following lemmas. \n\n\\begin{lem}\\label{lem:highdsupportproject1}\n\tLet $n\\geq 2$ and $\\vect{y}_1, \\cdots, \\vect{y}_n$ be $n$ different points in $\\mathbb R^2$. Let $\\ d_{\\min}=\\min_{p\\neq j}||\\vect{y}_p-\\vect{y}_j||_2$ and $\\Delta = \\frac{\\pi}{(n+2)(n+1)}$. Then there exist $n+1$ unit vectors $\\vect{v}_q$'s such that $0 \\leq \\vect v_p \\cdot \\vect v_j \\leq \\cos(2\\Delta)$ for $p\\neq j$ and\n\t\\begin{equation}\\label{equ:highdprojectionlower3}\n\t\\min_{p\\neq j, 1\\leq p, j \\leq n}||\\mathcal P_{\\vect v_q}(\\vect{y}_p)-\\mathcal P_{\\vect v_q}(\\vect{y}_j)||_2\\geq \\frac{2\\Delta d_{\\min}}{\\pi},\\quad q=1,\\cdots, n+1. \n\t\\end{equation}\n\\end{lem}\n\\begin{proof}\n\tNote that there are at most $\\frac{n(n-1)}{2}$ different vectors of the form $\\vect u_{pj}= \\vect y_p -\\vect y_j, p<j$. Since $|\\cos \\phi|\\geq \\frac{2}{\\pi}|\\frac{\\pi}{2}-\\phi|$ for $\\phi \\in [0, \\pi]$, the unit vectors $\\vect v$ with $||\\mathcal P_{\\vect v}(\\vect u_{pj})||_2 < \\frac{2\\Delta}{\\pi}||\\vect u_{pj}||_2$ lie in an open arc of length $2\\Delta$ around the direction perpendicular to $\\vect u_{pj}$. Taking $\\theta = 2\\Delta$, such an arc contains at most one of the directions $\\vect v(\\tau \\theta)$, and\n\t\\begin{align*}\n\tN = \\Big\\lfloor \\frac{\\pi}{2\\Delta} \\Big\\rfloor = \\frac{(n+2)(n+1)}{2} > \\frac{(n+1)^2}{2}.\n\t\\end{align*}\n\tSince $\\frac{(n+1)^2}{2} - \\frac{n(n-1)}{2} = \\frac{3n+1}{2} \\geq n+1$, we can find $n+1$ vectors of the form $\\vect v(\\tau \\theta)$ that are not contained in the set $\\cup_{p<j}\\big\\{\\vect v(\\tau \\theta): ||\\mathcal P_{\\vect v(\\tau \\theta)}(\\vect u_{pj})||_2 < \\frac{2\\Delta}{\\pi}||\\vect u_{pj}||_2\\big\\}$. Since $||\\vect u_{pj}||_2\\geq d_{\\min}$, these $n+1$ vectors $\\vect v_q$ satisfy (\\ref{equ:highdprojectionlower3}); moreover, any two of them differ in angle by at least $2\\Delta$, whence $0 \\leq \\vect v_p \\cdot \\vect v_j \\leq \\cos(2\\Delta)$ for $p\\neq j$ after replacing $\\vect v_q$ by $-\\vect v_q$ when necessary.\n\\end{proof}\n\nFor small $\\theta_{13}$, the sensitivity extends down to values of a few $\\cdot 10^{-3}$. Hence a low energy Neutrino Factory would be a precision tool for both large and small $\\theta_{13}$.\n \nIn Section II we describe the design for the low-threshold detector and its performance. In Section III, we discuss in detail the physics reach of the proposed setup. We first consider the disappearance $\\nu_\\mu$ signal in order to determine precisely the value of the atmospheric mass squared difference and, possibly, the type of hierarchy even for $\\theta_{13}=0$.\nThen, we consider the appearance signals $\\nu_e \\rightarrow \\nu_\\mu$ and $\\bar{\\nu}_e \\rightarrow \\bar{\\nu}_\\mu$, which depend on $\\theta_{13}$, $\\delta$ and the type of neutrino mass ordering. 
\nWe perform a detailed numerical simulation and discuss the sensitivity of the low-energy Neutrino Factory to these parameters.\nIn Section IV, we draw our conclusions.\n\n\\section{Detector design and performance}\n\n\nA totally active scintillator detector (TASD) has been proposed for a Neutrino Factory, and results from a first study of its expected performance are described in the recent International Scoping Study Report~\\cite{ISS-Detector Report}. Using a TASD for neutrino physics is not new. Examples are KamLAND~\\cite{KamLAND}, which has been operating for several years, and the proposed NO$\\nu$A detector~\\cite{Nova}, which is a $15-18$~Kton liquid scintillator detector that will operate off-axis to the NuMI beam line~\\cite{Numi} at Fermilab. Note that, unlike KamLAND or NO$\\nu$A, the TASD we are investigating for the low energy Neutrino Factory is magnetized and has a segmentation that is approximately 10 times that of \\mbox{NO$\\nu$A}. Magnetization of such a large volume ($>30,000$~m$^3$) is the main technical challenge in designing a TASD for a Neutrino Factory, although R\\&D to reduce the detector cost (driven in part by the large channel count, $7.5 \\times 10^6$) is also needed. \n\nThe Neutrino Factory TASD we are considering consists of long plastic scintillator bars with a triangular cross-section arranged in planes which\n make x and y measurements (we plan to also consider an x-u-v readout scheme). Optimization of the cell cross section still needs further study since a true triangular cross section results in tracking anomalies at the corners of the triangle. The scintillator bars have a length of $15$~m and\nthe triangular cross-section has a base of $3$~cm and a height of $1.5$~cm. 
We have considered a design using liquid as in \\mbox{NO$\\nu$A}, but, compared to \\mbox{NO$\\nu$A}, the cell size is small (\\mbox{NO$\\nu$A} uses a $4\\times 6$~cm$^2$ cell) and the non-active component due to the PVC extrusions that hold the liquid becomes quite large (in \\mbox{NO$\\nu$A}, the scintillator is approximately $70\\%$ of the detector mass). Our design is\nan extrapolation of the MINER$\\nu$A experiment~\\cite{minerva_www} which in turn was an extrapolation of the D0\npreshower detectors~\\cite{D0}. We are considering a detector mass of approximately $35$~Kton (dimensions $15 \\times 15 \\times 150$~m).\nWe believe that an air-core solenoid can produce the field required ($0.5$~Tesla) to do the physics.\n\nAs was mentioned above, magnetizing the large detector volume presents the main technical challenge for a Neutrino Factory TASD. Conventional\nroom temperature magnets are ruled out due to their prohibitive power consumption, and conventional superconducting magnets are believed to be too expensive, due to the cost of the enormous cryostats needed in a conventional superconducting magnet design. In order to eliminate \nthe cryostat, we have investigated a concept based on the superconducting transmission line (STL) that was \ndeveloped for the Very Large Hadron Collider superferric magnets~\\cite{VLHC}. The solenoid windings now consist of this superconducting cable which is confined \nin its own cryostat (Fig.~\\ref{fig:STL}). Each solenoid ($10$ required for the full detector) consists of $150$ turns and requires $7500$ m of cable. There is \nno large vacuum vessel and thus no large vacuum loads which make the cryostats for large conventional superconducting magnets very expensive. \n\nThe Neutrino Factory TASD response has been simulated with GEANT4 version 8.1 (Fig.~\\ref{fig:tasd}). 
The GEANT4 model of the detector included each of the individual scintillator bars, but did not include edge effects on light collection, or the effects of a central wavelength shifting fiber. A uniform 0.5 Tesla magnetic field was simulated.\n \nSamples of isolated muons with momenta between $100$~MeV$/c$ and $15$~GeV$/c$ were simulated to allow the determination of the momentum resolution and charge identification capabilities. The NUANCE~\\cite{Nuance} event generator was also used to simulate 1 million $\\nu_e$ and 1 million $\\nu_\\mu$ interactions. Events were generated in $50$ mono-energetic neutrino energy bins between $100$~MeV and $5$~GeV. For the results that follow, only one thousand of these events were processed through the GEANT4 simulation and reconstruction.\n \nThe detector response was simulated assuming a light yield consistent with MINER$\\nu$A measurements and current photo detector performance~\\cite{Nova}. In addition, a 2 photo-electron energy resolution was added through Gaussian smearing to ensure that the energy resolution used in the following physics analysis would be a worst-case estimate. Since a complete pattern recognition algorithm was beyond the scope of our study, for our analysis the Monte Carlo information was used to aid in pattern recognition. All digitised hits from a given simulated particle where the reconstructed signal was above 0.5 photo-electrons were collected. When using the isolated particles, hits in neighboring x and y planes were used to determine the 3 dimensional position of the particle. The position resolution was found to be approximately $4.5$~mm RMS with a central Gaussian of width $2.5$~mm~\\footnote{At this stage, the simulation does not take into account light collection inefficiencies in the corners of the base of the triangle.}. These space points were then passed to the RecPack Kalman track fitting package~\\cite{recpack}. 
\n \nFor each collection of points, the track fit was performed with an assumed positive and negative charge. The momentum resolution and charge misidentification rates were determined by studying the fitted track in each case which had the better $\\chi^2$ per degree of freedom. Figure~\\ref{fig:tasd_mom} shows the momentum resolution as a function of muon momentum. The tracker achieves a resolution of better than $10\\%$ over the momentum range studied. Figure~\\ref{fig:Track}(a) shows the efficiency for reconstructing positive muons as a function of the initial muon momentum. The detector becomes fully efficient above $400$~MeV.\n \nThe charge mis-identification rate was determined by counting the rate at which the track fit with the incorrect charge had a better $\\chi^2$ per degree of freedom than that with the correct charge. Figure~\\ref{fig:Track}(b) shows the charge mis-identification rate as a function of the initial muon momentum.\n \nThe neutrino interactions were also reconstructed using the aid of the Monte Carlo information for pattern recognition. In an attempt to produce some of the effects of a real pattern recognition algorithm on the detector performance, only every fourth hit was collected for track fitting. Tracks were only fit if $10$ such hits were found from a given particle. The Monte Carlo positions were smeared (Gaussian smearing using the $4.5$~mm RMS determined previously) and passed to the Kalman track fit. The reconstruction returned:\n\\begin{itemize}\n\\item The total momentum vector of all fitted tracks,\n\\item The momentum vector of the muon (muon ID from MC truth),\n\\item The reconstructed and truth energy sum of all the hits that were not in a particle that was fitted, and \n\\item The reconstructed energy sum of all hits in the event.\n\\end{itemize}\n\nThe $\\nu_{\\mu}$ CC event reconstruction efficiency as a function of neutrino energy is shown in Fig.~\\ref{fig:NuMuCC}(a). 
The fraction of\n$\\nu_{\\mu}$ CC events with a reconstructed muon is shown in Fig.~\\ref{fig:NuMuCC}(b). In this figure the bands represent the limits of the \nstatistical errors for this analysis.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=6.5in]{STL.eps}\n\\end{center}\n\\caption[]{\\textit{Diagram of Superconducting Transmission Line design.}}\n\\label{fig:STL}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=4in]{TASD_sch.eps}\n\\end{center}\n\\caption[]{\\textit{Schematic of Totally Active Scintillator Detector.}}\n\\label{fig:tasd}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=4in]{pres.eps}\n\\end{center}\n\\caption[]{\\textit{Momentum resolution as a function of the muon momentum.}}\n\\label{fig:tasd_mom}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{ll}\n\\includegraphics[width=3in]{fig5a.eps}&\\hskip 0.cm\n\\includegraphics[width=3in]{fig5b.eps}\\\\\n\\hskip 4.truecm\n{\\small (a)} &\n\\hskip 4.truecm\n{\\small (b)} \\\\ \n\\end{tabular}\n\\end{center}\n\\caption{\\textit{(a) Efficiency for reconstructing positive muons. (b)\nMuon charge mis-identification rate as a function of the initial muon momentum.}}\n\\label{fig:Track}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{ll}\n\\includegraphics[width=3in]{eff_vs_e.eps}&\\hskip 0.cm\n\\includegraphics[width=3in]{numu_frac.eps}\\\\\n\\hskip 4.truecm\n{\\small (a)} &\n\\hskip 4.truecm\n{\\small (b)} \\\\ \n\\end{tabular}\n\\end{center}\n\\caption{\\textit{(a) Reconstruction efficiency of NuMu CC events as a function of neutrino interaction energy. 
(b) Fraction of NuMu CC events with a reconstructed muon.}}\n\\label{fig:NuMuCC}\n\\end{figure}\n\nBased on these initial Neutrino Factory TASD studies, in our phenomenological analysis we assume the detector has an effective threshold for measuring muon neutrino CC events at $E_\\nu = 500$~MeV, above which it has an energy independent efficiency of $73\\%$. The $73\\%$ efficiency is primarily driven by the neutrino interaction kinematics, not by the detector tracking efficiency. No charge-ID criterion is applied here. The charge misidentification rate information is used as input into the effect of backgrounds on the analysis.\n\nWe note that fully understanding the backgrounds in the TASD requires a simulation that includes neutrino interactions and a full event reconstruction. Although this is beyond the scope of the present study, a consideration of backgrounds in the well studied Magnetized Fe-Scintillator detector proposed for the high-energy Neutrino Factory~\\cite{nf5} and the International Scoping Study for a Neutrino Factory~\\cite{ISS-Detector Report} motivates the $10^{-3}$ background (contamination) assumption used in this paper for the TASD. Before kinematic cuts, the main backgrounds for the Fe-Scintillator detector are muon charge mis-ID, charm decay, pion and kaon decay, and are all of comparable order: $1-5\\times 10^{-4}$. For the TASD at a low energy Neutrino Factory the muon charge mis-ID rate (Fig.~\\ref{fig:Track}(b)) and the charm decay background are suppressed (at the level $4-8\\times 10^{-5}$) due to the low energy beam. Pion and kaon decay in flight become the main background concerns at the $1-5\\times 10^{-4}$ level. A figure of merit for comparing the TASD to a conventional Magnetized Fe-Scintillator detector is the ratio of their respective particle decay lengths to interaction lengths. For TASD the ratio is about $1$; for the Magnetized Fe-Scintillator detector it is approximately $8$. 
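This figure of merit translates directly into a naive relative background estimate (an illustrative calculation only; the ratios 1 and 8 are the values quoted above, and the proportionality is a simplifying assumption):

```python
# Naive decay-in-flight background comparison from the quoted figure of merit:
# ratio = decay length / interaction length, and a pion is taken to decay before
# interacting with probability roughly proportional to 1/ratio.
ratio_tasd = 1.0   # TASD: decay length comparable to interaction length
ratio_fe = 8.0     # Magnetized Fe-Scintillator detector

relative_decay_background = ratio_fe / ratio_tasd   # TASD relative to Fe, before cuts
assert relative_decay_background == 8.0             # order 10, as stated in the text
```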
So naively we can conclude that the decay background in TASD will be 10 times worse than in the conventional detector ignoring any kinematic or topological cuts. However the TASD will have vastly superior kink detection to identify decay-in-flight. For example we will typically have $40$ hits on the pion track before decay. In addition TASD will have continuous $dE\/dx$ measurements along the track and better overall energy resolution. We believe that these properties will allow us to control backgrounds to the $10^{-3}$ level or better.\n\n\n\n\\section{Physics reach of the low energy Neutrino Factory}\n\nWe have previously mentioned that, by exploiting the energy dependence of the signal, it is possible to extract from the measurements the correct values of $\\theta_{13}$ and $\\delta$, and eliminate the additional solutions arising from discrete ambiguities. In the present study, we include the detector simulation results described in the previous section, which suggests a lower energy threshold (500~MeV) than previously assumed~\\cite{GeerMenaPascoli}, and an energy resolution $dE\/E=30\\%$~\\footnote{We have assumed a very conservative $dE\/E=30\\%$ because at this time the\nsimulation work has not yet produced a number for the TASD. Based on \\mbox{NO$\\nu$A} \nresults, we expect the TASD $dE\/E$ to be better than $6\\%$ at 2~GeV.}. Above threshold, the detector efficiency for muon neutrino CC events is taken to be 73\\%.\n\nIn the following we consider the representative baseline $L=1480$~km, which corresponds to the distance from Fermilab to the Henderson mine. However, we believe that the TASD will not require operation deep underground in order to remove backgrounds. Results are similar for other baselines in the 1200--1500~km range. 
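A quick way to see why baselines of this length suit the low energy beam: with the oscillation parameters adopted below, the standard two-flavour relation places the first atmospheric oscillation maximum near 3 GeV (a back-of-envelope illustration, not an output of our simulation):

```python
import math

# First atmospheric oscillation maximum from the standard two-flavour relation:
# 1.27 * |dm31^2|[eV^2] * L[km] / E[GeV] = pi/2.
dm31_sq = 2.5e-3   # eV^2, central value used in this analysis
L = 1480.0         # km, the Fermilab to Henderson mine baseline

E_peak = 1.27 * dm31_sq * L / (math.pi / 2)   # GeV
assert 2.5 < E_peak < 3.5                     # ~3 GeV, below the 4.12 GeV stored-muon energy
```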
The results are presented for the high-statistics scenario described in~\\cite{GeerMenaPascoli} as well as for a more aggressive scenario which improves the statistics of the old high-statistics scenario by a factor of three, to quantify the benefits of increased detector sizes and\/or stored-muon luminosities. The high-statistics scenario corresponds to $1 \\times 10^{23}$~Kton-decays (10 years of data taking, with $5 \\times 10^{20}$ useful muon decays of each sign per year, and a detector fiducial mass times efficiency of 20~Kt). The more aggressive scenario corresponds to $3 \\times 10^{23}$~Kton-decays (which could correspond, for instance, to 10 years of data taking, with $1 \\times 10^{21}$ useful muon decays of each sign per year, and a detector fiducial mass times efficiency of 30~Kt).\n\nTable~\\ref{tab:tab1} shows the number of CC muon events expected in the two scenarios explored here for, respectively, the positive and negative muons stored in the Neutrino Factory. Notice that, in the absence of oscillations, there would be a few times $10^4$~$\\nu_e$ CC interactions, which would allow a search for $\\nu_e \\rightarrow \\nu_\\mu$ oscillations with probabilities below $10^{-4}$.\n\\begin{table}[thb]\n\\centering\n\\begin{tabular}{||c|c||c|c||c|c||}\n\\hline \\hline\n\\multicolumn{2}{||c||}{$E_{\\mu^{\\mp}}=$} & \n\\multicolumn{2}{c||} {$\\mu^{+}$} & \n\\multicolumn{2}{c||}{$\\mu^{-}$}\\\\\n\\cline{3-6}\n\\multicolumn{2}{||c||}{$4.12$ GeV}& $N_{\\bar{\\nu}_\\mu}\/10^3$ & $N_{\\nu_e}\/10^3$ & \n$N_{\\nu_\\mu}\/10^3$ &$N_{\\bar{\\nu}_e}\/10^3$ \\\\ \n\\hline\\hline\nstatistics& $1$& $13$ & $22$ & $25$ & $11$\\\\\n\\cline{2-6}\n$(10^{23})$ Kt-decays\n & $3$& $39$ & $66$ & $77$ & $34$\\\\\n\\hline\\hline\n\\end{tabular}\n\\caption{\\it{ \nNeutrino and antineutrino charged currents interaction rates \nfor L = 1480~km, for the $10^{23}$~Kton-decays and the $3 \\times 10^{23}$~Kton-decays-statistics scenarios.}} \n\\label{tab:tab1}\n\\end{table}\n\nAll 
numerical results reported in the next subsections have been obtained with the exact formulae for the oscillation probabilities. Unless specified otherwise, we take the following central values for the remaining oscillation parameters: $\\sin^{2}\\theta_{12}=0.29$, $\\Delta m^2_{21} = 8 \\times 10^{-5}$ eV$^2$, $|\\Delta m^2_{31}| = 2.5 \\times 10^{-3}$ eV$^2$ and $\\theta_{23}=40^\\circ$. We show in Tables~\\ref{tab:tab2} and \\ref{tab:tab3}, for two representative values of $\\theta_{13}=1^\\circ$ and $8^\\circ$, and the CP phase $\\delta=0^\\circ, 90^\\circ, 180^\\circ$ and $270^\\circ$, the number of \\emph{wrong-sign} muon events in the two scenarios explored here, for, respectively, the positive and negative muons stored in the Neutrino Factory, for normal (inverted) hierarchy.\n\\begin{table}[bht]\n\\centering\n\\begin{tabular}{||c|c||c||c||}\n\\hline\\hline\nstatistics (Kt-decays) & $\\delta(^{o})$ & $\\mu^{+}$ stored (wrong-sign: $\\mu^-$) &$\\mu^{-}$ stored (wrong-sign: $\\mu^{+}$) \\\\\n\\hline\\hline\n & 0 & 880 (340) & 180 (520)\\\\\n\\cline{2-4}\n$1 \\times 10^{23}$ \n & 90 & 1230 (505) & 90 (330)\\\\\n\\cline{2-4}\n & 180 & 1000 (340) & 170 (440)\\\\\n\\cline{2-4}\n & 270 & 645 (175) & 260 (625) \\\\\n\\hline\\hline \n & 0 &2640 (1020)& 540 (1550)\\\\ \n\\cline{2-4}\n$3 \\times 10^{23}$ \n & 90 & 3700 (1520) &270 (990)\\\\\n\\cline{2-4}\n & 180 &2990 (1020) & 510 (1310)\\\\\n\\cline{2-4}\n & 270 &1930 (520) &780 (1870) \\\\\n\\hline\\hline \n\\end{tabular}\n\\caption{\\it \n{Wrong sign muon event rates for normal (inverted) hierarchy, assuming $\\nu_e\\to \\nu_\\mu$ ($\\bar{\\nu}_e \\to \\bar{\\nu}_\\mu$)\noscillations in a 20~Kt fiducial volume detector, for a L = 1480~km baseline. \nWe assume here $\\theta_{13}=8^{o}$, i.e. $\\sin^2 2\\theta_{13}\\simeq 0.076$. 
\nWe present the results for several possible values of the CP-violating phase $\\delta$ for both scenarios.}}\n\\label{tab:tab2}\n\\end{table} \n\\begin{table}[bht]\n\\centering\n\\begin{tabular}{||c|c||c||c||}\n\\hline\\hline\nstatistics (Kt-decays)& $\\delta(^{o})$ & $\\mu^{+}$ stored (wrong-sign: $\\mu^-$) &$\\mu^{-}$ stored (wrong-sign: $\\mu^{+}$) \\\\\n\\hline\\hline\n & 0 & 54 (50) & 27 (37) \\\\\n\\cline{2-4}\n$1 \\times 10^{23}$ \n & 90 & 100 (70) & 13 (10)\\\\\n\\cline{2-4}\n & 180 & 67 (50) & 70 (25)\\\\\n\\cline{2-4}\n & 270 & 22 (30) & 37 (50) \\\\\n\\hline\\hline \n & 0 &160 (150)& 80 (110)\\\\ \n\\cline{2-4}\n$3 \\times 10^{23}$ \n & 90 & 300 (210)&40 (30)\\\\\n\\cline{2-4}\n & 180 &200 (150)& 230 (250)\\\\\n\\cline{2-4}\n & 270 &65 (90) &110 (150)\\\\\n\\hline\\hline \n\\end{tabular}\n\\caption{\\it \n{As Table~\\protect\\ref{tab:tab2} but for $\\theta_{13}=1^{o}$, i.e. $\\sin^2 2\\theta_{13}\\simeq 0.001$. \n}}\n\\label{tab:tab3}\n\\end{table} \n\n\nFor our analysis, we use the following $\\chi^{2}$ definition\n\\begin{equation}\n\\chi^2 = \\sum_{i,j} \\sum_{p,p'} \\; (n_{i,p} - N_{i,p}) C_{i,p;j,p'}^{-1} (n_{j,p'} - N_{j,p'})\\,,\n\\end{equation}\n where $N_{i,p}$ is the predicted number of muons for\n a certain oscillation hypothesis, $n_{i,p}$ are the simulated ``data'' from a Gaussian or Poisson smearing and $C$ is the $2 N_{bin} \\times 2 N_{bin}$ covariance matrix given by:\n\\begin{equation}\nC_{i,p;j,p'}\\equiv \\delta_{ij}\\delta_{pp'}(\\delta n_{i,p})^2 \n\\end{equation}\nwhere $(\\delta n_{i,p}) = \\sqrt{n_{i,p} + (f_{sys}\\cdot n_{i,p})^2}$\ncontains both the statistical and a $2\\%$ overall systematic error ($f_{sys}=0.02$).\n\n\\subsection{Exploring the disappearance channel}\n\nConsider first the disappearance channels, already studied in the context of Neutrino Factories~\\cite{nf4,dis} and carefully explored in Ref.~\\cite{Stef}.\nIn Ref.~\\cite{GeerMenaPascoli} it was shown that, with its high statistics and good 
energy resolution, a low energy neutrino factory can be used to precisely determine the atmospheric neutrino oscillation parameters, $\\theta_{23}$ and $\\Delta m^2_{31}$. In particular, for an exposure of $3 \\times 10^{22}$~Kton-decays for each muon sign, and allowing for a $2\\%$ systematic uncertainty, it was shown that: (i) \nMaximal mixing in the 23-sector could be excluded at $99\\%$ CL if $\\sin^2 \\theta_{23}<0.48$ ($\\theta_{23}<43.8^\\circ$), independently of the value of $\\theta_{13}$, and (ii) For a large value of $\\theta_{13}$, i.e. $\\theta_{13}>8^\\circ$, the $\\theta_{23}$-octant degeneracy would be resolved at the $99\\%$ CL for $\\sin^2 \\theta_{23}<0.44$ ($\\theta_{23} < 41.5^\\circ$). \nIn our present study, the good energy resolution of the TASD provides sensitivity to the oscillatory pattern of the disappearance signal that is comparable to, and somewhat better than, what we previously assumed. \n\nIn Fig.~\\ref{fig:dis} we show the $68\\%$, $90\\%$ and $95\\%$~CL contours (for 2 d.o.f) resulting from the fits to the measured energy-dependent $\\nu_\\mu$ and $\\overline{\\nu}_{\\mu}$ CC rates at $L = 1480$ km. Results correspond to $1 \\times 10^{23}$~Kton-decays, and are shown for $\\Delta m^2_{31}=2.5 \\times 10^{-3}$ eV$^2$ and two simulated values of $\\sin^2 \\theta_{23}$ ($0.4$ and $0.44$). For $\\theta_{13}=0$, $P_{\\nu_\\mu \\to \\nu_\\mu} (\\theta_{23})= P_{\\nu_\\mu \\to \\nu_\\mu} (\\pi\/2 -\\theta_{23})$, i.e. the disappearance channel is symmetric under $\\theta_{23}\\to \\pi\/2-\\theta_{23}$. However, when a rather large non-vanishing value of $\\theta_{13}$ is switched on, a $\\theta_{23}$ asymmetry appears in $P_{\\nu_\\mu \\to \\nu_\\mu}$. Notice that the asymmetry grows with increasing $\\theta_{13}$ and the four-fold degeneracy in the atmospheric neutrino parameters is resolved more easily. 
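For concreteness, the $\\chi^2$ defined earlier, with a diagonal covariance built from $(\\delta n_{i,p})^2 = n_{i,p} + (f_{sys}\\cdot n_{i,p})^2$ and $f_{sys}=0.02$, can be sketched as follows; the binned event counts used here are placeholders, not the rates of the actual fit.

```python
F_SYS = 0.02  # 2% overall systematic error, as in the text

def chi2(data, prediction, f_sys=F_SYS):
    """Diagonal chi^2 with statistical plus overall systematic errors.

    data, prediction: sequences of binned event counts (both muon signs
    concatenated, i.e. 2*N_bin entries in the notation of the text).
    """
    total = 0.0
    for n, pred in zip(data, prediction):
        var = n + (f_sys * n) ** 2   # (delta n)^2 = n + (f_sys * n)^2
        total += (n - pred) ** 2 / var
    return total

# Placeholder bins: if data equal prediction the chi^2 vanishes.
bins = [880.0, 340.0, 180.0, 520.0]
print(chi2(bins, bins))  # -> 0.0
```

Because the covariance is diagonal, the double sum of the text collapses to a single sum over bins and muon signs, which is what the loop above evaluates.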
We conclude that, using only the $\\nu_\\mu$-disappearance data, the uncertainty on $\\Delta m^2_{31}$ could be reduced down to the $1\\%-2\\%$ level. In principle, the $\\nu_e$ disappearance channel, which is sensitive to $\\theta_{13}$ and matter effects, could also be used. However, charge discrimination for electrons has not yet been adequately studied to determine the relevant TASD performance parameters.\n\n\nThe extremely good determination of the atmospheric mass squared difference opens the possibility \nto determine the mass hierarchy by exploiting the effects of the solar mass squared difference on the $\\nu_\\mu$ disappearance probability, even for negligible values of $\\theta_{13}$.\nThis strategy was studied in detail in Refs.~\\cite{deGouvea:2005hk,deGouvea:2005mi,Minakata:2006gq}. The vacuum $\\nu_\\mu \\to \\nu_\\mu$ oscillation probability is given by\n\\begin{equation}\nP(\\nu_\\mu \\to \\nu_\\mu) = 1 - 4 | U_{\\mu 1}|^2 | U_{\\mu 2}|^2 \\sin^2 \\frac{\\Delta m^2_{12} L}{4E} - 4 | U_{\\mu 1}|^2 | U_{\\mu 3}|^2 \\sin^2 \\frac{\\Delta m^2_{13} L}{4E} - 4 | U_{\\mu 2}|^2 | U_{\\mu 3}|^2 \\sin^2 \\frac{\\Delta m^2_{23} L}{4E} ~,\n\\end{equation}\nwhere the usual notation is used for the mass squared \ndifferences $\\Delta m^2_{ij}$ and for the elements of the leptonic mixing matrix $U$.\nIn the following we take $\\theta_{13}=0$.\nThe oscillation probabilities depend on whether\n$|\\Delta m^2_{13}| > |\\Delta m^2_{23}|$ (normal hierarchy) or $|\\Delta m^2_{13}| < |\\Delta m^2_{23}|$ (inverted hierarchy).\nPrecisely measured disappearance probabilities can distinguish between normal and inverted hierarchies if there is sensitivity to effects driven by both $|\\Delta m^2_{13}|$ and $\\Delta m^2_{12}$.\nThis requires the atmospheric mass squared difference to be measured at different $L\/E$ with a precision of better than $|\\Delta m^2_{21}| \/ |\\Delta m^2_{31}| \\sim 0.026$. 
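The vacuum survival probability quoted above can be evaluated directly. The sketch below sets $\\theta_{13}=0$, as assumed in the text, and uses the standard conversion $\\Delta m^2 L\/4E \\to 1.267\\,\\Delta m^2[\\mathrm{eV^2}]\\,L[\\mathrm{km}]\/E[\\mathrm{GeV}]$; the parameter values are those given earlier in this section.

```python
import math

def p_mumu_vacuum(E, L, th12, th23, dm21_sq, dm31_sq):
    """Vacuum nu_mu -> nu_mu survival probability for theta_13 = 0.

    E in GeV, L in km, mass-squared differences in eV^2.
    """
    s12, c12 = math.sin(th12), math.cos(th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    # |U_mu i|^2 for theta_13 = 0
    u1 = (s12 * c23) ** 2
    u2 = (c12 * c23) ** 2
    u3 = s23 ** 2
    dm32_sq = dm31_sq - dm21_sq  # Delta m^2_32 = Delta m^2_31 - Delta m^2_21

    def osc(dm_sq):
        # sin^2(Delta m^2 L / 4E) in GeV, km, eV^2 units
        return math.sin(1.267 * dm_sq * L / E) ** 2

    return (1.0
            - 4 * u1 * u2 * osc(dm21_sq)
            - 4 * u1 * u3 * osc(dm31_sq)
            - 4 * u2 * u3 * osc(dm32_sq))

# Parameters from the text: sin^2(th12)=0.29, th23=40 deg, L=1480 km.
th12 = math.asin(math.sqrt(0.29))
th23 = math.radians(40.0)
print(p_mumu_vacuum(2.0, 1480.0, th12, th23, 8e-5, 2.5e-3))
```

Since only the absolute values of the $\\Delta m^2_{ij}$ enter through $\\sin^2$, flipping the hierarchy changes the probability only through the interplay of the atmospheric and solar phases, which is exactly the effect the text exploits.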
In fact, it was pointed out in Ref.~\\cite{deGouvea:2005hk} that, for a fixed $L\/E$, the disappearance\nprobabilities for the normal and inverted hierarchies are \nthe same if $|\\Delta m^2_{13}|$ is substituted with\n$-|\\Delta m^2_{13}| + \\Delta m^2_{12} + \\frac{4 E}{L} \\arctan \\Big(\n\\cos {2 \\theta_{12} } \\tan \\frac{ \\Delta m^2_{12} L}{4E} \\Big)$.\nIn order to break this degeneracy it is necessary to measure the atmospheric mass squared difference at different energies and at distances for which the oscillations driven by the solar term are non-negligible.\nIn our setup, if we assume a $0\\%$ ($2\\%$) overall systematic error, we find that the hierarchy can be measured at the $1 \\sigma$ level ($1 \\sigma$ level) for the $10^{23}$~Kton-decays case, while for the $3 \\times 10^{23}$~Kton-decays scenario it can be determined at the $4 \\sigma$ level ($2 \\sigma$ level). Note that the systematic errors play a crucial role. It is in principle possible to reduce the impact of the systematic errors using the ratios of the number of events at the near and far detectors:\n\\begin{equation}\n{\\cal R} (E) = \\frac{\\frac{N_{\\mathrm N} (\\nu_\\mu) }{N_{\\mathrm N} (\\bar{\\nu}_e) }}{\\frac{N_{\\mathrm F} (\\nu_\\mu) }{N_{\\mathrm F} (\\bar{\\nu}_e) }}~,\n\\end{equation}\nwhere $N_{\\mathrm N (F)} (\\nu_\\mu [\\bar{\\nu}_e]) $ denotes the number of $\\nu_\\mu \\ [\\bar{\\nu}_e]$ events in the near (far) detector for a fixed energy $E$. 
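The cancellation built into the double ratio ${\\cal R}(E)$ can be checked numerically: a normalization error common to the near and far detectors for a given channel drops out exactly. The event counts below are placeholders chosen only to illustrate the cancellation.

```python
def double_ratio(n_near_mu, n_near_ebar, n_far_mu, n_far_ebar):
    """R(E) = (N_N(nu_mu)/N_N(nubar_e)) / (N_F(nu_mu)/N_F(nubar_e))."""
    return (n_near_mu / n_near_ebar) / (n_far_mu / n_far_ebar)

# Placeholder counts in one energy bin: the near detector sees far more
# events than the far detector, but only the ratios matter.
counts = (1.0e6, 9.0e5, 2.5e4, 1.1e4)
r = double_ratio(*counts)

# A 2% nu_mu normalization error common to near and far detectors
# multiplies both nu_mu counts and cancels in the double ratio.
k = 1.02
shifted = (counts[0] * k, counts[1], counts[2] * k, counts[3])
r_shifted = double_ratio(*shifted)
print(r, r_shifted)
```

The same cancellation works for a common $\\bar{\\nu}_e$ normalization shift; only energy-dependent differences between near and far detectors survive in ${\\cal R}(E)$, which is why the text stresses that very good energy resolution is needed.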
\nVery good energy resolution is required for such cancellations to be effective.\nIn this case, a low energy Neutrino Factory \ncan give important information on the type of hierarchy \neven if $\\theta_{13}=0$.\n\n\n\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{ll}\n\\includegraphics[width=3in]{dis_new_8.ps}&\\hskip 0.cm\n\\includegraphics[width=3in]{dis_new_3.ps}\\\\\n\\end{tabular}\n\\end{center}\n\\caption{\\textit{$68\\%$, $90\\%$ and $95\\%$ (2 d.o.f) CL contours resulting from the fits at $L = 1480$ km assuming two central values for $\\sin^2 \\theta_{23}=0.4$ and $0.44$ and $\\Delta m^2_{31}=2.5 \\times 10^{-3}$ eV$^2$. In the left (right) panel, $\\theta_{13}=8^\\circ$ ($3^\\circ$). The statistics considered for both simulations corresponds to $1\\times 10^{23}$~Kton-decays. Only disappearance data have been used to produce these plots.} } \\label{fig:dis}\n\\end{figure}\n\n\n\n\\subsection{Simultaneous fits to $\\theta_{13}$ and $\\delta$}\n\nNext, we study the extraction of the unknown parameters $\\theta_{13}$ and $\\delta$, using the \\emph{golden channel} ($\\nu_e (\\bar{\\nu}_e) \\to \\nu_\\mu (\\bar{\\nu}_\\mu)$).\nWe start by considering a neutrino factory scenario with $1 \\times 10^{23}$~Kton-decays. We find that, for values of $\\theta_{13}>2^\\circ$, the sign degeneracy is resolved at the $95\\%$~CL. Note that for $\\theta_{13} >4^\\circ$ the octant degeneracy has already been resolved using the disappearance data. \n\nFigure~\\ref{fig:fig2} shows, for a fit to the simulated data at a baseline $L=1480$ km, the $68\\%$, $90\\%$ and $95\\%$~CL contours in the ($\\theta_{13}, \\delta$)-plane. Results are shown for background levels set to zero (left panel) and $10^{-3}$ (right panel) for the $10^{23}$~Kton-decays scenario. The four sets of contours correspond to four simulated test points in the ($\\theta_{13}, \\delta$)-plane, each of which is depicted by a star. 
The simulations are for the normal mass hierarchy and $\\theta_{23}$ in the first octant ($\\sin^2 \\theta_{23} = 0.41$, which corresponds to $\\theta_{23}=40^\\circ$). Our analysis includes the study of the discrete degeneracies. That is, we have fitted the data assuming both the right and wrong hierarchies, and the right and wrong choices for the $\\theta_{23}$ octant. If present, the additional solutions associated with the $\\theta_{23}$ octant ambiguity are shown as dotted contours.\n\nNotice from Fig.~\\ref{fig:fig2} that the sign ambiguity is resolved at the $95\\%$ CL in the $10^{23}$~Kton-decays scenario. Additional solutions associated with the wrong choice of the $\\theta_{23}$ octant are still present in the $10^{23}$ Kton-decays scenario, but notice that the presence of these additional solutions does not interfere with a measurement of the CP violating phase $\\delta$ and $\\theta_{13}$, since the locations of the fake solutions in the ($\\theta_{13}$, $\\delta$) plane are almost the same as the correct locations.\n\nThe effect of the background can be easily understood in terms of the statistics presented in Tables~\\ref{tab:tab2} and \\ref{tab:tab3}. For small values of $\\theta_{13}$, the addition of the background has a larger impact for $\\delta \\sim -90^\\circ$, since for that value of the CP phase the statistics are dominated by the antineutrino channel, which suffers from a larger background (from $\\nu_\\mu$'s) than the neutrino channel (from $\\bar{\\nu}_\\mu$'s). For a background level smaller than $\\sim 10^{-4}$, the results are indistinguishable from the zero background case. \n\nWe illustrate the corresponding results for the improved scenario of $3\\times 10^{23}$~Kton-decays in Fig.~\\ref{fig:fig1}. Note that the higher statistics allow us to consider a smaller value of $\\mbox{$\\theta_{13}$}=1^\\circ$. 
The additional solutions arising from the wrong choice for the neutrino mass hierarchy or $\\theta_{23}$ octant are not present at the $95\\%$ CL. Furthermore, the addition of a background level of $10^{-3}$ does not significantly affect the resolution of the degeneracies, and has an impact only on the CP violation measurement.\n\nThe performance of the \\emph{low energy neutrino factory} in the two high statistics scenarios explored here is unique. The sign($\\Delta m_{31}^2$) can be determined at the $95\\%$~CL in the $10^{23}$~Kton-decays ($3 \\times 10^{23}$~Kton-decays) scenario if $\\theta_{13}>2^\\circ$ ($>1^\\circ$) for all values of the CP phase $\\delta$. The $\\theta_{23}$-octant ambiguity can be removed at the $95\\%$ CL down to roughly $\\theta_{13}>0.5-1.0^\\circ$ for the representative choice of $\\sin^2 \\theta_{23}=0.41$, independently of the value of $\\delta$, except for some intermediate values of $\\theta_{13}\\sim 2^\\circ$, for which the $\\theta_{23}$ degeneracy is still present for some values of the CP violating phase $\\delta$. Resolving the $\\theta_{23}$-octant degeneracy is therefore easier for small values of $\\theta_{13}<2^\\circ$. This is due to the fact that, as explored in Ref.~\\cite{GeerMenaPascoli}, the $\\theta_{23}$-octant degeneracy is resolved using the information from the low energy bins, which are sensitive to the solar term. For the setup described in this paper, the solar term starts to be important if $\\theta_{13}<2^\\circ$. 
However, notice that the presence of the $\\theta_{23}$ octant ambiguity at $\\theta_{13}\\sim 2^\\circ$ will not interfere with the extraction of $\\theta_{13}$ and $\\delta$, since the locations of the degenerate (fake) solutions almost coincide with the positions of the true solutions.\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{ll}\n\\includegraphics[width=3in]{2_hl_nback_new.ps}&\\hskip 0.cm\n\\includegraphics[width=3in]{2_hl_back_new.ps}\\\\\n\\hskip 2.truecm\n{\\small} &\n\\hskip 2.truecm\n{\\small} \\\\ \n\\end{tabular}\n\\end{center}\n\\caption{\\textit{$68\\%$, $90\\%$ and $95\\%$ (2 d.o.f) CL contours resulting from the fits at $L = 1480$ km assuming four central values for $\\delta=0^{\\circ}$, $90^{\\circ}$, $-90^{\\circ}$ and $180^{\\circ}$ and $\\mbox{$\\theta_{13}$}=2^\\circ$ without backgrounds (left panel) and with a background level of $10^{-3}$ (right panel). The additional $\\theta_{23}$ octant solutions are depicted in dotted blue. The statistics considered for both simulations corresponds to $10^{23}$~Kton-decays.}}\n\\label{fig:fig2}\n\\end{figure}\n\\begin{figure}[h]\n\n\\begin{center}\n\n\\begin{tabular}{ll}\n\n\\includegraphics[width=3in]{1_hhl_nback_new.ps}&\\hskip 0.cm\n\\includegraphics[width=3in]{1_hhl_back_new.ps}\\\\\n\\hskip 2.truecm\n{\\small} &\n\\hskip 2.truecm\n{\\small} \\\\ \n\\end{tabular}\n\\end{center}\n\\caption{\\textit{$68\\%$, $90\\%$ and $95\\%$ (2 d.o.f) CL contours resulting from the fits at $L = 1480$ km assuming four central values for $\\delta=0^{\\circ}$, $90^{\\circ}$, $-90^{\\circ}$ and $180^{\\circ}$ and $\\mbox{$\\theta_{13}$}=1^\\circ$ without backgrounds (left panel) and with a background level of $10^{-3}$ (right panel). 
The statistics considered for both simulations corresponds to $3\\times 10^{23}$~Kton-decays.}}\n\\label{fig:fig1}\n\n\\end{figure}\n\n\nIn Figs.~\\ref{fig:hier} and \\ref{fig:cp} we summarize, for the $10^{23}$ and $3\\times 10^{23}$~Kton-decays scenarios, the physics reach for a TASD detector located 1480 km from a low energy neutrino factory. The analysis takes into account the impact of both the intrinsic and discrete degeneracies. Figure~\\ref{fig:hier} shows the region in the ($\\sin^2 2 \\theta_{13}$, ``fraction of $\\delta$'') plane for which the mass hierarchy can be resolved at the $95\\%$ CL (1 d.o.f). Contours are shown for zero background and for a background level of $10^{-3}$ included in the analysis. Note that, with a background level of $\\simeq 10^{-3}$, the hierarchy can be determined in both scenarios if $\\sin^2 2 \\theta_{13}$ is larger than a few times $10^{-3}$ (i.e. $\\theta_{13}> 2-3^\\circ$) for all values of the CP violating phase $\\delta$. For a background level smaller than $\\sim 10^{-4}$, the results are indistinguishable from the zero background case.\n\n\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{hierarchy_new_v1.ps}\n\\end{center}\n\\caption[]{\\textit{$95\\%$ CL (1 d.o.f) hierarchy resolution assuming that the far detector is located at a distance of $1480$ km at the Henderson mine. The solid (dotted) red curves depict the results assuming $1\\times 10^{23}$~Kton-decays ($3\\times 10^{23}$~Kton-decays) without backgrounds. The long-dashed (short-dashed) black curves depict the results assuming $1\\times 10^{23}$~Kton-decays ($3\\times 10^{23}$~Kton-decays) with a background level of $10^{-3}$. }}\n\\label{fig:hier}\n\\end{figure}\n\nFigure~\\ref{fig:cp} shows the region in the ($\\sin^2 2 \\theta_{13}$, $\\delta$) plane for which a given (non-zero) CP-violating value of the phase $\\delta$ can be distinguished at the $95\\%$ CL (1 d.o.f) from the CP conserving case, i.e. $\\delta =0, \\pm 180^\\circ$. 
The results are given for the two statistics scenarios studied here. Note that, even in the presence of a $10^{-3}$ background level, the CP violating phase $\\delta$ could be measured with a $95\\%$ CL precision of better than $20^\\circ$ in the $10^{23}$~Kton-decays ($3\\times 10^{23}$~Kton-decays) luminosity scenario if $\\sin^2 2 \\theta_{13}>0.01$ ($\\sin^2 2 \\theta_{13}>0.002$).\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=3.5in]{cp_new_v1.ps}\n\\end{center}\n\\caption[]{\\textit{$95\\%$ CL (1 d.o.f) CP Violation extraction assuming that the far detector is located at a distance of $1480$~km. The solid (dotted) red curves depict the results assuming $1\\times 10^{23}$~Kton-decays ($3\\times 10^{23}$~Kton-decays) without backgrounds. The long-dashed (short-dashed) black curves depict the results assuming $1\\times 10^{23}$~Kton-decays ($3\\times 10^{23}$~Kton-decays) with a background level of $10^{-3}$.}}\n\\label{fig:cp}\n\\end{figure}\n\n\n\\section{Summary and Conclusions}\n\nWe have studied the physics reach of a \\emph{low energy neutrino factory}, first presented in Ref.~\\cite{GeerMenaPascoli}, in which the stored muons have an energy of $4.12$~GeV. The simulated detector performance is based upon a\nmagnetized Totally Active Scintillator Detector. Our simulations suggest this detector will have a threshold for measuring muon neutrino CC interactions of about 500 MeV and an energy-independent efficiency of about 73\\% above threshold. We have assumed a conservative energy resolution of 30\\% for the detector.\n\nIn our analysis, we consider the representative baseline of $1480$~km, divide the simulated observed neutrino event spectrum into 9 energy bins above the 500~MeV threshold, and exploit both the disappearance ($\\nu_\\mu \\to \\nu_\\mu$) and the \\emph{golden} ($\\nu_e \\to \\nu_\\mu$) channels by measuring CC events tagged by ``right-sign'' and ``wrong-sign'' muons. 
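The binning just described (9 energy bins above the 500~MeV threshold, up to the $4.12$~GeV endpoint set by the stored-muon energy) can be sketched as follows; note that equal-width bins are an assumption here, since the text does not specify the bin edges.

```python
def make_bins(e_min=0.5, e_max=4.12, n_bins=9):
    """Equal-width energy bin edges in GeV.

    The equal-width choice is an assumption; only the 0.5 GeV threshold,
    the 4.12 GeV endpoint and the 9-bin count come from the text.
    """
    width = (e_max - e_min) / n_bins
    return [e_min + i * width for i in range(n_bins + 1)]

def bin_index(e, edges):
    """Index of the bin containing energy e, or None if outside the range."""
    if e < edges[0] or e >= edges[-1]:
        return None
    for i in range(len(edges) - 1):
        if edges[i] <= e < edges[i + 1]:
            return i
    return None

edges = make_bins()
print(len(edges) - 1, edges[0], edges[-1])  # 9 bins spanning 0.5 to 4.12 GeV
```

Events reconstructed below 500~MeV simply fall outside the analysis (`bin_index` returns `None`), which is how the threshold enters the fit.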
The results can be easily generalized to other baselines in the 1200--1500~km range. We have investigated the dependence of the physics sensitivity on statistics by considering a high statistics scenario corresponding to $1 \\times 10^{23}$~Kton-decays for each muon sign, and a more aggressive scenario corresponding to $3 \\times 10^{23}$~Kton-decays for each muon sign. We have also explored the impact of backgrounds to the wrong-sign muon signal by considering background levels of zero and $10^{-3}$.\n\nWe find that, based only on the disappearance channel, maximal atmospheric neutrino mixing can be excluded at $95\\%$ CL if $\\sin^2 \\theta_{23}<0.44$ ($\\theta_{23}<41.5^\\circ$). The atmospheric mass difference could be measured with a precision of $1\\%-2\\%$, opening the possibility of determining the neutrino mass hierarchy even if $\\theta_{13}=0$, provided systematic uncertainties can be controlled. Neglecting systematic uncertainties, the mass hierarchy could be determined at the $1 \\sigma$ level ($4 \\sigma$ level) in the $1 \\times 10^{23}$~Kton-decays ($3 \\times 10^{23}$~Kton-decays) statistics scenario.\n\nThe rich oscillation pattern of the $\\nu_e\\to \\nu_\\mu$ ($\\bar\\nu_e \\to \\bar\\nu_\\mu$) appearance channels at energies between $0.5$ and $4$ GeV for baselines $\\mathcal{O}(1000)$~km facilitates an elimination of the degenerate solutions. 
If the atmospheric mixing angle is not maximal, for the representative choice of $\\sin^2 \\theta_{23}=0.4$, the octant in which $\\theta_{23}$ lies could be extracted at the $95\\%$ CL in both scenarios if $\\theta_{13}> 0.5-1^\\circ$, for all values of the CP violating phase $\\delta$, except for some intermediate values of the mixing angle $\\theta_{13}$; there, however, the locations of the fake solutions almost coincide with those of the true solutions, so their presence does not interfere with the extraction of $\\delta_{CP}$ and $\\theta_{13}$.\n\nIn the $10^{23}$ kton-decays scenario, if the background level is $\\sim 10^{-3}$ ($10^{-4}$), the neutrino mass hierarchy could be determined at the $95\\%$ CL, and the CP violating phase $\\delta$ could be measured with a $95\\%$ CL precision of better than $20^\\circ$, if $\\sin^2 2 \\theta_{13}>0.01$ ($\\sim 0.006$). With a factor of three improvement in statistics, the numbers quoted above are $\\sin^2 2 \\theta_{13}= 0.005$ and $\\sin^2 2 \\theta_{13}=0.002$, for background levels of $10^{-3}$ and $10^{-4}$, respectively. In our analysis we have included a $2\\%$ systematic error on all measured event rates.\n\n\nIn summary, the lower-statistics low energy Neutrino Factory scenario we have described, with a background level of $10^{-3}$, would be able to eliminate ambiguous solutions, determine $\\theta_{13}$ and the mass hierarchy, and search for CP violation, for both large and very small values of $\\theta_{13}$. Higher statistics and lower backgrounds would further improve the sensitivity, and may enable the mass hierarchy to be determined even if $\\theta_{13}=0$.\n\n\n\\vspace{1cm}\n\\section*{Acknowledgments} \nThis work was supported in part by the European Programme ``The Quest for\nUnification'' contract MRTN-CT-2004-503369, and by the Fermi National Accelerator Laboratory, which is operated by the Fermi Research Association, under contract No. DE-AC02-76CH03000 with the U.S. 
Department of Energy. SP acknowledges the support of CARE,\ncontract number RII3-CT-2003-506395.\nOM and SP would like to thank the Theoretical Physics Department at Fermilab for hospitality and support. \n\\section*{Appendix} \nAll detector concepts for the Neutrino Factory (NF) require a magnetic field in order to determine the sign of the muon (or possibly the electron) produced in the neutrino interaction. For the baseline detector, this is done with magnetized iron. Technically this is very straightforward, although for the $100$~Kton baseline detector it does present challenges because of its size. The cost of this magnetic solution is felt to be manageable. Magnetic solutions for the TASD become much more problematic. The solution that we propose is to use the Superconducting Transmission Line developed for the VLHC~\\cite{VLHC}, Fig.~\\ref{fig:STL}, as windings for very large solenoids that form a magnetic cavern (see Fig.~\\ref{fig:mag_cav}) for the detector. The Superconducting Transmission Line (STL) consists of a superconducting cable inside a cryo-pipe cooled by supercritical liquid helium at $4.5-6.0$~K placed inside a co-axial cryostat. It consists of a perforated Invar tube, a copper stabilized superconducting cable, an Invar helium pipe, the cold pipe support system, a thermal shield covered by multilayer superinsulation, and the vacuum shell. One of the possible STL designs developed for the VLHC is shown in Fig.~\\ref{fig:mag_cav} within the main text. The STL is designed to carry a current of $100$~kA at $6.5$~K in a magnetic field up to $1$~T. This provides a $50\\%$ current margin with respect to the current required to reach a field of $0.5$~T. This operating margin can compensate for temperature variations, mechanical or other perturbations in the system.\n\n\nThe solenoid windings now consist of this superconducting cable, which is confined in its own cryostat. Each solenoid consists of $150$ turns and requires $\\sim 7500$~m of cable. 
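As a rough consistency check (with one assumed input flagged below), the ideal-solenoid formula $B=\\mu_0 N I\/\\ell$ reproduces the quoted field scale: with the 150 turns per solenoid from the text, the $50$~kA excitation current quoted below, and an assumed winding length of about 15~m (not stated in the text), one obtains roughly 0.6~T, close to the quoted average field of $0.58$~T.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def solenoid_field(n_turns, current_a, length_m):
    """Ideal (infinitely long) solenoid approximation: B = mu0 * N * I / l."""
    return MU0 * n_turns * current_a / length_m

N_TURNS = 150       # turns per solenoid (from the text)
CURRENT = 50e3      # 50 kA excitation current (from the text)
LENGTH = 15.0       # assumed winding length in meters (NOT from the text)

print(solenoid_field(N_TURNS, CURRENT, LENGTH))  # roughly 0.63 T
```

The agreement should not be over-interpreted: the real geometry is finite and includes iron end-walls, which is why the simulated average field differs from the ideal-solenoid value.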
There is no large vacuum vessel and access to the detectors can be made through the winding support cylinder since the STL does not need to be close-packed in order to reach an acceptable field. We have performed a simulation of the Magnetic Cavern concept using STL solenoids and the results are shown in Fig.~\\ref{fig:stl_sol}. With the iron end-walls ($1$~m thick), the average field in the X-Z plane is approximately $0.58$~T at an excitation current of $50$~kA. This figure shows the field uniformity in the X-Z plane, which is better than $\\pm 2\\%$ throughout the majority of the volume, with approximately $20\\%$ variations near the end-irons. Figure~\\ref{fig:app3} shows the on-axis $B$ field as a function of position along the $z$ axis (in meters).\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=6in]{Mag_cav_A.1.eps3}\n\\end{center}\n\\caption[]{\\textit{Simulation results for magnetic cavern design.}}\n\\label{fig:mag_cav}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=6in]{Field_map_A.2.eps3}\n\\end{center}\n\\caption[]{\\textit{STL Solenoid Magnetic Cavern Field Uniformity in XZ plane.}}\n\\label{fig:stl_sol}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=6in]{BvZ2_A.3.eps3}\n\\end{center}\n\\caption[]{\\textit{On-axis $B$ field in $T$ as a function of position along the $z$ axis (m).}}\n\\label{fig:app3}\n\\end{figure}\nWe have not yet been able to do a detailed costing of the magnetic cavern. The STL costs can be estimated quite accurately (to within $30\\%$) from the VLHC work and current SC cable costs and are believed to be $\\$50$M. The total magnetic cavern cost is estimated to be less than $\\$150$M. 
This should be compared with the fully loaded cost savings of the low energy Neutrino Factory relative to the $50$~GeV design, as indicated in Ref.~\\cite{ISS-Detector Report}.\n\n\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and instructions for participants}\nOverall, using the ASCL is straightforward, but how it interacts with other resources, such as ADS, may not be apparent to new or casual users, and certainly there are tips and tricks for using it more efficiently and effectively. This tutorial was developed to teach new users of the ASCL how this resource works. Prospective attendees were provided with instructions, via the ADASS website, as to what they needed to have to participate in the tutorial (a computer with a browser and Internet access) and what knowledge they were assumed to already have (rudimentary familiarity with using ADS for searching for articles\/resources; what a bibcode is; how to use a general search engine such as Google). 
They were also asked to bookmark several URLs in their browser, including the ASCL home page (\\url{https:\/\/ascl.net}), the ADS home page (\\url{https:\/\/ui.adsabs.harvard.edu\/}), and the Google search page (\\url{https:\/\/www.google.com}).\n\n\\begin{figure}\n \\centering\n \\includegraphics [scale=0.5]{T01_f1.eps}\n \\caption{Poll results: Familiarity with the ASCL}\n \\label{fig:ASCLfamiliarity}\n\\end{figure}\n\nUpon entering the virtual space in which the tutorial was given, participants were asked to answer a poll to gauge their starting familiarity with the ASCL; Figure \\ref{fig:ASCLfamiliarity} shows that most attendees had little experience with the resource.\n\n\\section{Tutorial outline}\nThe tutorial covered:\n\\begin{itemize}\n \\item what the ASCL is and components of an ASCL entry\n \\item common and alternate ways to bring up ASCL records\n \\item how to find software using different methods and tools\n \\item how citation tracking and preferred citation work\n \\item how to find a code's preferred citation (where one exists)\n \\item how to create a metadata file that informs others how to cite your code \n \\item the best place(s) to put preferred citation information\n\\end{itemize}\n\n\\section{Hands-on activities}\nTutorial participants were encouraged to follow along and mirror the presenter's actions as different tasks, such as bringing up ASCL records or finding software in ADS, were demonstrated. Four specific hands-on activities provided practice with the ASCL, and with using ADS and Google, too, for information relating to the ASCL, reinforcing the training. Figures \\ref{fig:Searching1} and \\ref{fig:Searching2} show the searching activities. Readers are encouraged to try these exercises to gauge their familiarity with the ASCL and even with ADS. 
The full set of slides is available on the ASCL (\\url{https:\/\/tinyurl.com\/ASCLtutorial}).\n\n\\begin{figure}\n \\centering\n \\includegraphics [scale=0.47]{T01_f3.eps}\n \\caption{Hands-on Activity \\#1}\n \\label{fig:Searching1}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics [scale=0.47]{T01_f4.eps}\n \\caption{Hands-on Activity \\#2}\n \\label{fig:Searching2}\n\\end{figure}\n\n\n\n\\section{Questions, answers, and comments}\nAttendees were encouraged to post questions and comments to the Q\\&A and chat areas provided by the virtual tool used to present the tutorial, and to unmute themselves to ask questions or make comments directly. Questions were varied, but three topics were of particular interest: keywords, citing software, and submitting software to ASCL. The ASCL currently has keywords only for NASA missions and HITS software. Participants asked, \\textit{``Is there a guideline on list of keywords?''}, \\textit{``Do you have a taxonomy for your keywords?''}, \\textit{``Can we suggest more?''}, and \\textit{``Can we suggest keywords to you for our own codes?''} I was glad to learn there is so much interest in the ASCL providing more keywords! How to do that, and what keywords to use, will be discussed with the community at a later date. Regarding citation, one attendee asked, \\textit{``What would you define as a good method?''}. Criteria for inclusion in the ASCL came up in questions on whether there is a way for a new code to be added to the resource before research that describes or uses the software is published (yes, there is), and what is considered a refereed resource, with the point made that SPIE proceedings, for example, may not be refereed in the traditional sense. 
\n\nIn addition, one participant mentioned, \\textit{``I'm really interested in these new sort options, I didn't realise ADS had added those.''} This was not surprising to me; I include information on ADS searching when I show people how to use the ASCL because so many do not know about using, for example, doctype to find just software. \n\n\\section{Using every minute}\nFour additional optional topics were made available that could be discussed if time permitted; participants were asked to express which of these was of the most interest to them in a second poll, as shown in Figure \\ref{fig:LastTopic}. As a result, the differences between and similarities of ASCL and Zenodo was the final topic of the tutorial.\n\n\\begin{figure}\n \\centering\n \\includegraphics [scale=0.5]{T01_f2.eps}\n \\caption{Poll results: Final topic for discussion}\n \\label{fig:LastTopic}\n\\end{figure}\n\n\n\\section{Summary}\nThis tutorial was intended for people relatively new to the ASCL; polling the audience before the start of the session showed that it was on-target as to who would attend. Only 11\\% of the participants had any significant experience with the resource. I thank the Heidelberg Institute for Theoretical Studies, Michigan Technological University, and the University of Maryland College Park for support, the ADASS POC for selecting this tutorial, the participants for coming, and software authors everywhere for providing the computational methods on which research depends.\n\n\n\n\n\n\n\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTraditionally, convex optimization problems have been divided into two main classes: the class of smooth problems and the class of non-smooth problems \\cite{polyakintroduction}. 
However, introducing an intermediate class of problems, namely those with convex differentiable objectives whose gradient is H$\\ddot{\\text{o}}$lder continuous, allows us to view the classes of smooth and non-smooth convex optimization problems as two extreme cases of this intermediate class. \n\nThe first optimal methods for this class were introduced in \\cite{nemirovski1985optimal}. However, both these methods and some others presented later had a serious drawback: they required too much information about the objective (for example, the degree of the objective function's smoothness or the distance from the initial point to the solution) to be used efficiently. \n\nIn \\cite{nesterov2015universal} the Universal Fast Gradient Method is presented. It is optimal for the class of problems with convex differentiable objectives with H$\\ddot{\\text{o}}$lder continuous gradient, has a low iteration cost, and does not involve any parameters dependent on the objective.\n\nSome minimization methods allow for the use of an exact line search procedure. A classic example is the steepest descent method, a version of the gradient descent method in which, on each iteration, instead of a step of fixed length being taken in the direction of the negative gradient, the objective function is minimized along that direction. Although this does not improve the worst-case convergence rate, such line search procedures often perform very well in practice. The aim of this work was to construct a universal method that allows for the use of an exact line search procedure. By combining the core idea of Nesterov's Universal Fast Gradient Method with the framework described by Allen-Zhu et al.\\ in \\cite{allen2014linear}, such a method was devised. 
To the best of the authors' knowledge, our work contains the first example of such a method, although a method utilising an exact line search for solving minimization problems with convex Lipschitz continuous objectives was recently constructed by Drori et al.\\ \\cite{drori2018efficient}. Their work also contains an example of a universal method which performs an exact minimization over a three-dimensional subspace on each iteration. Our numerical experiments indicate that the exact line search step does indeed deliver very good performance on some non-smooth problems. Note that in the well-known Shor-type methods with variable metric for non-smooth convex optimization problems, the line search is performed in a direction other than that of the negative gradient; these methods also require quadratic memory \\cite{polyakintroduction}. \n\nThe paper is organized as follows. Firstly, we define the intermediate class of problems referred to above, state the problem, and give other definitions used later in this paper. Secondly, we define Nesterov's Universal Fast Gradient Method, which we will use as a benchmark in our numerical experiments. In \\textbf{Section 2} we present our Universal Linear Coupling Method, prove its convergence, and equip it with a stopping criterion. \\textbf{Section 3} contains notes on how to implement the line search procedure and how its accuracy affects the method's convergence. 
Finally, \\textbf{Section 4} is dedicated to the results of our numerical experiments.\n\n\n\\subsection{Preliminaries}\n\nOne of the conditions often used in the convergence analysis of numerical optimization methods is $L$-smoothness.\n\n\\begin{definition}\nA function $f:\\ \\mathbb{R}^n\\rightarrow \\mathbb{R}^m$ is called Lipschitz continuous with constant $L$ if\n\n\\[\\|f(x)-f(y)\\|\\leq L\\|x-y\\|\\quad \\forall x,y\\in\\mathbb{R}^n.\\]\n\n\\end{definition}\n\n\\begin{definition}\nA differentiable function $f:\\ \\mathbb{R}^n\\rightarrow \\mathbb{R}^m$ is called $L$-smooth if its gradient is Lipschitz continuous with constant $L$:\n\\[\\|\\nabla f(x)-\\nabla f(y)\\|\\leq L\\|x-y\\|\\quad \\forall x,y\\in\\mathbb{R}^n.\\]\n\\end{definition}\n\nWe will be using the following natural generalisation of Lipschitz continuity.\n\n\\begin{definition}\nA function $f:\\ \\mathbb{R}^n\\rightarrow \\mathbb{R}^m$ satisfies the H$\\ddot{\\text{o}}$lder condition (or is H$\\ddot{\\text{o}}$lder continuous) if there exist constants $\\nu\\in[0,1]$ and $M_\\nu\\geqslant 0$, such that \\[\\|f(x)-f(y)\\| \\leqslant M_\\nu\\|x-y\\|^\\nu\\quad \\forall\\ x,y\\in\\mathbb{R}^n.\\]\n\\end{definition} \n\nThe constant $\\nu$ in this definition is called the exponent of the H$\\ddot{\\text{o}}$lder condition. H$\\ddot{\\text{o}}$lder continuity coincides with Lipschitz continuity if $\\nu=1$. On the other hand, H$\\ddot{\\text{o}}$lder continuity with $\\nu=0$ is just boundedness. If a function is differentiable and its gradient is H$\\ddot{\\text{o}}$lder continuous, then the exponent $\\nu$ is a measure of the function's smoothness.\n\n\nThroughout this paper we will be working with the problem \\[f(x)\\rightarrow \\min_{x\\in\\mathbb{R}^n}, \\] where $f(x)$ is a convex differentiable function whose gradient satisfies the H$\\ddot{\\text{o}}$lder condition for some $\\nu\\in[0,1]$ with some constant $M_{\\nu}$. 
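As a concrete illustration (ours, not part of the original text), the one-dimensional function f(x) = |x|^{3/2} has a Hölder continuous gradient with exponent nu = 1/2 and constant M_{1/2} = 3/sqrt(2). The following sketch estimates this constant numerically over a grid:

```python
import math

# gradient of f(x) = |x|**1.5; its Hölder exponent is nu = 1/2
def grad(x):
    return 1.5 * math.copysign(math.sqrt(abs(x)), x)

pts = [i / 100.0 for i in range(-300, 301)]
gs = [grad(p) for p in pts]
ratios = [abs(gx - gy) / math.sqrt(abs(x - y))
          for x, gx in zip(pts, gs) for y, gy in zip(pts, gs) if x != y]
M_half = max(ratios)
print(M_half)  # approaches 3/sqrt(2) ≈ 2.1213, the true M_{1/2}
```

The maximal ratio is attained at antisymmetric pairs x = -y, where it equals 3/sqrt(2) exactly.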
We denote some solution to this problem as $x^\\ast$.\n\nLet us define the Bregman divergence $V_x(y)$ as follows:\n\n\\[V_x(y)=\\omega(y)-\\langle\\nabla\\omega(x),y-x\\rangle-\\omega(x), \\] where $\\omega(x)$ is a 1-strongly convex function. $\\omega$ is also called a distance generating function. By definition, \n\n\\[V_x(y)\\geqslant \\frac{1}{2}\\|y-x\\|^2.\\]\n\n\n\\subsection{Universal Method}\n\nIn \\cite{Devolder2014} it is shown that the notion of an inexact oracle allows one to apply some methods of smooth convex optimization to non-smooth problems. The following lemma plays a key role in this:\n\n\\begin{lemma}\nLet the function $f$ be differentiable and have a H$\\ddot{\\text{o}}$lder continuous gradient. Then for any $\\delta>0$ we have \\[f(y)\\leqslant f(x)+\\langle\\nabla f(x), y-x\\rangle+\\frac{M}{2}\\|y-x\\|^2+\\frac{\\delta}{2},\\] where \\[M=M\\left(\\delta,\\nu, M_\\nu\\right)=\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\delta}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu.\\]\n\n\\end{lemma}\nThe exact values $\\left(f(x),\\nabla f(x)\\right)$ of a differentiable function $f$ with H$\\ddot{\\text{o}}$lder continuous gradient allow us to obtain an upper bound similar to the one obtained by using inexact information for a differentiable, $L$-smooth function. This allows one to apply methods reliant on the usage of an inexact oracle for $L$-smooth objectives to optimize objectives with a H$\\ddot{\\text{o}}$lder continuous gradient.\n\nHowever, knowledge of the parameters $\\nu$ and $M_\\nu$ from the definition of H$\\ddot{\\text{o}}$lder continuity is still required to apply such an approach. In \\cite{nesterov2015universal} a line search procedure was used to estimate the needed parameters, similarly to how the constant of $L$-smoothness is estimated in adaptive methods. 
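Lemma 1.1 can be sanity-checked numerically. The sketch below (our illustration; the test function f(x) = |x|^{3/2} with nu = 1/2 and M_nu = 3/sqrt(2) is chosen only for this check) verifies the inexact quadratic upper bound on a grid:

```python
import math

f = lambda x: abs(x) ** 1.5
g = lambda x: 1.5 * math.copysign(math.sqrt(abs(x)), x)

nu, M_nu = 0.5, 3 / math.sqrt(2)  # Hölder parameters of f (illustrative choice)
delta = 1e-2
# M(delta, nu, M_nu) as defined in Lemma 1.1
M = ((1 - nu) / (1 + nu) * M_nu / delta) ** ((1 - nu) / (1 + nu)) * M_nu

pts = [i / 50.0 for i in range(-150, 151)]
ok = all(f(y) <= f(x) + g(x) * (y - x) + M / 2 * (y - x) ** 2 + delta / 2 + 1e-12
         for x in pts for y in pts)
print(ok)  # True: the quadratic model plus delta/2 upper-bounds f everywhere
```

Larger values of delta give a smaller "inexact" constant M, which is exactly the trade-off the universal methods exploit.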
For a general norm on $\\mathbb{R}^n$ and a corresponding Bregman divergence $V_x(y)$ the Universal Fast Gradient Method may be written as follows.\n\\newpage\n\\begin{algorithm}\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\t\n \\caption{UFGM($f$, $L_0$, $x_0$, $\\varepsilon$, $T$)}\n \\Input{$f$ a differentiable convex function with H$\\ddot{\\text{o}}$lder continuous gradient;\n initial value of the \"inexact\" Lipschitz continuity constant $L_0$;\n initial point $x_0$;\n accuracy $\\varepsilon$;\n number of iterations $T$.}\n $y_0\\gets x_0$, $z_0\\gets x_0$, $\\alpha_0 \\gets 0$, $\\psi_0(x)\\gets V_{x_0}(x)$\\\\\n \\For{$k=0$ to $T-1$}{\n \t$L_{k+1}\\gets\\frac{L_{k}}{2}$\\\\\n \\While{True}{\n $v_k=\\operatornamewithlimits{argmin}\\limits_{x\\in \\mathbb{R}^n} \\psi_k(x)$\\\\\n $\\alpha_{k+1}\\gets\\frac{1}{2L_{k+1}}+\\sqrt{\\frac{1}{4L^2_{k+1}}+\\alpha^2_k\\frac{L_k}{L_{k+1}}}$\\\\\n $\\tau_k\\gets\\frac{1}{\\alpha_{k+1}L_{k+1}}$\\\\\n $x_{k+1}\\gets\\tau_kv_k+(1-\\tau_k)y_k$\\\\\n $z_{k+1}\\gets \\operatornamewithlimits{argmin}\\limits_{z\\in \\mathbb{R}^n} \\alpha_{k+1}\\langle\\nabla f(x_{k+1}), z-v_k\\rangle +V_{v_k}(z)$\\\\\n $y_{k+1}\\gets\\tau_kz_{k+1}+(1-\\tau_k)y_k$\\\\\n \\If{$f(y_{k+1})\\leqslant f(x_{k+1})+\\langle\\nabla f(x_{k+1}),y_{k+1}-x_{k+1}\\rangle+\\frac{L_{k+1}}{2}\\|y_{k+1}-x_{k+1}\\|^2+\\frac{\\tau_k\\varepsilon}{2}$}{\\textbf{break}}\n \\Else{$L_{k+1}\\gets 2L_{k+1}$}\n \t}\n $\\psi_{k+1}(x)\\gets\\psi_k(x)+\\alpha_{k+1}\\left[f(x_{k+1})+\\langle\\nabla f(x_{k+1}),x-x_{k+1}\\rangle\\right]$ \n }\n \\Return{$y_T$}\n \n\\end{algorithm}\n\n\nThe above method does not require a priori knowledge of the smoothness parameter $\\nu$ or the corresponding $M_\\nu$. The following theorem gives the convergence rate of the above algorithm:\n\n\\begin{theorem}\nLet f be a differentiable convex function with H$\\ddot{\\text{o}}$lder continuous gradient with some exponent $\\nu$ and $M_\\nu<\\infty$. 
Let $L_0\\leqslant M(\\varepsilon,\\nu,M_\\nu)$. Then\n\\[f(y_k)-f(x^\\ast) \\leq \\left[\\frac{2^{2+4\\nu}M_\\nu^2}{\\varepsilon^{1-\\nu}k^{1+3\\nu}}\\right]^{\\frac{1}{1+\\nu}} +\\frac{\\varepsilon}{2}.\\]\n\n\\end{theorem} \n\nIt follows that one may obtain an $\\varepsilon$-accurate solution in \\[k\\leqslant \\inf_{\\nu\\in[0,1]} \\left[\\left(\\frac{2^\\frac{3+5\\nu}{2}M_\\nu}{\\varepsilon}\\right)^\\frac{2}{1+3\\nu} \\left(\\frac{1}{2}\\|x_0 -x^\\ast\\|^2\\right)^{\\frac{1+\\nu}{1+3\\nu}}\\right]\\] iterations. If the problem admits multiple solutions, then $x^\\ast$ may be considered to be the solution minimizing $\\frac{1}{2}\\| x_0 - x^\\ast \\|^2$. As shown in \\cite{nemirovskii1983problem}, this is optimal up to a multiplicative constant independent of the accuracy, the initial point, and the objective function.\n\n\\section{Universal Linear Coupling Method}\n\nWe are now ready to present our universal method based on the linear coupling method proposed by Allen-Zhu et al.\\ \\cite{allen2014linear}. The Linear Coupling framework is chosen as a basis for our method because it allows for the usage of an exact line search step, which is our goal. The original linear coupling method utilizes gradient and mirror descent steps to guarantee an optimal convergence rate for convex objectives. However, it is clear from the convergence analysis of said method that the gradient step is only used to obtain a lower bound on the decrease of the objective during this step. This means that any procedure capable of guaranteeing at least such a decrease may be utilized instead. Since in the unconstrained Euclidean setting the gradient step is always performed in the direction of the negative gradient, one may use the steepest descent method instead. 
This idea combined with the idea of Nesterov's universal method allows us to modify the Linear Coupling method in the following way:\n\n\\begin{algorithm}\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\t\n \\caption{ULCM($f$, $L_0$, $x_0$, $\\varepsilon$, $T$)}\n \\Input{$f$ a differentiable convex function with H$\\ddot{\\text{o}}$lder continuous gradient;\n initial value of the \"inexact\" Lipschitz continuity constant $L_0$;\n initial point $x_0$;\n accuracy $\\varepsilon$;\n number of iterations $T$.}\n $y_0 \\gets x_0$, $z_0 \\gets x_0$, $\\alpha_0 \\gets 0$\\\\\n \\For{$k=0$ to $T-1$}{\n \t$L_{k+1}\\gets\\frac{L_{k}}{2}$\\\\\n \\While{True}{\n \t$\\alpha_{k+1}\\gets\\frac{1}{2L_{k+1}}+\\sqrt{\\frac{1}{4L^2_{k+1}}+\\alpha^2_k\\frac{L_k}{L_{k+1}}}$\\\\\n $\\tau_k\\gets\\frac{1}{\\alpha_{k+1}L_{k+1}}$\\\\\n $x_{k+1}\\gets\\tau_kz_k+(1-\\tau_k)y_k$\\\\\n $h_{k+1}\\gets\\operatornamewithlimits{argmin}\\limits_{h\\geqslant 0} f(x_{k+1}-h\\nabla f(x_{k+1}))$\\\\\n $y_{k+1}\\gets x_{k+1}-h_{k+1}\\nabla f(x_{k+1})$\\\\\n $z_{k+1}\\gets z_k-\\alpha_{k+1}\\nabla f(x_{k+1})$\\\\\n \\If{$\\langle \\alpha_{k+1}\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-\\frac{1}{2}\\|z_k-z_{k+1}\\|^2\\leq \\alpha^2_{k+1}L_{k+1}(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2})$}{\\textbf{break}}\n \\Else{$L_{k+1}\\gets 2L_{k+1}$}\n \t}\n \n }\n \\Return{$y_T$}\n\\end{algorithm}\n\\newpage\n\n\nAs far as it is known to the authors of this paper, this is the first universal method of non-smooth optimization utilizing steepest descent steps.\n \nFrom this point onwards $L_k$ will always denote the value obtained at the end of a full iteration of the \"for\" loop.\n\nWe shall now show that the above algorithm is well-defined. To be more precise, we shall prove that the if-condition inside the while loop is satisfied after a finite number of iterations for any $k$. 
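Before turning to the analysis, the method above admits a compact transcription. The following Python sketch is an illustration only (our own code, not the authors' C++ implementation): it assumes the Euclidean setting, implements the exact line search by localization plus ternary search, and is run on a toy quadratic; the names `ulcm` and `exact_line_search` are ours.

```python
import math

def exact_line_search(phi, l0=1.0, iters=90):
    """Minimize a convex 1-D function phi over [0, +inf)."""
    l = l0
    for _ in range(60):          # localization by doubling, capped for safety
        if phi(2 * l) > phi(l):
            break
        l *= 2
    lo, hi = 0.0, 2 * l
    for _ in range(iters):       # ternary search on the localized segment
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) <= phi(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def ulcm(f, grad, x0, L0=1.0, eps=1e-6, T=400):
    """Sketch of the Universal Linear Coupling Method (Euclidean setting)."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    y, z, alpha, L = list(x0), list(x0), 0.0, L0
    for _ in range(T):
        L_new = L / 2                      # halve L at the start of the iteration
        while True:
            a_new = 1 / (2 * L_new) + math.sqrt(1 / (4 * L_new ** 2) + alpha ** 2 * L / L_new)
            tau = 1 / (a_new * L_new)
            x = [tau * zi + (1 - tau) * yi for zi, yi in zip(z, y)]
            g = grad(x)
            h = exact_line_search(lambda t: f([xi - t * gi for xi, gi in zip(x, g)]))
            y_new = [xi - h * gi for xi, gi in zip(x, g)]     # steepest descent step
            z_new = [zi - a_new * gi for zi, gi in zip(z, g)] # mirror descent step
            dz = [p - q for p, q in zip(z, z_new)]
            lhs = a_new * dot(g, dz) - 0.5 * dot(dz, dz)
            rhs = a_new ** 2 * L_new * (f(x) - f(y_new) + tau * eps / 2)
            if lhs <= rhs + 1e-15:         # the if-condition of the pseudocode
                break
            L_new *= 2                     # otherwise double L and retry
        y, z, alpha, L = y_new, z_new, a_new, L_new
    return y

# demo on a small quadratic (illustrative test problem)
f = lambda x: sum((i + 1) * v * v for i, v in enumerate(x))
grad = lambda x: [2 * (i + 1) * v for i, v in enumerate(x)]
sol = ulcm(f, grad, [1.0, 1.0, 1.0])
print(f(sol))   # close to the optimal value 0
```

The inner check mirrors the if-condition of ULCM: $L$ is halved at the start of each outer iteration and doubled until the check passes, exactly as in the pseudocode.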
\n\n\\begin{lemma}\nLet $f(x)$ be a convex differentiable function whose gradient satisfies the H$\\ddot{\\text{o}}$lder condition for some $\\nu\\in[0,1]$ with some constant $M_{\\nu}$. Then for all steps $k$ of the above algorithm\n\n\\[\\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-\\frac{1}{2}\\|z_k-z_{k+1}\\|^2\\leqslant \\alpha^2_{k+1}L_{k+1}\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right), \\] for all $L_{k+1}$ satisfying\n\n\\[L_{k+1}\\geqslant M(\\tau_k\\varepsilon,\\nu,M_\\nu)=\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\tau_k\\varepsilon}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu.\\]\n\n\\end{lemma}\n\\begin{proof}\n\n\\begin{align*}\n\\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-\\frac{1}{2}\\|z_k-z_{k+1}\\|^2&\\leqslant\\frac{\\alpha^2_{k+1}}{2}\\|\\nabla f(x_{k+1})\\|^2\\\\&\\leqslant M\\alpha_{k+1}^2\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right).\n\\end{align*}\n\nHere the first inequality follows from the fact that $\\|\\alpha_{k+1}\\nabla f(x_{k+1})-(z_k-z_{k+1})\\|^2\\geq 0$. To get the last inequality we use \\textsc{Lemma 1.1} with $\\delta=\\tau_k\\varepsilon$ and $x=x_{k+1}$, $y=x_{k+1}-\\beta\\nabla f(x_{k+1})$: \n\n\\begin{align*}\nf(y)&\\leqslant f(x_{k+1})+\\langle\\nabla f(x_{k+1}), -\\beta\\nabla f(x_{k+1})\\rangle+\\frac{\\beta^2M}{2}\\|\\nabla f(x_{k+1})\\|^2+\\frac{\\tau_k\\varepsilon}{2}\\\\ &= f(x_{k+1})-\\beta\\|\\nabla f(x_{k+1})\\|^2+\\frac{\\beta^2M}{2}\\|\\nabla f(x_{k+1})\\|^2+\\frac{\\tau_k\\varepsilon}{2}.\n\\end{align*}\nMinimising the right-hand side over $\\beta\\in\\mathbb{R}$, we get $\\beta=\\frac{1}{M}$. This results in the following guarantee:\n\n\\[f(y)-f(x_{k+1})\\leqslant -\\frac{\\|\\nabla f(x_{k+1})\\|^2}{2M}+\\frac{\\tau_k\\varepsilon}{2}. 
\\]\n\nIn our algorithm \\begin{align*}\ny_{k+1}= x_{k+1}-h_{k+1}\\nabla f(x_{k+1}),\\\\h_{k+1}=\\operatornamewithlimits{argmin}\\limits_{h\\geqslant 0} f(x_{k+1}-h\\nabla f(x_{k+1})),\n\\end{align*} so \\[f(y_{k+1})-f(x_{k+1})\\leq f(y)-f(x_{k+1})\\leq -\\frac{\\|\\nabla f(x_{k+1})\\|^2}{2M}+\\frac{\\tau_k\\varepsilon}{2}.\\]\n\n\\end{proof}\n\\subsection{Comparison with the UFGM method}\n\nNote that in the case of Euclidean norm and $V_x(y)=\\frac{1}{2}\\|x-y\\|^2$, in the UFGM algorithm the mirror descent step \\[z_{k+1}\\gets \\operatornamewithlimits{argmin}\\limits_{z\\in \\mathbb{R}^n} \\alpha_{k+1}\\langle\\nabla f(x_{k+1}), z-v_k\\rangle +V_{v_k}(z)\\] may be rewritten as \\[z_{k+1}\\gets v_k-\\alpha_{k+1}\\nabla f(x_{k+1}).\\] Moreover, in the case of the Euclidean norm the sequence $\\{v_k\\}$ turns out to be identical to the sequence $\\{z_k\\}$. Now by direct substitution of $z_{k+1}$ and by using $(1-\\tau_k)y_k=x_{k+1}-\\tau_kv_k$ we get that \\[y_{k+1}=\\tau_k(z_k-\\alpha_{k+1}\\nabla f(x_{k+1}))+(1-\\tau_k)y_k=x_{k+1}-\\frac{1}{L_{k+1}}\\nabla f(x_{k+1}).\\] This means that the two methods are not just very similar, but are practically identical. The only difference between them is the usage of exact line search instead of a fixed-length gradient descent step.\n\n\\subsection{Convergence Analysis}\nTo ascertain the convergence of the above algorithm we will require the following lemmas:\n\n\\begin{lemma}\nFor any $u\\in\\mathbb{R}^n$\n\n\\[\\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-u\\rangle\\leqslant\\alpha_{k+1}^2L_{k+1}\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right)+\\frac{1}{2}\\|z_k-u\\|^2-\\frac{1}{2}\\|z_{k+1}-u\\|^2. 
\\]\n\\end{lemma}\n\n\\begin{proof}\n\\begin{align*}\n\\alpha_{k+1}&\\langle\\nabla f(x_{k+1}),z_k-u\\rangle = \\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle+\\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_{k+1}-u\\rangle \\\\\n&\\stackrel{\\scriptsize{\\circled{1}}}{=} \\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle+\\langle z_k-z_{k+1} ,z_{k+1}-u\\rangle\\\\\n&\\stackrel{\\scriptsize{\\circled{2}}}{=}\\alpha_{k+1}\\langle\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle+\\frac{1}{2}\\|z_k-u\\|^2-\\frac{1}{2}\\|z_{k+1}-u\\|^2-\\frac{1}{2}\\|z_k-z_{k+1}\\|^2\\\\\n&\\stackrel{\\scriptsize{\\circled{3}}}{\\leqslant} \\alpha_{k+1}^2L_{k+1}\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right)+\\frac{1}{2}\\|z_k-u\\|^2-\\frac{1}{2}\\|z_{k+1}-u\\|^2.\n\\end{align*}\n\nHere, $\\circled{1}$ is due to \\[z_{k+1}=\\operatornamewithlimits{argmin}\\limits_{z\\in\\mathbb{R}^n} \\langle\\alpha_{k+1}\\nabla f(x_{k+1}),z\\rangle+\\frac{1}{2}\\|z_k-z\\|^2,\\] which implies \\[\\nabla \\left(\\frac{1}{2}\\|z_k-z\\|^2+\\langle\\alpha_{k+1}\\nabla f(x_{k+1}),z\\rangle\\right)\\bigg\\rvert_{z=z_{k+1}}=0.\\] $\\circled{2}$ follows from the triangle equality of Bregman divergence \n\\[\\langle -\\nabla V_x(y),y-u\\rangle = V_x(u)-V_y(u)-V_x(y),\\] which takes the following form when $V_x(y)=\\frac{1}{2}\\|x-y\\|^2$: \\[\\langle x-y,y-u\\rangle=\\frac{1}{2}\\|x-u\\|^2-\\frac{1}{2}\\|y-u\\|^2-\\frac{1}{2}\\|x-y\\|^2\\]\nFinally, $\\circled{3}$ is due to our choice of $L_{k+1}.$\n\n\\end{proof}\n\n\\begin{lemma}\nFor any $u\\in\\mathbb{R}^n$\n\n\\begin{align*}\n\\alpha_{k+1}^2L_{k+1}f(y_{k+1})-&\\left(\\alpha^2_{k+1}L_{k+1}-\\alpha_{k+1}\\right)f(y_k)+\\\\&\n\\left(\\frac{1}{2}\\|z_{k+1}-u\\|^2-\\frac{1}{2}\\|z_k-u\\|^2\\right)-\\frac{\\alpha_{k+1}\\varepsilon}{2}\\leqslant\\alpha_{k+1}f(u).\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}\nWe deduce the following sequence of relations:\n\\begin{align*}\n&\\alpha_{k+1}(f(x_{k+1})-f(u))\\leqslant \\alpha_{k+1}\\langle 
\\nabla f(x_{k+1}), x_{k+1}-u\\rangle\\\\&=\\alpha_{k+1}\\langle \\nabla f(x_{k+1}), x_{k+1}-z_k\\rangle+\\alpha_{k+1}\\langle \\nabla f(x_{k+1}), z_k-u\\rangle\\\\\n&\\stackrel{\\scriptsize{\\circled{1}}}{=} \\frac{(1-\\tau_k)\\alpha_{k+1}}{\\tau_k}\\langle \\nabla f(x_{k+1}), y_k-x_{k+1}\\rangle+\\alpha_{k+1}\\langle \\nabla f(x_{k+1}), z_k-u\\rangle\\\\\n&\\stackrel{\\scriptsize{\\circled{2}}}{\\leqslant} \\frac{(1-\\tau_k)\\alpha_{k+1}}{\\tau_k}(f(y_k)-f(x_{k+1}))+\\alpha^2_{k+1}L_{k+1}\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right)\\\\&+\\frac{1}{2}\\|z_{k}-u\\|^2-\\frac{1}{2}\\|z_{k+1}-u\\|^2\n\\stackrel{\\scriptsize{\\circled{3}}}{=} (\\alpha^2_{k+1}L_{k+1}-\\alpha_{k+1})f(y_k)-\\alpha_{k+1}^2L_{k+1}f(y_{k+1})\\\\&+\\alpha_{k+1}f(x_{k+1})+\\left(\\frac{1}{2}\\|z_{k}-u\\|^2-\\frac{1}{2}\\|z_{k+1}-u\\|^2\\right)+\\frac{\\alpha_{k+1}\\varepsilon}{2}.\n\\end{align*} Here, $\\circled{1}$ uses the fact that our choice of $x_{k+1}$ satisfies $\\tau_k(x_{k+1}-z_k)=(1-\\tau_k)(y_k-x_{k+1})$. $\\circled{2}$ is by convexity of $f(\\cdot)$ and \\textsc{Lemma 2.2}, while $\\circled{3}$ uses the choice of $\\tau_k=\\frac{1}{\\alpha_{k+1}L_{k+1}}$.\n\\end{proof}\n\nWe are now ready to begin our proof of the method's convergence.\n\n\n\\begin{theorem}\nLet $f(x)$ be a convex, differentiable function such that its gradient satisfies the H$\\ddot{\\text{o}}$lder condition for some $\\nu\\in[0,1]$ with some finite $M_{\\nu}$. 
Let $L_0$ also satisfy \n\\[L_0\\leqslant \\inf_{\\nu\\in[0,1]}4\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\varepsilon}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu.\\] Then ULCM($f$, $L_0$, $x_0$, $\\varepsilon$, $T$) outputs $y_T$ such that $f(y_T)-f(x^\\ast)\\leqslant\\varepsilon$ in the number of iterations\n\n\\[T\\leqslant\\inf_{\\nu\\in[0,1]}\\ \\left[\\frac{1-\\nu}{1+\\nu}\\right]^\\frac{1-\\nu}{1+3\\nu}\\left[\\frac{2^\\frac{3+5\\nu}{2}M_\\nu}{\\varepsilon}\\right]^\\frac{2}{1+3\\nu}\\Theta^\\frac{1+\\nu}{1+3\\nu},\\]where $\\Theta$ is any upper bound on $\\frac{1}{2}\\|x_0-x^\\ast\\|^2$.\n\\end{theorem}\n\\begin{proof}\nNote that our choice of $\\alpha_{k+1}$ satisfies\n\\[\\alpha^2_{k+1}L_{k+1}-\\alpha_{k+1}=\\alpha^2_{k}L_k,\\addtag\\] which allows us to telescope \\textsc{Lemma 2.3}. Summing up \\textsc{Lemma 2.3} for $k=0,1,\\ldots, T-1$ and $u=x^\\ast$, we obtain\n\n\\[\\alpha_{T}^2L_{T}f(y_T)+\\left(\\frac{1}{2}\\|z_T-x^\\ast\\|^2-\\frac{1}{2}\\|z_0-x^\\ast\\|^2\\right)\\leqslant\\sum_{k=1}^T\\alpha_kf(x^\\ast) +\\sum_{k=1}^T\\frac{\\alpha_k\\varepsilon}{2}.\\] By using $(1)$ we get that $\\sum\\limits_{k=1}^T\\alpha_k=\\alpha^2_TL_T$. We also notice that $\\frac{1}{2}\\|z_T-x^\\ast\\|^2\\geqslant 0$ and $\\frac{1}{2}\\|z_0-x^\\ast\\|^2\\leqslant \\Theta$. Therefore, \n\n\\[f(y_T)-f(x^\\ast)\\leqslant \\frac{\\Theta}{\\alpha^2_TL_T}+\\frac{\\varepsilon}{2}.\\]\n\nNote that our process of calculating $L_k$ guarantees that if the step $L_{k+1}\\gets 2L_{k+1}$ of the algorithm was executed at least once for some $k$, then for that $k$\n\n\\[L_{k+1}\\leqslant 2\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\varepsilon\\tau_k}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu. \\addtag\\]\n\nAssume that $L_n\\leq 2\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\varepsilon\\tau_{n-1}}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu$ and $L_{n+1}=\\frac{L_n}{2}$ for some $n\\geq 1$. 
Then \\[\\frac{1}{\\tau_n}=\\alpha_{n+1}L_{n+1}=\\frac{1}{2}+\\sqrt{\\frac{1}{4}+\\alpha_n^2L_nL_{n+1}}\\geq\\frac{1}{2}+\\sqrt{\\frac{1}{4}+\\frac{\\alpha_n^2L_n^2}{2}}\\geq\\frac{1}{\\sqrt{2}\\tau_{n-1}}.\\]\n\n\\[L_{n+1}=\\frac{L_n}{2}\\leq \\left[\\frac{1-\\nu}{1+\\nu}\\frac{\\sqrt{2}M_\\nu}{\\varepsilon\\tau_{n}}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu\\leq2\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\varepsilon\\tau_{n}}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu.\\] This shows that even if we do not execute the step $L_{k+1}\\gets 2L_{k+1}$, (2) remains true as long as it held true on the previous iteration. All of the above proves that the assumption about $L_0$ in the statement of the theorem implies that (2) is true for all $k=0,\\ldots,T-1$. \n\nDenote $A_k=\\alpha^2_kL_k$. We may now proceed to attain a lower bound on $A_T$:\n\n\\[\\frac{\\alpha^2_k}{A_k}=\\frac{1}{L_k}\\geqslant\\frac{1}{2M_\\nu}\\left[\\frac{1+\\nu}{1-\\nu}\\frac{\\varepsilon}{M_\\nu}\\right]^{\\frac{1-\\nu}{1+\\nu}}\\left[\\frac{\\alpha_k}{A_k}\\right]^{\\frac{1-\\nu}{1+\\nu}},\\] so\n\n\\[\\alpha_k\\geqslant\\frac{1}{2^\\frac{1+\\nu}{1+3\\nu}M_\\nu^\\frac{2}{1+3\\nu}}\\left[\\frac{1+\\nu}{1-\\nu}\\varepsilon\\right]^\\frac{1-\\nu}{1+3\\nu}A_k^\\frac{2\\nu}{1+3\\nu}.\\]\n\nDenote $\\gamma=\\frac{1+\\nu}{1+3\\nu}\\geqslant \\frac{1}{2}.$ Since $A_{k+1}=A_k+\\alpha_{k+1},$ \n\n\\[A^\\gamma_{k+1}-A^\\gamma_{k}\\geqslant \\frac{A_{k+1}-A_{k}}{A_{k+1}^{1-\\gamma}+A_k^{1-\\gamma}}\\geqslant\\frac{\\alpha_{k+1}}{2A_{k+1}^{1-\\gamma}}\\geqslant\\frac{1}{2^\\frac{2+4\\nu}{1+3\\nu}M_\\nu^\\frac{2}{1+3\\nu}}\\left[\\frac{1+\\nu}{1-\\nu}\\varepsilon\\right]^\\frac{1-\\nu}{1+3\\nu}. 
\\addtag\\]Now we telescope (3) for $k=0,\\ldots,T-1$ and get\n\n\\[A_T\\geqslant\\left[\\frac{1+\\nu}{1-\\nu}\\right]^\\frac{1-\\nu}{1+\\nu}\\frac{T^\\frac{1+3\\nu}{1+\\nu}\\varepsilon^\\frac{1-\\nu}{1+\\nu}}{2^\\frac{2+4\\nu}{1+\\nu}M_\\nu^\\frac{2}{1+\\nu}}.\\] This allows us to estimate the number of iterations necessary to achieve an error of no more than $\\varepsilon$. Before doing so, however, we note that this estimate depends heavily on $\\nu$. By allowing $M_\\nu$ to be infinite, we make the gradient of any differentiable function satisfy the H$\\ddot{\\text{o}}$lder condition for all $\\nu\\in[0,1]$. This in turn allows us to easily select the most appropriate estimate:\n\n\\[T\\leqslant\\inf_{\\nu\\in[0,1]}\\ \\left[\\frac{1-\\nu}{1+\\nu}\\right]^\\frac{1-\\nu}{1+3\\nu}\\left[\\frac{2^\\frac{3+5\\nu}{2}M_\\nu}{\\varepsilon}\\right]^\\frac{2}{1+3\\nu}\\Theta^\\frac{1+\\nu}{1+3\\nu}.\\]\n\nNote that since the solution $x^\\ast$ was arbitrary, $x^\\ast$ may now be considered to be the solution which minimizes $\\frac{1}{2}\\|x_0-x^\\ast\\|^2$.\n\n\\end{proof}\n\\subsection{Stopping criterion}\n\nIn \\cite{anikin2017dual} it is shown that the original version of the Linear Coupling Method may be equipped with a stopping criterion. 
By using similar techniques, we are now going to show that our universal modification of said method may also be equipped with a calculable stopping criterion.\n\nBy ignoring the first inequality in the proof of \\textsc{Lemma 2.3}, we get that for all $u\\in\\mathbb{R}^n$ (remember that $A_k=\\alpha^2_k L_k$)\n\\begin{align*}\nA_{k+1} f(y_{k+1})-A_k f(y_k)&+\\frac{1}{2}\\|z_{k+1}-u\\|^2-\\frac{1}{2}\\|z_k-u\\|^2-\\frac{\\alpha_{k+1}\\varepsilon}{2}\\\\&\\leq \\alpha_{k+1}\\left(f(x_{k+1})+\\langle\\nabla f(x_{k+1}),u-x_{k+1}\\rangle\\right).\n\\end{align*}\n Summing up for $k=0,\\ldots,m-1$, we obtain \n\n\\[f(y_m)\\leq \\frac{\\varepsilon}{2}+\\frac{1}{A_m}\\min_{u\\in\\mathbb{R}^n}\\left\\lbrace\\frac{1}{2}\\|z_0-u\\|^2+\\sum_{i=1}^m\\alpha_{i}\\left(f(x_{i})+\\langle\\nabla f(x_{i}),u-x_{i}\\rangle\\right)\\right\\rbrace.\\]\n\nDenote \\[l_m(u)=\\sum_{i=1}^m\\left[ \\alpha_{i}\\left(f(x_{i})+\\langle\\nabla f(x_{i}),u-x_{i}\\rangle\\right)\\right]\\] and\n\n\\[\\hat{f}_m=\\min_{u:\\ \\frac{1}{2}\\|z_0-u\\|^2\\leq\\Theta} \\frac{1}{A_m} l_m(u).\\]\n\nThen by using strong duality one may see that\n\n\\begin{align*}\n\\hat{f}_m&=\\min_{u\\in \\mathbb{R}^n}\\max_{\\lambda\\geq 0}\\left\\lbrace\\frac{1}{A_m} l_m(u)+\\lambda\\left(\\frac{1}{2}\\|z_0-u\\|^2-\\Theta\\right)\\right\\rbrace\\\\&=\\max_{\\lambda\\geq 0} \\min_{u\\in \\mathbb{R}^n}\\left\\lbrace\\frac{1}{A_m} l_m(u)+\\lambda\\left(\\frac{1}{2}\\|z_0-u\\|^2-\\Theta\\right)\\right\\rbrace.\n\\end{align*}\n\nBy setting $\\lambda=\\frac{1}{A_m}$, we get that \n\n\\[\\hat{f}_m\\geq\\frac{1}{A_m}\\min_{u\\in\\mathbb{R}^n}\\left\\lbrace\\frac{1}{2}\\|z_0-u\\|^2+\\sum_{i=1}^m\\alpha_{i}\\left(f(x_{i})+\\langle\\nabla f(x_{i}),u-x_{i}\\rangle\\right)\\right\\rbrace-\\frac{\\Theta}{A_m}.\\]\n\nThen $f(y_m)-\\hat{f}_m\\leq\\frac{\\varepsilon}{2}+\\frac{\\Theta}{A_m}.$ This means that our method is primal-dual. 
By the convexity of $f$ we also get that $f(x^\\ast)\\geq\\hat{f}_m$, so $f(y_m)-f(x^\\ast)\\le f(y_m)-\\hat{f}_m\\leq\\varepsilon$ may be used as an implementable stopping criterion. Of course, an estimate of $\\Theta$ is required to compute $\\hat{f}_m$. Overestimating $\\Theta$ may lead to performing an excessive number of iterations, while underestimating it invalidates the criterion completely. However, the stopping criterion requires an estimate of only one unknown parameter, which, moreover, is not used in the algorithm's definition. On the other hand, three unknown parameters ($\\nu, M_\\nu, \\Theta$) need to be estimated to calculate the upper bound on the number of iterations required to get an $\\varepsilon$-accurate solution.\n\\section{Line search}\n\nThroughout the previous analysis we assumed that for all $x\\in\\mathbb{R}^n$ the values $f(x)$ and $\\nabla f(x)$, the steepest descent step, and the mirror descent step may be calculated exactly. However, in relation to the steepest descent step this assumption is not critical for the method's convergence.\n\nFor any convex function of one real argument defined on a segment of the form $[a,b]$ of length $l=b-a$, a point $y$ such that \\[\\|y-\\operatornamewithlimits{argmin}_{x\\in[a,b]}f(x)\\|\\leqslant \\varepsilon \\] may be found in $O(\\log\\frac{l}{\\varepsilon})$ function value calculations by using the bisection method. However, to perform an exact line search in our algorithm one needs to localize the solution first. 
To do that we propose the following simple procedure:\n\n\n\\begin{algorithm}\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\t\n \\caption{Localize(f,$l_0$)}\n \\Input{$f(x)$ -- convex function defined on $[0,+\\infty)$; initial segment length $l_0$.}\n \\Output{$l$ such that $\\operatornamewithlimits{argmin}\\limits_{x\\in[0,+\\infty)} f(x) \\in [0,l]$}\n $l\\gets l_0$\\\\\n \\While{$f(2l)\\leqslant f(l)$}{\n \t\t$l\\gets2l$\n }\n \\Return{$2l$}\n\\end{algorithm}\n\n\nLet us estimate the accuracy with which the steepest descent step must be performed to guarantee our method's convergence. \nSuppose we want to obtain a solution with accuracy $\\varepsilon+\\delta$, where $\\delta$ is the term resulting from the inaccuracy of the steepest descent step. To do that we need to slightly modify our algorithm: \n\\newpage\n\\begin{algorithm}\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\t\n \\caption{$\\delta$-ULCM($f$, $L_0$, $x_0$, $\\varepsilon$, $\\delta$, $T$)}\n \\Input{$f$ a differentiable convex function with H$\\ddot{\\text{o}}$lder continuous gradient;\n initial value of the \"inexact\" Lipschitz continuity constant $L_0$;\n initial point $x_0$;\n accuracy $\\varepsilon$;\n line search accuracy $\\delta$;\n number of iterations $T$.}\n $y_0 \\gets x_0$, $z_0 \\gets x_0$, $\\alpha_0 \\gets 0$\\\\\n \\For{$k=0 \\to T-1$}{\n \t$L_{k+1}\\gets\\frac{L_{k}}{2}$\\\\\n \\While{True}{\n \t$\\alpha_{k+1}\\gets\\frac{1}{2L_{k+1}}+\\sqrt{\\frac{1}{4L^2_{k+1}}+\\alpha^2_k\\frac{L_k}{L_{k+1}}}$\\\\\n $\\tau_k\\gets\\frac{1}{\\alpha_{k+1}L_{k+1}}$\\\\\n $x_{k+1}\\gets\\tau_kz_k+(1-\\tau_k)y_k$\\\\\n Choose $y_{k+1}$ such that $f(y_{k+1})\\leq \\min\\limits_{h\\geq 0} f(x_{k+1}-h\\nabla f(x_{k+1}))+\\frac{\\tau_k\\delta}{2}$\\\\\n $z_{k+1}\\gets\\operatornamewithlimits{argmin}\\limits_{z\\in\\mathbb{R}^n}\\ \\langle\\alpha_{k+1}\\nabla f(x_{k+1}),z-z_k\\rangle +\\frac{1}{2}\\|z_k-z\\|^2$\\\\\n \\If{$\\langle
\\alpha_{k+1}\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-\\frac{1}{2}\\|z_k-z_{k+1}\\|^2\\leq \\alpha^2_{k+1}L_{k+1}(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k\\varepsilon}{2})$}{\\textbf{break}}\n \\Else{$L_{k+1}\\gets 2L_{k+1}$}\n \t}\n \n }\n \\Return{$y_T$}\n\\end{algorithm}\n\n\\begin{theorem}\nLet $f(x)$ be a convex, differentiable function such that its gradient satisfies the H$\\ddot{\\text{o}}$lder condition for some $\\nu\\in[0,1]$ with some finite $M_{\\nu}$. Let $L_0$ also satisfy \n\\[L_0\\leqslant \\inf_{\\nu\\in[0,1]}4\\left[\\frac{1-\\nu}{1+\\nu}\\frac{M_\\nu}{\\varepsilon}\\right]^{\\frac{1-\\nu}{1+\\nu}}M_\\nu.\\] Then $\\delta$-ULCM($f$, $L_0$, $x_0$, $\\varepsilon$, $\\delta$, $T$) outputs $y_T$ such that $f(y_T)-f(x^\\ast)\\leqslant\\varepsilon+\\delta$ in the number of iterations\n\n\\[T\\leqslant\\inf_{\\nu\\in[0,1]}\\ \\left[\\frac{1-\\nu}{1+\\nu}\\right]^\\frac{1-\\nu}{1+3\\nu}\\left[\\frac{2^\\frac{3+5\\nu}{2}M_\\nu}{\\varepsilon}\\right]^\\frac{2}{1+3\\nu}\\Theta^\\frac{1+\\nu}{1+3\\nu},\\]where $\\Theta$ is any upper bound on $\\frac{1}{2}\\|x_0-x^\\ast\\|^2$.\n\\end{theorem}\n \nThis immediately follows from the proof of $\\textsc{Theorem 2.4}$. To see that, note that if for some $L_{k+1}$ and the exact solution of the line search problem $\\hat{y}_{k+1}$\n\n\\[\\langle \\alpha_{k+1}\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-V_{z_k}(z_{k+1})\\leq \\alpha^2_{k+1}L_{k+1}\\left(f(x_{k+1})-f(\\hat{y}_{k+1})+\\frac{\\tau_k\\varepsilon}{2}\\right)\\] holds true, then by definition of $y_{k+1}$ we have\n\\[\\langle \\alpha_{k+1}\\nabla f(x_{k+1}),z_k-z_{k+1}\\rangle-V_{z_k}(z_{k+1})\\leq \\alpha^2_{k+1}L_{k+1}\\left(f(x_{k+1})-f(y_{k+1})+\\frac{\\tau_k(\\varepsilon+\\delta)}{2}\\right).\\] This leads to an analogue of \\textsc{Lemma 2.1}. 
Then, by proceeding with the proof in the same way as in \\textsc{Theorem 2.4}, one gets the desired result.\n\n\\subsection{Simplified function evaluation during line search}\n\nAs noted in \\cite{SESOP}, for objectives of a particular form the steepest descent step may be performed significantly faster.\n\nConsider a function of the form \n\n\\[f(x)=\\phi(\\bm{\\rm{A}}x)+\\psi(x),\\] where $x\\in \\mathbb{R}^n$, $\\bm{\\rm{A}}\\in \\mathbb{R}^{n\\times n}$.\n\nIf $n$ is sufficiently large, the computation of $\\bm{\\rm{A}}x$ may be the most time-consuming operation during the computation of $f(x)$. However, when performing the steepest descent step, we can be sure that $x$ is of the form $x_k-h\\nabla f(x_k)$ with $h\\geqslant 0$. Then\n\n\\[\\bm{\\rm{A}}x=\\bm{\\rm{A}}x_k-h \\bm{\\rm{A}}\\nabla f(x_k)=v_0-h v_1.\\] This shows that one may calculate the two vectors $v_0$ and $v_1$ just once at the beginning of a steepest descent step. \n\nIf $\\psi(y)$ and $\\phi(y)$ for known $y$ may be calculated in $\\mathcal{O}(n)$ arithmetic operations, then this representation of $\\bm{\\rm{A}}x$ allows us to evaluate $f(x)$ in $\\mathcal{O}(n)$ arithmetic operations after only two matrix-vector multiplications, each requiring $\\mathcal{O}(n^2)$ arithmetic operations. This may significantly decrease the cost of one steepest descent step.\n\n\\section{Numerical experiments}\n\nThe proposed methods were implemented in C$++$ and tested using modern versions of GCC, clang and ICC (Intel C Compiler) on the GNU\/Linux, Mac OS X and Microsoft Windows operating systems. The source code is available at \\url{http:\/\/github.com\/htower\/ulcm}.\n\nFor the presented computational experiments we have also implemented a variant of the conjugate gradient method proposed by Y.~Nesterov in \\cite{nesterov1989book}, which we denote as NCG. The method has high numerical stability and a number of interesting properties. In particular, it lacks a restart procedure. 
This results in an increased iteration complexity relative to ``classic'' conjugate gradient methods, which may be attributed to the necessity of solving two line search problems at each iteration. Details are presented in Algorithm \\ref{alg_ncg} and Figure \\ref{fig_ncg}.\n\n\\begin{algorithm}\n \\SetKwInOut{Input}{Input}\n \\SetKwInOut{Output}{Output}\n\n \\caption{NCG($f$, $x_0$, $\\delta$, $T$)}\n \\Input{$f$ a differentiable convex function with H$\\ddot{\\text{o}}$lder continuous gradient;\n initial point $x_0$;\n line search accuracy $\\delta$;\n number of iterations $T$.}\n $y_{-2} \\gets x_0$,\n $y_{-1} \\gets x_0$,\n $y_{ 0} \\gets x_0$ \\\\\n \\For{$k = 0$ to $T-1$}\n {\n $\\alpha_k \\gets \\operatornamewithlimits{argmin}\\limits_{\\alpha \\in \\mathbb{R}} f(x_k + \\alpha (y_{k-2} - x_k))$\\\\\n $y_k=x_k+\\alpha_k(y_{k-2} - x_k)$\\\\\n $\\beta_k \\gets \\operatornamewithlimits{argmin}\\limits_{\\beta\\ge 0} f(y_k - \\beta \\nabla f(y_k))$\\\\\n $x_{k+1}=y_k-\\beta_k\\nabla f(y_k)$\n }\n \\Return{$x_T$}\n \\label{alg_ncg}\n\\end{algorithm}\n\n\\begin{figure}\n\\includegraphics[]{pics\/cg_nesterov.pdf}\n\\caption{Illustration of the NCG method.}\n\\label{fig_ncg}\n\\end{figure}\n\nThe behaviour of the proposed methods was investigated in a series of numerical experiments on different smooth and non-smooth optimization problems. For all experiments we set the starting point $x_0$ to $10 \\cdot e$, where $e = (1,...,1)$, and the precision value $\\varepsilon = 10^{-4}$. The methods were interrupted as soon as the objective function's value became lower than $f(x^*) + 5\\varepsilon = f(x^*) + 5\\times 10^{-4}$. The dimensionality of the problem was up to $10^6$.\n\nFirst, we consider the following smooth (quadratic) problem:\n\\begin{equation}\n f(x) = \\sum_{i=1}^{n} i x_i^2 .\n\\label{eq_problem_s}\n\\end{equation}\nThis function is $L$-smooth, but the parameter $L$ grows linearly with the number of dimensions $n$ (here $L = 2n$). 
This minimization problem can be solved analytically; the optimal value $f(x^\\ast)$ is equal to $0$. The results of our experiments are presented in Table~\\ref{tbl_s} and Figure~\\ref{fig_s36}.\n\\begin{table}\n\\begin{tabular}{r|l||r|r|r|r|r|r}\n\\multirow{2}{26mm}{$n$, problem size} & \\multirow{2}{10mm}{$f(x_0)$} & \\multicolumn{2}{c|}{UFGM} & \\multicolumn{2}{c|}{ULCM} & \\multicolumn{2}{c}{NCG} \\\\\n & & iterations & t, sec. & iterations & t, sec. & iterations & t, sec. \\\\ \\hline\n$10^3$ & $5 \\cdot 10^7$ & 743 & 0.035 & 722 & 0.035 & 121 & 0.004 \\\\\n$10^4$ & $5 \\cdot 10^9$ & 3230 & 1.429 & 3459 & 3.233 & 385 & 0.079 \\\\\n$10^5$ & $5 \\cdot 10^{11}$ & 15231 & 141.2 & 18053 & 372.6 & 1217 & 2.796 \\\\\n$10^6$ & $5 \\cdot 10^{13}$ & 73185 & 6857 & 84117 & 22373 & 3850 & 98.40 \\\\\n\\end{tabular}\n\\caption{Methods' complexity for the smooth problem.}\n\\label{tbl_s}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics[]{pics\/s3.pdf}\n\\includegraphics[]{pics\/s6.pdf}\n\\caption{Convergence of the methods for the smooth problem with $n = 10^3$ (top) and $n = 10^6$ (bottom). The solid, dotted and dashed lines stand for the UFGM, ULCM and NCG methods, respectively.}\n\\label{fig_s36}\n\\end{figure}\n\nNext, we consider the following non-smooth problem:\n\\begin{equation}\n f(x) = \\max_{i=1,...,n} x_i + \\frac{\\mu}{2} \\| x \\|_2^2.\n\\label{eq_problem_ns}\\end{equation}\n\nIn our experiments $\\mu=0.1$.\nThis function is differentiable almost everywhere. Though its gradient is not globally H$\\ddot{\\text{o}}$lder continuous, it satisfies the H$\\ddot{\\text{o}}$lder continuity condition on any bounded set. 
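Since the objective in (\ref{eq_problem_ns}) is strongly convex and invariant under permutations of coordinates, its unique minimizer has equal coordinates, $x^\ast = c^\ast e$. A short derivation (added here for completeness) then yields the optimal value:

```latex
% Restrict to symmetric points x = c e, with e = (1,...,1):
\[
  f(c\,e) \;=\; c + \frac{\mu n}{2}\,c^{2},
  \qquad
  \frac{d}{dc}\,f(c\,e) \;=\; 1 + \mu n c \;=\; 0
  \;\Longrightarrow\;
  c^{\ast} = -\frac{1}{\mu n},
\]
\[
  f(x^{\ast}) \;=\; -\frac{1}{\mu n} + \frac{1}{2\mu n}
  \;=\; -\frac{1}{2\mu n}.
\]
```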
\n\nThis minimization problem can be solved analytically; the optimal value $f(x^\\ast)$ is equal to $-\\frac{1}{2\\mu n}=-\\frac{5}{n}$.\n\nThe gradient (subgradient, in case $f$ is not differentiable at $x$) can be evaluated as\n\\[\n \\nabla f(x) = \\mu x + z(x), \\quad\n z(x) = (0,...,0,1,0,...,0) ,\n\\]\nwhere $1$ is located at position $k = \\operatornamewithlimits{argmax}\\limits_{i=1,...,n} x_i$.\n\nThe results are shown in Table~\\ref{tbl_ns} and Figure~\\ref{fig_ns3_ns6}.\n\n\\begin{table}\n\\begin{tabular}{r|l||r|r|r|r}\n\\multirow{2}{26mm}{$n$, problem size} & \\multirow{2}{10mm}{$f(x_0)$} & \\multicolumn{2}{c|}{UFGM} & \\multicolumn{2}{c}{ULCM} \\\\\n & & iterations & t, sec. & iterations & t, sec. \\\\ \\hline\n$10^3$ & $1 \\cdot 10^4$ & 535795 & 17.48 & 1376 & 0.175 \\\\\n$10^4$ & $1 \\cdot 10^5$ & 706870 & 233.8 & 6930 & 6.059 \\\\\n$10^5$ & $1 \\cdot 10^6$ & 1751285 & 4713 & 6950 & 34.18 \\\\\n$10^6$ & $1 \\cdot 10^7$ & 4341186 & 165435 & 6977 & 575.1 \\\\\n\\end{tabular}\n\\caption{Methods' complexity for the non-smooth problem.}\n\\label{tbl_ns}\n\\end{table}\n\n\\begin{figure}\n\\includegraphics[]{pics\/ns3.pdf}\n\\includegraphics[]{pics\/ns6.pdf}\n\\caption{Convergence of the methods for the non-smooth problem with $n = 10^3$ (top) and $n = 10^6$ (bottom).}\n\\label{fig_ns3_ns6}\n\\end{figure}\n\nNote that in our particular case the ULCM and UFGM methods become identical if the steepest descent step of the ULCM method is replaced with a gradient descent step of length $\\frac{1}{L_{k+1}}$; hence, all the differences in actual performance may be attributed to the line search procedure.\n\nThe results of our experiments may be summarized as follows:\n\\begin{enumerate}\n \\item For the smooth problem (\\ref{eq_problem_s}) the NCG method showed the best performance: its convergence rate exceeds those of the UFGM and ULCM methods by up to two orders of magnitude. 
Although the ULCM method took a comparable number of iterations to converge, it was slower (about 3 times) in terms of running time. \n \\item For the non-smooth problem (\\ref{eq_problem_ns}) the situation is the opposite. In that case the ULCM method significantly outperformed UFGM, both in terms of required iterations and elapsed time. In the case of $10^6$ variables our method converged about 300 times faster.\n\\end{enumerate}\n\n\\section*{Conclusions}\nIn this paper we propose the first primal-dual method for non-smooth convex optimization with an auxiliary line search. Practical experiments show that this method significantly outperforms Nesterov's Universal Fast Gradient Method \\cite{nesterov2015universal}. Moreover, we prove that the presented method is also optimal for all problems with an intermediate level of smoothness. The advantage of such an approach is that one can generalize it to stochastic programming using mini-batches \\cite{gasnikov2016universal} and to gradient-free methods \\cite{dvurechensky2017randomized}. \n\n\\section*{Acknowledgements}\n\nThe authors would like to thank Boris Polyak and Yurii Nesterov for helpful comments.\n\n\\section*{Funding}\nThis work was partially supported by RNF grant 17-11-01027.\n\n\\bibliographystyle{plain}