diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpmme" "b/data_all_eng_slimpj/shuffled/split2/finalzzpmme" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpmme" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction and summary: can one hear the shape of a CFT?}\n\\label{sec:intro} \n\nIt is often illuminating to ask how a quantum field theory behaves when it is deformed. When the theory is a conformal field theory (CFT), a particularly useful concept of deformations is the ``space of CFTs,'' on which an individual theory is just a single point. There are two natural and common ways to formulate the space of CFTs, both of which lead to a rich and fascinating structure. The first is to use the fact that conformal symmetry (or some extension of it) together with the operator product expansion (OPE) allow one to define CFTs in terms of a discrete set of ``data'' -- typically, operator dimensions and OPE coefficients -- so that, roughly speaking, the space of CFTs is the space of such data. One can then study which points in this space satisfy abstract CFT axioms. Remarkably, such constraints can lead not only to the exclusion of large regions of CFT data, but can sometimes sharply pinpoint specific theories living at the boundary of the allowed region, and can even be a practical method for computing results about a specific universality class of CFTs. The challenge in this conformal bootstrap approach is to extract the constraints in an efficient and useful way. The second concept of a space of CFTs arises when a theory contains one or more exactly marginal operators, which connect continuous families of CFTs. Understanding this moduli space of deformations can provide a new perspective on the original CFT. \n\nOur goal in this paper is to describe a set of tools -- and more generally, an organizing principle -- for 2d CFTs that sheds light on both of these approaches. The main technology will be harmonic analysis on the fundamental domain $\\mathcal{F} = \\mathbb H\/SL(2,\\mathbb Z)$.\n\nIn two dimensions, CFTs have a rich structure due to the infinite-dimensional conformal symmetry as well as modular covariance of local torus observables. For example, the torus partition function must be modular-invariant. Viewing the partition function as a positive sum of characters of the Virasoro algebra, the mechanism by which modular invariance is satisfied remains obscure and rather remarkable. On the other hand, harmonic analysis on $\\mathcal{F}$ builds in the constraint of modular invariance automatically. This comes at the expense of obscuring unitarity and discreteness, in a reversal of the typical modular bootstrap method \\cite{Hellerman:2009bu}.\\footnote{Harmonic analysis on the Euclidean conformal group \\cite{Dobrev:1977qv} has been an extremely useful tool for the study of CFT four-point functions \\cite{Cornalba:2006xm,Costa:2012cb,Cornalba:2007fs}. It has played a central role in numerous recent developments including the Lorentzian inversion formula and the SYK model \\cite{Maldacena:2016hyu, Gadde:2017sjg, Hogervorst:2017sfd,Caron-Huot:2017vep,Simmons-Duffin:2017nub,Kravchuk:2018htv,Karateev:2018oml,Liu:2018jhs}. 
There are also conceptual similarities between our approach and the Polyakov-Mellin bootstrap for four-point functions \\cite{Polyakov:1974gs,Gopakumar:2016wkt,Gopakumar:2016cpb,Gopakumar:2018xqi,gopak}, where crossing symmetry is completely manifest but positivity and analyticity properties are obscured.} Fortunately, mathematicians have extensively developed the theory of harmonic analysis on $\\mathcal{F}$, reviewed in e.g. \\cite{Terras_2013}. All square-integrable modular functions can be uniquely decomposed into a complete set of eigenfunctions of the Laplacian on the upper half-plane. These basis functions, comprised of both a continuous and discrete series, are themselves fully modular-invariant. \n\nIn promoting the role of this technology in 2d CFT, we will apply it to the Narain family of free boson CFTs, and then to general 2d CFTs. Let us now discuss these in turn.\n\nThe Narain lattice CFTs, well-known to string theorists from the literature on toroidal compactifications, turn out to be especially well-suited to the application of spectral methods because their central charge is saturated by currents generating the $U(1)^c \\times U(1)^c$ algebra. Spectral decomposition of these CFT partition functions -- more precisely, the $U(1)^c \\times U(1)^c$ primary-counting partition functions (henceforth ``primary partition functions'') -- makes certain properties of the theories manifest. One of the main advantages of the spectral representation is that it naturally separates the partition function into a piece that is constant over the moduli space, and residual pieces whose averages on the moduli space vanish. Consequently, the recently-studied ensemble averages \\cite{Maloney:2020nni, Afkhami-Jeddi:2020ezh} become completely transparent. The residual pieces are square-integrable and hence admit a spectral decomposition, while the moduli-independent piece, though ``almost'' but not quite square-integrable, can be treated using techniques of \\cite{zbMATH03796039}. Moreover, in many cases we can obtain the explicit spectral decomposition of the residual pieces, which provide a concrete form for the deviation from the ensemble average. \n\nOne of our most surprising findings is that, at least in the cases considered herein, the Narain lattice primary partition functions have a simple overlap with all of the discrete eigenfunctions, known as Maass cusp forms.\\footnote{These are in contrast with the continuous, plane-wave normalizable eigenfunctions, which are real analytic Eisenstein series.} What makes this result surprising is that the cusp forms themselves cannot be written in closed form and are, in a precise mathematical sense that we will review \\cite{sarnak, PhysRevLett.69.2188,1993MaCom..61..245H,Steil:1994ue, Sarnak_1987}, {\\it chaotic} linear combinations of elementary functions. For the special case of the $c=1$ free boson compactified on a circle of radius $r$, we show that the overlap of the partition function with the cusp forms vanishes for any $r$. The same is true for $c<1$ minimal models.\\footnote{This statement is slightly subtle. In all cases we decompose the CFT partition function after first dividing by powers of the Dedekind $\\eta(\\tau)$ function, dressed by factors of $\\im\\tau$ to retain modular invariance. 
For Narain lattices, there is one obvious natural choice for the power of $\\eta(\\tau)$, but for minimal models there are multiple natural choices and the statement only holds true for one of them.} However, while one might reasonably have expected the same to be true of all $c$, we show that at $c=2$, as well as at $c>2$ in regions of the moduli space where we can perform the computation, the overlap with the cusp forms does not vanish but instead has an essentially closed form: in a sense to be made precise, the resulting overlap is equal to the number 8 for all cusp forms. This closed-form relation between the spectrum of free bosons on a Narain lattice, which is an integrable model, and cusp forms, which exhibit chaotic properties, deserves further study.\n\nCertain other properties are nicely encoded in the spectral decomposition of Narain lattice partition functions.\\footnote{For one, this decomposition was used in \\cite{Angelantonj:2011br} as an elegant way to regulate one-loop string theory integrals while manifestly preserving worldsheet modular invariance.} The fact that the Narain lattice partition functions are eigenfunctions of the difference between the target space and worldsheet Laplacians \\cite{Obers:1999um,Maloney:2020nni} arises in the spectral decomposition as the fact that the coefficients of the eigenfunctions of the worldsheet Laplacian are target space Laplacian eigenfunctions with a correlated eigenvalue. In the special case of $c=2$, the ``triality'' under exchange among the three complex moduli \\cite{Dijkgraaf:1987jt}, that of the worldsheet and two from the target space, is made completely manifest in the spectral representation. Perhaps most interesting from a modular bootstrap perspective is that the scalar primary spectrum in Narain CFTs is completely determined by the overlaps with the continuous (rather than discrete) series of basis eigenfunctions in the spectral decomposition, and therefore it provides a new handle on the problem of determining the maximal gap in the scalar primary spectrum in the Narain lattice moduli space. We demonstrate this explicitly at $c=2$.\n\n\nTurning now to ``generic'' 2d CFTs, by which we mean those with Virasoro symmetry alone and $c>1$, spectral decomposition remains a powerful, if somewhat trickier, technique to implement. Harmonic analysis is applicable to modular objects that are square-integrable -- or close to it, in a sense articulated by Zagier \\cite{zbMATH03796039} and used heavily here -- but primary partition functions of generic CFTs do not obey this constraint: instead, they possess exponential divergences at the cusp at $\\tau\\rightarrow i\\infty$, due to the vacuum state and any other ``light'' primaries, defined as those with conformal dimension $\\Delta \\leq {c-1\\over 12}$. Their presence forces us to address the key question of how to massage partition functions into a form fit for spectral decomposition, and what this means physically. We are led to advance the following perspective for general 2d CFTs. The spectral decomposition may be applied to a partition function after subtracting off all light states in a modular-invariant way. We refer to this as subtracting the ``modular completion'' of the light spectrum. The remainder after subtraction, which we call $Z_{\\rm spec}$, is modular-invariant and square-integrable, and thus admits a spectral decomposition. 
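Schematically, then, the decomposition of a general partition function that we will adopt is
\\be
Z(\\tau) = \\widehat{Z}_{\\rm light}(\\tau) + Z_{\\rm spec}(\\tau),
\\ee
where $\\widehat{Z}_{\\rm light}$ is shorthand for the modular completion of the light spectrum; it is the second term to which the harmonic analysis of Section \\ref{sec:harmonicAnalysis} directly applies.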
\n\nOne rigorous consequence of this structure that emerges rather easily, as shown in Section \\ref{secspecdet}, is a new result on {\\it spectral determinacy}, i.e. the minimal content necessary to fully determine the spectrum of a 2d CFT \\cite{Douglas:2010ic,Kaidi:2020ecu}. We show that, under the widely-held assumption that the cuspidal eigenspectrum of $SL(2,\\mathbb{Z})$ is non-degenerate (e.g. \\cite{cusp82,hejh,sque}), the entire primary spectrum of a 2d CFT is {\\it uniquely} fixed by the light spectrum, the scalar spectrum, and the spectrum of any single nonzero integer spin $j$ (see Fig.~\\ref{specfig}).\\footnote{Strictly speaking, this statement can be proven only for spin $j=1$. For $j>1$ this relies on the additional conjecture that the Fourier coefficients of cusp forms are all non-zero. As we describe in Section \\ref{secspecdet} this conjecture is almost certainly true, but unproven.} As we will discuss, we think it quite plausible that this result could be strengthened to use even less data as input. \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=.6]{Spec_determinacy_fig_2}\n\\caption{In a 2d CFT, the full Virasoro primary spectrum is determined by the primary spectrum in blue. Circles represent the unitarity bound, $\\Delta\\geq j$, where $\\Delta=h+\\overline{h}$ and $j=|h-\\overline{h}|$. This statement assumes that the cuspidal eigenspectrum is non-degenerate, an unproven but widely held property of $SL(2,\\mathbb{Z})$. The $j=1$ data may, subject to a further mild assumption about Maass cusp forms, be replaced by the data of any fixed integer spin $j>0$ without affecting this conclusion. This result is explained in Section \\ref{secspecdet}.}\n\\label{specfig}\n\\end{figure}\n\n\nPhysically, the modular completion of light states represents a universal part of the CFT, or perhaps a kind of ``average,'' implied by the existence of the light spectrum. The remainder, $Z_{\\rm spec}$, captures the deviation of a given CFT spectrum around this universal part. That spectral analysis of CFT partition functions is sensitive to the distinction between light and heavy operators jibes with central properties of the black hole spectrum of quantum gravity in AdS$_3$. We explain how this interpretation provides a 2d CFT analog of the ``half-wormholes'' of \\cite{Saad:2021rcu}. This viewpoint has some limitations, and we discuss some concrete avenues for further research that could strengthen it. Fortunately, it is buttressed by application to the Narain case, in the following way. In situations where the partition function of interest belongs to an exact moduli space of CFTs, the sense of ``average'' used above may be given a literal interpretation. As noted earlier, the partition function $Z^{(c)}$ of a fixed Narain lattice CFT is a sum of the square-integrable terms, which average to zero, and the ensemble average with respect to the Zamolodchikov metric, $\\langle Z^{(c)} \\rangle$; in harmony with the above paradigm, the known result \\cite{Afkhami-Jeddi:2020ezh,Maloney:2020nni} for the average $\\langle Z^{(c)} \\rangle$ turns out to be precisely equal to the modular completion, via Poincar\\'e sum, of the vacuum state.\n\n\nThis paper is organized as follows. In Section \\ref{sec:harmonicAnalysis}, we introduce harmonic analysis on the fundamental domain $\\mathcal{F} = \\mathbb{H}\/SL(2,\\mathbb{Z})$. In Section \\ref{sec:narain}, we apply our technology to Narain's family of free boson CFTs. 
In Section \\ref{sec4}, we apply our technology to general 2d CFTs. \n\n\n\\emph{A word on notation:} In this paper, functions of complex variables are generically non-holomorphic. However, for the sake of brevity of notation, we will drop all anti-holomorphic coordinate dependence. We will write e.g. $f(z)$ instead of $f(z, \\bar{z})$, but $f$ should not be assumed to be a holomorphic function. We will also denote the real and imaginary parts of the torus modular parameter $\\tau$ as $x$ and $y$, respectively.\n\n\n\\section{Harmonic analysis on the fundamental domain}\\label{sec:harmonicAnalysis}\nWe begin by collecting some basic properties of the spectral theory of the Laplacian on the fundamental domain,\n\\begin{equation}\n\t\\mathcal{F} = \\mathbb{H}\/SL(2,\\mathbb{Z}) = \\left\\{\\tau = x + i y \\in \\mathbb{H} \\,\\bigg|\\, -{1\\over 2} < x \\leq {1\\over 2},~ |\\tau| \\geq 1\\right\\}.\n\\end{equation}\nMany important results of this subject are nicely summarized in Chapter 3 of \\cite{Terras_2013}. The Laplacian on the upper half-plane $\\mathbb{H}$ is given by\n\\begin{equation}\n\t\\Delta_\\tau = -y^2\\left(\\partial_x^2+\\partial_y^2\\right).\n\\end{equation}\nThroughout this paper we will make use of the Petersson inner-product on the space $L^2(\\mathcal{F})$ of square-integrable modular-invariant functions,\n\\begin{equation}\n\t(f,g) \\coloneqq \\int_{\\mathcal{F}}{dxdy\\over y^2}f(\\tau) \\overline{g(\\tau)}.\n\\end{equation}\n\n\n\\subsection{Spectral resolution of the Laplacian}\nThe spectrum of the Laplacian on the fundamental domain includes both discrete and continuous components. The discrete part can be expanded in an orthogonal basis\\footnote{It will turn out to be convenient to use a basis of cusp forms that are not unit-normalized. See Appendix \\ref{subApp:cusp} for more details.} of \\textbf{Maass cusp forms},\n\\begin{equation}\n\\begin{aligned}\n\t\\nu_0 =& \\, \\sqrt{3\\over \\pi} = \\vol(\\mathcal{F})^{-{1\\over 2}}\\\\\n\t\\{\\nu_{n\\ge 1}\\}:& ~~ \\Delta_\\tau \\nu_n(\\tau) = \\left({1\\over 4} + R_n^2\\right)\\nu_n(\\tau), ~ R_n > 0.\n\\end{aligned}\n\\end{equation}\nThe lowest one, $\\nu_0$, is a constant. 
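The spectral parameters $R_n$ of the nonconstant forms are sporadic real numbers that are only known numerically; the smallest is $R_1 \\approx 9.5337$ \\cite{LMFDB}. 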
The cusp forms are distinguished by having no scalar component\\footnote{In what follows unless stated otherwise the label $n$ will denote cusp forms with $n \\ge 1$.}\n\\begin{equation}\n\t\\int_{-{1\\over 2}}^{{1\\over 2}}dx\\, \\nu_n(\\tau) = 0\n\\end{equation}\nand by the fact that they decay exponentially at the cusp,\n\\begin{equation}\n\t\\nu_n(\\tau) \\sim e^{-2\\pi y}, \\quad y \\to \\infty.\n\\end{equation}\nThe continuous spectrum of the Laplacian can similarly be expanded in an orthonormal basis consisting of the \\textbf{real analytic Eisenstein series} with $s$, which labels their eigenvalue, on the critical line $\\re s = {1\\over 2}$,\n\\begin{equation}\n\\{E_{s={1\\over 2}+i\\mathbb{R}}\\} :\\, \\Delta_\\tau E_s(\\tau) = s(1-s) E_s(\\tau).\n\\end{equation}\nUnlike the cusp forms, the Eisenstein series do not decay exponentially at the cusp:\n\\begin{equation}\n\tE_s(\\tau) \\sim y^s + {\\Lambda(1-s)\\over \\Lambda(s)}y^{1-s},\\quad y\\to\\infty\n\\end{equation}\nwhere $\\Lambda(s)$ is a symmetrized version of the Riemann zeta function that satisfies a functional equation\n\\begin{equation}\\label{eq:LambdaDefinition}\n\\begin{aligned}\n\t\\Lambda(s) &\\coloneqq \\pi^{-s}\\Gamma(s)\\zeta(2s)\\\\\n\t&= \\Lambda\\({1\\over 2}-s\\).\n\t\\end{aligned}\n\\end{equation}\nWe summarize some basic facts about real analytic Eisenstein series, Maass cusp forms and their Fourier decompositions in Appendix \\ref{app:eisensteinCusp}.\n\nAny square-integrable modular-invariant function $f(\\tau)\\in L^2(\\mathcal{F})$ admits the following \\textbf{Roelcke-Selberg spectral decomposition} into these eigenfunctions \\cite{Terras_2013}:\n\\begin{equation}\\label{eq:RoelckeSelberg}\n\tf(\\tau) = \\sum_{n=0}^\\infty \\frac{(f,\\nu_n)}{(\\nu_n, \\nu_n)}\\nu_n(\\tau) + {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}}ds\\, (f,E_s)E_s(\\tau).\n\\end{equation} \n\n\\subsection{The Rankin-Selberg transform}\\label{subSec:RankinSelberg}\n\nIn this paper the inner product of various partition functions $Z$ with the real analytic Eisenstein series, $(Z,E_s)$, will play a central role. We will see that this quantity amounts to a Mellin transform of the scalar component of the partition function, and inherits many interesting analytic properties in the $s$ plane from those of the Eisenstein series. These observations date back to work of Rankin \\cite{rankin_1939} and Selberg \\cite{selberg1940bemerkungen}, whose work was usefully extended by Zagier \\cite{zbMATH03796039}.\\footnote{See e.g. \\cite{Pioline:2014bra,Green:2014yxa,DHoker:2019mib,DHoker:2019txf} for some more recent applications of this method in the string theory context.}\n\nConsider a modular-invariant function $f(\\tau)$ that is of ``rapid decay'', meaning that $f$ decays faster than any polynomial at the cusp $y=\\infty$. 
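A classic example is $f(\\tau) = y^{12}|\\Delta(\\tau)|^2$, where $\\Delta(\\tau) = \\eta(\\tau)^{24}$ is the modular discriminant: it is modular-invariant, and since $\\Delta(\\tau) = q\\prod_{n=1}^\\infty(1-q^n)^{24}$ with $q = e^{2\\pi i \\tau}$, it decays like $y^{12}e^{-4\\pi y}$ at the cusp. 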
We will define the \\textbf{Rankin-Selberg (RS) transform} of $f$ by its integral over the fundamental domain, weighted by a real analytic Eisenstein series\n\\begin{equation}\\label{eq:RankinSelberg}\n\\begin{aligned}\n\tR_s[f] &\\coloneqq \\int_{\\mathcal{F}}{dxdy\\over y^2}\\,f(\\tau) E_s(\\tau)\\\\\n\t&= \\int_{\\mathcal{F}}{dx dy\\over y^2}\\, f(\\tau) \\sum_{\\gamma\\in\\Gamma_\\infty\\backslash PSL(2,\\mathbb{Z})}\\im(\\gamma\\tau)^s\\\\\n\t&= \\int_{\\Gamma_\\infty\\backslash \\mathbb{H}}{dxdy\\over y^2}\\, f(\\tau) y^s\\\\\n\t&= \\int_0^\\infty dy \\, y^{s-2}f_0(y)\n\\end{aligned}\n\\end{equation}\nwhere $f_0(y)$ is the zeroth Fourier component of $f$\n\\begin{equation}\n\tf_0(y) = \\int_{-{1\\over 2}}^{{1\\over 2}}dx\\, f(\\tau).\n\\end{equation}\nIn (\\ref{eq:RankinSelberg}) we have made use of a standard \\emph{unfolding trick}, where we employ the fact that the Eisenstein series is a Poincar\\'e series. The Mellin transform in the last line of (\\ref{eq:RankinSelberg}) converges provided $\\re s > 1$. However as described in Appendix \\ref{subApp:eisenstein}, the Eisenstein series has a meromorphic continuation in $s$ that is holomorphic everywhere in the $s$ half-plane to the right of the critical line $\\re s = {1\\over 2}$ except for a simple pole with constant residue ${3\\over \\pi}$ at $s=1$, thus endowing the RS transform with its own meromorphic continuation. The Eisenstein series also satisfies the functional equation (\\ref{eq:eisensteinFunctionalEqn}), which implies a functional equation for the RS transform\n\\begin{equation}\n\t\\Lambda(s)R_s[f] = \\Lambda(1-s)R_{1-s}[f].\t\n\\end{equation}\nMoreover, the residue of the RS transform at the pole $s=1$ encodes the average of the function $f$ over the fundamental domain\n\\begin{equation}\\label{avgrs}\n\t\\Res_{s=1}R_s[f] = {3\\over \\pi}\\int_{\\mathcal{F}}{dxdy\\over y^2}\\, f(\\tau).\n\\end{equation}\n\nHowever, we will see that CFT primary partition functions are not of rapid decay, and are thus not subject to the spectral analysis as stated in (\\ref{eq:RoelckeSelberg}). Thankfully, Zagier has developed the RS method to accommodate modular functions that are of ``slow growth'' at the cusp \\cite{zbMATH03796039}. We will summarize many of the relevant details from \\cite{zbMATH03796039} here. When we say that a modular-invariant function $f$ is of slow growth at the cusp, what we mean is \n\\begin{equation}\n\tf(\\tau) \\sim \\varphi(y) + \\text{(sub-polynomial)}\n\t,\\quad y\\to\\infty,\n\\end{equation}\nwhere $\\varphi(y)$ grows sub-exponentially,\n\\begin{equation}\\label{eq:varphiDefinition}\n\t\\varphi(y) = \\sum_{i=1}^m {c_i\\over n_i!}y^{\\alpha_i}\\log^{n_i}y\\quad (c_i, \\alpha_i \\in \\mathbb{C}\\,,~n_i \\in \\mathbb{Z}_{\\geq 0}).\n\\end{equation}\nIn Section \\ref{sec:narain}, we will see that the primary partition functions of Narain lattice CFTs correspond to the maximally simple $m=1,\\, c_1=1,\\, n_1 = 0,\\, \\alpha_1 = {c\\over 2}$. Then the definition of the RS transform is simply modified to omit the problematic terms at the cusp, namely\n\\begin{equation}\\label{RSsub}\n\tR_s[f] \\coloneqq \\int_0^\\infty dy\\, y^{s-2}\\left(f_0(y)-\\varphi(y)\\right).\n\\end{equation}\nThis converges when $\\re s$ is sufficiently large. This procedure, which essentially amounts to throwing out the problematic terms in the scalar part of $f$, may seem ad hoc, but it turns out to be quite natural. 
The reason is that this definition of the RS transform can be shown \\cite{zbMATH03796039} to be equivalent to a renormalized integral of the product of the function weighted by the Eisenstein series over the fundamental domain\n\\begin{equation}\n\\begin{aligned}\n\tR_s[f] &= \\int_{\\mathcal{F}_T} {dxdy\\over y^2} f(\\tau)E_s(\\tau) + \\int_{\\mathcal{F}-\\mathcal{F}_T} {dxdy\\over y^2}\\left(f(\\tau)E_s(\\tau) - \\varphi(y)\\varphi_s(y)\\right) - h^{(T)}_s - {\\Lambda(1-s)\\over \\Lambda(s)}h^{(T)}_{1-s}\\\\\n\t&\\coloneqq \\rnrn\\left(\\int_{\\mathcal{F}}{dxdy\\over y^2}f(\\tau)E_s(\\tau)\\right)\n\t\\label{eq:ZRS}\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{F}_T = \\mathcal{F}\\cap \\{y\\le T\\}$ is the fundamental domain with an infrared cutoff, $\\varphi_s(y)$ is the zeroth Fourier mode of the Eisenstein series\n\\begin{equation}\n\t\\varphi_s(y) \\coloneqq y^s + {\\Lambda(1-s)\\over\\Lambda(s)}y^{1-s} \n\\end{equation}\nand \n\\begin{equation}\n\th^{(T)}_s \\coloneqq \\int_0^T dy\\, y^{s-2}\\varphi(y).\n\\end{equation}\n\nThis modification of the RS transform will be both conceptually and technically useful in our spectral analysis of CFT partition functions. To explain why, consider the case of the constant function, $f(\\tau) = 1$. In this case $f_0(y) = \\varphi(y) = 1$ so its RS transform trivially vanishes. In particular one has\n\\begin{equation}\n\tR_s[1] = \\int_{\\mathcal{F}}{dx dy\\over y^2}E_s(\\tau) = 0,\\quad 0<\\re s < 1.\n\\end{equation}\nSimilarly, if we consider the case that $f$ is an Eisenstein series, $f(\\tau) = E_{r}(\\tau)$, then again its RS transform \\eqref{RSsub} vanishes because $f_0(y) = \\varphi(y) = \\varphi_r(y)$:\n\\begin{equation}\n\tR_s[E_r(\\tau)] = \\rnrn\\left(\\int_{\\mathcal{F}}{dxdy\\over y^2}E_r(\\tau)E_s(\\tau)\\right) = 0.\n\\end{equation}\nThis motivates the following interpretation of the renormalized integral of $f$ in the fundamental domain. Since $R_s[E_r(\\tau)]=0$, we can subtract from $f(\\tau)$ a suitable combination of Eisenstein series without affecting its renormalized integral,\n\\begin{equation}\\label{eq:fTildeDefinition}\n\t\\widetilde f(\\tau) \\coloneqq f(\\tau) - \\sum_{i|\\alpha_i\\ge {1\\over 2}}\\left.c_i{\\partial^{n_i}\\over\\partial s^{n_i}}E_s(\\tau)\\right|_{s=\\alpha_i},\n\\end{equation}\nsuch that the resulting function is square integrable:\n\\begin{equation}\n\tR_s[f] = \\int_{\\mathcal{F}}{dxdy\\over y^2}\\, \\widetilde{f}(\\tau) E_s(\\tau).\n\\end{equation}\nThis converges for $\\re s$ in a certain range dictated by the powers of $y$ in $\\varphi(y)$. The RS transform again inherits the meromorphic continuation of the Eisenstein series in the $s$ plane, now with poles in the plane to the right of $\\re s = {1\\over 2}$ at $s=1$ and $s=\\max(\\alpha_i,1-\\alpha_i)$. \n\nIn other words -- and as groundwork for the viewpoint espoused later -- one simply forms a new, square-integrable modular-invariant function $\\widetilde f (\\tau)$ by subtracting the appropriate linear combination of $E_s(\\tau)$ and its derivatives from $f(\\tau)$, one for each term in \\eqref{eq:varphiDefinition}.\\footnote{The reader may notice that terms with $\\alpha_i < 1\/2$ are not subtracted off in \\eqref{eq:fTildeDefinition}, but are nevertheless included in the definition of $\\varphi(\\tau)$. They are so included because they spoil convergence of the original RS transform for some range of $s$. 
However, it also follows from Zagier's theorem that for RS transforms with $\\re s=1\/2$ -- the only value needed for spectral decomposition -- the terms with $\\alpha_i < 1\/2$ may be safely neglected in the RS transform. This is consistent, as it must be, with the fact that they do not violate square-integrability, so $\\widetilde f(\\tau)$ must admit a Roelcke-Selberg spectral decomposition without any modifications.} The procedure of mapping the divergent terms to modular-invariant functions with the same asymptotic growth is what we will call ``modular completion.'' The new function $\\widetilde f(\\tau)$ then admits a spectral decomposition \\eqref{eq:RoelckeSelberg}, with the overlap integrals computed using the unmodified RS transform. Thus, the form of the spectral decomposition for functions of slow growth is\n\\begin{equation}\\label{eq:modifiedRoelckeSelberg}\n\tf(\\tau) = \\sum_{i|\\alpha_i\\ge {1\\over 2}}\\left.c_i{\\partial^{n_i}\\over\\partial s^{n_i}}E_s(\\tau)\\right|_{s=\\alpha_i} + \\sum_{n=0}^\\infty \\frac{( f,\\nu_n)}{(\\nu_n, \\nu_n)}\\nu_n(\\tau) + {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}}ds\\, R_{1-s}[\\widetilde f]E_s(\\tau),\n\\end{equation}\nwhich is equivalent to (\\ref{eq:RoelckeSelberg}) applied to the function $\\widetilde f(\\tau)$. \n\n\\subsection{Comparison with harmonic analysis on the Euclidean conformal group}\\label{eucharm}\n\nHarmonic analysis on the Euclidean conformal group $SO(d+1,1)$ \\cite{Dobrev:1977qv} is a powerful tool for studying the constraints of conformal invariance on four-point functions of local operators; it has underlain many recent developments in the conformal bootstrap \\cite{Cornalba:2006xm,Costa:2012cb,Cornalba:2007fs,Murugan:2017eto,Gadde:2017sjg,Hogervorst:2017sfd,Caron-Huot:2017vep,Hogervorst:2017kbj,Kravchuk:2018htv,Karateev:2018oml,Liu:2018jhs}, and it has some parallels with the technology developed in this section. Here we briefly review some elements of $SO(d+1,1)$ harmonic analysis in order to contextualize our results.\n\nIn Euclidean kinematics, the four-point function of local operators $\\langle \\phi_1\\phi_2\\phi_3\\phi_4\\rangle$, taken to be scalars for simplicity, is, up to a kinematic prefactor, a function $G(z,\\bar z)$ of a single complex conformally-invariant cross-ratio $z$. The function $G(z,\\bar z)$ admits a conformal block decomposition\n\\begin{equation}\n\tG(z,\\bar z) = \\sum_{\\Delta,\\, j} C_{12\\mathcal{O}}C_{34\\mathcal{O}}F_{\\Delta, j}(z,\\bar z),\n\\end{equation}\nwhere the sum is over local primaries $\\mathcal{O}$ with dimension $\\Delta$ and spin $j$ appearing in the $\\phi_1 \\times \\phi_2$ OPE, $F_{\\Delta, j}$ are the conformal blocks that encode the kinematical contribution of $\\mathcal{O}$ and its descendants, and the coefficients $C$ are the corresponding structure constants. Harmonic analysis on the conformal group allows one to decompose the function $G(z,\\bar z)$ into a complete basis of eigenfunctions of the conformal Casimir that are single-valued functions of the cross-ratio in Euclidean kinematics, namely, the conformal partial waves $\\Psi_{\\Delta, j}$ (which are a particular linear combination of the conformal block $F_{\\Delta, j}$ and the ``shadow block'' $F_{d-\\Delta,j}$) \\cite{Dobrev:1977qv,Costa:2012cb,Caron-Huot:2017vep,Simmons-Duffin:2017nub,Hogervorst:2017sfd}. 
For real external dimensions, \n\\begin{equation}\\label{eq:CPWDecomposition}\n\tG(z,\\bar z) = \\text{(non-normalizable)} + \\sum_{j=0}^\\infty\\,\\int_{{d\\over 2}}^{{d\\over 2}+i\\infty}{d\\Delta\\over 2\\pi i}\\, C_j(\\Delta)\\Psi_{\\Delta, j}(z,\\bar z).\n\\end{equation}\nIn the above equation we have emphasized that the contributions of certain operators in the OPE, in particular those with $\\Delta \\le {d\\over 2}$ (which always includes the identity operator), must be subtracted from the four-point function in order to apply the spectral decomposition and preserve normalizability. The coefficient function $C_j(\\Delta)$ is a sort of resolvent for the structure constants, in the sense that it has simple poles in $\\Delta$ corresponding to the dimensions of operators in the OPE\\footnote{Below we have assumed a natural normalization of the conformal partial waves.}\n\\begin{equation}\\label{eq:coefficientFunctionResidue}\n\t-\\Res_{\\Delta = \\Delta_i} C_{j_i}(\\Delta) = C_{12\\mathcal{O}_i}C_{34\\mathcal{O}_i},\\quad \\text{for generic $\\Delta_i$}.\n\\end{equation}\nWe say that this applies for ``generic'' $\\Delta_i$ due to the fact that the conformal partial waves have poles in the right half $\\Delta$ plane with residues that are themselves conformal blocks with special values of the dimension (which may thus shift the relationship between the structure constants and the residue of the coefficient function), and due to the need for subtractions of states that give non-normalizable contributions. That the conformal partial waves are orthogonal with respect to a certain inner product means that one can extract the coefficient function $C_j(\\Delta)$ by means of a \\emph{Euclidean inversion formula} \\cite{Caron-Huot:2017vep}\n\\begin{equation}\\label{eq:euclideanInversion}\n\tC_j(\\Delta) = N_j(\\Delta) \\int_{\\mathbb{C}}d^2 z \\, \\mu(z,\\bar z) \\Psi_{\\Delta, j}(z,\\bar z) G(z,\\bar z),\n\\end{equation}\nwhere $N_j(\\Delta)$ is a proportionality constant and $\\mu(z,\\bar z)$ a measure, whose detailed forms are known but unimportant for our purposes. \n\nLet us now make the analogy with harmonic analysis on the fundamental domain. One may think of the modified Roelcke-Selberg decomposition (\\ref{eq:modifiedRoelckeSelberg}) as analogous to (\\ref{eq:CPWDecomposition}). When studying the spectral decomposition of both the Narain theories in Section \\ref{sec:narain} and especially of more general CFTs in Section \\ref{sec4}, we will likewise encounter the need to subtract non-normalizable contributions: in particular, those corresponding to the {\\it light} primary operators. The RS transform (\\ref{eq:RankinSelberg}) discussed in the previous subsections, which extracts the overlap coefficient $(f,E_s) = R_{1-s}[f]$ that appears in the Roelcke-Selberg decomposition (\\ref{eq:RoelckeSelberg}) from an integral of $f$ weighted by an Eisenstein series over the fundamental domain, is closely analogous to this Euclidean inversion formula, but with some key differences.\n\nFor one, an obvious difference from (\\ref{eq:CPWDecomposition}) is that in applying harmonic analysis on the fundamental domain, we are expanding the partition function into a complete basis of functions that are themselves modular-invariant, unlike the conformal partial waves appearing above, which are not crossing-symmetric.\\footnote{A crossing-symmetric decomposition of CFT correlators was proposed in \\cite{gopak} (see also \\cite{Mazac:2018qmi,Maloney:2016kee}). 
It would be productive to further explore the analogy between these proposals and the present work.} This is related to the fact that a key feature of the Euclidean inversion formula for correlators is that the eigenfunctions are labeled by their spin $j$, which is restricted to be an integer. By contrast, the eigenfunctions in the Roelcke-Selberg decomposition do not have such a spin label, but rather are all certain infinite sums over spins. In fact, once one has determined the Roelcke-Selberg decomposition, one can use it to evaluate the partition function for complex values of $x$ and $y$ as long as $\\textrm{Re}(y)>0$, so in this sense the decomposition is equally Lorentzian or Euclidean, even though our method for extracting it relies on the partition function in the Euclidean regime $(x,y) \\in \\mathbb{H}$. Another important difference stemming from the built-in modular invariance is that there is no immediate analog of (\\ref{eq:coefficientFunctionResidue}) that directly relates the RS transform of a partition function to the degeneracies of particular operators in the spectrum. However, after some subtractions or if $Z$ has particularly tame asymptotic growth, we are able to extract a resolvent from $R_s[Z]$ via a formal Laplace transform (e.g. for the $c=1$ free boson; see Appendix \\ref{app:c=1Resolvent}). A final difference is the presence of the infinite family of cusp forms in the Roelcke-Selberg decomposition, which do not have any obvious analog in the decomposition into conformal partial waves.\n\nIn \\cite{Caron-Huot:2017vep}, by exploiting an analogy with the Froissart-Gribov formula for the partial wave coefficients in scattering amplitudes, Caron-Huot was able to deform the integration region in the Euclidean inversion formula (\\ref{eq:euclideanInversion}) to one over the Lorentzian causal diamond $z,\\bar z \\in (0,1)$ with profound implications for the structure of the spectrum of a CFT. In particular, by providing a formula for the CFT data (for operators of sufficiently large spin) that is analytic in the spin $j$, the result of \\cite{Caron-Huot:2017vep} formalized the idea that the local operator content of a CFT is organized into Regge trajectories of increasing spin. This implies a remarkable rigidity of the spectrum. Although our methods for $SL(2,\\mathbb{Z})$ are apparently Euclidean, we will nevertheless see in Section \\ref{secspecdet} that the spectral analysis of 2d CFT partition functions has conceptually similar implications for the rigidity of the spectrum of local operators. \n\n\n\\section{Application to Narain lattice CFTs}\\label{sec:narain}\nIn this section we will discuss the application of techniques from harmonic analysis on $\\mathcal{F}$ to the study of partition functions of free boson CFTs based on an even self-dual lattice $\\Lambda\\subset \\mathbb{R}^{c,c}$. Narain lattice CFTs are characterized by a $U(1)^c\\times U(1)^c$ current algebra, and thus provide a natural testing ground for the application of the technology described in the previous section. The reason is the following. 
For unitary CFTs with two copies of a $U(1)^c$ chiral algebra, the primary partition function $Z_p(\\tau)$ that counts $U(1)^c\\times U(1)^c$ primary operators is given in terms of the full partition function $Z(\\tau)$ by\n\\begin{equation}\n\tZ_p(\\tau) \\coloneqq \\left(y^{1\/2}|\\eta(\\tau)|^2\\right)^c Z(\\tau),\n\t\\label{eq:stripoffcrap}\n\\end{equation}\nwhich only diverges polynomially at the cusp\n\\begin{equation}\n\tZ_p(\\tau) \\sim y^{c\\over 2} + \\mathcal{O}(y^{c\\over 2}e^{-2\\pi \\Delta_{*} y}), \\quad y \\to \\infty\n\\end{equation}\nwhere $\\Delta_{*}$ is the dimension of the lightest non-trivial primary operator in the spectrum. The divergence is merely polynomial, rather than exponential, because Narain CFTs have central charge equal to the number of currents; more precisely, $c=c_{\\rm currents}$, where $c_{\\rm currents}$ parameterizes the asymptotic density of current composites as defined in \\eqref{ccurr}. Moreover, the only source of this divergence is the universal contribution of the identity operator. Thus the primary partition function is a modular-invariant function of ``slow growth'' on the fundamental domain, so we can apply the technology of Section \\ref{subSec:RankinSelberg} to understand its spectral decomposition. In contrast, partition functions of generic CFTs diverge {\\it exponentially} at the cusp, and so in that case more care must be taken to make sense of a spectral decomposition, as we discuss further in Section \\ref{sec4}.\n\nIn the remainder of this section we drop the subscript on the primary partition function $Z_p(\\tau)$, leaving the restriction to primaries implicit. \n\n\\subsection{Spectral decomposition of specific partition functions}\nWe will now explicitly compute the spectral decomposition of Narain CFT partition functions, $Z^{(c)}$, at small values of $c$, and will outline the general computation for general $c$. We do so by explicit computation of the RS transform $(Z^{(c)},E_s) = R_{1-s}[Z^{(c)}]$ -- which, as we have seen in Section \\ref{subSec:RankinSelberg}, amounts to a Mellin transform of the scalar sector of the partition function -- and also computation of $(Z^{(c)},\\nu_n)$, the inner product with the cusp forms.\n\nNarain lattices are far from unique; a particular lattice is specified by a point in Narain's moduli space $\\mathcal{M}_c$\n\\begin{equation}\n\t\\mathcal{M}_c = O(c,c;\\mathbb{Z})\\backslash O(c,c;\\mathbb{R}) \/ O(c)\\times O(c).\n\\end{equation}\nSuch theories are realized as sigma models with toroidal target space $T^c$, with the data of the Narain lattice encoded in the $c^2$ degrees of freedom of the metric and $B$-field flux of the target. The modular-invariant primary partition function for a Narain lattice CFT with central charge $c$ is given by the following sum over momentum and winding modes\n\\begin{equation}\\label{eq:narainPartitionFunction}\n\tZ^{(c)}(\\tau;m) = y^{c\\over 2} \\sum_{n_a,w^a\\in\\mathbb{Z}^c}\\exp\\left(-\\pi y M_{n,w}(m)^2 + 2\\pi i x n\\cdot w\\right),\n\\end{equation}\nwhere $m$ denote the target space moduli (the metric $G$ and 2-form flux $B$) and\\footnote{Throughout this paper we set $\\alpha'=1$. 
At $c=1$ for instance we take $r=1$ to be the self-dual radius.}\n\\begin{equation}\n\tM_{n,w}(m)^2 \\coloneqq G^{ab}\\left(n_a+B_{ac}w^c\\right)\\left(n_b+B_{bd}w^d\\right)+G_{cd}w^cw^d.\n\\end{equation}\n\nFrom their explicit forms, one can confirm that the Narain partition functions satisfy the differential equation \\cite{Obers:1999um,Maloney:2020nni}\n\\begin{equation}\n\t\\left(\\Delta_\\tau - \\Delta_{\\mathcal{M}_c} -{c\\over 2}\\left(1-{c\\over 2}\\right)\\right)Z^{(c)}(\\tau;m) = 0,\n\t\\label{eq:DIFFEQ}\n\\end{equation}\nwhere\n\\begin{equation}\n\t\\Delta_{\\mathcal{M}_c} = -G_{ac}G_{bd}\\left(\\widehat\\partial_{G_{ab}}\\widehat\\partial_{G_{cd}} + {1\\over 4}\\partial_{B_{ab}}\\partial_{B_{cd}}\\right) - G_{ab}\\widehat\\partial_{G_{ab}}\n\\end{equation}\nis the Laplacian associated with the Zamolodchikov metric on the target space moduli space, with $\\widehat \\partial_{G_{ab}} \\coloneqq {1\\over 2}\\left(1+\\delta_{ab}\\right)\\partial_{G_{ab}}$. \n\n\n\n\\subsubsection{\\texorpdfstring{$c=1$}{c1}}\n\nConsider the special case of the $c=1$ free boson compactified on a circle of radius $r$. In this case the primary partition function is given by\n\\begin{equation}\\label{eq:c=1}\n\tZ^{(c=1)}(\\tau;r) = \\sqrt{y}\\sum_{n,w\\in\\mathbb{Z}}\\exp\\left(-\\pi y\\left(n^2 r^{-2}+w^2 r^2\\right)+2\\pi i x nw\\right).\n\\end{equation}\nWe begin by computing the continuous part of the spectral decomposition via the RS transform of the partition function (\\ref{eq:c=1}), using the technology of Section \\ref{subSec:RankinSelberg} for modular functions of ``slow growth,'' followed by the discrete part. \n\nNeglecting the contribution of the identity operator by defining\n\\begin{equation}\n\t\\widetilde Z^{(c=1)}(\\tau;r) \\coloneqq \\sqrt{y}\\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}}\\exp\\left(-\\pi y\\left(n^2 r^{-2}+w^2 r^2\\right)+2\\pi i x nw\\right),\n\\end{equation} \nwhere the prime on the summation denotes the omission of $n=w=0$, we get\n\\begin{equation}\n\\begin{aligned}\n\tR_s[Z^{(c=1)}] &= \\int_{\\mathcal{F}}{dxdy\\over y^2}\\widetilde Z^{(c=1)}(\\tau;r)E_s(\\tau) \\\\\n\t&= 2\\int_0^\\infty dy \\, y^{s-{3\\over 2}}\\sum_{n=1}^\\infty\\left(e^{-\\pi y n^2 r^2}+e^{-\\pi y n^2 r^{-2}}\\right).\n\\end{aligned}\n\\end{equation}\nProvided the real part of $s$ is sufficiently large (in particular, $\\re s > 1$, so that the resulting sum over $n$ converges), we can exchange the sum and the integral, leading to\n\\begin{equation}\n\\begin{aligned}\\label{eq:c=1RS}\n\tR_s[Z^{(c=1)}] &= 2\\sum_{n=1}^\\infty (\\pi n^2)^{{1\\over 2}-s}\\Gamma\\left(s-{1\\over 2}\\right)\\left(r^{1-2s}+r^{2s-1}\\right)\\\\\n\t&= 2\\Lambda\\left(s-{1\\over 2}\\right)\\left(r^{1-2s}+r^{2s-1}\\right).\n\\end{aligned}\n\\end{equation}\nThe result indeed has poles at $s=1$ and $s={1\\over 2}$ as promised. We thus conclude that\n\\begin{equation}\\label{eq:c=1EisensteinOverlap}\n\t(Z^{(c=1)},E_s) = R_{1-s}[Z^{(c=1)}] = 2\\Lambda(s)\\left(r^{1-2s}+r^{2s-1}\\right).\n\\end{equation}\n\nNext, the constant part of the spectral decomposition may be computed by applying \\eqref{avgrs}:\n\\begin{equation}\\label{eq:c=1Constant}\n\t\\Res_{s=1}\\left(R_s[Z^{(c=1)}]\\right) = r + r^{-1}.\n\\end{equation}\nIn fact, the results (\\ref{eq:c=1RS}) and (\\ref{eq:c=1Constant}) have previously appeared in the literature in \\cite{Angelantonj:2011br}, where the RS method was used as a technical tool for the computation of certain one-loop integrals in toroidal compactifications of string theory. 
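For those purposes, only the residue of the RS transform at $s=1$ was desired; here the full $s$-dependence will matter. As a quick independent check of (\\ref{eq:c=1RS}), the Mellin integral \\eqref{RSsub} can also be evaluated numerically. The following script -- a minimal sketch using the \\texttt{mpmath} Python library, with illustrative (and arbitrarily chosen) values of $r$ and $s$ -- compares the numerical integral of the identity-subtracted scalar sector against the closed form:
\\begin{verbatim}
import mpmath as mp

mp.mp.dps = 25

def theta(t):
    # sum over n in Z of exp(-pi t n^2); for small t use the
    # Poisson-resummed form theta(t) = theta(1/t)/sqrt(t)
    if t < 1:
        return theta(1/t) / mp.sqrt(t)
    tot, n = mp.mpf(1), 1
    while True:
        term = 2*mp.exp(-mp.pi*t*n**2)
        tot += term
        if term < mp.mpf(10)**(-mp.mp.dps):
            return tot
        n += 1

def Lam(s):
    # completed zeta function Lambda(s) = pi^(-s) Gamma(s) zeta(2s)
    return mp.pi**(-s)*mp.gamma(s)*mp.zeta(2*s)

r = mp.mpf('1.3')   # an arbitrary radius
s = mp.mpf('2')     # an arbitrary point with Re(s) > 1

# Scalar sector of Z^{(c=1)} minus the identity, stripped of sqrt(y):
# (theta(y r^2) - 1) + (theta(y/r^2) - 1) = 2 sum_{n>=1} [exp(-pi y n^2 r^2)
#                                                       + exp(-pi y n^2 / r^2)]
integrand = lambda y: (y**(s - mp.mpf(3)/2)
    * ((theta(y*r**2) - 1) + (theta(y/r**2) - 1)))

numeric = mp.quad(integrand, [0, 1, mp.inf])
closed = 2*Lam(s - mp.mpf(1)/2)*(r**(1 - 2*s) + r**(2*s - 1))
print(numeric)   # matches `closed` to working precision
print(closed)
\\end{verbatim}
The inversion step in \\texttt{theta} is just Poisson summation, which keeps the sum rapidly convergent at every point sampled by the quadrature.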
\n\nSo far we have computed the constant and continuous components in the spectral decomposition of the primary partition function of the $c=1$ free boson. To finish the job we need to add in both the contribution of the vacuum state and the Maass cusp forms. In fact it turns out that both of these contributions will vanish. For the former, the modified Roelcke-Selberg decomposition (\\ref{eq:modifiedRoelckeSelberg}) with $\\alpha_i=1\/2$ and $n_i=0$ leaves a term proportional to $E_{{1\\over 2}}(\\tau)$, which is in fact identically zero: $E_{{1\\over 2}}(\\tau)= 0$. For the latter, in Appendix \\ref{app:derivingEight} we explicitly show that the Maass cusp forms do not show up at $c=1$ by computing the overlap $(Z^{(c=1)}, \\nu_n)$ and showing it vanishes. Moreover, we have verified this numerically: the computed value of $(Z^{(c=1)},\\nu_n)$ is consistent with zero, decreasing steadily as the precision of the numerical integration is increased. Thus, the spectral representation of the partition function of the $c=1$ free boson is given exactly by \n\\begin{equation}\\label{eq:c=1Spectral}\n\t\\boxed{Z^{(c=1)}(\\tau;r) = r + r^{-1} + {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}}ds\\, 2\\Lambda(s)\\left(r^{1-2s}+r^{2s-1}\\right)E_s(\\tau).}\n\\end{equation}\nSee Appendix \\ref{app:c=1Resolvent} for an explicit demonstration that the resolvent inherited from (\\ref{eq:c=1Spectral}) has poles precisely at the locations of physical operators in the free boson CFT, with residues equal to the correct degeneracies. \n\nThe partition functions of the $\\mathbb{Z}_2$ orbifolds of the compact boson admit a similarly straightforward spectral decomposition. The orbifold partition functions, $Z^{(c=1)}_{\\rm orb}(\\tau;r)$, can be written in terms of free boson partition functions as\n\\begin{equation}\\label{ZZ2}\n\tZ^{(c=1)}_{\\rm orb}(\\tau;r) = {1\\over 2} Z^{(c=1)}(\\tau;r) + Z_{\\rm tw}(\\tau),\n\\end{equation}\nwhere the contribution of the $\\mathbb{Z}_2$ twisted sectors is independent of the target space moduli:\n\\begin{equation}\n\tZ_{\\rm tw}(\\tau) = {1\\over 2} \\sqrt{y}\\left(|\\theta_2\\theta_3|+|\\theta_3\\theta_4|+|\\theta_4\\theta_2|\\right),\n\\end{equation}\nwhere $\\theta_i$ are elliptic theta functions. Then the well-known relation \n\\be\nZ^{(c=1)}_{\\rm orb}(\\tau;r=1) = Z^{(c=1)}(\\tau;r=2) \n\\ee\nimplies\n\\begin{equation}\\label{twspec}\n\tZ_{\\rm tw}(\\tau) = {3\\over 2} + {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}}ds\\, 2\\Lambda(s)\\left(2^{1-2s}+2^{2s-1} - 1\\right)E_s(\\tau).\n\\end{equation}\nThe spectral decomposition of $Z^{(c=1)}_{\\rm orb}(\\tau;r)$ then follows from \\eqref{eq:c=1Spectral}, \\eqref{ZZ2} and \\eqref{twspec}.\\footnote{Of course, since we are computing $y^{1\/2} |\\eta(\\tau)|^2$ times the orbifold partition function, and $\\frac1{\\eta(\\tau)}$ is not the vacuum character for the $S^1\/\\mathbb{Z}_2$ theory, the spectral decomposition above, unlike (\\ref{eq:c=1Spectral}), does not have a positive inverse Laplace transform (although it is still discrete).}\n\n\\subsubsection{\\texorpdfstring{$c=2$}{c2}}\n\nWe next consider the family of $c=2$ Narain CFTs with $T^2$ target space. The moduli space of such theories is captured by two elements $\\sigma, \\rho$ of $\\mathbb{H}\/PSL(2,\\mathbb{Z})$, respectively the complex structure and complexified K\\\"ahler structure of the target. They are given in terms of the metric and $B$ field flux on the $T^2$ as (see e.g. 
\\cite{Dijkgraaf:1987jt}) \n\\begin{equation}\n\\begin{aligned}\\label{eq:T2Moduli}\n\t\\rho &= B + i {\\sqrt{\\det G}} \\\\\n\t\\sigma &= {G_{12}\\over G_{11}} + i {\\sqrt{\\det G}\\over G_{11}}.\n\\end{aligned}\n\\end{equation}\nIn these variables, the primary partition function can be written as\\footnote{This expression can be elegantly rewritten in terms of Poincar\\'e series and Hecke operators, see Appendix \\ref{app:derivingEight}.} \n\\begin{equation}\n\\begin{aligned}\\label{eq:c=2Narain}\n\tZ^{(c=2)}(\\tau;\\rho,\\sigma) = y \\sum_{n,w\\in\\mathbb{Z}^2} \\exp\\Bigg[{\\pi i \\over 2}\\Bigg(&{\\tau\\over \\rho_2\\sigma_2}\\left|n_2-n_1 \\sigma -\\rho(w^1+w^2 \\sigma)\\right|^2 \\\\- &{\\bar\\tau\\over \\rho_2\\sigma_2}\\left|n_2-n_1\\bar\\sigma - \\rho\\left(w^1+w^2\\bar\\sigma\\right) \\right|^2 \\Bigg)\\Bigg].\n\\end{aligned}\n\\end{equation}\nRemarkably, $Z^{(c=2)}(\\tau; \\rho, \\sigma)$ obeys a ``triality\" symmetry: it is totally symmetric under exchange of the three moduli \\cite{Dijkgraaf:1987jt}. The invariance under exchange of $\\rho$ and $\\sigma$ is a special case of mirror symmetry. The invariance under the exchange of the worldsheet modular parameter $\\tau$ with the target space moduli $\\rho,\\sigma$, which follows from Poisson summation, is more mysterious. \n\nThe RS transform of the $c=2$ Narain partition function is given by\n\\begin{equation}\n\\begin{aligned}\n\tR_s[Z^{(c=2)}] &= \\int_{\\mathcal{F}}{dxdy\\over y^2}\\, \\widetilde Z^{(c=2)}(\\tau;\\rho,\\sigma) E_s(\\tau) \\\\\n\t&= \\int_0^\\infty dy\\, y^{s-1}\\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}^2}\\delta_{n\\cdot w,0}\\exp\\left(-\\pi y M_{n,w}(m)^2\\right)\\\\\n\t&= \\pi^{-s}\\Gamma(s) \\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}^2}{\\delta_{n\\cdot w,0}\\over\\left(M_{n,w}(m)^2\\right)^{s}},\n\\end{aligned}\n\\end{equation}\nwhere again the prime on the summation indicates the omission of the identity operator $n_a = w^a = 0$ and we have assumed $\\re s > 0$ in order to exchange the integral and the sum. \n\nIn fact, this constrained sum was worked out very explicitly in \\cite{Angelantonj:2011br}, whose results we briefly summarize. In solving the constraint $n\\cdot w = 0$, there are essentially two cases to consider. In the first case we simply set $w^a = 0$, and we have\n\\begin{equation}\n\tM_{(n_1,n_2),(0,0)}(m)^2 = {|n_2-\\sigma n_1|^2\\over \\sigma_2\\rho_2}.\n\\end{equation}\nIn the second case we set\n\\begin{equation}\n\t(n_1,n_2,w^1,w^2) = (du_1,du_2, -cu_2, cu_1),\n\\end{equation}\nwhere $u_1,u_2\\in\\mathbb{Z}$ with $(u_1,u_2) = 1$ and $c,d\\in\\mathbb{Z}$ with $c\\ge 1$. 
In this case we have\n\\begin{equation}\n\tM_{(du_1,du_2),(-cu_2,cu_1)}(m)^2 = {|u_2-\\sigma u_1|^2 |c\\rho+d|^2\\over \\sigma_2\\rho_2}.\n\\end{equation} \nThe RS transform then takes the form\n\\begin{equation}\\label{eq:c=2RS}\n\\begin{aligned}\n\tR_s[Z^{(c=2)}] &= \\pi^{-s}\\Gamma(s)\\left[\\sideset{}{'}\\sum_{n_1,n_2\\in\\mathbb{Z}}\\left({\\rho_2\\sigma_2\\over |n_2-\\sigma n_1|^2}\\right)^s + \\sideset{}{'}\\sum_{\\substack{u_1,u_2\\in\\mathbb{Z} \\\\ (u_1,u_2)=1}}\\left(\\sigma_2\\over |u_2-\\sigma u_1|^2\\right)^s\\sum_{\\substack{c,d\\in\\mathbb{Z} \\\\ c \\ge 1}}\\left({\\rho_2\\over |c\\rho+d|^2}\\right)^s\\right]\\\\\n\t&= 2\\Lambda(s)E_s(\\rho)E_s(\\sigma).\n\\end{aligned}\n\\end{equation}\nSo the inner product of the $c=2$ Narain partition functions with the real analytic Eisenstein series is given by the following product of Eisenstein series\n\\begin{equation}\\label{eq:c=2InnerProduct}\n\\begin{aligned}\n\t(Z^{(c=2)},E_s) = R_{1-s}[Z^{(c=2)}] &= 2\\Lambda(1-s)E_{1-s}(\\rho)E_{1-s}(\\sigma) \\\\&= 2{\\Lambda(s)^2\\over \\Lambda(1-s)}E_s(\\rho)E_s(\\sigma).\n\\end{aligned}\n\\end{equation}\n\nAt the cusp, the $c=2$ Narain partition functions diverge linearly, $Z^{(c=2)}(\\tau;m) \\stackrel{y\\to\\infty}{\\sim} y$. And indeed we see as expected that the RS transform (\\ref{eq:c=2RS}) has a double pole at $s=1$. For this reason we must take slightly more care in extracting the constant part of the spectral decomposition. Because of the linear divergence of $Z^{(c=2)}$ at the cusp, in order to apply the Roelcke-Selberg theorem we subtract $\\widehat E_1(\\tau) - \\omega$ from the partition function, where $\\widehat E_1(\\tau)$ is the non-singular part of the Eisenstein series at $s=1$ (see (\\ref{eq:e1hatdef}))\n\\be\n\\widehat{E}_{1}(\\tau) = \\lim_{s\\rightarrow1}\\left[ E_s(\\tau) -\\frac{3}{\\pi(s-1)}\\right]\n\\ee \nand $\\omega \\coloneqq {6\\over \\pi}\\left(1-12\\zeta'(-1)-\\log(4\\pi)\\right)$ is the constant part of $\\widehat E_1(\\tau)$. The remainder of the constant part of the spectral decomposition comes from the residue of the RS transform at $s=1$, cf. \\eqref{avgrs}, which is given by\n\\begin{equation}\n\t\\Res_{s=1}\\left(R_s[Z^{(c=2)}]\\right) = \\widehat{E}_1(\\rho) + \\widehat{E}_1(\\sigma) + \\delta,\n\\end{equation}\nwhere $\\delta \\coloneqq {18\\over \\pi^2} \\Lambda'(1) = {3\\over \\pi}\\left(\\gamma_E+\\log(4\\pi) + 24 \\zeta'(-1) -2\\right)$ with $\\gamma_E$ the Euler-Mascheroni constant. \n\nFinally we address the discrete part of the spectrum. Following the discussion in Appendix \\ref{app:eisensteinCG}, we generally expect all $c>1$ Narain partition functions to have support on the cusp forms. The differential equation (\\ref{eq:DIFFEQ}), the triality of the $c=2$ partition function, and the orthogonality of the cusp forms together imply that the discrete contribution must take the following highly constrained form:\n\\begin{align}\n\t& Z^{(c=2)}(\\tau; \\rho, \\sigma) \\supset \\sum_{n=1}^\\infty c^+_n (\\nu^+_n,\\nu^+_n)^{-1}\\nu^+_n(\\rho)\\nu^+_n(\\sigma)\\nu^+_n(\\tau) + \\sum_{n=1}^\\infty c^-_n (\\nu^-_n,\\nu^-_n)^{-1}\\nu^-_n(\\rho)\\nu^-_n(\\sigma)\\nu^-_n(\\tau), \\label{eq:blahblahblahtraility} \n\\end{align}\nwhere $c_n^\\pm$ are a set of constants and where we have separated the cusp forms into the even and odd cusp forms, denoted by $\\nu^+$ and $\\nu^-$ respectively.\\footnote{A previous version of this paper incorrectly omitted the odd cusp forms $\\nu_n^-$ from the spectral decomposition. 
However, Narain theories with $B$ field turned on are generically not parity invariant, which implies the presence of odd cusp forms. We are grateful to Ying-Hsuan Lin and Yifan Wang for discussions related to this point.} Any other product of cusp forms totally symmetric in $\\rho, \\sigma, \\tau$ will not obey (\\ref{eq:DIFFEQ}). We have also normalized the $c_n^\\pm$'s in (\\ref{eq:blahblahblahtraility}) by a factor of $(\\nu^\\pm_n, \\nu^\\pm_n)$, which will turn out to be convenient.\n\nIn Appendix \\ref{app:derivingEight} we analytically derive:\\footnote{One may be concerned that the overlap coefficient with the odd Maass cusp forms is imaginary, given that the cusp forms are real functions when $\\overline\\tau = \\tau^*$. This is not a problem, since parity-non-invariant partition functions need not be real. Indeed, we see below that the overlap with the odd cusp forms is only non-vanishing when the $B$-field is turned on, in particular when the $c=2$ partition function is not parity invariant.}\n\\begin{align}\n \\left(Z^{(c=2)},\\nu^+_n\\right) &= 8 \\nu^+_n(\\rho)\\nu^+_n(\\sigma) \\nonumber \\\\\n \\left(Z^{(c=2)},\\nu^-_n\\right) &= -8i \\nu^-_n(\\rho)\\nu^-_n(\\sigma) \\label{eq:eight}\n\\end{align}\n by making use of the fact that the $c=2$ Narain partition function (\\ref{eq:c=2Narain}) can be written in terms of Poincar\\'e series and Hecke operators.\n\nAssembling the continuous and discrete pieces, we thus obtain the following exact form for the spectral decomposition of the $c=2$ Narain partition functions:\n\\begin{empheq}[box=\\fbox]{equation}\\label{eq:c=2Spectral}\n\\begin{split}\n\tZ^{(c=2)} (\\tau;\\rho,\\sigma) =& \\, \\alpha + \\widehat{E}_1(\\rho) + \\widehat{E}_1(\\sigma) + \\widehat{E}_1(\\tau)\\\\\n\t& \\, + {1\\over 4\\pi i}\\int_{\\re s ={1\\over 2}}ds\\, 2{\\Lambda(s)^2\\over \\Lambda(1-s)}E_s(\\rho)E_s(\\sigma)E_s(\\tau)\\\\\n\t& \\, + 8\\sum_{\\epsilon = \\pm} \\delta_\\epsilon \\sum_{n=1}^\\infty(\\nu^\\epsilon_n,\\nu^\\epsilon_n)^{-1}\\nu^\\epsilon_n(\\rho)\\nu^\\epsilon_n(\\sigma)\\nu^\\epsilon_n(\\tau),\n\\end{split}\n\\end{empheq}\nwhere the constant $\\alpha$ is given by\n\\begin{align}\n\t\\alpha &\\coloneqq \\delta - \\omega = \\frac3\\pi \\(\\gamma_E + 3\\log(4\\pi) + 48\\zeta'(-1) - 4\\) \n\t\\approx -3.6000146,\n\\end{align}\nand $\\delta_\\epsilon$ is defined as\n\\begin{align}\n\\delta_+ \\coloneqq 1, ~~~~~\\delta_- \\coloneqq -i.\n\\end{align}\nThe result exhibits manifestly the remarkable triality of \\cite{Dijkgraaf:1987jt}. We do not have a physical explanation for the ``8,'' but it is tempting to speculate that it is counting something.\n\nWe pause to note that we have also checked our expression (\\ref{eq:c=2Spectral}) numerically, in large part thanks to the plethora of numerical data available on the cusp forms.\\footnote{We refer the reader to the online database \\cite{LMFDB} for numerical data on the cusp forms, and to Appendix \\ref{subApp:cusp} for more details of the cusp forms. See also \\cite{Booker} for even higher numerical precision on the cusp forms which we used in our fit (\\ref{eq:8sgalore}) (we are grateful to A. 
Sutherland for pointing out this reference) and \\cite{Then_2004} for a study of cusp forms at large eigenvalue.} For example, by plugging in various values of $\\tau$, $\\rho$, and $\\sigma$ into $Z^{(c=2)}(\\tau; \\rho, \\sigma)$, we can numerically fit for the first few $c_n$.\\footnote{In principle we could also have explicitly numerically estimated the overlap integral $\\int_{-1\/2}^{1\/2}dx \\int_{\\sqrt{1-x^2}}^{\\infty} dy \\,y^{-2}Z^{(c=2)}(\\tau;\\rho,\\sigma) \\nu_n(\\tau)$ for fixed $\\rho$ and $\\sigma$, and found the $c_n$'s this way. In practice, however, we found that restricting to a specific spin, plugging in various values of $\\tau$, $\\rho$, and $\\sigma$ and finding the best fit for the $c_n$'s (after cutting off the sum in $n$) led to far more accurate numerics. For simplicity, we restricted ourselves to parity-invariant theories, so we did not obtain numerical data on $c^-_n$.} We find the following values:\n\\begin{align} \n&c^+_1 \\sim 7.99999999911,~~ c^+_2 \\sim 8.0000000011, ~~ c^+_3 \\sim 7.999999916,~~ c^+_4 \\sim 8.00000043,\\nonumber\\\\\n&c^+_5 \\sim 8.000013,~~ c^+_6 \\sim 8.016, ~~ c^+_7\\sim 8.00033, ~~ c^+_8 \\sim 7.9953\n\\label{eq:8sgalore}\n\\end{align} \nwith higher values of $n$ being less numerically stable. Our numerical results (\\ref{eq:8sgalore}) are indeed consistent with the result\n\\be\nc^+_n = 8.\n\\ee \n\n\n\\subsubsection{Decompactification loci of \\texorpdfstring{$c=3$}{c3} and \\texorpdfstring{$c=4$}{c4}}\n\nIn \\cite{Obers:1999um, Angelantonj:2011br}, an efficient method was described to perform spectral decomposition on the product locus of Narain moduli $T^d \\times S^1$ where the radius of the $S^1$ is taken very large. Here we will focus on the case $d=2$. In Appendix \\ref{sec:decomp}, we describe this method in detail; here we simply state the result. If we take $\\rho, \\sigma$ to be the moduli of the $T^2$ theory, and $r_3$ to be the radius of the $S^1$, then at large $r_3$ (with all other parameters fixed), we get\n\\begin{align}\nZ^{(c=3)}(\\tau; &\\rho, \\sigma, r_3) \\simeq E_{3\\o2}(\\tau) + r_3 \\(\\frac 6\\pi \\log(r_3)+ \\widehat E_1(\\rho) + \\widehat E_1(\\sigma) + \\beta \\)\n\\nonumber\\\\& + \\frac{1}{4\\pi i} \\int_{\\re s = {1\\over 2}}ds\\, E_s(\\tau) \\frac{ r_3 2 \\Lambda^2(s) E_s(\\sigma) E_s(\\rho) + 2 \\Lambda(s)\\Lambda(-s) r_3^{2s+1} + 2 \\Lambda(1-s) \\Lambda(s-1) r_3^{3-2s} }{ \\Lambda(1-s)} \\nonumber\\\\\n&+ \\sum_{\\epsilon=\\pm}\\sum_{n=1}^{\\infty} \\nu^\\epsilon_n(\\tau) \\frac{(Z^{(c=3)},\\nu^\\epsilon_n)}{(\\nu^\\epsilon_n, \\nu^\\epsilon_n)}\n\\label{eq:c3bla}\n\\end{align}\nwhere $\\simeq$ means equals to all orders in $1\/r_3$ (i.e. the error is non-perturbative in $1\/r_3$). In (\\ref{eq:c3bla}), $E_{3\\o2}(\\tau)$ is the non-holomorphic Eisenstein series at $s=3\/2$, and the constant term is inherited from the residue of the Eisenstein overlap at $s=1$\n\\begin{equation}\n\t\\Res_{s=1}\\left(R_s[Z^{(c=3)}]\\right) \\simeq r_3 \\(\\frac 6\\pi \\log(r_3)+ \\widehat E_1(\\rho) + \\widehat E_1(\\sigma) + \\beta \\)\n\\end{equation}\nwith\n\\begin{equation}\n\t\\beta \\coloneqq {6\\over \\pi}\\left(\\gamma_E+24\\zeta'(-1)+\\log (4\\pi) -2\\right) \\approx -5.46576. \n\\end{equation}\n\nTo fix the cusp form contributions, we again can numerically fit (\\ref{eq:c3bla}) for various values of target space moduli. Remarkably they also seem to take a simple form at large $r_3$. 
We conjecture that at large $r_3$\n\\begin{align}\n&Z^{(c=3)}(\\tau; \\rho, \\sigma, r_3) \\simeq E_{3\\o2}(\\tau) + r_3 \\(\\frac 6\\pi \\log(r_3)+ \\widehat E_1(\\rho) + \\widehat E_1(\\sigma) + \\beta\\)\n\\nonumber\\\\& + \\frac{1}{4\\pi i} \\int_{\\re s = {1\\over 2}}ds\\,E_s(\\tau) \\frac{ r_3 2 \\Lambda^2(s) E_s(\\sigma) E_s(\\rho) + 2 \\Lambda(s)\\Lambda(-s) r_3^{2s+1} + 2 \\Lambda(1-s) \\Lambda(s-1) r_3^{3-2s} }{ \\Lambda(1-s)} \\nonumber\\\\\n&+ 8r_3 \\sum_{n=1}^{\\infty} \\frac{\\nu^+_n(\\tau) \\nu^+_n(\\rho) \\nu^+_n(\\sigma)}{(\\nu^+_n,\\nu^+_n)} - 8 i r_3 \\sum_{n=1}^{\\infty} \\frac{\\nu^-_n(\\tau) \\nu^-_n(\\rho) \\nu^-_n(\\sigma)}{(\\nu^-_n,\\nu^-_n)}.\n\\label{eq:c3speciallocusts}\n\\end{align} \nInterestingly, in (\\ref{eq:c3speciallocusts}) we see that for $c=3$, unlike for $c=2$, there are special points in the moduli space (for instance, very large $r_3$) where the cusp forms numerically give a much larger contribution to the partition function than the average $E_{3\\o2}(\\tau)$. For the $c=2$ partition function (\\ref{eq:c=2Spectral}) this is not the case, due to the boundedness of the cusp forms. \n\nIn Appendix \\ref{sec:decomp} we also describe a generalization of the method introduced in \\cite{Obers:1999um, Angelantonj:2011br} to consider the spectral decomposition of Narain partition functions on the product locus $T^d \\times T^D$, where we take the volume of the $T^d$ to be parametrically large. Here we will focus on the case $d = D = 2$, which is a sublocus of the $c=4$ Narain moduli space. We will denote the moduli of the individual tori as $\\rho^{(i)}$ and $\\sigma^{(i)}$ for $i=1,2$, and will consider the limit in which $\\rho_2^{(1)} = \\im(\\rho^{(1)})$ is taken very large. In this limit, up to corrections non-perturbative in $\\rho_2^{(1)}$, we find\n\\begin{equation}\n\\begin{aligned}\n\tZ^{(c=4)}(\\tau;m) &\\simeq \\, E_2(\\tau) + \\rho_2^{(1)}\\left(\\widehat E_1(\\sigma^{(1)})+\\widehat E_1(\\rho^{(2)})+\\widehat E_1(\\sigma^{(2)})+{3\\over \\pi}\\log\\rho_2^{(1)} + \\gamma\\right)\\\\\n\t& \\, + {1\\over 4\\pi i}\\int_{\\re s={1\\over 2}}ds\\, 2\\rho_2^{(1)}\\bigg[ {\\Lambda(s)^2\\over\\Lambda(1-s)}E_s(\\rho^{(2)})E_s(\\sigma^{(2)}) + \\left(\\rho_2^{(1)}\\right)^{1-s}\\Lambda(s-1)E_{s-1}(\\sigma^{(1)}) \\\\\n\t& \\, \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad + \\left(\\rho_2^{(1)}\\right)^s{\\Lambda(s)\\Lambda(s+1)\\over\\Lambda(1-s)}E_{s+1}(\\sigma^{(1)}) \\bigg]E_s(\\tau)\\\\\n\t& \\, + 8\\rho_2^{(1)}\\sum_{n=1}^\\infty{\\nu^+_n(\\rho^{(2)})\\nu^+_n(\\sigma^{(2)})\\over (\\nu^+_n,\\nu^+_n)}\\nu^+_n(\\tau) - 8i\\rho_2^{(1)}\\sum_{n=1}^\\infty{\\nu^-_n(\\rho^{(2)})\\nu^-_n(\\sigma^{(2)})\\over (\\nu^-_n,\\nu^-_n)}\\nu^-_n(\\tau) ,\n\t\\label{eq:c4wooo}\n\\end{aligned}\n\\end{equation}\nwith\n\\begin{equation}\n\t\\gamma \\coloneqq\n\t{6\\over \\pi}(\\gamma_E + 2\\log(4\\pi) + 36 \\zeta'(-1)-3)\\approx -6.3329.\n\t\\label{eq:gammadeff}\n\\end{equation}\n\n\\subsubsection{\\texorpdfstring{$c>2$}{cg2}}\n\nWe are now in a position to study the spectral decomposition of the partition functions of Narain lattice CFTs with $c>2$. As in previous subsections, we begin with the Eisenstein overlap, whose residue at $s=1$ gives the constant term, followed by the cusp form overlap.
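As an aside on numerics: checking expressions like (\\ref{eq:c3bla}) and (\\ref{eq:c4wooo}) against moduli-space data requires evaluating $E_s(\\tau)$ along $\\re s = {1\\over 2}$. A minimal evaluator using the standard Fourier-Bessel expansion (see e.g. \\cite{Terras_2013}), and assuming the completed-zeta convention $\\Lambda(s)=\\pi^{-s}\\Gamma(s)\\zeta(2s)$ used earlier in the paper, is the following sketch:\n\\begin{verbatim}\nimport mpmath as mp\n\ndef Lam(s):   # Lambda(s) = pi^(-s) Gamma(s) zeta(2s)  (assumed convention)\n    return mp.pi**(-s)*mp.gamma(s)*mp.zeta(2*s)\n\ndef E(s, x, y, nmax=30):\n    # real-analytic Eisenstein series from its Fourier-Bessel expansion\n    val = y**s + Lam(1 - s)\/Lam(s)*y**(1 - s)\n    for n in range(1, nmax + 1):\n        sig = sum(mp.power(d, 1 - 2*s) for d in range(1, n + 1) if n % d == 0)\n        term = mp.power(n, s - mp.mpf('0.5'))*sig*mp.sqrt(y)*mp.cos(2*mp.pi*n*x)\n        val += 4\/Lam(s)*term*mp.besselk(s - mp.mpf('0.5'), 2*mp.pi*n*y)\n    return val\n\ns = mp.mpf('0.5') + 7j   # a point on the critical line\ntau = mp.mpc('0.1', '0.8'); taup = -1\/tau\nprint(E(s, tau.real, tau.imag))      # these two values should agree\nprint(E(s, taup.real, taup.imag))    # (modular invariance under tau -> -1\/tau)\n\\end{verbatim}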
\n\nFrom (\\ref{eq:DIFFEQ}), we see that the RS transform of the Narain partition functions is an eigenfunction of the target space Laplacian $\\Delta_{\\mathcal{M}_c}$\n\\begin{equation}\n\t\\Delta_{\\mathcal{M}_c}R_s[Z^{(c)}] = \\left({c\\over 2}-s\\right)\\left({c\\over 2}-1+s\\right)R_s[Z^{(c)}].\n\\end{equation}\nThat the lattice theta function provides a pairing between eigenfunctions of the worldsheet and target space Laplacians is reminiscent of a concept in the math literature known as the ``theta correspondence'' \\cite{Deitmar_1991}. The RS transform is straightforwardly computed to be \\cite{Angelantonj:2011br}\n\\begin{equation}\n\\begin{aligned}\n\tR_s[Z^{(c)}] \n\t&= \\int_0^\\infty dy \\, y^{s+{c\\over 2}-2}\\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}^c}\\delta_{n\\cdot w,0}e^{-\\pi y M_{n,w}(m)^2}\\\\\n\t&= \\pi^{1-s-{c\\over 2}}\\Gamma\\left(s+{c\\over 2}-1\\right)\\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}^c} {\\delta_{n\\cdot w,0}\\over (M_{n,w}(m)^2)^{s+{c\\over 2}-1}}\\\\\n\t&= \\pi^{1-s-{c\\over 2}}\\Gamma\\left(s+{c\\over 2}-1\\right) \\mathcal{E}^c_{s+{c\\over 2}-1}(m)\n\\end{aligned}\n\\end{equation}\nwhere $m$ represents the moduli, and we have assumed $\\re s > 1-{c\\over 2}$ in order to exchange the sum and the integral. In \\cite{Angelantonj:2011br}, $\\mathcal{E}^c_s$ is referred to as a ``constrained Epstein zeta series''\n\\begin{equation}\n\t\\mathcal{E}^c_{s}(m) \\coloneqq \\sideset{}{'}\\sum_{n,w\\in\\mathbb{Z}^c}{\\delta_{n\\cdot w,0}\\over M_{n,w}(m)^{2s}},\n\t\\label{eq:constrainedepsteindef}\n\\end{equation}\nwhich converges for $\\re s > c-1$.\\footnote{In e.g. \\cite{Angelantonj:2011br}, it was written that convergence requires $\\re s > c$, but it seems that the weaker condition $\\re s > c - 1$ is enough for convergence. For $c > 2$, the density of states of scalar primary operators grows at large $\\Delta$ as $\\rho(\\Delta) \\sim \\frac{(2\\pi)^{c-1}\\Lambda\\(\\frac{c-1}2\\)}{\\Gamma(c-1)\\Lambda\\(\\frac c2\\)} \\Delta^{c-2}$, so for convergence of (\\ref{eq:constrainedepsteindef}), we require that $c-2-\\re s < -1$, or equivalently $\\re s > c -1$. We are grateful to Cyuan-Han Chang for pointing this out to us.} It admits a meromorphic continuation to the entire $s$-plane and satisfies a functional equation, both inherited from the Eisenstein series: \n\\begin{equation}\n\t\\pi^{-s}\\Gamma(s)\\Lambda\\left(s+1-{c\\over 2}\\right)\\mathcal{E}^c_s(m) = \\pi^{-(c-1-s)}\\Gamma(c-1-s)\\Lambda\\left({c\\over 2}-s\\right)\\mathcal{E}^c_{c-1-s}(m).\n\\end{equation}\nThe constant part of the spectral decomposition is extracted from the residue of $R_{1-s}[Z^{(c)}]$ at $s=1$, cf. \\eqref{avgrs}. As for the cusp forms, we do not have analytic control over their inner product with the Narain partition functions. However, we do know that these inner products must be eigenfunctions of the target space Laplacian, in particular,\n\\begin{equation}\n\t\\Delta_{\\mathcal{M}_c}(Z^{(c)},\\nu_n) = \\left({(c-1)^2\\over 4}+R_n^2\\right)(Z^{(c)},\\nu_n).\n\\end{equation}\nMoreover, from the argument in Appendix \\ref{app:eisensteinCG}, we know that the inner products $(Z^{(c)},\\nu_n)$ are generically non-zero.
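As a quick sanity check on (\\ref{eq:constrainedepsteindef}), one can evaluate the constrained sum by brute force in the simplest setting. For $c=1$ the constraint $\\delta_{n\\cdot w,0}$ forces $n=0$ or $w=0$, and, assuming the standard circle-theory mass convention $M_{n,w}^2=(n\/r)^2+(wr)^2$ (the overall normalization of $M_{n,w}(m)$ is our assumption here), the sum collapses to an elementary closed form; the following sketch verifies this numerically:\n\\begin{verbatim}\nimport mpmath as mp\n\ndef eps_c1(s, r, N=4000):\n    # brute force over (n, w) != (0, 0) with n*w = 0,\n    # assuming M^2 = (n\/r)^2 + (w*r)^2\n    mom = sum((mp.mpf(n)\/r)**(-2*s) for n in range(1, N + 1))\n    win = sum((mp.mpf(w)*r)**(-2*s) for w in range(1, N + 1))\n    return 2*(mom + win)\n\ns, r = mp.mpf('1.3'), mp.mpf('1.7')\nprint(eps_c1(s, r))                               # direct sum\nprint(2*mp.zeta(2*s)*(r**(2*s) + r**(-2*s)))      # closed form, same convention\n\\end{verbatim}\nFor $c>2$ the analogous direct sum converges only for $\\re s > c-1$, in accordance with the footnote above.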
\nAltogether, therefore, the spectral decomposition of the partition function of a general Narain lattice CFT with $c>2$ takes the following form\n\\begin{empheq}[box=\\fbox]{align}\\label{eq:higherCSpectral}\n\tZ^{(c)}(\\tau;m) &= E_{c\\over 2}(\\tau) + {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}} ds\\, \\pi^{s-{c\\over 2}}\\Gamma\\left({c\\over 2}-s\\right)\\mathcal{E}^c_{{c\\over 2}-s}(m)E_s(\\tau)\\nonumber\\\\\n\t& + {3\\over \\pi}\\pi^{1-{c\\over 2}}\\Gamma\\left({c\\over 2}-1\\right)\\mathcal{E}^c_{{c\\over 2}-1}(m) + \\sum_{n=1}^\\infty \\frac{(Z^{(c)},\\nu_n)}{(\\nu_n, \\nu_n)}\\nu_n(\\tau).\n\\end{empheq}\n\nLet us emphasize one important feature here. The Eisenstein series $E_{c\\over 2}(\\tau)$, present in (\\ref{eq:higherCSpectral}) because the Narain partition functions are not square integrable (due to the identity operator), has recently been shown to have an elegant interpretation as the Narain partition function \\emph{averaged} over the moduli space $\\mathcal{M}_c$ with the measure inherited from the Zamolodchikov metric \\cite{Afkhami-Jeddi:2020ezh,Maloney:2020nni}. So the other terms on the right hand side of (\\ref{eq:higherCSpectral}) admit a natural interpretation as the deviation from the average. (We remind the reader that this average is only well-defined for $c>2$ \\cite{Afkhami-Jeddi:2020ezh,Maloney:2020nni}.) We will say more about this interpretation in Section \\ref{sec4}.\n\n\\subsection{On the optimal gap in the scalar sector}\n\nBefore moving on, we make a brief remark on how our methods may be useful in determining the largest possible gap to non-vacuum primary operators in Narain CFTs.\n\nIn the spectral representations of the Narain partition functions (\\ref{eq:c=1Spectral}), (\\ref{eq:c=2Spectral}), (\\ref{eq:higherCSpectral}), modular invariance is completely manifest but unitarity (positivity of the spectrum), compactness (discreteness of the spectrum) and integrality (degeneracies are integers) are obscured. This is in contrast to standard approaches to the modular bootstrap, where positivity is manifest, but modular invariance, compactness\\footnote{Although in numerical approaches to the modular bootstrap one typically demands positivity of the action of the linear functional on the contribution of the vacuum module to the modular crossing equation, in practice \\cite{Keller:2012mr,Friedan:2013cba,Collier:2016cls} it is often quite difficult to rule out noncompact solutions to the modular crossing equations that are spurious for the problems of interest. A typical example is the question of the optimal upper bound on the gap in the spectrum of twists in a compact 2d CFT. There is a simple argument \\cite{Collier:2016cls,Afkhami-Jeddi:2017idc} that shows the twist gap cannot exceed ${c-1\\over 12}$, which is saturated by the noncompact Liouville CFT. Numerical approaches cannot improve on this bound without a novel mechanism demanding the presence of a normalizable ground state and\/or a discrete spectrum of local operators (see \\cite{Collier:2016cls, Benjamin:2019stq} for discussions on this).} and integrality are not. \n\nA simple example of a problem for which the technology developed in this paper may be suitable is the question of the optimal upper bound on the gap in the spectrum of scalar $U(1)^c\\times U(1)^c$ primary operators, particularly in the large $c$ limit. The latter is a problem that evades even state-of-the-art numerical modular bootstrap analyses \\cite{Afkhami-Jeddi:2019zci,Hartman:2019pcd,Afkhami-Jeddi:2020hde}. 
For the purposes of attacking this question, it is convenient to introduce a counting function $N_j(\\Delta)$\n\\begin{equation}\n\tN_j(\\Delta) \\coloneqq \\int_0^\\Delta d\\Delta'\\, \\left(\\rho_j(\\Delta')-\\delta_{j,0}\\delta(\\Delta')\\right)\n\\end{equation}\nthat enumerates the total number of spin-$j$ primaries with dimension less than or equal to $\\Delta$. In particular, the scalar counting function $N_0(\\Delta)$ is dictated entirely by the overlap of the Narain partition function with the real analytic Eisenstein series, as the cusp forms have no scalar Fourier mode. From (\\ref{eq:higherCSpectral}) we can read off the counting function by integrating the scalar part, which gives:\n\\begin{equation}\n\\begin{aligned}\\label{eq:scalarCountingFunction}\n\tN_0(\\Delta) =& \\, {2\\pi^c\\zeta(c-1)\\over(c-1)\\Gamma\\left({c\\over 2}\\right)^2\\zeta(c)}\\Delta^{c-1} + {3\\over \\pi}{(2\\pi\\Delta)^{c\\over 2}\\over \\Gamma\\left({c\\over 2}+1\\right)}f_c(1)\\\\\n\t& \\, + {1\\over 4\\pi i}\\int_{\\re s={1\\over 2}}ds\\,\\left[{(2\\pi\\Delta)^{{c\\over 2}-s}\\over\\Gamma\\left({c\\over 2}-s+1\\right)}+{\\Lambda(1-s)\\over\\Lambda(s)}{(2\\pi\\Delta)^{{c\\over 2}-1 + s}\\over\\Gamma\\left({c\\over 2}+s\\right)}\\right]f_c(s),\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\tf_c(s) \\coloneqq (Z^{(c)},E_s).\n\\end{equation}\n\nThe existence of a gap in the spectrum implies that $f_c(s)$ has certain analytic properties in the complex $s$-plane. For sufficiently small $\\Delta$, we should be able to evaluate the counting function (\\ref{eq:scalarCountingFunction}) by deforming the $s$-contour away from the critical strip to infinity, obtaining zero. For example, in order to cancel the contribution of the first term in (\\ref{eq:scalarCountingFunction}), representing the continuous density of states of the average, we know that $f_c(s)$ must have a simple pole at $s={c\\over 2}$ with unit residue\n\\begin{equation}\n\t\\Res_{s={c\\over 2}}f_c(s) = 1.\n\\end{equation} \nDiscreteness of the spectrum implies that $\\frac{\\Lambda(1-s)}{\\Lambda(s)} f_c(s)$ must not have any other poles in the half-plane to the right of the critical line, as the existence of such a pole would lead to a continuous contribution to the counting function $N_0$. The existence of a scalar gap also implies that $f_c(s)$ has certain asymptotic properties as $s$ is taken to infinity, for example in the right half-plane. If we define $g_c(s)$ by rescaling\n\\begin{equation}\n\tg_c(s) \\coloneqq {\\Lambda(1-s)\\over \\Lambda(s)}{1\\over \\pi^{-({c\\over 2}-1+s)}\\Gamma\\left({c\\over 2}-1+s\\right)}f_c(s)\n\\end{equation}\nthen if the scalar gap is $\\Delta_*$, we learn that $g_c(s)$ must fall off as $s \\to\\infty$ in the right half $s$-plane as follows\n\\begin{equation}\n\tg_c(s) \\stackrel{s\\to\\infty}{\\sim} (2\\Delta_*)^{-s}.\n\t\\label{eq:gcsgap}\n\\end{equation}\nIf this is the case, then for any $\\Delta<\\Delta_*$ we can deform the $s$-contour and get $N_0(\\Delta<\\Delta_*)=0$. Then the question is: for Narain CFTs at fixed central charge, how large can $\\Delta_*$ be?\n\nTo get a feeling for how this works, it is instructive to consider some cases for which we already know the optimal bound on the gap.
For example, in the $c=1$ case, we know from (\\ref{eq:c=1EisensteinOverlap}) that\n\\begin{equation}\n\tg_1(s) = 2\\zeta(2s-1)\\left(r^{2s-1}+r^{1-2s}\\right) \\stackrel{s\\to\\infty}{\\approx} 2\\left(r^{2s-1}+r^{1-2s}\\right).\n\\label{eq:gcc1}\n\\end{equation}\nThe bound on the gap is optimized when this quantity is minimized, corresponding to $r=1$. For this choice of $r$, $g_1$ is asymptotically constant as $s$ is taken to infinity, corresponding to an optimal gap of $\\Delta_* = {1\\over 2}$, which is indeed saturated by the $c=1$ free boson at the self-dual radius.\\footnote{More generally, let us consider a $T$-duality fundamental domain that maximizes $r$, namely $r\\in [1, \\infty)$. Then (\\ref{eq:gcc1}) at large $s$ is $g_1(s) \\approx r^{2s}$, which when compared with (\\ref{eq:gcsgap}) gives $\\Delta_*^{(c=1)} = 1\/(2r^2)$.} Similarly, in the $c=2$ case, we know from (\\ref{eq:c=2InnerProduct}) that\n\\begin{equation}\n\tg_2(s) = 2\\zeta(2s)E_s(\\rho)E_s(\\sigma).\n\\end{equation}\nLet us assume $\\rho, \\sigma$ are in the usual fundamental domain of $\\mathbb{H}\/SL(2,\\mathbb Z)$ (which maximizes the imaginary part over the $SL(2,\\mathbb Z)$ orbits). Then as $\\re s$ is taken to infinity, $g_2$ is approximated by\n\\begin{equation}\n\tg_2(s) \\approx 2(\\rho_2\\sigma_2)^s.\n\\end{equation}\nComparing to (\\ref{eq:gcsgap}), this tells us that in a given Narain lattice CFT with $T^2$ target space,\n\\begin{equation}\n\t\\Delta_*^{(c=2)} = {1\\over 2\\rho_2\\sigma_2},\n\\end{equation} \nprovided $-\\frac12 < \\rho_1, \\sigma_1 \\leq \\frac12$ and $|\\rho|, |\\sigma| \\geq 1$. Since $\\rho$ and $\\sigma$ are valued in the fundamental domain, we know $\\rho_2,\\sigma_2 \\geq {\\sqrt{3}\\over 2}$. Therefore the optimal gap is $\\Delta_* = {2\\over 3}$. This is saturated by the $SU(3)_1$ WZW model \\cite{Collier:2016cls,Afkhami-Jeddi:2020ezh}, corresponding to the complexified K\\\"ahler and complex structure moduli at the $\\mathbb{Z}_3$ point, $\\rho = \\sigma = e^{2\\pi i\\over 3}$.\n\nMore generally, for $c>2$, we can make statements about the scalar gap as a function of the $O(c,c;\\mathbb{Z}) \\backslash O(c,c; \\mathbb R)\/O(c)\\times O(c)$ moduli. We conjecture that at large $\\re s$, $g_c(s)$ will take the simple analytic form $\\left((G^{-1})_{cc}\\right)^{s}$, corresponding via (\\ref{eq:gcsgap}) to a scalar gap $\\Delta_* = \\frac1{2(G^{-1})_{cc}}$, when written in variables that minimize $(G^{-1})_{cc}$ (i.e. the $T$-duality fundamental domain is chosen to minimize $(G^{-1})_{cc}$). The scalar gap would then be optimized at a cusp point in the target space moduli, corresponding to a specific rational CFT. It would be very interesting to have more explicit expressions at higher $c$, and to compare to candidate optimal Narain theories found by scanning moduli space in Table 3.1 of \\cite{Afkhami-Jeddi:2020ezh}.\n\n\\section{Application to general CFTs}\\label{sec4}\n\nWe now explore the role of spectral decomposition in general CFTs. \n\n\\subsection{Rendering the partition function square-integrable}\n \nUnlike the Narain case, there is an immediate obstruction to applying the spectral decomposition to generic, i.e. irrational, CFT partition functions.
For a CFT with extended chiral algebra $\\mathcal{A}$, we may define the quantity $c_{\\rm currents}\\geq 1$ as the effective central charge in the high-temperature asymptotics of the vacuum character of $\\mathcal{A}$:\n\\begin{equation}\\label{ccurr}\n\\log\\chi_{\\rm vac}^{\\mathcal{A}}(y \\rightarrow 0) \\sim {i\\pi\\over 12\\tau} c_{\\rm currents}.\n\\end{equation}\nThis quantity counts the number of currents modulo non-trivial null states. Due to the identity operator, the primary partition function of the CFT near the cusp behaves as (suppressing a power-law prefactor) \n\\e{zpcusp}{Z_p(y\\rightarrow\\infty) \\sim q^{-{c-c_{\\rm currents}\\over 12}}.}\nWhereas Narain CFTs obey $c=c_{\\rm currents}$, a generic CFT obeys $c>c_{\\rm currents}$, so \\eqref{zpcusp} diverges exponentially. Indeed, any {\\it light} primary operator, defined by the condition\n\\be\nh + \\overline{h} \\leq \\frac{c-c_{\\rm currents}}{12},\n\\label{eq:inequalityasdf}\n\\ee\nrenders $Z_p(\\tau)\\notin L^2(\\mathcal{F})$; moreover, those contributions strictly below the threshold are not even of ``slow growth'' near the cusp $y\\rightarrow \\infty$, in the sense of Zagier \\cite{zbMATH03796039}. \n\nWe henceforth take the CFTs in question to be compact, with Virasoro symmetry alone and $c>c_{\\rm currents}=1$, leaving implicit the straightforward extension to larger chiral algebras. The primary partition function is\n\\be\nZ_p(\\tau) = y^{1\/2} |\\eta(\\tau)|^2 Z(\\tau).\n\\label{eq:reducedpf}\n\\ee\nIt will be useful to introduce a primary partition function over light operators only:\n\\e{Zlight}{Z_L(\\tau) \\coloneqq y^{1\/2} \\sum_{\\substack{h,\\overline{h} \\\\ h + \\bar h\\, \\leq\\, {c-1\\over 12}}} q^{h-{c-1\\o24}}\\bar q^{\\bar h-{c-1\\o24}}}\nwhere each term is the contribution of a Virasoro primary operator of weight $(h,\\overline{h})$. At finite $c$, there are a finite number of such contributions. As it stands, (\\ref{eq:reducedpf}) does not have a Roelcke-Selberg spectral decomposition because $Z_L(\\tau) \\neq 0$, due to the vacuum and any other light operators.\n\nHowever, let us view this problem slightly differently. We can subtract all light primary operators in a modular-invariant way, such that the resulting function {\\it will} be in $L^2(\\mathcal{F})$. In particular, let us define \n\\e{Zspec}{Z_{\\rm spec}(\\tau) \\coloneqq Z_p(\\tau) - \\widehat Z_{L}(\\tau).}\nThe quantity $\\widehat Z_L(\\tau)$ is defined to be a {\\it modular completion} of $Z_L(\\tau)$. That is, given $Z_L(\\tau)$, one adds states to make it modular-invariant, \n\\begin{equation}\n\\widehat Z_L(\\gamma\\tau) = \\widehat Z_L(\\tau)\\,, \\quad \\gamma\\in SL(2,\\mathbb{Z}),\n\\end{equation}\nbut without adding any new light states, \n\\e{}{\\widehat Z_L(\\tau) - Z_L(\\tau) \\rightarrow 0 ~~\\text{as}~~ y\\rightarrow \\infty.}\nSubtracting this quantity from $Z_p(\\tau)$ as in \\eqref{Zspec}, the resulting $Z_{\\rm spec}(\\tau)$ {\\it is} square-integrable, and therefore admits a spectral decomposition of Roelcke-Selberg type, cf. \\eqref{eq:RoelckeSelberg}.\n\nThe modular completion $\\widehat Z_L(\\tau)$ of a given $Z_L(\\tau)$ is not unique. Indeed, mathematically, there are an infinite number of ways to modular complete, which differ by ``cuspidal functions,'' i.e. modular-invariant functions which vanish as $y \\rightarrow \\infty$.
One particular modular completion that seems privileged, and provides insight into the meaning of $Z_{\\rm spec}(\\tau)$, is to replace each light state by its $PSL(2,\\mathbb{Z})$ Poincar\\'e sum:\n\\e{ZLpoinc}{\\widehat Z_L(\\tau) = \\sum_{\\substack{h,\\overline{h} \\\\ h + \\bar h \\, \\leq\\, {c-1\\over 12}}} \\sum_{\\gamma\\in\\Gamma_\\infty\\backslash PSL(2,\\mathbb{Z})} \\text{Im}(\\gamma\\tau)^{1\/2} q_\\gamma^{h-{c-1\\o24}}\\overline q_\\gamma^{\\overline{h}-{c-1\\o24}}} \nwhere $q_\\gamma \\coloneqq \\exp(2\\pi i \\gamma \\tau)$ and $\\overline q_\\gamma \\coloneqq \\exp(-2\\pi i \\gamma \\bar \\tau)$. The Poincar\\'e sum of each term adds, upon regularization \\cite{Keller:2014xba, Benjamin:2020mfz}, states with $h \\geq {c-1\\over 24}$ and $\\overline{h} \\geq {c-1\\over 24}$ to the original state. See Appendix \\ref{app:generalizedEisenstein} for more details on the generalized Eisenstein series that appear when considering the modular completion via Poincar\\'e series of light characters. Part of the appeal of \\eqref{ZLpoinc} is that at large central charge, the Poincar\\'e sum is quite natural from the point of view of quantum gravity in AdS$_3$. In particular, given a light state, a Poincar\\'e sum generates contributions to the heavy spectral density that qualitatively reproduce the behavior of AdS$_3$ gravity in the small-$G_N$ expansion. That Poincar\\'e sums ``look'' gravitational has long been observed \\cite{Dijkgraaf:2000fq, Maloney:2007ud, Castro:2011zq, Keller:2014xba}; recent work has augmented this point of view \\cite{Alday:2019vdr, Benjamin:2020mfz,Afkhami-Jeddi:2020ezh,Maloney:2020nni,Cotler:2020ugk,Perez:2020klz, Maxfield:2020ale, Alday:2020qkm,Dymarsky:2020pzc,Meruliya:2021utr,Benjamin:2021wzr,Datta:2021ftn, Meruliya:2021lul,Ashwinkumar:2021kav,Dong:2021wot} in part due to the possibility that AdS$_3$\/CFT$_2$ may be understood as involving ensemble averages over CFTs, which assigns a physical interpretation to a continuous density of states. \n\nWith that said, there is no clear-cut candidate for a canonical modular completion. For example, a very interesting method of modular completion that leads to a slightly different heavy spectrum than that of the Poincar\\'e sum, based on ideas used in Rademacher expansions, was recently put forth in \\cite{Alday:2019vdr}. It is not obvious to us whether the modular completion using the Poincar\\'e or Rademacher formalism is a more ``natural'' candidate. One may instead wish for a modular completion of the contribution of a given light state to the partition function that preserves discreteness of the accompanying heavy spectrum. However, we are not aware of such a modular completion that also retains square-integrability.\\footnote{For example, iteratively subtracting powers of the modular $j$-function from $Z(\\tau)$ to eliminate the singularity at $y\\rightarrow\\infty$ necessarily involves negative powers, which in turn introduces new singularities elsewhere due to the zero of the $j$-function on the unit $\\tau$-circle.}\n\n\\subsection{Interpretation}\n\nWe are led to suggest the following perspective on the role of spectral decomposition in general 2d CFTs: \\textit{given a light spectrum, $Z_{\\rm spec}(\\tau)$ computes a deviation from average of the primary partition function $Z_p(\\tau)$, where the role of the ``average'' is played by $\\widehat Z_L(\\tau)$.}\n\nThis dovetails with our previous treatment of Narain CFTs, where ``average'' is taken to mean the literal ensemble average over Narain moduli space.
Recognizing $E_{c\\over 2}(\\tau)$ as the average primary partition function \\eqref{eq:stripoffcrap} of the $U(1)^c \\times U(1)^c$ Narain CFTs, we may rewrite \\eqref{eq:higherCSpectral} as\n\\es{eq:individualNarain}{Z^{(c)}(\\tau;m) - \\langle Z^{(c)}(\\tau;m) \\rangle =& \\, {1\\over 4\\pi i}\\int_{\\re s = {1\\over 2}} ds\\, \\pi^{s-{c\\over 2}}\\Gamma\\left({c\\over 2}-s\\right)\\mathcal{E}^c_{{c\\over 2}-s}(m)E_s(\\tau)\\\\\n\t& \\, + {3\\over \\pi}\\pi^{1-{c\\over 2}}\\Gamma\\left({c\\over 2}-1\\right)\\mathcal{E}^c_{{c\\over 2}-1}(m) + \\sum_{n=1}^\\infty \\frac{(Z^{(c)},\\nu_n)}{(\\nu_n,\\nu_n)}\\nu_n(\\tau).}\nThe right-hand side is $Z_{\\rm spec}^{(c)}(\\tau)$, obeying $\\langle Z_{\\rm spec}^{(c)}(\\tau)\\rangle =0$. So this is precisely of the form \\eqref{Zspec} with \n\\e{}{\\widehat Z_L(\\tau)=\\langle Z^{(c)}(\\tau;m) \\rangle=E_{c\\over 2}(\\tau)}\nwhere $E_{c\\over 2}(\\tau)$ is, we emphasize, the Poincar\\'e sum of the identity operator. \n\nFrom this point of view, it is rather interesting that $E_{c\\over 2}(\\tau)$ is the average Narain partition function. Suppose one were to average over Narain moduli space with a measure {\\it different} from the one defined by the Zamolodchikov metric. Then $\\widehat Z_L(\\tau)$ and $\\langle Z^{(c)}(\\tau) \\rangle$ would not necessarily be equal. This gives a novel perspective on why the Zamolodchikov metric is privileged, and motivates the Poincar\\'e sum as a natural modular completion: the average over moduli space, the Rankin-Selberg method, and the Poincar\\'e sum of the identity operator all coincide.\n\nIn the general holographic context with Virasoro symmetry, we view $\\widehat Z_L(\\tau)$ as an ``average'' in the sense of {\\it universality}. For the modular completion via Poincar\\'e sum, for example, $\\widehat Z_L(\\tau)$ captures universal contributions of light matter to the black hole spectrum. This takes the form of the Cardy entropy plus an infinite series of corrections from the full family of $SL(2,\\mathbb{Z})$ black holes \\cite{Maldacena:1998bw}. It encodes the semiclassical gravity approximation of the microstate counts of those black holes that form via collapse of light matter. The quantity $Z_{\\rm spec}(\\tau)$ therefore accesses the microstructure that lies beneath the coarse-grained approximation, by subtracting these contributions from the full partition function $Z_p(\\tau)$. Note that, unlike the Narain case reviewed above, we do not know how to ensemble average general Virasoro primary partition functions in the absence of moduli, and different modular completions $\\widehat Z_L(\\tau)$ give rise to different $Z_{\\rm spec}(\\tau)$ for a given theory. The suggestion of the Narain case is this: if there exists a formal notion of ``ensemble averaging'' over Virasoro CFTs without moduli, a natural definition would be one for which $\\langle Z_{\\rm spec}(\\tau) \\rangle =0$.\n\nIf $\\widehat Z_L(\\tau)$ is to be interpreted in the above way, it is natural to expect that it should satisfy some physical constraints.
For instance, in order to interpret $\\widehat Z_L(\\tau)$ as capturing universality of a (putative) consistent gravitational theory in AdS$_3$, one would like its inverse Laplace transform to be positive-definite.\\footnote{This need only hold for those light spectra consistent with all CFT axioms, in the spirit of the modular bootstrap; of course, it remains an open problem to determine the constraints on such spectra.} As currently formulated, this may not hold in all regions of $(h,\\overline{h})$ because the present application of harmonic analysis is firmly {\\it Euclidean}: CFT partition functions are finite everywhere in $\\mathcal{F}$ away from the cusp at $y\\rightarrow\\infty$, so square-integrability is sensitive to the spectrum of dimensions, $\\Delta$, but not to the spectrum of twists, $t \\coloneqq 2\\,\\text{min}(h,\\overline{h})$. As such, the modular completion $\\widehat Z_L(\\tau)$ may not capture certain universal properties of twist spectra. For example, if one chooses to compute $\\widehat Z_L(\\tau)$ via Poincar\\'e sum, the result will be positive-definite only if $Z_L(\\tau)$ includes an operator with $t \\leq \\frac{c-1}{16}$ \\cite{Benjamin:2019stq}. Also, independently, if a CFT includes an operator lying strictly below this bound, it also includes infinitely many other operators, organized into Regge trajectories of asymptotically large spin, with asymptotic twist less than $\\frac{c-1}{12}$ \\cite{Kusuki:2018wpa, Collier:2018exn}. Neither of these phenomena would be fully captured by the Poincar\\'e sum $\\widehat Z_L(\\tau)$, though other modular completions might plausibly do better. This is not a contradiction, but it does motivate the extension of $SL(2,\\mathbb{Z})$ harmonic analysis to {\\it Lorentzian} regimes.\n\n\\subsection{On half-wormholes in 2d CFT}\n\nThe above discussion suggests an analogy between structures in 2d CFT partition functions and recent observations about saddle points in holography. It has been shown that in a toy version of the SYK model, the partition function contains novel saddle points, dubbed ``half-wormholes'' \\cite{Saad:2021rcu}. The general picture of \\cite{Saad:2021rcu, shenker} is that the partition function of the large $N$ SYK model \\emph{without} disorder average is a sum of two saddle points: a disk-type saddle point, dual to a smooth black hole geometry in the bulk, and a half-wormhole, a ``noisy'' saddle point dual to some bulk geometry which gives small corrections to the black hole.\n\nWe can identify parallel structures in 2d CFT partition functions. In particular, at large central charge, $\\widehat Z_L(\\tau)$ captures the configurations continuously connected to saddle points, while $Z_{\\rm spec}(\\tau)$ is a certain 2d analog of the half-wormhole. This follows rather naturally from our earlier discussion, since $\\widehat Z_L(\\tau)$ captures black hole universality of semiclassical gravity, while $Z_{\\rm spec}(\\tau)$ encodes small corrections, i.e. the ``noise.'' The total partition function is the sum of the two. \n\nThis picture becomes especially sharp for the Narain theories whose partition functions are written, for example, in (\\ref{eq:higherCSpectral}). In this case $\\widehat Z_L(\\tau)$ is precisely the averaged partition function $E_{c\\over 2}(\\tau)$, while the remaining terms describe $Z_{\\rm spec}(\\tau)$. As is apparent from (\\ref{eq:higherCSpectral}), many of the terms in $Z_{\\rm spec}(\\tau)$ can be written in a way which suggests their origin as a sum over geometries.
For example, the Eisenstein series $E_s(\\tau)$ is a Poincar\\'e series (see equation (\\ref{eq:eisensteinDefinition})) which can be interpreted as coming from a sum over geometries (handlebodies) labelled by $\\Gamma_\\infty \\backslash SL(2,\\mathbb{Z})$ in the usual way. This suggests, therefore, that we interpret these terms as the analog of half-wormhole contributions in the theory of gravity dual to the Narain ensemble. It would be interesting, of course, to give a more explicit bulk interpretation of these contributions.\n\nOne crucial feature of the half-wormhole solutions is that they restore factorization. \nIn particular, in the computation of the two-boundary path integral of individual instantiations in the model of \\cite{Saad:2021rcu} (see also \\cite{Mukhametzhanov:2021nea}), i.e. before disorder average, the half-wormhole terms combine with the wormhole (present in the disorder-averaged SYK model) in such a way as to restore factorization of the square of the partition function. We can, at least in principle, explore this mechanism explicitly in Narain CFTs. We know from \\cite{Maloney:2020nni} that the two-point function of the torus partition function averaged over Narain moduli space can be written in terms of a degree-two Eisenstein series (see \\cite{Collier:2021rsn} for more details on the averaged two-point function)\n\\begin{equation}\n\t\\langle Z^{(c)}(\\tau_1)Z^{(c)}(\\tau_2)\\rangle = E_{c\\over 2}^{(2)}(\\Omega) = \\sum_{\\gamma\\in P \\backslash Sp(4,\\mathbb{Z})}\\left(\\det\\im\\gamma\\Omega\\right)^{c\\over 2} = \\sum_{``(C,D) = 1\"}{(y_1y_2)^{c\\over 2}\\over |\\det\\left(C\\Omega+D\\right)|^{c}},\n\\end{equation}\nwhere $\\Omega = \\diag(\\tau_1,\\tau_2)$ is a diagonal element of the degree-two Siegel upper half-space, and the rest of the technical details of the above formula are unimportant for present purposes. A natural generalization of (\\ref{eq:individualNarain}) would be something like \n\\begin{equation}\\label{eq:narainProduct}\n\tZ^{(c)}(\\tau_1;m)Z^{(c)}(\\tau_2;m) = \\langle Z^{(c)}(\\tau_1)Z^{(c)}(\\tau_2)\\rangle + Z_{\\rm spec}^{(g=2)}(\\tau_1,\\tau_2;m)\n\\end{equation}\nwhere $Z_{\\rm spec}^{(g=2)}$ admits an $Sp(4,\\mathbb{Z})$ spectral decomposition. The left-hand side is computed by squaring (\\ref{eq:higherCSpectral}), while the right-hand side is what is suggested by \\cite{Saad:2021rcu} and our perspective on spectral decomposition, now at genus two. It is a concrete problem to understand whether such an equality makes sense, and how to perform $Sp(4,\\mathbb{Z})$ harmonic analysis, given the expressions in Section \\ref{sec:narain} of this paper (see \\cite{Pioline:2014bra,Florakis:2016boz}, where the Rankin-Selberg method has been generalized to higher-genus modular functions). \n\n\n\\subsection{Spectral determinacy}\\label{secspecdet}\n\nThe spectral methods herein shed light on the question of {\\it spectral determinacy} in compact 2d CFT; that is, whether the complete spectrum is fully determined once part of it is fixed. Indeed, applying these methods to primary partition functions readily gives the following result:\n\n\\begin{quotation}\n\\noindent {\\it The entire spectrum of a 2d CFT is uniquely fixed by the light spectrum, the scalar spectrum, and the spin $j=1$ spectrum.\n}\n\\end{quotation}\n\n\\noindent This is depicted in a Chew-Frautschi plot in Figure \\ref{specfig} for Virasoro CFTs, but the result applies for any extended chiral algebra, with ``light'' primaries defined by \\eqref{eq:inequalityasdf}.
The proof -- indeed, the algorithm for constructing the spectrum -- may be simply stated as follows:\n\n{\\bf i)} Given the light spectrum, form $Z_{\\rm spec}(\\tau)$ defined in \\eqref{Zspec}. Being square-integrable, $Z_{\\rm spec}(\\tau)$ admits a spectral decomposition \\eqref{eq:RoelckeSelberg}. \n\n{\\bf ii)} The $j=0$ spectrum of $Z_{\\rm spec}(\\tau)$ fixes the overlap $(Z_{\\rm spec}, E_s)$, via the RS transform \\eqref{eq:RankinSelberg}. This in turn fixes all higher-spin data coming from the Eisenstein part of $Z_{\\rm spec}(\\tau)$.\nThis step hinges on the fact that cusp forms have no scalar support, cf. (\\ref{eq:nuexpansionasdf}). \n\n{\\bf iii)} The $j=1$ spectrum of $Z_{\\rm spec}(\\tau)$, after subtracting the contribution from the Eisenstein part, fixes the overlap $(Z_{\\rm spec},\\nu_n)$, and hence the remaining cusp form contribution.\\footnote{In order to read off the cusp form contribution, we must invert an integral transform against the modified Bessel function of the second kind. This is done using orthogonality relations given in e.g. equation (3.22) of \\cite{Whittaker}. In general, if we define the subtracted spin-$j$ spectrum,\n\\be\\label{reducedspinj}\nZ_{\\rm spec}^{\\text{spin-}j}(y) \\coloneqq \\int_{-1\/2}^{1\/2} dx \\, e^{-2\\pi i j x} \\(Z_{\\rm{spec}}(\\tau) - \\frac{1}{4\\pi i}\\int_{\\re s=\\frac12} ds (Z_{\\text{spec}}, E_s) E_s(\\tau)\\),\n\\ee\nthen we can read off the cusp form support by (assuming $a_j^{(n)} \\neq 0$)\n\\be\n\\frac{(Z_{\\text{spec}}, \\nu_n)}{(\\nu_n, \\nu_n)} = - \\frac{4 R_n \\sinh(\\pi R_n)}{\\pi a_j^{(n)}} \\lim_{\\epsilon\\rightarrow 0} \\frac{1}{\\log \\epsilon} \\int_{\\epsilon}^{\\infty} \\frac{dy}{y^{3\/2}} K_{i R_n}(2\\pi jy) Z_{\\rm spec}^{\\text{spin-}j}(y).\n\\ee} \n\nThis proof relies on an important assumption, which is that the cuspidal eigenspectrum is ``simple'': that is, the multiplicity of cusp forms with spectral parameter $R_n$ is no greater than one. This is unproven, but widely held to be true (e.g. \\cite{cusp82,hejh,sque}). A direct implication of such non-degeneracy is that all cusp forms are not just eigenfunctions of the Laplacian, but also eigenfunctions of all Hecke operators $T_j$.\\footnote{The assumption of a simple eigenspectrum, or a restriction to Maass cusp forms that are eigenfunctions of all Hecke operators (so-called ``Hecke-Maass forms''), is often an input in derivations of various theorems, see e.g. \\cite{quesound,Ghosh_2013}. The best known bound on the multiplicity $d(R_n)$ is, up to an overall constant, $d(R_n) \\lesssim R_n\/\\log R_n$ as $R_n\\rightarrow\\infty$, which is rather far from one \\cite{SLetter}.} Consequently, the spin $j=1$ coefficients $a_1^{(n)}$ of the cusp forms are all non-vanishing, the proof of which we review in Appendix \\ref{subApp:cusp}. It is suggestive that our spectral determinacy result relies not just on the harmonic analysis of modular functions, but on the number-theoretic structure of these eigenfunctions related to Hecke operators. We suspect that Hecke operators may have an important role to play in future studies of 2d CFT.\n\nIn fact, we can derive a somewhat more general result, subject to a further minor assumption: that the entire spectrum of a 2d CFT is uniquely fixed by the light spectrum, the scalar spectrum, and the spectrum of any fixed integer spin $j>0$. The argument is exactly as given above, except that one now requires that the coefficients $a_j^{(n)}$ are all non-vanishing. When $j>1$ this cannot be easily proven, but it is certainly very plausible.
Indeed, as reviewed in the next subsection, the Fourier coefficients $a_j^{(n)}$ (for prime spin $j$) are essentially random real numbers distributed according to the semicircle law. Thus, although it is not rigorously proven that they are all non-zero, it is almost certainly true. \n\nIt is interesting to contrast these results on spectral determinacy with those in narrower classes of CFTs. In holomorphic CFTs, the heavy spectrum $h>{c\\over 24}$ is fully determined by its complement because the partition function is a weakly holomorphic modular form of $SL(2,\\mathbb{Z})$ \\cite{rademacher1943expansion}. In rational CFTs, the twist spectrum $t> {c\\over 12}$ is fully determined by its complement, this time due to properties of vector-valued modular forms \\cite{Kaidi:2020ecu}. Note that spectral determinacy in $\\Delta$ also applies to rational CFTs: performing the algorithm with respect to the full partition function $Z(\\tau)$ rather than to $Z_p(\\tau)$, whose form is obfuscated by non-trivial null states, yields essentially the same result, marginally weaker due to the lack of distinction between primaries and descendants. \n\nPlaced in a wider CFT context, it is an interesting and curious feature of the above result that all spins $j>0$ are on equal footing (at least if the conjecture that the $a_j^{(n)}$ are all non-vanishing holds). The Lorentzian inversion formula and non-perturbative Regge bound of \\cite{Caron-Huot:2017vep} imply that in $d$-dimensional CFTs, all primary operators of integer spin $j>0$ lie on analytic Regge trajectories. (For CFTs that are opaque in the sense of \\cite{Caron-Huot:2020ouj}, the bound is instead $j>1$.) Their OPE data are subtly intertwined: one may not dial dimensions or structure constants of any finite number of operators at will. We are finding a more specific result for Virasoro primary degeneracies in 2d CFTs, at least once all light primaries are accounted for: not only are the spin $j>0$ degeneracies {\\it intertwined}, but the degeneracies of all spins $j>0$ are entirely {\\it determined} by those of a single spin (in addition to the scalar data used to fix the Eisenstein contribution, cf. \\eqref{reducedspinj}). As for $d$-dimensional Regge trajectories, the scalar data is treated separately. It would be of great interest to understand 2d CFT spectral determinacy from a conformal Regge theory point of view. \n\nIn fact, it may be the case that the nonzero spin data need not be specified as extra input. If two compact CFTs have identical light primaries and scalar primaries, then their difference of spectra is a discrete sum of delta functions which can be written entirely as a linear combination of Maass cusp forms. If one can show that this is impossible, i.e. that no linear combination of Maass cusp forms can give a function whose inverse Laplace transform is a sum of delta functions, this would complete the argument. Similar determinacy arguments emphasizing the strong constraints implied by compactness were given in \\cite{Kaidi:2020ecu}. As we discuss in the next subsection, this impossibility is a reasonable expectation because cusp forms are, in a precise sense we will articulate there, chaotic.
We do not know how to prove the above statement and leave it for future study.\\footnote{An observation that might suggest an alternate version of spectral determinacy is that the logarithm of the partition function can be made square-integrable without subtracting a modular completion of light states, assuming the partition function is positive everywhere in the fundamental domain. In particular, $\\log Z(\\tau)$ has a linear divergence as $y \\rightarrow \\infty$, which can be subtracted off in a modular-invariant way by an appropriate factor multiplying $\\widehat E_1(\\tau)$:\n\\be\n\\log Z(\\tau) - \\frac{2\\pi c}{12} \\widehat E_1(\\tau) \\in L^2(\\mathcal{F}).\n\\label{eq:logggg}\n\\ee\nThe meaning and utility of this are unclear. It may nevertheless be interesting to consider its spectral decomposition. Similarly, although the average of the $c=2$ Narain primary counting partition function diverges, the average of its logarithm converges. It is not clear to us how to compute this other than numerically. } \n\n\n\n\\subsection{On chaos and cusp forms}\\label{subsec:chaosAndCusps}\n\nOur discussion of $Z_{\\rm spec}(\\tau)$ suggests that it is probing a certain fine structure of the spectrum of CFTs, the noise above the background of universal features. Here we will explore the idea that $Z_{\\rm spec}(\\tau)$ --- more specifically, the presence or absence of Maass cusp forms --- is a probe of chaos in the spectrum of the CFT. Of course, that cusp forms participate in the spectral decomposition of the Narain partition functions (including at rational points in moduli space) prevents us from taking this idea literally as stated. Nevertheless, both the distribution of cusp form eigenvalues $R_n$, and the Fourier coefficients $a_j^{(n)}$ of a fixed cusp form $\\nu_n$, are known to exhibit certain forms of chaos. This has been explored in the mathematics literature, see e.g. \\cite{sarnak, PhysRevLett.69.2188,1993MaCom..61..245H,Steil:1994ue, Sarnak_1987}.\n\nLet us first address the eigenvalues $R_n$. Unlike random matrix ensembles (GUE, GOE, etc.), the eigenvalues at asymptotically large $n$ obey a Poisson distribution \\cite{sarnak, PhysRevLett.69.2188}. The eigenvalues are the (discrete part of the) quantum mechanical energy spectrum of a particle propagating on the fundamental domain $\\mathcal{F}$ of $PSL(2,\\mathbb{Z})$ with Hamiltonian given by the Laplacian $-\\Delta_\\tau$, a system that is known to be strongly chaotic. This is known as ``arithmetic chaos'' \\cite{PhysRevLett.69.2188,sarnak,Bolte:1993ur}, terminology that is due to the existence of an infinite number of operators that commute with the Hamiltonian (the Hecke operators (\\ref{eq:heckeDefinition})) and establish a deep connection with number theory.\n\nThe Fourier coefficients $a_j^{(n)}$ of the cusp forms themselves also exhibit properties related to chaos. As discussed in Appendix \\ref{subApp:cusp}, the Fourier coefficients are the eigenvalues of the cusp forms under the action of the Hecke operators $T_j$, and the cusp forms are conventionally normalized so that the spin-one coefficient is unity, $a_1^{(n)} = 1$. Although this normalization of the cusp forms may seem contrived from a physical point of view, it turns out to be quite natural for a number of purposes. For example, the \\textbf{Ramanujan-Petersson conjecture} (as stated e.g. 
in \\cite{Terras_2013,Sarnak_1987}) states that the absolute values of the Fourier coefficients with prime $j$ are bounded\n\\begin{equation}\n |a^{(n)}_j| \\le 2.\n\\end{equation}\nIt turns out that statistical properties of the Fourier coefficients of the cusp forms can also be naturally stated with this normalization. For example, if one considers the sequence of Fourier coefficients $\\{a^{(n)}_j\\}$ for \\emph{fixed} prime $j$ ordered by increasing eigenvalue $R_n$, then Sarnak has shown that they are equidistributed with respect to the following measure \\cite{Sarnak_1987}\n\\begin{equation}\n d\\mu_j(t) = {(j+1)\\sqrt{4-t^2}\\over 2\\pi \\left[\\left(j^{\\frac12}+j^{-\\frac12}\\right)^2-t^2\\right]}dt,~ |t|<2.\n\\end{equation}\nNote that as $j\\to \\infty$, this measure is asymptotically Wigner's semicircle. Moreover, the \\textbf{Sato-Tate conjecture} states that one can reverse the limits $j\\to\\infty$ and $n\\to\\infty$ and arrive at the same conclusion: for a \\emph{fixed} cusp form, its prime Fourier coefficients are equidistributed with respect to a measure that is asymptotically Wigner's semicircle. This has been extensively checked numerically for the cusp forms with lowest-lying eigenvalues \\cite{1993MaCom..61..245H,Steil:1994ue}.\n\nIt is tempting to speculate that this notion of chaos is related to spectral chaos in CFTs. It is, however, not clear how to make this relation precise. As shown previously, the Narain CFTs for $c>1$ all have cusp form support, though with a miraculously simple structure at e.g. $c=2$ (see (\\ref{eq:eight})). We can also look at some examples of rational CFTs and see whether there is any pattern in the behavior of their cusp form support. Let us compute the following quantity for the $(p,q)$ Virasoro minimal models:\n\\be\ny^{1\/2}|\\eta(\\tau)|^2 Z_{(p,q)}(\\tau).\n\\ee\nThis is somewhat unnatural -- $|\\eta(\\tau)|^2$ does not count Verma module descendants, due to the non-trivial null states -- but we proceed nevertheless. We now recall that partition functions for all Virasoro minimal models, diagonal or not, may be written as linear combinations of $Z^{(c=1)}(\\tau;r)$ of different radii \\cite{DiFrancesco:1987gwq}. For example, diagonal invariants obey\n\\be\nZ^{\\rm diag}_{(p,q)}(\\tau) = {\\frac12}\\(Z^{(c=1)}\\(\\tau;\\sqrt{pq}\\) - Z^{(c=1)}\\(\\tau;\\sqrt{p^{-1}q}\\)\\).\n\\ee\nIt follows from (\\ref{eq:c=1Spectral}) that the above quantity does not have cusp forms in its spectral decomposition. The fact that the cusp forms do not show up for Virasoro minimal models seems to be an accident of these particular CFTs, however. For instance, we have verified numerically that $y|\\eta(\\tau)|^4$ times the partition function of the lowest unitary $W_3$ minimal model does have cusp forms in its decomposition. It remains unclear to us what the precise connection is between the Maass cusp form overlap of the partition function and quantum chaos. \n\n\n\\section*{Acknowledgments}\nWe are grateful to Fernando Alday, Cyuan-Han Chang, Tolya Dymarsky, Michael Green, Tom Hartman, Jeff Harvey, Peter Humphries, Yuya Kusuki, Ying-Hsuan Lin, Hirosi Ooguri, Boris Pioline, Sylvain Ribault, Steve Shenker, Andrew Sutherland, Yifan Wang, Xinan Zhou, and especially Herman Verlinde for very helpful discussions. NB is supported in part by the Simons Foundation Grant No. 488653. ALF is supported by the US Department of Energy Office of Science under Award Number DE-SC0015845, the Simons Collaboration on the Non-Perturbative Bootstrap, and a Sloan Foundation fellowship.
AM is supported by the Simons Foundation Grant No. 385602 and the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number SAPIN-2020-00047. EP is supported by the World Premier International Research Center Initiative,\nMEXT, Japan, by the U.S. Department of Energy, Office of Science, Office of High Energy\nPhysics, under Award Number DE-SC0011632, and by ERC Starting Grant 853507.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent years there has been considerable interest in (2+1)-dimensional soliton equations [1-2]. The study of these equations has thrown up new ideas in soliton theory, since they are much richer than their (1+1)-dimensional counterparts. In particular, the introduction of exponentially localized structures (dromions) has triggered renewed interest in these integrable equations [3-12]. An especially important subject related to the study of nonlinear differential equations (NLDE) is their singularity structure [13-14]. The singularity structure analysis appears as a systematic procedure for constructing B${\\ddot a}$cklund, Darboux and Miura transformations, Lax representations, different types of solutions, etc., of a given NLDE [14]. At the same time, the Painleve test allows us to identify integrable equations (see, e.g., [3] and refs therein). Notable amongst (2+1)-dimensional soliton equations are the Davey-Stewartson (DS) equation, the Zakharov equations (ZE), the Nizhnik-Novikov-Veselov equation, the Ishimori equation, the Kadomtsev-Petviashvili equation and so on.\n\nIn this paper, we consider the following ZE\n$$\niq_{t}+M_{1}q+vq=0 \\eqno(1a)\n$$\n$$\nip_{t}-M_{1}p-vp=0 \\eqno(1b)\n$$\n$$\nM_{2} v = -2 M_{1} (pq) \\eqno (1c)\n$$\nwhere\n$$\nM_1= \\alpha ^2\\frac{\\partial ^2}{\\partial y^2}+4\\alpha (b-a)\\frac{\\partial^2}\n {\\partial x \\partial y}+4(a^2-2ab-b)\\frac{\\partial^2}{\\partial x^2},\n$$\n$$\nM_2=\\alpha^2\\frac{\\partial^2}{\\partial y^2} -2\\alpha(2a+1)\\frac{\\partial^2}\n {\\partial x \\partial y}+4a(a+1)\\frac{\\partial^2}{\\partial x^2}.\n$$\nThis equation was perhaps first introduced in [5] and is integrable. Equation (1) contains several important particular cases.\n\n(i) For $a=b=-\\frac{1}{2}$, it yields the DS equation\n$$\niq_t + q_{xx} + \\alpha^{2}q_{yy} + vq = 0 \\eqno(2a)\n$$\n$$\n\\alpha^{2}v_{yy} - v_{xx}=-2(\\alpha^{2}(pq)_{yy} + (pq)_{xx}). \\eqno(2b)\n$$\n\n(ii) When $a=b=-1$, we obtain the following equation\n$$\niq_{t}+ q_{YY} + vq = 0 \\eqno(3a)\n$$\n$$\nip_{t}-p_{YY} - vp = 0 \\eqno(3b)\n$$\n$$\nv_{X} + v_{Y} + 2(pq)_{Y} =0 \\eqno(3c)\n$$\nwhere $X=x\/2, \\quad Y = y\/\\alpha $.\n\n(iii) If $a=b=-1$ and $X=t$, then equation (1) reduces to the following (1+1)-dimensional Yajima-Oikawa equation [19]\n$$\niq_{t}+ q_{YY} + vq = 0 \\eqno(4a)\n$$\n$$\nip_{t}-p_{YY} - vp = 0 \\eqno(4b)\n$$\n$$\nv_{t} + v_{Y} + 2(pq)_{Y} =0 \\eqno(4c)\n$$\nand so on [16].\n\nEven though the ZE (1) is known to be completely integrable, its Painleve property has not yet been established. Also, the interesting question naturally arises whether equation (1) admits dromion solutions as well. In this paper, following Lakshmanan and coworkers (see, e.g. [3] and references therein), we address these problems: we carry out the singularity structure analysis and confirm the Painleve nature of equation (1). We also deduce its bilinear form from the Painleve analysis.
Next we construct soliton and dromion solutions using the Hirota method. We also show that the Fokas equation (FE) is a particular case of the ZE with $a=-\\frac{1}{2}, \\alpha^{2} = 1$.\n\nThe present work falls into seven parts. In section II, we present equivalent forms of the ZE (1) and study its properties. In section III, we carry out the singularity structure analysis of equation (1) and confirm its Painleve nature. We then obtain the Hirota bilinear form directly from the Painleve analysis in section IV. In section V, we generate the simplest one-soliton solution (1-SS), as well as the dromion and 1-rational solutions. We discuss a connection between the ZE and the FE in section VI. The associated integrable spin systems are presented in section VII. Section VIII contains a short discussion of the results.\n\n\\section{Lax representation and equivalent forms}\n\nEquation (1) has the following Lax representation [5]\n$$\n\\alpha \\Psi_y =2B_{1}\\Psi_x + B_{0}\\Psi \\eqno(5a)\n$$\n$$\n\\Psi_t=4iC_{2}\\Psi_{xx}+2C_{1}\\Psi_x+C_{0}\\Psi \\eqno(5b)\n$$\nwith\n$$\nB_{1}= \\pmatrix{\na+1 & 0 \\cr\n0 & a\n},\\quad\nB_{0}= \\pmatrix{\n0 & q \\cr\np & 0\n}\n$$\n$$\nC_{2}= \\pmatrix{\nb+1 & 0 \\cr\n0 & b\n},\\quad\nC_{1}= \\pmatrix{\n0 & iq \\cr\nip & 0\n},\\quad\nC_{0}= \\pmatrix{\nc_{11} & c_{12} \\cr\nc_{21} & c_{22}\n}\n$$\n$$\nc_{12}=i[2(2b-a+1)q_{x}+i\\alpha q_{y}], \\quad\nc_{21}=i[2(a-2b)p_{x}-i\\alpha p_{y}]\n$$\nand $v=i(c_{22}-c_{11})$. Here the $c_{jj}$ are the solutions of the following equations\n$$\n2(a+1) c_{11x}- \\alpha c_{11y} = i[2(2b-a+1)(pq)_{x} + \\alpha (pq)_{y}]\n$$\n$$\n2ac_{22x}-\\alpha c_{22y} = i[2(a-2b)(pq)_{x} - \\alpha (pq)_{y}].\n$$\n\nFor our subsequent algebra, the ZE in the form (1) is rather cumbersome, so it makes sense to look for other, more convenient and elegant forms of equation (1). The first form we obtain from the compatibility condition of equations (5). We have\n$$\niq_{t}+M_{1}q+i(c_{22}-c_{11})q=0 \\eqno(6a)\n$$\n$$\nip_{t}-M_{1}p-i(c_{22}-c_{11})p=0 \\eqno(6b)\n$$\n$$\n2(a+1) c_{11x}- \\alpha c_{11y} = i[2(2b-a+1)(pq)_{x} + \\alpha (pq)_{y}]\n\\eqno (6c)\n$$\n$$\n2ac_{22x}-\\alpha c_{22y} = i[2(a-2b)(pq)_{x} - \\alpha (pq)_{y}].\n\\eqno (6d)\n$$\nNow, if we introduce the following transformations\n$$\nV^{\\prime} = c_{22} - i(2b+1)pq , \\quad U^{\\prime} = c_{11} -i(2b+1) pq\n\\eqno (7)\n$$\nthen equation (6) takes the form\n$$\niq_{t}+M_{1}q+i(V^{\\prime} - U^{\\prime}) q = 0 \\eqno(8a)\n$$\n$$\nip_{t}-M_{1}p-i(V^{\\prime} - U^{\\prime}) p = 0 \\eqno(8b)\n$$\n$$\n2a V^{\\prime}_{x}- \\alpha V^{\\prime}_{y} = -2ib[2(a+1)(pq)_{x} -\n\\alpha (pq)_{y}] \\eqno (8c)\n$$\n$$\n2(a+1)U^{\\prime}_{x}-\\alpha U^{\\prime}_{y} = -2i(b+1)[2a(pq)_{x} -\n\\alpha (pq)_{y}]. \\eqno (8d)\n$$\nLet us now rewrite this equation in the following form\n$$\niq_t+ (1 + b)q_{\\xi \\xi } - b q_{\\eta \\eta } + [2bV-2(b+1)U]q = 0 \\eqno(9a)\n$$\n$$\nip_t - (1 + b)p_{\\xi \\xi } + b p_{\\eta \\eta } - [2bV-2(b+1)U]p = 0 \\eqno(9b)\n$$\n$$\nV_{\\xi} = (pq)_{\\eta} \\eqno (9c)\n$$\n$$\nU_{\\eta} = (pq)_{\\xi} \\eqno (9d)\n$$\nwhere $U, V, \\xi$ and $\\eta $ are defined by\n$$\nU^{\\prime} = -2i(b+1)U, \\quad V^{\\prime} = -2ibV, \\quad\n\\xi = \\frac{x}{2} + \\frac{a+1}{\\alpha}y, \\quad \\eta = -\\frac{x}{2} -\n\\frac{a}{\\alpha}y. \\eqno(10)\n$$\nHaving this form of the ZE, we are in a convenient position to explore its singularity structure.
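As a quick symbolic check of the change of variables (10), one can treat $\\partial_\\xi$ and $\\partial_\\eta$ as commuting symbols and verify that the operators $M_1$ and $M_2$ diagonalize as claimed; the following minimal sympy sketch (our notation, not part of the original analysis) does this:\n\\begin{verbatim}\nimport sympy as sp\n\na, b, alpha = sp.symbols('a b alpha')\nDxi, Deta = sp.symbols('Dxi Deta')   # stand-ins for d\/dxi and d\/deta\n\n# chain rule for xi = x\/2 + (a+1)y\/alpha, eta = -x\/2 - a*y\/alpha\nDx = sp.Rational(1, 2)*(Dxi - Deta)\nDy = ((a + 1)*Dxi - a*Deta)\/alpha\n\nM1 = alpha**2*Dy**2 + 4*alpha*(b - a)*Dx*Dy + 4*(a**2 - 2*a*b - b)*Dx**2\nM2 = alpha**2*Dy**2 - 2*alpha*(2*a + 1)*Dx*Dy + 4*a*(a + 1)*Dx**2\n\nprint(sp.expand(M1))   # expected: (b + 1)*Dxi**2 - b*Deta**2\nprint(sp.expand(M2))   # expected: Dxi*Deta\n\\end{verbatim}\nin agreement with the operators appearing in (9) and in (11) below.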
Note that in terms of $\\xi, \\eta$, equation (1) takes the form\n$$\niq_t+ (1 + b)q_{\\xi \\xi } - b q_{\\eta \\eta } + vq = 0 \\eqno(11a)\n$$\n$$\nip_t - (1 + b)p_{\\xi \\xi } + b p_{\\eta \\eta } - vp = 0 \\eqno(11b)\n$$\n$$\nv_{\\xi \\eta } = -2[(1+ b) (pq)_{\\xi \\xi} - b(pq)_{\\eta \\eta}]. \\eqno(11c)\n$$\nIn particular, setting $b=0$ in this equation, we get the other ZE [5]\n$$\niq_t+ q_{\\xi \\xi } + vq = 0 \\eqno(12a)\n$$\n$$\nip_t - p_{\\xi \\xi } - vp = 0 \\eqno(12b)\n$$\n$$\nv_{\\eta } = -2 (pq)_{\\xi }. \\eqno(12c)\n$$\n\n\\section{Singularity structure analysis}\n\nIn order to carry out a singularity structure analysis, following [3], we effect a local Laurent expansion in the neighbourhood of a noncharacteristic singular manifold $\\phi(\\xi, \\eta, t) = 0,\n\\quad (\\phi_{\\xi}, \\phi_{\\eta}, \\phi_{t} \\ne 0)$. We assume the leading orders of the solutions of equation (9) to take the form\n$$\nq=q_{0}\\phi^{m}, \\quad p=p_{0}\\phi^{n}, \\quad V=V_{0}\\phi^{\\gamma},\n\\quad U=U_{0}\\phi^{\\delta} \\eqno(13)\n$$\nwhere $q_{0}, \\quad p_{0}, \\quad V_{0}$ and $U_{0}$ are analytic functions of $(\\xi, \\eta, t)$. In (13), $ m, n, \\gamma$ and $\\delta$ are integers (if they exist) to be evaluated. Substituting expressions (13) into equation (9) and balancing the most dominant terms, we get\n$$\nm = n = - 1, \\quad \\gamma = \\delta = - 2 \\eqno(14)\n$$\nand the following equations\n$$\np_{0}q_{0} = \\phi_{\\xi}\\phi_{\\eta}, \\quad V_{0} = \\phi^{2}_{\\eta},\n\\quad U_{0} = \\phi^{2}_{\\xi}.\n\\eqno(15)\n$$\nTo evaluate the resonances, we consider the Laurent series of the solutions\n$$\nq=\\sum_{j=0}q_{j}\\phi^{j-1}, \\quad\np=\\sum_{j=0}p_{j}\\phi^{j-1}, \\quad\nV=\\sum_{j=0}V_{j}\\phi^{j-2}, \\quad\nU=\\sum_{j=0}U_{j}\\phi^{j-2}. \\eqno(16)\n$$\nThen we substitute these expansions into equation (9) and equate the coefficients of $(\\phi^{j-3}, \\phi^{j-3}, \\phi^{j-3}, \\phi^{j-3})$ to zero, to give\n$$\n\\left ( \\begin{array}{cccc}\nj(j-3)[(b+1)\\phi_{\\xi}^{2}-b\\phi_{\\eta}^{2}] & 0 & 2bq_{0} &-2(b+1)q_{0} \\\\\n0 & j(j-3)[(b+1)\\phi_{\\xi}^{2}-b\\phi_{\\eta}^{2}] & 2bp_{0} & -2(b+1)p_{0} \\\\\n(j-2)p_{0}\\phi_{\\eta} & (j-2)q_{0}\\phi_{\\eta} & - (j-2)\\phi_{\\xi} & 0 \\\\\n(j-2)p_{0}\\phi_{\\xi} & (j-2)q_{0}\\phi_{\\xi} & 0 & - (j-2)\\phi_{\\eta}\n\\end{array} \\right )\n\\left ( \\begin{array}{c}\nq_{j} \\\\\np_{j} \\\\\nV_{j} \\\\\nU_{j}\n\\end{array} \\right )\n= 0 \\eqno(17)\n$$\nFrom the condition for the existence of nontrivial solutions to equation (17), we get the resonance values as\n$$\nj = -1, 0, 2, 2, 3, 4. \\eqno(18)\n$$\n\nObviously, the resonance at $j = - 1$ represents the arbitrariness of the singularity manifold $\\phi(\\xi, \\eta, t) = 0$.
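The resonance values (18) can also be checked symbolically. A minimal sympy sketch (our notation) builds the matrix in (17), imposes the leading-order relation $p_{0}q_{0}=\\phi_{\\xi}\\phi_{\\eta}$ from (15), and factors the determinant in $j$:\n\\begin{verbatim}\nimport sympy as sp\n\nj, b = sp.symbols('j b')\nq0, p0, fx, fe = sp.symbols('q0 p0 fx fe')   # fx, fe = phi_xi, phi_eta\n\nA = j*(j - 3)*((b + 1)*fx**2 - b*fe**2)\nM = sp.Matrix([\n    [A, 0, 2*b*q0, -2*(b + 1)*q0],\n    [0, A, 2*b*p0, -2*(b + 1)*p0],\n    [(j - 2)*p0*fe, (j - 2)*q0*fe, -(j - 2)*fx, 0],\n    [(j - 2)*p0*fx, (j - 2)*q0*fx, 0, -(j - 2)*fe]])\n\ndet = sp.cancel(M.det().subs(q0, fx*fe\/p0))   # impose p0*q0 = fx*fe\nprint(sp.factor(det))\n# expected, up to ordering of factors:\n#   fx*fe*((b + 1)*fx**2 - b*fe**2)**2 * j*(j + 1)*(j - 2)**2*(j - 3)*(j - 4),\n# whose roots in j are the resonances -1, 0, 2, 2, 3, 4\n\\end{verbatim}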
The resonance at $j=0$, in turn, is associated with the arbitrariness of the functions $q_{0}, p_{0}, V_{0}$ or $U_{0}$ (cf. equation (15)). To prove the existence of arbitrary functions at the other resonance values $j=2,2,3,4$, we substitute the Laurent expansion (16) into equation (9).\n\nNow, gathering the coefficients of $(\\phi^{-2}, \\phi^{-2}, \\phi^{-2},\n\\phi^{-2})$, we obtain\n$$\n2[bV_{0} -(b+1)U_{0}]q_{1} +2bq_{0}V_{1} - 2(b+1)q_{0}U_{1} = A_{1} \\eqno(19a)\n$$\n$$\n2[bV_{0} -(b+1)U_{0}]p_{1} +2bp_{0}V_{1} - 2(b+1)p_{0}U_{1} = B_{1} \\eqno(19b)\n$$\n$$\np_{0} \\phi_{\\eta} q_{1} + q_{0} \\phi_{\\eta} p_{1} - \\phi_{\\xi} V_{1}\n= C_{1} \\eqno(19c)\n$$\n$$\np_{0} \\phi_{\\xi} q_{1} + q_{0} \\phi_{\\xi} p_{1} - \\phi_{\\eta} U_{1}\n= D_{1} \\eqno(19d)\n$$\nwhere\n$$\nA_{1} = iq_{0} \\phi_{t} + (b+1)[2q_{0 \\xi} \\phi_{\\xi} +\nq_{0} \\phi_{\\xi \\xi}] -b[2q_{0 \\eta} \\phi_{\\eta} +\nq_{0} \\phi_{\\eta \\eta}] \\eqno(20a)\n$$\n$$\nB_{1} = -ip_{0} \\phi_{t} + (b+1)[2p_{0 \\xi} \\phi_{\\xi} +\np_{0} \\phi_{\\xi \\xi}] -b[2p_{0 \\eta} \\phi_{\\eta} +\np_{0} \\phi_{\\eta \\eta}] \\eqno(20b)\n$$\n$$\nC_{1} =\\phi_{\\xi} \\phi_{\\eta \\eta} - \\phi_{\\eta} \\phi_{\\xi \\eta} \\eqno (20c)\n$$\n$$\nD_{1} = \\phi_{\\eta} \\phi_{\\xi \\xi} - \\phi_{\\xi} \\phi_{\\xi \\eta}. \\eqno(20d)\n$$\nThe solution of equation (19) has the form\n$$\nq_{1} = \\frac{iq_{0} \\phi_{t} +(b+1)[2q_{0 \\xi} \\phi_{\\xi} -\nq_{0} \\phi_{\\xi \\xi}]-b[2q_{0\\eta} \\phi_{\\eta}\n-q_{0} \\phi_{\\eta \\eta}]}{2[b \\phi_{\\eta}^{2} - (b+1) \\phi_{\\xi}^{2}]}\n\\eqno (21a)\n$$\n$$\np_{1} = \\frac{-ip_{0} \\phi_{t} +(b+1)[2p_{0 \\xi} \\phi_{\\xi} -\np_{0} \\phi_{\\xi \\xi}]-b[2p_{0\\eta} \\phi_{\\eta}\n-p_{0} \\phi_{\\eta \\eta}]}{2[b \\phi_{\\eta}^{2} - (b+1) \\phi_{\\xi}^{2}]}\n\\eqno (21b)\n$$\n$$\nV_{1} = - \\phi_{\\eta \\eta} \\eqno (21c)\n$$\n$$\nU_{1} = - \\phi_{\\xi \\xi}. \\eqno (21d)\n$$\nSimilarly, collecting the coefficients of $ (\\phi^{-1}, \\phi^{-1}, \\phi^{-1},\n\\phi^{-1} ),$ we obtain\n$$\n2[bV_{0} - (b+1)U_{0}]q_{2} + 2bq_{0}V_{2} -2(b+1)q_{0}U_{2} = A_{2}\n\\eqno (22a)\n$$\n$$\n2[bV_{0} - (b+1)U_{0}]p_{2} + 2bp_{0}V_{2} -2(b+1)p_{0}U_{2} = B_{2}\n\\eqno (22b)\n$$\n$$\nV_{1 \\xi} = (p_{0} q_{1} + p_{1} q_{0})_{\\eta} \\eqno (22c)\n$$\n$$\nU_{1 \\eta} = (p_{0} q_{1} + p_{1} q_{0})_{\\xi} \\eqno (22d)\n$$\nwhere\n$$\nA_{2} = -iq_{0t} +bq_{0 \\eta \\eta} -\n(b+1)q_{0 \\xi \\xi} -[2bV_{1} -2(b+1)U_{1}]q_{1} \\eqno (23a)\n$$\n$$\nB_{2} = ip_{0t} +bp_{0 \\eta \\eta} -\n(b+1)p_{0 \\xi \\xi} -[2bV_{1} -2(b+1)U_{1}]p_{1}. \\eqno (23b)\n$$\nEquations (22c,d) are identically satisfied.
So, we have\nonly two equations for the four unknown functions $q_{2}, p_{2},\nV_{2}, U_{2}$, i.e., two of them must be arbitrary.\n\nNow, collecting the coefficients of $ (\\phi^{0}, \\phi^{0}, \\phi^{0},\n\\phi^{0}),$ we have\n$$\n2[bV_{3} - (b+1)U_{3} ]q_{0} = A_{3} \\eqno (24a)\n$$\n$$\n2[bV_{3} - (b+1)U_{3} ]p_{0} = B_{3} \\eqno (24b)\n$$\n$$\n\\phi_{\\eta}(p_{3}q_{0} + p_{0} q_{3}) - V_{3} \\phi_{\\xi} = C_{3} \\eqno (24c)\n$$\n$$\n\\phi_{\\xi}(p_{3}q_{0} + p_{0} q_{3}) - U_{3} \\phi_{\\eta} = D_{3} \\eqno (24d)\n$$\nwith\n$$\nA_{3} = -iq_{1t} - iq_{2} \\phi_{t} -(b+1)[q_{1 \\xi \\xi} + 2q_{2 \\xi} \\phi_{\\xi} +\nq_{2} \\phi_{\\xi \\xi}]+b[q_{1 \\eta \\eta} + 2q_{2 \\eta} \\phi_{\\eta} +\nq_{2} \\phi_{\\eta \\eta}]\n$$\n$$\n-2b[V_{1}q_{2} +V_{2}q_{1}] +2(b+1)[U_{1}q_{2} + U_{2}q_{1}] \\eqno (25a)\n$$\n$$\nB_{3} = ip_{1t} + ip_{2} \\phi_{t} -(b+1)[p_{1 \\xi \\xi} + 2p_{2 \\xi} \\phi_{\\xi} +\np_{2} \\phi_{\\xi \\xi}]+b[p_{1 \\eta \\eta} + 2p_{2 \\eta} \\phi_{\\eta} +\np_{2} \\phi_{\\eta \\eta}]\n$$\n$$\n-2b[V_{1}p_{2} +V_{2}p_{1}] +2(b+1)[U_{1}p_{2} + U_{2}p_{1}] \\eqno (25b)\n$$\n$$\nC_{3}= V_{2\\xi} - [(p_{0}q_{2})_{\\eta}+(p_{1}q_{1})_{\\eta}+(p_{2}q_{0})_{\\eta} +\n(p_{2}q_{1} + p_{1}q_{2}) \\phi_{\\eta}] \\eqno (25c)\n$$\n$$\nD_{3}= U_{2\\eta} - [(p_{0}q_{2})_{\\xi}+(p_{1}q_{1})_{\\xi}+(p_{2}q_{0})_{\\xi} +\n(p_{2}q_{1} + p_{1}q_{2}) \\phi_{\\xi}]. \\eqno (25d)\n$$\n\nThis system can be reduced to three equations in the four unknown functions\n$q_{3}, p_{3}, V_{3}$ and $U_{3}$, from which it follows that one of these\nfunctions is arbitrary. Proceeding further\nto the coefficients of $(\\phi^{1}, \\phi^{1}, \\phi^{1}, \\phi^{1})$,\nwe have checked that one of the functions $q_{4}, p_{4}, V_{4}$ and $U_{4}$\nis arbitrary. Thus the general solution $(q, p, V, U) (\\xi, \\eta, t)$ of\nequation (9) admits the required number of arbitrary functions without\nthe introduction of any movable critical manifold, thereby passing\nthe Painlev\\'e test. Hence the ZE (9) is expected to be integrable.\n\n\\section{Bilinearization and B${\\ddot a}$cklund transformation}\n\nUsing the results of the previous section, we can investigate the other\nintegrability properties of equation (9). In particular,\nwe can construct B${\\ddot a}$cklund, Darboux\nand Miura transformations, Lax representations, the bilinear form, and\ndifferent types of solutions of the ZE. For example, to obtain\nthe B${\\ddot a}$cklund transformation of equation (9),\nwe truncate the Laurent series at the constant level term, that is\n$$\nq_{j-1} = p_{j-1} = V_{j} = U_{j} = 0, \\quad j \\geq 3. \\eqno(26)\n$$\nIn this case, from (16) we have\n$$\nq = q_{0}\\phi^{-1} + q_{1}, \\quad p = p_{0}\\phi^{-1} + p_{1} \\eqno(27a)\n$$\n$$\nV=V_{0}\\phi^{-2} +V_{1}\\phi^{-1} + V_{2}, \\quad U=U_{0}\\phi^{-2}+\nU_{1}\\phi^{-1}+U_{2} \\eqno(27b)\n$$\nwhere $(q, q_{1}), (p, p_{1}), (V, V_{2})$ and $(U, U_{2})$ satisfy equation\n(9) with $(q_{0}, p_{0}, V_{0}, U_{0})$ and $(V_{1}, U_{1})$ satisfying\nequations (15) and (21).
If we take the vacuum\nsolution $q_{1}=p_{1}=V_{2}=U_{2}=0$, then from the above\nB${\\ddot a}$cklund transformation (27) we have\n$$\nq = q_{0}\\phi^{-1} \\eqno (28a)\n$$\n$$\np = p_{0}\\phi^{-1} \\eqno (28b)\n$$\n$$\nV = V_{0}\\phi^{-2} + V_{1}\\phi^{-1} = - \\partial_{\\eta \\eta}\\log\\phi\n\\eqno (28c)\n$$\n$$\nU = U_{0}\\phi^{-2} + U_{1}\\phi^{-1} = - \\partial_{\\xi \\xi}\\log\\phi.\n\\eqno (28d)\n$$\nHence, using (9) and assuming that $\\phi$ is real, it follows that\n$$\n[iD_{t} + (b+1)D_{\\xi}^{2} - bD_{\\eta}^{2}]q_{0}\\circ \\phi = 0 \\eqno (29a)\n$$\n$$\n[iD_{t} - (b+1)D_{\\xi}^{2} + bD_{\\eta}^{2}]p_{0}\\circ \\phi = 0 \\eqno (29b)\n$$\n$$\nD_{\\xi}D_{\\eta} \\phi \\circ \\phi = -2p_{0}q_{0} \\eqno (29c)\n$$\nwhich is the desired Hirota bilinear form of equation (9). Note that the\nbilinear form of the ZE in its form (1) is given by\n$$\n[iD_{t} - 4(a^{2} -2ab -b)D_{x}^{2} - 4 \\alpha (b-a)D_{x} D_{y} -\n\\alpha^{2} D_{y}^{2} ](G \\circ \\phi) = 0 \\eqno (29e)\n$$\n$$\n[iD_{t} - 4(a^{2} -2ab -b)D_{x}^{2} - 4 \\alpha (b-a)D_{x} D_{y} -\n\\alpha^{2} D_{y}^{2} ](P\\circ \\phi) = 0 \\eqno (29f)\n$$\n$$\n[4a(a+1)D_{x}^{2} - 2\\alpha (2a+1)D_{x} D_{y} + \\alpha^{2} D_{y}^{2}]\n(\\phi \\circ \\phi) = -2PG \\eqno (29g)\n$$\nwhere\n$$\nq = \\frac{G}{\\phi}, \\quad p = \\frac{P}{\\phi} \\eqno(29h)\n$$\nwith\n$$\nv=2M_{2}\\log \\phi. \\eqno(29i)\n$$\nHereafter $(29)\\equiv (29a,b,c).$\n\n\\section{Simplest solutions}\n\nEquations (29) allow us to obtain interesting classes of\nsolutions of the ZE (9) [25]. Below we find some of the simplest solutions\nof equation (9) for $p=Eq^{*}$, $E=\\pm 1$, $\\alpha^{2}=1$.\nIn this case, the Hirota bilinear\nequations (29) take the form $(q_{0} \\equiv g)$\n$$\n[iD_{t} + (b+1)D_{\\xi}^{2} - bD_{\\eta}^{2}]g\\circ \\phi = 0 \\eqno (30a)\n$$\n$$\nD_{\\xi}D_{\\eta} \\phi \\circ \\phi = -2Egg^{*}. \\eqno (30b)\n$$\n\n\n\\subsection{The 1-soliton solution}\n\nThe construction of the soliton solutions is standard.\nOne expands the functions $g$ and $\\phi$ as series in $\\epsilon$:\n$$\ng = \\epsilon g_{1} + \\epsilon^{3} g_{3} + \\epsilon^{5}g_{5} +\n\\cdots \\eqno(31a)\n$$\n$$\n\\phi =1+\\epsilon^2 \\phi_2+\\epsilon^4 \\phi_4+\\epsilon^6 \\phi_6 + \\cdots.\n\\eqno(31b)\n$$\n\nSubstituting these expansions into (30) and equating the coefficients\nof like powers of $\\epsilon$, in the 1-SS case one obtains the following system\nof equations:\n$$\n\\epsilon^1: \\quad [iD_{t} + (b+1)D_{\\xi}^{2} - bD_{\\eta}^{2}]g_{1} \\circ 1 = 0\n\\eqno (32a)\n$$\n$$\n\\epsilon^3: \\quad [iD_{t} + (b+1)D_{\\xi}^{2} - bD_{\\eta}^{2}]g_{1} \\circ \\phi_{2}\n= 0 \\eqno (32b)\n$$\n$$\n\\epsilon^{2}: \\quad D_{\\xi}D_{\\eta} (1\\circ \\phi_{2} + \\phi_{2} \\circ 1) = -2Eg_{1}g_{1}^{*} \\eqno (32c)\n$$\n$$\n\\epsilon^{4}: \\quad D_{\\xi}D_{\\eta} (\\phi_{2} \\circ \\phi_{2}) = 0. \\eqno (32d)\n$$\nUsing these equations we can construct the exact 1-SS of equation (9).\nTo this end, we take the ansatz\n$$\ng_1= \\exp{\\chi_1} \\eqno (33)\n$$\nwhere\n$$\n\\chi_1 = p_1\\xi + s_1\\eta + c_1t + e_1, \\quad p_{1} = p_{1R} + ip_{1I},\n\\quad s_{1}=s_{1R}+is_{1I}. \\eqno(34)\n$$\nSubstituting (33) into (32a), we obtain\n$$\nc_1=i[(b+1)p^{2}_{1} - bs^{2}_{1}]. \\eqno (35)\n$$\n\nThe expression for $\\phi_2$ is obtained from (32c):\n$$\n\\phi_2=\\exp{(\\chi_1+\\chi_1^* + 2\\psi)} \\eqno(36)\n$$\nwith\n$$\n\\exp(2\\psi) = -\\frac{E}{4p_{1R}s_{1R}}. \\eqno(37)\n$$\nEquations (32b,d) are identically satisfied.
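Indeed, for the ansatz (36) one has $D_{\\xi}D_{\\eta}(1\\circ \\phi_{2} + \\phi_{2} \\circ 1) = 2\\phi_{2 \\xi \\eta} = 8p_{1R}s_{1R}\\phi_{2}$, so that (32c) reduces to\n$$\n8p_{1R}s_{1R}\\exp(2\\psi)\\exp(\\chi_{1}+\\chi_{1}^{*}) = -2E\\exp(\\chi_{1}+\\chi_{1}^{*}),\n$$\nwhich fixes the phase constant $\\psi$ as in (37).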
Finally, from (28), (33) and (36),\nwe get the 1-SS of equation (9)\nin the form\n$$\nq(\\xi, \\eta, t) = \\frac{1}{2}\\exp(-\\psi)\\,{\\rm sech}(\\chi_{1R} +\n\\psi)\\exp(i\\chi_{1I}) \\eqno (38a)\n$$\n$$\nV(\\xi, \\eta, t) = -s_{1R}^{2}\\,{\\rm sech}^{2}(\\chi_{1R} +\\psi) \\eqno (38b)\n$$\n$$\nU(\\xi, \\eta, t) = -p_{1R}^{2}\\,{\\rm sech}^{2}(\\chi_{1R} +\\psi) \\eqno (38c)\n$$\nand for the hybrid potential\n$$\nv(\\xi, \\eta, t) = 2[(b+1)p^{2}_{1R}-bs_{1R}^{2}]\\,{\\rm sech}^{2}(\\chi_{1R} +\\psi)\n\\eqno (38d)\n$$\nwhere $\\chi_{1R} = {\\rm Re}\\, \\chi_{1} = p_{1R}\\xi + s_{1R}\\eta -\n2[(b+1)p_{1R}p_{1I}-bs_{1R}s_{1I}]t + e_{1R}.$ This algebra can be\nused to construct $N$-line soliton solutions as well. As shown in [3],\nthe above 1-SS reveals the fact that\n$$\nq\\rightarrow 0, \\quad U\\rightarrow 0, \\quad\nV = -s_{1R}^{2}\\,{\\rm sech}^{2}(\\chi^{\\prime}_{1R} +\\psi^{\\prime})\\rightarrow\nv_{1}(\\eta, t) \\quad {\\rm as} \\quad p_{1R} \\rightarrow 0 \\eqno (39)\n$$\nwhere $\\chi^{\\prime}_{1R} = s_{1R}[\\eta + 2bs_{1I}t] + e_{1R}$ and $\n\\psi^{\\prime}$ is a new phase constant. Similarly, we have\n$$\nq\\rightarrow 0, \\quad V\\rightarrow 0, \\quad\nU = -p_{1R}^{2}\\,{\\rm sech}^{2}(\\chi^{\\prime \\prime}_{1R} +\n\\psi^{\\prime \\prime})\\rightarrow\nu_{1}(\\xi, t) \\quad {\\rm as} \\quad s_{1R} \\rightarrow 0 \\eqno (40)\n$$\nwhere $\n\\chi^{\\prime \\prime}_{1R} = p_{1R}[\\xi - 2(b+1)p_{1I}t] + e_{1R}$ and $\n\\psi^{\\prime\\prime}$ is another phase constant.\nThus, as in [3], the solution is composed of\ntwo ghost solitons $v_{1}(\\eta, t)$ and $u_{1}(\\xi, t)$ driving the\npotentials $V$ and $U$, respectively, in the absence of the physical field $q$.\nNote that these results are the same as in [3].\n\n\\subsection{The (1,1)-dromion solution}\n\n\nLet us now construct a dromion solution of the ZE. For example,\nto get a simple (1,1)-dromion solution, following Radha and Lakshmanan (see,\ne.g., [3]), we take the ansatz\n$$\ng_{11D}=\\rho\\exp(\\chi_{1} + \\chi_{2}) \\eqno (41a)\n$$\n$$\n\\phi_{11D} = 1+ j\\exp(\\chi_{1} + \\chi_{1}^{*}) + k\\exp(\\chi_{2} + \\chi_{2}^{*})\n + l\\exp(\\chi_{1} + \\chi_{1}^{*} +\\chi_{2} + \\chi_{2}^{*}) \\eqno (41b)\n$$\nwhere $j, k, l$ are real positive constants and\n$$\n\\chi_{1} = p_{1} \\xi + i(b+1)p_{1}^{2}t + \\chi_{1}^{0}, \\quad\n\\chi_{2} = s_{2} \\eta - ibs_{2}^{2}t + \\chi_{2}^{0}. \\eqno (42)\n$$\nHere $p_{1}=p_{1R} + ip_{1I}, s_{2} =s_{2R}+is_{2I}$ are complex constants.\nSubstituting (41) into (30), we get the following conditions\n$$\n|\\rho|^{2} = 4p_{1R}s_{2R}(jk-l)\\/E, \\quad\n(l-jk)\\exp(-2\\psi)>0. \\eqno (43)\n$$\nFinally, from (41) and (28), we obtain the (1,1)-dromion solution\nin the form\n$$\nq_{11D} = \\frac{g_{11D}}{\\phi_{11D}}, \\quad V_{11D} = -\n\\partial_{\\eta \\eta}\\log \\phi_{11D},\n\\quad U_{11D} =-\\partial_{\\xi\\xi} \\log \\phi_{11D} \\eqno(44a)\n$$\nor\n$$\nq_{11D} = \\frac{\\rho\\exp(\\chi_{1} + \\chi_{2})}{1+j\\exp(\\chi_{1}+\n\\chi_{1}^{*}) + k\\exp(\\chi_{2}+\\chi_{2}^{*}) +\nl\\exp(\\chi_{1} + \\chi_{1}^{*} + \\chi_{2} + \\chi_{2}^{*})}\n\\eqno(44b)\n$$\n$$\nV_{11D} = \\frac{-4s_{2R}^{2}\\exp(2\\chi_{2R})[k+l\\exp(2\\chi_{1R})][1\n+j\\exp(2\\chi_{1R})]}{[1+j\\exp(2\\chi_{1R}) +k\\exp(2\\chi_{2R}) +\nl\\exp(2(\\chi_{1R}+\\chi_{2R}))]^{2}} \\eqno(44c)\n$$\n$$\nU_{11D} = \\frac{-4p_{1R}^{2}\\exp(2\\chi_{1R})[j+l\\exp(2\\chi_{2R})][1\n+k\\exp(2\\chi_{2R})]}{[1+j\\exp(2\\chi_{1R}) +k\\exp(2\\chi_{2R})\n +l\\exp(2(\\chi_{1R}+\\chi_{2R}))]^{2}}.
\\eqno(44d)\n$$\nFrom the last two equations, we can get the expression for the hybrid\npotential $v$ in (9):\n$$\nv_{11D} = 2bV_{11D} - 2(b+1)U_{11D}. \\eqno(44e)\n$$\n\n\n\\subsection{The 1-rational solution}\nIn this subsection, we present the simplest 1-rational\nsolution of equation (9). Let $g_{1}=b_{0}={\\rm const}$. Then, from (32) we get\n$$\n\\phi_{2} = -E|b_{0}|^{2}\\xi\\eta + b_{1}\\eta, \\quad b_{1}={\\rm const}. \\eqno(45)\n$$\nSo, the 1-rational solution has the form\n$$\nq = \\frac{b_{0}}{1-E|b_{0}|^{2}\\xi\\eta + b_{1}\\eta} \\eqno (46a)\n$$\n$$\nV = \\left[\\frac{b_{1}-E|b_{0}|^{2}\\xi}{1-E|b_{0}|^{2}\\xi\\eta +\n b_{1}\\eta}\\right]^{2} \\eqno (46b)\n$$\n$$\nU = \\left[\\frac{|b_{0}|^{2}\\eta}{1-E|b_{0}|^{2}\\xi\\eta +\n b_{1}\\eta}\\right]^{2}. \\eqno (46c)\n$$\nNote that in this case, we have the following boundary conditions\n$$\n(q, U, V)|_{\\xi = \\pm \\infty} = (0, 0, v_{2}(\\eta)), \\quad v_{2}(\\eta)=\\frac{1}{\\eta^{2}}\n \\eqno (47a)\n$$\nand\n$$\n(q, U, V)|_{\\eta = \\pm \\infty} = (0, u_{2}(\\xi), 0), \\quad u_{2}(\\xi)=\\frac{1}{\\xi^{2}}.\n \\eqno (47b)\n$$\n\n\\section{A connection between the ZE and the FE}\n\nLet us now consider the FE [4]\n$$\niq_t - (\\gamma - \\beta)q_{\\xi \\xi } + (\\gamma + \\beta)\nq_{\\eta \\eta } - 2\\lambda q[(\\gamma +\n\\beta)(\\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime}\n$$\n$$\n+ v_{1}(\\eta, t))\n- (\\gamma - \\beta)(\\int^{\\eta}_{-\\infty}(pq)_{\\xi} d\\eta^{\\prime}\n+ v_{2}(\\xi,t))] =0 \\eqno(48a)\n$$\n$$\nip_t + (\\gamma - \\beta)p_{\\xi \\xi } - (\\gamma + \\beta)\np_{\\eta \\eta } + 2\\lambda p[(\\gamma +\n\\beta)(\\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime}\n$$\n$$\n+ v_{1}(\\eta, t))\n- (\\gamma - \\beta)(\\int^{\\eta}_{-\\infty}(pq)_{\\xi} d\\eta^{\\prime}\n+ v_{2}(\\xi,t))] =0 \\eqno(48b)\n$$\nwith $p=\\bar q$. In contrast to equation (9), in this case\n$\\xi, \\eta$ are the characteristic coordinates defined by\n$$\n\\xi = x+y, \\quad \\eta = x-y. \\eqno(49)\n$$\n\nThis equation also contains several interesting particular cases. Let\nus recall these cases.\n\n(i) $\\gamma = \\beta = \\frac{1}{2}, v_{1} = v_{2} =0$ yields the equation\n$$\niq_t +q_{\\eta\\eta} - 2\\lambda q\\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime}\n= 0, \\quad \\lambda = \\pm 1. \\eqno(50)\n$$\nAs noted by Fokas, equation (50) is perhaps the simplest complex\nscalar equation in 2+1 dimensions which can be solved by the IST method.\nIt is also worth pointing out that when $x=y$ this equation reduces\nto the (1+1)-dimensional integrable NLSE.\n\n(ii) $\\gamma = 0, \\beta = 1$ yields the celebrated DSI equation\n$$\niq_t + q_{\\xi \\xi } + q_{\\eta \\eta }\n- 2\\lambda q[(\\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime} +\nv_{1}(\\eta, t))\n+(\\int^{\\eta}_{-\\infty}(pq)_{\\xi} d\\eta^{\\prime}\n+ v_{2}(\\xi,t))] =0. \\eqno(51)\n$$\nThis equation has the Painlev\\'e property and admits exponentially\nlocalized solutions, including dromions, for nonvanishing boundaries.\n\n(iii) $\\gamma = 1, \\beta =0$ yields the DSIII equation\n$$\niq_t - q_{\\xi \\xi } + q_{\\eta \\eta }\n- 2\\lambda q[(\\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime} +\nv_{1}(\\eta, t))\n-(\\int^{\\eta}_{-\\infty}(pq)_{\\xi} d\\eta^{\\prime}\n+ v_{2}(\\xi,t))] =0. \\eqno(52)\n$$\nEquation (52) also supports certain localized solutions.\n\nNow we return to the ZE (9) and make the simplest scaling\ntransformation: from $(t, \\xi, \\eta, q, p, V, U)$\nto $(Ft, C\\xi, D\\eta, Aq, Bp, HV, EU)$.
Then, for example, equation (9)\ntakes the form\n$$\niq_t - (\\gamma - \\beta)q_{\\xi \\xi } + (\\gamma + \\beta)\nq_{\\eta \\eta } -2\\lambda [(\\gamma+\\beta)V-(\\gamma-\\beta)U]q = 0 \\eqno(53a)\n$$\n$$\nip_t + (\\gamma - \\beta)p_{\\xi \\xi } - (\\gamma + \\beta)\np_{\\eta\\eta } +2\\lambda [(\\gamma+\\beta)V-(\\gamma-\\beta)U]p = 0 \\eqno(53b)\n$$\n$$\nV_{\\xi} = (pq)_{\\eta} \\eqno (53c)\n$$\n$$\nU_{\\eta} = (pq)_{\\xi} \\eqno (53d)\n$$\nwhere\n$$\n\\lambda =\\frac{ABCD}{F},\\quad F= \\frac{\\beta - \\gamma}{1+b}C^{2}, \\quad\nD^{2}=\\frac{b(\\gamma-\\beta)}{(1+b)(\\gamma+\\beta)}C^{2},\n$$\n$$\n\\gamma = -\\frac{1}{2}F[(b+1)D^{2}+bC^{2}]C^{-2}D^{-2}, \\quad\n\\beta = \\frac{1}{2}F[(b+1)D^{2}-bC^{2}]C^{-2}D^{-2}.\n$$\n\nFrom (53c,d), we get\n$$\nV = \\int^{\\xi}_{-\\infty}(pq)_{\\eta}d\\xi^{\\prime} + v_{1} (\\eta, t) \\eqno (54a)\n$$\n$$\nU = \\int^{\\eta}_{-\\infty}(pq)_{\\xi}d\\eta^{\\prime} + v_{2}(\\xi,t). \\eqno (54b)\n$$\nSubstituting (54) into (53a,b), we obtain the FE (48).\nThus, we have proved that the ZE and the FE are equivalent to\neach other when $\\alpha^{2}=1$ and $a=-\\frac{1}{2}$.\nIn particular, this explains why the ZE contains the DSII equation\nwhile the FE does not.\nRecently it was proved by\nRadha and Lakshmanan [3] that the FE (48) satisfies the Painlev\\'e property and\nhence it is expected to be integrable. From these results it follows that\nthe ZE also satisfies the Painlev\\'e property and is integrable.\n\n\\section{Associated integrable spin systems}\n\nIn this section, we wish to present, in brief form, the spin\ncounterparts of the ZE and its reductions.\nIt is well known that the ZE (1) is gauge equivalent to the Myrzakulov\nIX (M-IX) equation [16]\n$$\niS_t + \\frac{1}{2}[S, M_1 S] + A_2 S_x + A_1 S_y = 0 \\eqno(55a)\n$$\n$$\nM_2u = \\frac{\\alpha^{2}}{2i}tr( S[ S_x , S_y]) \\eqno(55b)\n$$\nwhere $\\alpha, a, b$ are constants,\n$$\nS= \\pmatrix{\nS_3 & rS^- \\cr\nrS^+ & -S_3\n},\\quad S^{\\pm}=S_{1}\\pm iS_{2}, \\quad S^2 = EI,\\quad E = \\pm 1,\n\\quad r^{2}=\\pm 1,\n$$\n$$\nA_1=i\\{\\alpha (2b+1)u_y - 2(2ab+a+b)u_{x}\\},\n$$\n$$\nA_2=i\\{4\\alpha^{-1}(2a^2b+a^2+2ab+b)u_x - 2(2ab+a+b)u_{y}\\}.\n$$\nThis equation is integrable and also admits several integrable reductions.\nHere are some of them:\n\n(i) {\\it The Myrzakulov VIII (M-VIII) equation}. First, let us consider the\nreduction of the M-IX equation (55)\nwhen $a=b=-1$.\nWe have [16]\n$$\niS_t+\\frac{1}{2}[S,S_{YY}]+iwS_Y = 0 \\eqno(56a)\n$$\n$$\nw_{X} + w_{Y} + \\frac{1}{4i}tr(S[S_X,S_Y]) = 0 \\eqno(56b)\n$$\nwhere $X=x\\/2, \\quad Y = y\\/\\alpha , \\quad w = - \\alpha^{-1}u_{Y}$.\n\n(ii) {\\it The Ishimori equation.} Now let $a=b=-\\frac{1}{2}$. Then equation (55) reduces to the well-known\nIshimori equation [15]\n$$\niS_t+\\frac{1}{2}[S,(S_{xx}+\\alpha^{2}S_{yy})]+\niu_{y}S_x+iu_{x}S_y = 0 \\eqno(57a)\n$$\n$$\n\\alpha^{2}u_{yy} - u_{xx}= \\frac{\\alpha^{2}}{2i}tr(S[S_x,S_y]). \\eqno(57b)\n$$\n\n(iii) {\\it The Myrzakulov XXXIV (M-XXXIV) equation.} This equation has\nthe form\n$$\niS_t+\\frac{1}{2}[S,S_{YY}]+iwS_Y = 0 \\eqno(58a)\n$$\n$$\nw_{t} + w_{Y} + \\frac{1}{4}\\{tr(S^{2}_{Y})\\}_{Y} = 0. \\eqno(58b)\n$$\n\nThe M-XXXIV equation (58)\nwas proposed in [16] to describe the nonlinear dynamics of\ncompressible magnets. It is integrable and possesses various soliton\nsolutions [24].\n\n(iv) {\\it The Myrzakulov XVIII (M-XVIII) equation.}\nNow we consider the reduction $a=-\\frac{1}{2}$.
Then the equation (55)\nreduces to the M-XVIII equation [16]\n$$\niS_t+\\frac{1}{2}[S, S_{xx} + 2\\alpha(2b+1)S_{xy}+\\alpha^{2}S_{yy}]+\nA^{\\prime}_{2}S_x+A^{\\prime}_{1}S_y = 0 \\eqno(59a)\n$$\n$$\n\\alpha^{2}u_{yy} - u_{xx}=\\frac{\\alpha^{2}}{2i}tr(S[S_{x},S_{y}]) \\eqno(59b)\n$$\nwhere $A^{\\prime}_{j} = A_{j}$ as $a=-\\frac{1}{2}$.\n\n(v) {\\it The Myrzakulov XIX (M-XIX) equation.}\nLet us consider the case $a = b$. Then we obtain the M-XIX equation [16]\n$$\niS_t + \\frac{1}{2} [S, \\alpha^{2} S_{yy} - 4a(a+1) S_{xx}]\n+ A_{2}^{\\prime \\prime } S_x + A_{1}^{\\prime \\prime } S_y = 0 \\eqno(60a)\n$$\n$$\nM_{2} u = \\frac{\\alpha^{2}}{2i} tr( S [S_{x}, S_{y}]) \\eqno(60b)\n$$\nwhere $A_{j}^{\\prime \\prime} = A_{j}$ as $a = b$.\n\n(vi) {\\it The Myrzakulov XX (M-XX) equation.} This equation has the form [16]\n$$\niS_t + \\frac{1}{2}[S,(b+1) S_{\\xi \\xi} -bS_{\\eta \\eta}] +\nibw_{\\eta} S_{\\eta} + i(b+1)w_{\\xi}S_{\\xi} = 0 \\eqno(61a)\n$$\n$$\nw_{\\xi \\eta} = \\frac{1}{4i}tr(S[S_{\\xi},S_{\\eta}]) \\eqno(61b)\n$$\nand so on [16]. The gauge equivalent counterparts of equations (56),\n(57), (58) and (61) are the equations (3), (2), (4) and (11), respectively.\nNote that from (61) as $b=0$, we get the M-VIII equation in the following\nform [16]\n$$\niS_t + \\frac{1}{2}[S, S_{\\xi \\xi}] + iw_{\\xi}S_{\\xi} = 0 \\eqno(62a)\n$$\n$$\nw_{\\xi \\eta} = \\frac{1}{4i}tr(S[S_{\\xi},S_{\\eta}]) \\eqno(62b)\n$$\nthe gauge equivalent of which is the equation (12). If we put $\\eta = t$,\nthen equations (62) and (12) take the forms\n$$\niS_t + \\frac{1}{2}[S, S_{\\xi\\xi}]+iwS_{\\xi} = 0 \\eqno (63a)\n$$\n$$\nw_{t} + \\frac{1}{4}(trS^{2}_{\\xi})_{\\xi} = 0 \\eqno (63b)\n$$\nand\n$$\niq_{t}+q_{\\xi \\xi}+vq=0 \\eqno(64a)\n$$\n$$\nv_{t} + 2r^{2}(\\bar q q)_{\\xi}=0. \\eqno(64b)\n$$\nEquation (63) is the equivalent form of the M-XXXIV equation. At the\nsame time, its gauge equivalent (64) is the Ma equation [20],\nwhich is also the\nequivalent form of the YOE (4).\n\nNote that these spin systems admit different types of solutions (see,\ne.g., [21-25]).\n\\section{Conclusion}\n\nWe have investigated the integrability aspects of the (2+1)-dimensional\nZE by the singularity structure analysis and shown that it passes the\nPainlev\\'e test. We have also derived its bilinear form directly from\nthe Painlev\\'e analysis. We have then generated the simplest 1-SS using\nthe Hirota method. We have also constructed the (1,1)-dromion solution.\nFinally, in the last section we have presented the associated\nintegrable spin systems, which are gauge equivalent counterparts of the ZE and\nits reductions. Here, we would like to note that the so-called Lakshmanan\nequivalence can hold between these spin systems and the NLSE-type equations.\nWe will consider this problem in detail elsewhere (see, e.g.,\n[26-29]).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Additional Details on FixupResNet20}\n\nFixupResNet20 \\cite{zhang2019fixup} is obtained from the popular ResNet20 \\cite{he2016deep} by removing the BatchNorm layers \\cite{ioffe2015batch}. BatchNorm layers normalize hidden-layer outputs using the mean and variance computed from the data fed into the model. In our experiment, the data on nodes are heterogeneous.
If the models included BatchNorm layers, then even if all nodes had the same model parameters after training, their testing performance on the whole data would differ from node to node, because the mean and variance used in the hidden layers are computed from the heterogeneous local data. Thus we use FixupResNet20 instead of ResNet20. \n\n\n\n\\section{Some Key Existing Lemmas}\n\nFor an $L$-smooth function $f_i$, it holds for any ${\\mathbf{x}}, {\\mathbf{y}}\\in\\dom(r)$,\n\\begin{align}\\label{eq:assump-to-f_i}\n\\textstyle \\big|f_i({\\mathbf{y}}) - f_i({\\mathbf{x}}) - \\langle \\nabla f_i({\\mathbf{x}}), {\\mathbf{y}}-{\\mathbf{x}}\\rangle\\big| \\le \\frac{L}{2}\\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2.\n\\end{align} \n\nFrom the smoothness of each $f_i$ in Assumption \\ref{assu:prob}, it follows that $f = \\frac{1}{n}\\sum_{i=1}^n f_i$ is also $L$-smooth in $\\dom(r)$.\n\nWhen $f_i$ is $L$-smooth in $\\dom(r)$, we have that $f_i(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex. \nSince $r(\\cdot)$ is convex, $\\phi_i(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex, i.e., $\\phi_i$ is $L$-weakly convex for each $i$. So is $\\phi$. In the following, we give some lemmas about weakly convex functions.\n\nThe following result is from Lemma II.1 in \\cite{chen2021distributed}.\n \\begin{lemma}\\label{lem:weak_convx}\n For any function $\\psi$ on $\\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\\psi(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex, then for any ${\\mathbf{x}}_1, {\\mathbf{x}}_2, \\ldots, {\\mathbf{x}}_m\\in\\mathbb{R}^d$, it holds that \n\\[\n\\psi\\left(\\sum_{i=1}^m a_i{\\mathbf{x}}_i\\right)\\leq \\sum_{i=1}^m a_i \\psi({\\mathbf{x}}_i) + \\frac{L}{2} \\sum_{i=1}^{m-1} \\sum_{j=i+1}^m a_i a_j \\|{\\mathbf{x}}_i-{\\mathbf{x}}_j\\|^2,\n\\]\nwhere $a_i\\geq 0$ for all $i$ and $\\sum_{i=1}^m a_i=1$.\n\\end{lemma} \n\nThe first result below is from Lemma II.8 in \\cite{chen2021distributed}, and the nonexpansiveness of the proximal mapping of a closed convex function is well known. \n\\begin{lemma} \\label{lem:prox_diff} \nFor any function $\\psi$ on $\\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\\psi(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex, then the proximal mapping with $\\lambda< \\frac{1}{L}$ satisfies \n\\[\n\\|\\prox_{\\lambda \\psi}({\\mathbf{x}}_1)-\\prox_{\\lambda \\psi}({\\mathbf{x}}_2)\\|\\leq \\frac{1}{1-\\lambda L} \\|{\\mathbf{x}}_1-{\\mathbf{x}}_2\\|.\n\\]\nFor a closed convex function $r(\\cdot)$, its proximal mapping is nonexpansive, i.e., \n\\[\n\\|\\prox_{r}({\\mathbf{x}}_1)-\\prox_{r}({\\mathbf{x}}_2)\\|\\leq \\|{\\mathbf{x}}_1-{\\mathbf{x}}_2\\|.\n\\]\n\\end{lemma}\n\n\\begin{lemma}\nFor both $\\mathrm{DProxSGT}$ in Algorithm \\ref{alg:DProxSGT} and $\\mathrm{CDProxSGT}$ in Algorithm \\ref{alg:CDProxSGT}, we have\n\\begin{gather}\n\t\\bar{\\mathbf{y}}^t =\\overline{\\nabla} \\mathbf{F}^t, \\quad\n\t\\bar{\\mathbf{x}}^{t} = \\bar{\\mathbf{x}}^{t+\\frac{1}{2}} = \\frac{1}{n} \\sum_{i=1}^n \\prox_{\\eta r}\\left({\\mathbf{x}}_i^t - \\eta {\\mathbf{y}}_i^{t}\\right). 
\\label{eq:x_y_mean}\n\\end{gather} \n\\end{lemma}\n\\begin{proof}\nFor DProxSGT in Algorithm \\ref{alg:DProxSGT}, taking the average over the workers of \\eqref{eq:y_half_update} to \\eqref{eq:x_1_update} gives \n\\begin{align}\n\\bar{\\mathbf{y}}^{t-\\frac{1}{2}} = \\bar{\\mathbf{y}}^{t-1} + \\overline{\\nabla} \\mathbf{F}^t - \\overline{\\nabla} \\mathbf{F}^{t-1}, \\quad\n \\bar{\\mathbf{y}}^t =\\bar{\\mathbf{y}}^{t-\\frac{1}{2}}, \\quad\n \\bar{\\mathbf{x}}^{t+\\frac{1}{2}} = \\frac{1}{n} \\sum_{i=1}^n \\prox_{\\eta r}\\left({\\mathbf{x}}_i^t - \\eta {\\mathbf{y}}_i^{t}\\right), \\quad \\bar{\\mathbf{x}}^{t} = \\bar{\\mathbf{x}}^{t+\\frac{1}{2}},\\label{eq:proof_mean}\n\\end{align}\nwhere $\\mathbf{1}^\\top\\mathbf{W}=\\mathbf{1}^\\top$ follows from Assumption \\ref{assu:mix_matrix}. With $\\bar{\\mathbf{y}}^{-1}=\\overline{\\nabla} \\mathbf{F}^{-1}$, we have \\eqref{eq:x_y_mean}.\n\nSimilarly, for CDProxSGT in Algorithm \\ref{alg:CDProxSGT}, \ntaking the average of \\eqref{eq:alg3_1_matrix} to \\eqref{eq:alg3_6_matrix} \nalso gives \\eqref{eq:proof_mean} and \\eqref{eq:x_y_mean}.\n\\end{proof}\n\nIn the rest of the analysis, we define the Moreau envelope of $\\phi$ for $\\lambda\\in(0,\\frac{1}{L})$ as \\begin{align*}\n\\phi_\\lambda({\\mathbf{x}}) = \\min_{\\mathbf{y}}\\left\\{\\phi({\\mathbf{y}}) + \\frac{1}{2\\lambda}\\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2\\right\\}. \n\\end{align*}\nDenote the minimizer as \n \\begin{align*}\n \\prox_{\\lambda \\phi}({\\mathbf{x}}):= \\argmin_{{\\mathbf{y}}} \\phi({\\mathbf{y}})+\\frac{1}{2\\lambda} \\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2.\n \\end{align*}\nIn addition, we will use the notation $\\widehat{{\\mathbf{x}}}^t_i$ and $\\widehat{{\\mathbf{x}}}^{t+\\frac{1}{2}}_i$ defined by\n\\begin{align}\n\\widehat{{\\mathbf{x}}}^t_i = \\prox_{\\lambda \\phi}({\\mathbf{x}}^t_i),\\ \\widehat{{\\mathbf{x}}}^{t+\\frac{1}{2}}_i = \\prox_{\\lambda \\phi}({\\mathbf{x}}^{t+\\frac{1}{2}}_i),\\, \\forall\\, i\\in\\mathcal{N},\n\\label{eq:x_t_hat} \n\\end{align}\nwhere $\\lambda \\in(0,\\frac{1}{L})$.\n\n\n\\section{Convergence Analysis for CDProxSGT} \\label{sec:proof_CDProxSGT}\nIn this section, we analyze the convergence rate of CDProxSGT. Similar to the analysis of DProxSGT, we establish a Lyapunov function that involves consensus errors and the Moreau envelope. But due to the compression, the compression errors $\\|\\mathbf{X}^t-\\underline\\mathbf{X}^t\\|$ and $\\|\\Y^t-\\underline\\Y^t\\|$ will occur.
Hence, we will also include the two compression errors in our Lyapunov function.\n\n\nAgain, we can equivalently write a matrix form of the updates \n\\eqref{eq:alg3_1}-\\eqref{eq:alg3_6} in \nAlgorithm \\ref{alg:CDProxSGT} as follows: \n\\begin{gather}\n \\Y^{t-\\frac{1}{2}} = \\Y^{t-1} + \\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}, \\label{eq:alg3_1_matrix}\\\\\n \\underline\\Y^{t} = \\underline\\Y^{t-1} + Q_{\\mathbf{y}}\\big[\\Y^{t-\\frac{1}{2}} - \\underline\\Y^{t-1}\\big], \\label{eq:alg3_2_matrix}\\\\\n \\Y^{t} = \\Y^{t-\\frac{1}{2}} +\\gamma_y \\underline\\Y^{t}(\\mathbf{W}-\\mathbf{I}), \\label{eq:alg3_3_matrix}\\\\\n \\mathbf{X}^{t+\\frac{1}{2}} =\\prox_{\\eta r} \\left(\\mathbf{X}^t - \\eta \\Y^{t}\\right), \\label{eq:alg3_4_matrix}\\\\\n \\underline\\mathbf{X}^{t+1} = \\underline\\mathbf{X}^{t} + Q_{\\mathbf{x}}\\big[\\mathbf{X}^{t+\\frac{1}{2}} - \\underline\\mathbf{X}^{t}\\big], \\label{eq:alg3_5_matrix}\\\\\n \\mathbf{X}^{t+1} = \\mathbf{X}^{t+\\frac{1}{2}}+\\gamma_x\\underline\\mathbf{X}^{t+1}(\\mathbf{W}-\\mathbf{I}).\\label{eq:alg3_6_matrix}\n\\end{gather} \nWhen we apply the compressor to a column-concatenated matrix as in \\eqref{eq:alg3_2_matrix} and \\eqref{eq:alg3_5_matrix}, it means applying the compressor to each column separately, i.e.,\n$Q_{\\mathbf{x}}[\\mathbf{X}] = [Q_x[{\\mathbf{x}}_1],Q_x[{\\mathbf{x}}_2],\\ldots,Q_x[{\\mathbf{x}}_n]]$. \n\n\nBelow we first analyze the progress made by the half-step updates of $\\Y$ and $\\mathbf{X}$ from $t+1\\/2$ to $t+1$ in Lemmas \\ref{lem:prepare_comp_y} and \\ref{lem:Xhat_Xhalf_comp}. Then we bound the one-step consensus error and compression error for $\\mathbf{X}$\nin Lemma \\ref{lem:X_consensus_comperror} and for $\\Y$\nin Lemma \\ref{lem:Y_consensus_comperror}. The bound on $\\mathbb{E}[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})]$ after one step\nis given in Lemma \\ref{lem:phi_one_step}. Finally, we prove Theorem \\ref{thm:sect3thm} by building a Lyapunov function that involves all five terms.\n\n\\begin{lemma} \\label{lem:prepare_comp_y} It holds that\n\\begin{align}\n \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] \\leq &~2 \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \n 6 \\alpha^2 n \\sigma^2 + 4 \\alpha^2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big], \\label{eq:2.3.2_1} \\\\\n \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] \\leq &~\\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{6 n \\sigma^2}{1-\\alpha^2} + \\frac{4 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big]. 
\\label{eq:2.3.2} \n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFrom \\eqref{eq:alg3_1} and \\eqref{eq:alg3_2}, we have\n\\begin{align}\n &~ \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] = \\mathbb{E}\\big[\\mathbb{E}_Q\\big[\\|Q_{\\mathbf{y}}\\big[\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t}\\big]- (\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t})\\|^2\\big]\\big] \\nonumber\\\\\n \\leq &~ \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t}\\|^2\\big] = \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t} +\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big]\\nonumber\\\\\n \\leq &~ \\alpha^2(1+\\alpha_0)\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_0^{-1})\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\alpha^2(1+\\alpha_0)\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_0^{-1}) \n \\left(3 n \\sigma^2 + 2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big]\\right), \\label{eq:2.3.2_0}\n\\end{align}\nwhere the first inequality holds by Assumption \\ref{assu:compressor}, \n$\\alpha_0$ can be any positive number,\nand the last inequality holds by \\eqref{eq:y_cons12} which still holds for CDProxSGT. Taking $\\alpha_0=1$ in \\eqref{eq:2.3.2_0} gives \\eqref{eq:2.3.2_1}. Letting \n$\\alpha_0=\\frac{1-\\alpha^2}{2}$ in \\eqref{eq:2.3.2_0}, we obtain $\\alpha^2(1+\\alpha_0) = (1-(1-\\alpha^2))(1+\\frac{1-\\alpha^2}{2}) \\leq \\frac{1+\\alpha^2}{2}$ and $\\alpha^2(1+\\alpha_0^{-1}) \\leq \\frac{2}{1-\\alpha^2}$, and thus \\eqref{eq:2.3.2} follows. \n\\end{proof}\n \n\\begin{lemma} \\label{lem:Xhat_Xhalf_comp} \nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$. Then\n\\begin{align}\n \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] \n \\leq &~ 4\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left( 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2, \\label{eq:hatx_xprox_comp}\\\\\n \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big]\n \\leq &~ 3\\alpha^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right), \\label{eq:X_-X_1}\\\\ \n \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] \\leq\n &~ \\frac{16}{1-\\alpha^2}\\Big( \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]+ \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\Big) + \\frac{1+\\alpha^2}{2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber\\\\ \n &~ +\\frac{8}{1-\\alpha^2}\\left( \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +\\eta^2\\sigma^2\\right). 
\\label{eq:2.2.2}\n\\end{align}\nFurther, if $\\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, then \n\\begin{align}\n \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\leq \n &~ 30\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] +4\\sqrt{3} \\alpha \\gamma_x \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +16\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n &~ + 8\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 8\\eta^2\\sigma^2. \\label{eq:2.2.3} \n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe proof of \\eqref{eq:hatx_xprox_comp} is the same as that of Lemma \\ref{lem:Xhat_Xhalf}, because \\eqref{eq:alg3_4} is the same as \\eqref{eq:x_half_update} and \\eqref{eq:x_y_mean} holds for both algorithms.\n\nFor $\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}$, we have from \\eqref{eq:alg3_5} that\n\\begin{align}\n &~ \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] = \\mathbb{E}\\big[\\mathbb{E}_Q\\big[\\| Q_{\\mathbf{x}}\\big[\\mathbf{X}^{t+\\frac{1}{2}} - \\underline\\mathbf{X}^{t}\\big] -(\\mathbf{X}^{t+\\frac{1}{2}}-\\underline\\mathbf{X}^{t})\\|^2\\big]\\big] \\nonumber \\\\\n \\leq &~ \\alpha^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\underline\\mathbf{X}^{t}\\|^2\\big] = \\alpha^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t+\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t+\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\ \n \\le & ~ \\alpha^2(1+\\alpha_1)\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t + \\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big] \\nonumber \\\\ \n\\leq &~ \\alpha^2(1+\\alpha_1)\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + 2\\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+2\\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big], \\label{eq:X_-X_0} \n\\end{align}\nwhere $\\alpha_1$ can be any positive number.\nTaking $\\alpha_1 = 2$ in \\eqref{eq:X_-X_0} gives \\eqref{eq:X_-X_1}.\nTaking $\\alpha_1 = \\frac{1-\\alpha^2}{2}$ in \\eqref{eq:X_-X_0} and plugging \\eqref{eq:hatx_xprox_comp} give \\eqref{eq:2.2.2}.\n\n\nRegarding $\\mathbb{E}[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2]$, similarly to \\eqref{eq:Xplus1-X}, we have from \\eqref{eq:compX_hatW} that\n\\begin{align}\n&~ \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x - \\mathbf{X}^{t} + \\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big] \\nonumber \\\\\n\\leq&~(1+\\alpha_2) \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x-\\mathbf{X}^t\\|^2\\big] + (1+\\alpha_2^{-1}) \\mathbb{E}\\big[\\|\\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big]\\nonumber\\\\\n \\overset{\\eqref{eq:Xplus1-X}, \\eqref{eq:X_-X_1}}\\leq &~ (1+\\alpha_2) \\left( 3\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] +3\\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 12 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\\right) \\nonumber \\\\\n&~ + (1+\\alpha_2^{-1})4\\gamma_x^2 \\cdot 3\\alpha^2 \\left( 
\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] + \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\\right) \\nonumber \\\\\n\\leq &~ 4\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] + 4 \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 14\\mathbb{E}\\big[\\|\\mathbf{X}^t _\\perp\\|^2\\big] + 4\\sqrt{3} \\alpha \\gamma_x \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big], \\nonumber\n\\end{align}\nwhere in the first inequality $\\alpha_2$ could be any positive number, in the second inequality we use \\eqref{eq:X_-X_1},\nand in the last inequality we take $\\alpha_2 = 2\\gamma_x \\alpha$ and thus with $\\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, it holds\n$ 3(1+\\alpha_2) +12\\gamma_x^2\\alpha^2(1+\\alpha_2^{-1}) = 3(1+2\\gamma_x\\alpha)^2 \\leq 4$,\n $12(1+\\alpha_2)\\leq\n 8\\sqrt{3}\\leq 14$, \n$(1+\\alpha_2^{-1})4\\gamma_x^2\\cdot3\\alpha^2 \\leq\n4\\sqrt{3} \\alpha \\gamma_x$.\nThen plugging \\eqref{eq:hatx_xprox_comp} into the inequality above, we obtain \\eqref{eq:2.2.3}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:X_consensus_comperror} \nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$ and $\\gamma_x\\leq \\min\\{\\frac{ (1-\\widehat\\rho_x^2)^2}{60\\alpha}, \\frac{1-\\alpha^2}{25}\\}$.\nThen the consensus error and compression error of $\\mathbf{X}$ can be bounded by\n\\begin{align} \n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \n \\leq &~\n \\frac{3+\\widehat\\rho_x^2}{4} \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + 2\\alpha \\gamma_x (1-\\widehat\\rho_x^2) \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n + \\frac{9}{4(1-\\widehat\\rho_x^2)}\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber\\\\\n &~ + 4\\alpha \\gamma_x (1-\\widehat\\rho_x^2)\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 4 \\alpha \\gamma_x (1-\\widehat\\rho_x^2)\\eta^2\\sigma^2, \\label{eq:2.4.1}\\\\\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big] \n \\leq &~ \\frac{21}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +\\frac{21}{1-\\alpha^2} \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\nonumber\\\\\n&~ + \\frac{11}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{11}{1-\\alpha^2} \\eta^2\\sigma^2. \\label{eq:2.5.1}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFirst, let us consider the consensus error of $\\mathbf{X}$. 
\nWith the update \\eqref{eq:compX_hatW}, we have\n\\begin{align}\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \\leq &~ (1+\\alpha_3)\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x (\\mathbf{I}- \\mathbf{J})\\|^2\\big] +(1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\|\\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big] \\nonumber\\\\\n \\leq &~ (1+\\alpha_3)\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_x - \\mathbf{J})\\|^2\\big] + (1+\\alpha_3^{-1})4\\gamma_x^2\\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big], \\label{eq:XComp_consensus0}\n\\end{align}\nwhere $\\alpha_3$ is any positive number, and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$ is used.\nThe first term on the right hand side of \\eqref{eq:XComp_consensus0} can be bounded in the same way as its non-compressed version in Lemma \\ref{lem:XI_J}, with $\\mathbf{W}$ replaced by $\\widehat\\mathbf{W}_x$; namely,\n\\begin{align}\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}} (\\widehat\\mathbf{W}_x-\\mathbf{J})\\|^2\\big]\n \\leq &~ \\textstyle \\frac{1+\\widehat\\rho^2_x}{2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big]+ \\frac{2\\widehat\\rho^2_x \\eta^2 }{1-\\widehat\\rho^2_x} \\mathbb{E}\\big[\\| \\Y^{t}_\\perp \\|^2\\big]. \\label{eq:XComp_consensus1}\n\\end{align} \nPlugging \\eqref{eq:XComp_consensus1} and \\eqref{eq:X_-X_1} into \\eqref{eq:XComp_consensus0} gives\n\\begin{align*}\n &~ \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \\leq ~ (1+\\alpha_3)\\left( \\textstyle \\frac{1+\\widehat\\rho^2_x}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t}_\\perp \\|^2\\big]+ \\frac{2\\widehat\\rho^2_x \\eta^2 }{1-\\widehat\\rho^2_x} \\mathbb{E}\\big[\\| \\Y^{t}_\\perp \\|^2\\big]\\right) \\\\\n &~ + (1+\\alpha_3^{-1})12 \\alpha^2 \\gamma_x^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right)\\\\\n \\overset{\\eqref{eq:hatx_xprox_comp}}{\\leq} &~\n \\left( \\textstyle \\frac{1+\\widehat\\rho_x^2}{2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\right) \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] \\nonumber\\\\\n &~+ 12\\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +\\left( \\textstyle \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2}(1+\\alpha_3) +48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})\\right)\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\\\\n&~ +24 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +24 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})\\eta^2\\sigma^2.\n\\end{align*} \nLet $\\alpha_3 = \\frac{7\\alpha\\gamma_x}{1-\\widehat\\rho_x^2}$ and $\\gamma_x\\leq \\frac{ (1-\\widehat\\rho_x^2)^2}{60\\alpha}$. 
\nThen $\\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})=\\alpha\\gamma_x (\\alpha\\gamma_x+\\frac{1-\\widehat\\rho_x^2}{7})\\leq \\alpha\\gamma_x (\\frac{ (1-\\widehat\\rho_x^2)^2}{60}+\\frac{1-\\widehat\\rho_x^2}{7})\\leq \\frac{\\alpha\\gamma_x (1-\\widehat\\rho_x^2)}{6}$\nand \n\\begin{align*}\n &~ \\textstyle \\frac{1+\\widehat\\rho_x^2}{2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) = \\frac{1+\\widehat\\rho_x^2}{2} + 48 \\alpha^2 \\gamma_x^2 + \\frac{7\\alpha\\gamma_x}{1-\\widehat\\rho_x^2} + \\frac{48\\alpha\\gamma_x(1-\\widehat\\rho_x^2)}{7} \\\\ \n \\leq&~ \\textstyle \\frac{1+\\widehat\\rho_x^2}{2} + \\frac{48}{60^2}(1-\\widehat\\rho_x^2)^4 + \\frac{7}{60}(1-\\widehat\\rho_x^2) + \\frac{7}{60}(1-\\widehat\\rho_x^2)^3\\leq \\frac{1+\\widehat\\rho_x^2}{2} + \\frac{ 1-\\widehat\\rho_x^2}{4} = \\frac{3+\\widehat\\rho_x^2}{4},\\\\\n &~ \\textstyle \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) = \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2} + 48 \\alpha^2 \\gamma_x^2 + \n \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2} \\frac{7 \\alpha \\gamma_x }{1-\\widehat\\rho_x^2} + \\frac{48\\alpha\\gamma_x(1-\\widehat\\rho_x^2)}{7}\\\\\n \\leq &~ \\textstyle \\frac{1}{1-\\widehat\\rho_x^2} \\left(\n 2\\widehat\\rho_x^2 + \\frac{48}{60^2} (1-\\widehat\\rho_x^2) + \\frac{14\\widehat\\rho_x^2}{60} + \\frac{7}{60}(1-\\widehat\\rho_x^2)\n \\right) \\leq \n \\frac{1}{1-\\widehat\\rho_x^2} \\left(\n 2\\widehat\\rho_x^2 + \\frac{48}{60^2} + \\frac{7}{60}\n \\right) \\leq \\frac{9}{4(1-\\widehat\\rho_x^2)}. \n\\end{align*}\nThus\n\\eqref{eq:2.4.1} holds. \n \n\nNow let us consider the compression error of $\\mathbf{X}$.\nBy \\eqref{eq:alg3_6}, we have\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big\n = \\mathbb{E}\\big[\\|(\\underline\\mathbf{X}^{t+1} - \\mathbf{X}^{t+\\frac{1}{2}}) \\big(\\gamma_x(\\mathbf{W}-\\mathbf{I}) -\\mathbf{I}\\big) + \\gamma_x \\mathbf{X}^{t+\\frac{1}{2}} (\\mathbf{I}-\\mathbf{J}) (\\mathbf{W}-\\mathbf{I}) \\|^2\\big] \\nonumber\\\\\n\\leq&~ (1+\\alpha_4) (1+2\\gamma_x)^2 \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] + (1+\\alpha_4^{-1})4 \\gamma_x^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}_\\perp\\|^2\\big],\\label{eq:2.5.1.0}\n\\end{align}\nwhere we have used $\\mathbf{J}\\mathbf{W}=\\mathbf{J}$ in the equality,\n$\\|\\gamma_x (\\mathbf{W}-\\mathbf{I}) -\\mathbf{I}\\|_2\\leq \\gamma_x\\|\\mathbf{W}-\\mathbf{I}\\|_2+\\|\\mathbf{I}\\|_2\\leq 1+2\\gamma_x$ and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$ in the inequality, and $\\alpha_4$ can be any positive number. 
For the second term in the right hand side of \\eqref{eq:2.5.1.0}, we have\n\\begin{align}\n \\|\\mathbf{X}^{t+\\frac{1}{2}}_\\perp\\|^2 \\overset{\\eqref{eq:alg3_4}}{=}&~ \\left\\|\\left(\\prox_{\\eta r} \\left(\\mathbf{X}^t - \\eta \\Y^{t}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^t - \\eta \\bar{\\mathbf{y}}^{t}\\right)\\mathbf{1}^\\top\\right)(\\mathbf{I}-\\mathbf{J})\\right\\|^2 \\nonumber \\\\\n \\leq&~\n \\|\\mathbf{X}^t_\\perp- \\eta \\Y^{t}_\\perp\\|^2 \n \\leq 2\\|\\mathbf{X}^t_\\perp\\|^2+2\\eta^2\\|\\Y^{t}_\\perp\\|^2, \\label{eq:2.2.1}\n\\end{align}\nwhere we have used $\\mathbf{1}^\\top(\\mathbf{I}-\\mathbf{J})=\\mathbf{0}^\\top$, $\\|\\mathbf{I}-\\mathbf{J}\\|_2\\leq 1$, and Lemma \\ref{lem:prox_diff}.\nNow plugging \\eqref{eq:2.2.2} and \\eqref{eq:2.2.1} into \\eqref{eq:2.5.1.0} gives\n\\begin{align*}\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big]\n \\leq \\left( \\textstyle (1+\\alpha_4^{-1})8\\gamma_x^2+(1+\\alpha_4) (1+2\\gamma_x)^2\\frac{16}{1-\\alpha^2}\\right) \\left( \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\right) \\nonumber\\\\\n \\textstyle + (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n +(1+\\alpha_4)(1+2\\gamma_x)^2\\frac{8}{1-\\alpha^2} \\left( \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\eta^2\\sigma^2\\right).\n\\end{align*}\nWith $\\alpha_4=\\frac{1-\\alpha^2}{12}$ and $\\gamma_x\\leq \\frac{1-\\alpha^2}{25}$, \\eqref{eq:2.5.1} holds because $(1+2\\gamma_x)^2 \n \\leq 1 + \\frac{104}{25}\\gamma_x \\leq \\frac{7}{6}$, $ (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2}\\leq \\frac{1+\\alpha^2}{2}+\\frac{104}{25}\\gamma_x\\leq \\frac{2+\\alpha^2}{3}$, and\n\\begin{align} \n \n \n \n \n \n (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2} \\leq &~ \\frac{2+\\alpha^2}{3} + \\alpha_4 = \\frac{3+\\alpha^2}{4}, \\label{eq:gamma_x_1}\\\\\n (1+\\alpha_4^{-1}) 8\\gamma_x^2+ (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{16}{1-\\alpha^2} \\leq&~ \\frac{13}{1-\\alpha^2}\\frac{8}{625} + \\frac{13}{12} \\frac{7}{6} \\frac{16}{1-\\alpha^2} \\leq \\frac{21}{1-\\alpha^2}, \\label{eq:gamma_x_2}\\\\\n (1+\\alpha_4)(1+2\\gamma_x)^2\\frac{8}{1-\\alpha^2} \\leq&~ \\frac{13}{12} \\frac{7}{6}\\frac{8}{1-\\alpha^2} \\leq \\frac{11}{1-\\alpha^2}. 
\\nonumber\n\\end{align}\n\\end{proof}\n\n\\begin{lemma} \n\\label{lem:Y_consensus_comperror} Let $\\eta\\leq \\min\\{\\lambda, \\frac{1-\\widehat\\rho^2_y}{8\\sqrt{5} L} \\} $, $\\lambda \\leq\\frac{1}{4 L}$, $ \\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, $\\gamma_y\\leq \\min\\{\\frac{\\sqrt{1-\\widehat\\rho^2_y}}{12\\alpha}, \\frac{1-\\alpha^2}{25}\\}$.\nThen the consensus error and compression error of $\\Y$ can be bounded by\n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \\leq &~ \\frac{150 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{20\\sqrt{3} \\alpha\\gamma_x L^2}{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]+\\frac{3+\\widehat\\rho^2_y }{4}\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber\\\\\n&~ +\\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 12n \\sigma^2, \\label{eq:2.4.2} \\\\\n\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big]\n\\leq &~ \\frac{180 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{24\\sqrt{3}\\alpha\\gamma_x L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n+ \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\nonumber\\\\\n&~ +\\frac{104\\gamma_y^2+ 96\\eta^2 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] + \\frac{48 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{10 n}{1-\\alpha^2} \\sigma^2 .\\label{eq:2.5.2}\n\\end{align} \n\\end{lemma}\n\n\\begin{proof}\nFirst, let us consider the consensus error of $\\Y$. Similarly to \\eqref{eq:XComp_consensus0}, we have from the update \\eqref{eq:Y_hatW} that\n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \\leq (1+\\alpha_5)\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_y-\\mathbf{J})\\|^2\\big] + (1+\\alpha_5^{-1})4\\gamma_y^2 \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big], \\label{eq:Ycomp_conses0}\n\\end{align}\nwhere $\\alpha_5$ can be any positive number.\nSimilarly to \\eqref{eq:y_cons1}-\\eqref{eq:y_cons2} in the proof of Lemma \\ref{lem:YI_J}, we have the bound for the first term on the right hand side of \\eqref{eq:Ycomp_conses0} by replacing $\\mathbf{W}$ with $\\widehat\\mathbf{W}_y$, namely, \n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_y-\\mathbf{J})\\|^2\\big] \\leq \\textstyle \\frac{1+\\widehat\\rho^2_y}{2} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] + \\frac{2 \\widehat\\rho^2_y L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] + 5 \\widehat\\rho^2_y n \\sigma^2.\\label{eq:comp_y_cons220} \n\\end{align} \nPlug \\eqref{eq:comp_y_cons220} and \\eqref{eq:2.3.2_1} back to \\eqref{eq:Ycomp_conses0}, and take $\\alpha_5 = \\frac{1-\\widehat\\rho^2_y}{3(1+\\widehat\\rho^2_y)}$. 
We have \n\\begin{align*} \n &~ \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \n \\leq \\textstyle \\frac{2(2+\\widehat\\rho^2_y)}{3(1+\\widehat\\rho^2_y)}\\frac{1+\\widehat\\rho^2_y}{2} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] \n + \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 2\\alpha^2 \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\nonumber\\\\\n &~\\quad \\textstyle + \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 6\\alpha^2 n\\sigma^2 + 2\\cdot5 \\widehat\\rho^2_y n \\sigma^2 + \\left( \\textstyle \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 4\\alpha^2 L^2 + 2\\cdot\\frac{2 \\widehat\\rho^2_y L^2 }{1-\\widehat\\rho^2_y } \\right)\\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\textstyle \\frac{2+\\widehat\\rho^2_y}{3} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] \n + \\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + 11 n \\sigma^2 + \\frac{5 L^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\\\\n \\leq &~ \\textstyle \\frac{150 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{20\\sqrt{3} L^2}{1-\\widehat\\rho^2_y } \\alpha \\gamma_x \\mathbb{E}[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y } \\eta^2\\sigma^2 + 11 n \\sigma^2 \\nonumber\\\\\n&~ \\textstyle +\\left( \\textstyle \\frac{2+\\widehat\\rho^2_y}{3}+ \\frac{80 L^2 }{1-\\widehat\\rho^2_y } \\eta^2\\right) \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] +\\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y}\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big],\n\\end{align*}\nwhere the first inequality holds by $1+\\alpha_5 = \\frac{2(2+\\widehat\\rho^2_y)}{3(1+\\widehat\\rho^2_y)} \\leq 2$ and $1+\\alpha_5^{-1} = \\frac{2(2+\\widehat\\rho^2_y)}{1-\\widehat\\rho^2_y}\\leq \\frac{6}{1-\\widehat\\rho^2_y}$, \nthe second inequality holds by $\\gamma_y\\leq \\frac{\\sqrt{1-\\widehat\\rho^2_y}}{12\\alpha}$ and $\\alpha^2\\leq 1$, and the third inequality holds by \\eqref{eq:2.2.3}.\nBy $\\frac{80 L^2 }{1-\\widehat\\rho^2_y} \\eta^2 \\leq \\frac{1-\\widehat\\rho^2_y}{4}$ and $ \\frac{40 L^2 }{1-\\widehat\\rho^2_y} \\eta^2\\leq \\frac{1-\\widehat\\rho^2_y}{8}\\leq 1$ from $\\eta\\leq \\frac{1-\\widehat\\rho^2_y}{8\\sqrt{5} L} $, we can now obtain \\eqref{eq:2.4.2}.\n\nNext, let us consider the compression error of $\\Y$. Similarly to \\eqref{eq:2.5.1.0}, we have by \\eqref{eq:alg3_3} that\n\\begin{align} \n&~\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big] \n\\leq (1+\\alpha_6)(1+2\\gamma_y)^2 \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] + (1+\\alpha_6^{-1})4 \\gamma_y^2 \\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big], \\label{eq:Y_compress_0}\n\\end{align}\nwhere\n$\\alpha_6$ is any positive number. 
\nFor $\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big]$, we have from \\eqref{eq:alg3_1} that\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big] =\\mathbb{E}\\big[\\|( \\Y^{t} + \\nabla \\mathbf{F}^{t+1} - \\nabla \\mathbf{F}^{t})(\\mathbf{I}-\\mathbf{J})\\|^2\\big]\\nonumber \\\\\n \\leq &~ 2\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big] +2\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big] \\leq \n 2\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big] +6 n \\sigma^2 + 4 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big], \\label{eq:2.3.1}\n\\end{align} \nwhere we have used \\eqref{eq:y_cons12}.\nPlug \\eqref{eq:2.3.2} and \\eqref{eq:2.3.1} back to \\eqref{eq:Y_compress_0} to have\n\\begin{align*} \n&~\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big] \\leq \\textstyle (1+\\alpha_6) (1+2\\gamma_y)^2 \\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] +(1+\\alpha_6^{-1})8\\gamma_y^2\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] \\\\\n&~+ \\left( \\textstyle (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\right)4 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big] \\\\\n&~ + \\left( \\textstyle (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\right) 6 n \\sigma^2.\n\\end{align*} \nWith $\\alpha_6=\\frac{1-\\alpha^2}{12}$ and $\\gamma_y< \\frac{1-\\alpha^2}{25}$,\nlike \\eqref{eq:gamma_x_1} and \\eqref{eq:gamma_x_2}, we have $(1+\\alpha_6) (1+2\\gamma_y)^2 \\frac{1+\\alpha^2}{2}\\leq \\frac{3+\\alpha^2}{4}$, $8(1+\\alpha_6^{-1})\\leq\\frac{8\\cdot13}{1-\\alpha^2} = \\frac{104}{1-\\alpha^2} $ and $ (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\leq \\frac{13}{1-\\alpha^2}\\frac{4}{625}+\\frac{13}{12}\\frac{7}{6}\\frac{1}{1-\\alpha^2}\\leq \\frac{3}{2(1-\\alpha^2)}$. Thus\n\\begin{align*} \n\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big]\n\\leq &~ \\textstyle \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \n +\\frac{104\\gamma_y^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big]+\\frac{6 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big] + \\frac{9n \\sigma^2}{1-\\alpha^2} \\nonumber\\\\\n\\leq &~ \\textstyle\\frac{180 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{24\\sqrt{3} \\alpha\\gamma_x L^2 }{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \n+ \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\\\\n&~ \\textstyle +\\frac{104\\gamma_y^2+ 96\\eta^2 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] + \\frac{48 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{48 L^2\\eta^2+9n}{1-\\alpha^2} \\sigma^2,\n\\end{align*}\nwhere the second inequality holds by \\eqref{eq:2.2.3}.\nBy $48 L^2\\eta^2\\leq n$, we have \\eqref{eq:2.5.2} and complete the proof.\n\\end{proof}\n \n\\begin{lemma}\\label{lem:phi_one_step}\nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$ and $ \\gamma_x\\leq \\frac{1}{6\\alpha}$. 
\nIt holds\n\\begin{align}\n \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big] \n\\leq&~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big] + \\frac{12}{\\lambda}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\frac{12}{\\lambda} \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n&~+\\frac{1}{\\lambda}\\left( -\\frac{\\eta}{4\\lambda} + 23\\alpha\\gamma_x \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big] + \\frac{5}{\\lambda} \\eta^2 \\sigma^2. \\label{eq:2.7}\n\\end{align} \n\\end{lemma}\n\n\\begin{proof}\nSimilar to \\eqref{eq:phi_update_0}, we have \n\\begin{align}\n&~ \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big] \\overset{\\eqref{eq:x_t_hat}}{=} \\mathbb{E}\\big[\\phi(\\widehat{\\mathbf{x}}_i^{t+1})\\big]+\\frac{1}{2\\lambda} \\mathbb{E}\\big[\\|\\widehat{\\mathbf{x}}_i^{t+1}-{\\mathbf{x}}_i^{t+1}\\|^2\\big] \\nonumber \\\\\n\\overset{ \\eqref{eq:compX_hatW}}{\\leq} &~ \\mathbb{E}\\bigg[\\phi\\bigg(\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)\\bigg] +\\frac{1}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}- {\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big) - \\gamma_x\\sum_{j=1}^n \\big(\\mathbf{W}_{ji}-\\mathbf{I}_{ji}\\big)\\big(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big) \\bigg\\|^2\\bigg] \\nonumber\\\\\n\\leq&~ \\mathbb{E}\\bigg[\\phi\\bigg(\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)\\bigg] + \\frac{1+\\alpha_7}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2\\bigg]\\nonumber\\\\\n&~ + \\frac{1+\\alpha_7^{-1}}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\gamma_x \\sum_{j=1}^n\\big(\\mathbf{W}_{ji}-\\mathbf{I}_{ji}\\big)\\big(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2\\bigg] \\nonumber \\\\\n\\overset{\\mbox{Lemma \\ref{lem:weak_convx}}}\\leq &~ \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\mathbb{E}\\big[\\phi( \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\big] + \\frac{ L}{2} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} (\\widehat\\mathbf{W}_x)_{li}\\mathbb{E}\\big[\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\big] \\nonumber \\\\\n&~ + \\frac{1+\\alpha_7}{2\\lambda} \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] + \\frac{1+\\alpha_7^{-1}}{2\\lambda}\\gamma_x^2 \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big] \\nonumber \\\\\n\\leq &~ \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}})\\big] + \\frac{1}{4\\lambda} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} (\\widehat\\mathbf{W}_x)_{li} \\mathbb{E}\\big[\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\big] \\nonumber \\\\\n&~+ \\frac{\\alpha_7}{2\\lambda} \\sum_{j=1}^n 
\\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] + \\frac{1+\\alpha_7^{-1}}{2\\lambda}\\gamma_x^2 \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big]. \\label{eq:phi_lambda1}\n\\end{align}\nThe same as \\eqref{eq:phi_lambda} and \\eqref{eq:2_3}, for the first two terms in the right hand side of \\eqref{eq:phi_lambda1}, we have \n\\begin{align}\n \\sum_{i=1}^n \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) \\leq \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) +\\frac{1}{2\\lambda} \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - \\frac{1}{2\\lambda} \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2,\\label{eq:2_2_press}\\\\\n \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}(\\widehat\\mathbf{W}_x)_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2 \n\\leq 8 \\|\\mathbf{X}^{t}_\\perp\\|^2+ 8\\eta^2 \\|\\Y^{t}_\\perp\\|^2. \\label{eq:2_3_press}\n\\end{align}\nFor the last two terms on the right hand side of \\eqref{eq:phi_lambda1}, we have\n\\begin{align}\n &~ \\sum_{i=1}^n\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] \n = \\| \\widehat\\mathbf{X}^{t+\\frac{1}{2}}-\\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \n \\leq 2 \\| \\widehat\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^{t} \\|^2 +2 \\| \\widehat\\mathbf{X}^{t} - \\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \\nonumber \\\\\n \\leq &~ \\textstyle \\frac{2}{(1-\\lambda L)^2} \\| \\mathbf{X}^{t+\\frac{1}{2}}\n \n - \\mathbf{X}^{t} \\|^2 +2 \\| \\widehat\\mathbf{X}^{t} - \\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \n \n \\leq 10 \\| \\mathbf{X}^{t+\\frac{1}{2}}- \\widehat\\mathbf{X}^{t} \\|^2+ 8 \\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2, \\label{eq:X_-X2}\\\\\n &~ \\sum_{i=1}^n \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big] = \\mathbb{E}\\big[\\|(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big]\\leq\n 4\\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big]\\nonumber \\\\\n \\leq &~ 12\\alpha^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right), \\label{eq:X_-X1}\n\\end{align}\nwhere \\eqref{eq:X_-X2} holds by Lemma \\ref{lem:prox_diff} and $\\frac{1}{(1-\\lambda L)^2}\\leq 2$, and \\eqref{eq:X_-X1} holds by \\eqref{eq:X_-X_1}.\n\n \nSum up \\eqref{eq:phi_lambda1} for $t=0,1,\\ldots,T-1$ and take $\\alpha_7 =\\alpha\\gamma_x$.\nThen with \\eqref{eq:2_2_press}, \\eqref{eq:2_3_press}, \\eqref{eq:X_-X2} and \\eqref{eq:X_-X1}, we have\n\\begin{align*}\n \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big]\n\\leq & ~\\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda( {\\mathbf{x}}_i^{t}) \\big] + \\frac{2}{\\lambda}\\left( \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]\\right) +\\textstyle \\frac{6\\alpha\\gamma_x+6\\alpha^2\\gamma_x^2}{\\lambda} 
\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\\n&~ + \\frac{1}{\\lambda}\\left( \\textstyle \\frac{1}{2}+11\\alpha\\gamma_x +6\\alpha^2\\gamma_x^2\\right) \\mathbb{E}\\big[\\| \\mathbf{X}^{t+\\frac{1}{2}}- \\widehat\\mathbf{X}^{t} \\|^2\\big]+ \\frac{1}{\\lambda}\\left( \\textstyle -\\frac{1}{2}+10\\alpha\\gamma_x +6\\alpha^2\\gamma_x^2\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big]\\\\\n\\leq & ~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big]+ \\frac{2}{\\lambda}\\left( \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]\\right) + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\\n&~\\quad +\\frac{1}{\\lambda}\\left( \\textstyle \\frac{1}{2}+12\\alpha\\gamma_x\\right)\\mathbb{E}\\big[ \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] +\\frac{1}{\\lambda} \\left( \\textstyle -\\frac{1}{2}+11\\alpha\\gamma_x\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big].\n\\nonumber \\\\\n\\leq &~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big] + \\frac{12}{\\lambda}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\frac{12}{\\lambda} \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n&~+ \\frac{1}{\\lambda}\\Big( {\\textstyle\\left(\\frac{1}{2}+12\\alpha\\gamma_x \\right) \\left( 1-\\frac{\\eta}{2\\lambda} \\right) + \\left( -\\frac{1}{2}+11\\alpha\\gamma_x\\right) }\\Big) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big] + \\frac{5}{\\lambda} \\eta^2 \\sigma^2,\n\\end{align*}\nwhere the second inequality holds by $6\\alpha\\gamma_x\\leq 1$, and the third inequality holds by \\eqref{eq:hatx_xprox_comp} with $\\frac{1}{2}+12\\alpha\\gamma_x\\leq \\frac{5}{2}$.\nNoticing $$\\left(\\frac{1}{2}+12\\alpha\\gamma_x \\right) \\left( 1-\\frac{\\eta}{2\\lambda} \\right) + \\left( -\\frac{1}{2}+11\\alpha\\gamma_x\\right) = 23\\alpha\\gamma_x - \\frac{\\eta}{4\\lambda} - \\frac{6\\alpha\\gamma_x\\eta}{\\lambda}\\leq 23\\alpha\\gamma_x - \\frac{\\eta}{4\\lambda},$$ \nwe obtain \\eqref{eq:2.7} and complete the proof.\n\\end{proof}\n\n\nWith Lemmas \\ref{lem:X_consensus_comperror}, \\ref{lem:Y_consensus_comperror} and \\ref{lem:phi_one_step}, we are ready to prove the Theorem \\ref{thm:sect3thm}. 
We will use the Lyapunov function:\n\\begin{align*}\n \\mathbf{V}^t = z_1 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + z_2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\underline\\mathbf{X}^{t}\\|^2\\big] +z_3\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]+z_4 \\mathbb{E}\\big[\\|\\Y^{t}-\\underline\\Y^{t}\\|^2\\big] + z_5 \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda( {\\mathbf{x}}_i^{t})], \n\\end{align*}\nwhere $z_1, z_2, z_3, z_4, z_5 \\geq 0$ are determined later.\n\n\n\n\n\\subsection*{Proof of Theorem \\ref{thm:sect3thm}}\n\n\\begin{proof}\nDenote \n\\begin{align*} \n &~\\Omega_0^t = \\mathbb{E}[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t}\\|^2], \\quad \\Phi^t = \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda( {\\mathbf{x}}_i^{t})], \\\\\n &~ \\Omega^t = \\left(\\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big], \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\underline\\mathbf{X}^{t}\\|^2\\big], \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big], \\mathbb{E}\\big[\\|\\Y^{t}-\\underline\\Y^{t}\\|^2\\big], \\Phi^t\\right)^\\top.\n\\end{align*}\nThen Lemmas \\ref{lem:X_consensus_comperror}, \\ref{lem:Y_consensus_comperror} and \\ref{lem:phi_one_step} imply\n$\\Omega^{t+1} \\leq \\mathbf{A}\\Omega^t + {\\mathbf{b}} \\Omega_0^t + {\\mathbf{c}} \\sigma^2$ with\n\\begin{align*}\n &\\mathbf{A} = \\begin{pmatrix}\n \\frac{3+\\widehat\\rho^2_x}{4} &~ 2\\alpha\\gamma_x(1-\\widehat\\rho_x^2) &~ \\frac{9}{4(1-\\widehat\\rho^2_x)} \\eta^2 &~ 0 &~ 0\\\\\n \\frac{21}{1-\\alpha^2} &~ \\frac{3+\\alpha^2}{4} &~\\frac{21}{1-\\alpha^2} \\eta^2 &~ 0 &~ 0 \\\\\n \\frac{150 L^2}{1-\\widehat\\rho^2_y} &~ \\frac{20\\sqrt{3} L^2}{1-\\widehat\\rho^2_y }\\alpha\\gamma_x &~ \\frac{3+\\widehat\\rho^2_y}{4} &~ \\frac{48}{1-\\widehat\\rho^2_y }\\alpha^2\\gamma_y^2 &~ 0\\\\ \n \\frac{180 L^2}{1-\\alpha^2} &~ \\frac{24\\sqrt{3} L^2}{1-\\alpha^2} \\alpha\\gamma_x &~ \\frac{104\\gamma_y^2+96 L^2 \\eta^2}{1-\\alpha^2} &~ \\frac{3+\\alpha^2}{4} &~ 0\\\\\n \\frac{12}{\\lambda} &~ \\frac{7\\alpha\\gamma_x}{\\lambda} &~ \\frac{12}{\\lambda}\\eta^2 &~ 0 &~ 1\\\\\n \\end{pmatrix}, \\\\[0.2cm] \n&{\\mathbf{b}} = \n \\begin{pmatrix}\n 4\\alpha\\gamma_x(1-\\widehat\\rho_x^2) \\\\\n \\frac{11}{1-\\alpha^2} \\\\\n \\frac{40 L^2 }{1-\\widehat\\rho^2_y}\\\\\n \\frac{48 L^2}{1-\\alpha^2} \\\\\n \\frac{1}{\\lambda}\\left( \\textstyle -\\frac{\\eta}{4\\lambda} + 23\\alpha\\gamma_x \\right) \n \\end{pmatrix}, \\quad\n{\\mathbf{c}} = \n \\begin{pmatrix}\n 4\\alpha\\gamma_x \\eta^2 (1-\\widehat\\rho_x^2) \\\\\n \\frac{11 \\eta^2 }{1-\\alpha^2} \\\\ \n 12n \\\\\n \\frac{10n}{1-\\alpha^2}\\\\\n \\frac{5}{\\lambda} \\eta^2\n \\end{pmatrix}.\n\\end{align*}\nThen for any ${\\mathbf{z}} = (z_1, z_2 , z_3, z_4, z_5 )^\\top\\geq \\mathbf{0}^\\top$, it holds \n\\begin{align*}\n {\\mathbf{z}}^\\top \\Omega^{t+1} \\leq {\\mathbf{z}}^\\top \\Omega^t + ({\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top) \\Omega^t + {\\mathbf{z}}^\\top{\\mathbf{b}} \\Omega_0^t + {\\mathbf{z}}^\\top{\\mathbf{c}} \\sigma^2.\n\\end{align*}\nLet $\\gamma_x\\leq \\frac{\\eta}{\\alpha}$ and $\\gamma_y\\leq \\frac{(1-\\alpha^2) (1-\\widehat\\rho^2_x)(1-\\widehat\\rho^2_y)}{317}$.\nTake $$z_1=\\frac{52}{1-\\widehat\\rho^2_x}, z_2 = \\frac{448}{1-\\alpha^2} \\eta , z_3 = \\frac{521}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\eta^2, z_4=(1-\\alpha^2) \\eta^2, z_5=\\lambda.$$ We have\n\\begin{align*}\n{\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top \\leq &~\n \\begin{pmatrix}\n \\frac{21\\cdot448}{ (1-\\alpha^2)^2} \\eta + \\frac{150\\cdot521 
L^2\\eta^2}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2} + 180 L^2\\eta^2 - 1 \\\\[0.2cm]\n \\frac{521\\cdot20\\sqrt{3} L^2\\eta^3}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2} + 24\\sqrt{3} L^2\\eta^3 -\\eta \\\\[0.2cm]\n \\frac{448\\cdot21\\eta^3}{ (1-\\alpha^2)^2} + 96 L^2 \\eta^4 -\n \\frac{\\eta^2}{(1-\\widehat\\rho^2_x)^2}\\\\[0.1cm]\n 0 \\\\[0.1cm]\n 0\n \\end{pmatrix}^\\top, \\\\\n {\\mathbf{z}}^\\top{\\mathbf{b}} \\leq &~ \\textstyle -\\frac{\\eta}{4\\lambda} + 23\\eta + 48 L^2 \\eta^2 + \\frac{521\\cdot 40 \\eta^2 L^2}{(1-\\widehat\\rho^2_x)^2 (1-\\widehat\\rho^2_y)^2} + \\frac{448\\cdot11\\eta}{ (1-\\alpha^2)^2} + 52\\cdot4 \\eta,\\\\\n{\\mathbf{z}}^\\top{\\mathbf{c}} \\leq &~ \\left( \\textstyle 52\\cdot4\\eta + \\frac{448\\cdot 11\\eta}{ (1-\\alpha^2)^2} + \\frac{521\\cdot12n}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}+ 10n + 5 \\right)\\eta^2.\n\\end{align*}\n\nBy $\\eta\\leq \\frac{(1-\\alpha^2)^2(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2}{18830\\max\\{1, L\\}}$ and \n$\\lambda\\leq \\frac{ (1-\\alpha^2)^2}{9 L+41280}$,\nwe have ${\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top \\leq (-\\frac{1}{2}, 0, 0, 0, 0)^\\top$, \n\\begin{align*}\n{\\mathbf{z}}^\\top{\\mathbf{c}} \\leq \\textstyle\n\\frac{(521\\cdot12+10)n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2 = \\textstyle\\frac{6262n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2\n\\end{align*}\nand\n\\begin{align*}\n {\\mathbf{z}}^\\top{\\mathbf{b}} ~ \\leq &~ \\textstyle \\eta\\Big( -\\frac{1}{4\\lambda} + 23 + 48 L^2 \\eta + \\frac{521\\cdot 40 \\eta L^2}{(1-\\widehat\\rho^2_x)^2 (1-\\widehat\\rho^2_y)^2} + \\frac{448\\cdot11 }{ (1-\\alpha^2)^2} + 52\\cdot4 \\Big) \\nonumber \\\\\n \\leq &~ \\textstyle -\\frac{\\eta}{8\\lambda} + \\eta\\Big( -\\frac{1}{8\\lambda} + \\frac{ 9 L}{8 } + \\frac{5160}{ (1-\\alpha^2)^2}\\Big) \n \n \n \\leq \n -\\frac{\\eta}{8\\lambda}.\n\\end{align*}\nHence we have\n\\begin{align}\n {\\mathbf{z}}^\\top \\Omega^{t+1} \\leq \\textstyle {\\mathbf{z}}^\\top \\Omega^{t} -\\frac{\\eta}{8\\lambda} \\Omega_0^t -\\frac{1}{2}\\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] + \\frac{6262n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2\\sigma^2.\\label{eq:l_fun_comp} \n\\end{align}\nThus summing up \\eqref{eq:l_fun_comp} for $t=0,1,\\ldots,T-1$ gives \n\\begin{align}\n \\frac{1}{\\lambda T}\\sum_{t=0}^{T-1} \\Omega_0^t +\\frac{4}{\\eta T}\\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] \\leq \\textstyle \\frac{8\\left({\\mathbf{z}}^\\top \\Omega^0 - {\\mathbf{z}}^\\top \\Omega^{T}\\right)}{\\eta T} + \\frac{8(6262n+6)}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\eta\\sigma^2. 
\label{eq:thm3_avg-Omega}
\end{align}
From ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\underline{\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1}, \xi_i^{-1})=\mathbf{0}$, $\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have 
\begin{gather}
 \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\leq\|\nabla \mathbf{F}^0\|^2, 
 \quad \|\Y^{0}-\underline\Y^{0}\|^2 = \|\nabla \mathbf{F}^0-Q_{\mathbf{y}}\big[\nabla \mathbf{F}^0\big]\|^2 \leq \alpha^2 \|\nabla \mathbf{F}^0\|^2, \label{eq:initial_thm3_1}\\
 \|\mathbf{X}^0_\perp\|^2=0, \quad \|\mathbf{X}^0-\underline\mathbf{X}^{0}\|^2=0, 
 \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0).
 \label{eq:initial_thm3_2}
\end{gather}
Note \eqref{eq:end_thm2} still holds here. 
With \eqref{eq:initial_thm3_1}, \eqref{eq:initial_thm3_2}, \eqref{eq:end_thm2}, and the nonnegativity of $\mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$, $\mathbb{E}[\|\mathbf{X}^{T}-\underline\mathbf{X}^{T}\|^2]$, $\mathbb{E}[\|\Y^T_\perp\|^2]$, $\mathbb{E}[\|\Y^{T}-\underline\Y^{T}\|^2]$, we have
\begin{align}
{\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le \textstyle
 \frac{521}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \eta^2 \mathbb{E}[\|\nabla \mathbf{F}^0\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*, \label{eq:them3_Omega0_OmegaT}
\end{align}
where we have used $\alpha^2\leq 1$ from Assumption \ref{assu:compressor}.

By the convexity of the Frobenius norm and \eqref{eq:them3_Omega0_OmegaT}, we obtain from \eqref{eq:thm3_avg-Omega} that 
\begin{align}
 &~ \frac{1}{n\lambda^2} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta} \mathbb{E}[\|\mathbf{X}^\tau_\perp\|^2]
 \leq \frac{1}{n\lambda^2} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \nonumber \\
 \leq & \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} 
 +\frac{50096n+48}{(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \frac{\eta}{n\lambda}\sigma^2
 \textstyle + \frac{8\cdot521 \eta }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] + 
 \frac{8\eta}{n\lambda T} \mathbb{E}\big[ \|\nabla \mathbf{F}^0\|^2\big] \nonumber \\
 \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} 
 +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \textstyle \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)}. 
\label{eq:them_CDProxSGT0}
\end{align}
With $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we complete the proof.
\end{proof}


\section{Conclusion}
We have proposed two decentralized proximal stochastic gradient methods, DProxSGT and CDProxSGT, for nonconvex composite problems with data heterogeneously distributed on the computing nodes of a connected graph. CDProxSGT extends DProxSGT by applying compression to the communicated model parameters and gradient information. Both methods need only a single or $\mathcal{O}(1)$ samples for each update, which is important for achieving good generalization performance when training deep neural networks. Gradient tracking is used in both methods to address data heterogeneity. An $\mathcal{O}\left( \frac{1}{ \epsilon^4}\right)$ sample complexity and communication complexity are established for both methods to produce an expected $\epsilon$-stationary solution. 
Numerical experiments on training neural networks demonstrate the good generalization performance of the proposed methods and their ability to handle heterogeneous data.


\section{Convergence Analysis for DProxSGT} \label{sec:proof_DProxSGT}

In this section, we analyze the convergence rate of DProxSGT in Algorithm \ref{alg:DProxSGT}. For better readability, we use the matrix form of Algorithm \ref{alg:DProxSGT}. By the notation introduced in section~\ref{sec:notation}, we can write \eqref{eq:y_half_update}-\eqref{eq:x_1_update} in the more compact matrix form:
\begin{align}
 & \Y^{t-\frac{1}{2}} = \Y^{t-1} + \nabla \mathbf{F}^t - \nabla \mathbf{F}^{t-1},\label{eq:y_half_update_matrix}\\
 & \Y^t = \Y^{t-\frac{1}{2}}\mathbf{W},\label{eq:y_update_matrix}\\
 & \mathbf{X}^{t+\frac{1}{2}} =\prox_{\eta r} \left(\mathbf{X}^t - \eta \Y^{t}\right) \triangleq [\prox_{\eta r} \left({\mathbf{x}}_1^t - \eta {\mathbf{y}}_1^{t}\right),\ldots,\prox_{\eta r} \left({\mathbf{x}}_n^t - \eta {\mathbf{y}}_n^{t}\right)],\label{eq:x_half_update_matrix}\\
 & \mathbf{X}^{t+1} = \mathbf{X}^{t+\frac{1}{2}}\mathbf{W}. \label{eq:x_1_update_matrix}
\end{align} 

Below, we first bound $\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2$ in Lemma~\ref{lem:Xhat_Xhalf}. Then we give one-step bounds on the consensus errors $\|\mathbf{X}_\perp^t\|$ and $\|\Y_\perp^t\|$ and on $\phi_\lambda({\mathbf{x}}_i^{t+1})$ in Lemmas~\ref{lem:XI_J}, \ref{lem:YI_J}, and \ref{lem:weak_convex}. Finally, we prove Theorem \ref{thm:sec2} by constructing a Lyapunov function that involves $\|\mathbf{X}_\perp^t\|$, $\|\Y_\perp^t\|$, and $\phi_\lambda({\mathbf{x}}_i^{t})$.
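To make the matrix form above concrete, the snippet below sketches one DProxSGT iteration \eqref{eq:y_half_update_matrix}--\eqref{eq:x_1_update_matrix} in NumPy. It is only an illustrative sketch: the uniform averaging matrix $\mathbf{W}$, the random placeholders for $\nabla \mathbf{F}^{t-1}$ and $\nabla \mathbf{F}^{t}$, and the choice $r=\|\cdot\|_1$ (so that $\prox_{\eta r}$ is entry-wise soft-thresholding) are assumptions made here and are not part of the analysis.
\begin{verbatim}
import numpy as np

# Sketch of one DProxSGT iteration in matrix form; W, the sampled
# gradients, and r = ||.||_1 are placeholder assumptions.
d, n, eta = 4, 3, 0.01
rng = np.random.default_rng(0)
W = np.full((n, n), 1.0 / n)            # doubly stochastic mixing matrix
X = rng.standard_normal((d, n))         # columns are local iterates x_i^t
Y = rng.standard_normal((d, n))         # columns are trackers y_i^{t-1}
G_prev = rng.standard_normal((d, n))    # placeholder for nabla F^{t-1}
G_curr = rng.standard_normal((d, n))    # placeholder for nabla F^t

def prox_eta_r(V, eta):
    # prox of eta*||.||_1 acts column-wise: entry-wise soft-thresholding
    return np.sign(V) * np.maximum(np.abs(V) - eta, 0.0)

Y_half = Y + G_curr - G_prev            # gradient tracking, eq:y_half_update_matrix
Y = Y_half @ W                          # communicate trackers, eq:y_update_matrix
X_half = prox_eta_r(X - eta * Y, eta)   # proximal step, eq:x_half_update_matrix
X = X_half @ W                          # communicate iterates, eq:x_1_update_matrix
\end{verbatim}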
\begin{lemma} \label{lem:Xhat_Xhalf} Let $\eta\leq \lambda \leq \frac{1}{4 L}$. Then
\begin{align}
 \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big] \leq &~
4 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \left( 1-\frac{\eta}{2\lambda} \right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t\|^2\big] +4\eta^2 \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + 2\eta^2\sigma^2. \label{eq:hatx_xprox}
\end{align} 
\end{lemma} 
\begin{proof}
By the definition of $\widehat{\mathbf{x}}^t_i$ in \eqref{eq:x_t_hat}, we have $0 \in \nabla f(\widehat{\mathbf{x}}^t_i) + \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\lambda}(\widehat{\mathbf{x}}^t_i-{\mathbf{x}}^t_i)$, i.e.,
\[ \textstyle
0 \in \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\frac{\eta}{\lambda} \widehat{\mathbf{x}}^t_i-\frac{\eta}{\lambda}{\mathbf{x}}^t_i + \eta \nabla f(\widehat{\mathbf{x}}^t_i) \right) = \partial r(\widehat{\mathbf{x}}^t_i) + \frac{1}{\eta} \left(\widehat{\mathbf{x}}^t_i - \left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i)+ \left(1- \frac{\eta}{\lambda}\right) \widehat{\mathbf{x}}^t_i \right)\right).
\]
Thus we have $\widehat{\mathbf{x}}^t_i = \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)$. Then by \eqref{eq:x_half_update}, the convexity of $r$, and Lemma \ref{lem:prox_diff},
\begin{align}
 &~ \textstyle \|\widehat{\mathbf{x}}_i^{t}-{\mathbf{x}}_i^{t+\frac{1}{2}}\|^2
 = \left\| \prox_{\eta r}\left( \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i\right)- \prox_{\eta r} \left(
 {\mathbf{x}}_i^t - \eta {\mathbf{y}}^t_i\right) \right\|^2 \nonumber\\
 \leq &~ \textstyle \left\| \frac{\eta}{\lambda}{\mathbf{x}}^t_i - \eta \nabla f(\widehat{\mathbf{x}}^t_i) + \left(1- \frac{\eta}{\lambda}\right)\widehat{\mathbf{x}}^t_i - ({\mathbf{x}}^t_i-\eta{\mathbf{y}}^t_i) \right\|^2 = \left\| \left(1- \frac{\eta}{\lambda}\right)(\widehat{\mathbf{x}}^t_i -{\mathbf{x}}^t_i )- \eta (\nabla f(\widehat{\mathbf{x}}^t_i) -{\mathbf{y}}^t_i) \right\|^2 \nonumber\\
 = & ~ \textstyle \left(1- \frac{\eta}{\lambda}\right)^2 \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) + \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle\nonumber\\
\leq & ~ \textstyle\left(\left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L
\right) \left\| \widehat{\mathbf{x}}^t_i - {\mathbf{x}}^t_i \right\|^2 + \eta^2\left\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \right\|^2 + 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle , \label{eq:lem1.6.1} 
 \end{align}
where
the second inequality holds by $\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \nabla f({\mathbf{x}}^t_i)-\nabla f(\widehat{\mathbf{x}}^t_i) \right\rangle \leq L\left\|\widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t\right\|^2$. 
The second term on the right hand side of \eqref{eq:lem1.6.1} can be bounded by
\begin{align*}
&~ \textstyle \mathbb{E}_t \big[\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] \overset{\eqref{eq:x_y_mean}}{=} \mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t + \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] 
\leq 2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\big\| \overline{\nabla} \mathbf{F}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \big\|^2\big] \\
= &~2\mathbb{E}_t\big[\| {\mathbf{y}}^t_i- \bar{\mathbf{y}}^t \|^2\big] + 2\mathbb{E}_t\big[\| \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{f}^t \|^2\big]+ 2\| \overline{\nabla} \mathbf{f}^t - \nabla f(\widehat{\mathbf{x}}^t_i) \|^2 \\
\leq&~ 2\mathbb{E}_t\big[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + \frac{2}{n^2}\sum_{j=1}^n \mathbb{E}_t\big[\|\nabla F_j({\mathbf{x}}_j^t,\xi_j^t)-\nabla f_j({\mathbf{x}}_j^t)\|^2\big] + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4\| \nabla f({\mathbf{x}}^t_i)- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\\
\leq &~ 2\mathbb{E}_t\big[ \|{\mathbf{y}}_i^t-\bar{\mathbf{y}}^t\|^2\big] + 2 \frac{\sigma^2}{n} + 4 \| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \|^2 + 4 L^2 \|{\mathbf{x}}^t_i -\widehat{\mathbf{x}}^t_i\|^2,
\end{align*}
where the second equality holds by the unbiasedness of the stochastic gradients, and the second inequality also uses the independence among the $\xi_i^t$'s. 
In the last inequality, we use the variance bound on the stochastic gradients and the $L$-smoothness assumption.
Taking the full expectation of the above inequality and summing over all $i$ give
\begin{align}
\sum_{i=1}^n\mathbb{E}\big[\| {\mathbf{y}}^t_i- \nabla f(\widehat{\mathbf{x}}^t_i) \|^2\big] 
\leq 2\mathbb{E}\big[\|\Y^t_\perp\|^2\big] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2\big]. \label{eq:lem161_1}
\end{align}
To obtain the inequality above, we have used
\begin{align}
&~ \sum_{i=1}^n \left\| \overline{\nabla} \mathbf{f}^t -\nabla f({\mathbf{x}}^t_i) \right\|^2 
\leq \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n \left\|\nabla f_j({\mathbf{x}}_j^t) -\nabla f_j({\mathbf{x}}^t_i) \right\|^2 
\leq \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n 
\left\|{\mathbf{x}}_j^t - {\mathbf{x}}^t_i \right\|^2 \nonumber \\
= &~ \frac{ L^2}{n}\sum_{i=1}^n\sum_{j=1}^n \left( \left\|{\mathbf{x}}_j^t - \bar{\mathbf{x}}^t
\right\|^2 +\left\|\bar{\mathbf{x}}^t-{\mathbf{x}}^t_i \right\|^2 + 2\left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle\right)
=
2 L^2 \left\|\mathbf{X}^t_\perp\right\|^2, \label{eq:sumsum}
\end{align}
where the last equality holds by $ \frac{1}{n} \sum_{i=1}^n\sum_{j=1}^n \left\langle {\mathbf{x}}_j^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle = \sum_{i=1}^n \left\langle \frac{1}{n} \sum_{j=1}^n ({\mathbf{x}}_j^t - \bar{\mathbf{x}}^t), \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle =\sum_{i=1}^n\left\langle \bar{\mathbf{x}}^t - \bar{\mathbf{x}}^t, \bar{\mathbf{x}}^t-{\mathbf{x}}^t_i\right\rangle=0$ from the definition of $\bar{\mathbf{x}}$. 
For the third term on the right hand side of \eqref{eq:lem1.6.1}, we have
\begin{align}
 & ~\sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t-\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \overset{\eqref{eq:x_y_mean}}{=} \sum_{i=1}^n \mathbb{E}\left[\left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t+\overline{\nabla} \mathbf{F}^t -\nabla f({\mathbf{x}}^t_i) \right\rangle\right] \nonumber \\
= & ~ \textstyle \sum_{i=1}^n \mathbb{E}\big[ \langle\widehat{\mathbf{x}}^t_i -\bar{\widehat{\mathbf{x}}}^t, {\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\big[\langle \bar{{\mathbf{x}}}^t - {\mathbf{x}}_i^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \rangle\big] + \sum_{i=1}^n \mathbb{E}\left[ \left\langle \widehat{\mathbf{x}}^t_i-{\mathbf{x}}_i^t, \mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] -\nabla f({\mathbf{x}}^t_i)\right\rangle\right] \nonumber \\
 \leq &~ \frac{1}{2\eta} \left( \textstyle \mathbb{E}\big[\|\widehat\mathbf{X}^t_\perp\|^2\big]+ \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big]\right) + \eta \mathbb{E}\big[\|\Y^t_\perp\|^2\big] + \textstyle L\mathbb{E}\big[\| \widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big] + \frac{1}{4 L} \sum_{i=1}^n \mathbb{E}\big[\|\overline{\nabla} {\mathbf{f}}^t -\nabla f({\mathbf{x}}^t_i)\|^2\big] 
 \nonumber \\
\leq &~ \left(\textstyle\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{L}{2}\right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + \eta\mathbb{E}\big[\|\Y^t_\perp\|^2\big] + L\mathbb{E}\big[\|\widehat\mathbf{X}^t-\mathbf{X}^t\|^2\big],\label{eq:lem161_2}
\end{align}
where $\textstyle \sum_{i=1}^n \big\langle \bar{\widehat{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \big\rangle = 0$ and $\sum_{i=1}^n \left\langle \bar{{\mathbf{x}}}^t,{\mathbf{y}}_i^t -\bar{\mathbf{y}}^t \right\rangle = 0$ are used in the second equality, $\mathbb{E}_{t} \left[\overline{\nabla} \mathbf{F}^t\right] = \overline{\nabla} {\mathbf{f}}^t$ is used in the first inequality, and $\|\widehat\mathbf{X}^t_\perp\|^2 =\left\|\left(\prox_{\lambda \phi}(\mathbf{X}^t)- \prox_{\lambda \phi}(\bar{\mathbf{x}}^t)\mathbf{1}^\top\right) (\mathbf{I}-\mathbf{J})\right\|^2\leq \frac{1}{(1-\lambda L)^2}\|\mathbf{X}^t-\bar\mathbf{X}^t\|^2$ and \eqref{eq:sumsum} are used in the last inequality.

Now we can bound the summation of \eqref{eq:lem1.6.1} by using \eqref{eq:lem161_1} and \eqref{eq:lem161_2}:
\begin{align*}
& ~ \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t+\frac{1}{2}}\|^2\big]\\
 \leq & ~ \left(\textstyle \left(1- \frac{\eta}{\lambda}\right)^2 + 2\left(1- \frac{\eta}{\lambda}\right)\eta L
\right) \mathbb{E}\big[\| \widehat\mathbf{X}^t - \mathbf{X}^t \|^2\big] \\
& ~ + \eta^2 \left(2\mathbb{E}\big[ \|\Y^t_\perp\|^2\big] +2\sigma^2 + 8 L^2 \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] +4 L^2 \mathbb{E}\big[\| \mathbf{X}^t - \widehat\mathbf{X}^t\|^2\big]\right) \\
& ~ + \textstyle 2 \left(1- \frac{\eta}{\lambda}\right)\eta \left(\textstyle\left(\frac{1}{2\eta(1-\lambda L)^2} + \frac{1}{2\eta} + \frac{ L}{2} \right)\mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] + 
\\eta\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + L\\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t\\|^2\\big]\\right) \\\\ \n= & ~ \\textstyle \\left(1 - 2\\eta (\\frac{1}{\\lambda} - 2 L) + \\frac{\\eta^2}{\\lambda} (\\frac{1}{\\lambda} - 2 L) + 2 L \\eta^2(-\\frac{1}{\\lambda}+2 L)\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 2\\eta^2\\sigma^2 \\nonumber\\\\\n & ~ + \\textstyle \\left( \\left(1- \\frac{\\eta}{\\lambda}\\right) (1+\\frac{1}{(1-\\lambda L)^2}+ \\eta L) + 8\\eta^2 L^2 \\right) \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\n + 2 (2- \\frac{\\eta}{\\lambda}) \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big].\n\\end{align*} \nWith $\\eta \\leq \\lambda \\leq \\frac{1}{4 L}$, we have $\\frac{1}{(1-\\lambda L)^2}\\leq 2$ and \n\\eqref{eq:hatx_xprox} follows from the inequality above.\n \n\\end{proof}\n \n\\begin{lemma}\\label{lem:XI_J} \nThe consensus error of $\\mathbf{X}$ satisfies the following inequality\n\\begin{align} \n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] \\leq \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{2\\rho^2 \\eta^2 }{1-\\rho^2} \\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big]. \\label{eq:X_consensus}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n With the updates \\eqref{eq:x_half_update} and \\eqref{eq:x_1_update}, we have\n\\begin{align*}\n &~ \n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t-\\frac{1}{2}}\\mathbf{W}(\\mathbf{I}- \\mathbf{J})\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t-\\frac{1}{2}} (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n =&~ \\mathbb{E}\\big[\\| \\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right) (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n =&~ \\mathbb{E}\\big[\\| \\left(\\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right)\\mathbf{1}^\\top\\right) (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\mathbb{E}\\big[\\|\\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right)\\mathbf{1}^\\top\\|^2 \\|(\\mathbf{W}-\\mathbf{J})\\|^2_2] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\left[ \\textstyle \\sum_{i=1}^n\\| \\prox_{\\eta r} \\left({\\mathbf{x}}_i^{t-1} - \\eta {\\mathbf{y}}_i^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right) \\|^2\\right] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\left[ \\textstyle \\sum_{i=1}^n\\|\\left({\\mathbf{x}}_i^{t-1} - \\eta {\\mathbf{y}}_i^{t-1}\\right)-\\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right) \\|^2\\right] = \\rho^2 \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp - \\eta \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber\\\\\n \\leq &~ \\textstyle \\big(\\textstyle \\rho^2 + \\frac{1-\\rho^2}{2}\\big) \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\big( \\textstyle\\rho^2 + \\frac{2\\rho^4}{1-\\rho^2}\\big) \\eta^2\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber\\\\\n = &~\\textstyle \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{1+\\rho^2}{1-\\rho^2} \\rho^2 \\eta^2\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber \\\\\n \\leq &~\\textstyle \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{2\\rho^2 \\eta^2 }{1-\\rho^2} 
\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big],\n\\end{align*} \n\nwhere we have used $\\mathbf{1}^\\top (\\mathbf{W}-\\mathbf{J})=\\mathbf{0}$ in the third equality, $\\|\\mathbf{W}-\\mathbf{J}\\|_2\\leq \\rho$ in the second inequality, and Lemma \\ref{lem:prox_diff} in the third inequality, and $\\rho\\leq 1$ is used in the last inequality.\n\\end{proof}\n \n\\begin{lemma}\\label{lem:YI_J}\nLet $\\eta\\leq \\min\\{\\lambda, \\frac{1-\\rho^2}{4\\sqrt{6} \\rho L} \\} $ and $\\lambda \\leq\\frac{1}{4 L}$. The consensus error of $\\Y$ satisfies\n\\begin{align} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\frac{48\\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\mathbf{X}^{t-1}_\\perp\\|^2\\big] \\!+\\! \\frac{3\\!+\\!\\rho^2}{4} \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] \\!+\\! \\frac{12\\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t-1}-\\mathbf{X}^{t-1} \\|^2\\big] \\!+\\! 6 n\\sigma^2. \\label{eq:Y_consensus}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nBy the updates \\eqref{eq:y_half_update} and \\eqref{eq:y_update}, we have \n\\begin{align}\t \n &~ \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] = \\mathbb{E}\\big[\\|\\Y^{t-\\frac{1}{2}}(\\mathbf{W}- \\mathbf{J})\\|^2\\big] \n = \\mathbb{E}\\big[\\| \\Y^{t-1}(\\mathbf{W} -\\mathbf{J}) + (\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W} -\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n = &~ \\mathbb{E}\\big[\\|\\Y^{t-1}(\\mathbf{I}-\\mathbf{J})(\\mathbf{W} -\\mathbf{J})\\|^2\\big] + \\mathbb{E}\\big[\\|(\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W}-\\mathbf{J}) \\|^2\\big] + 2\\mathbb{E}\\big[\\langle \\Y^{t-1} (\\mathbf{W} -\\mathbf{J}), (\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W}-\\mathbf{J}) \\rangle\\big] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\rho^2 \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}\\|^2\\big] + 2\\mathbb{E}\\big[\\langle \\Y^{t-1} (\\mathbf{W} -\\mathbf{J}),(\\nabla \\mathbf{f}^t - \\nabla \\mathbf{F}^{t-1})(\\mathbf{W}-\\mathbf{J}) \\rangle\\big], \\label{eq:y_cons1}\n\\end{align} \nwhere we have used $\\mathbf{J}\\mathbf{W}=\\mathbf{J}\\J=\\mathbf{J}$, $\\|\\mathbf{W}-\\mathbf{J}\\|_2\\leq \\rho$\nand $\\mathbb{E}_t[\\nabla \\mathbf{F}^t] = \\nabla {\\mathbf{f}}^t$. \nFor the second term on the right hand side of \\eqref{eq:y_cons1}, we have\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}\\|^2\\big] = \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t+\\nabla \\mathbf{f}^t -\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber\\\\\n \\overset{\\mathbb{E}_t[\\nabla \\mathbf{F}^t] = \\nabla \\mathbf{f}^t}{=}&~\n \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t\\|^2\\big]+\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^t- \\nabla \\mathbf{f}^{t-1}+\\nabla \\mathbf{f}^{t-1}-\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber \\\\ \n\\leq &~ \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t\\|^2\\big]+2\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^t- \\nabla \\mathbf{f}^{t-1}\\|^2\\big]+2\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^{t-1}-\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber\\\\\n \\leq &~ 3 n \\sigma^2 + 2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\mathbf{X}^{t-1}\\|^2\\big]. 
\label{eq:y_cons12}
\end{align}

For the third term on the right hand side of \eqref{eq:y_cons1}, we have
\begin{align}
&~2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\ 
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] +2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 &~ +2\mathbb{E}\big[\langle (\Y^{t-2} + \nabla \mathbf{F}^{t-1} - \nabla \mathbf{F}^{t-2})\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 &~ +2\mathbb{E}\big[\langle (\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 \leq &~2\mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \|\big] \nonumber \\
 &~ +2\mathbb{E}\big[\|(\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J})\|\big] \nonumber \\
 \leq&~ 2\rho^2\mathbb{E}\big[\| \Y^{t-1}_\perp\|\cdot\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|\big] + 2\rho^2\mathbb{E}\big[\|\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1}\|^2\big] \nonumber\\
 \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4}{1-\rho^2}\mathbb{E}\big[\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|^2\big] + 2\rho^2 n \sigma^2 \nonumber \\
\leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4 L^2}{1-\rho^2} \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big]+ 2\rho^2 n \sigma^2, \label{eq:y_cons13}
\end{align}
where the second equality holds by $\mathbf{W}-\mathbf{J}=(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{J})$, \eqref{eq:y_half_update} and \eqref{eq:y_update}, the third equality holds because $\Y^{t-2}$, $\nabla \mathbf{F}^{t-2}$ and $\nabla \mathbf{f}^{t-1}$ do not depend on the $\xi_i^{t-1}$'s, 
and the second inequality holds because $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\|\mathbf{W}\|_2\leq 1$. 
\nPlugging \\eqref{eq:y_cons12} and \\eqref{eq:y_cons13} into \\eqref{eq:y_cons1}, we have\n\\begin{align} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\textstyle\\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\frac{2 \\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\| \\mathbf{X}^t - \\mathbf{X}^{t-1}\\|^2\\big] + 5 \\rho^2 n \\sigma^2 , \\label{eq:y_cons2} \n \\end{align}\nwhere we have used $1+\\frac{\\rho^2}{1-\\rho^2} = \\frac{1}{1-\\rho^2 }$.\nFor the second term in the right hand side of \\eqref{eq:y_cons2}, we have\n\\begin{align}\n&~ \\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2 = \\|\\mathbf{X}^{t+\\frac{1}{2}}\\mathbf{W}-\\mathbf{X}^t\\|^2 =\n\\|(\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t)\\mathbf{W} +(\\widehat\\mathbf{X}^t-\\mathbf{X}^t)\\mathbf{W} + \\mathbf{X}^t (\\mathbf{W}-\\mathbf{I})\\|^2 \\nonumber \\\\\n\\leq &~ 3\\|(\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t)\\mathbf{W}\\|^2 +3\\|(\\widehat\\mathbf{X}^t-\\mathbf{X}^t)\\mathbf{W}\\|^2 + 3\\|\\mathbf{X}^t(\\mathbf{I}-\\mathbf{J})(\\mathbf{W}-\\mathbf{I})\\|^2 \\nonumber \\\\\n\\leq &~ 3\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2 +3\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2 + 12\\|\\mathbf{X}^t_\\perp\\|^2,\\label{eq:Xplus1-X}\n\\end{align} \nwhere in the first inequality we have used $\\mathbf{X}^t (\\mathbf{W}-\\mathbf{I})=\\mathbf{X}^t(\\mathbf{I}-\\mathbf{J})(\\mathbf{W}-\\mathbf{I})$ from $\\mathbf{J}(\\mathbf{W}-\\mathbf{I}) = \\mathbf{J}-\\mathbf{J}$, and in the second inequality we have used $\\|\\mathbf{W}\\|_2\\leq 1$ and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$.\n\nTaking expectation over both sides of \\eqref{eq:Xplus1-X} and using \\eqref{eq:hatx_xprox}, we have\n\\begin{align*}\n&~ \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\\\\n\\le &~3 \\left( \\textstyle 4 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left( 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2\\right) +3 \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 12 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\\\\\n= &~ 3 \\textstyle \\left(2 -\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +12\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 6\\eta^2\\sigma^2 + 24\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big].\n\\end{align*}\nPlugging the inequality above into \\eqref{eq:y_cons2} gives \n\\begin{align*} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\left(\\textstyle\\frac{1+\\rho^2}{2} + \\frac{24 \\rho^2 L^2\\eta^2 }{1-\\rho^2 } \\right) \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\textstyle 5 \\rho^2 n\\sigma^2 +\\frac{12 \\rho^2 L^2 \\eta^2 \\sigma^2 }{1-\\rho^2 } \\nonumber \\\\\n &~\\textstyle + \\frac{6\\rho^2 L^2 }{1-\\rho^2 }\\left( \\textstyle 2- \\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t-1}-\\mathbf{X}^{t-1} \\|^2\\big] + \\frac{48 \\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\mathbf{X}^{t-1}_\\perp\\|^2\\big].\n\\end{align*}\nBy $\\rho<1$ and $ \\eta \\leq \\frac{1-\\rho^2}{4\\sqrt{6} \\rho L}$, we have \n$\\frac{24 \\rho^2 L^2 \\eta^2}{1-\\rho^2 } \\leq \\frac{1-\\rho^2}{4}$ and $\\frac{12 \\rho^2 L^2 \\eta^2}{1-\\rho^2 } \\leq \\frac{1-\\rho^2}{8}\\leq n$, and further \\eqref{eq:Y_consensus}.\n\\end{proof} \n\n\n\\begin{lemma}\\label{lem:weak_convex} 
\nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$. It holds\n\\begin{align}\n \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})]\n\\leq &~ \\sum_{i=1}^n \\mathbb{E}[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})] + \\frac{4}{\\lambda}\n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{4 \\eta^2}{\\lambda} \\mathbb{E}[ \\|\\Y^t_\\perp\\|^2\\big] - \\frac{\\eta}{4\\lambda^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t \\|^2\\big] + \\frac{\\eta^2\\sigma^2}{\\lambda}. \\label{eq:phi_update}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nBy the definition in \\eqref{eq:x_t_hat\n, the update in \\eqref{eq:x_1_update}, the $ L$-weakly convexity of $\\phi$, and the convexity of $\\|\\cdot\\|^2$, we have\n\\begin{align}\n&~\\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\overset{\\eqref{eq:x_t_hat}}{=} \\phi(\\widehat{\\mathbf{x}}_i^{t+1})+{\\textstyle \\frac{1}{2\\lambda} }\\|\\widehat{\\mathbf{x}}_i^{t+1}-{\\mathbf{x}}_i^{t+1}\\|^2 \\overset{\\eqref{eq:x_1_update}}{\\leq} \\phi\\bigg(\\sum_{j=1}^n\\mathbf{W}_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)+{ \\frac{1}{2\\lambda}} \\bigg\\|\\sum_{j=1}^n \\mathbf{W}_{ji}\\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2 \\nonumber \\\\\n&~\\overset{\\mbox{Lemma \\ref{lem:weak_convx}} }{\\leq} \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}) +{ \\frac{L}{2} }\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2+{ \\frac{1}{2\\lambda} }\\sum_{j=1}^n \\mathbf{W}_{ji} \\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2 \\nonumber \\\\\n&~\\leq \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) + \\frac{1}{4\\lambda} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2, \\label{eq:phi_update_0}\n\\end{align}\nwhere\nin the last inequality we use $ \\phi(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}) + \\frac{1}{2\\lambda} \\|(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2 = \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}})$, $\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\leq \\frac{1}{(1-\\lambda L)^2}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2$ from Lemma \\ref{lem:prox_diff}, $\\frac{1}{(1-\\lambda L)^2}\\leq 2$ and $ L \\leq \\frac{1}{4\\lambda}$.\nFor the first term on the right hand side of \\eqref{eq:phi_update_0}, with $\\sum_{i=1}^n \\mathbf{W}_{ji}=1$, we have\n\\begin{align}\n \\sum_{i=1}^n \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) = &~ \n \\sum_{i=1}^n \\phi_\\lambda({\\mathbf{x}}_i^{t+\\frac{1}{2}}) \n\\leq \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) + { \\frac{1}{2\\lambda}} \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - { \\frac{1}{2\\lambda}} \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2, \\label{eq:phi_lambda} \n\\end{align}\nwhere we have used\n$\n\\phi_\\lambda({\\mathbf{x}}_i^{t+\\frac{1}{2}}) \n\\leq \\phi(\\widehat{\\mathbf{x}}_i^{t})+\\frac{1}{2\\lambda} \\|\\widehat{\\mathbf{x}}_i^{t}-{\\mathbf{x}}_i^{t+\\frac{1}{2}}\\|^2$ and $\\phi_\\lambda({\\mathbf{x}}_i^{t}) = \\phi( \\widehat{\\mathbf{x}}_i^{t}) + \\frac{1}{2\\lambda} \\|\\widehat{\\mathbf{x}}_i^{t}-{\\mathbf{x}}_i^t\\|$.\nFor the second term on the right hand 
side of \\eqref{eq:phi_update_0}, with Lemma \\ref{lem:prox_diff} and \\eqref{eq:x_half_update}, we have\n\\begin{align}\n&~\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2 = \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|\\prox_{\\eta r}({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-\\prox_{\\eta r}({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n \\leq &~ \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n = &~ \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)+(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n\\leq&~ 2\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 + 2\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|(\\bar{{\\mathbf{x}}}^{t}-\\eta\\bar{{\\mathbf{y}}}^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n\\leq&~ 2\\sum_{i=1}^n\\sum_{j=1}^{n-1} \\mathbf{W}_{ji} \\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 + 2\\sum_{i=1}^n \\sum_{l=2}^n \\mathbf{W}_{li}\\|(\\bar{{\\mathbf{x}}}^{t}-\\eta\\bar{{\\mathbf{y}}}^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber \\\\\n\\leq&~4 \\sum_{j=1}^{n} \\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 \n\\leq 8 \\|\\mathbf{X}^{t}_\\perp\\|^2+ 8\\eta^2 \\|\\Y^{t}_\\perp\\|^2. \\label{eq:2_3}\n\\end{align}\nWith \\eqref{eq:phi_lambda} and \\eqref{eq:2_3}, summing up \\eqref{eq:phi_update_0} from $i=1$ to $n$ gives\n\n\\begin{align*}\n\\sum_{i=1}^n \\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\leq &~ \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) +{ \\frac{1}{2\\lambda} }\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - { \\frac{1}{2\\lambda} } \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\n +{ \\frac{2}{\\lambda} }\\left( \\|\\mathbf{X}^{t}_\\perp\\|^2+ \\eta^2 \\|\\Y^{t}_\\perp\\|^2 \\right)\n\\end{align*}\nNow taking the expectation on the above inequality and using \\eqref{eq:hatx_xprox}, we have\n\\begin{align*}\n\\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\big] \\leq &~ \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda( {\\mathbf{x}}_i^{t}) \\big] - \\frac{1}{2\\lambda} \\mathbb{E}\\big[ \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big]\n + \\frac{2}{\\lambda} \\mathbb{E}\\big[ \\|\\mathbf{X}^{t}_\\perp\\|^2+ \\eta^2 \\|\\Y^{t}_\\perp\\|^2 \\big]\\\\\n &~ \\hspace{-2cm}+\\frac{1}{2\\lambda} \\left(\\textstyle 4 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left(\\textstyle 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2 \\right).\n\\end{align*}\nCombining like terms in the inequality above gives \\eqref{eq:phi_update}.\n\\end{proof}\n\nWith Lemmas \\ref{lem:XI_J}, \\ref{lem:YI_J} and \\ref{lem:weak_convex}, we are ready to prove Theorem \\ref{thm:sec2}. 
We build the following Lyapunov function:
\begin{align*}
 \mathbf{V}^t = z_1 \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] +z_2\mathbb{E}[\|\Y^t_\perp\|^2] +z_3\sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})], 
\end{align*}
where $z_1, z_2, z_3 \geq 0$ will be determined later.

\subsection*{Proof of Theorem \ref{thm:sec2}.}
\begin{proof}
Denote
\begin{align*}
 \Phi^t = \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})],\quad \Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2],\quad 
 \Omega^t = \left(\mathbb{E}[\|\mathbf{X}^t_\perp\|^2], \mathbb{E}[\|\Y^t_\perp\|^2], \Phi^t\right)^\top.
\end{align*}
Then Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$,
where 
\begin{align*}
 \mathbf{A} = \begin{pmatrix}
 \frac{1+\rho^2}{2} &~ \frac{2\rho^2}{1-\rho^2}\eta^2 &~ 0\\
 \frac{48\rho^2 L^2 }{1-\rho^2 } &~\frac{3+\rho^2}{4} &~ 0 \\
 \frac{4}{\lambda} &~ \frac{4}{\lambda}\eta^2 &~ 1
 \end{pmatrix}, \quad 
{\mathbf{b}} = 
 \begin{pmatrix}
 0 \\
 \frac{12\rho^2 L^2 }{1-\rho^2 } \\
 - \frac{\eta}{4\lambda^2}
 \end{pmatrix}, \quad
{\mathbf{c}} = 
 \begin{pmatrix}
 0 \\
 6n \\ 
 \frac{\eta^2}{\lambda}
 \end{pmatrix}.
\end{align*}
For any ${\mathbf{z}} = (z_1, z_2, z_3)^\top \geq \mathbf{0}$, we have
\begin{align*}
 {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^{t}+ ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top)\Omega^t +{\mathbf{z}}^\top {\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2. 
\end{align*}
Take $$z_1=\frac{10}{1-\rho^2},\ z_2=\left(\frac{80\rho^2}{(1-\rho^2)^3} + \frac{16}{1-\rho^2}\right)\eta^2,\ z_3 = \lambda.$$ We have
$
{\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top = \begin{pmatrix}
 \frac{48\rho^2 L^2 }{1-\rho^2 }z_2-1,
 0, 
 0
 \end{pmatrix}.
$
Note $z_2 \leq \frac{96}{(1-\rho^2)^3}\eta^2$. Thus
\begin{align*}
{\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq \begin{pmatrix} \textstyle
 \frac{4608\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2-1,
 0, 
 0
 \end{pmatrix}, \ 
{\mathbf{z}}^\top{\mathbf{b}} \leq \textstyle \frac{1152\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2 - \frac{\eta}{4\lambda}, \ 
{\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \Big( \textstyle \frac{576n }{(1-\rho^2)^3} + 1\Big)\eta^2 \leq \frac{577n}{(1-\rho^2)^3} \eta^2.
\end{align*}

With $\eta\leq \frac{(1-\rho^2)^4}{96\rho L}$ and $\lambda \leq \frac{1}{96\rho L}$, we have
$ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0 )^\top$ and 
$
{\mathbf{z}}^\top{\mathbf{b}} \leq 
\left(12\rho L - \frac{1}{8\lambda}\right)\eta - \frac{\eta}{8\lambda}
\leq -\frac{\eta}{8\lambda}$. 
Thus 
\begin{align}
 {\mathbf{z}}^\top \Omega^{t+1} \leq \textstyle {\mathbf{z}}^\top \Omega^{t} -\frac{1}{2}\mathbb{E}[\|\mathbf{X}^t_\perp\|^2] -\frac{\eta}{8\lambda} \Omega_0^t + \frac{577n}{(1-\rho^2)^3} \eta^2 \sigma^2.\label{eq:l_fun}
\end{align}
Hence, summing up \eqref{eq:l_fun} for $t=0,1,\ldots,T-1$ gives 
\begin{align}\label{eq:avg-Omega}
 \frac{1}{\lambda T}\sum_{t=0}^{T-1} \Omega_0^t +\frac{4}{\eta T}\sum_{t=0}^{T-1} \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] \leq \textstyle \frac{8}{\eta T} \left({\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T}\right) + \frac{4616n}{(1-\rho^2)^3} \eta\sigma^2.
\end{align}
From ${\mathbf{y}}_i^{-1} =\mathbf{0}, \nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) = \mathbf{0}, {\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$, we have 
\begin{align}
 \|\mathbf{X}^0_\perp\|^2 = 0, \quad \|\Y^0_\perp\|^2 = \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2, \quad \Phi^0=n \phi_\lambda({\mathbf{x}}^0). \label{eq:initial_thm2}
\end{align}
From Assumption \ref{assu:prob}, $\phi$ is lower bounded and thus $\phi_\lambda$ is also lower bounded, i.e., there is a constant $\phi_\lambda^*$ satisfying $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}}) > -\infty$. Thus 
\begin{align}
 \Phi^T \geq n \phi_\lambda^*.\label{eq:end_thm2}
\end{align}
With \eqref{eq:initial_thm2}, \eqref{eq:end_thm2}, and the nonnegativity of $\mathbb{E}[\|\mathbf{X}^T_\perp\|^2]$ and $\mathbb{E}[\|\Y^T_\perp\|^2]$, we have
\begin{align}
\textstyle
{\mathbf{z}}^\top \Omega^0 - {\mathbf{z}}^\top \Omega^{T} \le
\frac{96 \eta^2}{(1-\rho^2)^3} \mathbb{E}[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2] + \lambda n \phi_\lambda({\mathbf{x}}^0) -\lambda n \phi_\lambda^*. \label{eq:Omega0_OmegaT}
\end{align}
By the convexity of the Frobenius norm and \eqref{eq:Omega0_OmegaT}, we obtain from \eqref{eq:avg-Omega} that
\begin{align*} 
 &~ \frac{1}{\lambda^2n} \mathbb{E}\big[\|\widehat\mathbf{X}^{\tau}-\mathbf{X}^{\tau}\|^2\big] +\frac{4}{n \lambda \eta}\mathbb{E}\big[\|\mathbf{X}^\tau_\perp\|^2\big] \leq \frac{1}{\lambda^2n T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2\big] +\frac{4}{n \lambda \eta T}\sum_{t=0}^{T-1} \mathbb{E}\big[\|\mathbf{X}^t_\perp\|^2\big] \nonumber \\ 
 \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3}.
\end{align*}
Noting $\|\nabla \phi_\lambda ({\mathbf{x}}_i^\tau)\|^2 = \frac{\|{\mathbf{x}}_i^\tau-\widehat{\mathbf{x}}_i^\tau\|^2}{\lambda^{2}}$ from Lemma \ref{lem:xhat_x}, we finish the proof.
\end{proof}
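As a sanity check on the proof above, the following snippet verifies numerically that the chosen weights ${\mathbf{z}}$ satisfy ${\mathbf{z}}^\top\mathbf{A}-{\mathbf{z}}^\top\leq (-\frac{1}{2},0,0)$ and ${\mathbf{z}}^\top{\mathbf{b}}\leq -\frac{\eta}{8\lambda}$. The values $\rho=0.9$ and $L=2$ are arbitrary test values assumed for this sketch, with $\eta$ and $\lambda$ set to their caps in the proof.
\begin{verbatim}
import numpy as np

# Arbitrary test values for rho and L; eta and lambda at their caps.
rho, L = 0.9, 2.0
r2 = 1.0 - rho**2
eta = r2**4 / (96 * rho * L)
lam = min(1.0 / (4 * L), 1.0 / (96 * rho * L))

# The matrices A, b and the weights z from the proof of Theorem thm:sec2.
A = np.array([[(1 + rho**2) / 2,        2 * rho**2 / r2 * eta**2, 0.0],
              [48 * rho**2 * L**2 / r2, (3 + rho**2) / 4,         0.0],
              [4 / lam,                 4 / lam * eta**2,         1.0]])
b = np.array([0.0, 12 * rho**2 * L**2 / r2, -eta / (4 * lam**2)])
z = np.array([10 / r2, (80 * rho**2 / r2**3 + 16 / r2) * eta**2, lam])

gap = z @ A - z
print(gap[0] <= -0.5, abs(gap[1]) < 1e-12, abs(gap[2]) < 1e-12)  # True True True
print(z @ b <= -eta / (8 * lam))                                 # True
\end{verbatim}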
\section{Convergence Analysis}

In this section, we analyze the convergence of the algorithms proposed in section~\ref{sec:alg}. The nonconvexity of the problem and the stochasticity of the algorithms both complicate the analysis, and the presence of the nonsmooth regularizer $r(\cdot)$ poses additional challenges.
To address these challenges, we employ the Moreau envelope \cite{MR201952}, a tool that has been commonly used for analyzing methods for solving nonsmooth weakly convex problems.

\begin{definition}[Moreau envelope] Let $\psi$ be an $L$-weakly convex function, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex. For $\lambda\in(0,\frac{1}{L})$, the Moreau envelope of $\psi$ is defined as
\vspace{-0.2cm}
 \begin{equation*} 
\psi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}} \textstyle \left\{\psi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}, \vspace{-0.2cm}
\end{equation*} 
and the unique minimizer is denoted as
\vspace{-0.2cm}
\begin{equation*}
 \prox_{\lambda \psi}({\mathbf{x}})= \argmin_{{\mathbf{y}}} \textstyle \left\{\psi({\mathbf{y}})+\frac{1}{2\lambda} \|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}.\vspace{-0.2cm}
 \end{equation*} 
\end{definition}

The Moreau envelope $\psi_\lambda$ has several nice properties; the result below can be found in \cite{davis2019stochastic, nazari2020adaptive, xu2022distributed-SsGM}.

\begin{lemma}\label{lem:xhat_x}
 If a function $\psi$ is $L$-weakly convex, then for any $\lambda \in (0, \frac{1}{L})$, the Moreau envelope $\psi_\lambda$ is smooth with gradient given by 
$\nabla \psi_\lambda ({\mathbf{x}}) = \lambda^{-1} ({\mathbf{x}}-\widehat{\mathbf{x}}),$
 where $\widehat{\mathbf{x}}=\prox_{\lambda\psi}({\mathbf{x}})$. 
Moreover, \vspace{-0.2cm}
\[
\|{\mathbf{x}}-\widehat{\mathbf{x}}\|=\lambda\|\nabla \psi_\lambda({\mathbf{x}})\|, \quad
\dist(\mathbf{0}, \partial \psi(\widehat {\mathbf{x}}))\leq \|\nabla \psi_\lambda({\mathbf{x}})\|.\vspace{-0.2cm}
\]
\end{lemma}

Lemma~\ref{lem:xhat_x} implies that if $\|\nabla \psi_\lambda({\mathbf{x}})\|$ is small, then $\widehat{\mathbf{x}}$ is a near-stationary point of $\psi$ and ${\mathbf{x}}$ is close to $\widehat{\mathbf{x}}$. Hence, $\|\nabla \psi_\lambda({\mathbf{x}})\|$ can be used as a valid measure of stationarity violation at ${\mathbf{x}}$ for $\psi$. Based on this observation, we define the $\epsilon$-stationary solution below for the decentralized problem \eqref{eq:decentralized_problem}.

\begin{definition}[Expected $\epsilon$-stationary solution]\label{def:eps-sol} Let $\epsilon > 0$. A point $\mathbf{X} = [{\mathbf{x}}_1, \ldots, {\mathbf{x}}_n]$ is called an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} if for a constant $\lambda\in (0, \frac{1}{L})$,
 \vspace{-0.1cm}
\begin{equation*}
 \textstyle\frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla \phi_\lambda({\mathbf{x}}_i)\|^2 + L^2 \|\mathbf{X}_\perp\|^2\right] \leq \epsilon^2.
 \vspace{-0.1cm}
\end{equation*}
\end{definition} 

In the definition above, the factor $L^2$ in front of the consensus error term $\|\mathbf{X}_\perp\|^2$ balances the two terms. This scaling scheme has also been used in existing works such as \cite{xin2021stochastic,mancino2022proximal,DBLP:journals/corr/abs-2202-00255}. 
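As a minimal numerical illustration of Lemma~\ref{lem:xhat_x} (with arbitrary test numbers, which are assumptions of this sketch only), one can take $\psi=\|\cdot\|_1$, which is convex and hence $0$-weakly convex: its proximal mapping is soft-thresholding and its Moreau envelope is the coordinate-wise Huber function, so the gradient formula $\nabla \psi_\lambda ({\mathbf{x}}) = \lambda^{-1} ({\mathbf{x}}-\widehat{\mathbf{x}})$ can be checked against finite differences.
\begin{verbatim}
import numpy as np

lam = 0.5                          # any lambda in (0, 1/L); here L = 0
x = np.array([1.5, -0.2, 0.7])     # arbitrary test point

def prox_l1(v, lam):
    # proximal mapping of ||.||_1: entry-wise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def env_l1(v, lam):
    # Moreau envelope of ||.||_1: the coordinate-wise Huber function
    return np.where(np.abs(v) <= lam, v**2 / (2 * lam),
                    np.abs(v) - lam / 2).sum()

x_hat = prox_l1(x, lam)
g_lemma = (x - x_hat) / lam        # gradient predicted by the lemma
eps = 1e-6
g_fd = np.array([(env_l1(x + eps * e, lam) - env_l1(x - eps * e, lam))
                 / (2 * eps) for e in np.eye(x.size)])
print(np.allclose(g_lemma, g_fd, atol=1e-5))   # True
\end{verbatim}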
From Definition~\ref{def:eps-sol}, we see that if $\mathbf{X}$ is an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem}, then in expectation each local solution ${\mathbf{x}}_i$ is a near-stationary point of $\phi$, and in addition the local solutions are all close to one another, i.e., they are near consensus.

Below we first state the convergence results of the non-compressed method DProxSGT
and then those of the compressed method CDProxSGT.
All the proofs are given in the appendix.
 
\begin{theorem}[Convergence rate of DProxSGT]\label{thm:sec2}
Under Assumptions \ref{assu:prob} -- \ref{assu:stoc_grad}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{DProxSGT}$ in Algorithm~\ref{alg:DProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min\big\{\frac{1}{4 L}, \frac{1}{96\rho L}\big\}$ and $\eta\leq \min\big\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}\big\}$. 
Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random.
Then
\vspace{-0.1cm}
\begin{equation*}
\begin{aligned} 
 &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\
 \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3},
\end{aligned}
\vspace{-0.1cm}
\end{equation*}
where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$.
\end{theorem}
 

By Theorem~\ref{thm:sec2}, we obtain a complexity result as follows.

 
\begin{corollary}[Iteration complexity]
Under the assumptions of Theorem~\ref{thm:sec2}, 
for a given $\epsilon>0$, take $\eta = \min\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}, \frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}\}$. Then $\mathrm{DProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T \geq T_\epsilon = \left\lceil \frac{16\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta \epsilon^2 } + \frac{1536\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda (1-\rho^2)^3 \epsilon^2} \right\rceil$. 
\end{corollary}
 
 
\begin{remark} \label{remark:DProxSGT}
When $\epsilon$ is small enough, $\eta$ equals $\frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}$, and $T_\epsilon$ is dominated by its first term. 
In this case, DProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{\lambda(1-\rho^2)^3 \epsilon^4}\Big)$ iterations, leading to the same number of stochastic gradient samples
and communication rounds. Our sample complexity is optimal in terms of the dependence on $\epsilon$ under the smoothness condition in Assumption~\ref{assu:prob}, as it matches the lower bound in \cite{arjevani2022lower}.
However, the dependence on $1-\rho$ may not be optimal because of our possibly loose analysis, as the \emph{deterministic} method in \cite{scutari2019distributed} for nonconvex nonsmooth problems, with a single communication per update, has a dependence of $(1-\rho)^2$ on the graph topology.
\end{remark}


\begin{theorem}[Convergence rate of CDProxSGT] \label{thm:sect3thm}
 Under Assumptions \ref{assu:prob} -- \ref{assu:compressor},
 let $\{\mathbf{X}^t\}$ be generated from $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT} with
 ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min \big\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}\big\}$, and suppose
\vspace{-0.1cm}
\begin{gather*}
 \eta \leq \min\left\{ \textstyle \lambda,
 \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right\}, \\
\gamma_x \leq \min\left\{ \textstyle
\frac{1-\alpha^2}{25}, \frac{\eta}{\alpha}
\right\}, \quad 
 \gamma_y \leq \textstyle 
 \frac{(1-\alpha^2)(1-\widehat\rho^2_x)(1-\widehat\rho^2_y)}{317}.
\end{gather*}
\vspace{-0.1cm}
Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random.
Then
\vspace{-0.1cm}
\begin{equation*}
 \begin{aligned}
 &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\
 \leq &~\textstyle \frac{8\left(\phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{\eta T} 
 +\frac{(50096n+48)\eta \sigma^2}{n\lambda(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)} + \frac{4176 \eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0\|^2\right] }{n\lambda T (1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)},
\end{aligned}
\vspace{-0.1cm}
\end{equation*}
where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$.
\end{theorem}

By Theorem~\ref{thm:sect3thm}, we have the complexity result as follows.

\begin{corollary}[Iteration complexity]
Under the assumptions of Theorem \ref{thm:sect3thm}, for a given $\epsilon>0$, take 
\begin{gather*}
 \eta = \textstyle \min \left\{\frac{1}{4 L}, \frac{ (1-\alpha^2)^2}{9 L+41280}, \frac{(1-\alpha^2)^2(1-\widehat\rho^2_x)^2(1-\widehat\rho^2_y)^2}{18830\max\{1, L\}}\right.,\\ \textstyle \left. 
\\frac{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2}{2(50096n+48) \\sigma^2}\\right\\}, \\\\\n \\textstyle \\gamma_x = \\min\\left\\{ \\textstyle \n\\frac{1-\\alpha^2}{25}, \\frac{\\eta}{\\alpha}\\right\\}, \\quad\n \\gamma_y = \\frac{(1-\\alpha^2)(1-\\widehat\\rho^2_x)(1-\\widehat\\rho^2_y)}{317}.\n\\end{gather*}\nThen $\\mathrm{CDProxSGT}$ can find an expected $\\epsilon$-stationary point of \\eqref{eq:decentralized_problem} when $T\\geq T_\\epsilon^c$ where \n\\begin{align*} \n T_\\epsilon^c = \\textstyle \\left\\lceil\\frac{16\\left(\\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{\\eta \\epsilon^2} + \\frac{8352 \\eta \\mathbb{E}\\left[ \\|\\nabla \\mathbf{F}^0\\|^2\\right] }{n\\lambda (1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2} \\right\\rceil.\n\\end{align*}\n \n\n\n\\end{corollary}\n\n\\begin{remark}\nWhen the given tolerance $\\epsilon$ is small enough,\n$\\eta$ will take $\\frac{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2}{2(50096n+48) \\sigma^2}$ and $T_\\epsilon^c$ will be dominated by the first term. In this case, similar to DProxSGT in Remark \\ref{remark:DProxSGT}, CDProxSGT can find an expected $\\epsilon$-stationary solution of \\eqref{eq:decentralized_problem} in $O\\Big( \\frac{\\sigma^2\\left( \\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right) }{ \\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y) \\epsilon^4}\\Big)$ iterations.\n\\end{remark} \n\\section{Decentralized Algorithms}\\label{sec:alg}\n\nIn this section, we give our decentralized algorithms for solving \\eqref{eq:decentralized_problem} or equivalently \\eqref{eq:problem_original}. To perform neighbor communications, we introduce a mixing (or gossip) matrix $\\mathbf{W}$ that satisfies the following standard assumption.\n\\begin{assumption}[Mixing matrix] \\label{assu:mix_matrix} We choose a mixing matrix $\\mathbf{W}$ such that\n\\vspace{-1.5mm}\n\\begin{enumerate}\n\\item [(i)] $\\mathbf{W}$ is doubly stochastic: $\\mathbf{W}\\mathbf{1} = \\mathbf{1}$ and $\\mathbf{1}^\\top \\mathbf{W} = \\mathbf{1}^\\top$;\n\\item [(ii)] $\\mathbf{W}_{ij} = 0$ if $i$ and $j$ are not neighbors to each other;\n\\item [(iii)] $\\mathrm{Null}(\\mathbf{W}-\\mathbf{I}) = \\mathrm{span}\\{\\mathbf{1}\\}$ and $\\rho \\triangleq \\|\\mathbf{W} - \\mathbf{J}\\|_2 < 1$.\n\\end{enumerate}\n\\vspace{-2mm}\n\\end{assumption}\nThe condition in (ii) above is enforced so that \\emph{direct} communications can be made only if two nodes (or workers) are immediate (or 1-hop) neighbors of each other. The condition in (iii) can hold if the graph $\\mathcal{G}$ is connected. The assumption $\\rho < 1$ is critical to ensure contraction of consensus error. \n\nThe value of $\\rho$ depends on the graph topology. \n\\cite{koloskova2019decentralized} gives three commonly used examples: when uniform weights are used between nodes, $\\mathbf{W} = \\mathbf{J}$ and $\\rho = 0$ for a fully-connected graph (in which case, our algorithms will reduce to centralized methods), $1-\\rho = \\Theta(\\frac{1}{n})$ for a 2d torus grid graph where every node has 4 neighbors, and $1-\\rho = \\Theta(\\frac{1}{n^2})$ for a ring-structured graph.\nMore examples can be found in \\cite{nedic2018network}.\n\n\n\n\n\\subsection{Non-compreseed Method}\n\nWith the mixing matrix $\\mathbf{W}$, we propose a decentralized proximal stochastic gradient method with gradient tracking (DProxSGT) for \\eqref{eq:decentralized_problem}. The pseudocode is shown in Algorithm~\\ref{alg:DProxSGT}. 
In every iteration $t$, each node $i$ first computes a local stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$ by taking a sample $\xi_i^{t}$ from its local data distribution $\mathcal{D}_i$, then performs gradient tracking in \eqref{eq:y_half_update} and neighbor communication of the tracked gradient in \eqref{eq:y_update}, and finally takes a proximal gradient step in \eqref{eq:x_half_update} and mixes the model parameter with its neighbors in \eqref{eq:x_1_update}.

\begin{algorithm}[tb]
 \caption{DProxSGT}\label{alg:DProxSGT}
\begin{algorithmic}
 \small{ \STATE Initialize ${\mathbf{x}}_i^{0}$ and set ${\mathbf{y}}_i^{-1}=\mathbf{0}$, $\nabla F_i({\mathbf{x}}_i^{-1},\xi_i^{-1}) =\mathbf{0}$, $\forall i\in\mathcal{N}$.
 \FOR{$t=0, 1, 2, \ldots, T-1$} 
 \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:}
 \STATE obtain one random sample $\xi_i^t$, compute a stochastic gradient $\nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t})$, and perform 
 \vspace{-0.2cm}
 \begin{gather}
 	{\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:y_half_update}
 	\\
 	{\mathbf{y}}_i^t = \textstyle \sum_{j=1}^n \mathbf{W}_{ji} {\mathbf{y}}_j^{t-\frac{1}{2}},\label{eq:y_update}\\
 	{\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right),\label{eq:x_half_update}
 	\\
 	{\mathbf{x}}_i^{t+1} = \textstyle \sum_{j=1}^n \mathbf{W}_{ji}{\mathbf{x}}_j^{t+\frac{1}{2}}. \label{eq:x_1_update}
 \vspace{-0.2cm} 
 	\end{gather} 
 \ENDFOR}
\end{algorithmic}
\end{algorithm}
\vspace{-0.1mm}
 

Note that, for simplicity, we take only one random sample $\xi_i^{t}$ in Algorithm \ref{alg:DProxSGT}, but in general a mini-batch of random samples can be taken, and all the theoretical results that we establish in the next section still hold. We emphasize that we need only $\mathcal{O}(1)$ samples for each update. This is different from ProxGT-SA in \cite{xin2021stochastic}, which shares a similar update formula with our algorithm but needs a very big batch of samples, as many as $\mathcal{O}(\frac{1}{\epsilon^2})$, where $\epsilon$ is a target tolerance. Small-batch training can usually generalize better than big-batch training \cite{lecun2012efficient, keskar2016large} on large-scale deep learning models.
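To make the updates concrete, the following self-contained numpy sketch (ours, purely illustrative and not the experimental code of Section~\ref{sec:numerical_experiments}) builds the uniform-weight ring mixing matrix discussed above, checks $\rho = \|\mathbf{W}-\mathbf{J}\|_2 < 1$, and runs updates \eqref{eq:y_half_update}--\eqref{eq:x_1_update} on a toy $\ell_1$-regularized least-squares problem with heterogeneous local data; all problem parameters and names below are our own choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, T, eta, mu = 5, 10, 300, 0.05, 0.01

# Uniform-weight ring: each node mixes itself and its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
J = np.ones((n, n)) / n
print("rho =", np.linalg.norm(W - J, 2))       # spectral norm, < 1 on a ring

# Heterogeneous local objectives f_i(x) = 0.5*||A_i x - b_i||^2, r(x) = mu*||x||_1.
A = rng.normal(size=(n, d, d)) / np.sqrt(d)
b = rng.normal(size=(n, d))
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i]) + 0.01 * rng.normal(size=d)
prox = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - s, 0.0)

X = np.tile(rng.normal(size=d), (n, 1))        # rows are local copies; x_i^0 = x^0
Y, G_old = np.zeros((n, d)), np.zeros((n, d))  # y^{-1} = 0, grad^{-1} = 0
for t in range(T):
    G = np.stack([grad(i, X[i]) for i in range(n)])
    Y = W.T @ (Y + G - G_old)                  # gradient tracking, then mixing
    X = W.T @ prox(X - eta * Y, eta * mu)      # proximal step, then mixing
    G_old = G
print("consensus error:", np.sum((X - X.mean(axis=0)) ** 2))
\end{verbatim}

On this toy instance, the consensus error decays to a small value controlled by $\eta$ and the gradient noise, consistent with the role of $\|\mathbf{X}_\perp\|^2$ in Theorem~\ref{thm:sec2}.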
Throughout the paper, we make the following standard assumption on the stochastic gradients.

\begin{assumption}[Stochastic gradients] \label{assu:stoc_grad}
We assume that 
\vspace{-1.5mm}
\begin{itemize}
 \item[(i)] The random samples $\{\xi_i^t\}_{i\in \mathcal{N}, t\ge 0}$ are independent.
 \item[(ii)]
 There exists a finite number $\sigma\ge0$ such that for any $i\in \mathcal{N}$ and ${\mathbf{x}}_i\in\dom(r)$,
 \begin{gather*} 
 \mathbb{E}_{\xi_i}[\nabla F_i({\mathbf{x}}_i,\xi_i)] = \nabla f_i({\mathbf{x}}_i),\\ 
 \mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}_i,\xi_i)-\nabla f_i({\mathbf{x}}_i)\|^2] \leq \sigma^2.
 \end{gather*} 
\end{itemize}
\vspace{-2mm}
\end{assumption}
 

The gradient tracking step in \eqref{eq:y_half_update}
is critical to handle heterogeneous data 
\cite{di2016next,nedic2017achieving,lu2019gnsd,pu2020distributed,sun2020improving,xin2021stochastic,song2021optimal,mancino2022proximal,zhao2022beer,DBLP:journals/corr/abs-2202-00255,song2022compressed}.
In a deterministic scenario where $\nabla f_i(\cdot)$ is used instead of $\nabla F_i(\cdot, \xi)$, for each $i$, the tracked gradient ${\mathbf{y}}_i^t$ can converge to the gradient of the global function $\frac{1}{n}\sum_{i=1}^n f_i(\cdot)$ at $\bar{\mathbf{x}}^t$, and thus all local updates move in a direction that minimizes the \emph{global} objective. When stochastic gradients are used, gradient tracking plays a similar role and makes ${\mathbf{y}}_i^t$ approach the stochastic gradient of the global function. 
With this nice property of gradient tracking, we can guarantee convergence without strong assumptions made in existing works, such as bounded gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized, singh2021squarm} and bounded data similarity over nodes \cite{lian2017can, tang2018communication, tang2019deepsqueeze, vogels2020practical, wang2021error}.
 


\subsection{Compressed Method}
In DProxSGT, each worker needs to communicate both the model parameter and the tracked stochastic gradient with its neighbors at every iteration. Communication has become a bottleneck for distributed training on GPUs. To reduce the communication cost, we further propose a compressed version of DProxSGT, named CDProxSGT.
The pseudocode is shown in Algorithm \ref{alg:CDProxSGT}, where $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are two compression operators.

\begin{algorithm}[tb]
\caption{CDProxSGT}\label{alg:CDProxSGT}
\begin{algorithmic}
 \small{\STATE Initialize ${\mathbf{x}}_i^{0}$; set 
 ${\mathbf{y}}_i^{-1}=\underline{\mathbf{y}}_i^{-1}=\nabla F_i({\mathbf{x}}_i^{-1}, \xi_i^{-1})=\underline{\mathbf{x}}_i^{0} =\mathbf{0}$, $\forall i\in\mathcal{N}$.
 \FOR{$t=0, 1, 2, \ldots, T-1$}
 \STATE \hspace{-0.1cm}\textbf{all} nodes $i=1, 2, \ldots, n$ do the updates \textbf{in parallel:}
 \vspace{-0.2cm}
 \begin{gather}
 	{\mathbf{y}}_i^{t-\frac{1}{2}} = {\mathbf{y}}_i^{t-1} + \nabla F_i({\mathbf{x}}_i^{t},\xi_i^{t}) - \nabla F_i({\mathbf{x}}_i^{t-1},\xi_i^{t-1}),\label{eq:alg3_1}\\
 \underline{\mathbf{y}}_i^{t} = \underline{\mathbf{y}}_i^{t-1} + Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big], \label{eq:alg3_2}\\
 {\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \left(\textstyle \sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}-\underline{\mathbf{y}}_i^{t}\right), \label{eq:alg3_3}\\
 	{\mathbf{x}}_i^{t+\frac{1}{2}} =\prox_{\eta r} \left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \label{eq:alg3_4}\\
 	\underline{\mathbf{x}}_i^{t+1} = \underline{\mathbf{x}}_i^{t} + Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big], \label{eq:alg3_5}\\
 	{\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}+\gamma_x\Big(\textstyle \sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t+1}-\underline{\mathbf{x}}_i^{t+1}\Big).\label{eq:alg3_6}
 \vspace{-0.2cm}
 \end{gather} 
 \ENDFOR}
\end{algorithmic}
\end{algorithm}
\vspace{-0.1mm}

 

In Algorithm \ref{alg:CDProxSGT}, each node communicates the non-compressed vectors $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ with its neighbors in \eqref{eq:alg3_3} and \eqref{eq:alg3_6}. We write it in this way for ease of reading and analysis. For an efficient and \emph{equivalent} implementation,
we do not communicate 
$\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ directly but rather the compressed residuals $Q_{\mathbf{y}}\big[{\mathbf{y}}_i^{t-\frac{1}{2}} - \underline{\mathbf{y}}_i^{t-1}\big]$ and $Q_{\mathbf{x}}\big[{\mathbf{x}}_i^{t+\frac{1}{2}} - \underline{\mathbf{x}}_i^{t}\big]$, as explained next. 
Besides ${\mathbf{y}}_i^{t-1}$, ${\mathbf{x}}_i^{t}$, $\underline{\mathbf{y}}_i^{t-1}$, and $\underline{\mathbf{x}}_i^{t}$, each node also stores ${\mathbf{z}}_i^{t-1}$ and ${\mathbf{s}}_i^{t}$, which record $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t-1}$ and $\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{x}}_j^{t}$.
For the gradient communication, each node $i$ initializes ${\mathbf{z}}_i^{-1} = \mathbf{0}$, and then at each iteration $t$, 
after receiving $Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big]$ from its neighbors, it updates $\underline{\mathbf{y}}_i^{t}$ by \eqref{eq:alg3_2}, and ${\mathbf{z}}_i^{t}$ and ${\mathbf{y}}_i^t$ by 
\vspace{-0.2cm} 
\begin{align*}
 {\mathbf{z}}_i^{t} =&~ \textstyle {\mathbf{z}}_i^{t-1} + \sum_{j=1}^n \mathbf{W}_{ji} Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big], \\
{\mathbf{y}}_i^{t} =&~ \textstyle {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \big({\mathbf{z}}_i^{t}-\underline{\mathbf{y}}_i^{t}\big).\vspace{-0.2cm}
\end{align*}
From the initialization and the updates of $\underline{\mathbf{y}}_i^{t}$ and ${\mathbf{z}}_i^{t}$, 
it always holds that
${\mathbf{z}}_i^{t}=\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}$.
The model communication can be done efficiently in the same way.


The compression operators $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ in Algorithm \ref{alg:CDProxSGT} can be different, but we assume that they both satisfy the following assumption.
\begin{assumption} \label{assu:compressor}
There exists $\alpha \in [0,1)$ such that 
$$\mathbb{E}[\|{\mathbf{x}}-Q[{\mathbf{x}}]\|^2]\leq \alpha^2 \|{\mathbf{x}}\|^2, \forall\, {\mathbf{x}}\in\mathbb{R}^d,$$ 
for both $Q=Q_{\mathbf{x}}$ and $Q=Q_{\mathbf{y}}$.
\end{assumption}
This assumption on the compression operators is standard and is also made in \cite{koloskova2019decentralized-b,koloskova2019decentralized,zhao2022beer}. It is satisfied by sparsification operators such as Random-$k$ \cite{stich2018sparsified} and Top-$k$ \cite{aji2017sparse}. It can also be satisfied by rescaled quantizations. For example, QSGD \cite{alistarh2017qsgd} compresses ${\mathbf{x}}\in \mathbb{R}^d$ by $Q_{qsgd}({\mathbf{x}}) = \frac{\mathbf{sign}({\mathbf{x}})\|{\mathbf{x}}\|}{s} \lfloor s \frac{|{\mathbf{x}}|}{\|{\mathbf{x}}\|} + \xi \rfloor$,
where $\xi$ is uniformly distributed on $[0,1]^d$ and $s$ is a parameter controlling the compression level. Then $Q({\mathbf{x}})= \frac{1}{\tau} Q_{qsgd} ({\mathbf{x}})$ with $\tau=1+\min\{d/s^2, \sqrt{d}/s\}$ satisfies Assumption \ref{assu:compressor} with $\alpha^2=1-\frac{1}{\tau}$. More examples can be found in \cite{koloskova2019decentralized}.
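As a quick sanity check of Assumption~\ref{assu:compressor}, the following sketch (ours, for illustration only) implements Top-$k$ and the rescaled QSGD compressor described above and estimates the contraction ratio $\|{\mathbf{x}}-Q[{\mathbf{x}}]\|^2/\|{\mathbf{x}}\|^2$. For Top-$k$ the bound holds deterministically with $\alpha^2 = 1-k/d$, while for the rescaled QSGD it holds in expectation with $\alpha^2=1-\frac{1}{\tau}$; the dimensions and parameter values are arbitrary choices of ours.

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

def top_k(x, k):
    # Keep the k largest-magnitude entries, zero out the rest.
    q = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    q[keep] = x[keep]
    return q

def rescaled_qsgd(x, s):
    # QSGD with random dithering, rescaled by tau = 1 + min(d/s^2, sqrt(d)/s).
    d, nrm = x.size, np.linalg.norm(x)
    if nrm == 0.0:
        return x.copy()
    q = np.sign(x) * (nrm / s) * np.floor(s * np.abs(x) / nrm + rng.uniform(size=d))
    return q / (1 + min(d / s**2, np.sqrt(d) / s))

d, k, s = 100, 30, 4
r_tk, r_qs = [], []
for _ in range(1000):
    x = rng.normal(size=d)
    r_tk.append(np.sum((x - top_k(x, k)) ** 2) / np.sum(x ** 2))
    r_qs.append(np.sum((x - rescaled_qsgd(x, s)) ** 2) / np.sum(x ** 2))
tau = 1 + min(d / s**2, np.sqrt(d) / s)
print("Top-k : worst ratio", max(r_tk), "<= 1 - k/d =", 1 - k / d)
print("QSGD  : mean  ratio", np.mean(r_qs), "<= 1 - 1/tau =", 1 - 1 / tau)
\end{verbatim}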
Below, we make a couple of remarks on the relations between Algorithm \ref{alg:DProxSGT} and Algorithm \ref{alg:CDProxSGT}.

\begin{remark}
When $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ in Algorithm \ref{alg:CDProxSGT} are both identity operators, i.e., $Q_{\mathbf{x}}[{\mathbf{x}}] = {\mathbf{x}}, Q_{\mathbf{y}}[{\mathbf{y}}] = {\mathbf{y}}$, and $\gamma_x=\gamma_y=1$, CDProxSGT reduces to DProxSGT. Hence, the latter can be viewed as a special case of the former. However, we will analyze them separately. Although the big-batch training method ProxGT-SA in \cite{xin2021stochastic} shares a similar update with the proposed DProxSGT, our analysis is completely different and new, as we need only $\mathcal{O}(1)$ samples in each iteration in order to achieve better generalization performance. The analysis of CDProxSGT builds on that of DProxSGT by carefully controlling the variance error of the stochastic gradients and the consensus error, as well as the additional compression error.
\end{remark}


\begin{remark}
When $Q_{\mathbf{y}}$ and $Q_{\mathbf{x}}$ are identity operators, $\underline{\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}}$ and $\underline{\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}$ for each $i\in\mathcal{N}$. Hence, in the compression case, $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ can be viewed as estimates of ${\mathbf{y}}_i^{t-\frac{1}{2}}$ and ${\mathbf{x}}_i^{t+\frac{1}{2}}$.
In addition, in matrix form, we have from \eqref{eq:alg3_3} and \eqref{eq:alg3_6} that
 \begin{align} 
 \Y^{t+1}
 =&~ \Y^{t+\frac{1}{2}}\widehat\mathbf{W}_y + \gamma_y\big(\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \label{eq:Y_hatW}\\
 \mathbf{X}^{t+1}
 =&~ \mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I}), \label{eq:compX_hatW}
\end{align}
where 
$\widehat\mathbf{W}_y = \gamma_y \mathbf{W} + (1-\gamma_y)\mathbf{I},\ \widehat\mathbf{W}_x = \gamma_x \mathbf{W} + (1-\gamma_x)\mathbf{I}.$
When $\mathbf{W}$ satisfies conditions (i)-(iii) in Assumption~\ref{assu:mix_matrix}, it can be easily shown that $\widehat\mathbf{W}_y$ and $\widehat\mathbf{W}_x$ also satisfy all three conditions. In particular, we have
$$\widehat\rho_x \triangleq \|\widehat\mathbf{W}_x - \mathbf{J}\|_2 < 1,\quad 
\widehat\rho_y \triangleq \|\widehat\mathbf{W}_y - \mathbf{J}\|_2 < 1.$$
Thus we can view $\Y^{t+1}$ and $\mathbf{X}^{t+1}$ as the results of one round of neighbor communication on $\Y^{t+\frac{1}{2}}$ and $\mathbf{X}^{t+\frac{1}{2}}$ with mixing matrices $\widehat{\mathbf{W}}_y$ and $\widehat{\mathbf{W}}_x$, plus the estimation errors $\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}$ and $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$ passed through one round of neighbor communication.
\end{remark}


\section{Introduction} 
In this paper, we consider solving nonconvex stochastic composite problems in a decentralized setting:
 \vspace{-0.1cm}
\begin{equation}\label{eq:problem_original}
 \begin{aligned}
& \min_{{\mathbf{x}}\in\mathbb{R}^d} \phi({\mathbf{x}}) = f({\mathbf{x}}) + r({\mathbf{x}}),\\[-0.1cm] 
& \text{with } f({\mathbf{x}})=\frac{1}{n}\sum_{i=1}^n f_i({\mathbf{x}}), \quad f_i({\mathbf{x}})\!=\!\mathbb{E}_{\xi_i \sim \mathcal{D}_i}[F_i({\mathbf{x}},\xi_i)].
 \end{aligned}
 \vspace{-0.1cm} 
\end{equation}
Here, $\{\mathcal{D}_i\}_{i=1}^n$ are possibly \emph{non-i.i.d.\ data} distributions on $n$ machines/workers that can be viewed as nodes of a connected graph $\mathcal{G}$, and each $F_i(\cdot, \xi_i)$ can only be accessed by the $i$-th worker. 
We are interested in problems
that satisfy the following structural assumption. 
\begin{assumption}[Problem structure] \label{assu:prob}
We assume that 
\vspace{-1.5mm}
\begin{itemize} 
\item[(i)] $r$ is closed convex and possibly nondifferentiable.
\item[(ii)] Each $f_i$ is $L$-smooth in $\dom(r)$, i.e., $\|\nabla f_i({\mathbf{x}}) - \nabla f_i({\mathbf{y}})\| \le L \|{\mathbf{x}}- {\mathbf{y}}\|$, for any ${\mathbf{x}}, {\mathbf{y}}\in\dom(r)$.
\n\\item[(iii)] $\\phi$ is lower bounded, i.e., $\\phi^* \\triangleq \\min_{\\mathbf{x}} \\phi({\\mathbf{x}}) > -\\infty$.\n\\end{itemize}\n\\vspace{-2mm}\n\\end{assumption}\n\n\nLet $\\mathcal{N}=\\{1, 2, \\ldots, n\\}$ be the set of nodes of $\\mathcal{G}$ and $\\mathcal{E}$ the set of edges.\nFor each $i\\in\\mathcal{N}$, denote $\\mathcal{N}_i$ as the neighbors of worker $i$ and itself, i.e., $\\mathcal{N}_i = \\{j: (i,j) \\in \\mathcal{E}\\}\\cup \\{i\\}$. Every worker can only communicate with its neighbors. To solve \\eqref{eq:problem_original} collaboratively, each worker $i$ maintains a copy, denoted as ${\\mathbf{x}}_i$, of the variable ${\\mathbf{x}}$. With these notations, \n\n\\eqref{eq:problem_original} can be formulated equivalently to\n\\vspace{-0.1cm}\n{\\begin{align}\\label{eq:decentralized_problem} \n\\begin{split}\n\\min_{\\mathbf{X} \\in \\mathbb{R}^{d\\times n}} & \\frac{1}{n}\\sum_{i=1}^n \\phi_i({\\mathbf{x}}_i), \\text{with }\\phi_i({\\mathbf{x}}_i) \\triangleq f_i({\\mathbf{x}}_i) + r({\\mathbf{x}}_i), \\\\\n \\mbox{s.t. } \\quad & {\\mathbf{x}}_i={\\mathbf{x}}_j, \\forall\\, j\\in \\mathcal{N}_i, \\forall\\, i = 1,\\ldots, n.\n\\end{split} \n\\end{align}}\n\\vspace{-0.5cm}\n\nProblems with a \\emph{nonsmooth} regularizer, i.e., in the form of \\eqref{eq:problem_original}, appear in many applications such as $\\ell_1$-regularized signal recovery \\cite{eldar2014phase,duchi2019solving}, online nonnegative matrix factorization \\cite{guan2012online}, and training sparse neural networks \\cite{scardapane2017group, yang2020proxsgd}. When data involved in these applications are distributed onto (or collected by workers on) a decentralized network, it necessitates the design of decentralized algorithms.\n\nAlthough decentralized optimization has attracted a lot of research interests in recent years, most existing works focus on strongly convex problems \\cite{scaman2017optimal, koloskova2019decentralized} or convex problems \\cite{6426375,taheri2020quantized} or smooth nonconvex problems \\cite{bianchi2012convergence, di2016next, wai2017decentralized, lian2017can,zeng2018nonconvex}.\nFew works have studied \\emph{nonsmooth nonconvex} decentralized \\emph{stochastic} optimization like \\eqref{eq:decentralized_problem} that we consider. \\cite{chen2021distributed, xin2021stochastic, mancino2022proximal} are among the exceptions. However, they either require to take many data samples for each update or assume a so-called mean-squared smoothness condition, which is stronger than the smoothness condition in Assumption~\\ref{assu:prob}(ii), in order to perform momentum-based variance-reduction step. Though these methods can have convergence (rate) guarantee, they often yield poor generalization performance on training deep neural networks, as demonstrated in \\cite{lecun2012efficient, keskar2016large} for large-batch training methods and in our numerical experiments for momentum variance-reduction methods.\n\nOn the other side, many distributed optimization methods \\cite{shamir2014distributed,lian2017can,wang2018cooperative} \noften assume that the data are i.i.d across the workers.\nHowever, this assumption does not hold in many real-world scenarios, for instance, due to data privacy issue that local data has to stay on-premise.\nData heterogeneity can result in significant degradation of the performance by these methods.\nThough some papers do not assume i.i.d. 
data, they require certain data similarity, such as bounded stochastic gradients \cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized} and bounded gradient dissimilarity \cite{tang2018communication,assran2019stochastic, tang2019deepsqueeze, vogels2020practical}. 
 

To address the critical practical issues mentioned above, we propose a decentralized proximal stochastic gradient tracking method that needs only a single sample, or an $O(1)$ mini-batch, of data per worker for each update. With no assumption on data similarity, it still achieves the optimal convergence rate for problems satisfying the conditions in Assumption~\ref{assu:prob} and yields good generalization performance. In addition, to reduce the communication cost, we give a compressed version of the proposed algorithm by performing compression on the communicated information. The compressed algorithm inherits the benefits of its non-compressed counterpart. 


\subsection{Our Contributions}

Our contributions are three-fold. First, we propose two decentralized algorithms, one without compression (named DProxSGT) and the other with compression (named CDProxSGT), for solving \emph{decentralized nonconvex nonsmooth stochastic} problems. Different from existing methods, e.g., \cite{xin2021stochastic, wang2021distributed, mancino2022proximal}, which need a very large batchsize and/or perform momentum-based variance reduction to handle the challenge from the nonsmooth term, DProxSGT needs only $\mathcal{O}(1)$ data samples for each update, without performing variance reduction. The use of a small batch and a standard proximal gradient update enables our method to achieve significantly better generalization performance than the existing methods, as we demonstrate on training neural networks. To the best of our knowledge, CDProxSGT is the first decentralized algorithm that applies a compression scheme for solving nonconvex nonsmooth stochastic problems, and it inherits the advantages of the non-compressed method DProxSGT. Even when applied to the special class of smooth nonconvex problems, CDProxSGT can perform significantly better than state-of-the-art methods in terms of generalization and of handling data heterogeneity.

Second, we establish an optimal sample complexity result for DProxSGT, which matches the lower bound in \cite{arjevani2022lower} in terms of the dependence on a target tolerance $\epsilon$, to produce an $\epsilon$-stationary solution. Due to the coexistence of nonconvexity, nonsmoothness, large stochastic variance (caused by the small batch and the absence of variance reduction, both chosen for better generalization), and decentralization, the analysis is highly non-trivial. We employ the tool of the Moreau envelope and construct a decreasing Lyapunov function by carefully controlling the errors introduced by stochasticity and decentralization. 

Third, we establish an iteration complexity result for the proposed compressed method CDProxSGT, which is of the same order as that for DProxSGT and thus also optimal in terms of the dependence on a target tolerance. The analysis builds on that of DProxSGT but is more challenging due to the additional compression error and the use of gradient tracking. Nevertheless, we obtain our results under the same (or even weaker) assumptions as those made by state-of-the-art methods \cite{koloskova2019decentralized-b, zhao2022beer}.
\subsection{Notation}\label{sec:notation}
For any vector ${\mathbf{x}}\in\mathbb{R}^{d}$, we use $\|{\mathbf{x}}\|$ for the $\ell_2$ norm. For any matrix $\mathbf{A}$, $\|\mathbf{A}\|$ denotes the Frobenius norm and $\|\mathbf{A}\|_2$ the spectral norm.
$\mathbf{X} = [{\mathbf{x}}_1,{\mathbf{x}}_2,\ldots,{\mathbf{x}}_n]\in\mathbb{R}^{d\times n}$ concatenates all local variables. The superscript $^t$ is used for the iteration or communication round.
$\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)$ denotes a local stochastic gradient of $F_i$ at ${\mathbf{x}}_i^t$ with a random sample $\xi_i^t$. The column concatenation of $\{\nabla F_i({\mathbf{x}}_i^t,\xi_i^t)\}$ is denoted as 
\vspace{-0.1cm}
\begin{equation*}
\nabla \mathbf{F}^t = \nabla \mathbf{F}(\mathbf{X}^t,\Xi^t) = [ \nabla F_1({\mathbf{x}}_1^t,\xi_1^t),\ldots, \nabla F_n({\mathbf{x}}_n^t,\xi_n^t)],\vspace{-0.1cm}
\end{equation*}
where $\Xi^t = [\xi_1^t,\xi_2^t,\ldots,\xi_n^t]$.
Similarly, we denote
\vspace{-0.1cm}
\begin{equation*} 
\nabla \mathbf{f}^t 
= [ \nabla f_1({\mathbf{x}}_1^t ),\ldots, \nabla f_n({\mathbf{x}}_n^t )].\vspace{-0.1cm}
\end{equation*} 
For any $\mathbf{X} \in \mathbb{R}^{d\times n}$,
we define 
\vspace{-0.1cm}
\begin{equation*} \bar{{\mathbf{x}}} = \textstyle\frac{1}{n}\mathbf{X}\mathbf{1}, \quad \overline{\mathbf{X}} = \mathbf{X}\mathbf{J} = \bar{{\mathbf{x}}}\mathbf{1}^\top,\quad \mathbf{X}_\perp = \mathbf{X}(\mathbf{I} - \mathbf{J}), \vspace{-0.1cm}
\end{equation*} 
where $\mathbf{1}$ is the all-one vector, and $\mathbf{J} = \frac{\mathbf{1}\1^\top}{n}$ is the averaging matrix.
Similarly, we define the mean vectors 
\vspace{-0.1cm}
\begin{equation*} 
\overline{\nabla} \mathbf{F}^t = \textstyle\frac{1}{n} \nabla \mathbf{F}^t \mathbf{1},\ \overline{\nabla} \mathbf{f}^t = \textstyle\frac{1}{n} \nabla \mathbf{f}^t \mathbf{1}.\vspace{-0.1cm}
\end{equation*} 
We use $\mathbb{E}_t$ for the expectation with respect to the random samples $\Xi^t$ at the $t$-th iteration and $\mathbb{E}$ for the full expectation. $\mathbb{E}_Q$ denotes the expectation with respect to a stochastic compressor $Q$.




\section{Related Works}
The literature on decentralized optimization has been growing vastly, and it is impossible to exhaust it here. Below we review existing works on decentralized algorithms for solving nonconvex problems, with or without a compression technique. To clarify how our methods differ from existing ones, 
we compare a few relevant methods in Table \ref{tab:method_compare}.

\begin{table*}[t]
\caption{Comparison between our methods and some relevant methods: ProxGT-SA and ProxGT-SR-O in \cite{xin2021stochastic}, DEEPSTORM \cite{mancino2022proximal}, ChocoSGD \cite{koloskova2019decentralized-b}, and BEER \cite{zhao2022beer}. We
use ``CMP'' to represent whether compression is performed by a method. 
GRADIENTS lists assumptions on the stochastic gradients in addition to those made in Assumption \ref{assu:stoc_grad}. 
SMOOTHNESS represents the smoothness condition, where ``mean-squared'' means $\mathbb{E}_{\xi_i}[\|\nabla F_i({\mathbf{x}}; \xi_i) - \nabla F_i({\mathbf{y}}; \xi_i)\|^2]\le L^2\|{\mathbf{x}}-{\mathbf{y}}\|^2$, which is stronger than the $L$-smoothness of $f_i$.
BS is the required batchsize to get an $\epsilon$-stationary solution.
VR and MMT indicate whether variance reduction or momentum is used. A large batchsize and/or momentum variance reduction can degrade the generalization performance, as we demonstrate
in numerical experiments.
}
\label{tab:method_compare}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccc}
\toprule
 Methods & CMP & $r\not\equiv 0$ & GRADIENTS & SMOOTHNESS & (BS, VR, MMT) \\
\midrule
 ProxGT-SA & No& Yes & No & $f_i$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No , No\big) \\[0.1cm]
 ProxGT-SR-O & No & Yes & No & mean-squared & \big($\mathcal{O}(\frac{1}{\epsilon})$, Yes, No\big) \\[0.1cm]
 DEEPSTORM & No & Yes & No & mean-squared & ($\mathcal{O}(1)$, Yes, Yes) \\
 \textbf{DProxSGT (this paper)} & No & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\
\midrule
 ChocoSGD & Yes& No & $\mathbb{E}_{\xi}[\|\nabla F_i({\mathbf{x}},\xi_i)\|^2]\leq G^2$ & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\
 BEER & Yes & No & No & $f$ is smooth & \big($\mathcal{O}(\frac{1}{\epsilon^2})$, No, No\big) \\[0.1cm]
 \textbf{CDProxSGT (this paper)} & Yes & Yes & No & $f_i$ is smooth & ($\mathcal{O}(1)$, No, No) \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}

\subsection{Non-compressed Decentralized Methods}

For nonconvex decentralized problems with a nonsmooth regularizer, many deterministic decentralized methods have been studied, e.g., \cite{di2016next, wai2017decentralized, zeng2018nonconvex, chen2021distributed, scutari2019distributed}.
When only stochastic gradients are available, a majority of existing works focus on smooth cases without a regularizer or a hard constraint, such as \cite{lian2017can, assran2019stochastic, tang2018d}, 
gradient tracking based methods \cite{lu2019gnsd,zhang2019decentralized, koloskova2021improved},
and momentum-based variance reduction methods \cite{xin2021hybrid, zhang2021gt}.
Several works such as \cite{bianchi2012convergence, wang2021distributed, xin2021stochastic, mancino2022proximal} have studied stochastic decentralized methods for problems with a nonsmooth term $r$. 
However, they either consider some special $r$ or require a large batch size. \cite{bianchi2012convergence} considers
the case where $r$ is an indicator function of a compact convex set. Also, it
requires bounded stochastic gradients.
\cite{wang2021distributed} focuses on problems with a polyhedral $r$, and it
requires a large batch size of $\mathcal{O}(\frac{1}{\epsilon})$ to produce an (expected) $\epsilon$-stationary point.
\cite{xin2021stochastic, mancino2022proximal} are the most closely related to our methods. To produce an (expected) $\epsilon$-stationary point, the methods in \cite{xin2021stochastic} require a large batch size, either $\mathcal{O}(\frac{1}{\epsilon^2})$ or $\mathcal{O}(\frac{1}{\epsilon})$ if variance reduction is applied. 
The method in \cite{mancino2022proximal} requires only $\mathcal{O}(1)$ samples for each update by taking a momentum-type variance reduction scheme. However, in order to reduce variance, it needs a stronger mean-squared smoothness assumption.
In addition, the momentum variance-reduction step can often hurt the generalization performance on training complex neural networks, as we will demonstrate in our numerical experiments.


\subsection{Compressed Distributed Methods}
 
Communication efficiency is a crucial factor when designing a distributed optimization strategy. The current machine learning paradigm oftentimes resorts to models with a large number of parameters, which implies a high communication cost when the models or gradients are transferred from workers to the parameter server or among workers. This may incur significant latency in training. Hence, communication-efficient algorithms based on model or gradient compression have been actively sought.

Two major groups of compression operators are quantization and sparsification. Quantization approaches include 1-bit SGD \cite{seide20141}, SignSGD \cite{bernstein2018signsgd}, QSGD \cite{alistarh2017qsgd}, and TernGrad \cite{wen2017terngrad}. Sparsification approaches include Random-$k$ \cite{stich2018sparsified}, Top-$k$ \cite{aji2017sparse}, Threshold-$v$ \cite{dutta2019discrepancy}, and ScaleCom \cite{chen2020scalecom}. Direct compression may slow down convergence, especially when the compression ratio is high. Error compensation or error feedback can mitigate this effect by saving the compression error in one communication step and compensating for it in the next communication step before another compression \cite{seide20141}. These compression operators were first designed to compress gradients in the centralized setting \cite{tang2019DoubleSqueeze,karimireddy2019error}.
 
Compression can also be applied in the decentralized setting for smooth problems, i.e., \eqref{eq:decentralized_problem} with $r=0$. \cite{tang2019deepsqueeze} applies compression with error compensation to the communication of model parameters in the decentralized setting.
Choco-Gossip \cite{koloskova2019decentralized} is another communication scheme that mitigates the slowdown effect of compression. It compresses not the model parameters themselves but the residual between the model parameters and their estimates. Choco-SGD uses Choco-Gossip to solve \eqref{eq:decentralized_problem} with $r=0$. BEER \cite{zhao2022beer} adopts gradient tracking and compresses both the tracked stochastic gradients and the model parameters in each iteration via Choco-Gossip.
BEER needs a large batchsize of $\mathcal{O}(\frac{1}{\epsilon^2})$ in order to produce an $\epsilon$-stationary solution.
DoCoM-SGT \cite{DBLP:journals/corr/abs-2202-00255} performs similar updates to BEER but with a momentum term for the update of the tracked gradients, and it needs only an $\mathcal{O}(1)$ batchsize. 

Our proposed CDProxSGT solves decentralized problems in the form of \eqref{eq:decentralized_problem} with a nonsmooth $r({\mathbf{x}})$. To the best of our knowledge, CDProxSGT is the first compressed decentralized method for nonsmooth nonconvex problems without the use of a large batchsize, and it achieves an optimal sample complexity without assuming data similarity or gradient boundedness.
\n\n\n\n\n\\section{Numerical Experiments}\\label{sec:numerical_experiments}\nIn this section, we test the proposed algorithms on training two neural network models, in order to demonstrate their better generalization over momentum variance-reduction methods and large-batch training methods and to demonstrate the success of handling heterogeneous data even when only compressed model parameter and gradient information are communicated among workers.\nOne neural network that we test is LeNet5 \\cite{lecun1989backpropagation} on the FashionMNIST dataset \\cite{xiao2017fashion}, and the other is FixupResNet20 \\cite{zhang2019fixup} on Cifar10 \\cite{krizhevsky2009learning}. \n\nOur experiments are representative to show the practical performance of our methods. Among several closely-related works, \\cite{xin2021stochastic} includes no experiments, and \\cite{mancino2022proximal,zhao2022beer} only tests on tabular data and MNIST. \\cite{koloskova2019decentralized-b} tests its method on Cifar10 but needs\nsimilar data distribution on all workers \nfor good performance.\nFashionMNIST has a similar scale as MNIST but poses a more challenging classification task \\cite{xiao2017fashion}. \nCifar10 is more complex, and FixupResNet20 has more layers than LeNet5. \n \n\n\n\n \nAll the compared algorithms are implemented in Python with Pytorch\nand MPI4PY (for distributed computing).\nThey run on a Dell workstation with\ntwo Quadro RTX 5000 GPUs. We use the 2 GPUs as 5 workers, which communicate over a ring-structured network (so each worker can only communicate with two neighbors). Uniform weight is used, i.e., $W_{ji} = \\frac{1}{3}$ for each pair of connected workers $i$ and $j$.\nBoth FashionMNIST and Cifar10 have 10 classes. We distribute each data onto the 5 workers based on the class labels, namely, each worker holds 2 classes of data points, and thus the data are heterogeneous across the workers.\n\nFor all methods, we report their objective values on training data, prediction accuracy on testing data, and consensus errors at each epoch. \nTo save time, the objective values are computed as the average of the losses that are evaluated during the training process (i.e., on the sampled data instead of the whole training data) plus the regularizer per epoch. \nFor the testing accuracy, we first compute the accuracy on the whole testing data for each worker by using its own model parameter and then take the average.\nThe consensus error is simply $\\|\\mathbf{X}_\\perp\\|^2$.\n\n\\subsection{Sparse Neural Network Training} \\label{subsect:RegL1}\nIn this subsection, we test the non-compressed method DProxSGT and compare it with AllReduce (that is a centralized method and used as a baseline), DEEPSTORM\\footnote{For DEEPSTORM, we implement DEEPSTORM v2 in \\cite{mancino2022proximal}.} and ProxGT-SA \\cite{xin2021stochastic} on solving \\eqref{eq:decentralized_problem}, where $f$ is the loss on the whole training data and $r({\\mathbf{x}}) = \\mu\\|{\\mathbf{x}}\\|_1$ serves as a sparse regularizer that encourages a sparse model.\n\nFor training LeNet5 on FashionMNIST, we set $\\mu= 10^{-4}$ and run each method to 100 epochs. The learning rate $\\eta$ and batchsize are set to $0.01$ and 8 for AllReduce and DProxSGT. 
DEEPSTORM uses the same $\eta$ and batchsize but with a larger initial batchsize of 200, 
and its momentum parameter is tuned to $\beta=0.8$ to yield the best performance.
ProxGT-SA is a large-batch training method.
We set its batchsize to 256 and accordingly apply a larger stepsize $\eta=0.3$, which is the best among $\{0.1, 0.2, 0.3, 0.4\}$.

For training FixupResNet20 on Cifar10, we set $\mu= 5 \times 10^{-5}$ and run each method for 500 epochs.
The learning rate and batchsize are set to $\eta=0.02$ and 64 for AllReduce, DProxSGT, and DEEPSTORM. The initial batchsize is set to 1600 for DEEPSTORM and the momentum parameter to $\beta=0.8$. 
ProxGT-SA uses a larger batchsize of 512 and a larger stepsize $\eta=0.1$, which gives the best performance among $\{0.05, 0.1, 0.2, 0.3\}$.

\begin{figure}[ht] 
\begin{center} 
\includegraphics[width=.9\columnwidth]{./figures/noncompressed} 
\vspace{-0.2cm}
\caption{Results of training sparse neural networks by non-compressed methods with $r({\mathbf{x}}) = \mu \|{\mathbf{x}}\|_1$ for the same number of epochs. Left: LeNet5 on FashionMNIST with $\mu=10^{-4}$. Right: FixupResNet20 on Cifar10 with $\mu=5\times 10^{-5}$.}
\label{fig:RegL1}
\end{center} 
\end{figure}


The results for all methods are plotted in Figure \ref{fig:RegL1}. For LeNet5, DProxSGT produces almost the same curves as the centralized training method AllReduce, while on FixupResNet20, DProxSGT even outperforms AllReduce in terms of testing accuracy. This could be because AllReduce aggregates stochastic gradients from all the workers for each update and thus, equivalently, uses a larger batchsize.
DEEPSTORM performs as well as our method DProxSGT on training LeNet5. However, it gives lower testing accuracy than DProxSGT and oscillates much more severely on training the more complex neural network FixupResNet20. This appears to be caused by the momentum variance-reduction scheme used in DEEPSTORM.
In addition, we see that the large-batch training method ProxGT-SA performs much worse than DProxSGT within the same number of epochs (i.e., data passes), especially on training FixupResNet20.

\subsection{Neural Network Training by Compressed Methods} \label{subsect:compress}
In this subsection, we compare CDProxSGT with two state-of-the-art compressed training methods: Choco-SGD \cite{koloskova2019decentralized,koloskova2019decentralized-b} and BEER \cite{zhao2022beer}. As Choco-SGD and BEER are studied only for problems without a regularizer, we set $r({\mathbf{x}})=0$ in \eqref{eq:decentralized_problem} for these tests. Again, we compare the performance on training LeNet5 and FixupResNet20.
The two non-compressed methods AllReduce and DProxSGT are included as baselines. 
The same compressors are used for CDProxSGT, Choco-SGD, and BEER, when compression is applied.

\begin{figure}[htbp] 
\begin{center} 
\includegraphics[width=.9\columnwidth]{./figures/Compressed} 
\vspace{-0.2cm}
\caption{Results of training neural network models by compressed methods for the same number of epochs. Left: LeNet5 on FashionMNIST. Right: FixupResNet20 on Cifar10.}
\label{fig:Compress}
\end{center} 
\end{figure} 

We run each method for 100 epochs for training LeNet5 on FashionMNIST.
The compressors $Q_y$ and $Q_x$ are set to top-$k(0.3)$ \cite{aji2017sparse}, i.e., keeping the largest $30\%$ of the entries of an input vector in absolute value and zeroing out all others.
We set the batchsize to 8 and tune the learning rate $\eta$ to $0.01$ for AllReduce, DProxSGT, CDProxSGT, and Choco-SGD; for CDProxSGT, we set $\gamma_x=\gamma_y=0.5$. 
BEER is a large-batch training method. It uses a larger batchsize of 256 and accordingly a larger learning rate $\eta=0.3$, which appears to be the best among $\{0.1, 0.2, 0.3, 0.4\}$. 
 
For training FixupResNet20 on the Cifar10 dataset, we run each method for 500 epochs. We take top-$k(0.4)$ \cite{aji2017sparse} as the compressors $Q_y$ and $Q_x$ and set $\gamma_x=\gamma_y=0.8$.
For AllReduce, DProxSGT, CDProxSGT, and Choco-SGD, we set the batchsize to 64 and tune the learning rate $\eta$ to $0.02$. For BEER, we use a larger batchsize of 512 and a larger learning rate $\eta=0.1$, which is the best among
$\{0.05, 0.1, 0.2, 0.3\}$. 

The results are shown in Figure \ref{fig:Compress}. 
For both models, CDProxSGT yields almost the same curves of objective values and testing accuracy as its non-compressed counterpart DProxSGT and the centralized non-compressed method AllReduce. This indicates roughly a 70\% communication saving for training LeNet5 and a 60\% saving for FixupResNet20 without sacrificing testing accuracy.
In comparison, BEER performs significantly worse than the proposed method CDProxSGT within the same number of epochs in terms of all three measures, especially on training the more complex neural network FixupResNet20, which we attribute to BEER's use of a larger batch. Choco-SGD produces comparable objective values, but its testing accuracy is much lower than that of our method CDProxSGT.
This is likely because Choco-SGD cannot handle data heterogeneity, while CDProxSGT applies gradient tracking to successfully address it.
\n\\cite{koloskova2019decentralized} gives three commonly used examples: when uniform weights are used between nodes, $\\mathbf{W} = \\mathbf{J}$ and $\\rho = 0$ for a fully-connected graph (in which case, our algorithms will reduce to centralized methods), $1-\\rho = \\Theta(\\frac{1}{n})$ for a 2d torus grid graph where every node has 4 neighbors, and $1-\\rho = \\Theta(\\frac{1}{n^2})$ for a ring-structured graph.\nMore examples can be found in \\cite{nedic2018network}.\n\n\n\n\n\\subsection{Non-compreseed Method}\n\nWith the mixing matrix $\\mathbf{W}$, we propose a decentralized proximal stochastic gradient method with gradient tracking (DProxSGT) for \\eqref{eq:decentralized_problem}. The pseudocode is shown in Algorithm~\\ref{alg:DProxSGT}. \nIn every iteration $t$, each node $i$ first computes a local stochastic gradient $\\nabla F_i({\\mathbf{x}}_i^{t},\\xi_i^{t})$ by taking a sample $\\xi_i^{t}$ from its local data distribution $\\mathcal{D}_i$, then performs gradient tracking in \\eqref{eq:y_half_update} and neighbor communications of the tracked gradient in \\eqref{eq:y_update}, and finally takes a proximal gradient step in \\eqref{eq:x_half_update} and mixes the model parameter with its neighbors in \\eqref{eq:x_1_update}.\n\n\\begin{algorithm}[tb]\n \\caption{DProxSGT}\\label{alg:DProxSGT}\n\\begin{algorithmic}\n \\small{ \\STATE Initialize ${\\mathbf{x}}_i^{0}$ and set ${\\mathbf{y}}_i^{-1}=\\mathbf{0}$, $\\nabla F_i({\\mathbf{x}}_i^{-1},\\xi_i^{-1}) =\\mathbf{0}$, $\\forall i\\in\\mathcal{N}$.\n \\FOR{$t=0, 1, 2, \\ldots, T-1$} \n \\STATE \\hspace{-0.1cm}\\textbf{all} nodes $i=1, 2, \\ldots, n$ do the updates \\textbf{in parallel:}\n \\STATE obtain one random sample $\\xi_i^t$, compute a stochastic gradient $\\nabla F_i({\\mathbf{x}}_i^{t},\\xi_i^{t})$, and perform \n \\vspace{-0.2cm}\n \\begin{gather}\n \t{\\mathbf{y}}_i^{t-\\frac{1}{2}} = {\\mathbf{y}}_i^{t-1} + \\nabla F_i({\\mathbf{x}}_i^{t},\\xi_i^{t}) - \\nabla F_i({\\mathbf{x}}_i^{t-1},\\xi_i^{t-1}),\\label{eq:y_half_update}\n \t\\\\\n \t{\\mathbf{y}}_i^t = \\textstyle \\sum_{j=1}^n \\mathbf{W}_{ji} {\\mathbf{y}}_j^{t-\\frac{1}{2}},\\label{eq:y_update}\\\\\n \t{\\mathbf{x}}_i^{t+\\frac{1}{2}} =\\prox_{\\eta r} \\left({\\mathbf{x}}_i^t - \\eta {\\mathbf{y}}_i^{t}\\right),\\label{eq:x_half_update}\n \t\\\\\n \t{\\mathbf{x}}_i^{t+1} = \\textstyle \\sum_{j=1}^n \\mathbf{W}_{ji}{\\mathbf{x}}_j^{t+\\frac{1}{2}}. \\label{eq:x_1_update}\n \\vspace{-0.2cm} \n \t\\end{gather} \n \\ENDFOR}\n\\end{algorithmic}\n\\end{algorithm}\n\\vspace{-0.1mm}\n \n\nNote that for simplicity, we take only one random sample $\\xi_i^{t}$ in Algorithm \\ref{alg:DProxSGT} but in general, a mini-batch of random samples can be taken, and all theoretical results that we will establish in the next section still hold. We emphasize that we need only $\\mathcal{O}(1)$ samples for each update. This is different from ProxGT-SA in \\cite{xin2021stochastic}, which shares a similar update formula as our algorithm but needs a very big batch of samples, as many as $\\mathcal{O}(\\frac{1}{\\epsilon^2})$, where $\\epsilon$ is a target tolerance. A small-batch training can usually generalize better than a big-batch one \\cite{lecun2012efficient, keskar2016large} on training large-scale deep learning models. 
Throughout the paper, we make the following standard assumption on the stochastic gradients.\n\n\\begin{assumption}[Stochastic gradients] \\label{assu:stoc_grad}\nWe assume that \n\\vspace{-1.5mm}\n\\begin{itemize}\n \\item[(i)] The random samples $\\{\\xi_i^t\\}_{i\\in \\mathcal{N}, t\\ge 0}$ are independent.\n \\item[(ii)]\n There exists a finite number $\\sigma\\ge0$ such that for any $i\\in \\mathcal{N}$ and ${\\mathbf{x}}_i\\in\\dom(r)$,\n \\begin{gather*} \n \\mathbb{E}_{\\xi_i}[\\nabla F_i({\\mathbf{x}}_i,\\xi_i)] = \\nabla f_i({\\mathbf{x}}_i),\\\\ \n \\mathbb{E}_{\\xi_i}[\\|\\nabla F_i({\\mathbf{x}}_i,\\xi_i)-\\nabla f_i({\\mathbf{x}}_i)\\|^2] \\leq \\sigma^2.\n \\end{gather*} \n\\end{itemize}\n\\vspace{-2mm}\n\\end{assumption}\n \n\n\n\nThe gradient tracking step in \\eqref{eq:y_half_update}\nis critical to handle heterogeneous data \n\\cite{di2016next,nedic2017achieving,lu2019gnsd,pu2020distributed,sun2020improving,xin2021stochastic,song2021optimal,mancino2022proximal,zhao2022beer,DBLP:journals\/corr\/abs-2202-00255,song2022compressed}.\nIn a deterministic scenario where $\\nabla f_i(\\cdot)$ is used instead of $\\nabla F_i(\\cdot, \\xi)$, for each $i$, the tracked gradient ${\\mathbf{y}}_i^t$ can converge to the gradient of the global function $\\frac{1}{n}\\sum_{i=1}^n f_i(\\cdot)$ at $\\bar{\\mathbf{x}}^t$, and thus all local updates move towards a direction to minimize the \\emph{global} objective. When stochastic gradients are used, the gradient tracking can play a similar role and make ${\\mathbf{y}}_i^t$ approach to the stochastic gradient of the global function. \nWith this nice property of gradient tracking, we can guarantee convergence without strong assumptions that are made in existing works, such as bounded gradients \\cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized, singh2021squarm} and bounded data similarity over nodes \\cite{lian2017can, tang2018communication, tang2019deepsqueeze, vogels2020practical, wang2021error}.\n \n\n\n\\subsection{Compressed Method}\nIn DProxSGT, each worker needs to communicate both the model parameter and tracked stochastic gradient with its neighbors at every iteration. Communications have become a bottleneck for distributed training on GPUs. In order to save the communication cost, we further propose a compressed version of DProxSGT, named CDProxSGT. 
The pseudocode is shown in Algorithm \\ref{alg:CDProxSGT}, where $Q_{\\mathbf{x}}$ and $Q_{\\mathbf{y}}$ are two compression operators.\n\n\\begin{algorithm}[tb]\n\\caption{CDProxSGT}\\label{alg:CDProxSGT}\n\\begin{algorithmic}\n \\small{\\STATE Initialize ${\\mathbf{x}}_i^{0}$; set \n ${\\mathbf{y}}_i^{-1}=\\underline{\\mathbf{y}}_i^{-1}=\\nabla F_i({\\mathbf{x}}_i^{-1}, \\xi_i^{-1})=\\underline{\\mathbf{x}}_i^{0} =\\mathbf{0}$, $\\forall i\\in\\mathcal{N}$.\n \\FOR{$t=0, 1, 2, \\ldots, T-1$}\n \\STATE \\hspace{-0.1cm}\\textbf{all} nodes $i=1, 2, \\ldots, n$ do the updates \\textbf{in parallel:}\n \\vspace{-0.2cm}\n \\begin{gather}\n \t{\\mathbf{y}}_i^{t-\\frac{1}{2}} = {\\mathbf{y}}_i^{t-1} + \\nabla F_i({\\mathbf{x}}_i^{t},\\xi_i^{t}) - \\nabla F_i({\\mathbf{x}}_i^{t-1},\\xi_i^{t-1}),\\label{eq:alg3_1}\\\\\n \\underline{\\mathbf{y}}_i^{t} = \\underline{\\mathbf{y}}_i^{t-1} + Q_{\\mathbf{y}}\\big[{\\mathbf{y}}_i^{t-\\frac{1}{2}} - \\underline{\\mathbf{y}}_i^{t-1}\\big], \\label{eq:alg3_2}\\\\\n {\\mathbf{y}}_i^{t} = {\\mathbf{y}}_i^{t-\\frac{1}{2}} +\\gamma_y \\left(\\textstyle \\sum_{j=1}^n \\mathbf{W}_{ji} \\underline{\\mathbf{y}}_j^{t}-\\underline{\\mathbf{y}}_i^{t}\\right), \\label{eq:alg3_3}\\\\\n \t{\\mathbf{x}}_i^{t+\\frac{1}{2}} =\\prox_{\\eta r} \\left({\\mathbf{x}}_i^t - \\eta {\\mathbf{y}}_i^{t}\\right), \\label{eq:alg3_4}\\\\\n \t\\underline{\\mathbf{x}}_i^{t+1} = \\underline{\\mathbf{x}}_i^{t} + Q_{\\mathbf{x}}\\big[{\\mathbf{x}}_i^{t+\\frac{1}{2}} - \\underline{\\mathbf{x}}_i^{t}\\big], \\label{eq:alg3_5}\\\\\n \t{\\mathbf{x}}_i^{t+1} = {\\mathbf{x}}_i^{t+\\frac{1}{2}}+\\gamma_x\\Big(\\textstyle \\overset{n}{\\underset{j=1}\\sum} \\mathbf{W}_{ji} \\underline{\\mathbf{x}}_j^{t+1}-\\underline{\\mathbf{x}}_i^{t+1}\\Big).\\label{eq:alg3_6}\n \\vspace{-0.2cm}\n \\end{gather} \n \\ENDFOR}\n\\end{algorithmic}\n\\end{algorithm}\n\\vspace{-0.1mm}\n\n \n\nIn Algorithm \\ref{alg:CDProxSGT}, each node communicates the non-compressed vectors $\\underline{\\mathbf{y}}_i^{t}$ and $\\underline{\\mathbf{x}}_i^{t+1}$ with its neighbors in \\eqref{eq:alg3_3} and \\eqref{eq:alg3_6}. We write it in this way for ease of read and analysis. For efficient and \\emph{equivalent} implementation,\nwe do not communicate \n$\\underline{\\mathbf{y}}_i^{t}$ and $\\underline{\\mathbf{x}}_i^{t+1}$ directly but the compressed residues $Q_{\\mathbf{y}}\\big[{\\mathbf{y}}_i^{t-\\frac{1}{2}} - \\underline{\\mathbf{y}}_i^{t-1}\\big]$ and $Q_{\\mathbf{x}}\\big[{\\mathbf{x}}_i^{t+\\frac{1}{2}} - \\underline{\\mathbf{x}}_i^{t}\\big]$, explained as follows. \nBesides ${\\mathbf{y}}_i^{t-1}$, ${\\mathbf{x}}_i^{t}$, $\\underline{\\mathbf{y}}_i^{t-1}$ and $\\underline{\\mathbf{x}}_i^{t}$, each node also stores ${\\mathbf{z}}_i^{t-1}$ and ${\\mathbf{s}}_i^{t} $ which record $\\sum_{j=1}^n \\mathbf{W}_{ji} \\underline{\\mathbf{y}}_i^{t-1}$ and $\\sum_{j=1}^n \\mathbf{W}_{ji} \\underline{\\mathbf{x}}_i^{t}$. 
For the gradient communication, each node $i$ initializes ${\mathbf{z}}_i^{-1} = \mathbf{0}$, and then at each iteration $t$,
after receiving $Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big]$ from its neighbors, it updates $\underline{\mathbf{y}}_i^{t}$ by \eqref{eq:alg3_2}, and ${\mathbf{z}}_i^{t}$ and ${\mathbf{y}}_i^t$ by
\vspace{-0.2cm}
\begin{align*}
 {\mathbf{z}}_i^{t} =&~ \textstyle {\mathbf{z}}_i^{t-1} + \sum_{j=1}^n \mathbf{W}_{ji} Q_{\mathbf{y}}\big[{\mathbf{y}}_j^{t-\frac{1}{2}} - \underline{\mathbf{y}}_j^{t-1}\big], \\
{\mathbf{y}}_i^{t} =&~ \textstyle {\mathbf{y}}_i^{t-\frac{1}{2}} +\gamma_y \big({\mathbf{z}}_i^{t}-\underline{\mathbf{y}}_i^{t}\big).\vspace{-0.2cm}
\end{align*}
From the initialization and the updates of $\underline{\mathbf{y}}_i^{t}$ and ${\mathbf{z}}_i^{t}$,
it always holds that
${\mathbf{z}}_i^{t}=\sum_{j=1}^n \mathbf{W}_{ji} \underline{\mathbf{y}}_j^{t}$.
The model communication can be done efficiently in the same way.

The compression operators $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ in Algorithm \ref{alg:CDProxSGT} can be different, but we assume that they both satisfy the following assumption.
\begin{assumption} \label{assu:compressor}
There exists $\alpha \in [0,1)$ such that
$$\mathbb{E}[\|{\mathbf{x}}-Q[{\mathbf{x}}]\|^2]\leq \alpha^2 \|{\mathbf{x}}\|^2, \forall\, {\mathbf{x}}\in\mathbb{R}^d,$$
for both $Q=Q_{\mathbf{x}}$ and $Q=Q_{\mathbf{y}}$.
\end{assumption}
This assumption on the compression operators is standard and is also made in \cite{koloskova2019decentralized-b,koloskova2019decentralized,zhao2022beer}. It is satisfied by sparsification operators such as Random-$k$ \cite{stich2018sparsified} and Top-$k$ \cite{aji2017sparse}. It can also be satisfied by rescaled quantizations. For example, QSGD \cite{alistarh2017qsgd} compresses ${\mathbf{x}}\in \mathbb{R}^d$ by $Q_{qsgd}({\mathbf{x}}) = \frac{\mathbf{sign}({\mathbf{x}})\|{\mathbf{x}}\|}{s} \lfloor s \frac{|{\mathbf{x}}|}{\|{\mathbf{x}}\|} + \xi \rfloor$,
where $\xi$ is uniformly distributed on $[0,1]^d$ and $s$ is a parameter that controls the compression level. Then $Q({\mathbf{x}})= \frac{1}{\tau} Q_{qsgd} ({\mathbf{x}})$ with $\tau=1+\min\{d/s^2, \sqrt{d}/s\}$ satisfies Assumption \ref{assu:compressor} with $\alpha^2=1-\frac{1}{\tau}$. More examples can be found in \cite{koloskova2019decentralized}.
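For concreteness, the sketch below gives minimal NumPy implementations of a Top-$k$ compressor, the rescaled QSGD quantizer, and the residual-based gradient communication described above. The function names and the random-generator handling are our own illustrative choices, not our experimental implementation.
\begin{verbatim}
import numpy as np

def top_k(x, k):
    # keep the k largest-magnitude entries of x; Assumption 2 holds
    # deterministically with alpha^2 = 1 - k/d
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def rescaled_qsgd(x, s, rng):
    # QSGD quantizer rescaled by tau; Assumption 2 holds with
    # alpha^2 = 1 - 1/tau
    d, norm = x.size, np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    q = np.sign(x) * norm * np.floor(s * np.abs(x) / norm + rng.random(d)) / s
    tau = 1.0 + min(d / s**2, np.sqrt(d) / s)
    return q / tau

def comm_y(Y_half, Yu_prev, z_prev, W, gamma_y, Q):
    # residual-compressed communication for the y-variables: node i only
    # transmits Q[y_i^{t-1/2} - y_under_i^{t-1}], while z_i keeps a running
    # record of sum_j W_ji * y_under_j, so no full vector is ever sent
    R = np.column_stack([Q(Y_half[:, i] - Yu_prev[:, i])
                         for i in range(Y_half.shape[1])])
    Yu = Yu_prev + R                   # local estimates y_under_i^t
    z = z_prev + R @ W                 # z_i^t = sum_j W_ji y_under_j^t
    Y_new = Y_half + gamma_y * (z - Yu)
    return Y_new, Yu, z
\end{verbatim}
Here \texttt{Q} can be, e.g., \texttt{lambda v: top\_k(v, 10)}, with \texttt{rng = np.random.default\_rng(0)} for the quantizer.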

Below, we make a couple of remarks to discuss the relation between Algorithm \ref{alg:DProxSGT} and Algorithm \ref{alg:CDProxSGT}.

\begin{remark}
When $Q_{\mathbf{x}}$ and $Q_{\mathbf{y}}$ are both identity operators, i.e., $Q_{\mathbf{x}}[{\mathbf{x}}] = {\mathbf{x}}$ and $Q_{\mathbf{y}}[{\mathbf{y}}] = {\mathbf{y}}$, and $\gamma_x=\gamma_y=1$, CDProxSGT reduces to DProxSGT. Hence, the latter can be viewed as a special case of the former. However, we will analyze them separately. Although the big-batch training method ProxGT-SA in \cite{xin2021stochastic} shares a similar update with the proposed DProxSGT, our analysis is substantially different and new, as we need only $\mathcal{O}(1)$ samples in each iteration in order to achieve better generalization performance.
The analysis of CDProxSGT is built on that of DProxSGT by carefully controlling the variance error of the stochastic gradients and the consensus error, as well as the additional compression error.
\end{remark}

\begin{remark}
When $Q_{\mathbf{y}}$ and $Q_{\mathbf{x}}$ are identity operators, $\underline{\mathbf{y}}_i^{t} = {\mathbf{y}}_i^{t-\frac{1}{2}}$ and $\underline{\mathbf{x}}_i^{t+1} = {\mathbf{x}}_i^{t+\frac{1}{2}}$ for each $i\in\mathcal{N}$. Hence, in the compression case, $\underline{\mathbf{y}}_i^{t}$ and $\underline{\mathbf{x}}_i^{t+1}$ can be viewed as estimates of ${\mathbf{y}}_i^{t-\frac{1}{2}}$ and ${\mathbf{x}}_i^{t+\frac{1}{2}}$.
In addition, in matrix form, we have from \eqref{eq:alg3_3} and \eqref{eq:alg3_6} that
 \begin{align}
 \Y^{t+1} =&~ \Y^{t+\frac{1}{2}}\widehat\mathbf{W}_y + \gamma_y\big(\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \label{eq:Y_hatW}\\
 \mathbf{X}^{t+1} =&~ \mathbf{X}^{t+\frac{1}{2}}\widehat\mathbf{W}_x + \gamma_x(\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}})(\mathbf{W}-\mathbf{I}), \label{eq:compX_hatW}
\end{align}
where
$\widehat\mathbf{W}_y = \gamma_y \mathbf{W} + (1-\gamma_y)\mathbf{I}$ and $\widehat\mathbf{W}_x = \gamma_x \mathbf{W} + (1-\gamma_x)\mathbf{I}$.
When $\mathbf{W}$ satisfies conditions (i)-(iii) in Assumption~\ref{assu:mix_matrix}, it can be easily shown that $\widehat\mathbf{W}_y$ and $\widehat\mathbf{W}_x$ also satisfy all three conditions. In particular, we have
$$\widehat\rho_x \triangleq \|\widehat\mathbf{W}_x - \mathbf{J}\|_2 < 1,\quad
\widehat\rho_y \triangleq \|\widehat\mathbf{W}_y - \mathbf{J}\|_2 < 1.$$
Thus we can view $\Y^{t+1}$ and $\mathbf{X}^{t+1}$ as the results of applying one round of neighbor communication with mixing matrices $\widehat{\mathbf{W}}_y$ and $\widehat{\mathbf{W}}_x$ to $\Y^{t+\frac{1}{2}}$ and $\mathbf{X}^{t+\frac{1}{2}}$, plus the estimation errors $\underline\Y^{t+1}-\Y^{t+\frac{1}{2}}$ and $\underline\mathbf{X}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}$ passed through one round of neighbor communication.
\end{remark}
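To spell out the claim about $\widehat\rho_x$ and $\widehat\rho_y$, here is a short verification, using only the facts $\mathbf{J}\mathbf{W}=\mathbf{J}$ and $\|\mathbf{W}-\mathbf{J}\|_2\le\rho<1$ that are used throughout our proofs, and noting that the stepsize conditions later ensure $\gamma_x,\gamma_y\in(0,1]$. Since $\mathbf{J} = \gamma_x\mathbf{J} + (1-\gamma_x)\mathbf{J}$, we have
\begin{align*}
\widehat\mathbf{W}_x - \mathbf{J} = \gamma_x(\mathbf{W}-\mathbf{J}) + (1-\gamma_x)(\mathbf{I}-\mathbf{J}),
\end{align*}
and $\mathbf{I}-\mathbf{J}$ is an orthogonal projection with $\|\mathbf{I}-\mathbf{J}\|_2 = 1$, so
\begin{align*}
\widehat\rho_x = \|\widehat\mathbf{W}_x - \mathbf{J}\|_2 \le \gamma_x\rho + (1-\gamma_x) = 1-\gamma_x(1-\rho) < 1.
\end{align*}
The remaining conditions, such as symmetry and double stochasticity, are clearly preserved when passing from $\mathbf{W}$ to the convex combination $\gamma_x\mathbf{W} + (1-\gamma_x)\mathbf{I}$, and the same argument applies to $\widehat\mathbf{W}_y$.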
For $\\lambda\\in(0,\\frac{1}{L})$, the Moreau envelope of $\\psi$ is defined as\n\\vspace{-0.2cm}\n \\begin{equation*} \n\\psi_\\lambda({\\mathbf{x}}) = \\min_{\\mathbf{y}} \\textstyle \\left\\{\\psi({\\mathbf{y}}) + \\frac{1}{2\\lambda}\\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2\\right\\}, \\vspace{-0.2cm}\n\\end{equation*} \nand the unique minimizer is denoted as\n\\vspace{-0.2cm}\n\\begin{equation*}\n \\prox_{\\lambda \\psi}({\\mathbf{x}})= \\argmin_{{\\mathbf{y}}} \\textstyle \\left\\{\\psi({\\mathbf{y}})+\\frac{1}{2\\lambda} \\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2\\right\\}.\\vspace{-0.2cm}\n \\end{equation*} \n\\end{definition}\n\n\n\nThe Moreau envelope $\\psi_\\lambda$ has nice properties.\nThe result below can be found in \n \\cite{davis2019stochastic, nazari2020adaptive, xu2022distributed-SsGM}.\n\n \\begin{lemma}\\label{lem:xhat_x}\n For any function $\\psi$, if it is $L$-weakly convex, then for any $\\lambda \\in (0, \\frac{1}{L})$, the Moreau envelope $\\psi_\\lambda$ is smooth with gradient given by \n$\\nabla \\psi_\\lambda ({\\mathbf{x}}) = \\lambda^{-1} ({\\mathbf{x}}-\\widehat{\\mathbf{x}}),$\n where $\\widehat{\\mathbf{x}}=\\prox_{\\lambda\\psi}({\\mathbf{x}})$. \nMoreover, \\vspace{-0.2cm}\n\\[\n\\|{\\mathbf{x}}-\\widehat{\\mathbf{x}}\\|=\\lambda\\|\\nabla \\psi_\\lambda({\\mathbf{x}})\\|, \\quad\n\\dist(\\mathbf{0}, \\partial \\psi(\\widehat {\\mathbf{x}}))\\leq \\|\\nabla \\psi_\\lambda({\\mathbf{x}})\\|.\\vspace{-0.2cm}\n\\]\n\\end{lemma}\n\nLemma~\\ref{lem:xhat_x} implies that if $\\|\\nabla \\psi_\\lambda({\\mathbf{x}})\\|$ is small, then $\\widehat{\\mathbf{x}}$ is a near-stationary point of $\\psi$ and ${\\mathbf{x}}$ is close to $\\widehat{\\mathbf{x}}$. Hence, $\\|\\nabla \\psi_\\lambda({\\mathbf{x}})\\|$ can be used as a valid measure of stationarity violation at ${\\mathbf{x}}$ for $\\psi$. Based on this observation, we define the $\\epsilon$-stationary solution below for the decentralized problem \\eqref{eq:decentralized_problem}.\n\n\n\\begin{definition}[Expected $\\epsilon$-stationary solution]\\label{def:eps-sol} Let $\\epsilon > 0$. A point $\\mathbf{X} = [{\\mathbf{x}}_1, \\ldots, {\\mathbf{x}}_n]$ is called an expected $\\epsilon$-stationary solution of \\eqref{eq:decentralized_problem} if for a constant $\\lambda\\in (0, \\frac{1}{L})$,\n \\vspace{-0.1cm}\n\\begin{equation*}\n \\textstyle\\frac{1}{n} \\mathbb{E}\\left[\\sum_{i=1}^n \\|\\nabla \\phi_\\lambda({\\mathbf{x}}_i)\\|^2 + L^2 \\|\\mathbf{X}_\\perp\\|^2\\right] \\leq \\epsilon^2.\n \\vspace{-0.1cm}\n\\end{equation*}\n\\end{definition} \n\nIn the definition above, $L^2$ before the consensus error term $\\|\\mathbf{X}_\\perp\\|^2$ is to balance the two terms. This scaling scheme has also been used in existing works such as \\cite{xin2021stochastic,mancino2022proximal,DBLP:journals\/corr\/abs-2202-00255} . 

From the definition, we see that if $\mathbf{X}$ is an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem}, then each local solution ${\mathbf{x}}_i$ is a near-stationary solution of $\phi$ and, in addition, the local solutions are all close to each other, namely, they are near consensus.

Below we first state the convergence results of the non-compressed method DProxSGT and then those of the compressed method CDProxSGT.
All proofs are given in the appendix.

\begin{theorem}[Convergence rate of DProxSGT]\label{thm:sec2}
Under Assumptions \ref{assu:prob} -- \ref{assu:stoc_grad}, let $\{\mathbf{X}^t\}$ be generated from $\mathrm{DProxSGT}$ in Algorithm~\ref{alg:DProxSGT} with ${\mathbf{x}}_i^0 = {\mathbf{x}}^0, \forall\, i \in \mathcal{N}$. Let $\lambda = \min\big\{\frac{1}{4 L}, \frac{1}{96\rho L}\big\}$ and $\eta\leq \min\big\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}\big\}$.
Select $\tau$ from $\{0, 1, \ldots, T-1\}$ uniformly at random.
Then
\vspace{-0.1cm}
\begin{equation*}
\begin{aligned}
 &~ \textstyle \frac{1}{n} \mathbb{E}\left[\sum_{i=1}^n \|\nabla\phi_\lambda({\mathbf{x}}_i^\tau)\|^2 +\frac{4}{\lambda \eta} \|\mathbf{X}^\tau_\perp\|^2\right] \\
 \leq &~ \textstyle \frac{8\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta T} + \frac{4616 \eta}{\lambda(1-\rho^2)^3} \sigma^2 \textstyle + \frac{768\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda T(1-\rho^2)^3},
\end{aligned}
\vspace{-0.1cm}
\end{equation*}
where $\phi_\lambda^* = \min_{{\mathbf{x}}} \phi_\lambda({\mathbf{x}})> -\infty$.
\end{theorem}

By Theorem~\ref{thm:sec2}, we obtain the following complexity result.

\begin{corollary}[Iteration complexity]
Under the assumptions of Theorem~\ref{thm:sec2},
for a given $\epsilon>0$, take $ \eta = \min\{\frac{1}{4 L},\frac{(1-\rho^2)^4}{96\rho L}, \frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}\}$. Then $\mathrm{DProxSGT}$ can find an expected $\epsilon$-stationary point of \eqref{eq:decentralized_problem} when $T \geq T_\epsilon = \left\lceil \frac{16\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right)}{ \eta \epsilon^2 } + \frac{1536\eta \mathbb{E}\left[ \|\nabla \mathbf{F}^0(\mathbf{I}-\mathbf{J})\|^2\right]}{n\lambda (1-\rho^2)^3 \epsilon^2} \right\rceil$.
\end{corollary}

\begin{remark} \label{remark:DProxSGT}
When $\epsilon$ is small enough, $\eta$ equals $\frac{ \lambda(1-\rho^2)^3 \epsilon^2}{9232\sigma^2}$, and $T_\epsilon$ is dominated by its first term.
In this case, DProxSGT can find an expected $\epsilon$-stationary solution of \eqref{eq:decentralized_problem} in $O\Big( \frac{\sigma^2\left( \phi_\lambda({\mathbf{x}}^0) - \phi_\lambda^*\right) }{\lambda(1-\rho^2)^3 \epsilon^4}\Big)$ iterations, leading to the same number of stochastic gradient samples
and communication rounds. Our sample complexity is optimal in its dependence on $\epsilon$ under the smoothness condition in Assumption~\ref{assu:prob}, as it matches the lower bound in \cite{arjevani2022lower}.
However, the dependence on $1-\\rho$ may not be optimal because of our possibly loose analysis, as the \\emph{deterministic} method with single communication per update in \\cite{scutari2019distributed} for nonconvex nonsmooth problems has a dependence $(1-\\rho)^2$ on the graph topology.\n\\end{remark}\n\n\n \n\n \n\n \n\\begin{theorem}[Convergence rate of CDProxSGT] \\label{thm:sect3thm}\n Under Assumptions \\ref{assu:prob} through \\ref{assu:compressor},\n let $\\{\\mathbf{X}^t\\}$ be generated from $\\mathrm{CDProxSGT}$ in Algorithm \\ref{alg:CDProxSGT} with\n ${\\mathbf{x}}_i^0 = {\\mathbf{x}}^0, \\forall\\, i \\in \\mathcal{N}$. Let $\\lambda = \\min \\big\\{\\frac{1}{4 L}, \\frac{ (1-\\alpha^2)^2}{9 L+41280}\\big\\}$, and suppose\n\\vspace{-0.1cm}\n\\begin{gather*}\n \\eta \\leq~ \\min\\left\\{ \\textstyle \\lambda,\n \\frac{(1-\\alpha^2)^2(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2}{18830\\max\\{1, L\\}}\\right\\}, \\\\\n\\gamma_x\\leq~ \\min\\left\\{ \\textstyle\n\\frac{1-\\alpha^2}{25}, \\frac{\\eta}{\\alpha\n\\right\\}, \\quad \n \\gamma_y\\leq ~ \\textstyle \n \\frac{(1-\\alpha^2)(1-\\widehat\\rho^2_x)(1-\\widehat\\rho^2_y)}{317}.\n\\end{gather*}\n\\vspace{-0.1cm}\nSelect $\\tau$ from $\\{0, 1, \\ldots, T-1\\}$ uniformly at random.\nThen \n\\vspace{-0.1cm}\n\\begin{equation*\n \\begin{aligned}\n &~ \\textstyle \\frac{1}{n} \\mathbb{E}\\left[\\sum_{i=1}^n \\|\\nabla\\phi_\\lambda({\\mathbf{x}}_i^\\tau)\\|^2 +\\frac{4}{\\lambda \\eta} \\|\\mathbf{X}^\\tau_\\perp\\|^2\\right] \\\\\n \\leq &~\\textstyle \\frac{8\\left(\\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{\\eta T} \n +\\frac{(50096n+48)\\eta \\sigma^2}{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} + \\frac{4176 \\eta \\mathbb{E}\\left[ \\|\\nabla \\mathbf{F}^0\\|^2\\right] }{n\\lambda T (1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)},\n\\end{aligned}\n\\vspace{-0.1cm}\n\\end{equation*}\nwhere $\\phi_\\lambda^* = \\min_{{\\mathbf{x}}} \\phi_\\lambda({\\mathbf{x}})> -\\infty$.\n\\end{theorem}\n\nBy Theorem~\\ref{thm:sect3thm}, we have the complexity result as follows.\n\n\\begin{corollary}[Iteration complexity]\nUnder the assumptions of Theorem \\ref{thm:sect3thm}, for a given $\\epsilon>0$, take \n\\begin{gather*}\n \\eta = \\textstyle \\min \\left\\{\\frac{1}{4 L}, \\frac{ (1-\\alpha^2)^2}{9 L+41280}, \\frac{(1-\\alpha^2)^2(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2}{18830\\max\\{1, L\\}}\\right.,\\\\ \\textstyle \\left. 
\\frac{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2}{2(50096n+48) \\sigma^2}\\right\\}, \\\\\n \\textstyle \\gamma_x = \\min\\left\\{ \\textstyle \n\\frac{1-\\alpha^2}{25}, \\frac{\\eta}{\\alpha}\\right\\}, \\quad\n \\gamma_y = \\frac{(1-\\alpha^2)(1-\\widehat\\rho^2_x)(1-\\widehat\\rho^2_y)}{317}.\n\\end{gather*}\nThen $\\mathrm{CDProxSGT}$ can find an expected $\\epsilon$-stationary point of \\eqref{eq:decentralized_problem} when $T\\geq T_\\epsilon^c$ where \n\\begin{align*} \n T_\\epsilon^c = \\textstyle \\left\\lceil\\frac{16\\left(\\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{\\eta \\epsilon^2} + \\frac{8352 \\eta \\mathbb{E}\\left[ \\|\\nabla \\mathbf{F}^0\\|^2\\right] }{n\\lambda (1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2} \\right\\rceil.\n\\end{align*}\n \n\n\n\\end{corollary}\n\n\\begin{remark}\nWhen the given tolerance $\\epsilon$ is small enough,\n$\\eta$ will take $\\frac{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)\\epsilon^2}{2(50096n+48) \\sigma^2}$ and $T_\\epsilon^c$ will be dominated by the first term. In this case, similar to DProxSGT in Remark \\ref{remark:DProxSGT}, CDProxSGT can find an expected $\\epsilon$-stationary solution of \\eqref{eq:decentralized_problem} in $O\\Big( \\frac{\\sigma^2\\left( \\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right) }{ \\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y) \\epsilon^4}\\Big)$ iterations.\n\\end{remark} \n\\section{Additional Details on FixupResNet20}\n\nFixupResNet20 \\cite{zhang2019fixup} is amended from the popular ResNet20 \\cite{he2016deep} by deleting the BatchNorm layers \\cite{ioffe2015batch}. The BatchNorm layers use the mean and variance of some hidden layers based on the data inputted into the models. In our experiment, the data on nodes are heterogeneous. If the models include BatchNorm layers, even all nodes have the same model parameters after training, their testing performance on the whole data would be different for different nodes because the mean and variance of the hidden layers are produced on the heterogeneous data. Thus we use FixupResNet20 instead of ResNet20. \n\n\n\n\n\n\n\n\\section{Some Key Existing Lemmas}\n\n \nFor $L$-smoothness function $f_i$, it holds for any ${\\mathbf{x}}, {\\mathbf{y}}\\in\\dom(r)$,\n\\begin{align}\\label{eq:assump-to-f_i}\n\\textstyle \\big|f_i({\\mathbf{y}}) - f_i({\\mathbf{x}}) - \\langle \\nabla f_i({\\mathbf{x}}), {\\mathbf{y}}-{\\mathbf{x}}\\rangle\\big| \\le \\frac{L}{2}\\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2.\n\\end{align} \n\nFrom the smoothness of $f_i$ in Assumption \\ref{assu:prob}, it follows that $f = \\frac{1}{n}f_i$ is also $ L$-smooth in $\\dom(r)$.\n\nWhen $f_i$ is $ L$-smooth in $\\dom(r)$, we have that $f_i(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex. \nSince $r(\\cdot)$ is convex, $\\phi_i(\\cdot) + \\frac{L}{2}\\|\\cdot\\|^2$ is convex, i.e., $\\phi_i$ is $L$-weakly convex for each $i$. So is $\\phi$. 
In the following, we give some lemmas about weakly convex functions.

The following result is from Lemma II.1 in \cite{chen2021distributed}.
 \begin{lemma}\label{lem:weak_convx}
 For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then for any ${\mathbf{x}}_1, {\mathbf{x}}_2, \ldots, {\mathbf{x}}_m\in\mathbb{R}^d$, it holds that
\[
\psi\left(\sum_{i=1}^m a_i{\mathbf{x}}_i\right)\leq \sum_{i=1}^m a_i \psi({\mathbf{x}}_i) + \frac{L}{2} \sum_{i=1}^{m-1} \sum_{j=i+1}^m a_i a_j \|{\mathbf{x}}_i-{\mathbf{x}}_j\|^2,
\]
where $a_i\geq 0$ for all $i$ and $\sum_{i=1}^m a_i=1$.
\end{lemma}

The first result below is from Lemma II.8 in \cite{chen2021distributed}; the nonexpansiveness of the proximal mapping of a closed convex function is well known.
\begin{lemma} \label{lem:prox_diff}
For any function $\psi$ on $\mathbb{R}^{d}$, if it is $L$-weakly convex, i.e., $\psi(\cdot) + \frac{L}{2}\|\cdot\|^2$ is convex, then the proximal mapping with $\lambda< \frac{1}{L}$ satisfies
\[
\|\prox_{\lambda \psi}({\mathbf{x}}_1)-\prox_{\lambda \psi}({\mathbf{x}}_2)\|\leq \frac{1}{1-\lambda L} \|{\mathbf{x}}_1-{\mathbf{x}}_2\|.
\]
For a closed convex function $r(\cdot)$, its proximal mapping is nonexpansive, i.e.,
\[
\|\prox_{r}({\mathbf{x}}_1)-\prox_{r}({\mathbf{x}}_2)\|\leq \|{\mathbf{x}}_1-{\mathbf{x}}_2\|.
\]
\end{lemma}

\begin{lemma}
For both $\mathrm{DProxSGT}$ in Algorithm \ref{alg:DProxSGT} and $\mathrm{CDProxSGT}$ in Algorithm \ref{alg:CDProxSGT}, it holds that
\begin{gather}
	\bar{\mathbf{y}}^t =\overline{\nabla} \mathbf{F}^t, \quad
	\bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right). \label{eq:x_y_mean}
\end{gather}
\end{lemma}
\begin{proof}
For DProxSGT in Algorithm \ref{alg:DProxSGT}, taking the average over the workers in \eqref{eq:y_half_update} to \eqref{eq:x_1_update} gives
\begin{align}
\bar{\mathbf{y}}^{t-\frac{1}{2}} = \bar{\mathbf{y}}^{t-1} + \overline{\nabla} \mathbf{F}^t - \overline{\nabla} \mathbf{F}^{t-1}, \quad
 \bar{\mathbf{y}}^t =\bar{\mathbf{y}}^{t-\frac{1}{2}}, \quad
 \bar{\mathbf{x}}^{t+\frac{1}{2}} = \frac{1}{n} \sum_{i=1}^n \prox_{\eta r}\left({\mathbf{x}}_i^t - \eta {\mathbf{y}}_i^{t}\right), \quad \bar{\mathbf{x}}^{t} = \bar{\mathbf{x}}^{t+\frac{1}{2}},\label{eq:proof_mean}
\end{align}
where $\mathbf{1}^\top\mathbf{W}=\mathbf{1}^\top$ from Assumption \ref{assu:mix_matrix} is used. With $\bar{\mathbf{y}}^{-1}=\overline{\nabla} \mathbf{F}^{-1}$, we have \eqref{eq:x_y_mean}.

Similarly, for CDProxSGT in Algorithm \ref{alg:CDProxSGT}, taking the average in \eqref{eq:alg3_1_matrix} to \eqref{eq:alg3_6_matrix} also gives \eqref{eq:proof_mean} and hence \eqref{eq:x_y_mean}.
\end{proof}

In the rest of the analysis, we define the Moreau envelope of $\phi$ for $\lambda\in(0,\frac{1}{L})$ as \begin{align*}
\phi_\lambda({\mathbf{x}}) = \min_{\mathbf{y}}\left\{\phi({\mathbf{y}}) + \frac{1}{2\lambda}\|{\mathbf{y}}-{\mathbf{x}}\|^2\right\}.
\n\\end{align*}\nDenote the minimizer as \n \\begin{align*}\n \\prox_{\\lambda \\phi}({\\mathbf{x}}):= \\argmin_{{\\mathbf{y}}} \\phi({\\mathbf{y}})+\\frac{1}{2\\lambda} \\|{\\mathbf{y}}-{\\mathbf{x}}\\|^2.\n \\end{align*}\nIn addition, we will use the notation $\\widehat{{\\mathbf{x}}}^t_i$ and $\\widehat{{\\mathbf{x}}}^{t+\\frac{1}{2}}_i$ that are defined by\n\\begin{align}\n\\widehat{{\\mathbf{x}}}^t_i = \\prox_{\\lambda \\phi}({\\mathbf{x}}^t_i),\\ \\widehat{{\\mathbf{x}}}^{t+\\frac{1}{2}}_i = \\prox_{\\lambda \\phi}({\\mathbf{x}}^{t+\\frac{1}{2}}_i),\\, \\forall\\, i\\in\\mathcal{N},\n\\label{eq:x_t_hat} \n\\end{align}\nwhere $\\lambda \\in(0,\\frac{1}{L})$.\n\n\n\\section{Convergence Analysis for DProxSGT} \\label{sec:proof_DProxSGT}\n\nIn this section, we analyze the convergence rate of DProxSGT in Algorithm \\ref{alg:DProxSGT}. For better readability, we use the matrix form of Algorithm \\ref{alg:DProxSGT}. By the notation introduced in section~\\ref{sec:notation}, we can write \\eqref{eq:y_half_update}-\\eqref{eq:x_1_update} in the more compact matrix form:\n\\begin{align}\n & \\Y^{t-\\frac{1}{2}} = \\Y^{t-1} + \\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1},\\label{eq:y_half_update_matrix}\n \\\\\n & \\Y^t = \\Y^{t-\\frac{1}{2}}\\mathbf{W},\\label{eq:y_update_matrix}\\\\\n & \\mathbf{X}^{t+\\frac{1}{2}} =\\prox_{\\eta r} \\left(\\mathbf{X}^t - \\eta \\Y^{t}\\right) \\triangleq [\\prox_{\\eta r} \\left({\\mathbf{x}}_1^t - \\eta {\\mathbf{y}}_1^{t}\\right),\\ldots,\\prox_{\\eta r} \\left({\\mathbf{x}}_n^t - \\eta {\\mathbf{y}}_n^{t}\\right)],\\label{eq:x_half_update_matrix}\n \\\\\n & \\mathbf{X}^{t+1} = \\mathbf{X}^{t+\\frac{1}{2}}\\mathbf{W}. \\label{eq:x_1_update_matrix}\n\\end{align} \n\nBelow, we first bound $\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2$ in Lemma~\\ref{lem:Xhat_Xhalf}. Then we give the bounds of the consensus error $\\|\\mathbf{X}_\\perp^t\\|$ and $\\|\\Y_\\perp^t\\|$ and $\\phi_\\lambda({\\mathbf{x}}_i^{t+1})$ after one step in Lemmas~\\ref{lem:XI_J}, \\ref{lem:YI_J}, and \\ref{lem:weak_convex}. Finally, we prove Theorem \\ref{thm:sec2} by constructing a Lyapunov function that involves $\\|\\mathbf{X}_\\perp^t\\|$, $\\|\\Y_\\perp^t\\|$, and $\\phi_\\lambda({\\mathbf{x}}_i^{t+1})$.\n\n\n\n\n \n \n\\begin{lemma} \\label{lem:Xhat_Xhalf} Let $\\eta\\leq \\lambda \\leq \\frac{1}{4 L}$. Then\n\\begin{align}\n \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] \\leq &~\n4 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left( 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2. 
\\label{eq:hatx_xprox}\n\\end{align} \n\\end{lemma} \n\\begin{proof}\nBy the definition of $\\widehat{\\mathbf{x}}^t_i$ in \\eqref{eq:x_t_hat}, we have $0 \\in \\nabla f(\\widehat{\\mathbf{x}}^t_i) + \\partial r(\\widehat{\\mathbf{x}}^t_i) + \\frac{1}{\\lambda}(\\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}^t_i)$, i.e.,\n\\[ \\textstyle\n0 \\in \\partial r(\\widehat{\\mathbf{x}}^t_i) + \\frac{1}{\\eta} \\left(\\frac{\\eta}{\\lambda} \\widehat{\\mathbf{x}}^t_i-\\frac{\\eta}{\\lambda}{\\mathbf{x}}^t_i + \\eta \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\right) = \\partial r(\\widehat{\\mathbf{x}}^t_i) + \\frac{1}{\\eta} \\left(\\widehat{\\mathbf{x}}^t_i - \\left( \\frac{\\eta}{\\lambda}{\\mathbf{x}}^t_i - \\eta \\nabla f(\\widehat{\\mathbf{x}}^t_i)+ \\left(1- \\frac{\\eta}{\\lambda}\\right) \\widehat{\\mathbf{x}}^t_i \\right)\\right).\n\\]\nThus we have $\\widehat{\\mathbf{x}}^t_i = \\prox_{\\eta r}\\left( \\frac{\\eta}{\\lambda}{\\mathbf{x}}^t_i - \\eta \\nabla f(\\widehat{\\mathbf{x}}^t_i) + \\left(1- \\frac{\\eta}{\\lambda}\\right)\\widehat{\\mathbf{x}}^t_i\\right)$. Then by \\eqref{eq:x_half_update}, the convexity of $r$, and Lemma \\ref{lem:prox_diff},\n\\begin{align}\n &~ \\textstyle \\|\\widehat{\\mathbf{x}}_i^{t}-{\\mathbf{x}}_i^{t+\\frac{1}{2}}\\|^2\n = \\left\\| \\prox_{\\eta r}\\left( \\frac{\\eta}{\\lambda}{\\mathbf{x}}^t_i - \\eta \\nabla f(\\widehat{\\mathbf{x}}^t_i) + \\left(1- \\frac{\\eta}{\\lambda}\\right)\\widehat{\\mathbf{x}}^t_i\\right)- \\prox_{\\eta r} \\left(\n {\\mathbf{x}}_i^t - \\eta {\\mathbf{y}}^t_i\\right) \\right\\|^2 \\nonumber\\\\\n \\leq &~ \\textstyle \\left\\| \\frac{\\eta}{\\lambda}{\\mathbf{x}}^t_i - \\eta \\nabla f(\\widehat{\\mathbf{x}}^t_i) + \\left(1- \\frac{\\eta}{\\lambda}\\right)\\widehat{\\mathbf{x}}^t_i - ({\\mathbf{x}}^t_i-\\eta{\\mathbf{y}}^t_i) \\right\\|^2 = \\left\\| \\left(1- \\frac{\\eta}{\\lambda}\\right)(\\widehat{\\mathbf{x}}^t_i -{\\mathbf{x}}^t_i )- \\eta (\\nabla f(\\widehat{\\mathbf{x}}^t_i) -{\\mathbf{y}}^t_i) \\right\\|^2 \\nonumber\\\\\n = & ~ \\textstyle \\left(1- \\frac{\\eta}{\\lambda}\\right)^2 \\left\\| \\widehat{\\mathbf{x}}^t_i - {\\mathbf{x}}^t_i \\right\\|^2 + \\eta^2\\left\\| {\\mathbf{y}}^t_i- \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\right\\|^2 + 2 \\left(1- \\frac{\\eta}{\\lambda}\\right)\\eta \\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, {\\mathbf{y}}_i^t-\\nabla f({\\mathbf{x}}^t_i) + \\nabla f({\\mathbf{x}}^t_i)-\\nabla f(\\widehat{\\mathbf{x}}^t_i) \\right\\rangle\\nonumber\\\\\n\\leq & ~ \\textstyle\\left(\\left(1- \\frac{\\eta}{\\lambda}\\right)^2 + 2\\left(1- \\frac{\\eta}{\\lambda}\\right)\\eta L\n\\right) \\left\\| \\widehat{\\mathbf{x}}^t_i - {\\mathbf{x}}^t_i \\right\\|^2 + \\eta^2\\left\\| {\\mathbf{y}}^t_i- \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\right\\|^2 + 2 \\left(1- \\frac{\\eta}{\\lambda}\\right)\\eta \\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, {\\mathbf{y}}_i^t-\\nabla f({\\mathbf{x}}^t_i) \\right\\rangle , \\label{eq:lem1.6.1} \n \\end{align}\nwhere\nthe second inequality holds by $\\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, \\nabla f({\\mathbf{x}}^t_i)-\\nabla f(\\widehat{\\mathbf{x}}^t_i) \\right\\rangle \\leq L\\left\\|\\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t\\right\\|^2$. 
\nThe second term in the right hand side of \\eqref{eq:lem1.6.1} can be bounded by\n\\begin{align*}\n&~ \\textstyle \\mathbb{E}_t [\\| {\\mathbf{y}}^t_i- \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\|^2\\big] \\overset{\\eqref{eq:x_y_mean}}{=} \\mathbb{E}_t\\big[\\| {\\mathbf{y}}^t_i- \\bar{\\mathbf{y}}^t + \\overline{\\nabla} \\mathbf{F}^t - \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\|^2\\big] \n\\leq 2\\mathbb{E}_t\\big[\\| {\\mathbf{y}}^t_i- \\bar{\\mathbf{y}}^t \\|^2\\big] + 2\\mathbb{E}_t\\big[\\big\\| \\overline{\\nabla} \\mathbf{F}^t - \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\big\\|^2\\big] \\\\\n= &~2\\mathbb{E}_t\\big[\\| {\\mathbf{y}}^t_i- \\bar{\\mathbf{y}}^t \\|^2\\big] + 2\\mathbb{E}_t\\big[\\| \\overline{\\nabla} \\mathbf{F}^t - \\overline{\\nabla} \\mathbf{f}^t \\|^2\\big]+ 2\\| \\overline{\\nabla} \\mathbf{f}^t - \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\|^2 \\\\\n\\leq&~ 2\\mathbb{E}_t[ \\|{\\mathbf{y}}_i^t-\\bar{\\mathbf{y}}^t\\|^2\\big] + \\frac{2}{n^2}\\sum_{j=1}^n \\mathbb{E}_t\\big[\\|\\nabla F_j({\\mathbf{x}}_j^t,\\xi_j^t)-\\nabla f_j({\\mathbf{x}}_j^t)\\|^2\\big] + 4 \\| \\overline{\\nabla} \\mathbf{f}^t -\\nabla f({\\mathbf{x}}^t_i) \\|^2 + 4\\| \\nabla f({\\mathbf{x}}^t_i)- \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\|^2\\\\\n\\leq &~ 2\\mathbb{E}_t[ \\|{\\mathbf{y}}_i^t-\\bar{\\mathbf{y}}^t\\|^2\\big] + 2 \\frac{\\sigma^2}{n} + 4 \\| \\overline{\\nabla} \\mathbf{f}^t -\\nabla f({\\mathbf{x}}^t_i) \\|^2 + 4 L^2 \\|{\\mathbf{x}}^t_i -\\widehat{\\mathbf{x}}^t_i\\|^2,\n\\end{align*}\nwhere the second equality holds by the unbiasedness of stochastic gradients, and the second inequality holds also by the independence between $\\xi_i^t$'s. \nIn the last inequality, we use the bound of the variance of stochastic gradients, and the $L$-smooth assumption.\nTaking the full expectation over the above inequality and summing for all $i$ give\n\\begin{align}\n\\sum_{i=1}^n\\mathbb{E}\\big[\\| {\\mathbf{y}}^t_i- \\nabla f(\\widehat{\\mathbf{x}}^t_i) \\|^2 ] \n\\leq 2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2 ] +2\\sigma^2 + 8 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2 ] +4 L^2 \\mathbb{E}\\big[\\| \\mathbf{X}^t - \\widehat\\mathbf{X}^t\\|^2 ]. \\label{eq:lem161_1}\n\\end{align}\nTo have the inequality above, we have used\n\\begin{align}\n&~ \\sum_{i=1}^n \\left\\| \\overline{\\nabla} \\mathbf{f}^t -\\nabla f({\\mathbf{x}}^t_i) \\right\\|^2 \n\\leq \\frac{1}{n} \\sum_{i=1}^n \\sum_{j=1}^n \\left\\|\\nabla f_j({\\mathbf{x}}_j^t) -\\nabla f_j({\\mathbf{x}}^t_i) \\right\\|^2 \n\\leq \\frac{ L^2}{n}\\sum_{i=1}^n\\sum_{j=1}^n \n\\left\\|{\\mathbf{x}}_j^t - {\\mathbf{x}}^t_i \\right\\|^2 \\nonumber \\\\\n= &~ \\frac{ L^2}{n}\\sum_{i=1}^n\\sum_{j=1}^n \\left( \\left\\|{\\mathbf{x}}_j^t - \\bar{\\mathbf{x}}^t\n\\right\\|^2 +\\left\\|\\bar{\\mathbf{x}}^t-{\\mathbf{x}}^t_i \\right\\|^2 + 2\\left\\langle {\\mathbf{x}}_j^t - \\bar{\\mathbf{x}}^t, \\bar{\\mathbf{x}}^t-{\\mathbf{x}}^t_i\\right\\rangle\\right)\n=\n2 L^2 \\left\\|\\mathbf{X}^t_\\perp\\right\\|^2, \\label{eq:sumsum}\n\\end{align}\nwhere the last equality holds by $ \\frac{1}{n} \\sum_{i=1}^n\\sum_{j=1}^n \\left\\langle {\\mathbf{x}}_j^t - \\bar{\\mathbf{x}}^t, \\bar{\\mathbf{x}}^t-{\\mathbf{x}}^t_i\\right\\rangle = \\sum_{i=1}^n \\left\\langle \\frac{1}{n} \\sum_{j=1}^n ({\\mathbf{x}}_j^t - \\bar{\\mathbf{x}}^t), \\bar{\\mathbf{x}}^t-{\\mathbf{x}}^t_i\\right\\rangle =\\sum_{i=1}^n\\left\\langle \\bar{\\mathbf{x}}^t - \\bar{\\mathbf{x}}^t, \\bar{\\mathbf{x}}^t-{\\mathbf{x}}^t_i\\right\\rangle=0$ from the definition of $\\bar{\\mathbf{x}}$. 
\n\nAbout the third term in the right hand side of \\eqref{eq:lem1.6.1}, we have\n\\begin{align}\n & ~\\sum_{i=1}^n \\mathbb{E}\\left[ \\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, {\\mathbf{y}}_i^t-\\nabla f({\\mathbf{x}}^t_i) \\right\\rangle\\right] \\overset{\\eqref{eq:x_y_mean}}{=} \\sum_{i=1}^n \\mathbb{E}\\left[\\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, {\\mathbf{y}}_i^t -\\bar{\\mathbf{y}}^t+\\overline{\\nabla} \\mathbf{F}^t -\\nabla f({\\mathbf{x}}^t_i) \\right\\rangle\\right] \\nonumber \\\\\n= & ~ \\textstyle \\sum_{i=1}^n \\mathbb{E}\\big[ \\langle\\widehat{\\mathbf{x}}^t_i -\\bar{\\widehat{\\mathbf{x}}}^t, {\\mathbf{y}}_i^t -\\bar{\\mathbf{y}}^t \\rangle\\big] + \\sum_{i=1}^n \\mathbb{E}\\big[\\langle \\bar{{\\mathbf{x}}}^t - {\\mathbf{x}}_i^t,{\\mathbf{y}}_i^t -\\bar{\\mathbf{y}}^t \\rangle\\big] + \\sum_{i=1}^n \\mathbb{E}\\left[ \\left\\langle \\widehat{\\mathbf{x}}^t_i-{\\mathbf{x}}_i^t, \\mathbb{E}_{t} \\left[\\overline{\\nabla} \\mathbf{F}^t\\right] -\\nabla f({\\mathbf{x}}^t_i)\\right\\rangle\\right] \\nonumber \\\\\n \\leq &~ \\frac{1}{2\\eta} \\left( \\textstyle \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t_\\perp\\|^2\\big]+ \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\\right) + \\eta \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + \\textstyle L\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t-\\mathbf{X}^t\\|^2\\big] + \\frac{1}{4 L} \\sum_{i=1}^n \\mathbb{E}\\big[\\|\\overline{\\nabla} {\\mathbf{f}}^t -\\nabla f({\\mathbf{x}}^t_i)\\|^2\\big] \n \\nonumber \\\\\n\\leq &~ \\left(\\textstyle\\frac{1}{2\\eta(1-\\lambda L)^2} + \\frac{1}{2\\eta} + \\frac{L}{2}\\right)\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\eta\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + L\\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t\\|^2\\big],\\label{eq:lem161_2}\n\\end{align}\nwhere $\\textstyle \\sum_{i=1}^n \\big\\langle \\bar{\\widehat{\\mathbf{x}}}^t,{\\mathbf{y}}_i^t -\\bar{\\mathbf{y}}^t \\big\\rangle = 0$ and $\\sum_{i=1}^n \\left\\langle \\bar{{\\mathbf{x}}}^t,{\\mathbf{y}}_i^t -\\bar{\\mathbf{y}}^t \\right\\rangle = 0$ is used in the second equality, $\\mathbb{E}_{t} \\left[\\overline{\\nabla} \\mathbf{F}^t\\right] = \\overline{\\nabla} {\\mathbf{f}}^t$ is used in the first inequality, and $\\|\\widehat\\mathbf{X}^t_\\perp\\|^2 =\\left\\|\\left(\\prox_{\\lambda \\phi}(\\mathbf{X}^t)- \\prox_{\\lambda \\phi}(\\bar{\\mathbf{x}}^t)\\mathbf{1}^\\top\\right) (\\mathbf{I}-\\mathbf{J})\\right\\|^2\\leq \\frac{1}{(1-\\lambda L)^2}\\|\\mathbf{X}^t-\\bar\\mathbf{X}^t\\|^2$ and \\eqref{eq:sumsum} are used in the last inequality.\n\nNow we can bound the summation of \\eqref{eq:lem1.6.1} by using \\eqref{eq:lem161_1} and \\eqref{eq:lem161_2}:\n\\begin{align*}\n& ~ \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big]\\\\\n \\leq & ~ \\left(\\textstyle \\left(1- \\frac{\\eta}{\\lambda}\\right)^2 + 2\\left(1- \\frac{\\eta}{\\lambda}\\right)\\eta L\n\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t \\|^2\\big] \\\\\n& ~ + \\eta^2 \\left(2\\mathbb{E}[ \\|\\Y^t_\\perp\\|^2\\big] +2\\sigma^2 + 8 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] +4 L^2 \\mathbb{E}\\big[\\| \\mathbf{X}^t - \\widehat\\mathbf{X}^t\\|^2\\big]\\right) \\\\\n& ~ + \\textstyle 2 \\left(1- \\frac{\\eta}{\\lambda}\\right)\\eta \\left(\\textstyle\\left(\\frac{1}{2\\eta(1-\\lambda L)^2} + \\frac{1}{2\\eta} + \\frac{ L}{2} \\right)\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + 
\\eta\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + L\\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t\\|^2\\big]\\right) \\\\ \n= & ~ \\textstyle \\left(1 - 2\\eta (\\frac{1}{\\lambda} - 2 L) + \\frac{\\eta^2}{\\lambda} (\\frac{1}{\\lambda} - 2 L) + 2 L \\eta^2(-\\frac{1}{\\lambda}+2 L)\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 2\\eta^2\\sigma^2 \\nonumber\\\\\n & ~ + \\textstyle \\left( \\left(1- \\frac{\\eta}{\\lambda}\\right) (1+\\frac{1}{(1-\\lambda L)^2}+ \\eta L) + 8\\eta^2 L^2 \\right) \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\n + 2 (2- \\frac{\\eta}{\\lambda}) \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big].\n\\end{align*} \nWith $\\eta \\leq \\lambda \\leq \\frac{1}{4 L}$, we have $\\frac{1}{(1-\\lambda L)^2}\\leq 2$ and \n\\eqref{eq:hatx_xprox} follows from the inequality above.\n \n\\end{proof}\n \n\\begin{lemma}\\label{lem:XI_J} \nThe consensus error of $\\mathbf{X}$ satisfies the following inequality\n\\begin{align} \n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] \\leq \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{2\\rho^2 \\eta^2 }{1-\\rho^2} \\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big]. \\label{eq:X_consensus}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n With the updates \\eqref{eq:x_half_update} and \\eqref{eq:x_1_update}, we have\n\\begin{align*}\n &~ \n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t-\\frac{1}{2}}\\mathbf{W}(\\mathbf{I}- \\mathbf{J})\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t-\\frac{1}{2}} (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n =&~ \\mathbb{E}\\big[\\| \\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right) (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n =&~ \\mathbb{E}\\big[\\| \\left(\\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right)\\mathbf{1}^\\top\\right) (\\mathbf{W}-\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\mathbb{E}\\big[\\|\\prox_{\\eta r} \\left(\\mathbf{X}^{t-1} - \\eta \\Y^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right)\\mathbf{1}^\\top\\|^2 \\|(\\mathbf{W}-\\mathbf{J})\\|^2_2] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\left[ \\textstyle \\sum_{i=1}^n\\| \\prox_{\\eta r} \\left({\\mathbf{x}}_i^{t-1} - \\eta {\\mathbf{y}}_i^{t-1}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right) \\|^2\\right] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\left[ \\textstyle \\sum_{i=1}^n\\|\\left({\\mathbf{x}}_i^{t-1} - \\eta {\\mathbf{y}}_i^{t-1}\\right)-\\left(\\bar{\\mathbf{x}}^{t-1} - \\eta \\bar{\\mathbf{y}}^{t-1}\\right) \\|^2\\right] = \\rho^2 \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp - \\eta \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber\\\\\n \\leq &~ \\textstyle \\big(\\textstyle \\rho^2 + \\frac{1-\\rho^2}{2}\\big) \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\big( \\textstyle\\rho^2 + \\frac{2\\rho^4}{1-\\rho^2}\\big) \\eta^2\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber\\\\\n = &~\\textstyle \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{1+\\rho^2}{1-\\rho^2} \\rho^2 \\eta^2\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big] \\nonumber \\\\\n \\leq &~\\textstyle \\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t-1}_\\perp \\|^2\\big]+ \\frac{2\\rho^2 \\eta^2 }{1-\\rho^2} 
\\mathbb{E}\\big[\\| \\Y^{t-1}_\\perp \\|^2\\big],\n\\end{align*} \n\nwhere we have used $\\mathbf{1}^\\top (\\mathbf{W}-\\mathbf{J})=\\mathbf{0}$ in the third equality, $\\|\\mathbf{W}-\\mathbf{J}\\|_2\\leq \\rho$ in the second inequality, and Lemma \\ref{lem:prox_diff} in the third inequality, and $\\rho\\leq 1$ is used in the last inequality.\n\\end{proof}\n \n\\begin{lemma}\\label{lem:YI_J}\nLet $\\eta\\leq \\min\\{\\lambda, \\frac{1-\\rho^2}{4\\sqrt{6} \\rho L} \\} $ and $\\lambda \\leq\\frac{1}{4 L}$. The consensus error of $\\Y$ satisfies\n\\begin{align} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\frac{48\\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\mathbf{X}^{t-1}_\\perp\\|^2\\big] \\!+\\! \\frac{3\\!+\\!\\rho^2}{4} \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] \\!+\\! \\frac{12\\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t-1}-\\mathbf{X}^{t-1} \\|^2\\big] \\!+\\! 6 n\\sigma^2. \\label{eq:Y_consensus}\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nBy the updates \\eqref{eq:y_half_update} and \\eqref{eq:y_update}, we have \n\\begin{align}\t \n &~ \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] = \\mathbb{E}\\big[\\|\\Y^{t-\\frac{1}{2}}(\\mathbf{W}- \\mathbf{J})\\|^2\\big] \n = \\mathbb{E}\\big[\\| \\Y^{t-1}(\\mathbf{W} -\\mathbf{J}) + (\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W} -\\mathbf{J})\\|^2\\big] \\nonumber\\\\\n = &~ \\mathbb{E}\\big[\\|\\Y^{t-1}(\\mathbf{I}-\\mathbf{J})(\\mathbf{W} -\\mathbf{J})\\|^2\\big] + \\mathbb{E}\\big[\\|(\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W}-\\mathbf{J}) \\|^2\\big] + 2\\mathbb{E}\\big[\\langle \\Y^{t-1} (\\mathbf{W} -\\mathbf{J}), (\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}) (\\mathbf{W}-\\mathbf{J}) \\rangle\\big] \\nonumber\\\\\n \\leq &~ \\rho^2 \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\rho^2 \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}\\|^2\\big] + 2\\mathbb{E}\\big[\\langle \\Y^{t-1} (\\mathbf{W} -\\mathbf{J}),(\\nabla \\mathbf{f}^t - \\nabla \\mathbf{F}^{t-1})(\\mathbf{W}-\\mathbf{J}) \\rangle\\big], \\label{eq:y_cons1}\n\\end{align} \nwhere we have used $\\mathbf{J}\\mathbf{W}=\\mathbf{J}\\J=\\mathbf{J}$, $\\|\\mathbf{W}-\\mathbf{J}\\|_2\\leq \\rho$\nand $\\mathbb{E}_t[\\nabla \\mathbf{F}^t] = \\nabla {\\mathbf{f}}^t$. \nFor the second term on the right hand side of \\eqref{eq:y_cons1}, we have\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}\\|^2\\big] = \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t+\\nabla \\mathbf{f}^t -\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber\\\\\n \\overset{\\mathbb{E}_t[\\nabla \\mathbf{F}^t] = \\nabla \\mathbf{f}^t}{=}&~\n \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t\\|^2\\big]+\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^t- \\nabla \\mathbf{f}^{t-1}+\\nabla \\mathbf{f}^{t-1}-\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber \\\\ \n\\leq &~ \\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^t - \\nabla \\mathbf{f}^t\\|^2\\big]+2\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^t- \\nabla \\mathbf{f}^{t-1}\\|^2\\big]+2\\mathbb{E}\\big[\\|\\nabla \\mathbf{f}^{t-1}-\\nabla \\mathbf{F}^{t-1}\\|^2\\big] \\nonumber\\\\\n \\leq &~ 3 n \\sigma^2 + 2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\mathbf{X}^{t-1}\\|^2\\big]. 
\label{eq:y_cons12}
\end{align}

For the third term on the right hand side of \eqref{eq:y_cons1}, we have
\begin{align}
&~2\mathbb{E}\big[\langle \Y^{t-1} (\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] +2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 &~ +2\mathbb{E}\big[\langle (\Y^{t-2} + \nabla \mathbf{F}^{t-1} - \nabla \mathbf{F}^{t-2})\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 =&~2\mathbb{E}\big[\langle \Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 &~ +2\mathbb{E}\big[\langle (\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J}), (\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J}) \rangle\big] \nonumber \\
 \leq &~2\mathbb{E}\big[\|\Y^{t-1}(\mathbf{I}-\mathbf{J})(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1})(\mathbf{W}-\mathbf{J}) \|\big] \nonumber \\
 &~ +2\mathbb{E}\big[\|(\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1} )\mathbf{W}(\mathbf{W} -\mathbf{J})\|\cdot\|(\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1})(\mathbf{W}-\mathbf{J})\|\big] \nonumber \\
 \leq&~ 2\rho^2\mathbb{E}\big[\| \Y^{t-1}_\perp\|\cdot\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|\big] + 2\rho^2\mathbb{E}\big[\|\nabla \mathbf{F}^{t-1} - \nabla \mathbf{f}^{t-1}\|^2\big] \nonumber\\
 \leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4}{1-\rho^2}\mathbb{E}\big[\|\nabla \mathbf{f}^t - \nabla \mathbf{f}^{t-1}\|^2\big] + 2\rho^2 n \sigma^2 \nonumber \\
\leq &~ \textstyle\frac{1-\rho^2}{2} \mathbb{E}\big[\| \Y^{t-1}_\perp\|^2\big]+\frac{2\rho^4 L^2}{1-\rho^2} \mathbb{E}\big[\| \mathbf{X}^t - \mathbf{X}^{t-1}\|^2\big]+ 2\rho^2 n \sigma^2, \label{eq:y_cons13}
\end{align}
where the second equality holds by $\mathbf{W}-\mathbf{J}=(\mathbf{I}-\mathbf{J})(\mathbf{W}-\mathbf{J})$, \eqref{eq:y_half_update} and \eqref{eq:y_update}; the third equality holds because $\Y^{t-2}$, $\nabla \mathbf{F}^{t-2}$, and $\nabla \mathbf{f}^{t-1}$ do not depend on the $\xi_i^{t-1}$'s while $\mathbb{E}_{t-1}\big[\nabla \mathbf{f}^{t-1} - \nabla \mathbf{F}^{t-1}\big]=\mathbf{0}$; and the second inequality holds because $\|\mathbf{W}-\mathbf{J}\|_2\leq \rho$ and $\|\mathbf{W}\|_2\leq 1$.
\nPlugging \\eqref{eq:y_cons12} and \\eqref{eq:y_cons13} into \\eqref{eq:y_cons1}, we have\n\\begin{align} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\textstyle\\frac{1+\\rho^2}{2} \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\frac{2 \\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\| \\mathbf{X}^t - \\mathbf{X}^{t-1}\\|^2\\big] + 5 \\rho^2 n \\sigma^2 , \\label{eq:y_cons2} \n \\end{align}\nwhere we have used $1+\\frac{\\rho^2}{1-\\rho^2} = \\frac{1}{1-\\rho^2 }$.\nFor the second term in the right hand side of \\eqref{eq:y_cons2}, we have\n\\begin{align}\n&~ \\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2 = \\|\\mathbf{X}^{t+\\frac{1}{2}}\\mathbf{W}-\\mathbf{X}^t\\|^2 =\n\\|(\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t)\\mathbf{W} +(\\widehat\\mathbf{X}^t-\\mathbf{X}^t)\\mathbf{W} + \\mathbf{X}^t (\\mathbf{W}-\\mathbf{I})\\|^2 \\nonumber \\\\\n\\leq &~ 3\\|(\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t)\\mathbf{W}\\|^2 +3\\|(\\widehat\\mathbf{X}^t-\\mathbf{X}^t)\\mathbf{W}\\|^2 + 3\\|\\mathbf{X}^t(\\mathbf{I}-\\mathbf{J})(\\mathbf{W}-\\mathbf{I})\\|^2 \\nonumber \\\\\n\\leq &~ 3\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2 +3\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2 + 12\\|\\mathbf{X}^t_\\perp\\|^2,\\label{eq:Xplus1-X}\n\\end{align} \nwhere in the first inequality we have used $\\mathbf{X}^t (\\mathbf{W}-\\mathbf{I})=\\mathbf{X}^t(\\mathbf{I}-\\mathbf{J})(\\mathbf{W}-\\mathbf{I})$ from $\\mathbf{J}(\\mathbf{W}-\\mathbf{I}) = \\mathbf{J}-\\mathbf{J}$, and in the second inequality we have used $\\|\\mathbf{W}\\|_2\\leq 1$ and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$.\n\nTaking expectation over both sides of \\eqref{eq:Xplus1-X} and using \\eqref{eq:hatx_xprox}, we have\n\\begin{align*}\n&~ \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\\\\n\\le &~3 \\left( \\textstyle 4 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left( 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2\\right) +3 \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 12 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\\\\\n= &~ 3 \\textstyle \\left(2 -\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +12\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 6\\eta^2\\sigma^2 + 24\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big].\n\\end{align*}\nPlugging the inequality above into \\eqref{eq:y_cons2} gives \n\\begin{align*} \n \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \n \\leq &~ \\left(\\textstyle\\frac{1+\\rho^2}{2} + \\frac{24 \\rho^2 L^2\\eta^2 }{1-\\rho^2 } \\right) \\mathbb{E}\\big[\\|\\Y^{t-1}_\\perp \\|^2\\big] + \\textstyle 5 \\rho^2 n\\sigma^2 +\\frac{12 \\rho^2 L^2 \\eta^2 \\sigma^2 }{1-\\rho^2 } \\nonumber \\\\\n &~\\textstyle + \\frac{6\\rho^2 L^2 }{1-\\rho^2 }\\left( \\textstyle 2- \\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t-1}-\\mathbf{X}^{t-1} \\|^2\\big] + \\frac{48 \\rho^2 L^2 }{1-\\rho^2 } \\mathbb{E}\\big[\\|\\mathbf{X}^{t-1}_\\perp\\|^2\\big].\n\\end{align*}\nBy $\\rho<1$ and $ \\eta \\leq \\frac{1-\\rho^2}{4\\sqrt{6} \\rho L}$, we have \n$\\frac{24 \\rho^2 L^2 \\eta^2}{1-\\rho^2 } \\leq \\frac{1-\\rho^2}{4}$ and $\\frac{12 \\rho^2 L^2 \\eta^2}{1-\\rho^2 } \\leq \\frac{1-\\rho^2}{8}\\leq n$, and further \\eqref{eq:Y_consensus}.\n\\end{proof} \n\n\n\\begin{lemma}\\label{lem:weak_convex} 
\nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$. It holds\n\\begin{align}\n \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})]\n\\leq &~ \\sum_{i=1}^n \\mathbb{E}[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})] + \\frac{4}{\\lambda}\n \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{4 \\eta^2}{\\lambda} \\mathbb{E}[ \\|\\Y^t_\\perp\\|^2\\big] - \\frac{\\eta}{4\\lambda^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t \\|^2\\big] + \\frac{\\eta^2\\sigma^2}{\\lambda}. \\label{eq:phi_update}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nBy the definition in \\eqref{eq:x_t_hat\n, the update in \\eqref{eq:x_1_update}, the $ L$-weakly convexity of $\\phi$, and the convexity of $\\|\\cdot\\|^2$, we have\n\\begin{align}\n&~\\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\overset{\\eqref{eq:x_t_hat}}{=} \\phi(\\widehat{\\mathbf{x}}_i^{t+1})+{\\textstyle \\frac{1}{2\\lambda} }\\|\\widehat{\\mathbf{x}}_i^{t+1}-{\\mathbf{x}}_i^{t+1}\\|^2 \\overset{\\eqref{eq:x_1_update}}{\\leq} \\phi\\bigg(\\sum_{j=1}^n\\mathbf{W}_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)+{ \\frac{1}{2\\lambda}} \\bigg\\|\\sum_{j=1}^n \\mathbf{W}_{ji}\\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2 \\nonumber \\\\\n&~\\overset{\\mbox{Lemma \\ref{lem:weak_convx}} }{\\leq} \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}) +{ \\frac{L}{2} }\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2+{ \\frac{1}{2\\lambda} }\\sum_{j=1}^n \\mathbf{W}_{ji} \\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2 \\nonumber \\\\\n&~\\leq \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) + \\frac{1}{4\\lambda} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2, \\label{eq:phi_update_0}\n\\end{align}\nwhere\nin the last inequality we use $ \\phi(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}) + \\frac{1}{2\\lambda} \\|(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2 = \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}})$, $\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\leq \\frac{1}{(1-\\lambda L)^2}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2$ from Lemma \\ref{lem:prox_diff}, $\\frac{1}{(1-\\lambda L)^2}\\leq 2$ and $ L \\leq \\frac{1}{4\\lambda}$.\nFor the first term on the right hand side of \\eqref{eq:phi_update_0}, with $\\sum_{i=1}^n \\mathbf{W}_{ji}=1$, we have\n\\begin{align}\n \\sum_{i=1}^n \\sum_{j=1}^n \\mathbf{W}_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) = &~ \n \\sum_{i=1}^n \\phi_\\lambda({\\mathbf{x}}_i^{t+\\frac{1}{2}}) \n\\leq \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) + { \\frac{1}{2\\lambda}} \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - { \\frac{1}{2\\lambda}} \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2, \\label{eq:phi_lambda} \n\\end{align}\nwhere we have used\n$\n\\phi_\\lambda({\\mathbf{x}}_i^{t+\\frac{1}{2}}) \n\\leq \\phi(\\widehat{\\mathbf{x}}_i^{t})+\\frac{1}{2\\lambda} \\|\\widehat{\\mathbf{x}}_i^{t}-{\\mathbf{x}}_i^{t+\\frac{1}{2}}\\|^2$ and $\\phi_\\lambda({\\mathbf{x}}_i^{t}) = \\phi( \\widehat{\\mathbf{x}}_i^{t}) + \\frac{1}{2\\lambda} \\|\\widehat{\\mathbf{x}}_i^{t}-{\\mathbf{x}}_i^t\\|$.\nFor the second term on the right hand 
side of \\eqref{eq:phi_update_0}, with Lemma \\ref{lem:prox_diff} and \\eqref{eq:x_half_update}, we have\n\\begin{align}\n&~\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2 = \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|\\prox_{\\eta r}({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-\\prox_{\\eta r}({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n \\leq &~ \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n = &~ \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)+(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n\\leq&~ 2\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 + 2\\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\mathbf{W}_{ji}\\mathbf{W}_{li}\\|(\\bar{{\\mathbf{x}}}^{t}-\\eta\\bar{{\\mathbf{y}}}^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber\\\\\n\\leq&~ 2\\sum_{i=1}^n\\sum_{j=1}^{n-1} \\mathbf{W}_{ji} \\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 + 2\\sum_{i=1}^n \\sum_{l=2}^n \\mathbf{W}_{li}\\|(\\bar{{\\mathbf{x}}}^{t}-\\eta\\bar{{\\mathbf{y}}}^{t})-({\\mathbf{x}}_l^{t}-\\eta{\\mathbf{y}}_l^t)\\|^2 \\nonumber \\\\\n\\leq&~4 \\sum_{j=1}^{n} \\|({\\mathbf{x}}_j^{t}-\\eta{\\mathbf{y}}_j^{t})-(\\bar{\\mathbf{x}}^{t}-\\eta\\bar{\\mathbf{y}}^t)\\|^2 \n\\leq 8 \\|\\mathbf{X}^{t}_\\perp\\|^2+ 8\\eta^2 \\|\\Y^{t}_\\perp\\|^2. \\label{eq:2_3}\n\\end{align}\nWith \\eqref{eq:phi_lambda} and \\eqref{eq:2_3}, summing up \\eqref{eq:phi_update_0} from $i=1$ to $n$ gives\n\n\\begin{align*}\n\\sum_{i=1}^n \\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\leq &~ \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) +{ \\frac{1}{2\\lambda} }\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - { \\frac{1}{2\\lambda} } \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\n +{ \\frac{2}{\\lambda} }\\left( \\|\\mathbf{X}^{t}_\\perp\\|^2+ \\eta^2 \\|\\Y^{t}_\\perp\\|^2 \\right)\n\\end{align*}\nNow taking the expectation on the above inequality and using \\eqref{eq:hatx_xprox}, we have\n\\begin{align*}\n\\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1}) \\big] \\leq &~ \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda( {\\mathbf{x}}_i^{t}) \\big] - \\frac{1}{2\\lambda} \\mathbb{E}\\big[ \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big]\n + \\frac{2}{\\lambda} \\mathbb{E}\\big[ \\|\\mathbf{X}^{t}_\\perp\\|^2+ \\eta^2 \\|\\Y^{t}_\\perp\\|^2 \\big]\\\\\n &~ \\hspace{-2cm}+\\frac{1}{2\\lambda} \\left(\\textstyle 4 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left(\\textstyle 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2 \\right).\n\\end{align*}\nCombining like terms in the inequality above gives \\eqref{eq:phi_update}.\n\\end{proof}\n\nWith Lemmas \\ref{lem:XI_J}, \\ref{lem:YI_J} and \\ref{lem:weak_convex}, we are ready to prove Theorem \\ref{thm:sec2}. 
We build the following Lyapunov function:
\begin{align*}
 \mathbf{V}^t = z_1 \mathbb{E}[\|\mathbf{X}^t_\perp\|^2] +z_2\mathbb{E}[\|\Y^t_\perp\|^2] +z_3\sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})],
\end{align*}
where $z_1, z_2, z_3 \geq 0$ will be determined later.

\subsection*{Proof of Theorem \ref{thm:sec2}.}
\begin{proof}
Denote
\begin{align*}
 \Phi^t = \sum_{i=1}^n \mathbb{E}[ \phi_\lambda( {\mathbf{x}}_i^{t})],\quad \Omega_0^t = \mathbb{E}[\|\widehat\mathbf{X}^{t}-\mathbf{X}^{t}\|^2],\quad
 \Omega^t = \left(\mathbb{E}[\|\mathbf{X}^t_\perp\|^2], \mathbb{E}[\|\Y^t_\perp\|^2], \Phi^t\right)^\top.
\end{align*}
Then Lemmas \ref{lem:XI_J}, \ref{lem:YI_J} and \ref{lem:weak_convex} imply $\Omega^{t+1} \leq \mathbf{A}\Omega^t + {\mathbf{b}} \Omega_0^t + {\mathbf{c}} \sigma^2$,
where
\begin{align*}
 \mathbf{A} = \begin{pmatrix}
 \frac{1+\rho^2}{2} &~ \frac{2\rho^2}{1-\rho^2}\eta^2 &~ 0\\
 \frac{48\rho^2 L^2 }{1-\rho^2 } &~\frac{3+\rho^2}{4} &~ 0 \\
 \frac{4}{\lambda} &~ \frac{4}{\lambda}\eta^2 &~ 1
 \end{pmatrix}, \quad
{\mathbf{b}} =
 \begin{pmatrix}
 0 \\
 \frac{12\rho^2 L^2 }{1-\rho^2 } \\
 - \frac{\eta}{4\lambda^2}
 \end{pmatrix}, \quad
{\mathbf{c}} =
 \begin{pmatrix}
 0 \\
 6n \\
 \frac{\eta^2}{\lambda}
 \end{pmatrix}.
\end{align*}
For any ${\mathbf{z}} = (z_1, z_2, z_3)^\top \geq \mathbf{0}$, we have
\begin{align*}
 {\mathbf{z}}^\top \Omega^{t+1} \leq {\mathbf{z}}^\top \Omega^{t}+ ({\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top)\Omega^t +{\mathbf{z}}^\top {\mathbf{b}} \Omega_0^t + {\mathbf{z}}^\top{\mathbf{c}} \sigma^2.
\end{align*}
Take $$z_1=\frac{10}{1-\rho^2},\ z_2=\left(\frac{80\rho^2}{(1-\rho^2)^3} + \frac{16}{1-\rho^2}\right)\eta^2,\ z_3 = \lambda.$$ Then
$
{\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top = \begin{pmatrix}
 \frac{48\rho^2 L^2 }{1-\rho^2 }z_2-1,
 0,
 0
 \end{pmatrix}.
$
Note $z_2 \leq \frac{96}{(1-\rho^2)^3}\eta^2$. Thus
\begin{align*}
{\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq \begin{pmatrix} \textstyle
 \frac{4608\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2-1,
 0,
 0
 \end{pmatrix}, \
{\mathbf{z}}^\top{\mathbf{b}} \leq \textstyle \frac{1152\rho^2 L^2 }{(1-\rho^2)^4 }\eta^2 - \frac{\eta}{4\lambda}, \
{\mathbf{z}}^\top{\mathbf{c}} \leq \textstyle \Big( \textstyle \frac{576n }{(1-\rho^2)^3} + 1\Big)\eta^2 \leq \frac{577n}{(1-\rho^2)^3} \eta^2.
\end{align*}

With $\eta\leq \frac{(1-\rho^2)^4}{96\rho L}$ and $\lambda \leq \frac{1}{96\rho L}$, we have
$ {\mathbf{z}}^\top \mathbf{A}-{\mathbf{z}}^\top \leq (-\frac{1}{2}, 0, 0 )$ and
$
{\mathbf{z}}^\top{\mathbf{b}} \leq
\left(12\rho L - \frac{1}{8\lambda}\right)\eta - \frac{\eta}{8\lambda}
\leq -\frac{\eta}{8\lambda}$.
Combining the above bounds, we obtain\n\\begin{align}\n {\\mathbf{z}}^\\top \\Omega^{t+1} \\leq \\textstyle {\\mathbf{z}}^\\top \\Omega^{t} -\\frac{1}{2}\\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] -\\frac{\\eta}{8\\lambda} \\Omega_0^t + \\frac{577n}{(1-\\rho^2)^3} \\eta^2 \\sigma^2.\\label{eq:l_fun}\n\\end{align}\nHence, summing up \\eqref{eq:l_fun} for $t=0,1,\\ldots,T-1$ gives \n\\begin{align}\\label{eq:avg-Omega}\n \\frac{1}{\\lambda T}\\sum_{t=0}^{T-1} \\Omega_0^t +\\frac{4}{\\eta T}\\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] \\leq \\textstyle \\frac{8}{\\eta T} \\left({\\mathbf{z}}^\\top \\Omega^0 - {\\mathbf{z}}^\\top \\Omega^{T}\\right) + \\frac{577n}{(1-\\rho^2)^3} 8\\eta\\sigma^2 .\n\\end{align}\nFrom ${\\mathbf{y}}_i^{-1} =\\mathbf{0}, \\nabla F_i({\\mathbf{x}}_i^{-1},\\xi_i^{-1}) = \\mathbf{0}, {\\mathbf{x}}_i^0 = {\\mathbf{x}}^0, \\forall\\, i \\in \\mathcal{N}$, we have \n\\begin{align}\n \\|\\mathbf{X}^0_\\perp\\|^2 = 0, \\quad \\|\\Y^0_\\perp\\|^2 = \\|\\nabla \\mathbf{F}^0(\\mathbf{I}-\\mathbf{J})\\|^2, \\quad \\Phi^0=n \\phi_\\lambda({\\mathbf{x}}^0). \\label{eq:initial_thm2}\n\\end{align}\nFrom Assumption \\ref{assu:prob}, $\\phi$ is lower bounded and thus $\\phi_\\lambda $ is also lower bounded, i.e., there is a constant $\\phi_\\lambda^*$ satisfying $\\phi_\\lambda^* = \\min_{{\\mathbf{x}}} \\phi_\\lambda({\\mathbf{x}}) > -\\infty$. Thus \n\\begin{align}\n \\Phi^T \\geq n \\phi_\\lambda^*.\\label{eq:end_thm2}\n\\end{align}\nWith \\eqref{eq:initial_thm2}, \\eqref{eq:end_thm2}, and the nonnegativity of $ \\mathbb{E}[\\|\\mathbf{X}^T_\\perp\\|^2]$ and $ \\mathbb{E}[\\|\\Y^T_\\perp\\|^2]$, we have\n\\begin{align}\n\\textstyle\n{\\mathbf{z}}^\\top \\Omega^0 - {\\mathbf{z}}^\\top \\Omega^{T} \\le\n\\frac{96 \\eta^2}{(1-\\rho^2)^3} \\mathbb{E}[ \\|\\nabla \\mathbf{F}^0(\\mathbf{I}-\\mathbf{J})\\|^2] + \\lambda n \\phi_\\lambda({\\mathbf{x}}^0) -\\lambda n \\phi_\\lambda^*. \\label{eq:Omega0_OmegaT}\n\\end{align}\nBy the convexity of the Frobenius norm and \\eqref{eq:Omega0_OmegaT}, we obtain from \\eqref{eq:avg-Omega} that\n\\begin{align*} \n &~ \\frac{1}{\\lambda^2n} \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{\\tau}-\\mathbf{X}^{\\tau}\\|^2\\big] +\\frac{4}{n \\lambda \\eta}\\mathbb{E}\\big[\\|\\mathbf{X}^\\tau_\\perp\\|^2\\big] \\leq \\frac{1}{\\lambda^2n T}\\sum_{t=0}^{T-1} \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t}\\|^2\\big] +\\frac{4}{n \\lambda \\eta T}\\sum_{t=0}^{T-1} \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] \\nonumber \\\\ \n \\leq &~ \\textstyle \\frac{8\\left( \\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{ \\eta T} + \\frac{4616 \\eta}{\\lambda(1-\\rho^2)^3} \\sigma^2 \\textstyle + \\frac{768\\eta \\mathbb{E}\\left[ \\|\\nabla \\mathbf{F}^0(\\mathbf{I}-\\mathbf{J})\\|^2\\right]}{n\\lambda T(1-\\rho^2)^3}.\n\\end{align*}\nNoting $\\|\\nabla \\phi_\\lambda ({\\mathbf{x}}_i^\\tau)\\|^2 = \\frac{\\|{\\mathbf{x}}_i^\\tau-\\widehat{\\mathbf{x}}_i^\\tau\\|^2}{\\lambda^{2}}$ from Lemma \\ref{lem:xhat_x}, we finish the proof.\n\\end{proof}\n\n\\section{Convergence Analysis for CDProxSGT} \\label{sec:proof_CDProxSGT}\nIn this section, we analyze the convergence rate of CDProxSGT. Similar to the analysis of DProxSGT, we establish a Lyapunov function that involves consensus errors and the Moreau envelope. However, due to the compression, the compression errors $\\|\\mathbf{X}^t-\\underline\\mathbf{X}^t\\|$ and $\\|\\Y^t-\\underline\\Y^t\\|$ will also arise. Hence, we will include these two compression errors in our Lyapunov function as well.
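To preview the updates being analyzed, note that the matrix form \\eqref{eq:alg3_1_matrix}-\\eqref{eq:alg3_6_matrix} given below translates line-by-line into code. The following minimal sketch of one CDProxSGT iteration is for illustration only: the compressor \\texttt{Q}, the stochastic gradients, and the choice $r=\\mu\\|\\cdot\\|_1$ are placeholders (assumptions for this sketch), not the implementation used in Section \\ref{sec:numerical_experiments}.\n\\begin{verbatim}\nimport numpy as np\n\ndef prox_l1(V, t):\n    # entrywise soft-thresholding: prox of t*||.||_1 for r = mu*||.||_1\n    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)\n\ndef cdproxsgt_step(X, Y, Xu, Yu, G_new, G_old, W, Q, eta, mu, gx, gy):\n    # One iteration in matrix form. X, Y: d x n model copies and gradient\n    # trackers; Xu, Yu: the compressed (underlined) states; G_new, G_old:\n    # stochastic gradients at the current and previous iterates; Q: a\n    # contractive compressor applied column-wise.\n    I = np.eye(W.shape[0])\n    Y_half = Y + G_new - G_old               # gradient tracking\n    Yu = Yu + Q(Y_half - Yu)                 # compress the innovation of Y\n    Y = Y_half + gy * Yu @ (W - I)           # partial mixing of Y\n    X_half = prox_l1(X - eta * Y, eta * mu)  # local proximal step\n    Xu = Xu + Q(X_half - Xu)                 # compress the innovation of X\n    X = X_half + gx * Xu @ (W - I)           # partial mixing of X\n    return X, Y, Xu, Yu\n\\end{verbatim}\nHere \\texttt{gx} and \\texttt{gy} play the roles of $\\gamma_x$ and $\\gamma_y$, which control how aggressively the compressed states are mixed.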
Again, we can equivalently write a matrix form of the updates \\eqref{eq:alg3_1}-\\eqref{eq:alg3_6} in Algorithm \\ref{alg:CDProxSGT} as follows: \n\\begin{gather}\n \\Y^{t-\\frac{1}{2}} = \\Y^{t-1} + \\nabla \\mathbf{F}^t - \\nabla \\mathbf{F}^{t-1}, \\label{eq:alg3_1_matrix}\\\\\n \\underline\\Y^{t} = \\underline\\Y^{t-1} + Q_{\\mathbf{y}}\\big[\\Y^{t-\\frac{1}{2}} - \\underline\\Y^{t-1}\\big], \\label{eq:alg3_2_matrix}\\\\\n \\Y^{t} = \\Y^{t-\\frac{1}{2}} +\\gamma_y \\underline\\Y^{t}(\\mathbf{W}-\\mathbf{I}), \\label{eq:alg3_3_matrix}\\\\\n \\mathbf{X}^{t+\\frac{1}{2}} =\\prox_{\\eta r} \\left(\\mathbf{X}^t - \\eta \\Y^{t}\\right), \\label{eq:alg3_4_matrix}\\\\\n \\underline\\mathbf{X}^{t+1} = \\underline\\mathbf{X}^{t} + Q_{\\mathbf{x}}\\big[\\mathbf{X}^{t+\\frac{1}{2}} - \\underline\\mathbf{X}^{t}\\big], \\label{eq:alg3_5_matrix}\\\\\n \\mathbf{X}^{t+1} = \\mathbf{X}^{t+\\frac{1}{2}}+\\gamma_x\\underline\\mathbf{X}^{t+1}(\\mathbf{W}-\\mathbf{I}).\\label{eq:alg3_6_matrix}\n\\end{gather} \nWhen we apply a compressor to a column-concatenated matrix, as in \\eqref{eq:alg3_2_matrix} and \\eqref{eq:alg3_5_matrix}, we mean applying the compressor to each column separately, i.e.,\n$Q_{\\mathbf{x}}[\\mathbf{X}] = [Q_x[{\\mathbf{x}}_1],Q_x[{\\mathbf{x}}_2],\\ldots,Q_x[{\\mathbf{x}}_n]]$. \n\nBelow we first analyze the progress made by the half-step updates of $\\Y$ and $\\mathbf{X}$ from $t+1\/2$ to $t+1$ in Lemmas \\ref{lem:prepare_comp_y} and \\ref{lem:Xhat_Xhalf_comp}. Then we bound the one-step consensus error and compression error for $\\mathbf{X}$ in Lemma \\ref{lem:X_consensus_comperror} and for $\\Y$ in Lemma \\ref{lem:Y_consensus_comperror}. The bound on $\\mathbb{E}[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})]$ after a one-step update is given in Lemma \\ref{lem:phi_one_step}. Finally, we prove Theorem \\ref{thm:sect3thm} by building a Lyapunov function that involves all five terms.\n\n\\begin{lemma} \\label{lem:prepare_comp_y} It holds that\n\\begin{align}\n \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] \\leq &~2 \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \n 6 \\alpha^2 n \\sigma^2 + 4 \\alpha^2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big], \\label{eq:2.3.2_1} \\\\\n \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] \\leq &~\\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{6 n \\sigma^2}{1-\\alpha^2} + \\frac{4 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big].
\\label{eq:2.3.2} \n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFrom \\eqref{eq:alg3_1} and \\eqref{eq:alg3_2}, we have\n\\begin{align}\n &~ \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] = \\mathbb{E}\\big[\\mathbb{E}_Q\\big[\\|Q_{\\mathbf{y}}\\big[\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t}\\big]- (\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t})\\|^2\\big]\\big] \\nonumber\\\\\n \\leq &~ \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}-\\underline\\Y^{t}\\|^2\\big] = \\alpha^2\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t} +\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big]\\nonumber\\\\\n \\leq &~ \\alpha^2(1+\\alpha_0)\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_0^{-1})\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\alpha^2(1+\\alpha_0)\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_0^{-1}) \n \\left(3 n \\sigma^2 + 2 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big]\\right), \\label{eq:2.3.2_0}\n\\end{align}\nwhere the first inequality holds by Assumption \\ref{assu:compressor}, \n$\\alpha_0$ can be any positive number,\nand the last inequality holds by \\eqref{eq:y_cons12} which still holds for CDProxSGT. Taking $\\alpha_0=1$ in \\eqref{eq:2.3.2_0} gives \\eqref{eq:2.3.2_1}. Letting \n$\\alpha_0=\\frac{1-\\alpha^2}{2}$ in \\eqref{eq:2.3.2_0}, we obtain $\\alpha^2(1+\\alpha_0) = (1-(1-\\alpha^2))(1+\\frac{1-\\alpha^2}{2}) \\leq \\frac{1+\\alpha^2}{2}$ and $\\alpha^2(1+\\alpha_0^{-1}) \\leq \\frac{2}{1-\\alpha^2}$, and thus \\eqref{eq:2.3.2} follows. \n\\end{proof}\n \n\\begin{lemma} \\label{lem:Xhat_Xhalf_comp} \nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$. Then\n\\begin{align}\n \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] \n \\leq &~ 4\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\left( 1-\\frac{\\eta}{2\\lambda} \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +4\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] + 2\\eta^2\\sigma^2, \\label{eq:hatx_xprox_comp}\\\\\n \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big]\n \\leq &~ 3\\alpha^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right), \\label{eq:X_-X_1}\\\\ \n \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] \\leq\n &~ \\frac{16}{1-\\alpha^2}\\Big( \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]+ \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\Big) + \\frac{1+\\alpha^2}{2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber\\\\ \n &~ +\\frac{8}{1-\\alpha^2}\\left( \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +\\eta^2\\sigma^2\\right). 
\\label{eq:2.2.2}\n\\end{align}\nFurther, if $\\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, then \n\\begin{align}\n \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\leq \n &~ 30\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] +4\\sqrt{3} \\alpha \\gamma_x \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +16\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n &~ + 8\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 8\\eta^2\\sigma^2. \\label{eq:2.2.3} \n\\end{align}\n\\end{lemma}\n\\begin{proof}\nThe proof of \\eqref{eq:hatx_xprox_comp} is the same as that of Lemma \\ref{lem:Xhat_Xhalf}, because \\eqref{eq:alg3_4} is the same as \\eqref{eq:x_half_update} and \\eqref{eq:x_y_mean} still holds for CDProxSGT.\n\nFor $\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}$, we have from \\eqref{eq:alg3_5} that\n\\begin{align}\n &~ \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] = \\mathbb{E}\\big[\\mathbb{E}_Q\\big[\\| Q_{\\mathbf{x}}\\big[\\mathbf{X}^{t+\\frac{1}{2}} - \\underline\\mathbf{X}^{t}\\big] -(\\mathbf{X}^{t+\\frac{1}{2}}-\\underline\\mathbf{X}^{t})\\|^2\\big]\\big] \\nonumber \\\\\n \\leq &~ \\alpha^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\underline\\mathbf{X}^{t}\\|^2\\big] = \\alpha^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t+\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t+\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\ \n \\le & ~ \\alpha^2(1+\\alpha_1)\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t + \\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big] \\nonumber \\\\ \n\\leq &~ \\alpha^2(1+\\alpha_1)\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + 2\\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+2\\alpha^2(1+\\alpha_1^{-1})\\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big], \\label{eq:X_-X_0} \n\\end{align}\nwhere $\\alpha_1$ can be any positive number.\nTaking $\\alpha_1 = 2$ in \\eqref{eq:X_-X_0} gives \\eqref{eq:X_-X_1}.\nTaking $\\alpha_1 = \\frac{1-\\alpha^2}{2}$ in \\eqref{eq:X_-X_0} and plugging \\eqref{eq:hatx_xprox_comp} give \\eqref{eq:2.2.2}.\n\nFor $\\mathbb{E}[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2]$, similar to \\eqref{eq:Xplus1-X}, we have from \\eqref{eq:compX_hatW} that\n\\begin{align}\n&~ \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] = \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x - \\mathbf{X}^{t} + \\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big] \\nonumber \\\\\n\\leq&~(1+\\alpha_2) \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x-\\mathbf{X}^t\\|^2\\big] + (1+\\alpha_2^{-1}) \\mathbb{E}\\big[\\|\\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big]\\nonumber\\\\\n \\overset{\\eqref{eq:Xplus1-X}, \\eqref{eq:X_-X_1}}\\leq &~ (1+\\alpha_2) \\left( 3\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] +3\\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 12 \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big]\\right) \\nonumber \\\\\n&~ + (1+\\alpha_2^{-1})4\\gamma_x^2 \\cdot 3\\alpha^2 \\left(
\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] + \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\\right) \\nonumber \\\\\n\\leq &~ 4\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^t \\|^2\\big] + 4 \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^t-\\mathbf{X}^t \\|^2\\big] + 14\\mathbb{E}\\big[\\|\\mathbf{X}^t _\\perp\\|^2\\big] + 4\\sqrt{3} \\alpha \\gamma_x \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big], \\nonumber\n\\end{align}\nwhere in the first inequality $\\alpha_2$ could be any positive number, in the second inequality we use \\eqref{eq:X_-X_1},\nand in the last inequality we take $\\alpha_2 = 2\\gamma_x \\alpha$ and thus with $\\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, it holds\n$ 3(1+\\alpha_2) +12\\gamma_x^2\\alpha^2(1+\\alpha_2^{-1}) = 3(1+2\\gamma_x\\alpha)^2 \\leq 4$,\n $12(1+\\alpha_2)\\leq\n 8\\sqrt{3}\\leq 14$, \n$(1+\\alpha_2^{-1})4\\gamma_x^2\\cdot3\\alpha^2 \\leq\n4\\sqrt{3} \\alpha \\gamma_x$.\nThen plugging \\eqref{eq:hatx_xprox_comp} into the inequality above, we obtain \\eqref{eq:2.2.3}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:X_consensus_comperror} \nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$ and $\\gamma_x\\leq \\min\\{\\frac{ (1-\\widehat\\rho_x^2)^2}{60\\alpha}, \\frac{1-\\alpha^2}{25}\\}$.\nThen the consensus error and compression error of $\\mathbf{X}$ can be bounded by\n\\begin{align} \n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \n \\leq &~\n \\frac{3+\\widehat\\rho_x^2}{4} \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + 2\\alpha \\gamma_x (1-\\widehat\\rho_x^2) \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n + \\frac{9}{4(1-\\widehat\\rho_x^2)}\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber\\\\\n &~ + 4\\alpha \\gamma_x (1-\\widehat\\rho_x^2)\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 4 \\alpha \\gamma_x (1-\\widehat\\rho_x^2)\\eta^2\\sigma^2, \\label{eq:2.4.1}\\\\\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big] \n \\leq &~ \\frac{21}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +\\frac{21}{1-\\alpha^2} \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\nonumber\\\\\n&~ + \\frac{11}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{11}{1-\\alpha^2} \\eta^2\\sigma^2. \\label{eq:2.5.1}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nFirst, let us consider the consensus error of $\\mathbf{X}$. 
\nWith the update \\eqref{eq:compX_hatW}, we have\n\\begin{align}\n \n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \\leq &~ (1+\\alpha_3)\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}\\widehat\\mathbf{W}_x (\\mathbf{I}- \\mathbf{J})\\|^2\\big] +(1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\|\\gamma_x(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big], \\nonumber\\\\\n \\leq &~ (1+\\alpha_3)\\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_x - \\mathbf{J})\\|^2\\big] + (1+\\alpha_3^{-1})4\\gamma_x^2\\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big], \\label{eq:XComp_consensus0}\n\\end{align}\nwhere $\\alpha_3$ is any positive number, and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$ is used.\nThe first term in the right hand side of \\eqref{eq:XComp_consensus0} can be processed similarly as the non-compressed version in Lemma \\ref{lem:XI_J} by replacing $\\mathbf{W}$ by $\\widehat\\mathbf{W}_x$, namely,\n\\begin{align}\n \n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}} (\\widehat\\mathbf{W}_x-\\mathbf{J})\\|^2\\big]\n \\leq &~ \\textstyle \\frac{1+\\widehat\\rho^2_x}{2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big]+ \\frac{2\\widehat\\rho^2_x \\eta^2 }{1-\\widehat\\rho^2_x} \\mathbb{E}\\big[\\| \\Y^{t}_\\perp \\|^2\\big]. \\label{eq:XComp_consensus1}\n\\end{align} \nPlugging \\eqref{eq:XComp_consensus1} and \\eqref{eq:X_-X_1} into \\eqref{eq:XComp_consensus0} gives\n\\begin{align*}\n &~ \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}_\\perp\\|^2\\big] \\leq ~ (1+\\alpha_3)\\left( \\textstyle \\frac{1+\\widehat\\rho^2_x}{2} \\mathbb{E}\\big[\\| \\mathbf{X}^{t}_\\perp \\|^2\\big]+ \\frac{2\\widehat\\rho^2_x \\eta^2 }{1-\\widehat\\rho^2_x} \\mathbb{E}\\big[\\| \\Y^{t}_\\perp \\|^2\\big]\\right) \\\\\n &~ + (1+\\alpha_3^{-1})12 \\alpha^2 \\gamma_x^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right)\\\\\n \\overset{\\eqref{eq:hatx_xprox_comp}}{\\leq} &~\n \\left( \\textstyle \\frac{1+\\widehat\\rho_x^2}{2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\right) \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] \\nonumber\\\\\n &~+ 12\\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] +\\left( \\textstyle \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2}(1+\\alpha_3) +48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})\\right)\\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\\\\n&~ +24 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] +24 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})\\eta^2\\sigma^2.\n\\end{align*} \nLet $\\alpha_3 = \\frac{7\\alpha\\gamma_x}{1-\\widehat\\rho_x^2}$ and $\\gamma_x\\leq \\frac{(1-\\widehat\\rho_x^2)^2}{60\\alpha}$. 
\nThen $\\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1})=\\alpha\\gamma_x (\\alpha\\gamma_x+\\frac{1-\\widehat\\rho_x^2}{7})\\leq \\alpha\\gamma_x (\\frac{ (1-\\widehat\\rho_x^2)^2}{60}+\\frac{1-\\widehat\\rho_x^2}{7})\\leq \\frac{\\alpha\\gamma_x (1-\\widehat\\rho_x^2)}{6}$\nand \n\\begin{align*}\n &~ \\textstyle \\frac{1+\\widehat\\rho_x^2}{2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) = \\frac{1+\\widehat\\rho_x^2}{2} + 48 \\alpha^2 \\gamma_x^2 + \\frac{7\\alpha\\gamma_x}{1-\\widehat\\rho_x^2} + \\frac{48\\alpha\\gamma_x(1-\\widehat\\rho_x^2)}{7} \\\\ \n \\leq&~ \\textstyle \\frac{1+\\widehat\\rho_x^2}{2} + \\frac{48}{60^2}(1-\\widehat\\rho_x^2)^4 + \\frac{7}{60}(1-\\widehat\\rho_x^2) + \\frac{7}{60}(1-\\widehat\\rho_x^2)^3\\leq \\frac{1+\\widehat\\rho_x^2}{2} + \\frac{ 1-\\widehat\\rho_x^2}{4} = \\frac{3+\\widehat\\rho_x^2}{4},\\\\\n &~ \\textstyle \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2}(1+\\alpha_3) + 48 \\alpha^2 \\gamma_x^2 (1+\\alpha_3^{-1}) = \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2} + 48 \\alpha^2 \\gamma_x^2 + \n \\frac{2\\widehat\\rho_x^2}{1-\\widehat\\rho_x^2} \\frac{7 \\alpha \\gamma_x }{1-\\widehat\\rho_x^2} + \\frac{48\\alpha\\gamma_x(1-\\widehat\\rho_x^2)}{7}\\\\\n \\leq &~ \\textstyle \\frac{1}{1-\\widehat\\rho_x^2} \\left(\n 2\\widehat\\rho_x^2 + \\frac{48}{60^2} (1-\\widehat\\rho_x^2) + \\frac{14\\widehat\\rho_x^2}{60} + \\frac{7}{60}(1-\\widehat\\rho_x^2)\n \\right) \\leq \n \\frac{1}{1-\\widehat\\rho_x^2} \\left(\n 2\\widehat\\rho_x^2 + \\frac{48}{60^2} + \\frac{7}{60}\n \\right) \\leq \\frac{9}{4(1-\\widehat\\rho_x^2)}. \n\\end{align*}\nThus \\eqref{eq:2.4.1} holds. \n\nNow let us consider the compression error of $\\mathbf{X}$.\nBy \\eqref{eq:alg3_6}, we have\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big]\n = \\mathbb{E}\\big[\\|(\\underline\\mathbf{X}^{t+1} - \\mathbf{X}^{t+\\frac{1}{2}}) \\big(\\gamma_x(\\mathbf{W}-\\mathbf{I}) -\\mathbf{I}\\big) + \\gamma_x \\mathbf{X}^{t+\\frac{1}{2}} (\\mathbf{I}-\\mathbf{J}) (\\mathbf{W}-\\mathbf{I}) \\|^2\\big] \\nonumber\\\\\n\\leq&~ (1+\\alpha_4) (1+2\\gamma_x)^2 \\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] + (1+\\alpha_4^{-1})4 \\gamma_x^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}_\\perp\\|^2\\big],\\label{eq:2.5.1.0}\n\\end{align}\nwhere we have used $\\mathbf{J}\\mathbf{W}=\\mathbf{J}$ in the equality,\n$\\|\\gamma_x (\\mathbf{W}-\\mathbf{I}) -\\mathbf{I}\\|_2\\leq \\gamma_x\\|\\mathbf{W}-\\mathbf{I}\\|_2+\\|\\mathbf{I}\\|_2\\leq 1+2\\gamma_x$ and $\\|\\mathbf{W}-\\mathbf{I}\\|_2\\leq 2$ in the inequality, and $\\alpha_4$ can be any positive number.
For the second term in the right hand side of \\eqref{eq:2.5.1.0}, we have\n\\begin{align}\n \\|\\mathbf{X}^{t+\\frac{1}{2}}_\\perp\\|^2 \\overset{\\eqref{eq:alg3_4}}{=}&~ \\left\\|\\left(\\prox_{\\eta r} \\left(\\mathbf{X}^t - \\eta \\Y^{t}\\right)-\\prox_{\\eta r} \\left(\\bar{\\mathbf{x}}^t - \\eta \\bar{\\mathbf{y}}^{t}\\right)\\mathbf{1}^\\top\\right)(\\mathbf{I}-\\mathbf{J})\\right\\|^2 \\nonumber \\\\\n \\leq&~\n \\|\\mathbf{X}^t_\\perp- \\eta \\Y^{t}_\\perp\\|^2 \n \\leq 2\\|\\mathbf{X}^t_\\perp\\|^2+2\\eta^2\\|\\Y^{t}_\\perp\\|^2, \\label{eq:2.2.1}\n\\end{align}\nwhere we have used $\\mathbf{1}^\\top(\\mathbf{I}-\\mathbf{J})=\\mathbf{0}^\\top$, $\\|\\mathbf{I}-\\mathbf{J}\\|_2\\leq 1$, and Lemma \\ref{lem:prox_diff}.\nNow plugging \\eqref{eq:2.2.2} and \\eqref{eq:2.2.1} into \\eqref{eq:2.5.1.0} gives\n\\begin{align*}\n \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\underline\\mathbf{X}^{t+1}\\|^2\\big]\n \\leq \\left( \\textstyle (1+\\alpha_4^{-1})8\\gamma_x^2+(1+\\alpha_4) (1+2\\gamma_x)^2\\frac{16}{1-\\alpha^2}\\right) \\left( \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big]\\right) \\nonumber\\\\\n \\textstyle + (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n +(1+\\alpha_4)(1+2\\gamma_x)^2\\frac{8}{1-\\alpha^2} \\left( \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\eta^2\\sigma^2\\right).\n\\end{align*}\nWith $\\alpha_4=\\frac{1-\\alpha^2}{12}$ and $\\gamma_x\\leq \\frac{1-\\alpha^2}{25}$, \\eqref{eq:2.5.1} holds because $(1+2\\gamma_x)^2 \n \\leq 1 + \\frac{104}{25}\\gamma_x \\leq \\frac{7}{6}$, $ (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2}\\leq \\frac{1+\\alpha^2}{2}+\\frac{104}{25}\\gamma_x\\leq \\frac{2+\\alpha^2}{3}$, and\n\\begin{align} \n (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{1+\\alpha^2}{2} \\leq &~ \\frac{2+\\alpha^2}{3} + \\alpha_4 = \\frac{3+\\alpha^2}{4}, \\label{eq:gamma_x_1}\\\\\n (1+\\alpha_4^{-1}) 8\\gamma_x^2+ (1+\\alpha_4) (1+2\\gamma_x)^2\\frac{16}{1-\\alpha^2} \\leq&~ \\frac{13}{1-\\alpha^2}\\frac{8}{625} + \\frac{13}{12} \\frac{7}{6} \\frac{16}{1-\\alpha^2} \\leq \\frac{21}{1-\\alpha^2}, \\label{eq:gamma_x_2}\\\\\n (1+\\alpha_4)(1+2\\gamma_x)^2\\frac{8}{1-\\alpha^2} \\leq&~ \\frac{13}{12} \\frac{7}{6}\\frac{8}{1-\\alpha^2} \\leq \\frac{11}{1-\\alpha^2}.
\\nonumber\n\\end{align}\n\\end{proof}\n\n\n\n \n \n \n\n\n\n\\begin{lemma} \n\\label{lem:Y_consensus_comperror} Let $\\eta\\leq \\min\\{\\lambda, \\frac{1-\\widehat\\rho^2_y}{8\\sqrt{5} L} \\} $, $\\lambda \\leq\\frac{1}{4 L}$, $ \\gamma_x\\leq \\frac{2\\sqrt{3}-3}{6\\alpha}$, $\\gamma_y\\leq \\min\\{\\frac{\\sqrt{1-\\widehat\\rho^2_y}}{12\\alpha}, \\frac{1-\\alpha^2}{25}\\}$.\nThen the consensus error and compression error of $\\Y$ can be bounded by\n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \\leq &~ \\frac{150 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{20\\sqrt{3} \\alpha\\gamma_x L^2}{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]+\\frac{3+\\widehat\\rho^2_y }{4}\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber\\\\\n&~ +\\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + 12n \\sigma^2, \\label{eq:2.4.2} \\\\\n\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big]\n\\leq &~ \\frac{180 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{24\\sqrt{3}\\alpha\\gamma_x L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big]\n+ \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\nonumber\\\\\n&~ +\\frac{104\\gamma_y^2+ 96\\eta^2 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] + \\frac{48 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{10 n}{1-\\alpha^2} \\sigma^2 .\\label{eq:2.5.2}\n\\end{align} \n\\end{lemma}\n\n\\begin{proof}\nFirst, let us consider the consensus of $\\Y$. Similar to \\eqref{eq:XComp_consensus0}, we have from the update \\eqref{eq:Y_hatW} that\n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \\leq (1+\\alpha_5)\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_y-\\mathbf{J})\\|^2\\big] + (1+\\alpha_5^{-1})4\\gamma_y^2 \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big], \\label{eq:Ycomp_conses0}\n\\end{align}\nwhere $\\alpha_5$ can be any positive number.\nSimilarly as \\eqref{eq:y_cons1}-\\eqref{eq:y_cons2} in the proof of Lemma \\ref{lem:YI_J}, we have the bound for the first term on the right hand side of \\eqref{eq:Ycomp_conses0} by replacing $\\mathbf{W}$ with $\\widehat\\mathbf{W}_y$, namely, \n\\begin{align}\n \\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}(\\widehat\\mathbf{W}_y-\\mathbf{J})\\|^2\\big] \\leq \\textstyle \\frac{1+\\widehat\\rho^2_y}{2} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] + \\frac{2 \\widehat\\rho^2_y L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] + 5 \\widehat\\rho^2_y n \\sigma^2.\\label{eq:comp_y_cons220} \n\\end{align} \nPlug \\eqref{eq:comp_y_cons220} and \\eqref{eq:2.3.2_1} back to \\eqref{eq:Ycomp_conses0}, and take $\\alpha_5 = \\frac{1-\\widehat\\rho^2_y}{3(1+\\widehat\\rho^2_y)}$. 
We have \n\\begin{align*} \n &~ \\mathbb{E}\\big[\\|\\Y^{t+1}_\\perp\\|^2\\big] \n \\leq \\textstyle \\frac{2(2+\\widehat\\rho^2_y)}{3(1+\\widehat\\rho^2_y)}\\frac{1+\\widehat\\rho^2_y}{2} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] \n + \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 2\\alpha^2 \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\nonumber\\\\\n &~\\quad \\textstyle + \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 6\\alpha^2 n\\sigma^2 + 2\\cdot5 \\widehat\\rho^2_y n \\sigma^2 + \\left( \\textstyle \\frac{24\\gamma_y^2}{1-\\widehat\\rho^2_y} 4\\alpha^2 L^2 + 2\\cdot\\frac{2 \\widehat\\rho^2_y L^2 }{1-\\widehat\\rho^2_y } \\right)\\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\nonumber\\\\\n \\leq &~ \\textstyle \\frac{2+\\widehat\\rho^2_y}{3} \\mathbb{E}\\big[\\|\\Y^{t}_\\perp \\|^2\\big] \n + \\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + 11 n \\sigma^2 + \\frac{5 L^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\| \\mathbf{X}^{t+1} - \\mathbf{X}^{t}\\|^2\\big] \\\\\n \\leq &~ \\textstyle \\frac{150 L^2 }{1-\\widehat\\rho^2_y } \\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{20\\sqrt{3} L^2}{1-\\widehat\\rho^2_y } \\alpha \\gamma_x \\mathbb{E}[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y } \\eta^2\\sigma^2 + 11 n \\sigma^2 \\nonumber\\\\\n&~ \\textstyle +\\left( \\textstyle \\frac{2+\\widehat\\rho^2_y}{3}+ \\frac{80 L^2 }{1-\\widehat\\rho^2_y } \\eta^2\\right) \\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] +\\frac{48\\alpha^2\\gamma_y^2}{1-\\widehat\\rho^2_y} \\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] + \\frac{40 L^2 }{1-\\widehat\\rho^2_y}\\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big],\n\\end{align*}\nwhere the first inequality holds by $1+\\alpha_5 = \\frac{2(2+\\widehat\\rho^2_y)}{3(1+\\widehat\\rho^2_y)} \\leq 2$ and $1+\\alpha_5^{-1} = \\frac{2(2+\\widehat\\rho^2_y)}{1-\\widehat\\rho^2_y}\\leq \\frac{6}{1-\\widehat\\rho^2_y}$, \nthe second inequality holds by $\\gamma_y\\leq \\frac{\\sqrt{1-\\widehat\\rho^2_y}}{12\\alpha}$ and $\\alpha^2\\leq 1$, and the third inequality holds by \\eqref{eq:2.2.3}.\nBy $\\frac{80 L^2 }{1-\\widehat\\rho^2_y} \\eta^2 \\leq \\frac{1-\\widehat\\rho^2_y}{4}$ and $ \\frac{40 L^2 }{1-\\widehat\\rho^2_y} \\eta^2\\leq \\frac{1-\\widehat\\rho^2_y}{8}\\leq 1$ from $\\eta\\leq \\frac{1-\\widehat\\rho^2_y}{8\\sqrt{5} L} $, we can now obtain \\eqref{eq:2.4.2}.\n\nNext, let us consider the compression error of $\\Y$. Similar to \\eqref{eq:2.5.1.0}, we have by \\eqref{eq:alg3_3} that\n\\begin{align} \n&~\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big] \n\\leq (1+\\alpha_6)(1+2\\gamma_y)^2 \\mathbb{E}\\big[\\|\\underline\\Y^{t+1}-\\Y^{t+\\frac{1}{2}}\\|^2\\big] + (1+\\alpha_6^{-1})4 \\gamma_y^2 \\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big], \\label{eq:Y_compress_0}\n\\end{align}\nwhere $\\alpha_6$ is any positive number.
\nFor $\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big]$, we have from \\eqref{eq:alg3_1} that\n\\begin{align}\n &~\\mathbb{E}\\big[\\|\\Y^{t+\\frac{1}{2}}_\\perp\\|^2\\big] =\\mathbb{E}\\big[\\|( \\Y^{t} + \\nabla \\mathbf{F}^{t+1} - \\nabla \\mathbf{F}^{t})(\\mathbf{I}-\\mathbf{J})\\|^2\\big]\\nonumber \\\\\n \\leq &~ 2\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big] +2\\mathbb{E}\\big[\\|\\nabla \\mathbf{F}^{t+1}-\\nabla \\mathbf{F}^{t}\\|^2\\big] \\leq \n 2\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big] +6 n \\sigma^2 + 4 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big], \\label{eq:2.3.1}\n\\end{align} \nwhere we have used \\eqref{eq:y_cons12}.\nPlug \\eqref{eq:2.3.2} and \\eqref{eq:2.3.1} back to \\eqref{eq:Y_compress_0} to have\n\\begin{align*} \n&~\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big] \\leq \\textstyle (1+\\alpha_6) (1+2\\gamma_y)^2 \\frac{1+\\alpha^2}{2}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] +(1+\\alpha_6^{-1})8\\gamma_y^2\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] \\\\\n&~+ \\left( \\textstyle (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\right)4 L^2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big] \\\\\n&~ + \\left( \\textstyle (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\right) 6 n \\sigma^2.\n\\end{align*} \nWith $\\alpha_6=\\frac{1-\\alpha^2}{12}$ and $\\gamma_y< \\frac{1-\\alpha^2}{25}$,\nlike \\eqref{eq:gamma_x_1} and \\eqref{eq:gamma_x_2}, we have $(1+\\alpha_6) (1+2\\gamma_y)^2 \\frac{1+\\alpha^2}{2}\\leq \\frac{3+\\alpha^2}{4}$, $8(1+\\alpha_6^{-1})\\leq\\frac{8\\cdot13}{1-\\alpha^2} = \\frac{104}{1-\\alpha^2} $ and $ (1+\\alpha_6^{-1})4\\gamma_y^2 +(1+\\alpha_6)(1+2\\gamma_y)^2 \\frac{1}{1-\\alpha^2} \\leq \\frac{13}{1-\\alpha^2}\\frac{4}{625}+\\frac{13}{12}\\frac{7}{6}\\frac{1}{1-\\alpha^2}\\leq \\frac{3}{2(1-\\alpha^2)}$. Thus\n\\begin{align*} \n\\mathbb{E}\\big[\\|\\Y^{t+1}-\\underline\\Y^{t+1}\\|^2\\big]\n\\leq &~ \\textstyle \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \n +\\frac{104\\gamma_y^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big]+\\frac{6 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^{t+1}-\\mathbf{X}^{t}\\|^2\\big] + \\frac{9n \\sigma^2}{1-\\alpha^2} \\nonumber\\\\\n\\leq &~ \\textstyle\\frac{180 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{24\\sqrt{3} \\alpha\\gamma_x L^2 }{1-\\alpha^2} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \n+ \\frac{3+\\alpha^2}{4}\\mathbb{E}\\big[\\|\\Y^{t} -\\underline\\Y^{t}\\|^2\\big] \\\\\n&~ \\textstyle +\\frac{104\\gamma_y^2+ 96\\eta^2 L^2}{1-\\alpha^2}\\mathbb{E}\\big[\\|\\Y^{t}(\n\\mathbf{I}-\\mathbf{J})\\|^2\\big] + \\frac{48 L^2}{1-\\alpha^2} \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2\\big] + \\frac{48 L^2\\eta^2+9n}{1-\\alpha^2} \\sigma^2,\n\\end{align*}\nwhere the second inequality holds by \\eqref{eq:2.2.3}.\nBy $48 L^2\\eta^2\\leq n$, we have \\eqref{eq:2.5.2} and complete the proof.\n\\end{proof}\n \n\\begin{lemma}\\label{lem:phi_one_step}\nLet $\\eta\\leq \\lambda \\leq\\frac{1}{4 L}$ and $ \\gamma_x\\leq \\frac{1}{6\\alpha}$. 
\nIt holds\n\\begin{align}\n \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big] \n\\leq&~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big] + \\frac{12}{\\lambda}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\frac{12}{\\lambda} \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n&~+\\frac{1}{\\lambda}\\left( -\\frac{\\eta}{4\\lambda} + 23\\alpha\\gamma_x \\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big] + \\frac{5}{\\lambda} \\eta^2 \\sigma^2. \\label{eq:2.7}\n\\end{align} \n\\end{lemma}\n\n\\begin{proof}\nSimilar to \\eqref{eq:phi_update_0}, we have \n\\begin{align}\n&~ \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big] \\overset{\\eqref{eq:x_t_hat}}{=} \\mathbb{E}\\big[\\phi(\\widehat{\\mathbf{x}}_i^{t+1})\\big]+\\frac{1}{2\\lambda} \\mathbb{E}\\big[\\|\\widehat{\\mathbf{x}}_i^{t+1}-{\\mathbf{x}}_i^{t+1}\\|^2\\big] \\nonumber \\\\\n\\overset{ \\eqref{eq:compX_hatW}}{\\leq} &~ \\mathbb{E}\\bigg[\\phi\\bigg(\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)\\bigg] +\\frac{1}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}- {\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big) - \\gamma_x\\sum_{j=1}^n \\big(\\mathbf{W}_{ji}-\\mathbf{I}_{ji}\\big)\\big(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big) \\bigg\\|^2\\bigg] \\nonumber\\\\\n\\leq&~ \\mathbb{E}\\bigg[\\phi\\bigg(\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\bigg)\\bigg] + \\frac{1+\\alpha_7}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\big(\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2\\bigg]\\nonumber\\\\\n&~ + \\frac{1+\\alpha_7^{-1}}{2\\lambda} \\mathbb{E}\\bigg[\\bigg\\| \\gamma_x \\sum_{j=1}^n\\big(\\mathbf{W}_{ji}-\\mathbf{I}_{ji}\\big)\\big(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\big)\\bigg\\|^2\\bigg] \\nonumber \\\\\n\\overset{\\mbox{Lemma \\ref{lem:weak_convx}}}\\leq &~ \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\mathbb{E}\\big[\\phi( \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\big] + \\frac{ L}{2} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} (\\widehat\\mathbf{W}_x)_{li}\\mathbb{E}\\big[\\|\\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-\\widehat{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\big] \\nonumber \\\\\n&~ + \\frac{1+\\alpha_7}{2\\lambda} \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] + \\frac{1+\\alpha_7^{-1}}{2\\lambda}\\gamma_x^2 \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big] \\nonumber \\\\\n\\leq &~ \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}})\\big] + \\frac{1}{4\\lambda} \\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} (\\widehat\\mathbf{W}_x)_{li} \\mathbb{E}\\big[\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2\\big] \\nonumber \\\\\n&~+ \\frac{\\alpha_7}{2\\lambda} \\sum_{j=1}^n 
\\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] + \\frac{1+\\alpha_7^{-1}}{2\\lambda}\\gamma_x^2 \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big]. \\label{eq:phi_lambda1}\n\\end{align}\nSimilar to \\eqref{eq:phi_lambda} and \\eqref{eq:2_3}, for the first two terms on the right hand side of \\eqref{eq:phi_lambda1}, we have \n\\begin{align}\n \\sum_{i=1}^n \\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji} \\phi_\\lambda({\\mathbf{x}}_j^{t+\\frac{1}{2}}) \\leq \\sum_{i=1}^n \\phi_\\lambda( {\\mathbf{x}}_i^{t}) +\\frac{1}{2\\lambda} \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2 - \\frac{1}{2\\lambda} \\|\\widehat\\mathbf{X}^t - \\mathbf{X}^t\\|^2,\\label{eq:2_2_press}\\\\\n \\sum_{i=1}^n\\sum_{j=1}^{n-1}\\sum_{l=j+1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}(\\widehat\\mathbf{W}_x)_{li}\\|{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_l^{t+\\frac{1}{2}}\\|^2 \n\\leq 8 \\|\\mathbf{X}^{t}_\\perp\\|^2+ 8\\eta^2 \\|\\Y^{t}_\\perp\\|^2. \\label{eq:2_3_press}\n\\end{align}\nFor the last two terms on the right hand side of \\eqref{eq:phi_lambda1}, we have\n\\begin{align}\n &~ \\sum_{i=1}^n\\sum_{j=1}^n \\big(\\widehat\\mathbf{W}_x\\big)_{ji}\\mathbb{E}\\big[\\| \\widehat{\\mathbf{x}}_j^{t+\\frac{1}{2}}-{\\mathbf{x}}_j^{t+\\frac{1}{2}}\\|^2\\big] \n = \\| \\widehat\\mathbf{X}^{t+\\frac{1}{2}}-\\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \n \\leq 2 \\| \\widehat\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat\\mathbf{X}^{t} \\|^2 +2 \\| \\widehat\\mathbf{X}^{t} - \\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \\nonumber \\\\\n \\leq &~ \\textstyle \\frac{2}{(1-\\lambda L)^2} \\| \\mathbf{X}^{t+\\frac{1}{2}} - \\mathbf{X}^{t} \\|^2 +2 \\| \\widehat\\mathbf{X}^{t} - \\mathbf{X}^{t+\\frac{1}{2}} \\|^2 \n \\leq 10 \\| \\mathbf{X}^{t+\\frac{1}{2}}- \\widehat\\mathbf{X}^{t} \\|^2+ 8 \\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2, \\label{eq:X_-X2}\\\\\n &~ \\sum_{i=1}^n \\mathbb{E}\\big[\\| \\sum_{j=1}^n(\\mathbf{W}_{ji}-\\mathbf{I}_{ji})(\\underline{\\mathbf{x}}_j^{t+1}-{\\mathbf{x}}_j^{t+\\frac{1}{2}})\\|^2\\big] = \\mathbb{E}\\big[\\|(\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}})(\\mathbf{W}-\\mathbf{I})\\|^2\\big]\\leq\n 4\\mathbb{E}\\big[\\|\\underline\\mathbf{X}^{t+1}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big]\\nonumber \\\\\n \\leq &~ 12\\alpha^2 \\left(\\mathbb{E}\\big[ \\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\mathbb{E}\\big[\\|\\mathbf{X}^{t+\\frac{1}{2}}-\\widehat{\\mathbf{X}}^t\\|^2\\big]+ \\mathbb{E}\\big[\\|\\widehat{\\mathbf{X}}^t - \\mathbf{X}^t\\|^2\\big]\\right), \\label{eq:X_-X1}\n\\end{align}\nwhere \\eqref{eq:X_-X2} holds by Lemma \\ref{lem:prox_diff} and $\\frac{1}{(1-\\lambda L)^2}\\leq 2$, and \\eqref{eq:X_-X1} holds by \\eqref{eq:X_-X_1}.\n\nSum up \\eqref{eq:phi_lambda1} for $i=1,\\ldots,n$ and take $\\alpha_7 =\\alpha\\gamma_x$.\nThen with \\eqref{eq:2_2_press}, \\eqref{eq:2_3_press}, \\eqref{eq:X_-X2} and \\eqref{eq:X_-X1}, we have\n\\begin{align*}\n \\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda({\\mathbf{x}}_i^{t+1})\\big]\n\\leq & ~\\sum_{i=1}^n \\mathbb{E}\\big[\\phi_\\lambda( {\\mathbf{x}}_i^{t}) \\big] + \\frac{2}{\\lambda}\\left( \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]\\right) +\\textstyle \\frac{6\\alpha\\gamma_x+6\\alpha^2\\gamma_x^2}{\\lambda}
\\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\\n&~ + \\frac{1}{\\lambda}\\left( \\textstyle \\frac{1}{2}+11\\alpha\\gamma_x +6\\alpha^2\\gamma_x^2\\right) \\mathbb{E}\\big[\\| \\mathbf{X}^{t+\\frac{1}{2}}- \\widehat\\mathbf{X}^{t} \\|^2\\big]+ \\frac{1}{\\lambda}\\left( \\textstyle -\\frac{1}{2}+10\\alpha\\gamma_x +6\\alpha^2\\gamma_x^2\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big]\\\\\n\\leq & ~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big]+ \\frac{2}{\\lambda}\\left( \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + \\eta^2 \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]\\right) + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] \\nonumber \\\\\n&~\\quad +\\frac{1}{\\lambda}\\left( \\textstyle \\frac{1}{2}+12\\alpha\\gamma_x\\right)\\mathbb{E}\\big[ \\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t+\\frac{1}{2}}\\|^2\\big] +\\frac{1}{\\lambda} \\left( \\textstyle -\\frac{1}{2}+11\\alpha\\gamma_x\\right) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big] \\nonumber \\\\\n\\leq &~ \\sum_{i=1}^n\\mathbb{E}\\big[ \\phi_\\lambda( {\\mathbf{x}}_i^{t})\\big] + \\frac{12}{\\lambda}\\mathbb{E}\\big[\\|\\mathbf{X}^t_\\perp\\|^2\\big] + \\frac{7\\alpha\\gamma_x}{\\lambda} \\mathbb{E}\\big[\\|\\mathbf{X}^t-\\underline\\mathbf{X}^{t}\\|^2\\big] + \\frac{12}{\\lambda} \\eta^2\\mathbb{E}\\big[\\|\\Y^t_\\perp\\|^2\\big] \\nonumber \\\\\n&~+ \\frac{1}{\\lambda}\\Big( {\\textstyle\\left(\\frac{1}{2}+12\\alpha\\gamma_x \\right) \\left( 1-\\frac{\\eta}{2\\lambda} \\right) + \\left( -\\frac{1}{2}+11\\alpha\\gamma_x\\right) }\\Big) \\mathbb{E}\\big[\\| \\widehat\\mathbf{X}^{t}- \\mathbf{X}^{t} \\|^2\\big] + \\frac{5}{\\lambda} \\eta^2 \\sigma^2,\n\\end{align*}\nwhere the second inequality holds by $6\\alpha\\gamma_x\\leq 1$, and the third inequality holds by \\eqref{eq:hatx_xprox_comp} with $\\frac{1}{2}+12\\alpha\\gamma_x\\leq \\frac{5}{2}$.\nNoticing $$\\left(\\frac{1}{2}+12\\alpha\\gamma_x \\right) \\left( 1-\\frac{\\eta}{2\\lambda} \\right) + \\left( -\\frac{1}{2}+11\\alpha\\gamma_x\\right) = 23\\alpha\\gamma_x - \\frac{\\eta}{4\\lambda} - \\frac{6\\alpha\\gamma_x\\eta}{\\lambda}\\leq 23\\alpha\\gamma_x - \\frac{\\eta}{4\\lambda},$$ \nwe obtain \\eqref{eq:2.7} and complete the proof.\n\\end{proof}\n\nWith Lemmas \\ref{lem:X_consensus_comperror}, \\ref{lem:Y_consensus_comperror} and \\ref{lem:phi_one_step}, we are ready to prove Theorem \\ref{thm:sect3thm}.
We will use the Lyapunov function:\n\\begin{align*}\n \\mathbf{V}^t = z_1 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big] + z_2 \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\underline\\mathbf{X}^{t}\\|^2\\big] +z_3\\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big]+z_4 \\mathbb{E}\\big[\\|\\Y^{t}-\\underline\\Y^{t}\\|^2\\big] + z_5 \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda( {\\mathbf{x}}_i^{t})], \n\\end{align*}\nwhere $z_1, z_2, z_3, z_4, z_5 \\geq 0$ are determined later.\n\n\n\n\n\\subsection*{Proof of Theorem \\ref{thm:sect3thm}}\n\n\\begin{proof}\nDenote \n\\begin{align*} \n &~\\Omega_0^t = \\mathbb{E}[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t}\\|^2], \\quad \\Phi^t = \\sum_{i=1}^n \\mathbb{E}[\\phi_\\lambda( {\\mathbf{x}}_i^{t})], \\\\\n &~ \\Omega^t = \\left(\\mathbb{E}\\big[\\|\\mathbf{X}^{t}_\\perp\\|^2\\big], \\mathbb{E}\\big[\\|\\mathbf{X}^{t}-\\underline\\mathbf{X}^{t}\\|^2\\big], \\mathbb{E}\\big[\\|\\Y^{t}_\\perp\\|^2\\big], \\mathbb{E}\\big[\\|\\Y^{t}-\\underline\\Y^{t}\\|^2\\big], \\Phi^t\\right)^\\top.\n\\end{align*}\nThen Lemmas \\ref{lem:X_consensus_comperror}, \\ref{lem:Y_consensus_comperror} and \\ref{lem:phi_one_step} imply\n$\\Omega^{t+1} \\leq \\mathbf{A}\\Omega^t + {\\mathbf{b}} \\Omega_0^t + {\\mathbf{c}} \\sigma^2$ with\n\\begin{align*}\n &\\mathbf{A} = \\begin{pmatrix}\n \\frac{3+\\widehat\\rho^2_x}{4} &~ 2\\alpha\\gamma_x(1-\\widehat\\rho_x^2) &~ \\frac{9}{4(1-\\widehat\\rho^2_x)} \\eta^2 &~ 0 &~ 0\\\\\n \\frac{21}{1-\\alpha^2} &~ \\frac{3+\\alpha^2}{4} &~\\frac{21}{1-\\alpha^2} \\eta^2 &~ 0 &~ 0 \\\\\n \\frac{150 L^2}{1-\\widehat\\rho^2_y} &~ \\frac{20\\sqrt{3} L^2}{1-\\widehat\\rho^2_y }\\alpha\\gamma_x &~ \\frac{3+\\widehat\\rho^2_y}{4} &~ \\frac{48}{1-\\widehat\\rho^2_y }\\alpha^2\\gamma_y^2 &~ 0\\\\ \n \\frac{180 L^2}{1-\\alpha^2} &~ \\frac{24\\sqrt{3} L^2}{1-\\alpha^2} \\alpha\\gamma_x &~ \\frac{104\\gamma_y^2+96 L^2 \\eta^2}{1-\\alpha^2} &~ \\frac{3+\\alpha^2}{4} &~ 0\\\\\n \\frac{12}{\\lambda} &~ \\frac{7\\alpha\\gamma_x}{\\lambda} &~ \\frac{12}{\\lambda}\\eta^2 &~ 0 &~ 1\\\\\n \\end{pmatrix}, \\\\[0.2cm] \n&{\\mathbf{b}} = \n \\begin{pmatrix}\n 4\\alpha\\gamma_x(1-\\widehat\\rho_x^2) \\\\\n \\frac{11}{1-\\alpha^2} \\\\\n \\frac{40 L^2 }{1-\\widehat\\rho^2_y}\\\\\n \\frac{48 L^2}{1-\\alpha^2} \\\\\n \\frac{1}{\\lambda}\\left( \\textstyle -\\frac{\\eta}{4\\lambda} + 23\\alpha\\gamma_x \\right) \n \\end{pmatrix}, \\quad\n{\\mathbf{c}} = \n \\begin{pmatrix}\n 4\\alpha\\gamma_x \\eta^2 (1-\\widehat\\rho_x^2) \\\\\n \\frac{11 \\eta^2 }{1-\\alpha^2} \\\\ \n 12n \\\\\n \\frac{10n}{1-\\alpha^2}\\\\\n \\frac{5}{\\lambda} \\eta^2\n \\end{pmatrix}.\n\\end{align*}\nThen for any ${\\mathbf{z}} = (z_1, z_2 , z_3, z_4, z_5 )^\\top\\geq \\mathbf{0}^\\top$, it holds \n\\begin{align*}\n {\\mathbf{z}}^\\top \\Omega^{t+1} \\leq {\\mathbf{z}}^\\top \\Omega^t + ({\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top) \\Omega^t + {\\mathbf{z}}^\\top{\\mathbf{b}} \\Omega_0^t + {\\mathbf{z}}^\\top{\\mathbf{c}} \\sigma^2.\n\\end{align*}\nLet $\\gamma_x\\leq \\frac{\\eta}{\\alpha}$ and $\\gamma_y\\leq \\frac{(1-\\alpha^2) (1-\\widehat\\rho^2_x)(1-\\widehat\\rho^2_y)}{317}$.\nTake $$z_1=\\frac{52}{1-\\widehat\\rho^2_x}, z_2 = \\frac{448}{1-\\alpha^2} \\eta , z_3 = \\frac{521}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\eta^2, z_4=(1-\\alpha^2) \\eta^2, z_5=\\lambda.$$ We have\n\\begin{align*}\n{\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top \\leq &~\n \\begin{pmatrix}\n \\frac{21\\cdot448}{ (1-\\alpha^2)^2} \\eta + \\frac{150\\cdot521 
L^2\\eta^2}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2} + 180 L^2\\eta^2 - 1 \\\\[0.2cm]\n \\frac{521\\cdot20\\sqrt{3} L^2\\eta^3}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2} + 24\\sqrt{3} L^2\\eta^3 -\\eta \\\\[0.2cm]\n \\frac{448\\cdot21\\eta^3}{ (1-\\alpha^2)^2} + 96 L^2 \\eta^4 -\n \\frac{\\eta^2}{(1-\\widehat\\rho^2_x)^2}\\\\[0.1cm]\n 0 \\\\[0.1cm]\n 0\n \\end{pmatrix}^\\top, \\\\\n {\\mathbf{z}}^\\top{\\mathbf{b}} \\leq &~ \\textstyle -\\frac{\\eta}{4\\lambda} + 23\\eta + 48 L^2 \\eta^2 + \\frac{521\\cdot 40 \\eta^2 L^2}{(1-\\widehat\\rho^2_x)^2 (1-\\widehat\\rho^2_y)^2} + \\frac{448\\cdot11\\eta}{ (1-\\alpha^2)^2} + 52\\cdot4 \\eta,\\\\\n{\\mathbf{z}}^\\top{\\mathbf{c}} \\leq &~ \\left( \\textstyle 52\\cdot4\\eta + \\frac{448\\cdot 11\\eta}{ (1-\\alpha^2)^2} + \\frac{521\\cdot12n}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}+ 10n + 5 \\right)\\eta^2.\n\\end{align*}\n\nBy $\\eta\\leq \\frac{(1-\\alpha^2)^2(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)^2}{18830\\max\\{1, L\\}}$ and \n$\\lambda\\leq \\frac{ (1-\\alpha^2)^2}{9 L+41280}$,\nwe have ${\\mathbf{z}}^\\top \\mathbf{A}-{\\mathbf{z}}^\\top \\leq (-\\frac{1}{2}, 0, 0, 0, 0)^\\top$, \n\\begin{align*}\n{\\mathbf{z}}^\\top{\\mathbf{c}} \\leq \\textstyle\n\\frac{(521\\cdot12+10)n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2 = \\textstyle\\frac{6262n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2\n\\end{align*}\nand\n\\begin{align*}\n {\\mathbf{z}}^\\top{\\mathbf{b}} ~ \\leq &~ \\textstyle \\eta\\Big( -\\frac{1}{4\\lambda} + 23 + 48 L^2 \\eta + \\frac{521\\cdot 40 \\eta L^2}{(1-\\widehat\\rho^2_x)^2 (1-\\widehat\\rho^2_y)^2} + \\frac{448\\cdot11 }{ (1-\\alpha^2)^2} + 52\\cdot4 \\Big) \\nonumber \\\\\n \\leq &~ \\textstyle -\\frac{\\eta}{8\\lambda} + \\eta\\Big( -\\frac{1}{8\\lambda} + \\frac{ 9 L}{8 } + \\frac{5160}{ (1-\\alpha^2)^2}\\Big) \n \n \n \\leq \n -\\frac{\\eta}{8\\lambda}.\n\\end{align*}\nHence we have\n\\begin{align}\n {\\mathbf{z}}^\\top \\Omega^{t+1} \\leq \\textstyle {\\mathbf{z}}^\\top \\Omega^{t} -\\frac{\\eta}{8\\lambda} \\Omega_0^t -\\frac{1}{2}\\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] + \\frac{6262n+6}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}\\eta^2\\sigma^2.\\label{eq:l_fun_comp} \n\\end{align}\nThus summing up \\eqref{eq:l_fun_comp} for $t=0,1,\\ldots,T-1$ gives \n\\begin{align}\n \\frac{1}{\\lambda T}\\sum_{t=0}^{T-1} \\Omega_0^t +\\frac{4}{\\eta T}\\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] \\leq \\textstyle \\frac{8\\left({\\mathbf{z}}^\\top \\Omega^0 - {\\mathbf{z}}^\\top \\Omega^{T}\\right)}{\\eta T} + \\frac{8(6262n+6)}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\eta\\sigma^2. 
\\label{eq:thm3_avg-Omega}\n\\end{align}\nFrom ${\\mathbf{y}}_i^{-1}=\\mathbf{0}$, $\\underline{\\mathbf{y}}_i^{-1}=\\mathbf{0}$, $\\nabla F_i({\\mathbf{x}}_i^{-1},\\xi_i^{-1})=\\mathbf{0}$, $\\underline{\\mathbf{x}}_i^{0} =\\mathbf{0}$, ${\\mathbf{x}}_i^0 = {\\mathbf{x}}^0, \\forall\\, i \\in \\mathcal{N}$, we have \n\\begin{gather}\n \\|\\Y^0_\\perp\\|^2 = \\|\\nabla \\mathbf{F}^0(\\mathbf{I}-\\mathbf{J})\\|^2\\leq\\|\\nabla \\mathbf{F}^0\\|^2, \n \\quad \\|\\Y^{0}-\\underline\\Y^{0}\\|^2 = \\|\\nabla \\mathbf{F}^0-Q_{\\mathbf{y}}\\big[\\nabla \\mathbf{F}^0\\big]\\|^2 \\leq \\alpha^2 \\|\\nabla \\mathbf{F}^0\\|^2, \\label{eq:initial_thm3_1}\\\\\n \\|\\mathbf{X}^0_\\perp\\|^2=0, \\quad \\|\\mathbf{X}^0-\\underline\\mathbf{X}^{0}\\|^2=0, \n \\quad \\Phi^0=n \\phi_\\lambda({\\mathbf{x}}^0).\n \\label{eq:initial_thm3_2}\n\\end{gather}\nNote \\eqref{eq:end_thm2} still holds here. \nWith \\eqref{eq:initial_thm3_1}, \\eqref{eq:initial_thm3_2}, \\eqref{eq:end_thm2}, and the nonnegativity of $ \\mathbb{E}[\\|\\mathbf{X}^T_\\perp\\|^2]$, $\\mathbb{E}[\\|\\mathbf{X}^{T}-\\underline\\mathbf{X}^{T}\\|^2]$, $\\mathbb{E}[\\|\\Y^T_\\perp\\|^2]$, $\\mathbb{E}[\\|\\Y^{T}-\\underline\\Y^{T}\\|^2]$, we have\n\\begin{align}\n{\\mathbf{z}}^\\top \\Omega^0 - {\\mathbf{z}}^\\top \\Omega^{T} \\le \\textstyle\n \\frac{521}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\eta^2 \\mathbb{E}[\\|\\nabla \\mathbf{F}^0\\|^2] + \\eta^2 \\mathbb{E}[\\|\\nabla \\mathbf{F}^0\\|^2] + \\lambda n \\phi_\\lambda({\\mathbf{x}}^0) -\\lambda n \\phi_\\lambda^*, \\label{eq:them3_Omega0_OmegaT}\n\\end{align}\nwhere we have used $\\alpha^2\\leq 1$ from Assumption \\ref{assu:compressor}.\n\nBy the convexity of the Frobenius norm and \\eqref{eq:them3_Omega0_OmegaT}, we obtain from \\eqref{eq:thm3_avg-Omega} that \n\\begin{align}\n &~ \\frac{1}{n\\lambda^2} \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{\\tau}-\\mathbf{X}^{\\tau}\\|^2\\big] +\\frac{4}{n \\lambda \\eta} \\mathbb{E}[\\|\\mathbf{X}^\\tau_\\perp\\|^2]\n \\leq \\frac{1}{n\\lambda^2} \\frac{1}{T}\\sum_{t=0}^{T-1} \\mathbb{E}\\big[\\|\\widehat\\mathbf{X}^{t}-\\mathbf{X}^{t}\\|^2\\big] +\\frac{4}{n \\lambda \\eta T}\\sum_{t=0}^{T-1} \\mathbb{E}[\\|\\mathbf{X}^t_\\perp\\|^2] \\nonumber \\\\\n \\leq & \\textstyle \\frac{8\\left( \\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{ \\eta T} \n +\\frac{50096n+48}{(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\frac{\\eta}{n\\lambda}\\sigma^2\n \\textstyle + \\frac{8\\cdot521 \\eta }{n\\lambda T (1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} \\mathbb{E}\\big[ \\|\\nabla \\mathbf{F}^0\\|^2\\big] + \n \\frac{8\\eta}{n\\lambda T} \\mathbb{E}\\big[ \\|\\nabla \\mathbf{F}^0\\|^2\\big] \\nonumber \\\\\n \\leq &~\\textstyle \\frac{8\\left(\\phi_\\lambda({\\mathbf{x}}^0) - \\phi_\\lambda^*\\right)}{\\eta T} \n +\\frac{(50096n+48)\\eta \\sigma^2}{n\\lambda(1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)} + \\textstyle \\frac{4176 \\eta \\mathbb{E}\\left[ \\|\\nabla \\mathbf{F}^0\\|^2\\right] }{n\\lambda T (1-\\widehat\\rho^2_x)^2(1-\\widehat\\rho^2_y)}.
\\label{eq:them_CDProxSGT0}\n\\end{align}\nWith $\\|\\nabla \\phi_\\lambda ({\\mathbf{x}}_i^\\tau)\\|^2 = \\frac{\\|{\\mathbf{x}}_i^\\tau-\\widehat{\\mathbf{x}}_i^\\tau\\|^2}{\\lambda^{2}}$ from Lemma \\ref{lem:xhat_x}, we complete the proof.\n\\end{proof}\n\n\\section{Numerical Experiments}\\label{sec:numerical_experiments}\nIn this section, we test the proposed algorithms on training two neural network models, in order to demonstrate their better generalization over momentum variance-reduction methods and large-batch training methods, and to demonstrate their success in handling heterogeneous data even when only compressed model parameter and gradient information are communicated among workers.\nOne neural network that we test is LeNet5 \\cite{lecun1989backpropagation} on the FashionMNIST dataset \\cite{xiao2017fashion}, and the other is FixupResNet20 \\cite{zhang2019fixup} on Cifar10 \\cite{krizhevsky2009learning}. \n\nOur experiments are representative of the practical performance of our methods. Among several closely-related works, \\cite{xin2021stochastic} includes no experiments, and \\cite{mancino2022proximal,zhao2022beer} only test on tabular data and MNIST. \\cite{koloskova2019decentralized-b} tests its method on Cifar10 but needs similar data distributions on all workers for good performance.\nFashionMNIST has a similar scale as MNIST but poses a more challenging classification task \\cite{xiao2017fashion}. \nCifar10 is more complex, and FixupResNet20 has more layers than LeNet5. \n\nAll the compared algorithms are implemented in Python with PyTorch and MPI4PY (for distributed computing).\nThey run on a Dell workstation with two Quadro RTX 5000 GPUs. We use the two GPUs to host 5 workers, which communicate over a ring-structured network (so each worker can only communicate with its two neighbors). Uniform weights are used, i.e., $W_{ji} = \\frac{1}{3}$ for each $j\\in\\mathcal{N}_i$.\nBoth FashionMNIST and Cifar10 have 10 classes. We distribute each dataset onto the 5 workers based on the class labels, namely, each worker holds 2 classes of data points, and thus the data are heterogeneous across the workers.\n\nFor all methods, we report their objective values on training data, prediction accuracy on testing data, and consensus errors at each epoch. \nTo save time, the objective values are computed as the average of the losses that are evaluated during the training process (i.e., on the sampled data instead of the whole training data) plus the regularizer per epoch. \nFor the testing accuracy, we first compute the accuracy on the whole testing data for each worker by using its own model parameter and then take the average.\nThe consensus error is simply $\\|\\mathbf{X}_\\perp\\|^2$.
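For concreteness, the mixing matrix of the 5-worker ring described above can be built and sanity-checked as follows. This is a minimal sketch for illustration; we take the mixing rate to be the spectral norm $\\|\\mathbf{W}-\\mathbf{J}\\|_2$ appearing in the consensus contraction bounds, and the printed value is specific to this topology and weight choice.\n\\begin{verbatim}\nimport numpy as np\n\nn = 5\nW = np.zeros((n, n))\nfor i in range(n):\n    # ring topology: worker i mixes with itself and its two neighbors\n    for j in (i - 1, i, i + 1):\n        W[j % n, i] = 1.0 \/ 3.0\n\nJ = np.ones((n, n)) \/ n\nprint(np.allclose(W.sum(axis=0), 1.0),\n      np.allclose(W.sum(axis=1), 1.0))  # doubly stochastic: True True\nrho = np.linalg.norm(W - J, 2)          # spectral-norm mixing rate\nprint(rho)                              # ~0.539 for this 5-worker ring\n\\end{verbatim}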
\\subsection{Sparse Neural Network Training} \\label{subsect:RegL1}\nIn this subsection, we test the non-compressed method DProxSGT and compare it with AllReduce (a centralized method, used as a baseline), DEEPSTORM\\footnote{For DEEPSTORM, we implement DEEPSTORM v2 in \\cite{mancino2022proximal}.} and ProxGT-SA \\cite{xin2021stochastic} on solving \\eqref{eq:decentralized_problem}, where $f$ is the loss on the whole training data and $r({\\mathbf{x}}) = \\mu\\|{\\mathbf{x}}\\|_1$ serves as a sparse regularizer that encourages a sparse model.\n\nFor training LeNet5 on FashionMNIST, we set $\\mu= 10^{-4}$ and run each method to 100 epochs. The learning rate $\\eta$ and batchsize are set to $0.01$ and 8 for AllReduce and DProxSGT. \nDEEPSTORM uses the same $\\eta$ and batchsize but with a larger initial batchsize 200, \nand its momentum parameter is tuned to $\\beta=0.8$ in order to yield the best performance.\nProxGT-SA is a large-batch training method.\nWe set its batchsize to 256 and accordingly apply a larger stepsize $\\eta=0.3$, which is the best among $\\{0.1, 0.2, 0.3, 0.4\\}$.\n\nFor training FixupResNet20 on Cifar10, we set $\\mu= 5 \\times 10^{-5}$ and run each method to 500 epochs.\nThe learning rate and batchsize are set to $\\eta=0.02$ and 64 for AllReduce, DProxSGT, and DEEPSTORM. The initial batchsize is set to 1600 for DEEPSTORM, and the momentum parameter to $\\beta=0.8$. \nProxGT-SA uses a larger batchsize 512 and a larger stepsize $\\eta=0.1$, which gives the best performance among $\\{0.05, 0.1, 0.2, 0.3\\}$.\n\n\\begin{figure}[ht] \n\\begin{center} \n\\includegraphics[width=.9\\columnwidth]{.\/figures\/noncompressed} \n\\vspace{-0.2cm}\n\\caption{Results of training sparse neural networks by non-compressed methods with $r({\\mathbf{x}}) = \\mu \\|{\\mathbf{x}}\\|_1$ for the same number of epochs. Left: LeNet5 on FashionMNIST with $\\mu=10^{-4}$. Right: FixupResNet20 on Cifar10 with $\\mu=5\\times 10^{-5}$.}\n\\label{fig:RegL1}\n\\end{center} \n\\end{figure}\n\nThe results for all methods are plotted in Figure \\ref{fig:RegL1}. For LeNet5, DProxSGT produces almost the same curves as the centralized training method AllReduce, while on FixupResNet20, DProxSGT even outperforms AllReduce in terms of testing accuracy. This could be because AllReduce aggregates stochastic gradients from all the workers for each update and thus, equivalently, it actually uses a larger batchsize.\nDEEPSTORM performs as well as our method DProxSGT on training LeNet5. However, it gives lower testing accuracy than DProxSGT and also oscillates much more severely on training the more complex neural network FixupResNet20. This appears to be caused by the momentum variance-reduction scheme used in DEEPSTORM.\nIn addition, we see that the large-batch training method ProxGT-SA performs much worse than DProxSGT within the same number of epochs (i.e., data passes), especially on training FixupResNet20.\n\n\\subsection{Neural Network Training by Compressed Methods} \\label{subsect:compress}\nIn this subsection, we compare CDProxSGT with two state-of-the-art compressed training methods: Choco-SGD \\cite{koloskova2019decentralized,koloskova2019decentralized-b} and BEER \\cite{zhao2022beer}. As Choco-SGD and BEER are studied only for problems without a regularizer, we set $r({\\mathbf{x}})=0$ in \\eqref{eq:decentralized_problem} for the tests. Again, we compare their performance on training LeNet5 and FixupResNet20.\nThe two non-compressed methods AllReduce and DProxSGT are included as baselines. \nThe same compressors are used for CDProxSGT, Choco-SGD, and BEER, when compression is applied; they are top-$k$ sparsifiers \\cite{aji2017sparse}, and a minimal sketch of such a compressor, applied column-wise, is given below.\n\n\\begin{figure}[htbp] \n\\begin{center} \n\\includegraphics[width=.9\\columnwidth]{.\/figures\/Compressed} \n\\vspace{-0.2cm}\n\\caption{Results of training neural network models by compressed methods for the same number of epochs. Left: LeNet5 on FashionMNIST. Right: FixupResNet20 on Cifar10.}\n\\label{fig:Compress}\n\\end{center} \n\\end{figure} \n\nWe run each method to 100 epochs for training LeNet5 on FashionMNIST.
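The sketch is illustrative only (the fraction-based wrapper \\texttt{topk\\_frac} is our own naming, not from \\cite{aji2017sparse}); it keeps the largest fraction of entries of each column, matching the column-wise convention $Q_{\\mathbf{x}}[\\mathbf{X}] = [Q_x[{\\mathbf{x}}_1],\\ldots,Q_x[{\\mathbf{x}}_n]]$, and a top-$k$ compressor of this form is $\\alpha$-contractive with $\\alpha^2 = 1-k\/d$, consistent with the bound $\\mathbb{E}_Q\\|Q[{\\mathbf{v}}]-{\\mathbf{v}}\\|^2\\leq\\alpha^2\\|{\\mathbf{v}}\\|^2$ used in our analysis.\n\\begin{verbatim}\nimport numpy as np\n\ndef topk_frac(v, frac):\n    # top-k compressor: keep the largest frac*d entries of v in\n    # absolute value and zero out the rest\n    k = max(1, int(round(frac * v.size)))\n    out = np.zeros_like(v)\n    idx = np.argsort(np.abs(v))[-k:]\n    out[idx] = v[idx]\n    return out\n\ndef Q(X, frac=0.3):\n    # column-wise application: one column per worker\n    return np.stack([topk_frac(X[:, i], frac)\n                     for i in range(X.shape[1])], axis=1)\n\nrng = np.random.default_rng(0)\nd, frac = 10, 0.3\nv = rng.standard_normal(d)\nerr = np.linalg.norm(topk_frac(v, frac) - v) ** 2\nprint(err <= (1 - frac) * np.linalg.norm(v) ** 2)  # True: alpha^2 = 1 - k\/d\n\\end{verbatim}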
\nThe compressors $Q_y$ and $Q_x$ are set to top-$k(0.3)$ \\cite{aji2017sparse}, i.e., keeping the largest $30\\%$ of the elements of an input vector in absolute value and zeroing out all others.\nWe set the batchsize to 8 and tune the learning rate $\\eta$ to $0.01$ for AllReduce, DProxSGT, CDProxSGT and Choco-SGD, and for CDProxSGT, we set $\\gamma_x=\\gamma_y=0.5$. \nBEER is a large-batch training method. It uses a larger batchsize 256 and accordingly a larger learning rate $\\eta=0.3$, which appears to be the best among $\\{0.1, 0.2, 0.3, 0.4\\}$. \n\nFor training FixupResnet20 on the Cifar10 dataset, we run each method to 500 epochs. We take top-$k(0.4)$ \\cite{aji2017sparse} as the compressors $Q_y$ and $Q_x$ and set $\\gamma_x=\\gamma_y=0.8$.\nFor AllReduce, DProxSGT, CDProxSGT and Choco-SGD, we set their batchsize to 64 and tune the learning rate $\\eta$ to $0.02$. For BEER, we use a larger batchsize 512 and a larger learning rate $\\eta=0.1$, which is the best among\n$\\{0.05, 0.1, 0.2, 0.3\\}$. \n\nThe results are shown in Figure \\ref{fig:Compress}. \nFor both models, CDProxSGT yields almost the same curves of objective values and testing accuracy as its non-compressed counterpart DProxSGT and the centralized non-compressed method AllReduce. This indicates about a 70\\% saving in communication for the training of LeNet5 and a 60\\% saving for FixupResnet20 without sacrificing the testing accuracy.\nIn comparison, BEER performs significantly worse than the proposed method CDProxSGT within the same number of epochs in terms of all three measures, especially on training the more complex neural network FixupResnet20, which should be attributed to the use of a larger batch by BEER. Choco-SGD can produce comparable objective values. However, its testing accuracy is much lower than that produced by our method CDProxSGT.\nThis is likely because Choco-SGD cannot handle data heterogeneity, while CDProxSGT applies gradient tracking to successfully address it.\n\n\n\\section{Conclusion}\nWe have proposed two decentralized proximal stochastic gradient methods, DProxSGT and CDProxSGT, for nonconvex composite problems with data heterogeneously distributed on the computing nodes of a connected graph. CDProxSGT is an extension of DProxSGT obtained by applying compression to the communicated model parameter and gradient information. Both methods need only a single or $\\mathcal{O}(1)$ samples for each update, which is important for good generalization performance on training deep neural networks. Gradient tracking is used in both methods to address data heterogeneity. An $\\mathcal{O}\\left( \\frac{1}{ \\epsilon^4}\\right)$ sample complexity and communication complexity is established for both methods to produce an expected $\\epsilon$-stationary solution. 
\nNumerical experiments on training neural networks demonstrate the good generalization performance of the proposed methods and their ability to handle heterogeneous data.\n\n\n\\section{Introduction} \nIn this paper, we consider solving nonconvex stochastic composite problems in a decentralized setting:\n \\vspace{-0.1cm}\n\\begin{equation}\\label{eq:problem_original}\n \\begin{aligned}\n& \\min_{{\\mathbf{x}}\\in\\mathbb{R}^d} \\phi({\\mathbf{x}}) = f({\\mathbf{x}}) + r({\\mathbf{x}}),\\\\[-0.1cm] \n& \\text{with } f({\\mathbf{x}})=\\frac{1}{n}\\sum_{i=1}^n f_i({\\mathbf{x}}) \\text{ and } f_i({\\mathbf{x}})\\!=\\!\\mathbb{E}_{\\xi_i \\sim \\mathcal{D}_i}[F_i({\\mathbf{x}},\\xi_i)].\n \\end{aligned}\n \\vspace{-0.1cm} \n\\end{equation}\nHere, $\\{\\mathcal{D}_i\\}_{i=1}^n$ are possibly \\emph{non-i.i.d.\\ data} distributions on $n$ machines\/workers that can be viewed as nodes of a connected graph $\\mathcal{G}$, and each $F_i(\\cdot, \\xi_i)$ can only be accessed by the $i$-th worker. \nWe are interested in problems that satisfy the following structural assumption. \n\\begin{assumption}[Problem structure] \\label{assu:prob}\nWe assume that \n\\vspace{-1.5mm}\n\\begin{itemize} \n\\item[(i)] $r$ is closed convex and possibly nondifferentiable.\n\\item[(ii)] Each $f_i$ is $L$-smooth in $\\dom(r)$, i.e., $\\|\\nabla f_i({\\mathbf{x}}) - \\nabla f_i({\\mathbf{y}})\\| \\le L \\|{\\mathbf{x}}- {\\mathbf{y}}\\|$, for any ${\\mathbf{x}}, {\\mathbf{y}}\\in\\dom(r)$. \n\\item[(iii)] $\\phi$ is lower bounded, i.e., $\\phi^* \\triangleq \\min_{\\mathbf{x}} \\phi({\\mathbf{x}}) > -\\infty$.\n\\end{itemize}\n\\vspace{-2mm}\n\\end{assumption}\n\n\nLet $\\mathcal{N}=\\{1, 2, \\ldots, n\\}$ be the set of nodes of $\\mathcal{G}$ and $\\mathcal{E}$ the set of edges.\nFor each $i\\in\\mathcal{N}$, denote $\\mathcal{N}_i$ as the set containing worker $i$ and its neighbors, i.e., $\\mathcal{N}_i = \\{j: (i,j) \\in \\mathcal{E}\\}\\cup \\{i\\}$. Every worker can only communicate with its neighbors. To solve \\eqref{eq:problem_original} collaboratively, each worker $i$ maintains a copy, denoted as ${\\mathbf{x}}_i$, of the variable ${\\mathbf{x}}$. With these notations, \\eqref{eq:problem_original} can be equivalently formulated as\n\\vspace{-0.1cm}\n{\\begin{align}\\label{eq:decentralized_problem} \n\\begin{split}\n\\min_{\\mathbf{X} \\in \\mathbb{R}^{d\\times n}} & \\frac{1}{n}\\sum_{i=1}^n \\phi_i({\\mathbf{x}}_i), \\text{with }\\phi_i({\\mathbf{x}}_i) \\triangleq f_i({\\mathbf{x}}_i) + r({\\mathbf{x}}_i), \\\\\n \\mbox{s.t. } \\quad & {\\mathbf{x}}_i={\\mathbf{x}}_j, \\forall\\, j\\in \\mathcal{N}_i, \\forall\\, i = 1,\\ldots, n.\n\\end{split} \n\\end{align}}\n\\vspace{-0.5cm}\n\nProblems with a \\emph{nonsmooth} regularizer, i.e., in the form of \\eqref{eq:problem_original}, appear in many applications such as $\\ell_1$-regularized signal recovery \\cite{eldar2014phase,duchi2019solving}, online nonnegative matrix factorization \\cite{guan2012online}, and training sparse neural networks \\cite{scardapane2017group, yang2020proxsgd}. 
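\nFor instance, when $r({\\mathbf{x}}) = \\mu\\|{\\mathbf{x}}\\|_1$, the proximal mapping of $r$ that underlies proximal gradient-type updates has the well-known closed-form soft-thresholding solution. A minimal NumPy sketch (for illustration only; the function name is ours) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef prox_l1(x, tau):\n    # prox of tau*||.||_1 at x:\n    # argmin_z 0.5*||z - x||^2 + tau*||z||_1.\n    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)\n\\end{verbatim}\nA generic proximal stochastic gradient step with stepsize $\\eta$ then reads \\texttt{x = prox\\_l1(x - eta * g, eta * mu)} for a stochastic gradient \\texttt{g}.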
When the data involved in these applications are distributed onto (or collected by workers on) a decentralized network, decentralized algorithms become necessary.\n\nAlthough decentralized optimization has attracted a lot of research interest in recent years, most existing works focus on strongly convex problems \\cite{scaman2017optimal, koloskova2019decentralized} or convex problems \\cite{6426375,taheri2020quantized} or smooth nonconvex problems \\cite{bianchi2012convergence, di2016next, wai2017decentralized, lian2017can,zeng2018nonconvex}.\nFew works have studied \\emph{nonsmooth nonconvex} decentralized \\emph{stochastic} optimization like \\eqref{eq:decentralized_problem} that we consider. \\cite{chen2021distributed, xin2021stochastic, mancino2022proximal} are among the exceptions. However, they either require many data samples for each update or assume a so-called mean-squared smoothness condition, which is stronger than the smoothness condition in Assumption~\\ref{assu:prob}(ii), in order to perform a momentum-based variance-reduction step. Though these methods can have convergence (rate) guarantees, they often yield poor generalization performance on training deep neural networks, as demonstrated in \\cite{lecun2012efficient, keskar2016large} for large-batch training methods and in our numerical experiments for momentum variance-reduction methods.\n\nOn the other hand, many distributed optimization methods \\cite{shamir2014distributed,lian2017can,wang2018cooperative} assume that the data are i.i.d.\\ across the workers.\nHowever, this assumption does not hold in many real-world scenarios, for instance, due to data privacy concerns that require local data to stay on-premise.\nData heterogeneity can result in significant degradation of the performance of these methods.\nThough some papers do not assume i.i.d.\\ data, they require certain data similarity, such as bounded stochastic gradients \\cite{koloskova2019decentralized,koloskova2019decentralized-b, taheri2020quantized} and bounded gradient dissimilarity \\cite{ tang2018communication,assran2019stochastic, tang2019deepsqueeze, vogels2020practical}. \n\nTo address the critical practical issues mentioned above, we propose a decentralized proximal stochastic gradient tracking method that needs only a single or $O(1)$ data samples (per worker) for each update. With no assumption on data similarity, it can still achieve the optimal convergence rate for solving problems satisfying the conditions in Assumption~\\ref{assu:prob} and yield good generalization performance. In addition, to reduce the communication cost, we give a compressed version of the proposed algorithm by performing compression on the communicated information. The compressed algorithm inherits the benefits of its non-compressed counterpart. \n\n\\subsection{Our Contributions}\n\nOur contributions are three-fold. First, we propose two decentralized algorithms, one without compression (named DProxSGT) and the other with compression (named CDProxSGT), for solving \\emph{decentralized nonconvex nonsmooth stochastic} problems. Different from existing methods, e.g., \\cite{xin2021stochastic, wang2021distributed, mancino2022proximal}, which need a very large batchsize and\/or perform momentum-based variance reduction to handle the challenge from the nonsmooth term, DProxSGT needs only $\\mathcal{O}(1)$ data samples for each update, without performing variance reduction. 
The use of a small batch and a standard proximal gradient update enables our method to achieve significantly better generalization performance than the existing methods, as we demonstrate on training neural networks. To the best of our knowledge, CDProxSGT is the first decentralized algorithm that applies a compression scheme for solving nonconvex nonsmooth stochastic problems, and it inherits the advantages of the non-compressed method DProxSGT. Even when applied to the special class of smooth nonconvex problems, CDProxSGT can perform significantly better than state-of-the-art methods, in terms of generalization and handling data heterogeneity.\n\nSecond, we establish an optimal sample complexity result of DProxSGT, which matches the lower bound in \\cite{arjevani2022lower} in terms of the dependence on a target tolerance $\\epsilon$, to produce an $\\epsilon$-stationary solution. Due to the coexistence of nonconvexity, nonsmoothness, large stochastic variance (due to the small batch and no use of variance reduction, for better generalization), and decentralization, the analysis is highly non-trivial. We employ the tool of the Moreau envelope and construct a decreasing Lyapunov function by carefully controlling the errors introduced by stochasticity and decentralization. \n\nThird, we establish the iteration complexity result of the proposed compressed method CDProxSGT, which is of the same order as that for DProxSGT and thus also optimal in terms of the dependence on a target tolerance. The analysis builds on that of DProxSGT but is more challenging due to the additional compression error and the use of gradient tracking. Nevertheless, we obtain our results by making the same (or even weaker) assumptions as those assumed by state-of-the-art methods \\cite{koloskova2019decentralized-b, zhao2022beer}. \n\n\\subsection{Notation}\\label{sec:notation}\nFor any vector ${\\mathbf{x}}\\in\\mathbb{R}^{d}$, we use $\\|{\\mathbf{x}}\\|$ for the $\\ell_2$ norm. For any matrix $\\mathbf{A}$, $\\|\\mathbf{A}\\|$ denotes the Frobenius norm and $\\|\\mathbf{A}\\|_2$ the spectral norm.\n$\\mathbf{X} = [{\\mathbf{x}}_1,{\\mathbf{x}}_2,\\ldots,{\\mathbf{x}}_n]\\in\\mathbb{R}^{d\\times n}$ concatenates all local variables. The superscript $^t$ will be used for the iteration or communication round.\n$\\nabla F_i({\\mathbf{x}}_i^t,\\xi_i^t)$ denotes a local stochastic gradient of $F_i$ at ${\\mathbf{x}}_i^t$ with a random sample $\\xi_i^t$. 
The column concatenation of $\\{\\nabla F_i({\\mathbf{x}}_i^t,\\xi_i^t)\\}$ is denoted as \n\\vspace{-0.1cm}\n\\begin{equation*}\n\\nabla \\mathbf{F}^t = \\nabla \\mathbf{F}(\\mathbf{X}^t,\\Xi^t) = [ \\nabla F_1({\\mathbf{x}}_1^t,\\xi_1^t),\\ldots, \\nabla F_n({\\mathbf{x}}_n^t,\\xi_n^t)],\\vspace{-0.1cm}\n\\end{equation*}\nwhere $\\Xi^t = [\\xi_1^t,\\xi_2^t,\\ldots,\\xi_n^t]$.\nSimilarly, we denote\n\\vspace{-0.1cm}\n\\begin{equation*} \n\\nabla \\mathbf{f}^t \n= [ \\nabla f_1({\\mathbf{x}}_1^t ),\\ldots, \\nabla f_n({\\mathbf{x}}_n^t )].\\vspace{-0.1cm}\n\\end{equation*} \nFor any $\\mathbf{X} \\in \\mathbb{R}^{d\\times n}$,\nwe define \n\\vspace{-0.1cm}\n\\begin{equation*} \\bar{{\\mathbf{x}}} = \\textstyle\\frac{1}{n}\\mathbf{X}\\mathbf{1}, \\quad \\overline{\\mathbf{X}} = \\mathbf{X}\\mathbf{J} = \\bar{{\\mathbf{x}}}\\mathbf{1}^\\top,\\quad \\mathbf{X}_\\perp = \\mathbf{X}(\\mathbf{I} - \\mathbf{J}), \\vspace{-0.1cm}\n\\end{equation*} \nwhere $\\mathbf{1}$ is the all-one vector, and $\\mathbf{J} = \\frac{\\mathbf{1}\\mathbf{1}^\\top}{n}$ is the averaging matrix.\nSimilarly, we define the mean vectors \n\\vspace{-0.1cm}\n\\begin{equation*} \n\\overline{\\nabla} \\mathbf{F}^t = \\textstyle\\frac{1}{n} \\nabla\\mathbf{F}^t \\mathbf{1},\\ \\overline{\\nabla} \\mathbf{f}^t = \\textstyle\\frac{1}{n} \\nabla\\mathbf{f}^t \\mathbf{1}.\\vspace{-0.1cm}\n\\end{equation*} \nWe will use $\\mathbb{E}_t$ for the expectation with respect to the random samples $\\Xi^t$ at the $t$th iteration and $\\mathbb{E}$ for the full expectation. $\\mathbb{E}_Q$ denotes the expectation with respect to a stochastic compressor $Q$.\n\n\n\n\n\n\\section{Related Works}\nThe literature on decentralized optimization has grown vast, and it is impossible to be exhaustive here. Below we review existing works on decentralized algorithms for solving nonconvex problems, with or without using a compression technique. To clarify how our methods differ from existing ones, we compare them to a few relevant methods in Table \\ref{tab:method_compare}.\n\n\\begin{table*}[t]\n\\caption{Comparison between our methods and some relevant methods: ProxGT-SA and ProxGT-SR-O in \\cite{xin2021stochastic}, DEEPSTORM \\cite{mancino2022proximal}, Choco-SGD \\cite{koloskova2019decentralized-b}, and BEER \\cite{zhao2022beer}. We use ``CMP'' to represent whether compression is performed by a method. \nGRADIENTS represents additional assumptions on the stochastic gradients in addition to those made in Assumption \\ref{assu:stoc_grad}. \nSMOOTHNESS represents the smoothness condition, where ``mean-squared'' means $\\mathbb{E}_{\\xi_i}[\\|\\nabla F_i({\\mathbf{x}}; \\xi_i) - \\nabla F_i({\\mathbf{y}}; \\xi_i)\\|^2]\\le L^2\\|{\\mathbf{x}}-{\\mathbf{y}}\\|^2$, which is stronger than the $L$-smoothness of $f_i$.\nBS is the required batchsize to get an $\\epsilon$-stationary solution. VR and MMT represent whether variance reduction or momentum is used. 
Large batchsize and\/or momentum variance reduction can degrade the generalization performance, as we demonstrate in our numerical experiments.\n}\n\\label{tab:method_compare}\n\\begin{center}\n\\begin{small}\n\\begin{sc}\n\\begin{tabular}{lccccc}\n\\toprule\n Methods & CMP & $r\\not\\equiv 0$ & GRADIENTS & SMOOTHNESS & (BS, VR, MMT) \\\\\n\\midrule\n ProxGT-SA & No & Yes & No & $f_i$ is smooth & \\big($\\mathcal{O}(\\frac{1}{\\epsilon^2})$, No, No\\big) \\\\[0.1cm]\n ProxGT-SR-O & No & Yes & No & mean-squared & \\big($\\mathcal{O}(\\frac{1}{\\epsilon})$, Yes, No\\big) \\\\[0.1cm]\n DEEPSTORM & No & Yes & No & mean-squared & ($\\mathcal{O}(1)$, Yes, Yes) \\\\\n \\textbf{DProxSGT (this paper)} & No & Yes & No & $f_i$ is smooth & ($\\mathcal{O}(1)$, No, No) \\\\\n\\midrule\n Choco-SGD & Yes & No & $\\mathbb{E}_{\\xi}[\\|\\nabla F_i({\\mathbf{x}},\\xi_i)\\|^2]\\leq G^2$ & $f_i$ is smooth & ($\\mathcal{O}(1)$, No, No) \\\\\n BEER & Yes & No & No & $f$ is smooth & \\big($\\mathcal{O}(\\frac{1}{\\epsilon^2})$, No, No\\big) \\\\[0.1cm]\n \\textbf{CDProxSGT (this paper)} & Yes & Yes & No & $f_i$ is smooth & ($\\mathcal{O}(1)$, No, No) \\\\\n\\bottomrule\n\\end{tabular}\n\\end{sc}\n\\end{small}\n\\end{center}\n\\vskip -0.1in\n\\end{table*}\n\n\\subsection{Non-compressed Decentralized Methods}\n\nFor nonconvex decentralized problems with a nonsmooth regularizer, many deterministic decentralized methods have been studied, e.g., \\cite{di2016next, wai2017decentralized, zeng2018nonconvex, chen2021distributed, scutari2019distributed}.\nWhen only stochastic gradients are available, a majority of existing works focus on smooth cases without a regularizer or a hard constraint, such as \\cite{lian2017can, assran2019stochastic, tang2018d}, \ngradient tracking based methods \\cite{lu2019gnsd,zhang2019decentralized, koloskova2021improved},\nand momentum-based variance reduction methods \\cite{xin2021hybrid, zhang2021gt}.\nSeveral works such as \\cite{bianchi2012convergence, wang2021distributed, xin2021stochastic, mancino2022proximal} have studied stochastic decentralized methods for problems with a nonsmooth term $r$. \nHowever, they either consider some special $r$ or require a large batch size. \\cite{bianchi2012convergence} considers the case where $r$ is an indicator function of a compact convex set; it also requires bounded stochastic gradients.\n\\cite{wang2021distributed} focuses on problems with a polyhedral $r$, and it requires a large batch size of $\\mathcal{O}(\\frac{1}{\\epsilon})$ to produce an (expected) $\\epsilon$-stationary point.\n\\cite{xin2021stochastic, mancino2022proximal} are the most closely related to our methods. To produce an (expected) $\\epsilon$-stationary point, the methods in \\cite{xin2021stochastic} require a large batch size, either $\\mathcal{O}(\\frac{1}{\\epsilon^2})$ or $\\mathcal{O}(\\frac{1}{\\epsilon})$ if variance reduction is applied. \nThe method in \\cite{mancino2022proximal} requires only $\\mathcal{O}(1)$ samples for each update by using a momentum-type variance reduction scheme. However, in order to reduce the variance, it needs a stronger mean-squared smoothness assumption. In addition, the momentum variance reduction step can often hurt the generalization performance on training complex neural networks, as we will demonstrate in our numerical experiments.\n\n\\subsection{Compressed Distributed Methods}\n\nCommunication efficiency is a crucial factor when designing a distributed optimization strategy. 
The current machine learning paradigm oftentimes resorts to models with a large number of parameters, which implies a high communication cost when the models or gradients are transferred from workers to the parameter server or among workers. This may incur significant latency in training. Hence, communication-efficient algorithms based on model or gradient compression have been actively sought.\n\nTwo major groups of compression operators are quantization and sparsification. The quantization approaches include 1-bit SGD \\cite{seide20141}, SignSGD \\cite{bernstein2018signsgd}, QSGD \\cite{alistarh2017qsgd}, and TernGrad \\cite{wen2017terngrad}. The sparsification approaches include Random-$k$ \\cite{stich2018sparsified}, Top-$k$ \\cite{aji2017sparse}, Threshold-$v$ \\cite{dutta2019discrepancy} and ScaleCom \\cite{chen2020scalecom}. Direct compression may slow down the convergence, especially when the compression ratio is high. Error compensation or error-feedback can mitigate this effect by saving the compression error in one communication step and compensating for it in the next communication step before another compression \\cite{seide20141}. These compression operators were first designed to compress the gradients in the centralized setting \\cite{tang2019DoubleSqueeze,karimireddy2019error}.\n \nCompression can also be applied in the decentralized setting for smooth problems, i.e., \\eqref{eq:decentralized_problem} with $r=0$. \\cite{tang2019deepsqueeze} applies compression with error compensation to the communication of model parameters in the decentralized setting.\nChoco-Gossip \\cite{koloskova2019decentralized} is another communication scheme that mitigates the slowdown effect of compression. Instead of compressing the model parameters themselves, it compresses the residual between the model parameters and their estimates. Choco-SGD uses Choco-Gossip to solve \\eqref{eq:decentralized_problem}. BEER \\cite{zhao2022beer} includes gradient tracking and compresses both the tracked stochastic gradients and the model parameters in each iteration by Choco-Gossip.\nBEER needs a large batchsize of $\\mathcal{O}(\\frac{1}{\\epsilon^2})$ in order to produce an $\\epsilon$-stationary solution.\nDoCoM-SGT \\cite{DBLP:journals\/corr\/abs-2202-00255} performs updates similar to BEER but with a momentum term for the update of the tracked gradients, and it only needs an $\\mathcal{O}(1)$ batchsize. \n\nOur proposed CDProxSGT is for solving decentralized problems in the form of \\eqref{eq:decentralized_problem} with a nonsmooth $r({\\mathbf{x}})$. To the best of our knowledge, CDProxSGT is the first compressed decentralized method for nonsmooth nonconvex problems without the use of a large batchsize, and it can achieve an optimal sample complexity without the assumption of data similarity or gradient boundedness. \n\n\\section{INTRODUCTION}\n\n$D$--dimensional simplicial quantum gravity is a discretization\nof Euclidean quantum gravity with the integration over\nspace-time metrics replaced by a sum over all possible\n$D$--dimensional triangulations constructed by gluing together\nequilateral simplexes. 
It is defined by the partition function\n\\begin{equation}\n Z(\\mu,\\kappa) \\;=\\;\n \\sum_{N_D} \\; e^{-{\\mu} N_D} \\;\n Z({\\kappa},N_D) \n\\end{equation}\n\\vspace{-12pt}\n\\[ \\nonumber\n \\hspace{37pt}\n = \\sum_{N_D,N_0} \\; e^{-{\\mu} N_D + {\\kappa} N_0}\n \\;W_D(N_0,N_D) \\;,\\nonumber\n\\]\n\\vspace{-13pt}\n\\begin{equation} \n W_D(N_0,N_D) \\;= \\sum_{T\\in{\\cal T}(N_0,N_D)} \\frac{1}{C_T}\\;,\n\\end{equation}\n\\vspace{-5pt}\nwhere $N_D$ = \\# $D$--simplexes, $N_0$ = \\# vertices,\nand $C_T$ is the {\\it symmetry} factor of a labeled\ntriangulation $T$ chosen from a suitable\nensemble $\\cal T$ ({\\it e.g.} combinatorial).\n$\\mu$ and $\\kappa$ are the discrete cosmological\nand Newton's coupling constants.\n\nIn $D = 3$ and 4 this model has {\\it two} phases:\n\\begin{itemize}\n \\vspace{-5pt}\n \\item{$\\kappa < \\kappa_c$} :\n an (intrinsically) {\\it crumpled} phase\n \\vspace{-5pt} \n \\item{$\\kappa > \\kappa_c$} :\n a {\\it branched polymer} phase\n\\end{itemize}\n\\vspace{-5pt}\nseparated (regrettably) by a {\\it discontinuous} phase transition\n\\cite{gen}.\n\nAs a discontinuous phase transition excludes a sensible\ncontinuum limit, there have been\nseveral attempts to modify the model Eq.~(1) in the hope of \nfinding a non-trivial phase structure.\nThis includes adding a {\\it measure} term \\cite{enso}\n\\vspace{-5pt}\n\\begin{equation}\n W_D(N_0,N_D,{\\beta}) \\;= \\sum_{T\\in\n {\\cal T}(N_0,N_D)} \\frac{1}{C_T} \\;\n {\\prod_{i=0}^{N_0} q_{i}^{{\\;\\beta}}},\n\\end{equation}\n\\vspace{-2pt}\nwhere $q_i$ is the order of the vertex $i$ (number of simplexes\ncontaining $i$),\nand coupling matter fields to the geometry \\cite{bilke}.\n\nSuch modifications do indeed lead to a more complicated \nphase diagram (Fig.~1), and a suitably modified \nmodel exhibits a new {\\it crinkled} phase.\nBut does this new phase \nstructure imply a more interesting non-trivial\ncritical behavior?\nTo investigate this we have studied the weak-coupling\nlimit, $\\kappa \\rightarrow \\infty$, of the model Eq.~(1)\nfor $D=3$.\n\\begin{figure}[t]\n \\centerline{\\includegraphics[width=2.7in,bb=57 270 573 590]{Fig1.bw.eps}}\n \\caption{A schematic phase diagram of simplicial gravity\n in 3 and 4 dimensions.}\n \\label{fig1}\n\\end{figure}\n\n\\section{THE EXTREMAL ENSEMBLE} \n\nIn the weak-coupling limit the partition function Eq.~(1),\nin $D=3$ and 4, is expected to be dominated by an \nExtremal Ensemble (EE) of triangulations.\nFor this ensemble, defined as triangulations with\nthe maximal ratio $N_0\/N_D$, the partition function simplifies:\n\\begin{equation}\n Z(\\mu) \\;\\;=\\; \\sum_{N_D} \\; {\\rm e}^{- \\mu N_D}\n \\;W_D(N_D)\n\\end{equation}\n\\vspace{-4pt}\n\\[ \n W_D(N_D) \\;= \\hspace{-10pt} \\sum_{T\\in\n {\\cal T}(N_0^{\\rm max},N_D)} \\;\\frac{1}{C_T}\n\\]\nwhere \\vspace{-2pt} \n\\begin{equation}\n N_0^{\\rm max} \\;=\\; \\left \\{ \\begin{array}{cc}\n \\left\\lfloor \\frac{\\textstyle N_3+10}{\\textstyle 3}\\right\\rfloor\n & D = 3\\;, \\\\ \\vspace{-5pt} \\\\\n \\left\\lfloor \\frac{\\textstyle N_4+18}{\\textstyle 4}\\right\\rfloor\n & D = 4\\;. 
\n \\end{array} \\right.\n\\end{equation}\nHere $\\lfloor x \\rfloor$ denotes the floor function ---\nthe biggest integer not greater than $x$.\nThis in turn defines several {\\it distinct} series\nfor the EE:\n\\[\n 3D \\; : \\;\\;\n S^0\\; \\bigl (N_0,\\;3 N_0\\!-\\!10\\bigr ),\n \\;\\; S^1\\;\\bigl (N_0,\\;3 N_0\\!-\\!9\\bigr ), \n\\]\n\\vspace{-14pt}\n\\[\n \\hspace{30pt} S^2\\;\\bigl (N_0,\\;3 N_0\\!-\\!8\\bigr ).\n\\]\n\\vspace{-12pt}\n\\[ 4D \\; : \\;\\;\n S^0\\;\\bigl (N_0,\\;4N_0\\!-\\!18\\bigr ),\n \\;\\;\\; S^1\\;\\bigl (N_0,\\;4 N_0\\!-\\!17\\bigr ).\n\\]\n\nAssuming the asymptotic behavior $W_D(N_D) \\sim\n\\exp (-\\mu_c N_D)\\;N_D^{\\gamma - 3}$, which defines the\nstring susceptibility exponent $\\gamma$,\nwe observe (from a SCE) that for the different series $S^k$,\n$\\gamma^k=k+\\frac{1}{2}$. This difference in the \nexponent $\\gamma$ can be understood from the fact that the 'higher' series\n($k = 1,2,\\ldots$) can be constructed by introducing\n$k$ ``defects'' (marked points) into triangulations\nbelonging to the minimal series $S^0$.\nMoreover, we observe that\nthe minimal series appears to have very small finite-size effects.\n\nThe minimal series $S^0$ can be explicitly {\\it enumerated}\nas it corresponds to $D$--dimensional \n{\\it combinatorial stacked spheres}\n(CSS), {\\it i.e.} to the surface of\na $(D+1)$--dimensional simplicial cluster.\nThe number of $(D+1)$--dimensional simplicial clusters built\nout of $n \\; (D+1)$ simplexes, rooted at a marked outer face,\nis given by (where $n=N_0-D-1$) \\cite{her}\n\\[\n e_{D+1,n} \\;=\\!\\!\\!\\!\\!\\!\n \\sum_{\\scriptscriptstyle \\begin{array}{c}\n n_1+\\cdots +n_{D+1}\\\\\n =n-1\n \\end{array}}\n \\!\\!\\!\\!\\!e_{D+1,n_1} \\cdots e_{D+1,n_{D+1}} \n\\]\n\\vspace{-6pt}\n\\begin{equation}\n \\hspace{35pt}=\\; \\frac{1}{nD +1 }\\;\\left (\n \\begin{array}{c}\n (D\\!+\\!1)\\; n \\\\\n n\n \\end{array} \\right )\n\\end{equation}\n\n\\begin{equation}\n \\Rightarrow \\;\\;\\ W_{D}(N_D) \\;= \\; \\frac{D+2}{N_D} \\;\n e_{D+1,\\frac{N_D-2}{D}}\\;.\n\\end{equation}\nExpanding this gives\n\\vspace{-5pt}\n{\\small\n\\[\n W_{3}(N_3) = \\frac{10}{\\sqrt{2\\pi} \\; N_3^{5\/2}}\n \\left (\\frac{256}{27}\\right)^{\\frac{N_3-2}{3}}\n \\left ( 1\n +\\frac{83}{48}\\frac{1}{N_3} \\right.\n \\cdots \n\\]\n\\vspace{-10pt}\n\\[\n W_{4}(N_4) = \\frac{6\\sqrt{5}}{\\sqrt{2\\pi}\\;N_4^{5\/2}}\n \\left (\\frac{3125}{256}\\right)^{\\frac{N_4-2}{4}}\n \\left ( 1+\\frac{33}{20}\\frac{1}{N_4} \\right.\n \\cdots \n\\]\n} \n\\noindent\nwith $\\gamma = 1\/2$ as expected for branched polymers.\n\n\\section{A MODIFIED MEASURE}\n\n\n\\begin{figure}[t]\n \\centerline{\\includegraphics[width=2.7in,bb=104 250 465 589]{Fig6.bw.eps}}\n \\caption{Evidence of a phase transition in the\n $3D$ EE, Eq.~(4), with a modified measure Eq.~(3):\n ({\\sc Top}) the fluctuations in the measure term, $C_V$, \n and ({\\sc Bottom}) in the maximal vertex order, $\\chi_{p_0}$.}\n \\label{fig2}\n\\end{figure}\n\nWe have investigated the $3D$ EE\nincluding a measure term, using both\nMC simulations and a SCE \\cite{us}.\nWe find a {\\it continuous} phase transition to a\ncrinkled phase at $\\beta \\approx -1$ (Fig.~2).\nThis is evident in the fluctuations both in\nthe measure term --- the ``specific heat'' $C_V$ ---\nand in the maximal vertex order $p_0$. 
\nScaling analysis of the peak value of \nthe specific heat gives: $C_V^{\\rm max} \\approx\na + b N_3^{-0.34(4)}$.\n \nTo explore the fractal properties of the geometry\nin the crinkled phase \nwe have measured the variations in $\\gamma$ \nwith $\\beta$ using several different methods (Fig.~3). \nAs in $D = 4$, we find that $\\gamma$\nbecomes negative at $\\beta_c$ and decreases \nwith $\\beta$. Similarly we find a spectral\ndimension that increases from $d_s = 4\/3$ \nfor $\\beta > \\beta_c$, to $d_s \\approx 2$ as \n$\\beta \\rightarrow \\infty$.\n \nEstimates of the intrinsic\nfractal dimension $d_H$ differ, on the other hand, \nsubstantially depending on\nhow it is defined --- on the direct graph\n(from a vertex-vertex distribution) or on \nthe dual graph (simplex-simplex distribution).\nThe former yields $d_H \\rightarrow \\infty$,\nthe latter $d_H \\approx 2$. \nIn addition, we observe that the crinkled phase\nappears dominated by a {\\it gas} of sub-singular vertices.\n\nCombined, this evidence suggests that the crinkled phase probably\ncorresponds to some kind of non-generic branched polymer phase, which\nmakes it unlikely that any sensible continuum limit exists in this\nphase. This of course does not exclude the possibility that a second\norder phase transition point exists somewhere on the phase boundary,\nfor example at the end of the first order transition line (Fig.~\\ref{fig1}). \n\\begin{figure}[t]\n \\centerline{\\includegraphics[width=2.7in,bb=122 300 479 547]{Fig9.bw.eps}}\n \\caption{Variations in $\\gamma$ with $\\beta$\n for the EE Eq.~(4) with a modified measure\n Eq.~(3).}\n \\label{fig3}\n\\end{figure}\n\n\\section{DEGENERATE TRIANGULATIONS}\n\nThe EE can also be defined with the ensemble\nof {\\it degenerate} triangulations \nintroduced in Ref.\\ \\cite{deg}.\nIn this case degenerate stacked spheres (DSS)\nare constructed by slicing open a face and inserting \na vertex. Different from CSS, Eq.~(5), DSS \nare defined by the maximal ratio:\n\\begin{equation} \n \\frac{N_0}{N_D} \\;=\\; \\frac{1}{2}+\\frac{D}{N_D}.\n\\end{equation}\nThis ensemble can also be enumerated explicitly.\n\nModifying the measure leads to an identical phase\nstructure to the one observed for CSS. This\nis shown in Fig.~4 where we plot the \nvariations in $\\gamma$ with $\\beta$ for the \ntwo ensembles. That the two, very different,\nensembles agree on the fractal \nstructure is reassuring and reflects the universal\nproperties of the crinkled phase.\n\\begin{figure}[t]\n \\centerline{\\includegraphics[width=2.7in,bb=122 300 479 547]{G_deg_bw.eps}}\n \\caption{\\label{fig4}Variations in $\\gamma$ with a modified\n measure, both for an ensemble of DSS and \n CSS.}\n\\end{figure}\n\n{\\bf Acknowledgments} P.~B. was supported by the Alexander von Humboldt\nFoundation. \n\n\\section{Discussion and Future Direction} \\label{sec:discussion}\n\nWe have presented a highly efficient algorithm, AccAltProj, for robust principal component analysis. The algorithm is developed by introducing a novel subspace projection step before the SVD truncation, which significantly reduces the per-iteration computational complexity of alternating projections. A theoretical recovery guarantee has been established for the new algorithm, while numerical simulations show that our algorithm is superior to other state-of-the-art algorithms. \n\nThere are three lines of research for future work. 
Firstly, the theoretical number of non-zero entries in a sparse matrix below which AccAltProj can achieve successful recovery is highly pessimistic compared with our numerical findings. This suggests the possibility of improving the theoretical result. Secondly, the recovery stability of the proposed algorithm to additive noise will be investigated in the future. Finally, this paper focuses on the fully observed setting. The proposed algorithm might be similarly extended to the partially observed setting where only a subset of the entries of a matrix are observed. It is also interesting to study the recovery guarantee of the proposed algorithm under this partially observed setting. \n\\section{Introduction}\nRobust principal component analysis (RPCA) appears in a wide range of applications, including video and voice background subtraction \\citep{li2004statistical,huang2012singing}, sparse graph clustering \\citep{chen2012clustering}, 3D reconstruction \\citep{mobahi2011holistic}, and fault isolation \\citep{tharrault2008fault}. Suppose we are given a sum of a low rank matrix and a sparse matrix, denoted $\\bm{D}=\\bm{L}+\\bm{S}$. The goal of RPCA is to reconstruct $\\bm{L}$ and $\\bm{S}$ simultaneously from $\\bm{D}$. As a concrete example, for foreground-background separation in video processing, $\\bm{L}$ represents the static background through all the frames of a video, which should be low rank, while $\\bm{S}$ represents the moving objects, which can be assumed to be sparse since typically they will not block a large portion of the screen for a long time.\n\nRPCA can be achieved by seeking a low rank matrix $\\bm{L'}$ and a sparse matrix $\\bm{S'}$ such that their sum fits the measurement matrix $\\bm{D}$ as well as possible:\n \\begin{equation} \\label{eq:non-convex model 2}\n\\min_{\\bm{L'},\\bm{S'}\\in\\mathbb{R}^{m\\times n}} \\|\\bm{D}-\\bm{L'}-\\bm{S'}\\|_F \\quad \\textnormal{ subject to } {rank}(\\bm{L'}) \\leq r \\textnormal{ and } \\|\\bm{S'}\\|_0 \\leq |\\Omega|,\n\\end{equation}\nwhere $r$ denotes the rank of the underlying low rank matrix, $\\Omega$ denotes the support set of the underlying sparse matrix, and $\\|\\bm{S'}\\|_0$ counts the number of non-zero entries in $\\bm{S'}$.\nCompared to traditional principal component analysis (PCA), which computes a low rank approximation of a data matrix, RPCA is less sensitive to outliers since it includes a sparse part in the formulation.\n\nSince the seminal works of \\citep{wright2009robust,candes2011robust,chandrasekaran2011rank}, RPCA has received intensive investigation from both theoretical and algorithmic aspects. 
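\nTo make the model \\eqref{eq:non-convex model 2} concrete, the following minimal NumPy snippet (illustrative only; all names and parameter choices are ours) generates a synthetic instance with a random rank-$r$ matrix $\\bm{L}$ and a sparse $\\bm{S}$ supported on a random set:\n\\begin{verbatim}\nimport numpy as np\n\ndef rpca_instance(n=500, r=5, alpha=0.05, seed=0):\n    rng = np.random.default_rng(seed)\n    # Random rank-r matrix L.\n    L = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))\n    # Sparse S with an (approximately) alpha-fraction Bernoulli support.\n    S = np.zeros((n, n))\n    mask = rng.random((n, n)) < alpha\n    S[mask] = rng.uniform(-10.0, 10.0, mask.sum())\n    return L + S, L, S   # observed D and the ground truth pair\n\\end{verbatim}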
Noticing that \n\\eqref{eq:non-convex model 2} is a non-convex problem, some of the earlier works focus on the following convex relaxation of RPCA: \n\\begin{equation} \\label{eq:convex model}\n\\min_{\\bm{L'},\\bm{S'}\\in\\mathbb{R}^{m\\times n}} \\|\\bm{L'}\\|_*+\\lambda\\|\\bm{S'}\\|_1 \\quad \\textnormal{ subject to } \\bm{L'} + \\bm{S'}=\\bm{D},\n\\end{equation}\nwhere $\\|\\cdot\\|_*$ is the nuclear norm ({\\em viz.} trace norm) of matrices, $\\lambda$ is the regularization parameter, and $\\|\\cdot\\|_1$ denotes the $\\ell_1$-norm of the vectors obtained by stacking the columns of associated matrices.\nUnder some mild conditions, it has been proven that the RPCA problem can be solved exactly by the aforementioned convex relaxation \\cite{candes2011robust,chandrasekaran2011rank}.\nHowever, a limitation of the convex relaxation based approach is that the resulting semidefinite programming is computationally rather expensive to solve, even for medium size matrices. Alternative to the convex relaxation, many non-convex algorithms have been designed to target \\eqref{eq:non-convex model 2} directly. This line of research will be reviewed in more detail in Section~\\ref{subsec:related work} after our approach has been introduced. \n\nThis paper targets the non-convex optimization for RPCA directly. The main contributions of this work are two-fold. Firstly, we propose a new algorithm, accelerated alternating projections (AccAltProj), for RPCA, which is substantially faster than other state-of-the-art algorithms. Secondly, exact recovery of accelerated alternating projections has been established for the fixed sparsity model, where we assume the ratio of the number of non-zero entries in each row and column of $\\bm{S}$ is less than a threshold. \n\n\\subsection{Assumptions}\nIt is clear that the RPCA problem is ill-posed without any additional conditions. Common assumptions are that $\\bm{L}$ cannot be too sparse and $\\bm{S}$ cannot be locally too dense, which are formalized in \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. \n\\paragraph*{A1}\\label{assume:Inco} \n\\textit{The underlying low rank matrix $\\bm{L}\\in \\mathbb{R}^{m\\times n}$ is a rank-$r$ matrix with ${\\mu}$-incoherence, that is\n\\begin{equation*}\n\\max_i \\|\\bm{e}_i^T \\bm{U}\\|_2\\leq \\sqrt{\\frac{\\mu r}{m}}, \\quad\\textnormal{and}\\quad \\max_j \\|\\bm{e}_j^T \\bm{V}\\|_2\\leq \\sqrt{\\frac{\\mu r}{n}}\n\\end{equation*}\nhold for a positive numerical constant $1\\leq\\mu\\leq\\frac{\\min\\{m,n\\}}{r}$, where $\\bm{L}=\\bm{U}\\bm{\\Sigma} \\bm{V}^T$ is the SVD of $\\bm{L}$. }\n\nAssumption \\nameref{assume:Inco} was first introduced in \\citep{candes2009exact} for low rank matrix completion, and now it is a very standard assumption for related low rank reconstruction problems. It basically states that the left and right singular vectors of $\\bm{L}$ are weakly correlated with the canonical basis, which implies $\\bm{L}$ cannot be a very sparse matrix.\n\n\\paragraph*{A2}\\label{assume:Sparse}\n\\textit{The underlying sparse matrix $\\bm{S}\\in \\mathbb{R}^{m\\times n}$ is $\\alpha$-sparse. That is, $\\bm{S}$ has at most $\\alpha n$ non-zero entries in each row, and at most $\\alpha m$ non-zero entries in each column. 
In other words, for all $1\\leq i \\leq m, 1\\leq j \\leq n$, \n\\begin{equation}\\label{eq:p_model}\n\\|\\bm{e}_i^T \\bm{S}\\|_0 \\leq \\alpha n\\quad\\mbox{and} \\quad \\|\\bm{S}\\bm{e}_j\\|_0 \\leq \\alpha m.\n\\end{equation}\nIn this paper, we assume\\footnote{The standard notation ``$\\lesssim$'' in \\eqref{eq:condition_on_p} means there exists an absolute numerical constant $C>0$ such that $\\alpha$ can be upper bounded by $C$ times the right hand side. }\n\\begin{equation}\\label{eq:condition_on_p}\n\\alpha\\lesssim\\min \\left\\{ \\frac{1}{\\mu r^2 \\kappa^3}, \\frac{1}{\\mu^{1.5} r^2\\kappa},\\frac{1}{\\mu^2r^2}\\right\\},\n\\end{equation}\nwhere $\\kappa$ is the condition number of $\\bm{L}$}.\n\n\nAssumption \\nameref{assume:Sparse} states that the non-zero entries of the sparse matrix $\\bm{S}$ cannot concentrate in a few rows or columns, so there does not exist a low rank component in $\\bm{S}$. If the indices of the support set $\\Omega$ are sampled independently from the Bernoulli distribution with the associated parameter being slightly smaller than $\\alpha$, by the Chernoff inequality, one can easily show that \\eqref{eq:p_model} holds with high probability. \n\n\\subsection{Organization and Notation of the Paper } \\label{subsec:notation}\nThe rest of the paper is organized as follows. In the remainder of this section, we introduce standard notation that is used throughout the paper. Section~\\ref{subsec:proposed algorithms} presents the proposed algorithm and discusses how to implement it efficiently. The theoretical recovery guarantee of the proposed algorithm is presented in Section~\\ref{subsec:guaranteed results}, followed by a review of prior art for RPCA. In Section~\\ref{sec:experience}, we present the numerical simulations of our algorithm. \nSection~\\ref{sec:proofs} contains all the mathematical proofs of our main theoretical result. We conclude this paper with future directions in Section~\\ref{sec:discussion}.\n\nIn this paper, vectors are denoted by bold lowercase letters (e.g., $\\bm{x}$), matrices are denoted by bold capital letters (e.g., $\\bm{X}$), and operators are denoted by calligraphic letters (e.g., $\\mathcal{H}$). In particular, $\\bm{e}_i$ denotes the $i^{th}$ canonical basis vector, $\\bm{I}$ denotes the identity matrix, and $\\mathcal{I}$ denotes the identity operator. For a vector $\\bm{x}$, $\\|\\bm{x}\\|_0$ counts the number of non-zero entries in $\\bm{x}$, and $\\|\\bm{x}\\|_2$ denotes the $\\ell_2$ norm of $\\bm{x}$. For a matrix $\\bm{X}$, $[\\bm{X}]_{ij}$ denotes its $(i,j)^{th}$ entry, $\\sigma_i(\\bm{X})$ denotes its $i^{th}$ singular value, $\\|\\bm{X}\\|_\\infty=\\max_{ij} |[\\bm{X}]_{ij} |$ denotes the maximum magnitude of its entries, $\\|\\bm{X}\\|_2=\\sigma_1(\\bm{X})$ denotes its spectral norm, $\\|\\bm{X}\\|_F=\\sqrt{\\sum_i \\sigma_i^2(\\bm{X})}$ denotes its Frobenius norm, and $\\|\\bm{X}\\|_*=\\sum_i \\sigma_i(\\bm{X})$ denotes its nuclear norm. The inner product of two real valued vectors is defined as $\\langle\\bm{x},\\bm{y}\\rangle=\\bm{x}^T\\bm{y}$, and the inner product of two real valued matrices is defined as $\\langle\\bm{X},\\bm{Y}\\rangle=Trace(\\bm{X}^T\\bm{Y})$, where $(\\cdot)^T$ represents the transpose of a vector or matrix.\n\nAdditionally, we sometimes use the shorthand $\\sigma_i^A$ to denote the $i^{th}$ singular value of a matrix $\\bm{A}$. 
Note that $\\kappa=\\sigma_{1}^L\/\\sigma_{r}^L$ always denotes the condition number of the underlying rank-$r$ matrix $\\bm{L}$, and $\\Omega=supp(\\bm{S})$ always refers to the support of the underlying sparse matrix $\\bm{S}$. At the $k^{th}$ iteration of the proposed algorithm, the estimates of the low rank matrix and the sparse matrix are denoted by $\\bm{L}_k$ and $\\bm{S}_k$, respectively. \n\n\n\\section{Algorithm and Theoretical Results} \\label{sec:algo and results}\nIn this section, we present the new algorithm and its recovery guarantee. For ease of exposition, we assume all matrices are square (i.e., $m=n$), but emphasize that nothing is special about this assumption and all the results can be easily extended to rectangular matrices.\n\\subsection{Proposed Algorithm} \\label{subsec:proposed algorithms}\nAlternating projections is a minimization approach that has been successfully used in many fields, including image processing \\citep{wang2008new,chan2000convergence,o2007alternating}, matrix completion \\citep{keshavan2012efficient,jain2013low,hardt2013provable,tannerwei2016asd}, phase retrieval \\citep{netrapalli2013phase,cai2017fast,zhang2017phase}, and many others \\citep{peters2009interference,agarwal2014learning,yu2016alternating,pu2017complexity}. \nA non-convex algorithm based on alternating projections, namely AltProj, is presented in \\citep{netrapalli2014non} for RPCA, accompanied by a theoretical recovery guarantee. In each iteration, AltProj first updates $\\bm{L}$ by projecting $\\bm{D}-\\bm{S}$ onto the space of rank-$r$ matrices, denoted $\\mathcal{M}_r$, and then updates $\\bm{S}$ by projecting $\\bm{D}-\\bm{L}$ onto the space of sparse matrices, denoted $\\mathcal{S}$; see the left plot of Figure~\\ref{fig:illustration} for an illustration. Regarding the implementation of AltProj, the projection of a matrix onto the space of low rank matrices can be computed by the singular value decomposition (SVD) followed by truncating out small singular values, while the projection of a matrix onto the space of sparse matrices can be computed by the hard thresholding operator.\nAs a non-convex algorithm which targets \\eqref{eq:non-convex model 2} directly, AltProj is computationally much more efficient than solving the convex relaxation problem \\eqref{eq:convex model} using semidefinite programming (SDP). However, when projecting $\\bm{D}-\\bm{S}$ onto the low rank matrix manifold, AltProj requires computing the SVD of a full size matrix, which is computationally expensive. Inspired by the work in \\citep{vandereycken2013low, wei2016guarantees_completion,wei2016guarantees_recovery}, we propose an accelerated algorithm for RPCA, coined accelerated alternating projections (AccAltProj), to circumvent the high computational cost of the SVD. The new algorithm is able to reduce the per-iteration computational cost of AltProj significantly, while a theoretical guarantee can be similarly established.\n\n\n\\begin{figure}[t]\n\\subfloat[Illustration of AltProj\\label{fig:AltProj}]\n {\\includegraphics[width=.50\\linewidth]{Illustration_AltProj}\n }\\hfill\n\\subfloat[Illustration of AccAltProj\\label{fig:AccAltProj}]\n {\\includegraphics[width=.50\\linewidth]{Illustration_AccAltProj}\n }\\hfill\n\\caption{Visual comparison between AltProj and AccAltProj, where $\\mathcal{M}_r$ denotes the manifold of rank-$r$ matrices and $\\mathcal{S}$ denotes the set of sparse matrices. 
The red dashed line in \\protect\\subref{fig:AccAltProj} represents the tangent space of $\\mathcal{M}_r$ at $\\bm{L}_k$. In fact, each circle represents a sum of a low rank matrix and a sparse matrix, but with the component on one circle fixed when projecting onto the other circle. For conciseness, the trim stage, i.e., $\\widetilde{\\bm{L}}_k$, is not included in the plot for AccAltProj.}\\label{fig:illustration}\n\\end{figure}\n\n\n\nOur algorithm consists of two phases: initialization and alternating projections onto $\\mathcal{M}_r$ and $\\mathcal{S}$. We begin our discussion with the second phase, which is described in Algorithm~\\ref{Algo:Algo1}. For a geometric comparison between AltProj and AccAltProj, see Figure~\\ref{fig:illustration}.\n\n\n\\begin{algorithm}[t]\n\\caption{Robust PCA by Accelerated Alternating Projections (AccAltProj)}\\label{Algo:Algo1}\n\\begin{algorithmic}[1]\n\\State \\textbf{Input:} $\\bm{D}=\\bm{L}+\\bm{S}$: matrix to be split; $r$: rank of $\\bm{L}$; $\\epsilon$: target precision level; $\\beta$: thresholding parameter; $\\gamma$: target convergence rate; $\\mu$: incoherence parameter of $\\bm{L}$.\n\\State \\textbf{Initialization}\n\\State $k=0$\n\\While{$\\|\\bm{D}-\\bm{L}_{k}-\\bm{S}_{k}\\|_F\/\\|\\bm{D}\\|_F \\geq \\epsilon$}\n\\State $\\widetilde{\\bm{L}}_{k}=\\textnormal{Trim}(\\bm{L}_{k},\\mu)$\n\\State $\\bm{L}_{k+1}=\\mathcal{H}_r(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k}))$\n\\State $\\zeta_{k+1}= \\beta\\left(\\sigma_{r+1}\\left(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k})\\right) + \\gamma^{k+1} \\sigma_{1}\\left(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k})\\right)\\right) $\n\\State $\\bm{S}_{k+1}=\\mathcal{T}_{\\zeta_{k+1}}(\\bm{D}-\\bm{L}_{k+1})$\n\\State $k=k+1$\n\\EndWhile{\\textbf{end while}}\n\\State \\textbf{Output:} $\\bm{L}_k$, $\\bm{S}_k$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n\\caption{Trim}\\label{Algo:Trim}\n\\begin{algorithmic}[1]\n\\State \\textbf{Input:} $\\bm{L}=\\bm{U}\\bm{\\Sigma} \\bm{V}^T$: matrix to be trimmed; $\\mu$: target incoherence level.\n\\State $c_{\\mu}=\\sqrt{\\frac{\\mu r}{n}}$\n\\For{$i=1$ to $m$}\n \\State $\\bm{A}^{(i)}=\\min\\{1,\\frac{c_{\\mu}}{\\|\\bm{U}^{(i)}\\|}\\}\\bm{U}^{(i)}$\n\\EndFor{\\textbf{end for}}\n\\For{$j=1$ to $n$}\n \\State $\\bm{B}^{(j)}=\\min\\{1,\\frac{c_{\\mu}}{\\|\\bm{V}^{(j)}\\|}\\}\\bm{V}^{(j)}$\n\\EndFor{\\textbf{end for}}\n\\State \\textbf{Output:} $\\widetilde{\\bm{L}}=\\bm{A}\\bm{\\Sigma} \\bm{B}^T$\n\\end{algorithmic}\n\\end{algorithm}\n\n\nLet $(\\bm{L}_k,\\bm{S}_k)$ be a pair of current estimates. 
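\nFor reference, the trim step of Algorithm~\\ref{Algo:Trim} admits a direct vectorized implementation. The following is a minimal NumPy sketch for the square case $m=n$ (illustrative only; the function name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef trim(U, V, mu):\n    # Scale down any row of U or V whose norm exceeds\n    # c_mu = sqrt(mu*r/n); Sigma is left unchanged.\n    n, r = U.shape\n    c = np.sqrt(mu * r / n)\n    u_norms = np.maximum(np.linalg.norm(U, axis=1), 1e-15)\n    v_norms = np.maximum(np.linalg.norm(V, axis=1), 1e-15)\n    A = U * np.minimum(1.0, c / u_norms)[:, None]\n    B = V * np.minimum(1.0, c / v_norms)[:, None]\n    return A, B\n\\end{verbatim}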
\nAt the $(k+1)^{th}$ iteration, AccAltProj first trims $\\bm{L}_k$ into an incoherent matrix $\\widetilde{\\bm{L}}_k$ using Algorithm~\\ref{Algo:Trim}.\nSince $\\widetilde{\\bm{L}}_k$ is still a rank-$r$ matrix, its left and right singular vectors define a $(2n-r)r$-dimensional subspace \\citep{vandereycken2013low}, \n\\begin{equation} \\label{eq:tangent space tilde k}\n\\widetilde{T}_k=\\{\\widetilde{\\bm{U}}_k\\bm{A}^T+\\bm{B}\\widetilde{\\bm{V}}_k^T ~|~\\bm{A},\\bm{B}\\in\\mathbb{R}^{n\\times r} \\},\n\\end{equation}\nwhere $\\widetilde{\\bm{L}}_k=\\widetilde{\\bm{U}}_k\\widetilde{\\bm{\\Sigma}}_k\\widetilde{\\bm{V}}_k^T$ is the SVD of $\\widetilde{\\bm{L}}_k$\\footnote{In practice, we only need the trimmed orthogonal matrices $\\widetilde{\\bm{U}}_k$ and $\\widetilde{\\bm{V}}_k$ for the projection $\\mathcal{P}_{\\widetilde{T}_k}$, and they can be computed efficiently via a QR decomposition. The entire matrix $\\widetilde{\\bm{L}}_k$ should never be formed in an efficient implementation of AccAltProj.}. Given a matrix $\\bm{Z}\\in\\mathbb{R}^{n\\times n}$, it can be easily verified that the projections of $\\bm{Z}$ onto the subspace $\\widetilde{T}_k$ and its orthogonal complement are given by\n\\begin{equation} \\label{eq:projection onto tangent space tilde k}\n\\mathcal{P}_{\\widetilde{T}_k} \\bm{Z}=\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T\\bm{Z}+\\bm{Z}\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T\\bm{Z}\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T\n\\end{equation}\nand\n\\begin{equation} \\label{eq:projection onto perpendicular space tilde k}\n(\\mathcal{I}-\\mathcal{P}_{\\widetilde{T}_k}) \\bm{Z}=(\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T)\\bm{Z}(\\bm{I}-\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T).\n\\end{equation}\n\nAs stated previously, AltProj truncates the SVD of $\\bm{D}-\\bm{S}_k$ directly to get a new estimate of $\\bm{L}$. In contrast, AccAltProj first projects $\\bm{D}-\\bm{S}_k$ onto the low dimensional subspace $\\widetilde{T}_k$, and then projects the intermediate matrix onto the rank-$r$ matrix manifold $\\mathcal{M}_r$ using the truncated SVD. That is, \n$$\\bm{L}_{k+1}=\\mathcal{H}_r(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k})),$$\nwhere $\\mathcal{H}_r$ computes the best rank-$r$ approximation of a matrix, \n\\begin{equation}\n\\mathcal{H}_r(\\bm{Z}):=\\bm{Q}\\bm{\\Lambda}_r\\bm{P}^T \\textnormal{ where }\\bm{Z}=\\bm{Q\\Lambda P}^T\\textnormal{ is its SVD and $[\\bm{\\Lambda}_r]_{ii}:=\\begin{cases} [\\bm{\\Lambda}]_{ii} & i\\leq r\\\\ 0& \\mbox{otherwise}. \\end{cases}$}\n\\end{equation}\nBefore proceeding, it is worth noting that the set of rank-$r$ matrices $\\mathcal{M}_r$ forms a smooth manifold of dimension $(2n-r)r$, and $\\widetilde{T}_k$ is indeed the tangent space of $\\mathcal{M}_r$ at $\\widetilde{\\bm{L}}_k$ \\citep{vandereycken2013low}. Matrix manifold algorithms based on the tangent spaces of low dimensional manifolds have been widely studied in the literature, see for example \\citep{ngo2012scaled,mishra2012riemannian,vandereycken2013low,mishra2014r3mc,mishra2014fixed,wei2016guarantees_completion,wei2016guarantees_recovery} and references therein. In particular, we invite readers to explore the book \\citep{absil2009optimization} for more details about the differential geometry ideas behind manifold algorithms. \n\n\n\n\n\nOne can see that an SVD is still needed to obtain the new estimate $\\bm{L}_{k+1}$. 
Nevertheless, it can be computed in a very efficient way \\citep{vandereycken2013low,wei2016guarantees_completion,wei2016guarantees_recovery}. Let $(\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T)(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k=\\bm{Q}_1\\bm{R}_1$ and $(\\bm{I}-\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T)(\\bm{D}-\\bm{S}_k)^T\\widetilde{\\bm{U}}_k=\\bm{Q}_2\\bm{R}_2$ be the QR decompositions of $(\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T)(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k$ and $(\\bm{I}-\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T)(\\bm{D}-\\bm{S}_k)^T\\widetilde{\\bm{U}}_k$, respectively. Note that these two matrices can be computed by one matrix-matrix subtraction between an $n\\times n$ matrix and an $n\\times n$ matrix, two matrix-matrix multiplications between an $n\\times n$ matrix and an $n\\times r$ matrix, and a few matrix-matrix multiplications between an $r\\times n$ matrix and an $n\\times r$ matrix or between an $n\\times r$ matrix and an $r\\times r$ matrix. Moreover, a little algebra gives\n\\begin{align*}\n\\mathcal{P}_{\\widetilde{T}_{k}} (\\bm{D}-\\bm{S}_k) &= \\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)+(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T \\cr\n\t\t\t&=\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)(\\bm{I}-\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T)+(\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T)(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T+\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T \\cr\n\t\t\t&=\\widetilde{\\bm{U}}_k\\bm{R}_2^T\\bm{Q}_2^T+\\bm{Q}_1\\bm{R}_1\\widetilde{\\bm{V}}_k^T+\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k\\widetilde{\\bm{V}}_k^T \\cr\n\t\t\t&= \\begin{bmatrix}\\widetilde{\\bm{U}}_k & \\bm{Q}_1\\end{bmatrix} \\begin{bmatrix} \\widetilde{\\bm{U}}_k^T(\\bm{D}-\\bm{S}_k)\\widetilde{\\bm{V}}_k & \\bm{R}_2^T \\\\ \\bm{R}_1 & \\bm{0} \\end{bmatrix} \\begin{bmatrix} \\widetilde{\\bm{V}}_k^T \\\\ \\bm{Q}_2^T \\end{bmatrix} \\cr\n\t\t\t&:= \\begin{bmatrix}\\widetilde{\\bm{U}}_k & \\bm{Q}_1\\end{bmatrix} \\bm{M}_k \\begin{bmatrix} \\widetilde{\\bm{V}}_k^T \\\\ \\bm{Q}_2^T \\end{bmatrix},\n\\end{align*}\nwhere the fourth line follows from the fact that $\\widetilde{\\bm{U}}_k^T\\bm{Q}_1=\\widetilde{\\bm{V}}_k^T\\bm{Q}_2=\\bm{0}$.\nLet $\\bm{M}_k = \\bm{U}_{M_k}\\bm{\\Sigma}_{M_k}\\bm{V}_{M_k}^T$ be the SVD of $\\bm{M}_k$, which can be computed using $O(r^3)$ flops since $\\bm{M}_k$ is a $2r\\times 2r$ matrix. 
Then the SVD of $\\mathcal{P}_{\\widetilde{T}_{k}} (\\bm{D}-\\bm{S}_k)=\\widetilde{\\bm{U}}_{k+1}\\widetilde{\\bm{\\Sigma}}_{k+1}\\widetilde{\\bm{V}}_{k+1}^T$ can be computed by\n\\begin{equation}\n\\widetilde{\\bm{U}}_{k+1}=\\begin{bmatrix}\\widetilde{\\bm{U}}_k & \\bm{Q}_1\\end{bmatrix}\\bm{U}_{M_k},\\quad \\widetilde{\\bm{\\Sigma}}_{k+1}=\\bm{\\Sigma}_{M_k},\\quad\\textnormal{and}\\quad \\widetilde{\\bm{V}}_{k+1}=\\begin{bmatrix}\\widetilde{\\bm{V}}_k & \\bm{Q}_2\\end{bmatrix}\\bm{V}_{M_k}\n\\end{equation}\nsince both the matrices $\\begin{bmatrix}\\widetilde{\\bm{U}}_k & \\bm{Q}_1\\end{bmatrix}$ and $\\begin{bmatrix}\\widetilde{\\bm{V}}_k & \\bm{Q}_2\\end{bmatrix}$ are orthogonal.\nIn summary, the overall computational cost of $\\mathcal{H}_r(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k}))$ lies in one matrix-matrix subtraction between an $n\\times n$ matrix and an $n\\times n$ matrix, two matrix-matrix multiplications between an $n\\times n$ matrix and an $n\\times r$ matrix, the QR decomposition of two $n\\times r$ matrices, an SVD of a $2r\\times 2r$ matrix, and a few matrix-matrix multiplications between an $r\\times n$ matrix and an $n\\times r$ matrix or between an $n\\times r$ matrix and an $r\\times r$ matrix, leading to a total of $4n^2r+n^2+O(nr^2+r^3)$ flops. Thus, the dominant per iteration computational complexity of AccAltProj for updating the estimate of $\\bm{L}$ is the same as that of the gradient descent based approach introduced in \\citep{yi2016fast}. In contrast, computing the best rank-$r$ approximation of a non-structured $n\\times n$ matrix $\\bm{D}-\\bm{S}_k$ typically costs $O(n^2r)+n^2$ flops with a large hidden constant in front of $n^2r$. \n\n\nAfter $\\bm{L}_{k+1}$ is obtained, following the approach in \\citep{netrapalli2014non}, we apply the hard thresholding operator to update the estimate of the sparse matrix, \n\\begin{equation*}\n\\bm{S}_{k+1}=\\mathcal{T}_{\\zeta_{k+1}}(\\bm{D}-\\bm{L}_{k+1}),\n\\end{equation*}\nwhere the thresholding operator $\\mathcal{T}_{\\zeta_{k+1}}$ is defined as\n\\begin{equation}\n[\\mathcal{T}_{\\zeta_{k+1}}\\bm{Z}]_{ij} =\n\\begin{cases}\n[\\bm{Z}]_{ij} & |[\\bm{Z}]_{ij}| >\\zeta_{k+1}\\\\\n0 & \\mbox{otherwise}\n\\end{cases}\n\\end{equation}\nfor any matrix $\\bm{Z}\\in\\mathbb{R}^{m\\times n}$. Notice that the thresholding value of $\\zeta_{k+1}$ in Algorithm~\\ref{Algo:Algo1} is chosen as\n$$\\zeta_{k+1}= \\beta\\left(\\sigma_{r+1}\\left(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k})\\right) + \\gamma^{k+1} \\sigma_{1}\\left(\\mathcal{P}_{\\widetilde{T}_{k}}(\\bm{D}-\\bm{S}_{k})\\right)\\right) ,$$\nwhich relies on a tuning parameter $\\beta>0$, a convergence rate parameter $0\\leq\\gamma<1$, and the singular values of $\\mathcal{P}_{\\widetilde{T}_{k}} (\\bm{D}-\\bm{S}_k)$. Since we have already obtained all the singular values of $\\mathcal{P}_{\\widetilde{T}_{k}} (\\bm{D}-\\bm{S}_k)$ when computing $\\bm{L}_{k+1}$, the extra cost of computing $\\zeta_{k+1}$ is very marginal. Therefore, the cost of updating the estimate of $\\bm{S}$ is very low and insensitive to the sparsity of $\\bm{S}$.\n\n\n\n\nIn this paper, a good initialization is achieved by two steps of modified AltProj with the input rank set to $r$; see Algorithm~\\ref{Algo:Init1}. With this initialization scheme, we can construct an initial guess that is sufficiently close to the ground truth and is inside the ``basin of attraction'' as detailed in the next subsection. 
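\nPutting the pieces of this subsection together, one iteration of Algorithm~\\ref{Algo:Algo1} can be sketched as follows. This is a minimal NumPy illustration under the notation above (square $n\\times n$ inputs, with the trimmed factors already computed); it is a sketch under these assumptions, not an optimized implementation:\n\\begin{verbatim}\nimport numpy as np\n\ndef accaltproj_iter(D, S, U, V, r, beta, gamma, k):\n    # U, V: n x r trimmed factors spanning the tangent space.\n    Z = D - S\n    ZV, ZtU = Z @ V, Z.T @ U\n    Q1, R1 = np.linalg.qr(ZV - U @ (U.T @ ZV))    # (I - UU^T)ZV\n    Q2, R2 = np.linalg.qr(ZtU - V @ (V.T @ ZtU))  # (I - VV^T)Z^T U\n    M = np.block([[U.T @ ZV, R2.T], [R1, np.zeros((r, r))]])\n    Um, s, Vmh = np.linalg.svd(M)   # SVD of the small 2r x 2r M_k\n    U1 = np.hstack([U, Q1]) @ Um[:, :r]\n    V1 = np.hstack([V, Q2]) @ Vmh.T[:, :r]\n    L1 = (U1 * s[:r]) @ V1.T        # L_{k+1} = H_r(P_T(D - S_k))\n    zeta = beta * (s[r] + gamma ** (k + 1) * s[0])\n    S1 = np.where(np.abs(D - L1) > zeta, D - L1, 0.0)\n    return L1, S1, U1, V1\n\\end{verbatim}\nHere the trim step and the stopping test are omitted; they follow Algorithm~\\ref{Algo:Trim} and the relative error criterion in Algorithm~\\ref{Algo:Algo1}, respectively.\n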
Note that the thresholding parameter $\\beta_{init}$ used in Algorithm~\\ref{Algo:Init1} is different from that in Algorithm~\\ref{Algo:Algo1}.\n\n\\begin{algorithm}[htp]\n\\caption{Initialization by Two Steps of AltProj}\\label{Algo:Init1}\n\\begin{algorithmic}[1]\n\\State \\textbf{Input:} $\\bm{D}=\\bm{L}+\\bm{S}$: matrix to be split; $r$: rank of $\\bm{L}$; $\\beta_{init}, \\beta$: thresholding parameters.\n\\State $\\bm{L}_{-1}=\\bm{0}$\n\\State $\\zeta_{-1} = \\beta_{init} \\cdot \\sigma_1^D$\n\\State $\\bm{S}_{-1}=\\mathcal{T}_{\\zeta_{-1}}(\\bm{D}-\\bm{L}_{-1})$\n\\State $\\bm{L}_0=\\mathcal{H}_r(\\bm{D}-\\bm{S}_{-1})$\n\\State $\\zeta_{0} = \\beta \\cdot \\sigma_{1}(\\bm{D}-\\bm{S}_{-1})$\n\\State $\\bm{S}_0=\\mathcal{T}_{\\zeta_{0}}(\\bm{D}-\\bm{L}_{0})$\n\\State \\textbf{Output:} $\\bm{L}_0$, $\\bm{S}_0$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\n\n\\subsection{Theoretical Guarantee} \\label{subsec:guaranteed results}\n\nIn this subsection, we present the theoretical recovery guarantee of AccAltProj (Algorithm~\\ref{Algo:Algo1} together with Algorithm~\\ref{Algo:Init1}). \nThe following theorem establishes the local convergence of AccAltProj.\n\n\\begin{theorem}[Local Convergence of AccAltProj] \\label{thm:local convergence}\n Let $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}. If the initial guesses $\\bm{L}_0$ and $\\bm{S}_0$ obey the following conditions:\n\\[\n\\|\\bm{L}-\\bm{L}_0\\|_2 \\leq 8\\alpha\\mu r \\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_0\\|_\\infty \\leq \\frac{\\mu r}{n} \\sigma_1^L,\\quad \\textnormal{and} \\quad\nsupp(\\bm{S}_0)\\subset \\Omega,\n\\]\nthen\nthe iterates of Algorithm \\ref{Algo:Algo1} with parameters $\\beta =\\frac{\\mu r}{2n}$ and $\\gamma\\in\\left(\\frac{1}{\\sqrt{12}},1\\right)$ satisfy \n\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha\\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\quad \\textnormal{and} \\quad\nsupp(\\bm{S}_k)\\subset \\Omega.\n\\]\n\\end{theorem}\n\n\n\n\nThe next theorem states that the initial guesses obtained from Algorithm \\ref{Algo:Init1} fulfill the conditions required in Theorem~\\ref{thm:local convergence}.\n\n\\begin{theorem}[Guaranteed Initialization]\\label{thm:initialization bound} Let $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. If the thresholding parameters obey $\\frac{\\mu r\\sigma_1^L}{n\\sigma_1^D}\\leq\\beta_{init}\\leq\\frac{3\\mu r\\sigma_1^L}{n\\sigma_1^D}$ and $\\beta=\\frac{{\\mu r}}{2n}$, then the outputs of Algorithm \\ref{Algo:Init1} satisfy\n\\[\n\\|\\bm{L}-\\bm{L}_0\\|_2 \\leq 8\\alpha\\mu r \\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_0\\|_\\infty \\leq \\frac{\\mu r}{n} \\sigma_1^L,\\quad \\textnormal{and} \\quad\nsupp(\\bm{S}_0)\\subset \\Omega.\n\\]\n\\end{theorem}\n\nThe proofs of Theorems \\ref{thm:local convergence} and \\ref{thm:initialization bound} are presented in Section~\\ref{sec:proofs}. The convergence of AccAltProj follows immediately by combining the above two theorems. \n\n\nFor conciseness, the main theorems are stated for symmetric matrices. 
\nHowever, similar results can be established for nonsymmetric matrix recovery problems as they can be cast as problems with respect to symmetric augmented matrices, as suggested in \\citep{netrapalli2014non}.\nWithout loss of generality, assume $dm\\leq n < (d+1)m$ for some $d\\geq 1$ and construct $\\overline{\\bm{L}}$ and $\\overline{\\bm{S}}$ as\n\\begin{equation*} \\setstretch{1.5}\n\\overline{\\bm{L}}:=\n\\begin{bmatrix}\\,\n\\smash{\n\\underbrace{\n\\begin{matrix}\n\\bm{0} &\\cdots & \\bm{0} \\\\\n\\vdots &\\ddots &\\vdots \\\\\n\\bm{0} &\\cdots &\\bm{0} \\\\\n\\bm{L}^T &\\cdots &\\bm{L}^T \n\\end{matrix}}_{d \\textnormal{ times}} }\n\\begin{matrix}\n~&\\bm{L}\\\\\n~&\\vdots\\\\\n~&\\bm{L}\\\\\n~&\\bm{0}\n\\end{matrix}\n\\vphantom{\n \\begin{matrix}\n \\smash[b]{\\vphantom{\\Big|}}\n 0\\\\\\vdots\\\\0\\\\0\n \\smash[t]{\\vphantom{\\Big|}}\n \\end{matrix}\n}\n\\,\\,\\end{bmatrix}\n\\begin{matrix}\n\\left.\n\\vphantom{\\begin{matrix}\\bm{L}\\\\\\vdots\\\\\\bm{L}\\end{matrix}}\n\\right\\rbrace{\\scriptstyle {d \\textnormal{ times}}}\\\\~\\\\\n\\end{matrix}, \\qquad\n\\overline{\\bm{S}}:=\n\\begin{bmatrix}\\,\n\\smash{\n\\underbrace{\n\\begin{matrix}\n\\bm{0} &\\cdots & \\bm{0} \\\\\n\\vdots &\\ddots &\\vdots \\\\\n\\bm{0} &\\cdots &\\bm{0} \\\\\n\\bm{S}^T &\\cdots &\\bm{S}^T \n\\end{matrix}}_{d \\textnormal{ times}} }\n\\begin{matrix}\n~&\\bm{S}\\\\\n~&\\vdots\\\\\n~&\\bm{S}\\\\\n~&\\bm{0}\n\\end{matrix}\n\\vphantom{\n \\begin{matrix}\n \\smash[b]{\\vphantom{\\Big|}}\n 0\\\\\\vdots\\\\0\\\\0\n \\smash[t]{\\vphantom{\\Big|}}\n \\end{matrix}\n}\n\\,\\,\\end{bmatrix}\n\\begin{matrix}\n\\left.\n\\vphantom{\\begin{matrix}\\bm{S}\\\\\\vdots\\\\\\bm{S}\\end{matrix}}\n\\right\\rbrace{\\scriptstyle {d \\textnormal{ times}}}\\\\~\\\\\n\\end{matrix}.\n\\end{equation*}\n\\\\\\\\\nThen it is not hard to see that $\\overline{\\bm{L}}$ is $O(\\mu)$-incoherent, and $\\overline{\\bm{S}}$ is $O(\\alpha)$-sparse, with the hidden constants being independent of $d$. 
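\n\nFor concreteness, a short Python\/NumPy sketch of this symmetric augmentation is given below; the helper name \\texttt{augment} is ours, and $\\bm{M}$ stands for either $\\bm{L}$ or $\\bm{S}$.\n\\begin{verbatim}\nimport numpy as np\n\ndef augment(M, d):\n    # Symmetric embedding of an m x n matrix M (with dm <= n < (d+1)m):\n    # d stacked copies of M in the upper-right block and the matching\n    # row of transposed copies in the lower-left block.\n    m, n = M.shape\n    top = np.hstack([np.zeros((d * m, d * m)), np.tile(M, (d, 1))])\n    bottom = np.hstack([np.tile(M.T, (1, d)), np.zeros((n, n))])\n    return np.vstack([top, bottom])\n\n# D_bar = augment(L, d) + augment(S, d) is a symmetric input for AccAltProj.\n\\end{verbatim}\n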
Moreover, based on the connection between the SVD of the augmented matrix and that of the original one, it can be easily verified that at the $k^{th}$ iteration the estimates returned by AccAltProj with input $\\overline{\\bm{D}}=\\overline{\\bm{L}}+\\overline{\\bm{S}}$ have the form \\begin{equation*} \\setstretch{1.5}\n\\overline{\\bm{L}}_k=\n\\begin{bmatrix}\\,\n\\smash{\n\\underbrace{\n\\begin{matrix}\n\\bm{0} &\\cdots & \\bm{0} \\\\\n\\vdots &\\ddots &\\vdots \\\\\n\\bm{0} &\\cdots &\\bm{0} \\\\\n\\bm{L}_k^T &\\cdots &\\bm{L}_k^T \n\\end{matrix}}_{d \\textnormal{ times}} }\n\\begin{matrix}\n~&\\bm{L}_k\\\\\n~&\\vdots\\\\\n~&\\bm{L}_k\\\\\n~&\\bm{0}\n\\end{matrix}\n\\vphantom{\n \\begin{matrix}\n \\smash[b]{\\vphantom{\\Big|}}\n 0\\\\\\vdots\\\\0\\\\0\n \\smash[t]{\\vphantom{\\Big|}}\n \\end{matrix}\n}\n\\,\\,\\end{bmatrix}\n\\begin{matrix}\n\\left.\n\\vphantom{\\begin{matrix}\\bm{L}_k\\\\\\vdots\\\\\\bm{L}_k\\end{matrix}}\n\\right\\rbrace{\\scriptstyle {d \\textnormal{ times}}}\\\\~\\\\\n\\end{matrix}, \\qquad\n\\overline{\\bm{S}}_k=\n\\begin{bmatrix}\\,\n\\smash{\n\\underbrace{\n\\begin{matrix}\n\\bm{0} &\\cdots & \\bm{0} \\\\\n\\vdots &\\ddots &\\vdots \\\\\n\\bm{0} &\\cdots &\\bm{0} \\\\\n\\bm{S}_k^T &\\cdots &\\bm{S}_k^T \n\\end{matrix}}_{d \\textnormal{ times}} }\n\\begin{matrix}\n~&\\bm{S}_k\\\\\n~&\\vdots\\\\\n~&\\bm{S}_k\\\\\n~&\\bm{0}\n\\end{matrix}\n\\vphantom{\n \\begin{matrix}\n \\smash[b]{\\vphantom{\\Big|}}\n 0\\\\\\vdots\\\\0\\\\0\n \\smash[t]{\\vphantom{\\Big|}}\n \\end{matrix}\n}\n\\,\\,\\end{bmatrix}\n\\begin{matrix}\n\\left.\n\\vphantom{\\begin{matrix}\\bm{S}_k\\\\\\vdots\\\\\\bm{S}_k\\end{matrix}}\n\\right\\rbrace{\\scriptstyle {d \\textnormal{ times}}}\\\\~\\\\\n\\end{matrix},\n\\end{equation*}\n\\\\\nwhere $\\bm{L}_k,\\bm{S}_k$ are the $k^{th}$ estimates returned by AccAltProj with input $\\bm{D}=\\bm{L}+\\bm{S}$.\n\n\n\n\\subsection{Related Work} \\label{subsec:related work}\nAs mentioned earlier, convex relaxation based methods for RPCA have high computational complexity and a slow convergence rate, which makes them impractical for high dimensional problems.\nIn fact, the convergence rate of the algorithm for computing the solution to the SDP formulation of RPCA \\citep{candes2011robust,chandrasekaran2011rank,xu2010robust} is sub-linear, with a per iteration \ncomputational complexity of $O(n^3)$. By contrast, AccAltProj only requires $O(\\log(1\/\\epsilon))$ iterations to achieve an accuracy of $\\epsilon$, and the dominant per iteration computational cost is $O(rn^2)$.\n\nMany other algorithms have been designed to solve the non-convex RPCA problem directly. \nIn \\citep{WSLer2013}, an alternating minimization algorithm was proposed for \\eqref{eq:non-convex model 2} based on the factorization model of \nlow rank matrices. However, only convergence to fixed points was established there. \nIn \\citep{gu2016low}, the authors developed an alternating minimization algorithm for RPCA, which requires the sparsity level to satisfy $\\alpha = O(1\/(\\mu^{2\/3}r^{2\/3}n))$ for successful recovery, a condition more stringent than ours when $r\\ll n$. In \\cite{chen2015fast}, a projected gradient descent algorithm was proposed for the special case of positive semidefinite matrices; it relies on knowledge of the $\\ell_1$-norm of each row of the underlying sparse matrix, which is not very practical. 
\n\nIn Table~\\ref{tab:algo compare}, we compare AccAltProj with the other two competitive non-convex algorithms for RPCA: AltProj from \\citep{netrapalli2014non} and non-convex gradient descent (GD) from \\citep{yi2016fast}. GD attempts to reconstruct the low rank matrix by minimizing an objective function that incorporates prior knowledge of the sparse matrix. The table displays the computational complexity of each algorithm for updating the estimates of the low rank matrix and the sparse matrix, as well as the convergence rate and the theoretical tolerance for the number of non-zero entries in the sparse matrix.\n\nFrom the table, we can see that AccAltProj achieves the same linear convergence rate as AltProj, which is faster than GD. Moreover, AccAltProj has the lowest per iteration computational complexity for updating both the estimate of $\\bm{L}$ and that of $\\bm{S}$ (tying with GD for the low rank part).\nIt is worth emphasizing that the acceleration stage in AccAltProj, which first projects $\\bm{D}-\\bm{S}_k$ onto a low dimensional subspace, dramatically reduces the computational cost of the SVD in AltProj. \nOverall, AccAltProj is substantially faster than AltProj and GD, as confirmed by our numerical simulations in the next section. The table also shows that the theoretical sparsity level that can be tolerated by AccAltProj is lower than that of GD and AltProj. Our result loses an order in $r$ because we have replaced the spectral norm by the Frobenius norm when considering the reduction of the reconstruction error in terms of the spectral norm. In addition, the condition number of the target matrix appears in the theoretical result because the current version of AccAltProj deals with the fixed rank case, which requires the initial guess to be sufficiently close to the target matrix for the theoretical analysis.\nNevertheless, we note that the sufficient condition on $\\alpha$ that guarantees the exact recovery of AccAltProj is highly pessimistic when compared with the empirical performance. Numerical investigations in the next section show that AccAltProj can tolerate $\\alpha$ as large as AltProj does under different energy levels. 
\n\n\\begin{table}[htp]\n\\centering\n\\caption{Comparison of AccAltProj, AltProj and GD.} \\label{tab:algo compare}\n\\makegapedcells\n\\setcellgapes{3pt}\n\\begin{tabular}{ |c||c|c|c| }\n\\hline\nAlgorithm & AccAltProj & AltProj & GD \\cr\n\\hhline{|=||=|=|=|}\nUpdating $\\bm{S}$ & $\\bm{O\\left(n^2\\right)}$ & $O\\left(rn^2\\right)$& $O\\left(n^2+\\alpha n^2\\log(\\alpha n)\\right)$ \\cr\nUpdating $\\bm{L}$ & $\\bm{O\\left(rn^2\\right)}$ & $O\\left(r^2n^2\\right)$ & $\\bm{O\\left(rn^2\\right)}$ \\cr\nTolerance of $\\alpha$ & $O\\left(\\frac{1}{\\max\\{\\mu r^2 \\kappa^3,\\mu^{1.5} r^2\\kappa,\\mu^2r^2\\}}\\right)$& $\\bm{O\\left(\\frac{1}{\\mu r}\\right)}$& $O\\left(\\frac{1}{\\max\\{\\mu r^{1.5}\\kappa^{1.5},\\mu r\\kappa^2\\}}\\right)$ \\cr\nIterations needed &\n$\\bm{O\\left(\\log(\\frac{1}{\\epsilon})\\right)}$ & $\\bm{O\\left(\\log(\\frac{1}{\\epsilon})\\right)}$ &\n$O\\left(\\kappa\\log(\\frac{1}{\\epsilon})\\right)$ \\cr\n\\hline\n\\end{tabular}\n\\end{table}\n\n\n\n\n\n\n\\section{Numerical Experiments} \\label{sec:experience}\n\nIn this section, we present the empirical performance of our AccAltProj algorithm and compare it with the state-of-the-art AltProj algorithm from \\citep{netrapalli2014non} and the leading \ngradient descent based algorithm (GD) from \\citep{yi2016fast}.\nThe tests are conducted on a laptop equipped with 64-bit Windows 7, Intel i7-4712HQ (4 Cores at 2.3 GHz) and 16GB DDR3L-1600 RAM, and executed in MATLAB R2017a. We implemented AltProj ourselves, while the code for GD was downloaded from the authors' website\\footnote{Website: \\url{www.yixinyang.org\/code\/RPCA_GD.zip}.}. Hand-tuned parameters \nare used for these algorithms to achieve the best performance in the numerical comparison. The code for AccAltProj can be found online:\n\\begin{center}\n\\url{https:\/\/github.com\/caesarcai\/AccAltProj_for_RPCA}.\n\\end{center}\n\nNotice that the computation of an initial guess by Algorithm~\\ref{Algo:Init1} requires a truncated SVD of a full size matrix. As is typical in the literature, we used the PROPACK library\\footnote{Website: \\url{sun.stanford.edu\/~rmunk\/PROPACK}.} for this task when the size of $\\bm{D}$ is large and $r$ is relatively small. \nTo reduce the dependence of its theoretical result on the condition number of the underlying low rank matrix, AltProj was originally designed to proceed in $r$ stages, with the input rank increasing from $1$ to $r$ and each stage containing a few iterations at a fixed rank.\nHowever, when the condition number is moderately large, as is the case in our tests, we have observed that AltProj achieves the best computational efficiency when the rank is fixed to $r$. Thus, to make a fair comparison, we test AltProj with the input rank fixed, the same as for the other two algorithms. \n\n\\paragraph*{Synthetic Datasets} We follow the setup in \\citep{netrapalli2014non} and \\citep{yi2016fast} for the random tests on synthetic data. An $n\\times n$ rank $r$ matrix $\\bm{L}$ is formed via $\\bm{L}=\\bm{P}\\bm{Q}^T$, where $\\bm{P},\\bm{Q}\\in\\mathbb{R}^{n\\times r}$ are two random matrices having their entries drawn i.i.d.\\ from the standard normal distribution. The locations of the non-zero entries of the underlying sparse matrix $\\bm{S}$ are sampled uniformly and independently without replacement, while the values of the non-zero entries are drawn i.i.d.\\ from the uniform distribution over the interval $[-c\\cdot\\mathbb{E}(|[\\bm{L}]_{ij}|),c\\cdot\\mathbb{E}(|[\\bm{L}]_{ij}|)]$ for some constant $c>0$.
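\n\nFor reference, the following is a short Python\/NumPy sketch of this synthetic data model; the function name is ours, and the empirical mean of $|[\\bm{L}]_{ij}|$ is used as a proxy for $\\mathbb{E}(|[\\bm{L}]_{ij}|)$.\n\\begin{verbatim}\nimport numpy as np\n\ndef synthetic_rpca_instance(n, r, alpha, c, seed=0):\n    rng = np.random.default_rng(seed)\n    # Low rank part: L = P Q^T with i.i.d. standard normal factors\n    P = rng.standard_normal((n, r))\n    Q = rng.standard_normal((n, r))\n    L = P @ Q.T\n    # Sparse part: alpha*n^2 support locations sampled uniformly without\n    # replacement; magnitudes uniform on [-c*E|L_ij|, c*E|L_ij|]\n    idx = rng.choice(n * n, size=int(alpha * n * n), replace=False)\n    scale = c * np.mean(np.abs(L))   # empirical proxy for E|[L]_ij|\n    S = np.zeros(n * n)\n    S[idx] = rng.uniform(-scale, scale, size=idx.size)\n    return L, S.reshape(n, n)\n\\end{verbatim}\n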
The relative computing error at the $k^{th}$ iteration of a single test is defined as\n\\begin{equation} \\label{relative_err}\nerr_k=\\frac{\\|\\bm{D}-\\bm{L}_k-\\bm{S}_k\\|_F}{\\|\\bm{D}\\|_F} .\n\\end{equation}\nThe test algorithms are terminated when either the relative computing error is smaller than a given tolerance, i.e., $err_k < tol$, or a maximum number of iterations is reached.\n\n\n\n\\section{Proofs} \\label{sec:proofs}\n\n\\subsection{Proof of Theorem \\ref{thm:local convergence}}\nWe begin with two bounds on the error term $(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)$ which will be used repeatedly in this subsection.\n\n\\begin{lemma} \\label{lemma:norm_of_Z}\nLet $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. Let $\\widetilde{\\bm{L}}_k\\in\\mathbb{R}^{n\\times n}$ be the trim output of $\\bm{L}_k$. If\n\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\textnormal{\\ and }\nsupp(\\bm{S}_k)\\subset \\Omega,\n\\]\nthen\n\\begin{equation} \\label{eq:norm_of_Z 1}\n\\|(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_2 \\leq \\tau\\gamma^{k+1}\\sigma_r^L\n\\end{equation}\nand\n\\begin{equation} \\label{eq:norm_of_Z 2}\n\\max_l \\sqrt{n}\\|\\bm{e}_l^T[(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)]\\|_2 \\leq \\upsilon\\gamma^{k}\\sigma_r^L\n\\end{equation}\nhold for all $k\\geq 0$, provided $1>\\gamma\\geq512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}}$. Here recall that $\\tau=4\\alpha \\mu r\\kappa$ and $\\upsilon=\\tau(48\\sqrt{\\mu}r\\kappa+\\mu r)$.\n\\end{lemma}\n\\begin{proof}\nFor all $k\\geq 0$, we get\n\\begin{align*}\n\\|(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_2 &\\leq \\|(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}\\|_2+ \\| \\mathcal{P}_{\\widetilde{T}_k} (\\bm{S}-\\bm{S}_k)\\|_2\\cr\n &\\leq \\frac{\\|\\bm{L}-\\widetilde{\\bm{L}}_k\\|_2^2}{\\sigma_r^L} + \\sqrt{\\frac{4}{3}}\\|\\bm{S}-\\bm{S}_k\\|_2 \\cr\n &\\leq \\frac{(8\\sqrt{2r}\\kappa)^2\\|\\bm{L}-\\bm{L}_k\\|_2^2}{\\sigma_r^L} + \\sqrt{\\frac{4}{3}}\\alpha n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\KW{\\alpha \\lesssim\\frac{1}{\\mu r^{3\/2}\\kappa}}\\cr\n &\\leq 128\\cdot 8\\alpha \\mu r^2 \\kappa^3\\|\\bm{L}-\\bm{L}_k\\|_2+ \\sqrt{\\frac{4}{3}}\\alpha n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\cr\n &\\leq \\left( 512 \\tau r \\kappa^2 + \\frac{1}{4}\\sqrt{\\frac{4}{3}}\\right) 4\\alpha \\mu r\\gamma^k\\sigma_1^L \\cr\n &\\leq 4\\alpha \\mu r\\gamma^{k+1}\\sigma_1^L \\KW{\\alpha \\lesssim\\frac{1}{\\mu r^2\\kappa^3}} \\cr\n &= \\tau\\gamma^{k+1}\\sigma_r^L,\n\\end{align*}\nwhere the second inequality uses Lemmas \\ref{lemma: secound_order_bound_I-P} and \\ref{lemma:P_T X 2-norm ineq}, the third inequality uses Lemmas \\ref{lemma:bound of sparse matrix} and \\ref{lemma:trimmed bound}, the fourth inequality follows from $\\frac{\\|\\bm{L}-\\bm{L}_k\\|_2}{\\sigma_r^L}\\leq 8\\alpha \\mu r \\kappa$, and the last inequality uses the bound of $\\gamma$.\n\n\nTo compute the bound of $\\max_l \\sqrt{n}\\|\\bm{e}_l^T[(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)]\\|_2$, first note that\n\\begin{align*}\n\\max_l \\|\\bm{e}_l^T(\\mathcal{I}-\\mathcal{P}_{\\widetilde{T}_k})\\bm{L}\\|_2 &= \\max_l \\|\\bm{e}_l^T(\\bm{U}\\bm{U}^T-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}^T_k)(\\bm{L}-\\widetilde{\\bm{L}}_k)(\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}^T_k)\\|_2 \\cr\n &\\leq \\max_l \\|\\bm{e}_l^T(\\bm{U}\\bm{U}^T-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}^T_k)\\|_2\\|\\bm{L}-\\widetilde{\\bm{L}}_k\\|_2\\|\\bm{I}-\\widetilde{\\bm{U}}_k\\widetilde{\\bm{U}}^T_k\\|_2 \\cr\n &\\leq \\left(\\frac{19}{9}\\sqrt{\\frac{\\mu r}{n}}\\right)\\|\\bm{L}-\\widetilde{\\bm{L}}_k\\|_2,\n\\end{align*}\nwhere the last inequality follows from the fact that $\\bm{L}$ is $\\mu$-incoherent and $\\widetilde{\\bm{L}}_k$ is $\\frac{100}{81}\\mu$-incoherent.\nHence, for all $k\\geq 0$, we 
have\n\\begin{align*}\n\\max_l\\sqrt{n}\\|\\bm{e}_l^T((\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k))\\|_2 &\\leq \\max_l\\sqrt{n}\\|\\bm{e}_l^T(\\mathcal{I}-\\mathcal{P}_{\\widetilde{T}_k})\\bm{L}\\|_2+\\sqrt{n}\\|\\bm{e}_l^T\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_2 \\cr\n&\\leq \\frac{19\\sqrt{n}}{9}\\sqrt{\\frac{\\mu r}{n}}\\|\\bm{L}-\\widetilde{\\bm{L}}_k\\|_2 + n\\|\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_\\infty\\cr\n&\\leq \\frac{19}{9}8\\sqrt{2\\mu}r\\kappa\\|\\bm{L}-\\bm{L}_k\\|_2 + 4n\\alpha \\mu r\\|\\bm{S}-\\bm{S}_k\\|_\\infty\\cr\n&\\leq 24\\sqrt{\\mu}r\\kappa\\cdot 8\\alpha \\mu r \\gamma^k\\sigma_1^L + 4n\\alpha \\mu r\\cdot\\frac{\\mu r}{n} \\gamma^k\\sigma_1^L \\cr\n&= \\upsilon\\gamma^{k}\\sigma_r^L, \\KW{\\alpha \\lesssim\\frac{1}{\\mu^{3\/2}r^2\\kappa},\\ \\alpha \\lesssim\\frac{1}{\\mu^2r^2}}\n\\end{align*}\nwhere the third inequality uses Lemmas \\ref{lemma:trimmed bound} and \\ref{lemma:error_of_projected_max_norm}.\n\\end{proof}\n\n\n\n\n\\begin{lemma} \\label{lemma:Bound_eigenvalues}\nLet $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. Let $\\widetilde{\\bm{L}}_k\\in\\mathbb{R}^{n\\times n}$ be the trim output of $\\bm{L}_k$. \nIf \\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\textnormal{\\ and }\nsupp(\\bm{S}_k)\\subset \\Omega,\n\\]\nthen\n\\begin{equation} \\label{eq:Bound_eigenvalues 1}\n|\\sigma^L_i-|\\lambda^{(k)}_i|| \\leq \\tau\\sigma_r^L\n\\end{equation}\nand\n\\begin{equation} \\label{eq:Bound_eigenvalues 2}\n(1-2\\tau)\\gamma^j \\sigma^L_1\\leq |\\lambda^{(k)}_{r+1}| +\\gamma^j|\\lambda^{(k)}_1|\\leq(1+2\\tau)\\gamma^j \\sigma^L_1\n\\end{equation}\nhold for all $k\\geq 0$ and $j \\leq k+1$, provided $1>\\gamma\\geq512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}}$. 
Here $|\\lambda^{(k)}_{i}|$ is the $i^{th}$ singular value of $\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)$.\n\\end{lemma}\n\\begin{proof}\nSince $\\bm{D}=\\bm{L}+\\bm{S}$, we have\n\\begin{align*}\n\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)&=\\mathcal{P}_{\\widetilde{T}_k}(\\bm{L}+\\bm{S}-\\bm{S}_k) \\cr\n&=\\bm{L}+(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k).\n\\end{align*}\nHence, by Weyl's inequality and (\\ref{eq:norm_of_Z 1}) in Lemma \\ref{lemma:norm_of_Z}, we can see that\n\\begin{align*}\n|\\sigma^L_i-|\\lambda^{(k)}_i|| &\\leq \\|(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_2 \\cr\n &\\leq \\tau\\gamma^{k+1} \\sigma_r^L \\KW{\\alpha \\lesssim\\frac{1}{\\mu r\\kappa}} \n\\end{align*}\nholds for all $i$ and $k\\geq0$.\nSo the first claim is proved since $\\gamma<1$.\n\nNotice that $\\bm{L}$ is a rank-$r$ matrix, which implies $\\sigma_{r+1}^L=0$, so we have\n\\[\n\\begin{split}\n||\\lambda^{(k)}_{r+1}| +\\gamma^{j}|\\lambda^{(k)}_1| -\\gamma^{j}\\sigma_1^L|\n&= ||\\lambda^{(k)}_{r+1}| -\\sigma_{r+1}^L+\\gamma^{j}|\\lambda^{(k)}_1| -\\gamma^{j}\\sigma_1^L| \\cr\n&\\leq \\tau \\gamma^{k+1}\\sigma_r^L + \\tau \\gamma^{j+k+1}\\sigma_r^L\\cr\n&\\leq \\left(1+\\gamma^{k+1}\\right) \\tau \\gamma^j\\sigma_r^L \\cr\n&\\leq 2\\tau \\gamma^j\\sigma_1^L \n\\end{split}\n\\]\nfor all $j\\leq k+1$. This completes the proof of the second claim.\n\\end{proof}\n\n\n\n\n\\begin{lemma} \\label{lemma:Bound_of_L-L_k_2_norm}\nLet $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. Let $\\widetilde{\\bm{L}}_k\\in\\mathbb{R}^{n\\times n}$ be the trim output of $\\bm{L}_k$. 
\nIf\n\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\textnormal{\\ and }\nsupp(\\bm{S}_k)\\subset \\Omega,\n\\]\nthen we have \n\\[\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_2 \\leq 8\\alpha \\mu r \\gamma^{k+1}\\sigma_1^L,\n\\]\nprovided $1>\\gamma\\geq512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}}$.\n\\end{lemma}\n\\begin{proof} A direct calculation yields \n\\begin{align*}\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_2 &\\leq\\|\\bm{L}-\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)\\|_2+\\|\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)-\\bm{L}_{k+1}\\|_2\\cr\n &\\leq 2\\|\\bm{L}-\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)\\|_2\\cr\n &=2\\|\\bm{L}-\\mathcal{P}_{\\widetilde{T}_k}(\\bm{L}+\\bm{S}-\\bm{S}_k)\\|_2\\cr\n &= 2\\|(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)\\|_2 \\cr\n &\\leq 2 \\cdot \\tau\\gamma^{k+1}\\sigma_r^L \\cr\n &= 8\\alpha \\mu r\\gamma^{k+1}\\sigma_1^L,\n\\end{align*}\nwhere the second inequality follows from the fact that $\\bm{L}_{k+1}=\\mathcal{H}_r(\\mathcal{P}_{{\\widetilde{T}_k}}(\\bm{D}-\\bm{S}_k))$ is the best rank-$r$ approximation of $\\mathcal{P}_{{\\widetilde{T}_k}}(\\bm{D}-\\bm{S}_k)$, and the last inequality uses (\\ref{eq:norm_of_Z 1}) in Lemma \\ref{lemma:norm_of_Z}.\n\\end{proof}\n\n\n\\begin{lemma} \\label{lemma:Bound_of_L-L_k_infty_norm}\nLet $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. Let $\\widetilde{\\bm{L}}_k\\in\\mathbb{R}^{n\\times n}$ be the trim output of $\\bm{L}_k$. If\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\textnormal{\\ and }\nsupp(\\bm{S}_k)\\subset \\Omega,\n\\]\nthen we have \n\\[\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_\\infty \\leq \\left(\\frac{1}{2}-\\tau\\right)\\frac{\\mu r}{n}\\gamma^{k+1} \\sigma_1^L,\n\\]\nprovided $1>\\gamma\\geq\\max\\{512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}},\\frac{2\\upsilon}{(1-12\\tau)(1-\\tau-\\upsilon)^2}\\}$ and $\\tau<\\frac{1}{12}$.\n\\end{lemma}\n\\begin{proof}\nLet $\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)=\\begin{bmatrix}\\bm{U}_{k+1}& \\ddot{\\bm{U}}_{k+1} \\end{bmatrix} \\begin{bmatrix}\\bm{\\Lambda} & \\bm{0}\\\\ \\bm{0} &\\ddot{\\bm{\\Lambda}}\\end{bmatrix} \\begin{bmatrix}\\bm{U}_{k+1}^T \\\\ \\ddot{\\bm{U}}_{k+1}^T\\end{bmatrix} =\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T$ be its eigenvalue decomposition. We use the lighter notation $\\lambda_i$ ($1\\leq i\\leq n$) for the eigenvalues of $\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)$ at the $k^{th}$ iteration and assume they are ordered by $|\\lambda_1|\\geq|\\lambda_2|\\geq\\cdots\\geq|\\lambda_n|$. Moreover, $\\bm{\\Lambda}$ contains the $r$ largest eigenvalues in magnitude, $\\bm{U}_{k+1}$ contains the first $r$ eigenvectors, and $\\ddot{\\bm{U}}_{k+1}$ contains the rest. It follows that $ \\bm{L}_{k+1}=\\mathcal{H}_r(\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k))=\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T $.\n\nDenote $\\bm{Z}=\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)-\\bm{L}=(\\mathcal{P}_{\\widetilde{T}_k}-\\mathcal{I})\\bm{L}+\\mathcal{P}_{\\widetilde{T}_k}(\\bm{S}-\\bm{S}_k)$. 
Let $\\bm{u}_i$ be the $i^{th}$ eigenvector of $\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)$. Noting that $(\\lambda_i\\bm{I}-\\bm{Z})\\bm{u}_i=\\bm{L}\\bm{u}_i$, we have \n\\begin{align*}\n\\bm{u}_i = \\left(\\bm{I}-\\frac{\\bm{Z}}{\\lambda_i}\\right)^{-1} \\frac{\\bm{L}}{\\lambda_i}\\bm{u}_i= \\left(\\bm{I}+\\frac{\\bm{Z}}{\\lambda_i}+\\left(\\frac{\\bm{Z}}{\\lambda_i}\\right)^2+\\cdots\\right)\\frac{\\bm{L}}{\\lambda_i}\\bm{u}_i\n\\end{align*}\nfor all $\\bm{u}_i$ with $1\\leq i\\leq r$, where the expansion is valid because \n$$\\frac{\\left\\|\\bm{Z}\\right\\|_2}{|\\lambda_i|}\\leq\\frac{\\left\\|\\bm{Z}\\right\\|_2}{|\\lambda_r|}\\leq\\frac{\\tau}{1-\\tau}<1, $$\nwhich follows from (\\ref{eq:norm_of_Z 1}) in Lemma \\ref{lemma:norm_of_Z} and (\\ref{eq:Bound_eigenvalues 1}) in Lemma \\ref{lemma:Bound_eigenvalues}.\n This implies\n\\begin{align*}\n\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T &= \\sum_{i=1}^r \\bm{u}_i\\lambda_i\\bm{u}_i^T \\cr\n&= \\sum_{i=1}^r \\left( \\sum_{a\\geq0} \\left(\\frac{\\bm{Z}}{\\lambda_i}\\right)^a\\frac{\\bm{L}}{\\lambda_i} \\right)\\bm{u}_i\\lambda_i\\bm{u}_i^T \\left( \\sum_{b\\geq0} \\left(\\frac{\\bm{Z}}{\\lambda_i}\\right)^b\\frac{\\bm{L}}{\\lambda_i} \\right)^T \\cr\n&= \\sum_{a,b\\geq0} \\bm{Z}^a \\bm{L} \\left(\\sum_{i=1}^r\\bm{u}_i\\frac{1}{\\lambda_i^{a+b+1}}\\bm{u}_i^T \\right) \\bm{L} \\bm{Z}^b \\cr\n&= \\sum_{a,b\\geq0} \\bm{Z}^a\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\bm{Z}^b,\n\\end{align*}\nwhere we also used the symmetry of $\\bm{L}$ and $\\bm{Z}$.\nThus, we have\n\\begin{align*}\n\\|\\bm{L}_{k+1}-\\bm{L}\\|_\\infty &= \\|\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T -\\bm{L}\\|_\\infty \\cr\n&= \\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L} + \\sum_{a+b>0} \\bm{Z}^a\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\bm{Z}^b \\|_\\infty \\cr\n&\\leq \\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L}\\|_\\infty + \\sum_{a+b>0} \\|\\bm{Z}^a\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\bm{Z}^b \\|_\\infty \\cr\n&=: \\bm{Y}_0 + \\sum_{a+b>0} \\bm{Y}_{ab}.\n\\end{align*}\n\nWe will handle $\\bm{Y}_0$ first. Recall that $\\bm{L}=\\bm{U}\\bm{\\Sigma} \\bm{V}^T$ is the SVD of the symmetric matrix $\\bm{L}$ which obeys $\\mu$-incoherence, i.e., $\\bm{U}\\bm{U}^T=\\bm{V}\\bm{V}^T$ and $\\|\\bm{e}_i^T\\bm{U}\\bm{U}^T\\|_2\\leq\\sqrt{\\frac{\\mu r}{n}}$ for all $i$. Bounding each entry of $\\bm{Y}_0$ uniformly, one has \n\\begin{align*}\n\\bm{Y}_0 &= \\max_{ij} |\\bm{e}_i^T(\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L})\\bm{e}_j| \\cr\n&=\\max_{ij} |\\bm{e}_i^T\\bm{U}\\bm{U}^T(\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L})\\bm{U}\\bm{U}^T\\bm{e}_j| \\cr\n&\\leq\\max_{ij} \\|\\bm{e}_i^T\\bm{U}\\bm{U}^T\\|_2~\\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L}\\|_2~ \\|\\bm{U}\\bm{U}^T\\bm{e}_j\\|_2 \\cr\n&\\leq \\frac{\\mu r}{n}\\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L}\\|_2,\n\\end{align*}\nwhere the second equality follows from the fact that $\\bm{U}\\bm{U}^T\\bm{L}=\\bm{L}\\bm{U}\\bm{U}^T=\\bm{L}$. 
Since $\\bm{L}=\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z}$, we have\n\\begin{align*}\n&~\\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{L}-\\bm{L}\\|_2 \\cr\n=&~ \\|(\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z})\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T(\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z})-\\bm{L}\\|_2 \\cr\n=&~\\|\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T-\\bm{L}-\\bm{U}_{k+1}\\bm{U}_{k+1}^T\\bm{Z}-\\bm{Z}\\bm{U}_{k+1}\\bm{U}_{k+1}^T-\\bm{Z}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-1}\\bm{U}_{k+1}^T\\bm{Z}\\|_2 \\cr\n\\leq&~ \\|\\bm{Z}-\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T\\|_2 +2\\|\\bm{Z}\\|_2+\\frac{\\|\\bm{Z}\\|_2^2}{|\\lambda_r|} \\cr\n\\leq&~ \\|\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T\\|_2 +4\\|\\bm{Z}\\|_2 \\cr\n\\leq&~ |\\lambda_{r+1}|+4\\|\\bm{Z}\\|_2 \\cr\n\\leq&~ 5\\|\\bm{Z}\\|_2 \\cr\n\\leq&~ 5\\tau\\gamma^{k+1}\\sigma_1^L, \\KW{\\alpha \\lesssim\\frac{1}{\\mu r}}\n\\end{align*}\nwhere the fifth inequality uses (\\ref{eq:norm_of_Z 1}) in Lemma \\ref{lemma:norm_of_Z}. Here we have used that $\\frac{\\|\\bm{Z}\\|_2}{|\\lambda_r|}\\leq\\frac{\\tau}{1-\\tau}<1$ since $\\tau<\\frac{1}{2}$, and that $|\\lambda_{r+1}|\\leq\\|\\bm{Z}\\|_2$ since $\\bm{L}$ is a rank-$r$ matrix.\nThus, we have\n\\begin{equation} \\label{eq:Y0 bound}\n\\bm{Y}_0\\leq \\frac{\\mu r}{n} 5\\tau\\gamma^{k+1}\\sigma_1^L.\n\\end{equation}\n\nNext, we derive an upper bound for the remaining terms. Note that\n\\begin{align*}\n\\bm{Y}_{ab}&=\\max_{ij} |\\bm{e}_i^T\\bm{Z}^a\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\bm{Z}^b\\bm{e}_j| \\cr\n&=\\max_{ij} |(\\bm{e}_i^T\\bm{Z}^a\\bm{U}\\bm{U}^T)\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}(\\bm{U}\\bm{U}^T\\bm{Z}^b\\bm{e}_j)| \\cr\n&\\leq \\max_{ij} \\|\\bm{e}_i^T\\bm{Z}^a\\bm{U}\\|_2~\\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\|_2~\\|\\bm{U}^T\\bm{Z}^b\\bm{e}_j\\|_2 \\cr\n&\\leq \\max_l\\frac{\\mu r}{n}( \\sqrt{n}\\|\\bm{e}_l^T \\bm{Z}\\|_2)^{a+b} \\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\|_2,\n\\end{align*}\nwhere the last inequality uses Lemma \\ref{lemma:bound_power_vector_norm_with_incoherence}.\nFurthermore, by using $\\bm{L}=\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z}$ again, we get\n\\begin{align*}\n&~\\|\\bm{L}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{L}\\|_2 \\cr\n=&~ \\|(\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z})\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T(\\bm{U}_{k+1}\\bm{\\Lambda} \\bm{U}_{k+1}^T+\\ddot{\\bm{U}}_{k+1}\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_{k+1}^T -\\bm{Z})\\|_2 \\cr\n=&~ \\|\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b-1)}\\bm{U}_{k+1}^T-\\bm{Z}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b)}\\bm{U}_{k+1}^T-\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b)}\\bm{U}_{k+1}^T\\bm{Z}+\\bm{Z}\\bm{U}_{k+1}\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_{k+1}^T\\bm{Z}\\|_2 \\cr\n\\leq&~ |\\lambda_r|^{-(a+b-1)} + |\\lambda_r|^{-(a+b)}\\|\\bm{Z}\\|_2+ |\\lambda_r|^{-(a+b)}\\|\\bm{Z}\\|_2+ |\\lambda_r|^{-(a+b+1)}\\|\\bm{Z}\\|_2^2 \\cr\n=&~ |\\lambda_r|^{-(a+b-1)}\\left( 1+ 
\\frac{2\\|\\bm{Z}\\|_2}{|\\lambda_r|}+\\left(\\frac{\\|\\bm{Z}\\|_2}{|\\lambda_r|}\\right)^2 \\right) \\cr\n=&~ |\\lambda_r|^{-(a+b-1)}\\left( 1+ \\frac{\\|\\bm{Z}\\|_2}{|\\lambda_r|} \\right)^2 \\cr\n\\leq&~ |\\lambda_r|^{-(a+b-1)}\\left( \\frac{1}{1-\\tau} \\right)^2 \\cr\n\\leq&~ \\left( \\frac{1}{1-\\tau} \\right)^2 \\left((1-\\tau)\\sigma_r^L\\right)^{-(a+b-1)},\n\\end{align*}\nwhere the second inequality follows from $\\frac{\\|\\bm{Z}\\|_2}{|\\lambda_r|}\\leq\\frac{\\tau}{1-\\tau}$, and the last inequality follows from Lemma \\ref{lemma:Bound_eigenvalues}. Together with (\\ref{eq:norm_of_Z 2}) in Lemma \\ref{lemma:norm_of_Z}, we have\n\\begin{align*}\n\\sum_{a+b>0}\\bm{Y}_{ab}\n&\\leq \\sum_{a+b>0}\\frac{\\mu r}{n}\\left( \\frac{1}{1-\\tau} \\right)^2 \\upsilon\\gamma^{k}\\sigma_r^L \\left( \\frac{\\upsilon\\gamma^{k}\\sigma_r^L}{(1-\\tau)\\sigma_r^L}\\right)^{a+b-1}\\cr\n&\\leq \\frac{\\mu r}{n}\\left( \\frac{1}{1-\\tau} \\right)^2 \\upsilon \\gamma^{k}\\sigma_1^L \\sum_{a+b>0} \\left( \\frac{\\upsilon}{1-\\tau}\\right)^{a+b-1}\\cr\n&\\leq \\frac{\\mu r}{n}\\left( \\frac{1}{1-\\tau} \\right)^2 \\upsilon \\gamma^{k}\\sigma_1^L \\left( \\frac{1}{1-\\frac{\\upsilon}{1-\\tau}}\\right)^2\\cr\n&\\leq \\frac{\\mu r}{n}\\left( \\frac{1}{1-\\tau-\\upsilon} \\right)^2 \\upsilon \\gamma^{k}\\sigma_1^L. \\numberthis\\label{eq:sum Y_ab bound}\n\\end{align*}\nFinally, combining (\\ref{eq:Y0 bound}) and (\\ref{eq:sum Y_ab bound}) gives\n\\begin{align*}\n\\|\\bm{L}_{k+1}-\\bm{L}\\|_\\infty &\\leq \\bm{Y}_0 + \\sum_{a+b>0} \\bm{Y}_{ab} \\cr\n&\\leq \\frac{\\mu r}{n} 5\\tau\\gamma^{k+1}\\sigma_1^L + \\frac{\\mu r}{n}\\left( \\frac{1}{1-\\tau-\\upsilon} \\right)^2 \\upsilon \\gamma^{k}\\sigma_1^L \\cr\n&\\leq \\left(\\frac{1}{2}-\\tau\\right)\\frac{\\mu r}{n} \\gamma^{k+1}\\sigma_1^L,\n\\end{align*}\nwhere the last inequality follows from $\\gamma\\geq\\frac{2\\upsilon}{(1-12\\tau)(1-\\tau-\\upsilon)^2}$.\n\\end{proof}\n\n\\begin{lemma} \\label{lemma:Bound_of_S-S_k}\nLet $\\bm{L}\\in\\mathbb{R}^{n\\times n}$ and $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be two symmetric matrices satisfying Assumptions \\nameref{assume:Inco} and \\nameref{assume:Sparse}, respectively. Let $\\widetilde{\\bm{L}}_k\\in\\mathbb{R}^{n\\times n}$ be the trim output of $\\bm{L}_k$. Recall that $\\beta=\\frac{\\mu r}{2n}$. If \n\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\textnormal{\\ and }\nsupp(\\bm{S}_k)\\subset \\Omega,\n\\]\n then we have\n\\[\nsupp(\\bm{S}_{k+1})\\subset \\Omega \\quad \\textnormal{and} \\quad \\|\\bm{S}-\\bm{S}_{k+1}\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^{k+1} \\sigma_1^L,\n\\]\nprovided $1>\\gamma\\geq\\max\\{512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}},\\frac{2\\upsilon}{(1-12\\tau)(1-\\tau-\\upsilon)^2}\\}$ and $\\tau<\\frac{1}{12}$.\n\\end{lemma}\n\\begin{proof}\nWe first notice that\n\\[\n[\\bm{S}_{k+1}]_{ij}=[\\mathcal{T}_{\\zeta_{k+1}} (\\bm{D}-\\bm{L}_{k+1})]_{ij}=[\\mathcal{T}_{\\zeta_{k+1}} (\\bm{S}+\\bm{L}-\\bm{L}_{k+1})]_{ij}=\n\\begin{cases}\n\\mathcal{T}_{\\zeta_{k+1}} ([\\bm{S}+\\bm{L}-\\bm{L}_{k+1}]_{ij}) & (i,j)\\in\\Omega \\cr\n\\mathcal{T}_{\\zeta_{k+1}} ([\\bm{L}-\\bm{L}_{k+1}]_{ij}) & (i,j)\\in\\Omega^c \\cr\n\\end{cases}.\n\\]\nLet $|\\lambda_{i}^{(k)}|$ denote the $i^{th}$ singular value of $\\mathcal{P}_{\\widetilde{T}_k}(\\bm{D}-\\bm{S}_k)$. 
By Lemmas \\ref{lemma:Bound_eigenvalues} and \\ref{lemma:Bound_of_L-L_k_infty_norm}, we have\n\\[\n\\begin{split}\n|[\\bm{L}-\\bm{L}_{k+1}]_{ij}|&\\leq\\|\\bm{L}-\\bm{L}_{k+1}\\|_\\infty\\leq \\left(\\frac{1}{2}-\\tau\\right)\\frac{\\mu r}{n} \\gamma^{k+1} \\sigma_1^L \\\\\n&\\leq \\left(\\frac{1}{2}-\\tau\\right)\\frac{\\mu r}{n} \\frac{1}{1-2\\tau}\\left(|\\lambda_{r+1}^{(k)}| +\\gamma^{k+1}|\\lambda_1^{(k)}|\\right) \\\\\n&= \\zeta_{k+1}\n\\end{split}\n\\]\nfor any entry of $\\bm{L}-\\bm{L}_{k+1}$.\nHence, $[\\bm{S}_{k+1}]_{ij}=0$ for all $(i,j)\\in \\Omega^c$, i.e., $supp(\\bm{S}_{k+1})\\subset \\Omega$. \n\nDenote $\\Omega_{k+1}:=supp(\\bm{S}_{k+1})=\\{(i,j)~|~|[\\bm{D}-\\bm{L}_{k+1}]_{ij}|>\\zeta_{k+1}\\}$. Then, for any entry of $\\bm{S}-\\bm{S}_{k+1}$, we have\n\\[\n[\\bm{S}-\\bm{S}_{k+1}]_{ij} =\n\\begin{cases}\n0 &\\cr\n[\\bm{L}_{k+1}-\\bm{L}]_{ij} & \\cr\n[\\bm{S}]_{ij} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 &\\cr\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_\\infty & \\cr\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_\\infty +\\zeta_{k+1} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 & (i,j)\\in\\Omega^c \\cr\n\\left(\\frac{1}{2}-\\tau\\right)\\frac{\\mu r}{n} \\gamma^{k+1} \\sigma_1^L & (i,j)\\in \\Omega_{k+1} \\cr\n\\frac{\\mu r}{n} \\gamma^{k+1} \\sigma_1^L & (i,j)\\in \\Omega\\backslash\\Omega_{k+1}. \\cr\n\\end{cases}\n\\]\nHere the last step follows from Lemma \\ref{lemma:Bound_eigenvalues}, which implies $\\zeta_{k+1}=\\frac{\\mu r}{2n} (|\\lambda_{r+1}^{(k)}| +\\gamma^{k+1}|\\lambda_1^{(k)}|)\\leq \\left(\\frac{1}{2}+\\tau\\right)\\frac{\\mu r}{n}\\gamma^{k+1}\\sigma_1^L$.\nTherefore, $\\|\\bm{S}-\\bm{S}_{k+1}\\|_\\infty\\leq \\frac{\\mu r}{n} \\gamma^{k+1} \\sigma_1^L$. \n\\end{proof}\n\nNow, we have all the ingredients for the proof of Theorem \\ref{thm:local convergence}.\n\\begin{proof} [Proof of Theorem \\ref{thm:local convergence}] This theorem will be proved by mathematical induction.\\\\\n\\textbf{Base Case:} When $k=0$, the base case is satisfied by the assumption on the initialization.\\\\\n\\textbf{Induction Step:} Assume we have\n\\[\n\\|\\bm{L}-\\bm{L}_k\\|_2 \\leq 8\\alpha \\mu r \\gamma^k\\sigma_1^L,\\quad\n\\|\\bm{S}-\\bm{S}_k\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^k\\sigma_1^L,\\quad \\textnormal{and} \\quad\nsupp(\\bm{S}_k)\\subset \\Omega\n\\]\nat the $k^{th}$ iteration. At the $(k+1)^{th}$ iteration, it follows directly from Lemmas~\\ref{lemma:Bound_of_L-L_k_2_norm} and \\ref{lemma:Bound_of_S-S_k} that\n\\[\n\\|\\bm{L}-\\bm{L}_{k+1}\\|_2 \\leq 8\\alpha \\mu r \\gamma^{k+1}\\sigma_1^L,\\quad \\|\\bm{S}-\\bm{S}_{k+1}\\|_\\infty \\leq \\frac{\\mu r}{n} \\gamma^{k+1}\\sigma_1^L,\\quad \\textnormal{and} \\quad\nsupp(\\bm{S}_{k+1})\\subset \\Omega,\n\\]\nwhich completes the proof.\n\nAdditionally, notice that we overall require $1>\\gamma\\geq\\max\\{512\\tau r \\kappa^2+\\frac{1}{\\sqrt{12}},\\frac{2\\upsilon}{(1-12\\tau)(1-\\tau-\\upsilon)^2}\\}$. By the definition of $\\tau$ and $\\upsilon$, one can easily see that the lower bound approaches $\\frac{1}{\\sqrt{12}}$ when the constant hidden in \\eqref{eq:condition_on_p} is sufficiently large. Therefore, the theorem can be proved for any $\\gamma\\in\\left(\\frac{1}{\\sqrt{12}},1\\right)$.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Theorem \\ref{thm:initialization bound}} \\label{subsec:proof of initialization}\nWe first present a lemma which is a variant of Lemma~\\ref{lemma:bound_power_vector_norm_with_incoherence} and also appears in \\citep[Lemma 5]{netrapalli2014non}. 
The lemma can be similarly proved by mathematical induction. \n\\begin{lemma} \\label{init:lemma:bound_power_vector_norm_with_incoherence}\nLet $\\bm{S}\\in\\mathbb{R}^{n\\times n}$ be a sparse matrix satisfying Assumption \\nameref{assume:Sparse}. Let $\\bm{U}\\in\\mathbb{R}^{n\\times r}$ be a matrix with orthonormal columns satisfying $\\mu$-incoherence, i.e., $\\|\\bm{e}_i^T\\bm{U}\\|_2\\leq\\sqrt{\\frac{\\mu r}{n}}$ for all $i$. Then\n\\[\n\\|\\bm{e}_i^T\\bm{S}^a\\bm{U}\\|_2\\leq \\sqrt{\\frac{\\mu r}{n}}(\\alpha n\\|\\bm{S}\\|_\\infty)^a\n\\]\nfor all $i$ and $a\\geq 0$.\n\\end{lemma}\n\n\nThough the proposed initialization scheme (i.e., Algorithm~\\ref{Algo:Init1}) basically consists of two steps of AltProj \\citep{netrapalli2014non}, we provide an independent proof for Theorem~\\ref{thm:initialization bound} here because we bound the approximation errors of the low rank matrices using the spectral norm rather than the infinity norm. The proof of Theorem~\\ref{thm:initialization bound} follows a similar structure to that of Theorem~\\ref{thm:local convergence}, but without the projection onto a low dimensional tangent space. Instead of \nfirst presenting several auxiliary lemmas, we give a single proof by putting all the elements together. \n\n\\begin{proof} [Proof of Theorem \\ref{thm:initialization bound}] The proof can be partitioned into several parts.\n\n~\\\\\n(i) Note that $\\bm{L}_{-1}=\\bm{0}$ and\n\\[\n\\|\\bm{L}-\\bm{L}_{-1}\\|_\\infty=\\|\\bm{L}\\|_\\infty = \\max_{ij} \\left|\\bm{e}_i^T\\bm{U}\\bm{\\Sigma} \\bm{U}^T\\bm{e}_j\\right| \\leq \\max_{ij} \\|\\bm{e}_i^T\\bm{U}\\|_2\\|\\bm{\\Sigma}\\|_2\\|\\bm{U}^T\\bm{e}_j\\|_2\\leq\\frac{\\mu r}{n}\\sigma_1^L,\n\\]\nwhere the last inequality follows from Assumption \\nameref{assume:Inco}, i.e., $\\bm{L}$ is $\\mu$-incoherent.\nThus, with the choice of $\\beta_{init}\\geq\\frac{\\mu r\\sigma_1^L}{n\\sigma_1^D}$, we have\n\\begin{equation} \\label{eq:Init:step 1 result}\n\\|\\bm{L}-\\bm{L}_{-1}\\|_\\infty\\leq \\beta_{init}\\sigma_1^D = \\zeta_{-1}.\n\\end{equation}\nSince\n\\[\n[\\bm{S}_{-1}]_{ij}=[\\mathcal{T}_{\\zeta_{-1}} (\\bm{S}+\\bm{L}-\\bm{L}_{-1})]_{ij} =\n\\begin{cases}\n\\mathcal{T}_{\\zeta_{-1}} ([\\bm{S}+\\bm{L}-\\bm{L}_{-1}]_{ij}) & (i,j)\\in\\Omega \\cr\n\\mathcal{T}_{\\zeta_{-1}} ([\\bm{L}-\\bm{L}_{-1}]_{ij}) & (i,j)\\in\\Omega^c, \\cr\n\\end{cases}\n\\]\nit follows that $[\\bm{S}_{-1}]_{ij}=0$ for all $(i,j)\\in\\Omega^c$, i.e., $\\Omega_{-1}:=supp(\\bm{S}_{-1})\\subset \\Omega$. Moreover, for any entry of $\\bm{S}-\\bm{S}_{-1}$, we have\n\\[\n[\\bm{S}-\\bm{S}_{-1}]_{ij} =\n\\begin{cases}\n0 & \\cr\n[\\bm{L}_{-1}-\\bm{L}]_{ij} & \\cr\n[\\bm{S}]_{ij} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 & \\cr\n\\|\\bm{L}-\\bm{L}_{-1}\\|_\\infty & \\cr\n\\|\\bm{L}-\\bm{L}_{-1}\\|_\\infty +\\zeta_{-1} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 & (i,j)\\in \\Omega^c \\cr\n\\frac{\\mu r}{n} \\sigma_1^L & (i,j)\\in \\Omega_{-1} \\cr\n\\frac{4\\mu r}{n} \\sigma_1^L & (i,j)\\in \\Omega\\backslash\\Omega_{-1} \\cr\n\\end{cases},\n\\]\nwhere the last inequality follows from $\\beta_{init}\\leq\\frac{3\\mu r\\sigma_1^L}{n\\sigma_1^D}$, so that $\\zeta_{-1}\\leq \\frac{3\\mu r}{n}\\sigma_1^L$. 
Therefore, it follows that\n\\begin{equation}\\label{eq:SminusS}\nsupp(\\bm{S}_{-1})\\subset\\Omega\\quad \\textnormal{and}\\quad \\|\\bm{S}-\\bm{S}_{-1}\\|_\\infty\\leq\\frac{4\\mu r}{n}\\sigma_1^L.\n\\end{equation}\nBy Lemma \\ref{lemma:bound of sparse matrix}, we also have\n\\[\n\\|\\bm{S}-\\bm{S}_{-1}\\|_2\\leq \\alpha n\\|\\bm{S}-\\bm{S}_{-1}\\|_\\infty\\leq4\\alpha \\mu r\\sigma_1^L.\n\\]\n\n\n~\\\\\n(ii) To bound the approximation error of $\\bm{L}_0$ to $\\bm{L}$ in terms of the spectral norm, note that\n\\[\n\\begin{split}\n\\|\\bm{L}-\\bm{L}_0\\|_2&\\leq\\|\\bm{L}-(\\bm{D}-\\bm{S}_{-1})\\|_2 + \\|(\\bm{D}-\\bm{S}_{-1})-\\bm{L}_0\\|_2 \\cr\n&\\leq 2\\|\\bm{L}-(\\bm{D}-\\bm{S}_{-1})\\|_2\\cr\n&= 2\\|\\bm{L}-(\\bm{L}+\\bm{S}-\\bm{S}_{-1})\\|_2\\cr\n&= 2\\|\\bm{S}-\\bm{S}_{-1}\\|_2,\n\\end{split}\n\\]\nwhere the second inequality follows from the fact that $\\bm{L}_{0}=\\mathcal{H}_r(\\bm{D}-\\bm{S}_{-1})$ is the best rank-$r$ approximation of $\\bm{D}-\\bm{S}_{-1}$. It follows immediately that\n\\begin{equation}\\label{eq:norm:L-L0}\n\\|\\bm{L}-\\bm{L}_0\\|_2\\leq 8\\alpha \\mu r\\sigma_1^L .\n\\end{equation}\n\n\n~\\\\\n(iii) Since $\\bm{D}=\\bm{L}+\\bm{S}$, we have $\\bm{D}-\\bm{S}_{-1}=\\bm{L}+\\bm{S}-\\bm{S}_{-1}$. Let $\\lambda_i$ denote the $i^{th}$ eigenvalue of $\\bm{D}-\\bm{S}_{-1}$, ordered by $|\\lambda_1|\\geq|\\lambda_2|\\geq\\cdots\\geq|\\lambda_n|$. The application of Weyl's inequality together with the bound of $\\alpha$ in Assumption \\nameref{assume:Sparse} implies that\n\\begin{equation} \\label{init:eq:eigenvalues bound 0}\n|\\sigma^L_i-|\\lambda_i|| \\leq \\|\\bm{S}-\\bm{S}_{-1}\\|_2 \\leq \\frac{\\sigma_r^L}{8} \\KW{\\alpha \\lesssim\\frac{1}{\\mu r\\kappa}}\n\\end{equation}\nholds for all $i$.\nConsequently, we have \n\\begin{align*} \n&\\frac{7}{8}\\sigma^L_i \\leq |\\lambda_i| \\leq \\frac{9}{8}\\sigma^L_i,\\qquad \\forall 1\\leq i\\leq r,\\numberthis \\label{init:eq:eigenvalues bound 1}\\\\\n&\\frac{\\|\\bm{S}-\\bm{S}_{-1}\\|_2}{\\left|\\lambda_r\\right|}\\leq \\frac{\\frac{\\sigma_r^L}{8}}{\\frac{7\\sigma_r^L}{8}}=\\frac{1}{7}.\\numberthis\\label{init:eq:eigenvalues bound 2}\n\\end{align*}\n\nLet $\\bm{D}-\\bm{S}_{-1}=[\\bm{U}_0, \\ddot{\\bm{U}}_0 ]\\begin{bmatrix}\\bm{\\Lambda} & \\bm{0}\\\\\\bm{0} &\\ddot{\\bm{\\Lambda}}\\end{bmatrix}[\\bm{U}_0, \\ddot{\\bm{U}}_0]^T =\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T$ be its eigenvalue decomposition, where $\\bm{\\Lambda}$ contains the $r$ largest eigenvalues in magnitude and $\\ddot{\\bm{\\Lambda}}$ contains the remaining eigenvalues. Also, $\\bm{U}_0$ contains the first $r$ eigenvectors, and $\\ddot{\\bm{U}}_0$ contains the rest. Notice that $ \\bm{L}_0=\\mathcal{H}_r(\\bm{D}-\\bm{S}_{-1})=\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T $ due to the symmetric setting.\nDenote $\\bm{E}=\\bm{D}-\\bm{S}_{-1}-\\bm{L}=\\bm{S}-\\bm{S}_{-1}$. Let $\\bm{u}_i$ be the $i^{th}$ eigenvector of $\\bm{D}-\\bm{S}_{-1}=\\bm{L}+\\bm{E}$. 
For $1\\leq i\\leq r$, since $(\\bm{L}+\\bm{E})\\bm{u}_i = \\lambda_i \\bm{u}_i$, we have\n\\begin{align*}\n\\bm{u}_i = \\left(\\bm{I}-\\frac{\\bm{E}}{\\lambda_i}\\right)^{-1}\\frac{\\bm{L}}{\\lambda_i}\\bm{u}_i=\\left(\\bm{I}+\\frac{\\bm{E}}{\\lambda_i}+\\left(\\frac{\\bm{E}}{\\lambda_i}\\right)^2+\\cdots\\right)\\frac{\\bm{L}}{\\lambda_i}\\bm{u}_i\n\\end{align*}\nfor each $\\bm{u}_i$, where the expansion in the last equality is valid because $ \\frac{\\left\\|\\bm{E}\\right\\|_2}{\\left|\\lambda_i\\right|}\\leq\\frac{1}{7}$ for all $1\\leq i\\leq r$, which follows from \\eqref{init:eq:eigenvalues bound 2}. Therefore, since $\\bm{E}$ is symmetric, the same expansion as in the proof of Lemma \\ref{lemma:Bound_of_L-L_k_infty_norm} gives\n\\begin{align*}\n\\|\\bm{L}_0-\\bm{L}\\|_\\infty &= \\|\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T -\\bm{L}\\|_\\infty \\cr\n&= \\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L} + \\sum_{a+b>0} \\bm{E}^a\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\bm{E}^b \\|_\\infty \\cr\n&\\leq \\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L}\\|_\\infty + \\sum_{a+b>0} \\|\\bm{E}^a\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\bm{E}^b \\|_\\infty \\cr\n&=: \\bm{Y}_0 + \\sum_{a+b>0} \\bm{Y}_{ab}.\n\\end{align*}\n\nWe will handle $\\bm{Y}_0$ first. Recall that $\\bm{L}=\\bm{U}\\bm{\\Sigma} \\bm{V}^T$ is the SVD of the symmetric matrix $\\bm{L}$, which is $\\mu$-incoherent, i.e., $\\bm{U}\\bm{U}^T=\\bm{V}\\bm{V}^T$ and $\\|\\bm{e}_i^T\\bm{U}\\bm{U}^T\\|_2\\leq\\sqrt{\\frac{\\mu r}{n}}$ for all $i$. Bounding each entry of $\\bm{Y}_0$ uniformly, we have\n\\begin{align*}\n\\bm{Y}_0 &= \\max_{ij} |\\bm{e}_i^T(\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L})\\bm{e}_j| \\cr\n&=\\max_{ij} |\\bm{e}_i^T\\bm{U}\\bm{U}^T(\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L})\\bm{U}\\bm{U}^T\\bm{e}_j| \\cr\n&\\leq\\max_{ij} \\|\\bm{e}_i^T\\bm{U}\\bm{U}^T\\|_2~\\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L}\\|_2~ \\|\\bm{U}\\bm{U}^T\\bm{e}_j\\|_2 \\cr\n&\\leq \\frac{\\mu r}{n}\\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L}\\|_2,\n\\end{align*}\nwhere the second equality follows from the fact that $\\bm{L}=\\bm{U}\\bm{U}^T\\bm{L}=\\bm{L}\\bm{U}\\bm{U}^T$. Since $\\bm{L}=\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E}$,\n\\begin{align*}\n&~\\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{L}-\\bm{L}\\|_2 \\cr\n=&~ \\|(\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E})\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T(\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E})-\\bm{L}\\|_2 \\cr\n=&~\\|\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T-\\bm{L}-\\bm{U}_0\\bm{U}_0^T\\bm{E}-\\bm{E}\\bm{U}_0\\bm{U}_0^T-\\bm{E}\\bm{U}_0\\bm{\\Lambda} ^{-1}\\bm{U}_0^T\\bm{E}\\|_2 \\cr\n\\leq&~ \\|\\bm{E}-\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T\\|_2 +2\\|\\bm{E}\\|_2+\\frac{\\|\\bm{E}\\|_2^2}{|\\lambda_r|} \\cr\n\\leq&~ \\|\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T\\|_2 +4\\|\\bm{E}\\|_2 \\cr\n\\leq&~ |\\lambda_{r+1}|+4\\|\\bm{E}\\|_2 \\cr\n\\leq&~ 5\\|\\bm{E}\\|_2,\n\\end{align*}\nwhere the first and fourth inequalities follow from (\\ref{init:eq:eigenvalues bound 0}) and (\\ref{init:eq:eigenvalues bound 2}), and $|\\lambda_{r+1}|\\leq\\|\\bm{E}\\|_2$ since $\\sigma_{r+1}^L=0$. 
Together, we have\n\\begin{equation} \\label{init:eq:Y0 bound}\n\\bm{Y}_0\\leq \\frac{5\\mu r}{n} \\|\\bm{E}\\|_2 \\leq 5\\alpha \\mu r \\|\\bm{E}\\|_\\infty,\n\\end{equation}\nwhere the last inequality follows from Lemma \\ref{lemma:bound of sparse matrix}.\n\n\nNext, we derive an upper bound for the remaining terms. Note that\n\\begin{align*}\n\\bm{Y}_{ab}&=\\max_{ij} |\\bm{e}_i^T\\bm{E}^a\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\bm{E}^b\\bm{e}_j| \\cr\n&=\\max_{ij} |(\\bm{e}_i^T\\bm{E}^a\\bm{U}\\bm{U}^T)\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}(\\bm{U}\\bm{U}^T\\bm{E}^b\\bm{e}_j)| \\cr\n&\\leq \\max_{ij} \\|\\bm{e}_i^T\\bm{E}^a\\bm{U}\\|_2~\\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\|_2~\\|\\bm{U}^T\\bm{E}^b\\bm{e}_j\\|_2 \\cr\n&\\leq \\frac{\\mu r}{n}( \\alpha n\\|\\bm{E}\\|_\\infty)^{a+b} \\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\|_2 \\cr\n&\\leq \\alpha \\mu r \\|\\bm{E}\\|_\\infty \\left(\\frac{\\sigma_r^L}{8}\\right)^{a+b-1} \\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\|_2, \\KW{ \\alpha \\lesssim\\frac{1}{\\mu r\\kappa}}\n\\end{align*}\nwhere the second inequality uses Lemma \\ref{init:lemma:bound_power_vector_norm_with_incoherence}. \nFurthermore, by using $\\bm{L}=\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E}$ again, we have\n\\begin{align*}\n&~\\|\\bm{L}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{L}\\|_2 \\cr\n=&~ \\|(\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E})\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T(\\bm{U}_0\\bm{\\Lambda} \\bm{U}_0^T+\\ddot{\\bm{U}}_0\\ddot{\\bm{\\Lambda}}\\ddot{\\bm{U}}_0^T -\\bm{E})\\|_2 \\cr\n=&~ \\|\\bm{U}_0\\bm{\\Lambda} ^{-(a+b-1)}\\bm{U}_0^T-\\bm{E}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b)}\\bm{U}_0^T-\\bm{U}_0\\bm{\\Lambda} ^{-(a+b)}\\bm{U}_0^T\\bm{E}+\\bm{E}\\bm{U}_0\\bm{\\Lambda} ^{-(a+b+1)}\\bm{U}_0^T\\bm{E}\\|_2 \\cr\n\\leq&~ |\\lambda_r|^{-(a+b-1)} + |\\lambda_r|^{-(a+b)}\\|\\bm{E}\\|_2+ |\\lambda_r|^{-(a+b)}\\|\\bm{E}\\|_2+ |\\lambda_r|^{-(a+b+1)}\\|\\bm{E}\\|_2^2 \\cr\n=&~ |\\lambda_r|^{-(a+b-1)}\\left( 1+ \\frac{2\\|\\bm{E}\\|_2}{|\\lambda_r|}+\\left(\\frac{\\|\\bm{E}\\|_2}{|\\lambda_r|}\\right)^2 \\right) \\cr\n=&~ |\\lambda_r|^{-(a+b-1)}\\left( 1+ \\frac{\\|\\bm{E}\\|_2}{|\\lambda_r|} \\right)^2 \\cr\n\\leq&~ 2|\\lambda_r|^{-(a+b-1)} \\cr\n\\leq&~ 2\\left(\\frac{7}{8}\\sigma_r^L\\right)^{-(a+b-1)},\n\\end{align*}\nwhere the second inequality follows from \\eqref{init:eq:eigenvalues bound 2} and the last inequality follows from \\eqref{init:eq:eigenvalues bound 1}. Together, we have\n\\begin{align*}\n\\sum_{a+b>0}\\bm{Y}_{ab}&\\leq \\sum_{a+b>0} 2\\alpha \\mu r \\|\\bm{E}\\|_\\infty \\left(\\frac{\\frac{1}{8}\\sigma_r^L}{\\frac{7}{8}\\sigma_r^L }\\right)^{a+b-1} \\cr\n&\\leq 2\\alpha \\mu r \\|\\bm{E}\\|_\\infty \\sum_{a+b>0} \\left( \\frac{1}{7}\\right)^{a+b-1}\\cr\n&\\leq 2\\alpha \\mu r \\|\\bm{E}\\|_\\infty \\left( \\frac{1}{1-\\frac{1}{7}}\\right)^2\\cr\n&\\leq 3\\alpha \\mu r \\|\\bm{E}\\|_\\infty. 
\\numberthis\\label{init:eq:sum Y_ab bound}\n\\end{align*}\nFinally, combining \\eqref{init:eq:Y0 bound} and \\eqref{init:eq:sum Y_ab bound} yields\n\\begin{align*}\n\\|\\bm{L}_0-\\bm{L}\\|_\\infty &\\leq \\bm{Y}_0 + \\sum_{a+b>0} \\bm{Y}_{ab} \\cr\n&\\leq 5\\alpha \\mu r \\|\\bm{E}\\|_\\infty + 3\\alpha \\mu r \\|\\bm{E}\\|_\\infty \\cr\n&\\leq \\frac{\\mu r}{4n}\\sigma_1^L, \\KW{\\alpha \\lesssim\\frac{1}{\\mu r}}\\numberthis\\label{init:eq: L-L0 inf norm}\n\\end{align*}\nwhere the last step uses \\eqref{eq:SminusS} and the bound of $\\alpha$ in Assumption \\nameref{assume:Sparse}.\n\n\n~\\\\\n(iv) From the thresholding rule, we know that\n\\[\n[\\bm{S}_{0}]_{ij}=[\\mathcal{T}_{\\zeta_{0}} (\\bm{S}+\\bm{L}-\\bm{L}_{0})]_{ij} =\n\\begin{cases}\n\\mathcal{T}_{\\zeta_{0}} ([\\bm{S}+\\bm{L}-\\bm{L}_{0}]_{ij}) & (i,j)\\in\\Omega \\cr\n\\mathcal{T}_{\\zeta_{0}} ([\\bm{L}-\\bm{L}_{0}]_{ij}) & (i,j)\\in\\Omega^c \\cr\n\\end{cases}.\n\\]\nSo (\\ref{init:eq:eigenvalues bound 1}), (\\ref{init:eq: L-L0 inf norm}) and $\\zeta_{0}=\\frac{\\mu r}{2n}|\\lambda_1|$ imply $[\\bm{S}_{0}]_{ij}=0$ for all $(i,j)\\in\\Omega^c$, i.e., $\\Omega_{0}:=supp(\\bm{S}_{0})\\subset \\Omega$. Also, for any entry of $\\bm{S}-\\bm{S}_{0}$, we have\n\\[\n[\\bm{S}-\\bm{S}_{0}]_{ij} =\n\\begin{cases}\n0 & \\cr\n[\\bm{L}_{0}-\\bm{L}]_{ij} & \\cr\n[\\bm{S}]_{ij} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 & \\cr\n\\|\\bm{L}-\\bm{L}_{0}\\|_\\infty & \\cr\n\\|\\bm{L}-\\bm{L}_{0}\\|_\\infty +\\zeta_{0} & \\cr\n\\end{cases} \\leq\n\\begin{cases}\n0 & (i,j)\\in \\Omega^c \\cr\n\\frac{\\mu r}{4n} \\sigma_1^L & (i,j)\\in \\Omega_0 \\cr\n\\frac{\\mu r}{n} \\sigma_1^L & (i,j)\\in \\Omega\\backslash\\Omega_0. \\cr\n\\end{cases}\n\\]\nHere the last inequality follows from (\\ref{init:eq:eigenvalues bound 1}), which implies $\\zeta_{0}=\\frac{\\mu r}{2n}|\\lambda_1|\\leq \\frac{3\\mu r}{4n}\\sigma_1^L$.\nTherefore, we have\n\\[\nsupp(\\bm{S}_{0})\\subset\\Omega\\quad \\textnormal{and}\\quad \\|\\bm{S}-\\bm{S}_{0}\\|_\\infty\\leq\\frac{\\mu r}{n}\\sigma_1^L.\n\\]\nThe proof is complete by noting \\eqref{eq:norm:L-L0} and the above results. \n\\end{proof}\n