diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdqan" "b/data_all_eng_slimpj/shuffled/split2/finalzzdqan" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdqan" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{thintro}\n\n In quantum transport, current correlation function\ncontains more information than average current\n \\cite{Bla001,Imr02,Bee0337,Naz03}.\nExperiments often measure the power spectrum,\nthe Fourier transformation of correlation function\n\\cite{Deb03203,Del09208,Bas10166801,Bas12046802,Del18041412}.\nIn general, nonequilibrium noise spectrum of transport current\nneither is asymmetric and nor satisfies\nthe detail--balance relation\n\\cite{Eng04136602,Agu001986,\nNak19134403,Ent07193308,Rot09075307,Mao21014104}.\nMoreover, mesoscopic systems with discrete energy levels\nexhibit strong Coulomb interaction\nand the contacting electrodes in general\ninduce memory effect \\cite{Bla001,Imr02,Bee0337,Naz03}.\nTheoretical methods that are practical to general mesoscopic systems,\nwith Coulomb interaction and memory effects on quantum transport,\nwould be needed.\n\n As real--time dynamics is concerned,\nthe quantum master equation approach is the most popular.\nJin--Zheng--Yan established the exact fermionic\nhierarchical equation of motion approach \\cite{Jin08234703}.\nThis nonperturbative\ntheory has been widely used in studying nanostructures\nwith strong correlations including the Kondo problems\n\\cite{Zhe09164708,Li12266403,Wan13035129,Che15033009}.\nRecently, Yan's group further developed the dissipaton equation of motion (DEOM)\ntheory \\cite{Yan14054105,Jin15234108,Yan16110306,Jin20235144}.\nThe underlying algebra addresses the hybrid bath dynamics.\nThe current correlation function can now be evaluated,\neven in the Kondo regime \\cite{Jin15234108,Mao21014104}.\nNote also that\nZhang's group established an exact fermionic master equation,\nbut only for noninteracting systems \\cite{Tu08235311,Jin10083013,Yan14115411}.\n\n\n\n\n\nIn this work, we extend the convention\ntime-nonlocal master equation (TNL-ME)\nto cover an efficient evaluation of\ntransport current noise spectrum.\nThe key step is to identify the underlying\ncurrent related density operators.\nThis converts TNL-ME\ninto a set of three time-local equation-of-motion (TL-EOM) formalism.\nThe latter has the advantage in such as the initial value problems\nand nonequilibrium non-Markovian correlation functions\n \\cite{Jin16083038}.\nThe underlying algebra here is closely related to the DEOM theory\n\\cite{Yan14054105,Jin15234108,Yan16110306,Jin20235144}.\nTL-EOM provides not only real-time dynamics,\n but also analytical formulae for both transport current and noise spectrum.\n\n\n\n The remainder of this paper is organized as follows.\nIn \\Sec{thNMK-ME}, we introduce the transport model and\n the energy-dispersed TL-EOM formalism.\nIn \\Sec{thcurr}, combining the TL-EOM and\ndissipaton decomposition technique,\nwe present an efficient method for calculating the current noise spectrum.\nThe time-dependent current formula is first given in \\Sec{thsubcurr}.\n We then derive the current correlation function\n and straightforwardly obtain the analytical formula of noise\n spectrum in \\Sec{thsubcurrcf}\n and \\Sec{thsubnoise}, respectively.\n\nThe detail derivation is given in \\App{thappsw}.\nWe further give discussions and remarks on the resulting noise spectrum formula\nin \\Sec{thRemarks}.\nFor illustration, we apply the present method to demonstrate the 
quantum noise spectra\nof the transport through interacting double quantum dots in \\Sec{thnum}.\n The numerical results are further compared with\nthe accurate ones based on DEOM theory.\nFinally, we conclude this work with \\Sec{thsum}.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Non-Markovian master equation formalisms}\n\n\\label{thNMK-ME}\n\n\\subsection{Model Hamiltonian}\n\nConsider the electron transport through the central nanostructure system\n contacted by the two electrode reservoirs (left $\\alpha = {\\rm L}$ and\n right $\\alpha = {\\rm R}$).\nThe total Hamiltonian reads\n\\begin{align}\\label{Htot0}\nH_{\\T}\\!=\\!H_{\\tS}\\!+\\!\\sum_{\\alpha k}\\varepsilon_{\\alpha k}\n c^{\\dg}_{\\alpha k} c_{\\alpha k}\\!+\\!\\sum_{\\alpha k}\\!\\left(t_{\\alpha u k}a^\\dg_u\n c_{\\alpha k} \\!+\\!{\\rm H.c.}\\right).\n\\end{align}\nThe system Hamiltonian $H_{\\tS}$ includes electron-electron interaction,\ngiven in terms of local electron creation $ a^{\\dg}_{u}$\n(annihilation $ a_{u}$) operators of the spin-orbit state $u$.\n The second term describes the two electrodes ($H_{\\B}$) modeled by the\n the reservoirs of noninteracting electrons\nand $c^{\\dg}_{\\alpha k}$ ($c_{\\alpha k}$) denotes the creation (annihilation) operator\nof electron in the $\\alpha$-reservoir with momentum $k$ and\nenergy $ \\varepsilon_{\\alpha k}$.\nThe last term is the standard tunneling Hamiltonian between the system and the electrodes\nwith the tunneling coefficient $t_{\\alpha u k}$.\nThroughout this work, we adopt the unit of $e=\\hbar=1$.\n\n\n\nFor convenience, we reexpress the tunneling Hamiltonian as\n\\be\\label{Hsb1}\n H'\\!=\\!\\sum_{\\alpha u }\\left( a^{+}_{u} F^-_{\\alpha u}\n + F^+_{\\alpha u} a^{-}_{u} \\right)\\!=\\!\\sum_{\\alpha u \\sigma} a^{\\bar\\sigma}_{u}\n \\wti F^{\\sigma}_{\\alpha u},\n\\ee\nwhere $ F^-_{\\alpha u}\n=\\sum_k t_{\\alpha u k} c_{\\alpha k}=( F^+_{\\alpha u})^\\dg$\nand\n $\\wti F^{\\sigma}_{\\alpha u} \\equiv \\bar\\sigma F^{\\sigma}_{\\alpha u}$,\nwith $\\sigma =+,-$ ($\\bar\\sigma$ is the opposite sign to $\\sigma$).\nAs well-known, the effect of the reservoirs on the transient dynamics of the\ncentral system is characterized\nby the bath correlation function,\n \\begin{align}\\label{ct}\n C^{(\\sigma)}_{\\alpha uv} (t )\n & \\!=\\! 
\\la F^\\sigma_{\\alpha u} (t) F^{\\bar\\sigma}_{\\alpha v} (0) \\ra_{\\B}\n \\!= \\!\\int_{-\\infty}^{\\infty}\\!\\!\\frac{{\\mathrm d} E}{2\\pi}\\,e^{\\sigma i E t}\n \\Gamma^{\\sigma}_{\\alpha uv}(E),\n \\end{align}\n where $\\la \\cdots\\ra_{\\B}$ stands for the statistical average\nover the bath (electron reservoirs) in thermal equilibrium.\n \n The second identity in \\Eq{ct} arises from the\n the bath correlation function related to\n the hybridization spectral density\n $J_{\\alpha u v}(E)\n\\equiv2\\pi\\sum_k t_{\\alpha u k}t^\\ast_{\\alpha v k}\\delta(E-\\varepsilon_{\\alpha k})\n=J^\\ast_{\\alpha vu}(E)$.\nHere, we introduced\n\\be\\label{cw-real}\n\\Gamma^{\\sigma}_{\\alpha uv}(E)\\equiv n^{\\sigma}_\\alpha(E) J^{\\sigma}_{\\alpha uv}(E),\n\\ee\nwith $J^+_{\\alpha vu}(E) = J^-_{\\alpha uv}(E) = J_{\\alpha uv}(E)$,\n$n^{+}_\\alpha(E)$ the Fermi distribution\nfunction of\n$\\alpha$-reservoir and $n^{-}_\\alpha(E)=1-n^{+}_\\alpha(E)$.\n\n\n\nFor later use, we introduce the dissipaton decomposition for the hybridizing bath \\cite{Yan14054105}\nin the energy-domain,\n\\begin{align}\\label{dissp}\n \\wti F^{\\sigma}_{\\alpha u} \\equiv \\bar\\sigma F^{\\sigma}_{\\alpha u}\n \\equiv \\!\\int_{-\\infty}^{\\infty}\\!\\!\\frac{{\\mathrm d} E}{2\\pi}\n f^{\\sigma}_{\\alpha u}(E).\n\\end{align}\nThe so-called dissipatons \\{$f^{\\sigma}_{\\alpha u}(E)$ \\} satisfy\n\\begin{align*}\\label{all_notation}\n\\la f^{\\sigma}_{\\alpha u }(E,t)f^{\\bar\\sigma}_{\\beta v }(E',0)\\ra\n = - \\delta_{\\alpha\\beta}\\delta(E-E')\n e^{\\sigma i E t}\\Gamma^\\sigma_{\\alpha u v}(E).\n\\end{align*}\n It is easy to verify that the above decomposition preserves\nthe bath correlation function given by \\Eq{ct}.\n\n\n\n\n\n\n\n\n\\subsection{TNL-ME and TL-EOM}\n\\label{thnmkme}\n\nLet us outline the\nTNL-ME and the equivalent energy-dispersed TL-EOM for weak system-reservoir coupling.\nIt is well-known that the primary central system\n is described by the reduced density operator, $\\rho(t)\\equiv{\\rm tr}_{\\B}[\\rho_{\\T }(t)]$,\ni.e., the partial trace of the total density operator $\\rho_{\\T}$ over the bath\nspace. 
The corresponding\ndynamics is determined by the TNL-ME,\n $\\dot\\rho(t) = -i[H_{\\tS},\\rho(t)]\n - \\int_{t_0}^t\\!\\!{\\mathrm d}\\tau\n \\Sigma(t-\\tau)\\rho(\\tau)$.\n It can describe the non-Markovian dynamics for\nthe self-energy $\\Sigma(t-\\tau)$ containing the memory effect.\nAssuming weak system-bath coupling and performing Born but without\nMarkovian approximation,\nthe self-energy for the expansion up to second-order\nof the tunneling Hamiltonian is expressed as\n$\\Sigma(t-\\tau)=\\big\\la{\\cal L}'(t) e^{-i{\\cal L}_{\\tS}(t-\\tau)}{\\cal L}'(\\tau)\n \\big\\ra_{\\B}$\nin the $H_{\\B}$-interaction picture.\nThe resulted TNL-ME is explicitly given by:\n\\be\\label{TNL-ME}\n\\dot\\rho(t)= -i{\\cal L}_{\\tS}\\rho(t) -i\\sum_{\\alpha u\\sigma}\n \\big[a^{\\bar\\sigma}_u, \\varrho^{\\sigma}_{\\alpha u}(t)\\big],\n \\ee\n\nwith ${\\cal L}_{\\tS} \\hat O=[H_{\\tS},\\hat O]$ and\n\\be\\label{TNL-ME1}\n \\varrho^{\\sigma}_{\\alpha u}(t)\n= -i\\int_{t_0}^t\\!\\!{\\mathrm d}\\tau\\, e^{-i{\\cal L}_{\\tS} (t-\\tau)}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau),\n\\ee\nwhere\n\\be\\label{calCt}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(t) \\hat O\n \\equiv \\sum_{v} \\big[C^{(\\sigma)}_{\\alpha uv}(t)a^\\sigma_{v}\\hat O\n - C^{(\\bar\\sigma)\\ast}_{\\alpha uv}(t)\\hat O a^{\\sigma}_v\\big].\n \\ee\nThis depends on the bath correlation function, \\Eq{ct}.\nIn \\Eq{TNL-ME}, the first term describes the intrinsic coherent dynamics\nand the second term\ndepicts the non-Markovian dissipative effect of the coupled reservoirs.\n\n Let $\\rho(t)\\equiv \\Pi(t,t_0)\\rho(t_0)$ be\nthe formal solution to \\Eq{TNL-ME}.\nNote that $\\Pi(t,t_0)\\neq\\Pi(t,t_1)\\Pi(t_1,t_0)$.\nIn other words, the conventional quantum regression theorem\nis not directly applicable for\nthe calculation of the correlation functions.\n Alternatively, with the introduction of\n${\\bm\\rho}(t)\n\\!\\equiv\\!\\left[\\rho(t),\\rho^{\\pm}_{\\alpha u}(E,t) \\right]^T$,\n the TNL-ME (\\ref{TNL-ME}), with \\Eq{TNL-ME1},\n can be converted to TL-EOM \\cite{Jin16083038}\n \\bsube\\label{TL-EOM}\n \\begin{align}\n \\dot\\rho(t)\n &=\\!-i{\\cal L}_{\\tS}\\rho(t)-\\!i\\!\\sum_{\\alpha u\\sigma}\\!\\int\\! \\frac{{\\rm d}E}{2\\pi}\n \\big[ a^{\\bar\\sigma}_u,\\rho^{\\sigma}_{\\alpha u}(E,t)\\big],\n\\label{rho0t}\n\\\\\n\\dot\\rho^{\\sigma}_{\\alpha u}(E,t)\n&=\\!-i({\\cal L}_{\\tS}\\!-\\!\\sigma E)\\rho^{\\sigma}_{\\alpha u}(E,t)\n -i{\\cal C}^{(\\sigma)}_{\\alpha u}(E) \\rho(t),\n \\label{rho1t}\n \\end{align}\n\\esube\nwhere\n ${\\cal C}^{(\\sigma)}_{\\alpha u}(E)=\\int\\! 
{\\rm d}t\\, e^{-\\sigma iEt} {\\cal C}^{(\\sigma)}_{\\alpha u}(t)$;\n cf.\\,\\Eq{calCt},\n \\be\\label{calCw}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(E) \\hat O\\equiv\\sum_v \\left[\\Gamma^{(\\sigma)}_{\\alpha u v}(E)\n a^{\\sigma}_v \\hat O-\\hat O \\Gamma^{(\\bar\\sigma)\\ast}_{\\alpha u v}(E)a^{\\sigma}_v\\right].\n \\ee\nAs implied in \\Eq{TNL-ME1}, we have\n \\begin{align}\\label{varrho-phi}\n \\varrho^{\\sigma}_{\\alpha u}(t) = \\int\\frac{{\\rm d}E}{2\\pi}\\rho^{\\sigma}_{\\alpha u}(E,t).\n \\end{align}\n\n\n Equation (\\ref{TL-EOM}) can be summarized as\n$\\dot {\\bm\\rho}(t)={\\bf{\\Lambda}}\\bm{\\rho}(t)$\nwhich leads to the solution of ${\\bm\\rho}(t)=\\bm\\Pi(t,t_0)\\bm{\\rho}(t_0)$ with\n$\\bm\\Pi(t,t_0)=e^{{\\bm\\Lambda}(t-t_0)}$.\nThe TL-EOM space propagator satisfies\nthe time translation invariance, i.e., $\\bm\\Pi(t,t_0)=\n\\bm\\Pi(t,\\tau)\\bm\\Pi(\\tau,t_0)$.\nIn other words, the TL-EOM \\Eq{TL-EOM} is a mathematical\nisomorphism of the conventional ``Schr\\\"{o}dinger equation''\nand applicable to any physically supported initial state $\\rho_{\\T}(t_0)$.\nIn particular, the total system-plus-bath composite density operator $\\rho_{\\T}(t)$ maps to\n ${\\bm{\\rho}}(t)$, including the nonequilibrium steady state mapping,\n $\\rho^{\\rm st}_{\\T} \\rightarrow {\\bm{\\rho}}^{\\rm st}$.\nThis protocol can be extended to system correlation functions and\ncurrent correlation functions.\nThis is the advantage of TL-EOM (\\ref{TL-EOM}) over TNL-ME (\\ref{TNL-ME}).\nThe details are as follows.\n\n\n\n\n\n\n\\section{Current and noise spectrum}\n\\label{thcurr}\n\n\\subsection{The current formula}\n\\label{thsubcurr}\n\nFirst, we identify\n$\\rho^{\\sigma}_{\\alpha u}( E,t)$ in \\Eq{TL-EOM}\nbeing the current-related density operator.\nBy the definition, the lead-specified current operator is\n$\\hat I_{\\alpha}=-{\\rm d}\\hat N_{\\alpha}\/{\\rm d}t=-i[\\hat N_{\\alpha},H']$,\nwith $\\hat N_{\\alpha}\\equiv\\sum_k c^\\dg_{\\alpha k}c_{\\alpha k}$\nbeing the number operator.\nThe tunneling Hamiltonian $H'$ is given by \\Eq{Hsb1} with \\Eq{dissp}.\nWe immediately obtain\n\\begin{align}\\label{currI_hat}\n \\hat I_{\\alpha}\n&= -i \\sum_{\\sigma u} \\ti a^{\\sigma}_{ u}\n {\\wti F}^{\\bar\\sigma}_{\\alpha u}\n =-i\\sum_{\\sigma u}\\int\\! \\frac{{\\mathrm d} E}{2\\pi} \\ti a^{\\sigma}_u\n f^{\\bar\\sigma}_{\\alpha u}( E),\n\\end{align}\nwhere $\\ti a^{ \\sigma}_{ u}\\equiv \\sigma a^{\\sigma}_{ u}$.\nThe average current reads\n\\begin{align}\n I_{\\alpha}(t)\n &\\!=\\!{\\rm Tr}[\\hat I_{\\alpha}\\rho_{\\T}(t)]\n \\!=\\!-i\\!\\sum_{\\sigma u}\\! \\int\\! \\frac{{\\mathrm d} E}{2\\pi}{\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}\\rho^{\\bar\\sigma}_{\\alpha u }( E,t)],\n\\label{curr-ddo}\n\\end{align}\nwhere\n\\begin{align}\n\\rho^{\\sigma}_{\\alpha u}( E,t)\n &\\equiv\n {\\rm tr}_{\\B}\\big[f^{\\sigma}_{\\alpha u}( E)\\rho_{\\T}(t)\\big].\n\\label{phi1}\n\\end{align}\nOn the other hand,\nperforming the bath subspace trace (${\\rm tr}_{\\B}$) over\n$\\dot{\\rho}_{\\T}(t)=-i[H_{\\tS}+H_{\\B}+H',\\rho_{\\T}(t)]$,\n we obtain immediately \\Eq{rho0t}, where $\\rho^{\\sigma}_{\\alpha u}( E,t)$\n is the right given by \\Eq{phi1}.\n In other words,\n TL-EOM (\\ref{TL-EOM}) provides not only the real-time dynamics, but also transient current,\n\\Eq{curr-ddo}, with \\Eqs{varrho-phi} and (\\ref{TNL-ME1}),\n\\be\\label{curr-exp}\n I_{\\alpha}(t)\n=-\\!\\sum_{\\sigma u} \\!\\!\\int_{t_0}^t\\!\\! 
{\\mathrm d}\\tau\\, {\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}e^{-i{\\cal L}_{\\tS} (t-\\tau)}{\\cal C}^{(\\bar\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau) ].\n\\ee\nHere, we set $\\rho^{\\pm}( E,t_0\\!\\rightarrow\\!-\\infty)=0$ for the initially\n decoupled system and reservoir.\n\n\n\n\n\n\n\\subsection{Current correlation function}\n\\label{thsubcurrcf}\n\nTurn to the lead-specified steady-state current correlation function,\n\\begin{align}\\label{CorrI}\n \\la \\hat I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n &={\\rm Tr}\\big[\\hat I_{\\alpha} \\rho_{\\T}(t; {\\alpha'})\\big],\n\\end{align}\nwith\n\\be\n\\rho_{\\T}(t; {\\alpha'})= e^{-i{\\cal L}_{\\T}t} (\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T})\n\\equiv e^{-i{\\cal L}_{\\T}t} \\rho_{\\T}(0; {\\alpha'}).\n\\ee\nIts TL-EOM correspondence reads\n\\be\n\\bm\\rho (t; {\\alpha'})= e^{{\\bm\\Lambda}t}(\\hat I_{\\alpha'} \\bm\\rho^{\\rm st} )\n\\equiv e^{{\\bm\\Lambda}t}{\\bm\\rho}(0; {\\alpha'}).\n\\ee\nHere,\n$\\bm{\\rho}(t;\\alpha')\n\\!\\equiv\\!\\left[\\rho(t;\\alpha'),\\rho^{\\pm}_{\\alpha u}(E,t;\\alpha')\\right]^T$,\nwith the propagator being defined in \\Eq{TL-EOM}\nand the initial values via \\Eq{curr-ddo} being\n\\bsube\\label{vecI02}\n\\begin{align}\n\\label{rho0alpha}\n&\\rho(0;\\alpha')\n \\equiv {\\rm tr}_{\\B} \\big(\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T}\\big)\n=-i\\!\\sum_{\\sigma u}\\! \\int\\! \\frac{{\\rm d} E}{2\\pi}\n \\ti a^{\\bar\\sigma}_{ u}\\bar\\rho^{\\sigma}_{\\alpha' u }(E) ,\n\\\\\n&\\rho^{\\sigma}_{\\alpha u}(E,0;\\alpha')\n\\equiv{\\rm tr}_{\\B}\\big[f^{\\sigma}_{\\alpha u}( E)\n\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T}\\big]\n\\nl&\\hspace{5.5 em}\n= -i\\delta_{\\alpha\\alpha'}\\!\\sum_v \\Gamma^{\\sigma}_{\\alpha u v}(E)\n \\ti a^{\\sigma}_{ v}\\bar\\rho,\n\\label{phiI0}\n\\end{align}\n\\esube\nwhere $\\bar\\rho\\equiv \\rho^{\\rm st}$ and\n$\\bar\\rho^{\\sigma}_{\\alpha' u }(E)\n\\equiv [\\rho^{\\sigma}_{\\alpha' u }(E)]^{\\rm st}$.\nWe can then evaluate \\Eq{CorrI} as\n\\begin{align}\\label{CorrI1}\n \\la \\hat I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n&=-i\\sum_{\\sigma u}\\!\\int\\! \\frac{{\\mathrm d} E}{2\\pi}{\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}\\rho^{\\bar\\sigma}_{\\alpha u }( E,t;\\alpha')].\n\\end{align}\n\n\n\\subsection{Quantum noise spectrum}\n\\label{thsubnoise}\n\nThe lead--specified shot noise spectrum is given by\n\\be\\label{Sw0}\n S_{\\alpha\\alpha'}(\\omega)=\\int_{-\\infty}^{\\infty}\\!\\!{\\rm d}t\\,\n e^{i\\omega t} \\La \\delta{\\hat I}_\\alpha(t)\n \\delta{\\hat I}_{\\alpha'}(0)\\Ra,\n\\ee\nwith $\\delta{\\hat I}_\\alpha(t)\\equiv{\\hat I}_\\alpha(t)-I^{\\rm st}_{\\alpha}$;\ni.e.,\n\\be\\label{corr-curr}\n \\La \\delta{\\hat I}_\\alpha(t)\\delta{\\hat I}_{\\alpha'}(0)\\Ra\n=\\La {\\hat I}_\\alpha(t){\\hat I}_{\\alpha'}(0)\\Ra\n -I^{\\rm st}_{\\alpha}I^{\\rm st}_{\\alpha'}.\n\\ee\nThe steady--state current,\n$I^{\\rm st}_{\\alpha}\\equiv {\\rm Tr}(\\hat I_{\\alpha}\\bar\\rho_{\\T})$,\nsatisfies\n$I^{\\rm st}_{\\rm L}=-I^{\\rm st}_{\\rm R}$.\n To proceed, we apply the initial values, \\Eq{vecI02},\nand express \\Eq{CorrI1} in terms of\n\\begin{align}\n &\\la \\hat I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n=\\delta_{\\alpha\\alpha'}\\!\\sum_{\\sigma u v}\\!\n {\\rm tr}_{\\rm s}[a^{\\sigma}_{ u}e^{-i{\\cal L}_{\\tS} t}\n C^{(\\bar\\sigma)}_{\\alpha uv}(t)a^{\\bar\\sigma}_{v} \\bar\\rho]\n\\nl&\\quad\n -\\sum_{\\sigma u} \\!\\int_{t_0}^t\\! 
{\\mathrm d}\\tau\\,\n {\\rm tr}_{\\rm s}[\\ti a^{\\sigma}_{ u} e^{-i{\\cal L}_{\\tS} (t-\\tau)}\n {\\cal C}^{(\\bar\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau;\\alpha') ].\n\\label{curr-curr}\n\\end{align}\nAs detailed in Appendix,\nthe first term describes the contribution from \\Eq{phiI0},\nthe second term involves $\\rho(\\tau;\\alpha')$,\nwith the initial value of \\Eq{rho0alpha}.\n\n\n\n To resolve $\\rho(\\tau;\\alpha')$,\none can exploit either TNL-ME (\\ref{TNL-ME})\nor TL-EOM (\\ref{TL-EOM}).\nThe related resolvent reads\n\\be\n \\Pi(\\omega)=[i({\\cal L}_{\\tS}-\\omega)+\\Sigma(\\omega)]^{-1},\n\\ee\nwith\n $\\Sigma(\\omega)=\\sum_{\\alpha}\n\\big[{\\cal J}^{<}_{\\alpha}(\\omega)-{\\cal J}^{>}_{\\alpha}(\\omega)\\big]$,\n\\bsube\\label{caljomega}\n \\begin{align}\n{\\cal J}^{>}_{\\alpha}(\\omega)\\hat O&\\equiv\\!-\\!\\sum_{\\sigma u}\\ti a^{\\bar\\sigma}_{ u}\n\\big[{\\cal C}^{(\\sigma)}_{\\alpha u}(\\omega-{\\cal L}_{\\tS})\\hat O\\big],\n\\\\\n{\\cal J}^{<}_{\\alpha}(\\omega)\\hat O&\\equiv\\!-\\!\\sum_{\\sigma u}\n\\big[{\\cal C}^{( \\sigma)}_{\\alpha u}(\\omega-{\\cal L}_{\\tS})\\hat O\\big]\\ti a^{\\bar\\sigma}_{ u},\n\\end{align}\n\\esube\nwhere [cf.\\,\\Eq{calCt}]\n\\begin{align}\\label{calCw}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(\\omega) \\hat O\n \\!=\\!\\! \\sum_{v}\n \\! \\big[C^{(\\sigma)}_{\\alpha uv}(\\omega)(a^\\sigma_{v}\\!\\hat O)\n \\!- C^{(\\bar\\sigma)\\ast}_{\\alpha uv}( -\\omega)(\\hat O a^{\\sigma}_v)\\big],\n \\end{align}\nand $C^{(\\sigma)}_{\\alpha u v}( \\omega)\\equiv\n\\int_{0}^{\\infty}\\!dt\\\ne^{i\\omega t}C^{(\\sigma)}_{\\alpha u v}(t)$.\nDenote further\n\\bsube\\label{calwomega}\n \\begin{align}\n{\\cal W}^{>}_{\\alpha}(\\omega)\\hat O&\\equiv\\sum_{ \\sigma uv}\n\\big[ \\ti a^{\\bar\\sigma}_{u},C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n (a^\\sigma_{v}\\hat O) \\big],\n\\\\\n{\\cal W}^{<}_{\\alpha}(\\omega)\\hat O&\\equiv\n \\sum_{ \\sigma uv} \\big[ \\ti a^{\\bar\\sigma}_{u},C^{(\\bar\\sigma)\\ast}_{\\alpha uv }({\\cal L}_{\\tS}-\\omega)\n (\\hat O a^{\\sigma}_{v})\\big].\n\\end{align}\n\\esube\nNote that\n\\bsube\\label{caljw}\n\\begin{align}\n \\big[{\\cal W}^{<}_{\\alpha}(\\omega)\\hat O\\big]^\\dg\n&={\\cal W}^{>}_{\\alpha}(-\\omega)\\hat O^\\dg,\n\\\\\n \\big[{\\cal J}^{<}_{\\alpha}(\\omega)\\hat O\\big]^\\dg\n&={\\cal J}^{>}_{\\alpha}(-\\omega)\\hat O^\\dg.\n\\end{align}\n\\esube\nMoreover,\nwe have $I^{\\rm st}_{\\alpha}\n={\\rm tr}_{\\tS}\\big[{\\cal J}^{>}_{\\alpha}(0)\\bar{\\rho}\\big]\n={\\rm tr}_{\\tS}\\big[{\\cal J}^{<}_{\\alpha}(0)\\bar{\\rho}\\big]$,\nfor the steady current,\nas inferred from \\Eq{curr-exp}.\n\n\n\nFinally, we obtain \\Eq{Sw0},\nwith \\Eqs{corr-curr} and (\\ref{curr-curr}),\nthe expression\n(see Appendix for the derivations),\n\\begin{align}\\label{Sw}\nS_{\\alpha\\alpha'}(\\omega)\n &= {\\rm tr}_{\\rm s}\\Big\\{{\\cal J}^{>}_{\\alpha}(\\omega)\n \\Pi(\\omega)\\big[{\\cal J}^{>}_{\\alpha'}(0)\n + {\\cal W}^{>}_{\\alpha'}(\\omega)\\big]\\bar\\rho\n\\nl&\\qquad\n+{\\cal J}^{<}_{\\alpha'}(-\\omega)\\Pi(-\\omega)\\big[{\\cal J}^{<}_{\\alpha}(0) +\n {\\cal W}^{<}_{\\alpha}(-\\omega) \\big]\\bar\\rho\\Big\\}\n\\nl&\\quad\n +2\\delta_{\\alpha'\\alpha}{\\rm Re}\\! 
\\sum_{\\sigma u v}\n {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n a^{\\sigma}_{v}{\\bar\\rho}\n \\big].\n \\end{align}\nThis is the key result of this paper,\nwith $\\omega>0$ and $<0$\ncorresponding to energy\nabsorption and emission processes,\nrespectively \\cite{Eng04136602,Agu001986,Jin15234108,Nak19134403}.\n\n\n\n\n\n\n\n\n\\subsection{Discussions and remarks}\n\\label{thRemarks}\n\n\n\n\n\nIn mesoscopic quantum transport,\nthe charge conservation is about\n$-\\dot{Q}(t)=I_{\\rm L}(t)+I_{\\rm R}(t)\\equiv I_{\\rm dis}(t)$,\nwith the displacement current arising from the change of the charge $Q(t)$\nin the central system. The corresponding\nfluctuation spectrum,\n$S_{\\rm c}(\\omega)=\\int_{-\\infty}^{\\infty} \\!dt\\,\n e^{i\\omega t} \\La \\delta{\\dot{Q}(t)} \\delta{\\dot{Q}(0)}\\Ra$,\n can then be evaluated via \\cite{Jin11053704}\n\\begin{align}\\label{Scw}\nS_{\\rm c}(\\omega)&=S_{\\rm LL}(\\omega)+S_{\\rm RR}(\\omega)+2{\\rm Re}[S_{\\rm LR}(\\omega)].\n\\end{align}\n For auto-correlation noise spectrum,\n\\Eq{Sw} with $\\alpha'=\\alpha$,\nwe have [\\cf\\Eq{caljw}]\n\\begin{align}\\label{Sw-auto}\nS_{\\alpha\\alpha}(\\omega)\n &=2\\,{\\rm Re}\\,{\\rm tr}_{\\rm s}\\big\\{{\\cal J}^{>}_{\\alpha}(\\omega) \\Pi(\\omega)\\big[{\\cal J}^{>}_{\\alpha}(0)\n + {\\cal W}^{>}_{\\alpha}(\\omega)\\big]\\bar\\rho\\big\\}\n\\nl&\\quad\n +2\\,{\\rm Re}\\sum_{\\sigma u v}\n {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n a^{\\sigma}_{v}{\\bar\\rho}\\big].\n \\end{align}\n \nAlternatively, $S_{\\rm c}(\\omega)$ can also be calculated via $S_{\\rm c}(\\omega)=e^2\\omega^2 S_{\\rm N}(\\omega)$,\nwhere $S_{\\rm N}(\\omega)\\equiv {\\cal F}[\\delta \\hat N(t)\\delta \\hat N(0)]$,\nwith $\\hat N =\\sum_u a^\\dg_u a_u$. The spectrum of the charge fluctuation $S_{\\rm N}(\\omega)$\ncan be evaluated straightforwardly by the established formula for the non-Markovian\n correlation function of the system operators in our previous work \\cite{Jin16083038}.\n\n\n\nThe total current in experiments reads\n $I(t) =a I_{\\rm L}(t)- bI_{\\rm R} (t)$,\n with the junction capacitance parameters ($a,b\\geq0$) satisfying $a+b=1$\n \\cite{Bla001,Wan99398,Mar10123009}.\nIn wide-band limit, $a=\\frac{\\Gamma_{\\rm R}}{\\Gamma_{\\rm L}+\\Gamma_{\\rm R}}$\n and $b=\\frac{\\Gamma_{\\rm L}}{\\Gamma_{\\rm L}+\\Gamma_{\\rm R}}$ \\cite{Wan99398}.\nThe total current noise spectrum\ncan be calculated via either\n \\be\\label{Swtotal}\n S(\\omega) = a^2S_\\text{LL}(\\omega)+b^2S_\\text{RR}(\\omega)\n-2ab\\,{\\rm Re}[S_\\text{LR}(\\omega)],\n\\ee\nor\n\\be\\label{Swtotal2}\n S(\\omega) = aS_\\text{LL}(\\omega)+bS_\\text{RR}(\\omega)\n-ab\\,S_{\\rm c}(\\omega).\n\\ee\n\n\n\nAs known, the present method is a second-order theory and applicable for weak\nsystem-reservoir coupling, i.e., $\\Gamma\\lesssim k_{\\rm B}T$. 
This describes the\nelectron sequential tunneling (ST) processes.\nThe resulting noise formula, \\Eq{Sw},\nis in principle similar to that obtained in Ref.\\,\\onlinecite{Eng04136602}.\nThe main advantage of \\Eq{Sw}\nis that the involved superoperators are well defined in \\Eq{caljomega}\nand \\Eq{calwomega}.\nOne only needs matrix operations, in which\n the Liouville operator ${\\cal L}_{\\tS}$ is transformed into energy differences\n in the eigenstate basis $\\{|n\\ra\\}$ ($H_{\\tS}|n\\ra=\\varepsilon_n|n\\ra$), e.g.,\n $\\la n|f({\\cal L}_{\\tS})\\hat Q|m\\ra=f(\\varepsilon_n-\\varepsilon_m)Q_{nm}$.\n\n\nIn \\Eq{Sw},\nthe memory effect enters through\nthe frequency dependence in the last term and also in ${\\cal J}^{\\lgter}_{\\alpha}(\\omega)$\nand ${\\cal W}^{\\lgter}_{\\alpha}(\\omega)$. In the Markovian limit, \\Eq{Sw} reduces to\n\\begin{align}\\label{Swmk}\nS^{\\rm Mar}_{\\alpha\\alpha'}(\\omega)\n &= {\\rm tr}_{\\rm s}\\Big\\{{\\cal J}^{>}_{\\alpha}(0)\n   \\Pi_0(\\omega)\\big[{\\cal J}^{>}_{\\alpha'}(0)\n + {\\cal W}^{>}_{\\alpha'}(0)\\big]\\bar\\rho\n\\nl&\\qquad\n+{\\cal J}^{<}_{\\alpha'}(0)\\Pi_0(-\\omega)\\big[{\\cal J}^{<}_{\\alpha}(0) +\n {\\cal W}^{<}_{\\alpha}(0) \\big]\\bar\\rho\\Big\\}\n\\nl&\\quad\n +2\\delta_{\\alpha'\\alpha}{\\rm Re}\\! \\sum_{\\sigma u v}\n  {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(-{\\cal L}_{\\tS})\n  a^{\\sigma}_{v}{\\bar\\rho}\n \\big],\n \\end{align}\nwhere $\\Pi_0(\\omega)=[i({\\cal L}_{\\tS}-\\omega)+\\Sigma(0)]^{-1}$\nwith $\\Sigma(0)=\\sum_{\\alpha}\n\\big[{\\cal J}^{<}_{\\alpha}(0)-{\\cal J}^{>}_{\\alpha}(0)\\big]$.\nThe involved superoperators were defined in \\Eq{caljomega}\n and \\Eq{calwomega}.\nThe widely studied Markovian problems \\cite{Xu02023807,Li04085315,Li05205304,Li05066803,Luo07085325,Mar10123009}\nalso adopted the Redfield approximation, which\nneglects the bath dispersion $\\Lambda^{(\\pm)}_{\\alpha uv}(\\omega)$\nin \\Eq{appcw} (the imaginary part of $C^{(\\pm)}_{\\alpha uv}(\\omega)$).\n One can then easily check that\n${\\rm Re}[S^{\\rm Mar}_{\\alpha\\alpha'}(\\omega)]={\\rm Re}[S^{\\rm Mar}_{\\alpha'\\alpha}(-\\omega)]$ with $\\alpha\\neq\\alpha'$\nand $S^{\\rm Mar}_{\\alpha\\alpha}(\\omega)=S^{\\rm Mar}_{\\alpha\\alpha}(-\\omega)$ based on \\Eq{Swmk}.\nIn other words, Markovian transport corresponds to\nthe symmetrized spectrum.\n\n\n\n\n\\section{Numerical demonstrations}\n\\label{thnum}\n\n\n\nTo verify the validity of the established method,\nwe will apply it to demonstrate the quantum noise spectrum of the transport\ncurrent through interacting double quantum dots (DQDs).\n \nAll the numerical results will be further compared with exact results based on\nDEOM theory.\n\n\nThe total composite Hamiltonian of the DQDs\n contacted by the two electrodes is described by \\Eq{Htot0}.\nThe Hamiltonian for the DQDs in series is specified by\n\\be\\label{Hs-cqd}\n H_{\\tS}= \\varepsilon_{l}a^\\dg_la_l + \\varepsilon_{r}a^\\dg_ra_r\n +U \\hat n_l \\hat n_r+\\Omega\\big(a^\\dg_{l} a_{r}+a^\\dg_{r} a_{l}\\big),\n\\ee\nwhere $U$ is the inter-dot Coulomb interaction, $\\Omega$ describes the\ncoherent inter-dot electron transition,\n and $\\hat n_u=a^\\dg_u a_u$.\nThe involved states of the double dot are $|0\\ra$ for the empty double dot,\n$|l\\ra$ for the left dot occupied, $|r\\ra$\nfor the right dot occupied, and $|2\\ra\\equiv|lr\\ra$ for the two dots occupied.\nUnder the assumption of\ninfinite intra-dot Coulomb interaction and large Zeeman\nsplitting in each dot,\nwe 
consider at most one electron in each dot.\nIn this space, we have $a_{l}=|0\\ra\\la l|+|r\\ra\\la2|$\nand $a_{r}=|0\\ra\\la r|-|l\\ra\\la 2|$.\nEvidently, the single-electron occupied states\n$|l\\ra$ and $|r\\ra$ are not eigenstates of the\nsystem Hamiltonian $H_{\\tS}$.\nThe system thus exhibits an intrinsic coherent Rabi oscillation,\ngoverned by the coherent coupling strength $\\Omega$.\nThe corresponding Rabi frequency, denoted by $\\Delta$, is\nthe energy difference between the two eigenstates ($\\varepsilon_{\\pm}$),\ne.g., $\\Delta=\\varepsilon_{+}-\\varepsilon_{-}=2\\Omega$ for\nthe degenerate DQDs ($\\varepsilon_{l}=\\varepsilon_{r}=\\varepsilon_{0}$) considered here.\nThe characteristics of the Rabi coherence have been well studied in the symmetrized noise spectrum\n\\cite{Luo07085325,Agu04206601,Mar11125426,Shi16095002}.\n\n\n\nNow we apply the present TL-EOM approach\nto calculate the quantum noise spectra\nof the transport current through the DQDs.\nAs mentioned above, the TL-EOM method is suitable for weak system-reservoir\ncoupling, which appropriately\ndescribes the electron ST processes.\nWe thus consider the ST regime\nwhere the energy levels in the DQDs are within the bias\nwindow ($\\mu_{\\rm L}>\\varepsilon_{0}>\\mu_{\\rm R}$).\nWithout loss of generality,\nwe set an antisymmetric bias voltage with $\\mu_{\\rm L}=-\\mu_{\\rm R}=eV\/2$\nand set the energy level $\\varepsilon_{0}=0$.\nA wide bandwidth is considered\nby setting $W_{\\alpha}= 300\\Gamma$\nin \\Eq{jw}.\n\n We adopt the total coupling strength $\\Gamma=\\Gamma_{\\rm L}+\\Gamma_{\\rm R}$ as the unit of\n energy and focus on the symmetric coupling strength\n $\\Gamma_{\\rm L}=\\Gamma_{\\rm R}=0.5\\Gamma$ ($a=b=1\/2$)\nin this work.\n Furthermore, we test the upper limit of the system-reservoir coupling,\nwhich is comparable to the temperature ($\\Gamma\\approx k_{\\rm B}T$), by setting\n$k_{\\rm B}T=0.5\\Gamma$ here.\nDetails of the other parameters are given in the figure captions.\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth]{fig1.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra in the noninteracting case ($U=0$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm LR}(\\omega)]$.\n The other parameters (in units of $\\Gamma$) are $\\Omega=4$\n and $eV=16$.\n}\n\\label{fig1}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth,angle=0]{fig2.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra\nwith strong inter-dot Coulomb interaction ($U=18\\Gamma$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm LR}(\\omega)]$.\n The other parameters are the same as in \\Fig{fig1}. 
}\n \\label{fig2}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth,angle=0]{fig3.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra in the resonance regime\n($\\varepsilon_{\\pm}=\\pm\\Omega=\\pm8\\Gamma=\\pm eV\/2$)\nwith strong inter-dot Coulomb interaction ($U=18\\Gamma$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm LR}(\\omega)]$.\n The other parameters are the same as in \\Fig{fig1}. }\n \\label{fig3}\n\\end{figure}\n\n\nThe numerical results\nof the total and the lead-specified current noise spectra are\ndisplayed in Figs.\\,\\ref{fig1}, \\ref{fig2} and \\ref{fig3}.\nThey correspond to the noninteracting case ($U=0$), the strongly inter-dot interacting case ($U=18\\Gamma$),\nand the resonance regime ($U=18\\Gamma$ and $\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$), respectively.\nFurthermore, the evaluations are based on\nthe present TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\nEvidently, the TL-EOM method reproduces well, at least qualitatively,\nall the basic features of the quantum noise spectra in the entire frequency range.\nDetailed demonstrations are given below.\n\n\nFigure \\ref{fig1} depicts the noise spectra\nin the absence of the inter-dot Coulomb interaction ($U=0$).\nThe characteristics are as follows:\n(\\emph{i}) The well--known quasi-steps around\nthe energy resonances,\n$\\omega=\\pm\\omega_{\\alpha \\pm}\\equiv\\pm|\\varepsilon_{\\pm}-\\mu_\\alpha|$,\nemerge in the total noise spectrum $S(\\w)$, the displacement $S_{c}(\\w)$,\nand the diagonal component, exemplified by $S_{\\rm RR}(\\w)$;\nsee the arrows in \\Fig{fig1}(a)--(c).\nThis feature\narises from the non-Markovian dynamics\nof the electrons in the $\\alpha$--electrode\ntunneling into and out of the DQDs,\naccompanied by\nenergy absorption ($\\omega>0$) and emission ($\\omega<0$), respectively.\n(\\emph{ii}) In addition,\nthe Rabi resonance at $\\omega=\\pm\\Delta\\equiv \\pm (\\varepsilon_{+}-\\varepsilon_{-})$ appears\nas dips in\n$S(\\omega)$ [\\Fig{fig1}(a)] and $S_{\\rm RR}(\\omega)$ [\\Fig{fig1}(c)],\nwhereas it appears as peaks in\n${\\rm Re}[S_{\\rm LR}(\\omega)]$ [\\Fig{fig1}(d)].\nOn the other hand, in $S_{\\rm c}(\\omega)$,\nthe aforementioned dips and peaks are accidentally canceled out [see \\Fig{fig1}(b)]\nin the absence of Coulomb interaction ($U=0$).\n\n\nFigure \\ref{fig2} depicts the noise spectra in the presence of\n strong inter-dot Coulomb interaction ($U=18\\Gamma$).\n(\\emph{iii}) In contrast to \\Fig{fig1}(b),\nthe displacement current noise spectrum $S_{\\rm c}(\\omega)$\n now displays the Rabi coherence at $\\omega=\\pm\\Delta$ [see \\Fig{fig2}(b)].\nWhile the Rabi peaks are enhanced in ${\\rm Re}[S_{\\rm LR}(\\omega)]$\n [see \\Fig{fig2}(d)],\n the original Rabi dips in \\Fig{fig1}(c) become a peak--dip profile\nin $S_{\\rm RR}(\\omega)$ [see \\Fig{fig2}(c)].\n(\\emph{iv}) Moreover, the Coulomb-assisted transport channels ($\\varepsilon_{\\pm}+U$)\nproduce new non-Markovian quasi-steps around $\\omega=\\pm\n\\omega_{\\alpha {\\rm u}\\pm}\\equiv\\pm|\\varepsilon_{\\pm}+U-\\mu_\\alpha|$\nin the total, displacement, and auto-correlation current\nnoise spectra, as shown in \\Fig{fig2}(a)--(c).\n\n\n\nIn \\Fig{fig3}, we 
highlight the characteristics of the noise spectra\nin the resonance regime ($\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$)\nby increasing the coherent coupling strength $\\Omega$.\n(\\emph{v})\nCompared with \\Fig{fig2}, the Rabi signal\nin the absorption noise spectrum\nat $\\omega=\\Delta$ is remarkably enhanced, while\nthe signal in the emission spectrum at $\\omega=-\\Delta$\nis negligibly small. Similar observations have been exploited\nto isolate competing mechanisms, such as the Kondo resonance,\nin the emission noise spectrum \\cite{Bas12046802, Del18041412,Mao21014104}.\n\n\nThe above absorptive versus emissive feature can\n be understood in terms of the steady-state occupations, from the following two aspects:\n(1) Away from the energy resonance ($\\mu_{\\rm L}>\\varepsilon_{\\pm}>\\mu_{\\rm R}$),\n the probabilities of the single-electron occupied states are nearly the same,\n$\\bar\\rho_{++}\\cong\\bar\\rho_{--}$.\nThe resulting energy absorption and emission are equivalent in the noise spectrum.\n(2) In the energy resonance ($\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$) region,\nthe stationary state is very different.\nThe occupation of the lower-energy state $|-\\ra$ dominates, e.g., $\\bar\\rho_{--}\\gg\\bar\\rho_{++}$.\nThus, the Rabi feature in the absorption noise is much stronger than\nthat in the emission noise.\n\n\n\n\n\n\n\n\n\\section{Summary}\n\\label{thsum}\n\n\nIn summary, we have presented an efficient TL-EOM approach for the quantum noise\nspectrum of the transport current through interacting mesoscopic systems.\nThe established method is\nbased on the transformation of the second-order non-Markovian master equation, described by the\nTNL-ME, into the energy-dispersed\n TL-EOM formalism by introducing the current-related density operators.\nThe resulting analytical formula for the current noise spectrum\ncan characterize nonequilibrium transport including the electron-electron Coulomb interaction and\nthe memory effect.\n\n\n\n\nWe have demonstrated the proposed method for transport through an interacting quantum-dot system,\nand found good agreement with the exact results over a broad range of parameters.\nThe numerical calculations are based on both\nthe present TL-EOM method and the exact DEOM theory.\nWe find that all the basic features of the lead-specified noise spectra in the entire frequency range,\nincluding the energy-resonance and Coulomb-assisted non-Markovian\nquasi-steps and the intrinsic coherent Rabi signal,\nagree at least qualitatively with the accurate results.\nAs a perturbative theory, the present TL-EOM is applicable in the\n weak system-reservoir coupling ($\\Gamma\\lesssim k_{\\rm B}T$) regime,\n dominated by sequential tunneling processes.\n\n Other parameters, such as the bias voltage and Coulomb interaction,\nare rather flexible.\n\n\n\n\n\n\n\n\n\n\n\n\n\\acknowledgments\nWe acknowledge helpful discussions with\n X. Q. Li.\n The support from the Ministry of Science and Technology of China (No. 2021YFA1200103)\n and the Natural Science Foundation of China\n(Grant No. 11447006) is acknowledged.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nAnswering the question of precisely what distinguishes our experience with\nquantum as opposed to classical physical phenomena has historically been a\ncentral element of the overall project of interpreting quantum theory. 
For\n\\citet[]{schrodinger1935}, for instance, the sole distinguishing feature of\nquantum theory was none other than entanglement, while for Feynman the one and\nonly quantum mystery was self-interference \\citep[vol. 3,\n 1-1]{feynman1964}. The question continues to occupy many. However in much of\nthe more recent literature it has taken on a different form. That is, it has\nbecome one of specifying a set of appropriately motivated constraints or\n`principles' that serve to distinguish quantum from classical\ntheory. \\citet*[]{clifton2003}, for instance, prove a theorem which they argue\nshows quantum mechanics to be essentially characterisable in terms of a small\nnumber of information-theoretic constraints. \\citet[]{spekkens2007}, meanwhile,\nshows that features often thought of as distinctively quantum can be manifested\nin a toy classical theory to which one adds a principled restriction on the\nmaximal obtainable knowledge of a system.\\footnote{For a discussion of both\n \\citeauthor[]{clifton2003}'s and \\citeauthor[]{spekkens2007}' results, and\n of the project in general, see \\citet[]{myrvold2010}; and see also\n \\citet[]{felline2016}.}\n\nOne feature that quantum and classical theory have in common is that the\ncorrelations manifested between the subsystems of a combined system satisfy the\ncondition that the marginal probabilities associated with local experiments on a\nsubsystem are independent of which particular experiments are performed on the\nother subsystems. It is a consequence of this condition that it is impossible to\nuse either a classically correlated or entangled quantum system to signal faster\nthan light. For this reason the condition is referred to as the `no-signalling'\ncondition or principle, even though the condition is not a relativistic\nconstraint \\emph{per se}.\n\nQuantum and classical theory do not exhaust the conceivable ways in which the\nworld could be. The world could be such that neither quantum nor classical\ntheory are capable of adequately describing the correlations between subsystems\nof combined systems. In particular the world could be such that correlations\n\\emph{stronger} than quantum correlations are possible within it. In a landmark\npaper, \\citet[]{popescu1994} asked the question of whether all such correlations\nmust violate the no-signalling condition. The surprising answer to this question\nis no. As they showed, there do indeed exist conceivable correlations between\nthe subsystems of combined systems that are stronger than the strongest possible\nquantum correlations---i.e. such that they exceed the so-called `Tsirelson\nbound' \\citep[]{tsirelson1980}---and yet non-signalling.\n\n\\citeauthor[]{popescu1994}'s result raises the question of whether some\nmotivated principle or principles can be given which would pick out quantum\ntheory---or at least some restricted subset of theories which includes quantum\ntheory---from among the space of conceivable non-signalling physical theories in\nwhich correlations at or above the Tsirelson bound occur. This question has\ndeveloped into an active research program. A particularly important result\nemerging from it is that of \\citet[]{pawlowski2009}, who show that one can in\nfact derive the Tsirelson bound from a principle they call `information\ncausality', which they describe as a generalisation of no-signalling applicable\nto experimental setups in which the subsystems of a combined system\n(e.g. 
spatially separated labs) may be subluminally communicating classical\ninformation with one another. \\citeauthor[]{pawlowski2009} conjecture that\ninformation causality may be a foundational principle of nature.\n\nBelow I will argue that, suitably interpreted \\citep[][]{bub2012}, the principle\ncan be regarded as a useful and illuminating answer to the question of what the\nTsirelson bound expresses about correlations which exceed it. However I will\nargue that if one wishes to think of information causality as a fundamental\nprinciple of nature---in the sense that theories which violate the principle\nshould thereby be regarded as unphysical or in some other sense\nimpossible---then it requires more in the way of motivation than has hitherto\nbeen given.\n\nWhat has typically been appealed to previously to motivate the principle is the\nintuition that a world in which information causality is not satisfied would be\n`too simple' \\citep[p. 1101]{pawlowski2009}, or `too good to be true'\n(\\citealt[p. 180]{bub2012}, \\citealt[p. 187]{bub2016}); that it would allow one\nto ``implausibly'' access remote data \\citep[ibid.]{pawlowski2009}, and that\n``things like this should not happen'' \\citep[p. 429]{pawlowski2016}. I will\nargue below that these statements are unsatisfactorily vague. Nevertheless I\nwill argue that they gesture at something that is importantly right; although\nthey are right in, perhaps, a different sense than their authors envision.\n\nMore specifically, in contrast to \\citet[]{bub2012}, who in his otherwise\nilluminating analysis of information causality argues that it is misleadingly\ncharacterised as a generalisation of the no-signalling principle, I will argue\nthat information causality can indeed be regarded as generalising no-signalling\nin a sense. To clarify this sense I will draw on the work of\nDemopoulos,\\footnote{\\label{fn:demo}I am referring to the chapter ``Quantum\n Reality'' of Demopoulos's monograph \\nocite{demopoulosForth}\\emph{On\n Theories}, which is currently being prepared for posthumous publication.}\nwho convincingly shows that no-signalling can itself be thought of as a\ngeneralisation, appropriate for an irreducibly statistical theory such as\nquantum mechanics, of Einstein's principle of the mutually independent\nexistence of spatially distant things. Einstein regarded this principle as\nnecessary for the very possibility of `physical thought', and argued that it is\nviolated by quantum mechanics \\citep[p. 187]{howard1985}. However, suitably\ngeneralised and interpreted as a constraint on physical practice, Demopoulos\nconvincingly argues that Einstein's principle is in that sense satisfied both\nin Newtonian mechanics (despite its being an action-at-a-distance theory), and\nindeed (somewhat ironically\\footnote{Demopoulos's `judo-like' argumentative\n manoeuvre is reminiscent of Bell's \\citep[cf.][p. 41]{shimony1984}.}) that\nit is satisfied in quantum mechanics, wherein it is expressed by none other\nthan the no-signalling condition.\n\nComing back to information causality, I will then argue that it can likewise be\nthought of as a further generalisation of Einstein's principle that is\nappropriate for a theory of communication. As I will clarify, in the context of\nthe experimental setups to which the principle is applicable, a failure of\ninformation causality would imply an ambiguity in the way one distinguishes\nconceptually between the systems belonging to a sender and a receiver of\ninformation. 
This ambiguity (arguably) makes communication theory as we know it\nin the context of such setups impossible, similarly to the way in which the\nfailure of the principle of mutually independent existence (arguably) makes\nphysical theory as we know it impossible.\n\nBefore beginning let me emphasise that the general approach represented by the\ninvestigation into information causality is only one of a number of\nprinciple-theoretic approaches that one can take regarding the question of how\nto distinguish quantum from super-quantum theories. In the kind of approach\nexemplified by the investigation into information causality, one focuses on\nsets of static correlation tables associated with quantum and super-quantum\ntheories, and in particular one disregards the dynamics of (super-)quantum\nsystems. There is another family of principle-theoretic approaches to the\nquestion, however, wherein a richer framework is considered that does include\ndynamics.\\footnote{For further references, as well as an accessible\n description of one of these reconstructions of quantum theory, see\n \\citet[]{koberinski2018}.} \\citeauthor[]{popescu1994}'s seminal\n\\citeyearpar[]{popescu1994} investigation is an example of the former type of\napproach, though they themselves consider the latter, dynamical, approach to\nhave the potential for deeper insight. For my part I do not consider any\nparticular approach to be superior. Principle-theoretic approaches to the\ncharacterisation of quantum theory augment our understanding of the world by\nilluminating various aspects of it to us. Which particular aspect of the world\nis illuminated by an investigation will depend upon the particular\nquestion---and the framework which defines it---that is asked.\\footnote{Thanks\n to Giulio Chiribella for expressing something like this statement in answer\n to a question posed to him at the workshop `Contextuality: Conceptual\n Issues, Operational Signatures, and Applications', held at the Perimeter\n Institute in July, 2017.} I am highly skeptical of the idea that any one\nframework is sufficient by itself to illuminate all. Rather, these different\nframeworks of analysis should be seen as conveying to us information---in\ngeneral neither literal nor complete---regarding different aspects of one and\nthe same reality.\n\nThe rest of this paper will proceed as follows: I will introduce\nPopescu-Rohrlich (PR) correlations in \\S\\ref{sec:prcorr}. In \\S\\ref{sec:game} I\nwill introduce the `guessing game' by which the principle of information\ncausality is standardly operationally defined. The principle of information\ncausality itself will be introduced in \\S\\ref{sec:ic}, wherein I will also\ndescribe how it can be used to derive the Tsirelson bound. I will argue in that\nsection that information causality has not been sufficiently motivated to play\nthe role of a foundational principle of nature, and in the remainder of the\npaper I will consider how one might begin to provide it with such a\nmotivation. This analysis begins in \\S\\ref{sec:demopoulos} where I describe an\nargument, due to Demopoulos, to the effect that the no-signalling condition can\nbe viewed as a generalisation, appropriate to an irreducibly statistical theory,\nof Einstein's principle of mutually independent existence interpreted\nas a constraint on physical practice. 
Then in \\S\\ref{sec:howposs} I argue that\na promising route toward successfully motivating information causality is to in\nturn consider it as a further generalisation of no-signalling that is\nappropriate to a theory of communication. I describe, however, some important\nobstacles that must yet be overcome if the project of establishing information\ncausality as a foundational principle of nature is to succeed.\n\n\\section{Popescu-Rohrlich correlations}\n\\label{sec:prcorr}\n\nConsider a correlated state $\\sigma$ of two two-level\nsubsystems.\\footnote{Elements of the exposition in this and the next section\n have been adapted from \\citet[]{bub2012,bub2016} and \\citet[]{pawlowski2009}.}\nLet Alice and Bob each be given one of the subsystems, and instruct them to\ntravel to distinct distant locations. Let $p(A, B|a, b)$ be the probability that\nAlice and Bob obtain outcomes $A$ and $B$, respectively, after measuring their\nlocal subsystems with the respective settings $a$ and $b$. If $A,B \\in \\{\\pm\n1\\}$, the expectation value of the outcome of their combined measurement is\ngiven by: $$\\langle a, b \\rangle = \\sum_{i, j \\in \\{1,-1\\}} (i \\cdot j) \\cdot\np(i, j|a, b),$$ where $A = i$ and $B = j$. Less concisely, this is:\n\\begin{align*}\n\\langle a, b \\rangle & = 1 \\cdot p(1,1|a, b) - 1 \\cdot p(1,\\text{-}1|a, b) - 1\n\\cdot p(\\text{-}1,1|a, b) + 1 \\cdot p(\\text{-}1,\\text{-}1|a, b) \\\\\n& = p(\\mbox{same}|a, b) - p(\\mbox{different}|a, b).\n\\end{align*}\nSince $p(\\mbox{same}|a, b)$ + $p(\\mbox{different}|a, b)$ = 1, it follows that\n$\\langle a, b \\rangle$ + $2 \\cdot p(\\mbox{different}|a, b)$ = 1, so\nthat: $$p(\\mbox{different}|a, b) = \\frac{1 - \\langle a, b \\rangle}{2}.$$\nSimilarly, we have that $$p(\\mbox{same}|a, b) = \\frac{1 + \\langle a, b\n \\rangle}{2}.$$\n\nNow imagine that $\\sigma$ is such that the probabilities for the results of\nexperiments with settings $a, b, a', b'$, where $a'$ and $b'$ are different from\n$a$ and $b$ but arbitrary \\citep[p. 382]{popescu1994}, are:\n\\begin{align}\n \\label{eqn:prprobs}\n p(1,1|a,b) & = p(\\text{-}1,\\text{-}1|a,b) = 1\/2, \\nonumber \\\\\n p(1,1|a,b') & = p(\\text{-}1,\\text{-}1|a,b') = 1\/2, \\nonumber \\\\\n p(1,1|a',b) & = p(\\text{-}1,\\text{-}1|a',b) = 1\/2, \\nonumber \\\\\n p(1,\\text{-}1|a',b') & = p(\\text{-}1,1|a',b') = 1\/2.\n\\end{align}\nIn other words, if at least one of their settings is one of $a$ or $b$, then\nAlice's and Bob's results are guaranteed to be the same. Otherwise they are\nguaranteed to be different. These correlations are called `PR' correlations\nafter \\citet{popescu1994}.\n\nAlice's marginal probability $p(1_A|a,b)$ of obtaining the outcome 1 given\nthat she measures $a$ and Bob measures $b$ is defined as: $p(1_A,1_B|a,b)$ +\n$p(1_A,\\text{-}1_B|a,b)$. The no-signalling condition requires that her marginal\nprobability of obtaining 1 is the same irrespective of whether Bob measures $b$\nor $b'$, i.e. that $p(1_A|a,b)$ = $p(1_A|a,b')$, in which case we can write her\nmarginal probability simply as $p(1_A|a)$. 
In general, no-signalling requires\nthat\n\\begin{align}\n \\label{eqn:nosig}\n p(A|a,b) & = p(A|a,b'), & p(A|a',b) & = p(A|a',b'), \\nonumber \\\\\n p(B|a,b) & = p(B|a',b), & p(B|a, b') & = p(B|a', b').\n\\end{align}\nThe reader can verify that the PR correlations \\eqref{eqn:prprobs} satisfy\nthe no-signalling condition \\eqref{eqn:nosig}.\n\nIf we imagine trying to simulate the PR correlations \\eqref{eqn:prprobs} with\nsome bipartite general non-signalling system $\\eta$, then the probability of a\nsuccessful simulation (assuming a uniform probability distribution over the\npossible joint measurements $(a,b)$, $(a,b')$, $(a',b)$, and $(a',b')$) is given\nby:\\footnote{By a `successful simulation' I mean a single joint measurement in\n which Alice and Bob get opposite outcomes---(1,-1) or (-1,1)---if their\n settings are $(a', b')$, or the same outcome---(1,1) or (-1,-1)---otherwise.}\n\\begin{align*}\n \\frac{1}{4}\\big(p(\\mbox{same}|a,b) + p(\\mbox{same}|a,b') +\n p(\\mbox{same}|a',b) + p(\\mbox{different}|a',b')\\big) \\\\\n = \\frac{1}{4}\\Bigg(\\frac{1 + \\langle a, b \\rangle}{2} + \\frac{1 + \\langle\n a, b' \\rangle}{2} + \\frac{1 + \\langle a', b \\rangle}{2} + \\frac{1 - \\langle\n a', b' \\rangle}{2} \\Bigg) \\\\\n = \\frac{1}{2}\\Bigg(1 + \\frac{\\langle a, b \\rangle + \\langle a, b' \\rangle +\n \\langle a', b \\rangle - \\langle a', b' \\rangle}{4}\\Bigg).\n\\end{align*}\nNotice that $\\langle a, b \\rangle + \\langle a, b' \\rangle + \\langle a', b\n\\rangle - \\langle a', b' \\rangle$ is just the Clauser-Horne-Shimony-Holt (CHSH)\ncorrelation expression \\citep[]{chsh1969}. So the probability of a successful\nsimulation of the PR correlations by $\\eta$ is:\n\\begin{align}\n \\label{eqn:succsim}\n p(\\mbox{successful sim}) = \\frac{1}{2}\\Bigg(1 + \\frac{\\mbox{CHSH}}{4}\\Bigg),\n\\end{align}\nwith CHSH = 4 if $\\eta$ is itself a PR-system.\\footnote{The reader may be\n familiar with the use of the term `PR-box' to refer to systems whose\n subsystems are correlated as in \\eqref{eqn:prprobs}. I find the term `box' to\n be misleading since it conveys the idea of a spatially contiguous region\n occupied by a combined system. Bub's \\citeyearpar[]{bub2016} banana imagery is\n far less misleading in this sense. Below I will not use figurative language at\n all, but will (boringly) refer merely to such entities as `PR-systems',\n `PR-correlated systems', and so on.} As is well known, classically correlated\nsystems are bounded by $|\\mbox{CHSH}| \\leq 2$. Thus the optimum probability of\nsimulating PR correlations with a bipartite classical system is given by 1\/2(1 +\n2\/4) = 3\/4. Quantum correlations are bounded by $|\\mbox{CHSH}| \\leq 2\\sqrt 2$.\n\n\\section{Alice and Bob play a guessing game}\n\\label{sec:game}\n\nAt this point it will be convenient to change our notation. From now on I will\nrefer to the measurement settings $a$ and $a'$ as 0 and 1, respectively, and\nlikewise for $b$ and $b'$. The outcomes 1 and -1 will also be respectively\nrelabelled as 0 and 1. This will allow us to describe PR correlations more\nabstractly using the exclusive-or (alternately: modulo two addition) operator as\nfollows:\n\\begin{align}\n \\label{eqn:xorpr}\n M_1 \\oplus M_2 = m_1 \\cdot m_2\n\\end{align}\nwhere capital letters refer to measurement outcomes and small letters to\nmeasurement settings. 
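\n\nIn this notation a PR-system is straightforward to simulate numerically.\nThe following minimal Python sketch (an illustration only; the function and variable names and the\nsample size are our own arbitrary choices) draws outcome pairs that are locally uniform but satisfy\n\\eqref{eqn:xorpr} with certainty, and checks that Alice's marginal probability---and hence the\nno-signalling condition \\eqref{eqn:nosig}---is insensitive to Bob's setting:\n\\begin{verbatim}\nimport random\n\ndef pr_sample(m1, m2):\n    # One joint use of a PR-system: each local outcome is uniformly\n    # random, but the pair always satisfies M1 xor M2 = m1*m2.\n    M1 = random.randint(0, 1)\n    M2 = M1 ^ (m1 * m2)\n    return M1, M2\n\nfor m1 in (0, 1):\n    for m2 in (0, 1):\n        runs = [pr_sample(m1, m2) for _ in range(100000)]\n        assert all((M1 ^ M2) == m1 * m2 for M1, M2 in runs)\n        # Alice's marginal is ~1/2 whatever m2 is: no signalling.\n        print(m1, m2, sum(M1 for M1, _ in runs) / len(runs))\n\\end{verbatim}\nSince each local outcome is generated uniformly at random before the correlation is imposed, the\nprinted marginals come out at approximately 1\/2 for every choice of settings, which is just the\nno-signalling property of the correlations \\eqref{eqn:prprobs}.\n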
To illustrate, for a given 01-experiment (formerly\n$(a,b')$) there are two possible outcomes: 00 and 11 (formerly: (1,1) and\n(-1,-1)), and we have: $0 \\oplus 0 = 0 \\cdot 1$ and $1 \\oplus 1 = 0 \\cdot 1$,\nrespectively.\n\nNow imagine the following game. At the start of each round of the game, Alice\nand Bob receive random and independently generated bit strings $\\mathbf{a} =\na_{N-1},a_{N-2},\\dots,a_0$ and $\\mathbf{b} = b_{n-1},b_{n-2},\\dots,b_0$,\nrespectively, with $N = 2^n$. They win a round if Bob is able to guess the value\nof the $\\textbf{b}^{\\mbox{\\scriptsize th}}$ bit in Alice's list. For example,\nsuppose Alice receives the string $a_{7}a_{6}a_{5}a_{4}a_{3}a_{2}a_{1}a_{0}$,\nand Bob receives the string 110. Then Bob must guess the value of $a_{6}$. They\nwin the game if Bob is able to guess correctly over any sequence of rounds.\n\nBesides this the rules of the game are as follows. Before the game starts, Alice\nand Bob are allowed to determine a mutual strategy and to prepare and share\nnon-signalling physical resources such as classically correlated systems, or\nquantum systems in entangled states, or PR-systems, or other (bipartite) systems\nmanifesting non-signalling correlations. They then go off to distinct distant\nlocations, taking with them their portions of whatever systems were previously\nprepared. Once separated, Alice receives her bit string $\\mathbf{a}$ and Bob his\nbit string $\\mathbf{b}$. She is then allowed to send Bob one additional\nclassical bit $c$, upon receipt of which Bob must guess the value of Alice's\n$\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit.\n\nAlice and Bob can be certain to win the game if they share a number of\nPR-systems. I will illustrate the case of $N=4$, which requires three\nPR-systems (per round) labelled \\textbf{I}, \\textbf{II}, and \\textbf{III}. Upon\nreceiving the bit string $\\mathbf{a} = a_3a_2a_1a_0$, Alice measures $a_0 \\oplus\na_1$ on her part of system \\textbf{I} and gets the result $A_I$. She then\nmeasures $a_2 \\oplus a_3$ on her part of system \\textbf{II} and gets the outcome\n$A_{II}$. She then measures $(a_o \\oplus A_I) \\oplus (a_2 \\oplus A_{II})$ on her\npart of system \\textbf{III} and gets the result $A_{III}$. She finally sends $c\n= a_0 \\oplus A_I \\oplus A_{III}$ to Bob. Meanwhile, Bob, who has previously\nreceived $\\mathbf{b} = b_1b_0$, measures $b_0$ on his parts of systems\n\\textbf{I} and \\textbf{II}, and gets back the results $B_I$ and $B_{II}$. He\nalso measures $b_1$ on system \\textbf{III} with the result $B_{III}$.\n\nBob's next step depends on the value of $\\mathbf{b}$, i.e. on which of Alice's\nbits he has to guess. When $\\mathbf{b} = b_1b_0 = 00$ (i.e. when Bob must guess\nthe 0$^{\\mbox{\\scriptsize th}}$ bit) or $\\mathbf{b} = b_1b_0 = 01$ (i.e. 
when\nBob must guess the 1$^{\\mbox{\\scriptsize st}}$ bit) his guess should be:\n\\begin{align}\n \\label{eqn:guess0or1}\n c \\oplus B_{III} \\oplus B_I = a_0 \\oplus A_I \\oplus A_{III} \\oplus B_{III}\n \\oplus B_I.\n\\end{align}\nFor since $A_{III} \\oplus B_{III} = \\big((a_0 \\oplus A_I) \\oplus (a_2 \\oplus\nA_{II})\\big) \\cdot b_1$, we have:\n\\begin{align}\n \\label{eqn:b1equal0}\n & a_0 \\oplus A_I \\oplus A_{III} \\oplus B_{III} \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus A_I \\oplus b_1(a_0 \\oplus A_I) \\oplus b_1(a_2 \\oplus\n A_{II}) \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus A_I \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus b_0(a_0 \\oplus a_1).\n\\end{align}\nIf $\\mathbf{b} = 00$ then \\eqref{eqn:b1equal0} correctly yields $a_0$. If\n$\\mathbf{b} = 01$ then \\eqref{eqn:b1equal0} correctly yields $a_1$.\n\nSuppose instead that $\\mathbf{b} = 10$ or $\\mathbf{b} = 11$. In this\ncase, Bob's guess should be\n\\begin{align}\n \\label{eqn:guess2or3}\n c \\oplus B_{III} \\oplus B_{II} = a_0 \\oplus A_I \\oplus A_{III} \\oplus\n B_{III} \\oplus B_{II}.\n\\end{align}\nThis is\n\\begin{align}\n \\label{eqn:b1equal1}\n =\\mbox{ } & a_0 \\oplus A_I \\oplus b_1(a_0 \\oplus A_I) \\oplus b_1(a_2 \\oplus A_{II})\n \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & (a_0 \\oplus A_I) \\oplus (a_0 \\oplus A_I) \\oplus (a_2 \\oplus\n A_{II}) \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & a_2 \\oplus A_{II} \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & a_2 \\oplus b_0(a_2 \\oplus a_3).\n\\end{align}\nIf $\\mathbf{b} = 11$ then \\eqref{eqn:b1equal1} correctly yields $a_3$. If\n$\\mathbf{b} = 10$ then \\eqref{eqn:b1equal1} correctly yields $a_2$.\n\nIn general, given $N-1$ PR-correlated systems per round,\\footnote{These are to\n be arranged in an inverted pyramid so that the results of Alice's\n (respectively, Bob's) local measurements on the first $2^{n-1}$ PR-systems\n are used to determine the local settings for her (his) next $2^{n-2}$\n measurements, and so on, for $(n-i) \\geq 0$. Note that the cost in the number\n of PR-systems needed scales exponentially with respect to the length of\n $\\mathbf{b}$. I will return to this point later.} and a single classical bit\nper round communicated by Alice to Bob, Alice and Bob can be certain to win the\ngame for any value of $N$. In other words, given these resources and a single\nclassical bit communicated to him by Alice, Bob can access the value of any\nsingle bit from her data set, however large that data set is. This result\nfurther generalises to the case where Alice is allowed to send not just one but\n$m$ bits $c_{m-1}\\dots c_0$ to Bob in a given round, and Bob is required to\nguess an arbitrary set of $m$ bits from Alice's data set. Note that if Alice is\nnot allowed to send anything to Bob, i.e., when $m$ = 0, then Bob will not be\nable to access the values of any of Alice's bits irrespective of how many\nPR-systems they share. This is a consequence of the fact that PR-correlations\nsatisfy the no-signalling principle \\eqref{eqn:nosig}.\n\n\\section{Information causality and the Tsirelson bound}\n\\label{sec:ic}\n\nAs we saw in the last section, Alice and Bob can be certain to win the guessing\ngame described there if they share a number of PR-correlated systems prior to\ngoing off to their respective locations. 
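\n\nTo make this claim concrete, the following Python sketch---my own reconstruction of the $N=4$ protocol, with hypothetical helper names, and with both halves of each PR pair sampled in a single function call purely as a simulation convenience---brute-forces every input pair $(\mathbf{a}, \mathbf{b})$ over repeated random realisations of the PR outcomes and confirms that Bob's guess always equals the requested bit.\n\begin{verbatim}\nimport itertools, random\n\ndef pr_pair(m1, m2):\n    # One PR pair: Alice's outcome is uniform; the outcomes XOR to m1*m2.\n    M1 = random.randint(0, 1)\n    return M1, M1 ^ (m1 & m2)\n\ndef play_round(a, b):\n    # a = (a3, a2, a1, a0), b = (b1, b0); returns True iff Bob's guess\n    # equals Alice's b-th bit.\n    a3, a2, a1, a0 = a\n    b1, b0 = b\n    A1, B1 = pr_pair(a0 ^ a1, b0)                # system I\n    A2, B2 = pr_pair(a2 ^ a3, b0)                # system II\n    A3, B3 = pr_pair((a0 ^ A1) ^ (a2 ^ A2), b1)  # system III\n    c = a0 ^ A1 ^ A3                             # the one classical bit sent\n    guess = c ^ B3 ^ (B2 if b1 else B1)\n    return guess == a[3 - (2 * b1 + b0)]         # a[3 - k] is a_k\n\nassert all(play_round(a, b)\n           for _ in range(100)                   # repeat: outcomes are random\n           for a in itertools.product((0, 1), repeat=4)\n           for b in itertools.product((0, 1), repeat=2))\nprint('Bob guesses correctly on every run.')\n\end{verbatim}\nThe assertion never fails because, as \eqref{eqn:b1equal0} and \eqref{eqn:b1equal1} show, the randomness in the individual outcomes cancels out of Bob's final guess.\n\n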
Note that if they do not use any\ncorrelated resources, they can still be sure to win the occasional round if\nAlice always sends Bob the value of whatever bit is at a previously agreed-upon\nfixed position $a_k$ in her list. In this case, Bob will be guaranteed to guess\ncorrectly whenever $\\mathbf{b}$ singles out $k$ (but only then; otherwise he\nmust rely on blind luck). If Alice and Bob share a sequence of classically\ncorrelated random bits, on the other hand, then Bob will be able to access the\nvalue of a single in general different $a_i$ in Alice's list on each round.\n\nNow consider the case where Alice and Bob share general no-signalling systems,\ni.e. bipartite systems such that the correlations between their subsystems\nsatisfy the no-signalling condition. Recall that the probability that a\nnon-signalling system simulates a PR-system on a given run depends on the value\nof CHSH in \\eqref{eqn:succsim} that is associated with it. For convenience we\nwill define $E =_{\\mathit{df}} \\mbox{CHSH}\/4$ so that \\eqref{eqn:succsim}\nbecomes:\n\\begin{align}\n \\label{eqn:succsim2}\n p(\\mbox{successful sim}) = \\frac{1}{2}(1 + E).\n\\end{align}\nWhen $E = 1$ for a given non-signalling system, then it just is a PR-system, and\nthe probability of a successful simulation is 1. When $E < 1$, then for given\nsettings $m_1, m_2$, the values of the outcomes $M_1, M_2$, will in general not\nsatisfy the relation \\eqref{eqn:xorpr}, i.e. $M_1 \\oplus M_2$ will not always\nequal $m_1 \\cdot m_2$. For a given attempted simulation, let us say that $M_2$\nis `correct' whenever \\eqref{eqn:xorpr} holds, and `incorrect'\notherwise.\\footnote{There is of course no reason why we should not say that\n $M_1$ rather than $M_2$ is incorrect, but for the analysis that follows it\n is convenient to take Bob's point of view.}\n\nRecall that in the $N=4$ game above, at the end of each round, Bob guesses\neither (i) $c \\oplus B_{III} \\oplus B_{I}$, or (ii) $c \\oplus B_{III} \\oplus\nB_{II}$, depending on the value of $\\mathbf{b}$. We will consider only case (i),\nas the analysis is similar for (ii). If both $B_I$ and $B_{III}$ are `correct',\nthen for that particular round, the non-signalling systems will have yielded the\nsame guess for Bob as PR-systems would have yielded:\n\\begin{align}\n \\label{eqn:prmatch}\n (c \\oplus B_{III} \\oplus B_{I})_{NS} = (c \\oplus B_{III} \\oplus B_{I})_{PR}.\n\\end{align}\nNote that if \\emph{both} $B_I$ and $B_{III}$ are \\emph{incorrect},\n\\eqref{eqn:prmatch} will still hold, since in general $x_1 \\oplus x_2 =\n\\overline{x_1} \\oplus \\overline{x_2}$. So either way Bob will guess right. 
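\n\nThis reasoning is easy to check numerically. The sketch below is again my own; it assumes one particular realisation of \eqref{eqn:succsim2}---independently for each pair, Bob's outcome is flipped with probability $\frac{1}{2}(1-E)$, which is not the only noise model compatible with that equation---and estimates Bob's probability of guessing correctly in case (i), to be compared with the closed-form expression derived next.\n\begin{verbatim}\nimport random\n\ndef noisy_pair(m1, m2, E):\n    # Behaves as a PR pair with probability (1+E)\/2; otherwise Bob's\n    # outcome is flipped ('incorrect' in the sense of the main text).\n    M1 = random.randint(0, 1)\n    M2 = M1 ^ (m1 & m2) ^ (random.random() > (1 + E) \/ 2)\n    return M1, M2\n\ndef round_ok(E, a=(0, 1, 1, 0), b=(0, 1)):\n    # One round of the N = 4 game with b1 = 0, i.e. case (i).\n    a3, a2, a1, a0 = a\n    b1, b0 = b\n    A1, B1 = noisy_pair(a0 ^ a1, b0, E)\n    A2, B2 = noisy_pair(a2 ^ a3, b0, E)\n    A3, B3 = noisy_pair((a0 ^ A1) ^ (a2 ^ A2), b1, E)\n    c = a0 ^ A1 ^ A3\n    return (c ^ B3 ^ B1) == (a1 if b0 else a0)\n\nE, trials = 0.8, 200000\nfreq = sum(round_ok(E) for _ in range(trials)) \/ trials\nprint(freq, (1 + E**2) \/ 2)  # agree to within sampling error\n\end{verbatim}\nErrors on system \textbf{II} are irrelevant here because $b_1 = 0$; only the parity of the errors on systems \textbf{I} and \textbf{III} matters, which is where the quadratic dependence on $E$ comes from.\n\n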
The\nprobability of an unsuccessful simulation is $$1-\\frac{1}{2}(1+E) =\n\\frac{1}{2}(1-E).$$ Thus the probability that Bob makes the right guess on a\ngiven round in the $N=4$ game is:\n\\begin{align*}\n\\left(\\frac{1}{2}(1 + E)\\right)^2 + \\left(\\frac{1}{2}(1 - E)\\right)^2 =\n\\frac{1}{2}(1+E^2).\n\\end{align*}\nIn the general case, for $N = 2^n$, one can show\n\\citep[]{pawlowski2009,bub2012,bub2016} that the probability that Bob correctly\nguesses Alice's $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit is\n\\begin{align}\n \\label{eqn:prguess}\n p_{\\mathbf{b}} = \\frac{1}{2}(1 + E^n).\n\\end{align}\n\nThe binary entropy $h(p_{\\mathbf{b}})$ associated with $p_{\\mathbf{b}}$ is given\nby $$h(p_{\\mathbf{b}}) = \\text{-}p_{\\mathbf{b}}\\log_2{p_{\\mathbf{b}}} - (1 -\np_{\\mathbf{b}})\\log_2{(1 - p_{\\mathbf{b}})}.$$ In the case where Bob has no\ninformation about Alice's $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit,\n$p_{\\mathbf{b}} = 1\/2$ and $h(p_{\\mathbf{b}}) = 1$. If Alice then sends Bob $m$\nbits, then in general Bob's information about that bit will increase by some\nnon-zero amount. \\citet[]{pawlowski2009} propose the following constraint on\nthis quantity, which they call the `information causality' principle:\n\n\\begin{quote}\nThe information gain that Bob can reach about a previously unknown to him data\nset of Alice, by using all his local resources and $m$ classical bits\ncommunicated by Alice, is at most $m$ bits\n\\citeyearpar[p. 1101]{pawlowski2009}.\n\\label{quo:ic}\n\\end{quote}\n\nFor example, assuming that the $N = 2^n$ bits in Alice's bit string $\\mathbf{a}$\nare unbiased and independently distributed, then if Alice sends Bob a single bit\n(i.e. when $m = 1$), information causality asserts that Bob's information about\nthe $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit in Alice's string may increase by\nno more than $1\/2^n$, i.e.,\n\\begin{align}\n \\label{eqn:infcaus}\n h(p_{\\mathbf{b}}) \\geq 1 - \\frac{1}{2^n}.\n\\end{align}\nAs \\citet[]{pawlowski2009} show, the principle is satisfied within quantum\nmechanics. But within any theory which permits correlations with a value of $E$\nexceeding $1\/\\sqrt{2}$ (i.e. any theory which allows correlations above the\nTsirelson bound), one can find an $n$ such that for a given $m$ the principle\nis violated (for example, let $E = .72$, $m = 1$, and $n = 10$).\\footnote{Note\n that when $E = 1$ the principle is always violated for any $m$ and\n $n$.}$^{\\mbox{,}}$\\footnote{I have followed Bub in expressing information\n causality as a constraint on binary entropy, as conceptually this is a more\n transparent way of expressing \\citeauthor[]{pawlowski2009}'s `qualitative'\n statement of the principle in terms of concrete information-theoretic\n quantities. While \\citeauthor[]{pawlowski2009} also relate information\n causality to the binary entropy \\citeyearpar[p. 1102 and Supplementary\n Information \\S{III}]{pawlowski2009}, their general results (that\n information causality is satisfied within quantum mechanics and that it is\n violated within any theory which allows correlations above the Tsirelson\n bound) begin with the formulation of information causality as a condition\n on mutual information rather than binary entropy. 
For our purposes it is\n immaterial which formulation one chooses; in particular,\n \\citet[\\S\\S{11.4--11.5}]{bub2012} has shown that \\eqref{eqn:infcaus} is\n entailed by \\citeauthor[]{pawlowski2009}'s formulation and moreover proves\n that \\eqref{eqn:infcaus} is satisfied when $E = \\frac{1}{\\sqrt 2}$.}\n\nGiven that any correlations above the Tsirelson bound will demonstrably violate\nthe principle in this sense, it is tempting to view information causality as\nthe answer to the question (i) of why nature does not allow correlations above\nthis bound. And since the Tsirelson bound represents the maximum value of the\nCHSH expression for quantum correlations, one is further tempted to view\ninformation causality as the answer to the question (ii) of why only quantum\ncorrelations are allowable in nature. Indeed, \\citeauthor[]{pawlowski2009}\nsuggest that information causality ``might be one of the foundational\nproperties of nature'' \\citeyearpar[p. 1101]{pawlowski2009}.\n\nThere is a subtlety here, however. The set of quantum correlations forms a\nconvex set which can be represented as a multi-dimensional region of points such\nthat the points within this region that are furthest from the centre are at the\nTsirelson bound \\citep[\\S{}5.1]{bub2016}. Information causality disallows\ncorrelations beyond this bound, as we saw. It also disallows some correlations\nbelow the bound that are outside of the quantum convex set \\citep[for a\n discussion, see][]{pawlowski2016}. However there is numerical evidence that\nthere exist correlations within the bound but outside of the quantum convex set\nthat satisfy the information causality principle \\citep[]{navascues2015}. So it\nappears unlikely (though this was not known in 2009) that information causality\ncan provide an answer to question (ii). It nevertheless remains promising as a\nprinciple with which to answer question (i) and can arguably still be thought of\nas a fundamental principle in that sense. Analogously, the fact that\nsuper-quantum no-signalling correlations are possible does not, in itself,\nundermine the status of no-signalling as a fundamental principle.\n\nThe information causality principle must be given some independent motivation if\nit is to play this explanatory role, however. For even a conventionalist would\nagree that some stipulations are better than others \\citep[]{disalle2002}. Thus\nsome independent reason should be given for why one might be inclined to accept\nthe principle. Of course, the statement that the communication of $m$ bits can\nyield no more than $m$ bits of additional information to a receiver about a data\nset unknown to him is an intuitive one. But foundational principles of nature\nshould require more for their motivation than such bare appeals to\nintuition. After all, quantum mechanics, which the principle aims to legitimate,\narguably already violates many of our most basic\nintuitions. \\citet[]{pawlowski2009} unfortunately do not say very much to\nmotivate information causality. But two ideas can be gleaned from statements\nmade in their paper. The first is that in a world in which violations of\ninformation causality could occur, ``certain tasks [would be] `too simple'''\n(p. 1101). The second is that in such a world there would be ``implausible\naccessibility of remote data'' (ibid.). The former idea has been expressed in\nthis general context before. 
Van Dam \\citeyearpar[]{vanDam2013}, notably, shows\nthat in a world in which PR-correlations exist and can be taken advantage of,\nonly a trivial amount of communication (i.e. a single bit) is required to\nperform any distributed computational task. Van Dam argues (ibid., p. 12) that\nthis is a reason to believe that such correlations cannot exist, for they\nviolate the principle that ``Nature does not allow a computational `free\nlunch''' (ibid., p. 9).\\footnote{Cf. \\citet[][]{aaronson2005a}.}\n\\citet[pp. 180-181]{bub2012} echoes this thought by listing examples of\ndistributed tasks (`the dating game' and `one-out-of-two' oblivious transfer)\nwhich would become implausibly trivial if PR-correlated systems could be used.\n\nLater in this paper I will argue that although such statements are\nunsatisfactorily vague, they nevertheless get at something that is importantly\nright; although they are right in, perhaps, a different sense than their authors\nenvision. For now let me just say that even if one accepts van Dam's argument\nthat pervasive trivial communication complexity is implausible and should be\nruled out---and that this should constitute a constraint on physical\ntheory---not all correlations above the Tsirelson bound in fact result in the\ntrivialisation of communication complexity theory.\\footnote{Communication\n complexity theory aims to quantify the communicational resources---measured\n in transmitted bits---required to solve various distributed computational\n problems. A good reference work is that of \\citet[]{hushilevitz1997}.}\n\\citet[]{brassard2006} have extended van Dam's result by showing that\n(probabilistic) pervasive trivial communication complexity can be achieved for\nvalues of $E > \\sqrt{6}\/3$. But this still leaves a range of values for $E$\nopen; physical correlations with associated values of $E$ between the quantum\nmechanical maximum of $1\/\\sqrt 2$ and $\\sqrt{6}\/3$ have not been shown to result\nin pervasive trivial communication complexity and cannot---at least not yet---be\nruled out on those grounds. Thus the avoidance of pervasive trivial\ncommunication complexity cannot be used to motivate information causality in the\nway suggested by the statements of \\citet[]{pawlowski2009}. In fairness, to say\nas they do that certain tasks would be `too simple' in a world in which\ninformation causality is violated is not the same as saying that they would be\ntrivial. The task remains, then, of expressing more precisely what is meant by\n`too simple' in a way that is sufficient to motivate ruling out theories which\nviolate the information causality principle in a less than maximal way (in\nparticular with a value of $E \\leq \\sqrt{6}\/3$). We will return to this point\nlater.\n\nRegarding their second idea---that a world in which information causality is\nviolated would manifest ``implausible accessibility of remote data''\n(p. 1101)---\\citet[]{pawlowski2009} again do not say\nenough,\\footnote{\\citet[p. 429]{pawlowski2016} do expand on the idea of\n implausible accessibility slightly: ``we have transmitted only a single bit\n and the PR-boxes are supposed to be no-signalling so they cannot be used to\n transmit the other. Somehow the amount of information that the lab of Bob\n has is larger than the amount it received. 
Things like this should not\n happen.'' I do not think this adds anything substantial to the idea\n expressed by \\citet[]{pawlowski2009} that such a situation is\n `implausible'.} although the idea is perhaps alluded to implicitly in another\nassertion they (too briefly) make, namely that information causality\ngeneralises the no-signalling principle (ibid., p. 1103). We will come back to\nthis point later. In any case, the idea of implausible accessibility is\nfortunately expanded upon by \\citet[]{bub2012}, who motivates it in the\nfollowing way:\n\n\\begin{quote}\nwhen the bits of Alice's data set are unbiased and independently distributed,\nthe intuition is that if the correlations can be exploited to distribute one bit\nof communicated information among the $N$ unknown bits in Alice's data set, the\namount of information distributed should be no more than $\\frac{1}{N}$ bits,\nbecause there can be no information about the bits in Alice's data set in the\npreviously established correlations themselves (p. 180).\n\\end{quote}\n\nPartly for this reason, Bub argues that the principle is misnamed. Drawing on\nthe idea of implausible accessibility he argues that `information causality'\nshould rather be referred to as information \\emph{neutrality}: ``The principle\nreally has nothing to do with causality and is better understood as a\n\\emph{constraint on the ability of correlations to enhance the information\n content of communication in a distributed task}'' (ibid., emphasis in\noriginal). Bub reformulates the principle as follows:\n\n\\begin{quote}\nCorrelations are informationally neutral: insofar as they can be exploited to\nallow Bob to distribute information communicated by Alice among the bits in an\nunknown data set held by Alice in such a way as to increase Bob's ability to\ncorrectly guess an arbitrary bit in the data set, they cannot increase Bob's\ninformation about the data set by more than the number of bits communicated by\nAlice to Bob (ibid.).\n\\end{quote}\n\nStated in this way the principle sounds plausible and seems, intuitively, to be\ncorrect. However if the principle is to be of aid in ruling out classes of\nphysical theory then it should be more than just intuitively plausible. If the\ngoal of answering the question `Why the Tsirelson bound?' is to give a\nconvincing reason why correlations that are above the bound should be regarded\nas impossible, then if the fact that such correlations violate informational\nneutrality is to be one's answer, one should give an independent motivation for\nwhy correlations must be informationally neutral. One might, for instance,\nmotivate information neutrality by showing how it generalises or gives\nexpression in some sense to a deeper underlying principle that is already\nwell-motivated, or by pointing to `undesirable consequences' of its failure. The\nconsequence of a `free computational lunch' given the existence of correlations\nabove the bound, if it could be demonstrated, could (perhaps) constitute an\nexample of the latter kind of motivation.\n\nThis said, there is a different way to think of the question `Why the Tsirelson\nbound?' for which Bub's explication of information causality in terms of\ninformational neutrality is both a full answer and indeed an illuminating and\nuseful one. In this sense the question represents a desire to understand what\nthe Tsirelson bound expresses about correlations which violate it. 
Information\nneutrality answers this question by directing attention to a feature that no\ncorrelations above the bound can have. This feature, moreover, is one that we\ncan easily grasp and explicitly connect operationally with our experience of\ncorrelated physical systems. On such a reading of the question, to answer\n`information neutrality' is not of course to rule out that the world could\ncontain non-informationally-neutral physical correlations. But on this view\nruling out such a possibility is not the point, which is rather to provide a\nphysically meaningful principle to help us to understand what our current\nphysical theories, assuming they are to be believed, are telling us about the\nstructure of the world.\n\nIn the remainder of this paper, however, I will continue to consider the\ninformation causality\/neutrality principle as a possible answer in the first\nsense to the question `Why the Tsirelson bound?'. I will continue to consider,\nthat is, whether there is some independent way of motivating the conclusion that\ncorrelations which violate the condition should be ruled out.\n\n\\section{The `being-thus' of spatially distant things}\n\\label{sec:demopoulos}\n\nOur goal is to determine whether there is some sense in which we can motivate\nthe idea that information causality must be satisfied by all physical theories\nwhich treat of correlated systems. I will now argue that some insight into this\nquestion can be gained if we consider the analogous question regarding\nno-signalling. As I mentioned earlier, the no-signalling condition\n\\eqref{eqn:nosig} is not a relativistic constraint per se---in itself it is\nmerely a restriction on the marginal probabilities associated with experiments\non the subsystems of combined systems---but its violation entails the ability to\ninstantaneously signal, which is in tension if not in outright violation of the\nconstraints imposed by relativistic theory.\\footnote{For a discussion of\n signalling in the context of special and general relativity see\n \\citet[Ch. 4]{maudlin2011}.} Indeed, the independently confirmed relativity\ntheory can in this sense be thought of as an external motivation for thinking of\nthe no-signalling principle as a constraint on the marginal probabilities\nallowable in any physical theory.\n\nThere is an arguably deeper way to motivate no-signalling, however, that can be\ndrawn from the work of Einstein and which has been expanded upon by\nDemopoulos.\\footnote{This is done in Demopoulos's monograph \\emph{On\n Theories}; see fn. \\ref{fn:demo}.} In the course of expressing his\ndissatisfaction with the `orthodox' interpretation of quantum theory, Einstein\ndescribed two foundational ideas---what Demopoulos calls \\emph{local realism}\nand \\emph{local action}. Realism in general, for Einstein, is a basic\npresupposition of any physical theory. It amounts to the claim that things in\nthe world exist independently of our capability of knowing them; i.e.\n\n\\begin{quote}\nthe concepts of physics refer to a real external world, i.e., ideas are posited\nof things that claim a `real existence' independent of the perceiving subject\n(bodies, fields, etc.), and these ideas are, on the other hand, brought into as\nsecure a relationship as possible with sense impressions\n(\\citealt[]{einstein1948}, as translated by \\citealt[p. 
187]{howard1985}).\n\\end{quote}\n\n\\emph{Local} realism---alternately: the `mutually independent existence' of\nspatially distant things---is the idea that things claim independent existence\nfrom one another insofar as at a given time they are located in different parts\nof space. Regarding this idea, Einstein writes:\n\n\\begin{quote}\nWithout such an assumption of the mutually independent existence (the `being\nthus') of spatially distant things, an assumption which originates in everyday\nthought, physical thought in the sense familiar to us would not be possible\n(ibid.).\n\\end{quote}\n\nIn the concrete context of a physical system made up of two correlated subsystems\n$S_1$ and $S_2$ (such as that described in the thought experiment of\n\\citealt[]{epr1935}), local realism requires that\n\n\\begin{quote}\nevery statement regarding $S_2$ which we are able to make on the basis of a\ncomplete measurement on $S_1$ must also hold for the system $S_2$ if, after all,\nno measurement whatsoever ensued on $S_1$ (\\citealt[]{einstein1948}, as\ntranslated by \\citealt[p. 187]{howard1985}).\n\\end{quote}\n\nIn other words the value of a measurable theoretical parameter of $S_2$ must\nnot depend on whether a measurement is made on a system $S_1$ that is located\nin some distant region of space. (And of course it must also not depend upon\nthe \\emph{kind} of measurement performed on $S_1$;\ncf. \\citealt[][p. 186]{howard1985}.) Demopoulos notes that local realism as it\nis applied in such a context is a condition imposed on the measurable\nproperties of the theory and hence it is a condition that is imposed at a\ntheory's `surface' or operational level. This is an important point that I will\nreturn to later.\n\nIn the same \\emph{Dialectica} article Einstein also formulated a second\nprinciple:\n\n\\begin{quote}\nFor the relative independence of spatially distant things (A and B), this idea\nis characteristic: an external influence on A has no \\emph{immediate} effect on\nB; this is known as the `principle of local action' ... The complete suspension\nof this basic principle would make impossible the idea of the existence of\n(quasi-) closed systems and, thereby, the establishment of empirically testable\nlaws in the sense familiar to us (\\citealt[]{einstein1948}, as translated by\n\\citealt[p. 188]{howard1985}).\n\\end{quote}\n\nThe thought expressed in the second part of this statement seems similar to\nEinstein's earlier assertion that `physical thought' would not be possible\nwithout the assumption of local realism. However Demopoulos convincingly argues\nthat the principle of local realism, though it receives support from the\nprinciple of local action, is a conceptually more fundamental principle than\nthe latter. For conceivably the principle of local realism---i.e. of `mutually\nindependent existence'---could be treated as holding, Demopoulos argues, even\nin the absence of local action. Indeed this is so in Newtonian mechanics. For\nexample, Corollary VI to the laws of motion \\citep[p. 423]{newton1999} states\nthat a system of bodies moving in any way whatsoever with respect to one\nanother will continue to do so in the presence of equal accelerative forces\nacting on the system along parallel lines. This makes it possible to treat the\nsystem of Jupiter and its moons, for example, as a quasi-closed system with\nrespect to the sun. 
For owing to the sun's great distance (and relative size),\nthe actions of the forces exerted by it upon the Jovian system will be\napproximately equal and parallel. Corollary VI, moreover, is used by Newton to\nprove Proposition 3 of Book I \\citep[p. 448]{newton1999}, which enables one to\ndistinguish forces that are internal to a given system from forces that are\nexternal to it, and which provides a criterion (i.e. that the motions of the\nbodies comprising a system obey the Area Law with respect to its centre of\nmass) for determining when the gravitational forces internal to a system have\nbeen fully characterised. Thus despite its violation of local action,\nDemopoulos argues convincingly that Einstein would not (or anyway should not)\nhave regarded a theory such as Newtonian mechanics as unphysical. It is still a\nbasic \\emph{methodological} presupposition of Newtonian mechanics that\nspatially distant systems have their own individual `being thus-ness', the\ndescription of which is made possible via the theory's characteristic\nmethodological tool of successive approximation, in turn made possible by, for\nexample, Corollary VI, Proposition 3, and the notion of quasi-closed system\nimplied by them.\\footnote{Demopoulos does not specifically mention either\n Corollary VI or Proposition 3 in his discussion, but I take them to be\n implicit therein. For a detailed analysis of Newton's method of successive\n approximations and the methodological role therein played by Corollary VI\n and Proposition 3, see \\citet[]{harper2011}. For a discussion of the same\n in relation to general relativity, see \\citet[]{disalle2006, disalle2016}.}\n\nEinstein's principle of local realism or mutually independent existence\npresupposes the framework of classical physics, which itself presupposes the\nframework of classical probability theory. Demopoulos argues, however, that the\nconceptual novelty of quantum theory consists in the fact that it is an\n`irreducibly statistical theory', precisely in the sense that its probability\nassignments, unlike those described by classical probability theory, cannot in\ngeneral be represented as weighted averages of two-valued measures over the\nBoolean algebra of all possible properties of a physical system \\citep[see\n also][]{pitowsky1989, pitowsky2006, dickson2011}. This raises the question of\nwhether one can formulate a generalisation of the mutually independent existence\ncondition that is appropriate for an irreducibly statistical theory such as\nquantum mechanics.\\footnote{I am not claiming here that Einstein himself would\n have been inclined to follow this line of reasoning.}\n\nRecall that Einstein's mutually independent existence condition is a condition\nthat is imposed on the level of the measurable parameters of a theory and hence\nat its `surface' or operational level. It requires, in particular, that the\nvalue of a measurable property of a system $S_1$ in some region of physical\nspace $R_1$ is independent of what kind of measurement (or whether any\nmeasurement) is performed on some system $S_2$ in a distant region of space\n$R_2$, irrespective of whether $S_1$ and $S_2$ have previously interacted.\n\nDemopoulos argues that in the context of an irreducibly statistical theory such\nas quantum mechanics, it is in fact the no-signalling condition which\ngeneralises the mutually independent existence condition. 
It does so in the\nsense that like mutually independent existence, no-signalling is a\nsurface-level constraint on the local facts associated with a particular\nsystem, requiring that these facts be independent of the local surface-level\nfacts associated with other spatially distant systems. Unlike the mutually\nindependent existence condition, however, these local facts refer to the\nmarginal probabilities associated with a system's measurable properties rather\nthan with what one might regard as those properties themselves. Specifically,\nno-signalling asserts that the marginal probability associated with a\nmeasurement on a system $S_1$ at a given location $R_1$ is independent of what\nkind of measurement (or whether any measurement) is performed on some system\n$S_2$ in a distant region of space $R_2$.\\footnote{It is worth noting that the\n parameter independence condition (\\citealt[]{shimony1993}) is just the\n no-signalling condition extended to include a hypothetical, possibly\n hidden, set of \\emph{underlying} parameters.} In this way no-signalling\nallows us to coherently treat systems in different regions of physical space as\nif they had mutually independent existences---i.e. as quasi-closed systems in\nthe sense described above---and thus allows for the possibility of `physical\nthought' in a methodological sense and for ``the establishment of empirically\ntestable laws in the sense familiar to us'' (\\citealt[]{einstein1948}, as\ntranslated by \\citealt[p. 188]{howard1985}). Demopoulos argues that quantum\nmechanics, even under its orthodox interpretation, is in this way legitimated\nby the principle and may be thought of as a local theory of nonlocal\ncorrelations.\n\n\\section{Mutually independent existence and communication}\n\\label{sec:howposs}\n\nIn the previous section we saw that no-signalling can be regarded as\ngeneralising a criterion for the possibility of `physical thought' originally\nput forward by Einstein. And we saw that since quantum mechanics satisfies\nno-signalling, one may think of that theory, even under its orthodox\ninterpretation, as in this sense legitimated methodologically by the\nprinciple. As we saw in \\S\\S\\ref{sec:prcorr}-\\ref{sec:ic}, however, other\nconceivable physical theories---some of which allow for stronger-than-quantum\ncorrelations---satisfy the no-signalling condition as well. In light of this,\n`information causality' (or `information neutrality', in Bub's terminology) was\nput forward by \\citet[]{pawlowski2009} as an additional foundational principle\nfor more narrowly circumscribing the class of physically sensible theories. But\nin \\S\\ref{sec:ic} I argued that the principle requires further motivation\nbefore it can legitimately be seen as playing this role. With our recent\ndiscussion of no-signalling in mind, let us now consider the proposal of\n\\citeauthor[]{pawlowski2009} again.\n\n\\emph{No-signalling} asserts that the marginal probabilities associated with\nAlice's local measurements on a system $S_A$ in a region $R_A$ are independent\nof what kind of measurement (or whether any measurement) is performed by Bob\nlocally on a system $S_B$ in a distant region $R_B$. \\emph{Information\n causality} asserts that Bob can gain no more than $m$ bits of information\nabout Alice's data set if she sends him only $m$\nbits. \\citet[p. 1101]{pawlowski2009} remark that ``The standard no-signalling\ncondition is just information causality for $m = 0$''. \\citet[p. 
180]{bub2012}\nconsiders this remark to be misleading, but presumably all that\n\\citeauthor[]{pawlowski2009} intend is that if Alice and Bob share\n\\emph{signalling} correlations, then Alice may provide Bob with information\nabout her data set merely by measuring it, i.e. without actually sending him\nany bits. The information causality principle disallows this for any value of\n$E$, as does no-signalling.\\footnote{That is, for any value of $E$ within the\n allowed range of: $\\text{-}1 \\leq E \\leq 1$.}\n\n\\begin{figure}[t]\n\\footnotesize\n$$\n\\begin{array}{l | l | l || l | l}\n b_0 & a_2 & a_3 & a_2 \\oplus a_3 & G \\\\ \\hline\n 0 & 0 & 0 & 0 & 0 \\\\ \\hline\n 0 & 0 & 1 & 1 & 0 \\\\ \\hline\n 0 & 1 & 0 & 1 & 1 \\\\ \\hline\n 0 & 1 & 1 & 0 & 1 \\\\ \\hline\n 1 & 0 & 0 & 0 & 0 \\\\ \\hline\n 1 & 0 & 1 & 1 & 1 \\\\ \\hline\n 1 & 1 & 0 & 1 & 0 \\\\ \\hline\n 1 & 1 & 1 & 0 & 1\n\\end{array}\n$$\n\\caption{A summary of the possible outcomes associated with Bob's measurement\n $G$ (his `guess') in the guessing game of \\S\\ref{sec:game}, based on\n Eq. \\eqref{eqn:b1equal1}. If all atomic variables are assumed to be equally\n likely to take on a value of 0 or 1, then $G$ is probabilistically independent\n of Alice's measurement setting $a_2 \\oplus a_3$, but not of its components\n $a_2$ and $a_3$, since, for example, $p(G=0|a_2=0) = 3\/4 \\neq p(G=0|a_2=1)$,\n and $p(G=0|a_3=0) = 3\/4 \\neq p(G=0|a_3=1)$.}\n\\label{fig:wittprob}\n\\end{figure}\n\nOn the other hand when (for instance) $m = 1$, then in the case where they have\npreviously shared PR-correlated systems (i.e. systems such that $E = 1$), one\nmight argue that there arises a subtle sense in which the probabilities of Bob's\nmeasurement outcomes can be influenced by Alice's remote measurement\nsettings. Consider the outcome of Bob's combined measurement $G =_{\\mathit{df}}\nc \\oplus B_{III} \\oplus B_{II}$, i.e. his `guess' \\eqref{eqn:guess2or3}. From\n\\eqref{eqn:b1equal1} it would appear that Bob's outcome is in part determined by\nthe setting of Alice's measurement on system \\textbf{II}, $a_2 \\oplus a_3$,\nsince this appears explicitly in the equation. However in this case appearances\nare misleading, for the reader can verify that $G$ is probabilistically\nindependent of $a_2 \\oplus a_3$ (see figure \\ref{fig:wittprob}). $G$ is\nnevertheless probabilistically dependent on both of $a_2$ and $a_3$ considered\nindividually. So one might say that although the outcome of $G$ is not\ninfluenced by any of Alice's measurement settings \\emph{per se}, it does seem to\nbe influenced by the particular way in which those settings have been determined\n(despite the fact that neither $a_2$ nor $a_3$ are directly used by Alice to\ndetermine the value of the bit that she sends to Bob, $c$). Put a different way,\nthe constituents of Alice's measurement setting on system \\textbf{II}\nrespectively determine the two possible outcomes of Bob's guess whenever he\nperforms the measurement $G$ (for a given $b_0$). Likewise in the case where Bob\nmeasures $G' = c \\oplus B_{III} \\oplus B_I$ (i.e. 
his guess\n\\eqref{eqn:guess0or1}); the two possible outcomes of $G'$ are, respectively,\ndetermined by the constituents of Alice's measurement settings on system\n\\textbf{I}, $a_0$ and $a_1$ (for a given $b_0$).\n\nNote that since $a_2$ and $a_3$ (respectively: $a_0$ and $a_1$), besides being\nthe constituents of Alice's measurement settings on \\textbf{II} (respectively:\n\\textbf{I}), are also in fact the values of bits in Alice's list $\\mathbf{a}$,\nthe above considerations resonate with Bub's remark (quoted above) that\nTsirelson-bound-violating correlations are such that they may themselves include\ninformation about Alice's data set in the context of a game like that described\nin \\S\\ref{sec:game}. These considerations further suggest a sense, \\emph{pace}\nBub, in which it could be argued that the name `information causality' is indeed\napt. For the bit of information $c$ that Alice sends to Bob can be thought of as\nthe `enabler' or `cause', at least in a metaphorical sense, of Bob's ability to\nuse this aspect of the correlations to his advantage\n\\citep[cf.][\\S{}3.4]{pawlowski2016}.\\footnote{Perhaps, though, a better name\n would be the `\\emph{no} information causality' principle.}\n\nThus one can think of information causality as generalising no-signalling (in\nthe context of the protocol under which information causality is operationally\ndefined) in two ways. On the one hand information causality generalises\nno-signalling in the sense alluded to by \\citeauthor[]{pawlowski2009}; i.e. it\nreduces to no-signalling for $m = 0$. On the other hand information causality\ngeneralises no-signalling in the sense that, like the no-signalling principle,\nit expresses a restriction on the accessibility of the remote measurement\nsettings of a distant party; but this restriction now applies not just to those\nremote measurement settings themselves, but also more generally to the\ncomponents by which those measurement settings are determined. Since, as we saw\nin the previous section, no-signalling is already well-motivated in the sense\nthat it gives expression within quantum mechanics to an arguably fundamental\nassumption that is implicit in physical practice, the very fact that\ninformation causality generalises no-signalling can be taken as a compelling\nmotivation for it.\n\nSuch a conclusion would be too quick, however, for it does not follow from the\nfact that information causality generalises no-signalling that it continues to\ngive expression to the condition of mutually independent existence. But it is\nmutually independent existence which, as we saw, motivates no-signalling as a\nconstraint on physical theories. Thus we must still ask whether a violation of\ninformation causality would result in a violation of the mutually independent\nexistence condition in some relevant sense. Arguably this is indeed the\nsituation one is confronted with in the context of the guessing game described\nabove when it is played with Tsirelson-bound-violating correlated systems. On\nthe one hand, when Alice and Bob share maximally super-quantum systems\n(i.e. PR-systems, for which $E = 1$), then after receiving $c$ there is a sense\nin which Alice's system can be said to be `a part' of Bob's system in the\ncontext of the game being played. For after receiving $c$ Bob has\n\\emph{immediate} access to the value of any single bit of Alice's that he would\nlike. Alice's bits may as well be his own for the purposes of the game. 
Indeed,\nfrom this point of view the fact that the communication complexity associated\nwith any distributed computational task is trivial when PR-correlations are\nused seems natural; for once Alice's and Bob's systems are nonlocally joined in\nthis way there is naturally no need for further communication. On the other\nhand, when Tsirelson-bound-violating correlations that are non-maximal are\nused, trivial communication complexity has not been shown to result in all\ncases. But mutually independent existence is nevertheless violated in the sense\nthat the correlations shared prior to the beginning of the game, upon being\n`activated' by Alice's classical message $c$ to Bob, contribute information\nover and above $c$ to the information Bob then gains about Alice's data set;\nthey `implausibly' enhance the accessibility of Alice's data set by nonlocally\njoining Alice to Bob, at least to some extent, in the sense just described.\n\nNow it is one thing to claim that information causality gives expression to a\ngeneralised sense of mutually independent existence. It is another, however, to\nclaim that mutually independent existence should be thought of as necessary in\nthis context. Recall that in the last section we saw that mutually independent\nexistence (arguably) must be presupposed if `physical thought' is to be\npossible---in other words that it is (arguably) a fundamental presupposition\nimplicit in physical practice as such. And we saw that a form of this principle\nholds in the context of Newtonian mechanics, which may be thought of as in that\nsense a local theory of nonlocal forces. We also saw that a form of\nmutually independent existence appropriate for an irreducibly statistical\ntheory---i.e. the no-signalling principle---holds in the context of quantum\nmechanics, and that it may thus be thought of analogously as a local theory of\nnonlocal correlations. The context of our current investigation is one which\ninvolves considering communicating agents capable of building and manipulating\nphysical systems---thought of now as resources---for their own particular\npurposes. Our context, that is, is the `practical' one associated with quantum\ncomputation and information theory, recently described by \\citet[]{cuffaro2017,\n cuffaroForthB}.\\footnote{Similar ideas have been expressed by\n \\citet{pitowsky1990, pitowsky1996, pitowsky2002}.} As Cuffaro has argued,\nthis context of investigation is in fact distinct from the more familiar\n`theoretical' context that is associated with traditional foundational\ninvestigations of quantum mechanics. A different way of putting this is that\nquantum computation and information theory are `resource' or `control' theories\nsimilarly to the science of thermodynamics \\citep[]{myrvold2011, wallace2014,\n ladyman2018}. Thus the question of whether mutually independent existence is\nnecessary for the practice of quantum information and communication complexity\ntheory is a distinct question from the question of whether it is necessary for\nphysical practice in the traditional sense.\n\nWithout the presupposition of mutually independent existence---according to\nwhich systems that occupy distinct regions of space are to be regarded as\nexisting independently of one another---the idea of a (quasi-) closed system\nthat can be subjected to empirical test, and in this sense `physical thought',\nwould not be possible (or anyway so argued Einstein). Analogously, one could\nargue that in the context of a theory of communication---i.e. 
of the various\nresource costs associated with different communicational protocols and their\ninterrelations---that it is necessary to presuppose that an operational\ndistinction can be made between the parties involved in a communicational\nprotocol. One might argue, that is, that it is constitutive of the very idea of\ncommunication that it is an activity that takes place between what can be\neffectively regarded as two mutually independently existing entities, and\nmoreover that such a distinction is presupposed when one quantifies the\ncomplexity of a particular\nprotocol.\\footnote{Cf. \\citet[p. x]{hushilevitz1997}. Cf. also\n \\citeauthor[]{maroney2018}'s \\citeyearpar[]{maroney2018} emphasis on the\n initialisation and readout stages of an information processing task.} For\nwithout the ability to make such an effective distinction between the systems\nbelonging to the sender and the receiver of information, it is not at all\nobvious how one should begin to quantify the amount of information that is\nrequired to be sent \\emph{from} Alice \\emph{to} Bob in the context of a\nparticular protocol. From this point of view it is indeed not surprising that\ncommunication complexity theory becomes impossible (in the sense that all\ncommunicational problems become trivially solvable) when PR-correlated systems\nare available to use.\n\n\\section{Objections}\n\\label{sec:obj}\n\nAn objection to this line of thought is the following. Cannot something similar\nbe said in the context of the information causality game when Alice and Bob\nshare an entangled quantum system? For arguably \\citep[cf.][]{howard1989} Alice\nand Bob will become likewise inseparable or `nonlocally joined' in such a\nscenario. And yet no one imagines the very possibility of the sciences of\nquantum information theory and quantum communication complexity to have been\nundermined as a result. So why should one believe them to be undermined by the\npossibility of sharing systems whose correlations violate the Tsirelson bound?\nThis objection, however, involves a description of the situation regarding the\nsharing of an entangled quantum system that is below the surface-level\ncharacterisation that is relevant to our discussion. It therefore does not\nundermine the considerations of the previous section.\n\nConsider the description of a classical bipartite communication protocol. Both\nbefore and after communication has taken place, such a description may be\nregarded as decomposable into three parts: a sending system, a receiving\nsystem, and something communicated between them. For a quantum protocol the\npossibility of such a decomposition is in general far less obvious as a result\nof the well-known conceptual intricacies associated with entangled quantum\nstates. However whether or not Alice and her system, and Bob and his system,\nare `in reality' inseparably entangled with one another, it remains the case,\nboth before (because of quantum mechanics' satisfaction of the no-signalling\ncondition) and after the communication of a classical message (because of\nquantum mechanics' satisfaction of the information causality condition), that\nAlice's system, Bob's system, and the message $c$ may be operationally\ndistinguished from one another in the sense that Bob cannot take advantage of\nthe underlying connection he has with Alice and her system via the correlations\nhe shares with her to gain information about her data set over and above what\nhas been provided to him via $c$. 
It is true that previously shared quantum\ncorrelations enable one to communicate with greater efficiency than is possible\nusing only previously shared classical correlations. As \\eqref{eqn:prguess}\nshows, Bob has a higher probability of guessing correctly in the information\ncausality game if he and Alice have previously shared quantum as opposed to\nclassical correlations.\\footnote{This is true in other contexts besides that of\n the information causality game. See, e.g., \\citet[]{buhrman2001,\n brukner2002, brukner2004}.} And the question arises regarding the source\nof this increased communicational power. But whatever that source is, it is not\nthe case that it manifests itself in nonlocality or nonseparability at the\n\\emph{operational} level.\\footnote{Compare this with \\citet[]{buhrman2001},\n who writes that entanglement enables one to ``\\emph{circumvent} (rather\n than simulate) communication'' (p. 1831, emphasis in original), and also\n with \\citet[]{bub2010}'s discussion of entanglement in the context of\n quantum computation, which he argues allows a quantum computer to compute a\n global property of a function by performing fewer, not more, computations\n than classical computers.} This is in contrast to systems whose correlations\nviolate the Tsirelson bound.\n\nBut the game described by \\citet[]{pawlowski2009} involves the communication of\n\\emph{classical} bits from Alice to Bob. Might not this limitation in Bob's\nability to take advantage of his underlying connection with Alice be overcome if\nwe allow her to send him qubits rather than only classical bits? Indeed, it is\nwell known that if Alice sends a qubit to Bob that is entangled with a qubit\nthat is already in his possession, then Alice and Bob can implement the\n`superdense coding' protocol \\citep[\\S{}2.3]{nielsenChuang2000}; Alice's sending\nof a single qubit to Bob according to this protocol will allow him to learn two\nbits' worth of classical information.\\footnote{In the context of a suitably\n generalised version of the information causality game, it turns out that a\n two-bit information gain per qubit constitutes an upper bound\n \\citep[]{pitaluaGarcia2013}.} Does this not undermine the claim that quantum\ncorrelations contribute nothing over and above whatever message is sent between\nAlice and Bob to the information gained by him?\n\nIt does not. On the one hand, before the transmission of the qubit(s) from Alice\nto Bob, no-signalling implies that Alice and Bob can be considered as\noperationally separable despite their sharing an entangled system, as we have\nseen above. On the other hand, in the superdense coding protocol, after Alice\ntransmits her message to Bob, all of the correlated quantum system that was\ninitially shared is now in Bob's possession. So after transmission there is no\nsense in which Bob can take advantage of correlations shared with Alice at that\ntime. In a sense Alice's message to Bob `just is' information regarding the\ncorrelations that exist between them at the time at which she sends\nit.\\footnote{This conclusion is essentially that of\n \\citet[p. 032110-20]{spekkens2007}. Fascinatingly, Spekkens also shows that\n the superdense coding protocol can be implemented in his toy classical\n theory.}\n\nAs we have seen, when Alice and Bob share PR-correlated systems, they can win a\nround with certainty in the $m = 1$ game for any $N$ by exchanging a single\nclassical bit. 
Earlier I also mentioned \\citeauthor[]{vanDam2013}'s\n\\citeyearpar[]{vanDam2013} result to the effect that PR-correlated systems allow\none to perform \\emph{any} distributed computational task with only a trivial\namount of communication. These results are striking. However the reader may\nnevertheless feel somewhat unimpressed by them for the following reason: the\nnumber of PR-correlated systems required to implement these protocols, as we\nhave seen, is great. With respect to the length $n$ of Bob's bit string\n$\\mathbf{b}$ (arguably the most appropriate measure of input size for the game),\nimplementing the solution described above requires that they share $2^n-1$\nPR-systems; i.e. the number of PR-systems required grows exponentially with the\ninput size. Likewise for van Dam's protocol.\\footnote{Specifically, van Dam's\n \\citeyearpar[]{vanDam2013} protocol requires a number of systems that can\n grow exponentially with respect to the input size of an instance of the\n Inner Product problem, after which the solution can be efficiently converted\n into a solution to any other distributed computational problem.} A reduction\nin \\emph{communication} complexity has therefore been achieved only at the\nexpense of an increase in \\emph{computational} complexity. One might argue that\nit is in this sense misleading to consider the complexity of implementing the\nprotocol with PR-correlated systems to be trivial---that they provide us with a\n`free lunch'.\n\nI will return to this point later. But for now let me say that, arguably, this\nis not a relevant consideration in this context. The theories of communication\ncomplexity and computational complexity are distinct sub-disciplines of computer\nscience. The goal of communication complexity is to quantify the amount of\ncommunication necessary to implement various communicational protocols. For this\npurpose one abstracts away from any consideration of how complicated a\ncomputational system must be in other respects \\citep[]{hushilevitz1997}. The\nquestion addressed in \\citet[]{vanDam2013} and in \\citet[]{pawlowski2009} and\n\\citet{pawlowski2016} concerns whether the availability of PR-correlated systems\nwould make communicational, not computational, complexity theory\nsuperfluous. From this point of view any previously prepared PR-correlated\nsystems are viewed as `free resources' for the purposes of the analysis.\n\nThis said, one can imagine that the subsystems of PR-correlated systems employ\nsome hidden means of communication with one another, and then argue that this\nmust be included in the complexity ascribed to the protocol. This would of\ncourse constitute a descent below the empirically verifiable level. In itself\nthis is obviously not objectionable. But it is hard to see what use this would\nbe to a theory of communicational complexity, which after all, like\ncomputational complexity \\citep[]{cuffaro2018}, aims to be a practical science\nwhose goal is to guide us in making distinctions in practice between real\nproblems related to data transmission that are of varying levels of\ndifficulty. In this sense appealing to unseen and unmanipulable communication\nbetween the subsystems of PR-systems does not help with the conclusion that\ncommunication complexity theory, at least in an operational sense, becomes\nsuperfluous if PR-correlated systems are available. 
The objection addressed in\nthe previous two paragraphs is nevertheless an important one that I will return\nto.\n\nAbove I have motivated the idea, due to \citet[p. 1101]{pawlowski2009}, that the\nkind of accessibility of remote data that is possible given the existence of\ncorrelated systems which violate the Tsirelson bound is `implausible'. I have\ndone so by describing, \emph{pace} \citet[]{bub2012}, the sense in which\ninformation causality can be taken to generalise no-signalling. In so doing I\nhave gestured at a connection between the idea of implausible accessibility and\nthe \emph{prima facie} separate idea that a world in which\nTsirelson-bound-violating correlated systems exist would be `too good to be\ntrue' in a communicational complexity-theoretic sense. My arguments have been\nmainly conceptual. I have argued, that is, that a kind of conceptual ambiguity\nat the operational level between the parties to a communicational protocol may\nresult if correlations which violate the Tsirelson bound are available to\nuse. As we have seen, when such stronger-than-quantum correlations are strong\nenough (i.e. when $E > \sqrt{6}\/3$), this results in the trivial communicational\ncomplexity of any distributed computational task. But trivial communicational\ncomplexity does not result, or anyway has not yet been shown to result, for\nvalues of $E$ above the Tsirelson bound value of $1\/\sqrt 2$ that are below\n$\sqrt{6}\/3$. This is despite the fact that the conceptual ambiguity I have\ndescribed is present to some extent for all such values of $E$.\n\n\begin{sloppypar}\nThus one may wonder whether `a little' ambiguity may be tolerable for practical\npurposes---whether, that is, a theory which admits correlations which only\n`weakly' violate the Tsirelson bound should be admitted within the space of\npossible physical theories from the point of view of the information causality\nprinciple. The situation could be seen as analogous to the one faced in\nNewtonian mechanics, in fact, for Corollary VI (which I described\nin \S\ref{sec:demopoulos}) only guarantees that a system in the presence of\nexternal forces can be treated as (quasi-) closed when these forces act\n\emph{exactly} equally upon it and are \emph{exactly} parallel. Clearly this is\nnot the case for the Jovian system \emph{vis-\`a-vis} the sun, for\nexample. Corollary VI---and Proposition 3---nevertheless function as\nmethodological tools in that they allow us to maintain the idea of the\nmutually independent existence of spatially distant things as a methodological\nprinciple and treat the Jovian system, for the practical purpose of analysing\nits internal motions, as unaffected by the forces exerted upon it by the sun.\n\end{sloppypar}\n\nThere is much work to be done before information causality can be considered\nsuccessful in ruling out---in the conceptual sense described in the previous\ntwo paragraphs---\emph{all} theories whose correlations violate the Tsirelson\nbound. Irrespective of whether this goal can be achieved, however, this does\nnot necessarily undermine the status of information causality motivated as a\nmethodological principle in something like the way that I have done in this\npaper. 
In particular, information causality would be especially compelling if\none could draw a relation between the degree of violation of the principle and\nthe degree of `superfluousness' of the resulting theory of communication\ncomplexity with an eye to distinguishing `weak' violations of the Tsirelson\nbound from more objectionable violations. Thus there is much work to do in any\ncase.\n\nI close with the following more fundamental objection. Why should nature care\nwhether beings such as us are able to engage in communication complexity\ntheory? In fact there is no fundamental reason why nature should\ncare. Analogously, there is no fundamental reason why nature should care\nwhether beings such as us can do physics. But the goal of empirical science is\nnot to derive the structure of the world or its constituent entities by way of\na priori or `self-evident' principles. It is rather to make sense of and\nexplain our experience of and in the world, as well as to enable us to predict\nand to control aspects of that world for whatever particular practical purposes\nwe may have. In fact we have a science which is called physics. And in fact we\nhave a science which we refer to as communication complexity theory. The\nprinciple of mutually independent existence, and analogously the principle of\ninformation causality, may be thought of as answers to the question: `how are\nsuch facts possible?' in the sense that they aim to identify the necessary\nsuppositions implicit in \\emph{any} such theories and in our practice of\nthem.\\footnote{Cf. \\citet[pp. B20-B21]{kant1781german}.}\n\nThat said, these may not be definitive answers. The necessity of presupposing\nEinstein's mutual independence and local action principles for the purposes of\ntheory testing has been questioned by \\citet[]{howard1989}. In a similar way,\none might argue that it is wrong to think that the existence of correlated\nsystems which `strongly' violate the Tsirelson bound would make any science of\ncommunication complexity impossible. Rather, one might conclude instead that\nthe idea of a science of communication complexity that is wholly independent of\n\\emph{computational} complexity-theoretic considerations is unachievable. This,\none might argue, is the real lesson to take away from the fact that an\nexponential number of PR-correlated systems is required to implement Alice's\nand Bob's solution to their guessing game. Yet even if this were all that we\nlearned from information causality, it would still represent a significant\nadvance in our understanding of the structure of our theoretical knowledge---an\nunderstanding of the physically motivated constraints under which two\nmathematical \\emph{theories} may be regarded as mutually independent.\n\n\\section{Summary}\n\\label{sec:conc}\n\nAbove I have argued that the principle of information causality has not yet\nbeen sufficiently motivated to play the role of a foundational principle of\nnature, and I have described a way in which one might begin to provide it with\nsuch a motivation. More specifically I described an argument, due to\nDemopoulos, to the effect that the no-signalling condition can be viewed as a\ngeneralisation, appropriate to an irreducibly statistical theory, of Einstein's\nprinciple of mutually independent existence interpreted as a constraint on\nphysical practice. I then argued that information causality can in turn be\nmotivated as a further generalisation of no-signalling that is appropriate to a\ntheory of communication. 
I closed by describing a number of important obstacles\nthat are required to be overcome if the project of establishing information\ncausality as a foundational principle is to succeed.\n\n\\bibliographystyle{apa-good}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsection{Cumulative Absolute Value Estimations of $\\delta$-Bounded Polynomials}\n\\label{s:values}\n\nTo bound the total error of the algorithm, in Section~\\ref{s:pip_together}, we need an upper bound on $\\sum_{j \\in N} \\tb_j$, i.e., on the sum of the cumulative absolute value estimations at the top level of the decomposition of a $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$. In this section, we show that $\\sum_{j \\in N} \\tb_j = O(d^2 \\beta n^{d-1+\\delta})$. This upper bound is an immediate consequence of an upper bound of $O(d\\beta n^{d-1+\\delta})$ on the sum of the absolute value estimations, for each level $\\ell$ of the decomposition of $p(\\vec{x})$. \n\nFor simplicity and clarity, we assume, in the statements of the lemmas below and in their proofs, that the hidden constant in the definition of $p(\\vec{x})$ as a $\\delta$-bounded polynomial is $1$. If this constant is some $\\kappa \\geq 1$, we should multiply the upper bounds of Lemma~\\ref{l:abs_est} and Lemma~\\ref{l:cum_est} by $\\kappa$. \n\\begin{lemma}\\label{l:abs_est}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. Also let $\\rho_{i_1 \\ldots i_{d-\\ell}}$ and $\\rb_{i_1 \\ldots i_{d-\\ell}}$ be the estimations and absolute value estimations, for all levels $\\ell \\in \\{1, \\ldots, d-1\\}$ of the decomposition of $p(\\vec{x})$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, computed by Algorithm~\\ref{alg:estimate} and used in ($d$-LP) and ($d$-IP). Then, for each level $\\ell \\geq 1$, the sum of the absolute value estimations is:\n\\begin{equation}\\label{eq:abs_est}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}} \\leq\n \\ell\\beta n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the level $\\ell$ of the decomposition. For the basis, we recall that for $\\ell = 1$, level-$1$ absolute value estimations are defined as \n\\[ \\rb_{i_1\\ldots i_{d-1}} = \\sum_{j \\in N} |\\rho_{i_1\\ldots i_{d-1} j}|\n = \\sum_{j \\in N} |c_{i_1\\ldots i_{d-1} j}|\n\\]\nThis holds because, in Algorithm~\\ref{alg:estimate}, each level-$0$ estimation $\\rho_{i_1\\ldots i_{d-1} i_d}$ is equal to the coefficient $c_{i_1\\ldots i_{d-1} i_d}$ of the corresponding degree-$d$ monomial. Hence, if $p(\\vec{x})$ is a degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial, we have that\n\\begin{equation}\\label{eq:bounded_level1}\n \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\rb_{i_1\\ldots i_{d-1}}\n = \\sum_{(i_1, \\ldots, i_{d-1}, j) \\in N^{d}} |c_{i_1\\ldots i_{d-1} j}|\n \\leq \\beta n^{d-1+\\delta}\n\\end{equation}\nThe upper bound holds because by the definition of degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomials, for each $\\ell \\in \\{ 0, \\ldots, d \\}$, the sum, over all monomials of degree $d-\\ell$, of the absolute values of their coefficients is $O(\\beta n^{d-1+\\delta})$ (and assuming that the hidden constant is $1$, at most $\\beta n^{d-1+\\delta}$). 
In (\\ref{eq:bounded_level1}), we use this upper bound for $\\ell = 0$ and for the absolute values of the coefficients of all degree-$d$ monomials in the expansion of $p(\\vec{x})$.\n\nFor the induction step, we consider any level $\\ell \\geq 2$. We observe that any binary vector $\\vec{x}$ satisfies the level-$(\\ell-1)$ constraints of ($d$-LP) and ($d$-IP) with certainty, if for each level-$(\\ell-1)$ estimation,\n\\[ \n \\rho_{i_1\\ldots i_{d-\\ell}j} \\leq \n c_{i_1\\ldots i_{d-\\ell}j} + \\sum_{l \\in N} |\\rho_{i_1\\ldots i_{d-\\ell} j l}| =\n c_{i_1\\ldots i_{d-\\ell}j} + \\rb_{i_1\\ldots i_{d-\\ell} j}\n\\]\nWe also note that we can easily enforce such upper bounds on the estimations computed by Algorithm~\\ref{alg:estimate}. Since each level-$\\ell$ absolute value estimation is defined as $\\rb_{i_1\\ldots i_{d-\\ell}} = \\sum_{j \\in N} |\\rho_{i_1\\ldots i_{d-\\ell}j}|$, we obtain that for any level $\\ell \\geq 2$,\n\\begin{eqnarray*}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}} & \\leq &\n \\sum_{(i_1, \\ldots, i_{d-\\ell}, j) \\in N^{d-\\ell+1}} \\left(|c_{i_1\\ldots i_{d-\\ell}j}| +\n \\rb_{i_1\\ldots i_{d-\\ell} j} \\right)\\\\\n & \\leq & \\beta n^{d-1+\\delta} + (\\ell-1)\\beta n^{d-1+\\delta}\n = \\ell\\beta n^{d-1+\\delta}\n\\end{eqnarray*}\nFor the second inequality, we use the induction hypothesis and that since $p(\\vec{x})$ is $\\beta$-smooth and $\\delta$-bounded, the sum, over all monomials of degree $d-\\ell+1$, of the absolute values $|c_{i_1\\ldots i_{d-\\ell}j}|$ of their coefficients $c_{i_1\\ldots i_{d-\\ell}j}$ is at most $\\beta n^{d-1+\\delta}$. We also use the fact that the estimations are computed over the decomposition tree of the polynomial $p(\\vec{x})$. Hence, each coefficient $c_{i_1\\ldots i_{d-\\ell}j}$ is included only once in the sum.\n\\qed\\end{proof}\n\\begin{lemma}\\label{l:cum_est}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. Also let $\\tb_{i_1 \\ldots i_{d-\\ell}}$ be the cumulative absolute value estimations, for all levels $\\ell \\in \\{1, \\ldots, d-1\\}$ of the decomposition of $p(\\vec{x})$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, corresponding to the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ computed by Algorithm~\\ref{alg:estimate} and used in ($d$-LP) and ($d$-IP). Then, \n\\begin{equation}\\label{eq:cum_est}\n \\sum_{j \\in N} \\tb_{j} \\leq d(d-1)\\beta n^{d-1+\\delta}\/2\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing induction on the level $\\ell$ of the decomposition and Lemma~\\ref{l:abs_est}, we show that for each level $\\ell \\geq 1$, the sum of the cumulative absolute value estimations is:\n\\begin{equation}\\label{eq:cum_est2}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\tb_{i_1\\ldots i_{d-\\ell}} \\leq\n (\\ell+1)\\ell\\beta n^{d-1+\\delta}\/2\n\\end{equation}\nThe conclusion of the lemma is obtained by applying (\\ref{eq:cum_est2}) for the first level of the decomposition of $p(\\vec{x})$, i.e., for $\\ell = d-1$.\n\nFor the basis, we recall that for $\\ell = 1$, level-$1$ cumulative absolute value estimations are defined as\n\\( \\tb_{i_1 \\ldots i_{d-1}} = \\rb_{i_1 \\ldots i_{d-1}} \\). 
Using Lemma~\\ref{l:abs_est}, we obtain that:\n\\[ \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\tb_{i_1\\ldots i_{d-1}} =\n \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\rb_{i_1\\ldots i_{d-1}}\n \\leq \\beta n^{d-1+\\delta}\n\\]\nWe recall (see also Section~\\ref{s:pip_value}) that for each $\\ell \\geq 2$, level-$\\ell$ cumulative absolute value estimations are defined as\n\\( \\tb_{i_1 \\ldots i_{d-\\ell}} = \\rb_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\). \nSumming up over all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we obtain that for any level $\\ell \\geq 2$,\n\\begin{eqnarray*}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\tb_{i_1\\ldots i_{d-\\ell}} & = &\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\left( \\rb_{i_1\\ldots i_{d-\\ell}}\n +\n \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\right) \\\\\n & = & \n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}}\n + \\sum_{(i_1, \\ldots, i_{d-\\ell}, j) \\in N^{d-\\ell-1}} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\\\\n & \\leq & \\ell \\beta n^{d-1+\\delta} + \\ell(\\ell-1)\\beta n^{d-1+\\delta}\/2 \n = (\\ell+1)\\ell\\beta n^{d-1+\\delta}\/2\\,,\n\\end{eqnarray*}\nwhere the inequality follows from Lemma~\\ref{l:abs_est} and from the induction hypothesis.\n\\qed\\end{proof}\n\\section{Introduction}\n\\label{s:intro}\n\nThe complexity of Constraint Satisfaction Problems (CSPs) has long played a\ncentral role in theoretical computer science and it quickly became evident that\nalmost all interesting CSPs are NP-complete \\cite{S78}. Thus, since\napproximation algorithms are one of the standard tools for dealing with NP-hard\nproblems, the question of approximating the corresponding optimization problems\n({\\sc Max}-CSP) has attracted significant interest over the years \\cite{T10}.\nUnfortunately, most CSPs typically resist this approach: not only are they\nAPX-hard \\cite{KSW97}, but quite often the best polynomial-time approximation\nratio we can hope to achieve for them is that guaranteed by a trivial random\nassignment \\cite{H01}. This striking behavior is often called\n\\emph{approximation resistance}.\n\nApproximation resistance and other APX-hardness results were originally\nformulated in the context of \\emph{polynomial-time} approximation. It would\ntherefore seem that one conceivable way for working around such barriers could\nbe to consider approximation algorithms running in super-polynomial time, and\nindeed super-polynomial approximation for NP-hard problems is a topic that has\nbeen gaining more attention in the literature recently\n\\cite{CLN13,BEP09,BCEP13,CKW09,CP10,CPW11}. Unfortunately, the existence of\nquasi-linear PCPs with small soundness error, first given in the work of\nMoshkovitz and Raz \\cite{MR10}, established that approximation resistance is a\nphenomenon that carries over even to \\emph{sub-exponential} time approximation,\nessentially ``killing'' this approach for CSPs. For instance, we now know that\nif, for any $\\eps>0$, there exists an algorithm for {\\sc Max}-3-SAT with ratio\n$7\/8+\\eps$ running in time $2^{n^{1-\\eps}}$ this would imply the existence of a\nsub-exponential \\emph{exact} algorithm for 3-SAT, disproving the Exponential\nTime Hypothesis (ETH). 
It therefore seems that sub-exponential time\ndoes not improve the approximability of CSPs, or put another way, for many CSPs\nobtaining a very good approximation ratio requires almost as much time as\nsolving the problem exactly.\n\nDespite this grim overall picture, many positive approximation results for CSPs\nhave appeared over the years, by taking advantage of the special structure of\nvarious classes of instances. One notable line of research in this vein is the\nwork on the approximability of \\emph{dense} CSPs, initiated by Arora, Karger\nand Karpinski \\cite{AKK99} and independently by de la Vega \\cite{V96}. The\ntheme of this set of results is that the problem of maximizing the number of\nsatisfied constraints in a CSP instance with arity $k$ (\\kCSP) becomes\nsignificantly easier if the instance contains $\\Omega(n^k)$ constraints. More\nprecisely, it was shown in \\cite{AKK99} that \\kCSP\\ admits a\n\\emph{polynomial-time approximation scheme} (PTAS) on dense instances, that is,\nan algorithm which for any constant $\\eps>0$ can in time polynomial in $n$\nproduce an assignment that satisfies $(1-\\eps)\\mathrm{OPT}$ constraints.\nSubsequent work produced a stream of positive\n\\cite{VK00,BVK03,AVKK03,CKSV12,CKSV11,FK96,AFK02,DFJ98,II05} (and some negative\n\\cite{VK99,AA07}) results on approximating CSPs which are in general APX-hard,\nshowing that dense instances form an island of tractability where many\noptimization problems which are normally APX-hard admit a PTAS.\n\n\\noindent\\textbf{Our contribution}: The main goal of this paper is to use the\nadditional power afforded by sub-exponential time to extend this island of\ntractability as much as possible. To demonstrate the main result, consider a\nconcrete CSP such as {\\sc Max}-3-SAT. As mentioned, we know that\nsub-exponential time does not in general help us approximate this problem: the\nbest ratio achievable in, say, $2^{\\sqrt{n}}$ time is still 7\/8. On the other\nhand, this problem admits a PTAS on instances with $\\Omega(n^3)$ clauses. This\ndensity condition is, however, rather strict, so the question we would like to\nanswer is the following: Can we efficiently approximate a larger (and more\nsparse) class of instances while using sub-exponential time?\n\nIn this paper we provide a positive answer to this question, not just for {\\sc\nMax}-3-SAT, but also for any \\kCSP\\ problem. Specifically, we show that for\nany constants $\\delta\\in (0,1]$, $\\eps>0$ and integer $k\\ge 2$, there is an\nalgorithm which achieves a $(1-\\eps)$ approximation of \\kCSP\\ instances with\n$\\Omega(n^{k-1+\\delta})$ constraints in time $2^{O(n^{1-\\delta}\\ln n\n\/\\eps^3)}$. A notable special case of this result is for $k=2$, where the input\ninstance can be described as a graph. For this case, which contains classical\nproblems such as \\MC, our algorithm gives an approximation scheme running in\ntime $2^{O(\\frac{n}{\\Delta}\\ln n\/\\eps^3)}$ for graphs with average degree\n$\\Delta$. In other words, this is an approximation scheme that runs in time\n\\emph{sub-exponential in $n$} even for almost sparse instances where the\naverage degree is $\\Delta = n^\\delta$ for some small $\\delta>0$. More\ngenerally, our algorithm provides a trade-off between the time available and\nthe density of the instances we can handle. 
For graph problems ($k=2$) this\ntrade-off covers the whole spectrum from dense to almost sparse instances,\nwhile for general \\kCSP, it covers instances where the number of constraints\nranges from $\\Theta(n^{k})$ to $\\Theta(n^{k-1})$.\n\n\\noindent\\textbf{Techniques}: The algorithms in this paper are an extension and\ngeneralization of the \\emph{exhaustive sampling} technique given by Arora,\nKarger and Karpinski \\cite{AKK99}, who introduced a framework of smooth\npolynomial integer programs to give a PTAS for dense \\kCSP. The basic idea of\nthat work can most simply be summarized for \\MC. This problem can be recast as\nthe problem of maximizing a quadratic function over $n$ boolean variables.\nThis is of course a hard problem, but suppose that we could somehow ``guess''\nfor each vertex how many of its neighbors belong in each side of the cut. This\nwould make the quadratic problem linear, and thus much easier. The main\nintuition now is that, if the graph is dense, we can take a sample of $O(\\log\nn)$ vertices and guess their partition in the optimal solution. Because every\nnon-sample vertex will have ``many'' neighbors in this sample, we can with high\nconfidence say that we can estimate the fraction of neighbors on each side for\nall vertices. The work of de la Vega \\cite{V96} uses exactly this algorithm\nfor \\MC, greedily deciding the vertices outside the sample. The work of\n\\cite{AKK99} on the other hand pushed this idea to its logical conclusion,\nshowing that it can be applied to degree-$k$ polynomial optimization problems,\nby recursively turning them into linear programs whose coefficients are\nestimated from the sample. The linear programs are then relaxed to produce\nfractional solutions, which can be rounded back into an integer solution to the\noriginal problem.\n\nOn a very high level, the approach we follow in this paper retraces the steps\nof \\cite{AKK99}: we formulate \\kCSP\\ as a degree-$k$ polynomial maximization\nproblem; we then recursively decompose the degree-$k$ polynomial problem into\nlower-degree polynomial optimization problems, estimating the coefficients by\nusing a sample of variables for which we try all assignments; the result of\nthis process is an integer linear program, for which we obtain a fractional\nsolution in polynomial time; we then perform randomized rounding to obtain an\ninteger solution that we can use for the original problem.\n\nThe first major difference between our approach and \\cite{AKK99} is of course\nthat we need to use a larger sample. This becomes evident if one considers \\MC\\\non graphs with average degree $\\Delta$. In order to get the sampling scheme to\nwork we must be able to guarantee that each vertex outside the sample has\n``many'' neighbors inside the sample, so we can safely estimate how many of\nthem end up on each side of the cut. For this, we need a sample of size at\nleast $n\\log n\/\\Delta$. Indeed, we use a sample of roughly this size, and exhausting\nall assignments to the sample is what dominates the running time of our\nalgorithm. As we argue later, not only is the sample size we use essentially\ntight, but more generally the running time of our algorithm is essentially\noptimal (under the ETH).\n\nNevertheless, using a larger sample is not in itself sufficient to extend the\nscheme of \\cite{AKK99} to non-dense instances. 
As observed in \\cite{AKK99} ``to\nachieve a multiplicative approximation for dense instances it suffices to\nachieve an additive approximation for the nonlinear integer programming\nproblem''. In other words, one of the basic ingredients of the analysis of\n\\cite{AKK99} is that additive approximation errors of the order $\\eps n^k$ can\nbe swept under the rug, because we know that in a dense instance the optimal\nsolution has value $\\Omega(n^k)$. This is \\emph{not} true in our case, and we\nare therefore forced to give a more refined analysis of the error of our\nscheme, independently bounding the error introduced in the first step\n(coefficient estimation) and the last (randomized rounding).\n\nA further complication arises when considering \\kCSP\\ for $k>2$. The scheme of\n\\cite{AKK99} recursively decomposes such dense instances into lower-order\npolynomials which retain the same ``good'' properties. This seems much harder\nto extend to the non-dense case, because intuitively if we start from a\nnon-dense instance the decomposition could end up producing some dense and\nsome sparse sub-problems. Indeed we present a scheme that approximates \\kCSP\\\nwith $\\Omega(n^{k-1+\\delta})$ constraints, but does not seem to extend to\ninstances with fewer than $n^{k-1}$ constraints. As we will see, there seems\nto be a fundamental complexity-theoretic justification explaining exactly why\nthis decomposition method cannot be extended further.\n\nTo ease presentation, we first give all the details of our scheme for the\nspecial case of \\MC\\ in Section \\ref{s:maxcut}. We then present the full\nframework for approximating \\emph{smooth polynomials} in Section \\ref{s:pip};\nthis implies the approximation result for \\kSAT\\ and more generally \\kCSP. We\nthen show in Section \\ref{s:kdense} that it is possible to extend our framework\nto handle \\kDense, a problem which can be expressed as the maximization of a\npolynomial subject to linear constraints. For this problem we obtain an\napproximation scheme which, given a graph with average degree $\\Delta=n^\\delta$\ngives a $(1-\\eps)$ approximation in time $2^{O(n^{1-\\delta\/3}\\ln n\/\\eps^3)}$.\nObserve that this extends the result of \\cite{AKK99} for this problem not only\nin terms of the density of the input instance, but also in terms of $k$ (the\nresult of \\cite{AKK99} required that $k=\\Omega(n)$).\n\n\\noindent\\textbf{Hardness}: What makes the results of this paper more\ninteresting is that we can establish that in many ways they are essentially\nbest possible, if one assumes the ETH. In particular, there are at least two\nways in which one may try to improve on these results further: one would be to\nimprove the running time of our algorithm, while another would be to extend the\nalgorithm to the range of densities it cannot currently handle. In Section\n\\ref{s:lower} we show that both of these approaches would face significant\nbarriers. Our starting point is the fact that (under ETH) it takes exponential\ntime to approximate \\MC\\ arbitrarily well on sparse instances, which is a\nconsequence of the existence of quasi-linear PCPs. By manipulating such \\MC\\\ninstances, we are able to show that for \\emph{any} average degree\n$\\Delta=n^{\\delta}$ with $\\delta<1$ the time needed to approximate \\MC\\\narbitrarily well almost matches the performance of our algorithm. Furthermore,\nstarting from sparse \\MC\\ instances, we can produce instances of \\kSAT\\ with\n$O(n^{k-1})$ clauses while preserving hardness of approximation. 
This gives a
complexity-theoretic justification for our difficulties in decomposing \kCSP\
instances with fewer than $n^{k-1}$ constraints.
\section{Approximating the \kDense\ in Almost Sparse Graphs}
\label{s:kdense}

In this section, we show how an extension of the approximation algorithms we
have presented can be used to approximate the \kDense\ problem in
$\delta$-almost sparse graphs. Recall that this is a problem also handled in
\cite{AKK99}, but only for the case where $k=\Omega(n)$. The reason that
smaller values of $k$ are not handled by the scheme of \cite{AKK99} for dense
graphs is that when $k=o(n)$ the optimal solution has objective value much
smaller than the additive error of $\eps n^2$ inherent in the scheme.

Here we obtain a sub-exponential time approximation scheme that works on graphs
with $\Omega(n^{1+\delta})$ edges \emph{for all} $k$ by judiciously combining
two approaches: when $k$ is relatively large, we use a sampling approach
similar to \MC; when $k$ is small, we can resort to the na\"ive algorithm that
tries all $n\choose k$ possible solutions. We select (with some foresight) the
threshold between the two algorithms to be $k=\Omega(n^{1-\delta/3})$, so that
in the end we obtain an approximation scheme with running time of
$2^{O(n^{1-\delta/3}\ln n)}$, that is, slightly slower than the approximation
scheme for \MC. It is clear that the brute-force algorithm achieves this
running time for $k=O(n^{1-\delta/3})$, so in the remainder we focus on the
case of large $k$.

The \kDense\ problem in a graph $G(V, E)$ is equivalent to maximizing, over all
binary vectors $\vec{x} \in \{0, 1\}^n$, the $n$-variate degree-$2$ $1$-smooth
polynomial
\( p(\vec{x}) = \sum_{\{i, j\} \in E} x_i x_j \)\,,
under the linear constraint $\sum_{j \in V} x_j = k$. Setting a variable $x_i$ to $1$ indicates that the vertex $i$ is included in the set $C$ that induces a dense subgraph $G[C]$ of $k$ vertices. Next, we assume that $G$ is $\delta$-almost sparse and thus has $m = \Omega(n^{1+\delta})$ edges. As usual, $\vec{x}^\ast$ denotes the optimal solution.

The algorithm follows the same general approach and the same basic steps as the algorithm for \MC\ in Section~\ref{s:maxcut}. In the following, we highlight only the differences.

\smallskip\noindent{\bf Obtaining Estimations by Exhaustive Sampling.} We first
observe that if $G$ is $\delta$-almost sparse and $k = \Omega(n^{1-\delta/3})$,
then a random subset of $k$ vertices contains $\Omega(n^{1+\delta/3})$ edges in
expectation. Hence, we can assume that the optimal solution induces at least
$\Omega(n^{1+\delta/3})$ edges.

Working as in Section~\ref{s:cut_sampling}, we use exhaustive sampling and
obtain, for each vertex $j \in V$, an estimation $\rho_j$ of the number of $j$'s neighbors in
the optimal dense subgraph, i.e., $\rho_j$ is an estimation of $\hat{\rho}_j =
\sum_{i \in N(j)} x_i^\ast$. For the analysis, we apply Lemma~\ref{l:cut_sampling}
with $n^{\delta/3}$, instead of $\Delta$, or in other words, we use a sample of
size $\Theta(n^{1-\delta/3}\ln n)$. The reason is that we can only tolerate an
additive error of $\eps n^{1+\delta/3}$, by the lower bound on the optimal
solution observed in the previous paragraph.
Then, the running time due to exhaustive sampling is $2^{O(n^{1-\delta/3} \ln n)}$.
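For illustration only, the following Python sketch (ours, not part of the formal scheme; the sampling constant \texttt{c}, the helper names and the adjacency-list representation are arbitrary choices) shows how a sample $R$ could be drawn and how the estimations $\rho_j$ could be computed from one guessed $0/1$ assignment to the sampled vertices; the algorithm enumerates all such assignments and keeps the best solution obtained downstream.
\begin{verbatim}
import math, random
from collections import Counter

def sample_vertices(n, delta, c=1.0):
    # multiset R of Theta(n^{1 - delta/3} * ln n) vertices, drawn with replacement
    r = max(1, int(c * n ** (1.0 - delta / 3.0) * math.log(n)))
    return [random.randrange(n) for _ in range(r)]

def estimate_rho(adj, R, x_R):
    # rho_j = (n / |R|) * (number of sampled neighbours i of j with x_i = 1),
    # counting multiplicities; x_R is the guessed 0/1 assignment on R
    n, r = len(adj), len(R)
    cnt = Counter(R)
    return [(n / r) * sum(cnt[i] * x_R[i] for i in adj[j] if i in cnt)
            for j in range(n)]
\end{verbatim}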
\n\nThus, by Lemma~\\ref{l:cut_sampling} and the discussion following it in\nSection~\\ref{s:cut_sampling}, we obtain that for all $\\e_1, \\e_2 > 0$, if we\nuse a sample of the size $\\Theta(n^{1-\\delta\/3}\\ln n \/(\\e^2_1 \\e_2))$, with\nprobability at least $1 - 2\/n^2$, the following holds for all estimations\n$\\rho_j$ and all vertices $j \\in V$:\n\\begin{equation}\\label{eq:dense_sample}\n (1-\\e_1)\\rho_j - \\e_2 n^{\\delta\/3} \\leq \\hat{\\rho}_j \\leq\n (1+\\e_1)\\rho_j + \\e_2 n^{\\delta\/3}\n\\end{equation}\n\\noindent{\\bf Linearizing the Polynomial.}\nApplying Proposition~\\ref{pr:decomposition}, we can write the polynomial $p(\\vec{x})$ as $p(\\vec{x}) = \\sum_{j \\in V} x_j p_j(\\vec{x})$, where $p_j(\\vec{x}) = \\sum_{i \\in N(j)} x_i$ is a degree-$1$ $1$-smooth polynomial that indicates how many neighbors of vertex $j$ are in $C$ in the solution corresponding to $\\vec{x}$. Then, using the estimations $\\rho_j$ of $\\sum_{i \\in N(j)} x^\\ast_i$\\,, obtained by exhaustive sampling, we have that approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{alignat*}{3}\n& &\\max \\sum_{j \\in V} &y_j \\rho_j & & \\tag{IP$'$}\\\\\n&\\mathrm{s.t.}\\quad &\n(1-\\e_1) \\rho_j - \\e_2 n^{\\delta\/3} \\leq \\sum_{i \\in N(j)} &y_i \\leq (1+\\e_1) \\rho_j + \\e_2 n^{\\delta\/3} \\quad & \\forall &j \\in V\\\\\n& & \\sum_{i \\in N(j)} &y_i = k \\\\\n& & &y_j \\in \\{0, 1\\} &\\forall & j \\in V\n\\end{alignat*}\nBy (\\ref{eq:dense_sample}), if the sample size is $|R| = \\Theta(n^{1-\\delta\/3}\\ln n\/(\\e^2_1 \\e_2))$, with probability at least $1-2\/n^2$, the densest subgraph $\\vec{x}^\\ast$ is a feasible solution to (IP$'$) with the estimations $\\rho_j$ obtained by restricting $\\vec{x}^\\ast$ to the vertices in $R$. In the following, we let (LP$'$) denote the Linear Programming relaxation of (IP$'$), where each $y_j \\in [0, 1]$.\n\n\\smallskip\\noindent{\\bf The Number of Edges in Feasible Solutions.}\nWe next show that the objective value of any feasible solution $\\vec{y}$ to (LP$'$) is close to $p(\\vec{y})$. Therefore, assuming that $\\vec{x}^\\ast$ is feasible, any good approximation to (IP$'$) is a good approximation to the densest subgraph. \n\\begin{lemma}\\label{l:dense_approx}\nLet $\\rho_1, \\ldots, \\rho_n$ be non-negative numbers and $\\vec{y}$ be any feasible solution to (LP\\,$'$). Then,\n\\begin{equation}\\label{eq:dense_approx}\n p(\\vec{y}) \\in (1\\pm\\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{1+\\delta\/3}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing the decomposition of $p(\\vec{y})$ and the formulation of (LP$'$), we obtain that:\n\\begin{align*}\n p(\\vec{y}) = \\sum_{j \\in V} y_j \\sum_{i \\in N(j)} y_i \\ & \\in\n \\sum_{j \\in V} y_j \\left((1\\pm \\e_1) \\rho_j \\pm \\e_2 n^{\\delta\/3}\\right) \\\\\n &= (1\\pm \\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{\\delta\/3} \\sum_{j \\in V} y_j \\\\\n &\\in (1\\pm \\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{1+\\delta\/3}\n\\end{align*}\nThe first inclusion holds because $\\vec{y}$ is feasible for (LP$'$) and thus, $\\sum_{i \\in N(j)} y_i \\in (1\\pm \\e_1)\\rho_j \\pm \\e_2n^{\\delta\/3}$, for all $j$. 
The second inclusion holds because $\sum_{j \in V} y_j \leq n$.
\qed\end{proof}
\noindent{\bf Randomized Rounding of the Fractional Optimum.}
As a last step, we show how to round the fractional optimum $\vec{y}^\ast =
(y^\ast_1, \ldots, y^\ast_n)$ of (LP$'$) to an integral solution $\vec{z} =
(z_1, \ldots, z_n)$ that almost satisfies the constraints of (IP$'$). To this
end, we use randomized rounding, as for \MC. We obtain that with probability at
least $1 - 2/n^{8}$,
\begin{equation}\label{eq:k_deviation}
 k - 2\sqrt{n\ln(n)} \leq
 \sum_{j \in V} z_j \leq
 k + 2\sqrt{n\ln(n)}
\end{equation}
Specifically, the inequality above follows from the Chernoff bound in footnote~\ref{foot:chernoff}, with $t = 2\sqrt{n \ln(n)}$, since $\Exp[\sum_{j \in V} z_j] = \sum_{j \in V} y^\ast_j = k$.
Moreover, applying Lemma~\ref{l:rounding} with $q = 0$, $\beta = 1$, $k = 7$, $\delta/3$ (instead of $\delta$) and $\alpha = \max\{ \e_1, \e_2/2\}$, and using that $\vec{y}^\ast$ is a feasible solution to (LP$'$) and that $\e_1 \in (0, 1)$, we obtain that with probability at least $1 - 2/n^{8}$, for each vertex $j$,
\begin{equation}\label{eq:z_deviation}
 (1-\e_1)^2\rho_j - 2\e_2 n^{\delta/3} \leq
 \sum_{i \in N(j)} z_i \leq
 (1+\e_1)^2\rho_j + 2\e_2 n^{\delta/3}
\end{equation}
By the union bound, the integral solution $\vec{z}$ obtained from $\vec{y}^\ast$ by randomized rounding satisfies (\ref{eq:k_deviation}) and (\ref{eq:z_deviation}), for all vertices $j$, with probability at least $1 - 3/n^7$.

By linearity of expectation, $\Exp[ \sum_{j \in V} z_j \rho_j ] = \sum_{j \in V} y^\ast_j \rho_j$. Moreover, since the probability that $\vec{z}$ does not satisfy
either (\ref{eq:k_deviation}) or (\ref{eq:z_deviation}), for some vertex $j$, is at most $3/n^7$, and since the objective value of (IP$'$) is at most $n^2$, the expected value of a rounded solution $\vec{z}$ that satisfies (\ref{eq:k_deviation}) and (\ref{eq:z_deviation}), for all vertices $j$, is at least $\sum_{j \in V} y^\ast_j \rho_j - 1$ (assuming that $n \geq 2$). As in \MC, such an integral solution $\vec{z}$ can be found in (deterministic) polynomial time using the method of conditional expectations (see \cite{Rag88}).

The following is similar to Lemma~\ref{l:dense_approx} and shows that the objective value $p(\vec{z})$ of the rounded solution $\vec{z}$ is close to the optimal value of (LP$'$).
\begin{lemma}\label{l:dense_approx2}
Let $\vec{y}^\ast$ be the optimal solution of (LP$'$) and let $\vec{z}$ be the integral solution obtained from $\vec{y}^\ast$ by randomized rounding (and the method of conditional expectations).
Then,
\begin{equation}\label{eq:dense_approx2}
 p(\vec{z}) \in (1 \pm \e_1)^2 \sum_{j \in V} y^\ast_j \rho_j \pm 3\e_2 n^{1+\delta/3}
\end{equation}
\end{lemma}
\begin{proof}
Using the decomposition of $p(\vec{z})$ and an argument similar to that in the proof of Lemma~\ref{l:dense_approx}, we obtain that:
\begin{align*}
 p(\vec{z}) = \sum_{j \in V} z_j \sum_{i \in N(j)} z_i \ \ & \in
 \sum_{j \in V} z_j \left((1\pm \e_1)^2 \rho_j \pm 2\e_2 n^{\delta/3} \right) \\
 &= (1\pm \e_1)^2 \sum_{j \in V} z_j \rho_j
 \pm 2 \e_2 n^{\delta/3} \sum_{j \in V} z_j\\
 &\in (1\pm \e_1)^2 \sum_{j \in V} z_j \rho_j \pm 2\e_2 n^{1+\delta/3} \\
 &\in (1\pm \e_1)^2 \sum_{j \in V} y^\ast_j \rho_j \pm 3\e_2 n^{1+\delta/3}
\end{align*}
The first inclusion holds because $\vec{z}$ satisfies (\ref{eq:z_deviation}) for all $j \in V$. For the second inclusion, we use that $\sum_{j \in V} z_j \leq n$. For the last inclusion, we recall that $\sum_{j \in V} z_j \rho_j \geq \sum_{j \in V} y^\ast_j \rho_j - 1$ and assume that $n$ is sufficiently large.
\qed \end{proof}
\noindent{\bf Putting Everything Together.}
Therefore, for any $\eps > 0$, if $G$ is $\delta$-almost sparse and $k =
\Omega(n^{1-\delta/3})$, the algorithm described above computes estimations $\rho_j$
such that the densest subgraph $\vec{x}^\ast$ is a feasible solution to (IP$'$)
whp. Hence, by the analysis above, the algorithm computes a slightly
infeasible solution approximating the number of edges in the densest subgraph
with $k$ vertices within a multiplicative factor of $(1-\e_1)^2$ and an
additive error of $\e_2 n^{1+\delta/3}$. Setting $\e_1 = \e_2 = \eps/8$, the
number of edges in the subgraph induced by $\vec{z}$ satisfies the following
with probability at least $1-2/n^2$\,:
\[ p(\vec{z}) \geq (1-\e_1)^2 \sum_{j \in V} y_j^\ast \rho_j - 3 \e_2 n^{1+\delta/3} \geq
	(1-\e_1)^2 \sum_{j \in V} x_j^\ast \rho_j - 3 \e_2 n^{1+\delta/3} \geq
	p(\vec{x}^\ast) - \eps n^{1+\delta/3} \geq
	(1-\eps) p(\vec{x}^\ast)
\]
The first inequality follows from Lemma~\ref{l:dense_approx2}, the second
inequality holds because $\vec{y}^\ast$ is the optimal solution to (LP$'$) and
$\vec{x}^\ast$ is feasible for (LP$'$), the third inequality follows from
Lemma~\ref{l:dense_approx} and the fourth inequality holds because the optimal
solution induces $\Omega(n^{1+\delta/3})$ edges.

This solution is infeasible by at most $2\sqrt{n \ln n}=o(k)$ vertices and can
become feasible by adding or removing at most so many vertices and
$O(n^{1/2+\delta})$ edges.
\begin{theorem}\label{th:densest}
Let $G(V, E)$ be a $\delta$-almost sparse graph with $n$ vertices. Then, for any integer $k \geq 1$ and for any $\eps > 0$, we can compute, in time $2^{O(n^{1-\delta/3} \ln n/\eps^3)}$ and with probability at least $1-2/n^2$, an induced subgraph $\vec{z}$ of $G$ with $k$ vertices whose number of edges satisfies $p(\vec{z}) \geq (1-\eps)p(\vec{x}^\ast)$, where $\vec{x}^\ast$ denotes the optimal solution to the \kDense\ problem in $G$.
\end{theorem}

\section{Lower Bounds} \label{s:lower}

In this section we give some lower bound arguments which show that the
algorithmic schemes we have presented are, in some senses, likely to be almost
optimal.
Our working complexity assumption will be the Exponential Time\nHypothesis (ETH), which states that there is no algorithm that can solve an\ninstance of 3-SAT of size $n$ in time $2^{o(n)}$.\n\nOur starting point is the following inapproximability result,\nwhich can be obtained using known PCP constructions and standard reductions.\n\\begin{theorem} \\label{thm:start}\nThere exist constants $c,s\\in[0,1]$ with $c>s$ such that for all $\\epsilon>0$\nwe have the following: if there exists an algorithm which, given an $n$-vertex\n$5$-regular instance of \\MC, can distinguish between the case where a solution\ncuts at least a $c$ fraction of the edges and the case where all solutions cut\nat most an $s$ fraction of the edges in time $2^{n^{1-\\epsilon}}$ then the ETH\nfails.\n\\end{theorem}\n\\begin{proof}\nThis inapproximability result follows from the construction of quasi-linear\nsize PCPs given, for example, in \\cite{Dinur05}. In particular, we use as\nstarting point a result explicitly formulated in \\cite{MR10} as follows:\n``Solving 3-\\textsc{SAT} on inputs of size $N$ can be reduced to distinguishing\nbetween the case that a 3CNF formula of size $N^{1+o(1)}$ is satisfiable and\nthe case that only $\\frac{7}{8} + o(1)$ fraction of its clauses are\nsatisfiable''.\n\nTake an arbitrary 3-\\textsc{SAT} instance of size $N$, which according to the\nETH cannot be solved in time $2^{o(N)}$. By applying the aforementioned PCP\nconstruction we obtain a 3CNF formula of size $N^{1+o(1)}$ which is either\nsatisfiable or far from satisfiable. Using standard constructions\n(\\cite{PY91,BK99}) we can reduce this formula to a $5$-regular graph $G(V,E)$\nwhich will be a \\MC\\ instance (we use degree $5$ here for concreteness, any\nreasonable constant would do). We have that $|V|$ is only a constant factor\napart from the size of the 3CNF formula. At the same time, there exist\nconstants $c,s$ such that, if the formula was satisfiable $G$ has a cut of\n$c|E|$ edges, while if the formula was far from satisfiable $G$ has no cut with\nmore than $s|E|$ edges. If there exists an algorithm that can distinguish\nbetween these two cases in time $2^{|V|^{1-\\epsilon}}$ the whole procedure\nwould run in $2^{N^{1-\\epsilon+o(1)}}$ and would allow us to decide if the\noriginal formula was satisfiable.\n\\qed\\end{proof}\nThere are two natural ways in which one may hope to improve or extend the\nalgorithms we have presented so far: relaxing the density requirement or\ndecreasing the running time. We prove in what follows that none of them can improve the results presented so far.\n\n\\subsection{Arity Higher Than Two}\n\nFirst, recall that the algorithm we have given for\n\\kCSP\\ works in the density range between $n^k$ and $n^{k-1}$. Here, we give a\nreduction establishing that it's unlikely that this can be improved.\n\\begin{theorem} \\label{thm:hard1}\nThere exists $r>1$ such that for all $\\epsilon>0$ and all (fixed) integers\n$k\\ge 3$ we have the following: if there exists an algorithm which approximates\n\\textsc{Max-$k$-SAT} on instances with $\\Omega(n^{k-1})$ clauses in time\n$2^{n^{1-\\epsilon}}$ then the ETH fails.\n\\end{theorem}\n\\begin{proof}\nConsider the \\MC\\ instance of Theorem \\ref{thm:start}, and transform it into a\n2-\\textsc{SAT} instance in the standard way: the set of variables is the set of\nvertices of the graph and for each edge $(u,v)$ we include the two clauses\n$(\\neg u \\lor v)$ and $(u\\lor \\neg v)$. 
This is an instance of 2-\\textsc{SAT}\nwith $n$ variables and $5n$ clauses and there exist constants $c,s$ such that\neither there exists an assignment satisfying a $c$ fraction of the clauses or\nall assignments satisfy at most an $s$ fraction of the clauses.\n\nFix a constant $k$ and introduce to the instance $(k-2)n$ new variables\n$x_{(i,j)}$, $i\\in\\{1,\\ldots,k-2\\}$, $j\\in\\{1,\\ldots,n\\}$. We perform the\nfollowing transformation to the 2-\\textsc{SAT} instance: for each clause\n$(l_1\\lor l_2)$ and for each tuple\n$(i_1,i_2,\\ldots,i_{k-2})\\in\\{1,\\ldots,n\\}^{k-2}$ we construct $2^{k-2}$ new\nclauses of size $k$. The first two literals of these clauses are always $l_1,\nl_2$. The remaining $k-2$ literals consist of the variables\n$x_{(1,i_1)},x_{(2,i_2)},\\ldots,x_{(k,i_{k-2})}$, where in each clause we pick\na different set of variables to be negated. In other words, to construct a\nclause of the new instance we select a clause of the original instance, one\nvariable from each of the $(k-2)$ groups of $n$ new variables, and a subset of\nthese variables that will be negated. The new instance consists of all the size\n$k$ clauses constructed in this way, for all possible choices.\n\nFirst, observe that the new instance has $5n^{k-1}2^k$ clauses and $(k-1)n$\nvariables, therefore, for each fixed $k$ it satisfies the density conditions of\nthe theorem. Furthermore, consider any assignment of the original formula. Any\nsatisfied clause has now been replaced by $2^k$ satisfied clauses, while for an\nunsatisfied clause any assignment to the new variables satisfies exactly\n$2^k-1$ clauses. Thus, for fixed $k$, there exist constants $s',c'$ such that\neither a $c'$ fraction of the clauses of the new instance is satisfiable or at\nmost a $s'$ fraction is. If there exists an approximation algorithm with ratio\nbetter than $c'\/s'$ running in time $2^{N^{1-\\epsilon}}$, where $N$ is the\nnumber of variables of the new instance, we could use it to decide the original\ninstance in a time bound that would disprove the ETH.\n\\qed\\end{proof}\n\n\\subsection{Almost Tight Time Bounds}\n\nA second possible avenue for improvement may be to consider potential speedups\nof our algorithms. Concretely, one may ask whether the (roughly)\n$2^{\\sqrt{n}\\ln n}$ running time guaranteed by our scheme for \\MC\\ on graphs\nwith average degree $\\sqrt{n}$ is best possible. We give an almost tight answer\nto such questions via the following theorem.\n\\begin{theorem} \\label{thm:hard2}\nThere exists $r>1$ such that for all $\\epsilon>0$ we have the following: if\nthere exists an algorithm which, for some $\\Delta=o(n)$, approximates \\MC\\ on\n$n$-vertex $\\Delta$-regular graphs in time $2^{(n\/\\Delta)^{1-\\epsilon}}$ then\nthe ETH fails.\n\\end{theorem}\n\\begin{proof}[Theorem \\ref{thm:hard2}]\nWithout loss of generality we prove the theorem for the case when the degree is\na multiple of 10.\n\nConsider an instance $G(V,E)$ of \\MC\\ as given by Theorem \\ref{thm:start}. Let\n$n=|V|$ and suppose that the desired degree is $d=10\\Delta$, where $\\Delta$ is\na function of $n$. We construct a graph $G'$ as follows: for each vertex $u\\in\nV$ we introduce $\\Delta$ new vertices $u_1,\\ldots,u_\\Delta$ as well as\n$5\\Delta$ ``consistency'' vertices $c^u_1,\\ldots,c^u_{5\\Delta}$. 
For every edge\n$(u,v)\\in E$ we add all edges $(u_i,v_j)$ for $i,j\\in\\{1,\\ldots,\\Delta\\}$.\nAlso, for every $u\\in V$ we add all edges $(u_i,c^u_j)$, for\n$i\\in\\{1,\\ldots,\\Delta\\}$ and $j\\in\\{1,\\ldots,5\\Delta\\}$. This completes the\nconstruction.\n\nThe graph we have constructed is $10\\Delta$-regular and is made up of $6\\Delta\nn$ vertices. Let us examine the size of its optimal cut. Consider an optimal\nsolution and observe that, for a given $u\\in V$ all the vertices $c^u_i$ can be\nassumed to be on the same side of the cut, since they all have the same\nneighbors. Furthermore, for a given $u\\in V$, all vertices $u_i$ can be assumed\nto be on the same side of the cut, namely on the side opposite that of $c^u_i$,\nsince the vertices $c^u_i$ are a majority of the neighborhood of each $u_i$.\nWith this observation it is easy to construct a one-to-one correspondence\nbetween cuts in $G$ and locally optimal cuts in $G'$.\n\nConsider now a cut that cuts $c|E|$ edges of $G$. If we set all $u_i$ of $G'$\non the same side as $u$ is placed in $G$ we cut $c|E|\\Delta^2$ edges of the\nform $(u_i,v_j)$. Furthermore, by placing the $c^u_i$ on the opposite side of\n$u_i$ we cut $5\\Delta^2 |V|$ edges. Thus the max cut of $G'$ is at least\n$c|E|\\Delta^2 + 5\\Delta^2 |V|$. Using the previous observations on locally\noptimal cuts of $G'$ we can conclude that if $G'$ has a cut with $s|E|\\Delta^2\n+ 5\\Delta^2|V|$ edges, then $G$ has a cut with $s|E|$ edges. Using the fact\nthat $2|E|=5|V|$ (since $G$ is 5-regular) we get a constant ratio between the\nsize of the cut of $G'$ in the two cases. Call that ratio $r$.\n\nSuppose now that we have an approximation algorithm with ratio better than $r$\nwhich, given an $N$-vertex $d$-regular graph runs in time\n$2^{(N\/d)^{1-\\epsilon}}$. Giving our constructed instance as input to this\nalgorithm would allow to decide the original instance in time\n$2^{n^{1-\\epsilon}}$.\n\\qed\\end{proof}\nTheorem \\ref{thm:hard2} establishes that our approach is essentially\noptimal, not just for average degree $\\sqrt{n}$, but for any other intermediate\ndensity.\n\\section{Approximating \\MC\\ in Almost Sparse Graphs}\n\\label{s:maxcut}\n\nIn this section, we apply our approach to \\MC, which serves as a convenient example and allows us to present the intuition and the main ideas.\n\nThe \\MC\\ problem in a graph $G(V, E)$ is equivalent to maximizing, over all binary vectors $\\vec{x} \\in \\{0, 1\\}^n$, the following $n$-variate degree-$2$ $2$-smooth polynomial\n\\[ p(\\vec{x}) = \\sum_{\\{i, j\\} \\in E} (x_i (1 - x_j) + x_j (1 - x_i)) \\]\nSetting a variable $x_i$ to $0$ indicates that the corresponding vertex $i$ is assigned to the left side of the cut, i.e., to $S_0$, and setting $x_i$ to $1$ indicates that vertex $i$ is assigned to the right side of the cut, i.e., to $S_1$.\nWe assume that $G$ is $\\delta$-almost sparse and thus, has $m = \\Omega(n^{1+\\delta})$ edges and average degree $\\Delta = \\Omega(n^\\delta)$.\nMoreover, if $m = \\Theta(n^{1+\\delta})$, $p(\\vec{x})$ is $\\delta$-bounded, since for each edge $\\{i, j\\} \\in E$, the monomial $x_ix_j$ appears with coefficient $-2$ in the expansion of $p$, and for each vertex $i \\in V$, the monomial $x_i$ appears with coefficient $\\deg(i)$ in the expansion of $p$. Therefore, for $\\ell \\in \\{1, 2\\}$, the sum of the absolute values of the coefficients of all monomials of degree $\\ell$ is at most $2m = O(n^{1+\\delta})$. 
\n\nNext, we extend and generalize the approach of \\cite{AKK99} and show how to $(1-\\eps)$-approximate the optimal cut, for any constant $\\eps > 0$, in time $2^{O(n\\ln n\/(\\Delta \\eps^3))}$ (see Theorem~\\ref{th:maxcut}). The running time is subexponential in $n$, if $G$ is $\\delta$-almost sparse. \n\n\\subsection{Outline and Main Ideas}\n\\label{s:cut_main}\n\nApplying Proposition~\\ref{pr:decomposition}, we can write the smooth polynomial $p(\\vec{x})$ as\n\\begin{equation}\\label{eq:cut_decomp}\np(\\vec{x}) = \\sum_{j \\in V} x_j (\\deg(j) - p_j(\\vec{x}))\\,,\n\\end{equation}\nwhere $p_j(\\vec{x}) = \\sum_{i \\in N(j)} x_i$ is a degree-$1$ $1$-smooth polynomial that indicates how many neighbors of vertex $j$ are in $S_1$ in the solution corresponding to $\\vec{x}$. The key observation, due to \\cite{AKK99}, is that if we have a good estimation $\\rho_j$ of the value of each $p_j$ at the optimal solution $\\vec{x}^\\ast$, then approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{alignat*}{3}\n& &\\max \\sum_{j \\in V} &y_j (\\deg(j) - \\rho_j) & & \\tag{IP}\\\\\n&\\mathrm{s.t.}\\quad &\n(1-\\e_1) \\rho_j - \\e_2 \\Delta \\leq \\sum_{i \\in N(j)} &y_i \\leq (1+\\e_1) \\rho_j + \\e_2 \\Delta \\quad & \\forall &j \\in V\\\\\n& & &y_j \\in \\{0, 1\\} &\\forall & j \\in V\n\\end{alignat*}\nThe constants $\\e_1, \\e_2 > 0$ and the estimations $\\rho_j \\geq 0$ are computed so that the optimal solution $\\vec{x}^\\ast$ is a feasible solution to (IP). We always assume wlog. that $0 \\leq \\sum_{i \\in N(j)} y_i \\leq \\deg(j)$, i.e., we let the lhs of the $j$-th constraint be $\\max\\{ (1-\\e_1) \\rho_j - \\e_2 \\Delta, 0 \\}$ and the rhs be $\\min\\{ (1+\\e_1) \\rho_j + \\e_2 \\Delta, \\deg(j) \\}$. Clearly, if $\\vec{x}^\\ast$ is a feasible solution to (IP), it remains a feasible solution after this modification. We let (LP) denote the Linear Programming relaxation of (IP), where each $y_j \\in [0, 1]$.\n\nThe first important observation is that for any $\\e_1, \\e_2 > 0$, we can compute estimations $\\rho_j$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to (IP) with high probability (see Lemma~\\ref{l:cut_sampling}). The second important observation is that the objective value of any feasible solution $\\vec{y}$ to (LP) is close to $p(\\vec{y})$ (see Lemma~\\ref{l:cut_approx}). 
Namely, for any feasible solution $\\vec{y}$, $\\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\approx p(\\vec{y})$.\n\nBased on these observations, the approximation algorithm performs the following steps:\n\\begin{enumerate}\n\\item We guess a sequence of estimations $\\rho_1, \\ldots, \\rho_n$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to the resulting (IP) (see Section~\\ref{s:cut_sampling} for the details).\n\\item We formulate (IP) and find an optimal fractional solution $\\vec{y}^\\ast$ to (LP).\n\\item We obtain an integral solution $\\vec{z}$ by applying randomized rounding to $\\vec{y}^\\ast$ (and the method of conditional probabilities, as in \\cite{RT87,Rag88}).\n\\end{enumerate}\nTo see that this procedure indeed provides a good approximation to $p(\\vec{x}^\\ast)$, we observe that:\n\\begin{equation}\\label{eq:cut_est}\n p(\\vec{z}) \\approx \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\approx\n \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\geq\n \\sum_{j \\in V} x^\\ast_j (\\deg(j) - \\rho_j) \\approx\n p(\\vec{x}^\\ast)\\,,\n\\end{equation}\nThe first approximation holds because $\\vec{z}$ is an (almost) feasible solution to (IP) (see Lemma~\\ref{l:cut_approx2}), the second approximation holds because the objective value of $\\vec{z}$ is a good approximation to the objective value of $\\vec{y}^\\ast$, due to randomized rounding, the inequality holds because $\\vec{x}^\\ast$ is a feasible solution to (LP) and the final approximation holds because $\\vec{x}^\\ast$ is a feasible solution to (IP).\n\nIn Sections~\\ref{s:cut_linearization}~and~\\ref{s:cut_rounding}, we make the notion of approximation precise so that $p(\\vec{z}) \\geq (1-\\eps) p(\\vec{x}^\\ast)$. As for the running time, it is dominated by the time required for the exhaustive-sampling step. Since we do not know $\\vec{x}^\\ast$, we need to run the steps (2) and (3) above for every sequence of estimations produced by exhaustive sampling. So, the outcome of the approximation scheme is the best of the integral solutions $\\vec{z}$ produced in step (3) over all executions of the algorithm. In Section~\\ref{s:cut_sampling}, we show that a sample of size $O(n \\ln n\/\\Delta)$ suffices for the computation of estimations $\\rho_j$ so that $\\vec{x}^\\ast$ is a feasible solution to (IP) with high probability. If $G$ is $\\delta$-almost sparse, the sample size is sublinear in $n$ and the running time is subexponential in $n$.\n\n\\subsection{Obtaining Estimations $\\rho_j$ by Exhaustive Sampling}\n\\label{s:cut_sampling}\n\nTo obtain good estimations $\\rho_j$ of the values $p_j(\\vec{x}^\\ast) = \\sum_{i \\in N(j)} x_i^\\ast$, i.e., of the number of $j$'s neighbors in $S_1$ in the optimal cut, we take a random sample $R \\subseteq V$ of size $\\Theta(n \\ln n \/ \\Delta)$ and try exhaustively all possible assignments of the vertices in $R$ to $S_0$ and $S_1$. If $\\Delta = \\Omega(n^\\delta)$, we have $2^{O(n\\ln n \/ \\Delta)} = 2^{O(n^{1-\\delta} \\ln n)}$ different assignments. For each assignment, described by a $0\/1$ vector $\\vec{x}$ restricted to $R$, we compute an estimation $\\rho_j = (n \/ |R|) \\sum_{i \\in N(j) \\cut R} x_i$, for each vertex $j \\in V$, and run the steps (2) and (3) of the algorithm above. Since we try all possible assignments, one of them agrees with $\\vec{x}^\\ast$ on all vertices of $R$. So, for this assignment, the estimations computed are $\\rho_j = (n \/ |R|) \\sum_{i \\in N(j) \\cut R} x^\\ast_i$. 
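To make the exhaustive-sampling step concrete, the following Python sketch (ours; \texttt{solve\_lp} and \texttt{round\_lp} are placeholders for steps (2) and (3) above, and an adjacency-list representation of $G$ is assumed) outlines the outer loop that enumerates all assignments to the sample $R$ and keeps the best rounded cut found.
\begin{verbatim}
from itertools import product
from collections import Counter

def exhaustive_sampling_maxcut(adj, R, solve_lp, round_lp):
    # adj[j]: set of neighbours of vertex j; R: sampled multiset of vertices
    n, r = len(adj), len(R)
    cnt = Counter(R)
    sampled = sorted(cnt)
    best_value, best_cut = -1, None
    for bits in product((0, 1), repeat=len(sampled)):   # all 2^{|R|} guesses
        x_R = dict(zip(sampled, bits))
        # rho_j = (n / |R|) * (sampled neighbours of j assigned to S_1)
        rho = [(n / r) * sum(cnt[i] * x_R[i] for i in adj[j] if i in cnt)
               for j in range(n)]
        y = solve_lp(adj, rho)          # fractional optimum of (LP), step (2)
        z = round_lp(adj, rho, y)       # rounded 0/1 solution, step (3)
        value = sum(1 for j in range(n) for i in adj[j] if i < j and z[i] != z[j])
        if value > best_value:
            best_value, best_cut = value, z
    return best_cut, best_value
\end{verbatim}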
\nThe following shows that for these estimations, we have that $p_j(\\vec{x}^\\ast) \\approx \\rho_j$ with high probability.\n\\begin{lemma}\\label{l:cut_sampling}\nLet $\\vec{x}$ be any binary vector. For all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(1\/(\\alpha^2_1 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n \\ln n \/ \\Delta$ vertices chosen uniformly at random with replacement from $V$. For any vertex $j$, if $\\rho_j = (n \/ r) \\sum_{i \\in N(j) \\cut R} x_i$ and $\\hat{\\rho}_j = \\sum_{i \\in N(j)} x_i$, with probability at least $1 - 2\/n^{3}$,\n\\begin{equation}\\label{eq:cut_sample_cor}\n (1-\\alpha_1)\\hat{\\rho}_j - (1-\\alpha_1)\\alpha_2 \\Delta \\leq \\rho_j \\leq\n (1+\\alpha_1)\\hat{\\rho}_j + (1+\\alpha_1)\\alpha_2 \\Delta\n\\end{equation}\n\\end{lemma}\n\\begin{proofsketch} If $\\hat{\\rho}_j = \\Omega(\\Delta)$, the neighbors of $j$\nare well-represented in the random sample $R$ whp., because $|R| = \\Theta(n\\ln\nn\/\\Delta)$. Therefore, $|\\hat{\\rho}_j - \\rho_j| \\leq \\alpha_1\\hat{\\rho}_j$\nwhp., by Chernoff bounds. If $\\hat{\\rho}_j = o(\\Delta)$, the lower bound in\n(\\ref{eq:cut_sample_cor}) becomes trivial, since it is non-positive, while\n$\\rho_j \\geq 0$. As for the upper bound, we increase some $x_i$ to $x'_i \\in\n[0, 1]$, so that $\\hat{\\rho}'_j = \\alpha_2 \\Delta$. Then, $\\rho'_j \\leq\n(1+\\alpha_1)\\hat{\\rho}'_j = (1+\\alpha_1)\\alpha_2 \\Delta$ whp., by the same\nChernoff bound as above. Now the upper bound of (\\ref{eq:cut_sample_cor})\nfollows from $\\rho_j \\leq \\rho'_j$, which holds for any instantiation of the\nrandom sample $R$. The formal proof follows from Lemma~\\ref{l:sampling}, with\n$\\beta = 1$, $d = 2$ and $q = 0$, and with $\\Delta$ instead of $n^\\delta$.\n\\qed\\end{proofsketch}\nWe note that $\\rho_j \\geq 0$ and always assume that $\\rho_j \\leq \\deg(j)$, since if $\\rho_j$ satisfies (\\ref{eq:cut_sample_cor}), $\\min\\{ \\rho_j, \\deg(j) \\}$ also satisfies (\\ref{eq:cut_sample_cor}). For all $\\e_1, \\e_2 > 0$, setting $\\alpha_1 = \\frac{\\e_1}{1+\\e_1}$ and $\\alpha_2 = \\e_2$ in Lemma~\\ref{l:cut_sampling}, and taking the union bound over all vertices, we obtain that for $\\gamma = \\Theta(1\/(\\e^2_1 \\e_2))$, with probability at least $1 - 2\/n^2$, the following holds for all vertices $j \\in V$:\n\\begin{equation}\\label{eq:cut_sample}\n (1-\\e_1)\\rho_j - \\e_2 \\Delta \\leq \\hat{\\rho}_j \\leq\n (1+\\e_1)\\rho_j + \\e_2 \\Delta\n\\end{equation}\nTherefore, with probability at least $1-2\/n^2$, the optimal cut $\\vec{x}^\\ast$ is a feasible solution to (IP) with the estimations $\\rho_j$ obtained by restricting $\\vec{x}^\\ast$ to the vertices in $R$.\n\n\\subsection{The Cut Value of Feasible Solutions}\n\\label{s:cut_linearization}\n\nWe next show that the objective value of any feasible solution $\\vec{y}$ to (LP) is close to $p(\\vec{y})$. Therefore, assuming that $\\vec{x}^\\ast$ is feasible, any good approximation to (IP) is a good approximation to the optimal cut.\n\\begin{lemma}\\label{l:cut_approx}\nLet $\\rho_1, \\ldots, \\rho_n$ be non-negative numbers and $\\vec{y}$ be any feasible solution to (LP). 
Then,\n\\begin{equation}\\label{eq:cut_approx}\n p(\\vec{y}) \\in \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm 2(\\e_1 + \\e_2) m\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing (\\ref{eq:cut_decomp}) and the formulation of (LP), we obtain that:\n\\begin{align*}\n p(\\vec{y}) = \\sum_{j \\in V} y_j \\left(\\deg(j) - \\sum_{i \\in N(j)} y_i\\right) & \\in\n \\sum_{j \\in V} y_j \\left(\\deg(j) - ((1\\mp \\e_1) \\rho_j \\mp \\e_2 \\Delta) \\right) \\\\\n &= \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm \\e_1 \\sum_{j \\in V} y_j \\rho_j\n \\pm \\e_2 \\Delta \\sum_{j \\in V} y_j \\\\\n &\\in \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm 2(\\e_1 + \\e_2) m\n\\end{align*}\nThe first inclusion holds because $\\vec{y}$ is feasible for (LP) and thus, $\\sum_{i \\in N(j)} y_i \\in (1\\pm \\e_1)\\rho_j \\pm \\e_2\\Delta$, for all $j$. The third inclusion holds because\n\\[ \\sum_{j \\in V} y_j \\rho_j \\leq \\sum_{j \\in V} \\rho_j\n \\leq \\sum_{j \\in V} \\deg(j) = 2m\\,,\\]\nsince each $\\rho_j$ is at most $\\deg(j)$, and because $\\Delta \\sum_{j \\in V} y_j \\leq \\Delta n = 2m$.\n\\qed\\end{proof}\n\n\\subsection{Randomized Rounding of the Fractional Optimum}\n\\label{s:cut_rounding}\n\nAs a last step, we show how to round the fractional optimum $\\vec{y}^\\ast = (y^\\ast_1, \\ldots, y^\\ast_n)$ of (LP) to an integral solution $\\vec{z} = (z_1, \\ldots, z_n)$ that almost satisfies the constraints of (IP).\n\nTo this end, we use randomized rounding, as in \\cite{RT87}. In particular, we set independently each $z_j$ to $1$, with probability $y_j^\\ast$, and to $0$, with probability $1-y_j^\\ast$. By Chernoff bounds%\n\\footnote{\\label{foot:chernoff}We use the following standard Chernoff bound (see e.g., \\cite[Theorem~1.1]{DP09}): Let $Y_1, \\ldots, Y_k$ independent random variables in $[0, 1]$ and let $Y = \\sum_{j=1}^k Y_j$. Then for all $t > 0$, $\\Prob[|Y - \\Exp[Y]| > t] \\leq 2\\exp(-2t^2\/k)$.},\nwe obtain that with probability at least $1 - 2\/n^{8}$, for each vertex $j$,\n\\begin{equation}\\label{eq:deviation}\n (1-\\e_1)\\rho_j - \\e_2\\Delta - 2\\sqrt{\\deg(j)\\ln(n)} \\leq\n \\sum_{i \\in N(j)} z_i \\leq\n (1+\\e_1)\\rho_j + \\e_2\\Delta + 2\\sqrt{\\deg(j)\\ln(n)}\n\\end{equation}\nSpecifically, the inequality above follows from the Chernoff bound in footnote~\\ref{foot:chernoff}, with $k = \\deg(j)$ and $t = 2\\sqrt{\\deg(j)\\ln(n)}$, since $\\Exp[\\sum_{i \\in N(j)} z_j] = \\sum_{i \\in N(j)} y^\\ast_j \\in (1\\pm\\e_1)\\rho_j \\pm \\e_2\\Delta$. By the union bound, (\\ref{eq:deviation}) is satisfied with probability at least $1 - 2\/n^7$ for all vertices $j$.\n\nBy linearity of expectation, $\\Exp[ \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) ] = \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j)$. Moreover, since the probability that $\\vec{z}$ does not satisfy (\\ref{eq:deviation}) for some vertex $j$ is at most $2\/n^7$ and since the objective value of (IP) is at most $n^2$, the expected value of a rounded solution $\\vec{z}$ that satisfies (\\ref{eq:deviation}) for all vertices $j$ is least $\\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$ (assuming that $n \\geq 2$). Using the method of conditional expectations, as in \\cite{Rag88}, we can find in (deterministic) polynomial time an integral solution $\\vec{z}$ that satisfies (\\ref{eq:deviation}) for all vertices $j$ and has $\\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\geq \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$. 
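As an illustration of this rounding step, the following Python sketch (ours) performs the independent Bernoulli rounding and checks the relaxed constraints (\ref{eq:deviation}); the derandomization via the method of conditional expectations used in the text is omitted, so the sketch simply retries, relying on the fact that, by the union bound above, a rounded solution satisfying (\ref{eq:deviation}) is found after very few trials with high probability. A function like this, with the parameters $\e_1, \e_2, \Delta$ bound (e.g., via \texttt{functools.partial}), could play the role of the \texttt{round\_lp} placeholder in the earlier sketch.
\begin{verbatim}
import math, random

def bernoulli_rounding(y_star):
    # independently set z_j = 1 with probability y*_j
    return [1 if random.random() < yj else 0 for yj in y_star]

def satisfies_deviation(adj, rho, z, eps1, eps2, Delta):
    # check constraint (eq:deviation) for every vertex j
    n = len(adj)
    for j in range(n):
        s = sum(z[i] for i in adj[j])
        slack = 2.0 * math.sqrt(len(adj[j]) * math.log(n))
        if not ((1 - eps1) * rho[j] - eps2 * Delta - slack <= s
                <= (1 + eps1) * rho[j] + eps2 * Delta + slack):
            return False
    return True

def round_lp(adj, rho, y_star, eps1, eps2, Delta, trials=100):
    for _ in range(trials):
        z = bernoulli_rounding(y_star)
        if satisfies_deviation(adj, rho, z, eps1, eps2, Delta):
            return z
    return bernoulli_rounding(y_star)   # fallback; failure is very unlikely
\end{verbatim}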
Next, we sometimes abuse the notation and refer to such an integral solution $\\vec{z}$ (computed deterministically) as the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding.\n\nThe following is similar to Lemma~\\ref{l:cut_approx} and shows that the objective value $p(\\vec{z})$ of the rounded solution $\\vec{z}$ is close to the optimal value of (LP).\n\\begin{lemma}\\label{l:cut_approx2}\nLet $\\vec{y}^\\ast$ be the optimal solution of (LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then,\n\\begin{equation}\\label{eq:cut_approx2}\n p(\\vec{z}) \\in \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\pm 3(\\e_1 + \\e_2) m\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing (\\ref{eq:deviation}) and an argument similar to that in the proof of Lemma~\\ref{l:cut_approx}, we obtain that:\n\\begin{align*}\n p(\\vec{z}) & = \\sum_{j \\in V} z_j \\left(\\deg(j) - \\sum_{i \\in N(j)} z_i\\right) \\\\\n & \\in \\sum_{j \\in V} z_j \\left(\\deg(j) - \\left((1\\mp \\e_1) \\rho_j \\mp \\e_2 \\Delta \\mp 2\\sqrt{\\deg(j)\\ln(n)}\\right) \\right) \\\\\n &= \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\pm \\e_1 \\sum_{j \\in V} z_j \\rho_j\n \\pm \\e_2 \\Delta \\sum_{j \\in V} z_j \\pm 2\\sum_{j \\in V} z_j \\sqrt{\\deg(j)\\ln(n)}\\\\\n &\\in \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\pm (3\\e_1 + 2\\e_2) m \\\\\n &\\in \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\pm 3(\\e_1 + \\e_2) m \\\\\n\\end{align*}\nThe first inclusion holds because $\\vec{z}$ satisfies (\\ref{eq:deviation}) for all $j \\in V$. For the third inclusion, we use that $\\sum_{j \\in V} z_j \\rho_j \\leq \\sum_{j \\in V} \\deg(j) = 2m$, that $\\Delta \\sum_{i \\in V} z_i \\leq \\Delta n = 2m$ and that by Jensen's inequality,\n\\[\n 2 \\sum_{j \\in V} z_j \\sqrt{\\deg(j) \\ln n} \\leq\n \\sum_{j \\in V} \\sqrt{4\\,\\deg(j) \\ln n} \\leq\n \\sqrt{8 m n \\ln n} \\leq \\e_1 m\\,,\n\\]\nassuming that $n$ and $m = \\Omega(n^{1+\\delta})$ are sufficiently large. For the last inclusion, we recall that $\\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\geq \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$ and assume that $m$ is sufficiently large.\n\\qed\\end{proof}\n\n\\subsection{Putting Everything Together}\n\\label{s:together}\n\nTherefore, for any $\\eps > 0$, if $G$ is $\\delta$-almost sparse and $\\Delta = n^{\\delta}$, the algorithm described in Section~\\ref{s:cut_main}, with sample size $\\Theta(n \\ln n \/ (\\eps^3 \\Delta))$, computes estimations $\\rho_j$ such that the optimal cut $\\vec{x}^\\ast$ is a feasible solution to (IP) whp. Hence, by the analysis above, the algorithm approximates the value of the optimal cut $p(\\vec{x}^\\ast)$ within an additive term of $O(\\eps m)$. Specifically, setting $\\e_1 = \\e_2 = \\eps\/16$, the value of the cut $\\vec{z}$ produced by the algorithm satisfies the following with probability at least $1-2\/n^2$\\,:\n\\[ p(\\vec{z}) \\geq \\sum_{j \\in V} y_j^\\ast (\\deg(j) - \\rho_j) - 3 \\eps m\/8\n \\geq \\sum_{j \\in V} x_j^\\ast (\\deg(j) - \\rho_j) - 3 \\eps m\/8\n \\geq p(\\vec{x}^\\ast) - \\eps m \/ 2 \\geq (1-\\eps) p(\\vec{x}^\\ast)\n\\]\nThe first inequality follows from Lemma~\\ref{l:cut_approx2}, the second inequality holds because $\\vec{y}^\\ast$ is the optimal solution to (LP) and $\\vec{x}^\\ast$ is feasible for (LP), the third inequality follows from Lemma~\\ref{l:cut_approx} and the fourth inequality holds because the optimal cut has at least $m\/2$ edges. 
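For concreteness, the following is a minimal Python sketch of the whole pipeline for \\MC\\ (estimation from a sampled assignment, the LP of Section~\\ref{s:cut_main}, and randomized rounding). It is only an illustration: the function and variable names are made up, it relies on SciPy's \\texttt{linprog} to solve the LP, it treats the sample as a set of distinct vertices, and it omits the derandomization and the failure-probability bookkeeping discussed above.

\\begin{verbatim}
import itertools
import numpy as np
from scipy.optimize import linprog

def approx_max_cut(adj, eps, sample):
    # adj    : n x n 0/1 adjacency matrix (numpy array)
    # eps    : target accuracy; we use e1 = e2 = eps/16 as in the text
    # sample : list of distinct sampled vertices standing in for the multiset R
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    m = deg.sum() / 2.0
    Delta = 2.0 * m / n
    e1 = e2 = eps / 16.0
    r = len(sample)

    best_cut, best_val = None, -1.0
    for bits in itertools.product((0, 1), repeat=r):
        s = np.zeros(n)
        s[list(sample)] = bits
        # rho_j = (n/r) * |{sampled neighbours of j on side 1}|, capped at deg(j)
        rho = np.minimum((n / r) * (adj @ s), deg)

        # LP: maximize sum_j y_j (deg(j) - rho_j) s.t. the relaxed neighbourhood bounds
        c = -(deg - rho)                        # linprog minimizes
        A_ub = np.vstack([adj, -adj])
        b_ub = np.concatenate([(1 + e1) * rho + e2 * Delta,
                               -((1 - e1) * rho - e2 * Delta)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
        if not res.success:
            continue

        z = (np.random.rand(n) < res.x).astype(int)   # randomized rounding
        val = z @ (deg - adj @ z)                      # the actual cut value p(z)
        if val > best_val:
            best_cut, best_val = z, val
    return best_cut, best_val
\\end{verbatim}

The exhaustive enumeration over all $2^r$ assignments in the outer loop is what accounts for the $2^{O(n^{1-\\delta} \\ln n\/\\eps^3)}$ running time stated in the theorem below.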
\n\\begin{theorem}\\label{th:maxcut}\nLet $G(V, E)$ be a $\\delta$-almost sparse graph with $n$ vertices. Then, for any $\\eps > 0$, we can compute, in time $2^{O(n^{1-\\delta} \\ln n\/\\eps^3)}$ and with probability at least $1-2\/n^2$, a cut $\\vec{z}$ of $G$ with value $p(\\vec{z}) \\geq (1-\\eps)p(\\vec{x}^\\ast)$, where $\\vec{x}^\\ast$ is the optimal cut.\n\\end{theorem}\n\\section{Approximate Maximization of Smooth Polynomials}\n\\label{s:pip}\n\nGeneralizing the ideas applied to \\MC, we arrive at the main algorithmic result of the paper: an algorithm that approximately maximizes $\\beta$-smooth $\\delta$-bounded polynomials $p(\\vec{x})$ of degree $d$ over all binary vectors $\\vec{x} \\in \\{0, 1\\}^n$. The intuition and the main ideas are quite similar to those in Section~\\ref{s:maxcut}, but the details are significantly more involved, because we are forced to recursively decompose degree-$d$ polynomials until we eventually obtain a linear program. The structure of this section deliberately parallels that of Section~\\ref{s:maxcut}, so that the application to \\MC\\ can always serve as a reference for the intuition behind each step of the generalization.\n\nAs in \\cite{AKK99} (and as explained in Section~\\ref{s:prelim}), we exploit the fact that any $n$-variate degree-$d$ $\\beta$-smooth polynomial $p(\\vec{x})$ can be decomposed into $n$ degree-$(d-1)$ $\\beta$-smooth polynomials $p_j(\\vec{x})$ such that $p(\\vec{x}) = c + \\sum_{j \\in N} x_j p_j(\\vec{x})$ (Proposition~\\ref{pr:decomposition}).\nFor smooth polynomials of degree $d \\geq 3$, we apply Proposition~\\ref{pr:decomposition} recursively until we end up with smooth polynomials of degree $1$.\nSpecifically, using Proposition~\\ref{pr:decomposition}, we further decompose each degree-$(d-1)$ $\\beta$-smooth polynomial $p_{i_1}(\\vec{x})$ into $n$ degree-$(d-2)$ $\\beta$-smooth polynomials $p_{i_1 j}(\\vec{x})$ such that\n$p_{i_1}(\\vec{x}) = c_{i_1} + \\sum_{j \\in N} x_j p_{i_1 j}(\\vec{x})$, etc.\nAt the basis of the recursion, at depth $d-1$, we have $\\beta$-smooth polynomials $p_{i_1\\ldots i_{d-1}}(\\vec{x})$ of degree $1$, one for each $(d-1)$-tuple of indices $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$. These polynomials are written as\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{x}) = c_{i_1\\ldots i_{d-1}} +\n \\sum_{j \\in N} x_j c_{i_1\\ldots i_{d-1} j}\\,,\n\\]\nwhere $c_{i_1\\ldots i_{d-1} j}$ are constants (these are the coefficients of the corresponding degree-$d$ monomials in the expansion of $p(\\vec{x})$). Due to $\\beta$-smoothness, $|c_{i_1\\ldots i_{d-1} j}| \\leq \\beta$ and $|c_{i_1\\ldots i_{d-1}}| \\leq \\beta n$. Inductively, $\\beta$-smoothness implies that each polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ of degree $\\ell \\geq 1$ in this decomposition%\n\\footnote{This decomposition can be performed in a unique way if we insist that $i_1 < i_2 < \\cdots < i_{d-1}$, but this is not important for our analysis.}\nhas $|p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq (\\ell+1) \\beta n^{\\ell}$ for all binary vectors $\\vec{x} \\in \\{0, 1\\}^n$. 
Such a decomposition of $p(\\vec{x})$ in $\\beta$-smooth polynomials of degree $d-1, d-2, \\ldots, 1$ can be computed recursively in time $O(n^d)$.\n\n\\subsection{Outline and General Approach}\n\\label{s:pip_outline}\n\nAs in Section~\\ref{s:maxcut} (and as in \\cite{AKK99}), we observe that if we have good estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the values of each degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ at the optimal solution $\\vec{x}^\\ast$, for each level $\\ell = 1, \\ldots, d-1$ of the decomposition, then approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{align}\n\\max \\sum_{j \\in N} y_j \\rho_j \\tag{$d$-IP}\\\\\n\\mathrm{s.t.}\\ \\ \\ \\ \\ \\ \\\nc_{i_1} + \\sum_{j\\in N} y_j \\rho_{i_1 j} & \\in\n \\rho_{i_1} \\pm \\e_1 \\rb_{i_1} \\pm \\e_2 n^{d-1+\\delta} &\n \\forall i_1 \\in N \\notag \\\\\nc_{i_1i_2} + \\sum_{j\\in N} y_j \\rho_{i_1 i_2 j} & \\in\n\\rho_{i_1 i_2} \\pm \\e_1 \\rb_{i_1 i_2} \\pm \\e_2 n^{d-2+\\delta} &\n\\forall (i_1, i_2) \\in N \\times N \\notag \\\\\n\\cdots \\notag\\\\\nc_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j\\in N} y_j \\rho_{i_1 \\ldots i_{d-\\ell} j} & \\in\n\\rho_{i_1 \\ldots i_{d-\\ell}} \\pm \\e_1 \\rb_{i_1 \\ldots i_{d-\\ell}}\n\\pm \\e_2 n^{d-\\ell+\\delta} &\n\\forall (i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell} \\notag \\\\\n\\cdots \\notag\\\\\nc_{i_1 \\ldots i_{d-1}} + \\sum_{j\\in N} y_j c_{i_1 \\ldots i_{d-1} j} & \\in\n\\rho_{i_1 \\ldots i_{d-1}} \\pm \\e_1 \\rb_{i_1 \\ldots i_{d-1}}\\pm \\e_2 n^{\\delta} &\n\\forall (i_1, \\ldots, i_{d-1}) \\in N^{d-1} \\notag \\\\\ny_j & \\in \\{0, 1\\} & \\forall j \\in N \\notag\n\\end{align}\nIn ($d$-IP), we also use \\emph{absolute value estimations} $\\rb_{i_1 \\ldots i_{d-\\ell}}$. For each level $\\ell \\geq 1$ of the decomposition of $p(\\vec{x})$ and each tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we define the corresponding absolute value estimation as $\\rb_{i_1 \\ldots i_{d-\\ell}} = \\sum_{j \\in N} |\\rho_{i_1 \\ldots i_{d-\\ell}j}|$. Namely, each absolute value estimation $\\rb_{i_1 \\ldots i_{d-\\ell}}$ at level $\\ell$ is the sum of the absolute values of the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}j}$ at level $\\ell-1$.\nThe reason that we use absolute value estimations and set the lhs\/rhs of the constraints to $\\rho_{i_1 \\ldots i_{d-\\ell}} \\pm \\e_1 \\rb_{i_1 \\ldots i_{d-\\ell}}$, instead of simply to $(1\\pm\\e_1)\\rho_{i_1 \\ldots i_{d-\\ell}}$, is that we want to consider linear combinations of positive and negative estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ in a uniform way.\n\n\nSimilarly to Section~\\ref{s:maxcut}, the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ (and $\\rb_{i_1 \\ldots i_{d-\\ell}}$) are computed (by exhaustive sampling) and the constants $\\e_1, \\e_2 > 0$ are calculated so that the optimal solution $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP). In the following, we let $\\vec{\\rho}$ denote the sequence of estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$, for all levels $\\ell$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, that we use to formulate ($d$-IP). The absolute value estimations $\\rb_{i_1 \\ldots i_{d-\\ell}}$ can be easily computed from $\\vec{\\rho}$. 
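To fix the bookkeeping, the following short sketch computes the absolute value estimations from a table of estimations. The representation of $\\vec{\\rho}$ as a Python dictionary keyed by index tuples (with tuples of length $d$ holding the coefficients $c_{i_1\\ldots i_d}$) is an assumption made for the illustration, not something prescribed by the text.

\\begin{verbatim}
def absolute_value_estimations(rho, n, d):
    # rho: dict mapping index tuples (i_1, ..., i_{d-l}) to estimations;
    #      tuples of length d hold the monomial coefficients c_{i_1...i_d}.
    # Returns rbar[t] = sum_{j in N} |rho[t + (j,)]| for every tuple t with
    # len(t) < d, i.e. for every level l >= 1 of the decomposition.
    rbar = {}
    for t in rho:
        if len(t) < d:
            rbar[t] = sum(abs(rho.get(t + (j,), 0.0)) for j in range(n))
    return rbar
\\end{verbatim}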
We let ($d$-LP) denote the Linear Programming relaxation of ($d$-IP), where each $y_j \\in [0, 1]$, let $\\vec{x}^\\ast$ denote the binary vector that maximizes $p(\\vec{x})$, and let $\\vec{y}^\\ast \\in [0,1]^n$ denote the fractional optimal solution of ($d$-LP).\n\nAs in Section~\\ref{s:maxcut}, the approach is based on the facts that (i) for all constants $\\e_1, \\e_2 > 0$, we can compute estimations $\\vec{\\rho}$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) with high probability (see Lemma~\\ref{l:sampling} and Lemma~\\ref{l:sampling_gen}); and that (ii) the objective value of any feasible solution $\\vec{y}$ to ($d$-LP) is close to $p(\\vec{y})$ (see Lemma~\\ref{l:approx} and Lemma~\\ref{l:approx_gen}). Based on these observations, the general description of the approximation algorithm is essentially identical to the three steps described in Section~\\ref{s:cut_main}, and the reasoning behind the approximation guarantee is that of (\\ref{eq:cut_est}).\n\n\\subsection{Obtaining Estimations by Exhaustive Sampling}\n\\label{s:pip_sampling}\n\nWe first show how to use exhaustive sampling and obtain an estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the value at the optimal solution $\\vec{x}^\\ast$ of each degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ in the decomposition of $p(\\vec{x})$.\n\nAs in Section~\\ref{s:cut_sampling}, we take a sample $R$ from $N$, uniformly at random and with replacement. The sample size is $r = \\Theta(n^{1-\\delta} \\ln n)$. We try exhaustively all $0\/1$ assignments to the variables in $R$, which can be performed in time $2^r = 2^{O(n^{1-\\delta}\\ln n)}$.\n\n\\begin{algorithm}[t]\n\\caption{\\label{alg:estimate}Recursive estimation procedure $\\mathrm{Estimate}(p_{i_1\\ldots i_{d-\\ell}}(\\vec{x}), \\ell, R, \\vec{s})$}\n\\begin{algorithmic}\\normalsize\n \\Require $n$-variate degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$, $R \\subseteq N$ and a value $s_j \\in \\{0,1\\}$ for each $j \\in R$\n \\Ensure Estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of $p_{i_1\\ldots i_{d-\\ell}}(\\overline{\\vec{s}})$, where $\\overline{\\vec{s}}_R = \\vec{s}$\n\n \\medskip\\If{$\\ell = 0$} \\Return $c_{i_1\\ldots i_{d}}$\n \\ \\ \\ \/* $p_{i_1\\ldots i_{d}}(\\vec{x})$ is equal to the constant $c_{i_1\\ldots i_{d}}$ *\/ \\EndIf\n \\State compute decomposition\n $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x}) =\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} x_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{x})$\n \\For{all $j \\in N$}\n \\State $\\rho_{i_1\\ldots i_{d-\\ell}j} \\leftarrow \\mathrm{Estimate}(p_{i_1\\ldots i_{d-\\ell}j}(\\vec{x}), \\ell-1, R, \\vec{s})$\n \\EndFor\n \\State $\\rho_{i_1\\ldots i_{d-\\ell}} \\leftarrow c_{i_1\\ldots i_{d-\\ell}} + \\frac{|N|}{|R|} \\sum_{j \\in R} s_j \\rho_{i_1\\ldots i_{d-\\ell}j}$\\\\\n \\Return $\\rho_{i_1\\ldots i_{d-\\ell}}$\n\\end{algorithmic}\\end{algorithm}\n\nFor each assignment, described by a $0\/1$ vector $\\vec{s}$ restricted to $R$,\nwe compute the corresponding estimations recursively, as described in Algorithm~\\ref{alg:estimate}. 
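A direct Python transcription of Algorithm~\\ref{alg:estimate} may help make the data layout concrete. The nested-dictionary representation below (index tuples as keys for the constants $c_{i_1\\ldots i_{d-\\ell}}$ and, for tuples of length $d$, the monomial coefficients) is only an illustrative choice; note that a single call on the root already fills in the estimations of every polynomial in the decomposition, in the spirit of the remark that follows.

\\begin{verbatim}
def estimate_all(coef, t, level, n, R, s, out):
    # coef  : dict mapping an index tuple to the constant term c_t of the
    #         polynomial p_t (length-d tuples hold the monomial coefficients)
    # t     : index prefix (i_1, ..., i_{d-level}) identifying p_t; () is p itself
    # R, s  : the sampled indices and their 0/1 values s_j (s is a dict)
    # out   : dict collecting the estimation rho_t of every polynomial visited
    if level == 0:
        out[t] = coef.get(t, 0.0)          # p_t is the constant c_{i_1...i_d}
        return out[t]
    children = [estimate_all(coef, t + (j,), level - 1, n, R, s, out)
                for j in range(n)]
    out[t] = coef.get(t, 0.0) + (n / len(R)) * sum(s[j] * children[j] for j in R)
    return out[t]

# usage sketch: rho = {}; estimate_all(coef, (), d, n, R, s, rho)
\\end{verbatim}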
Specifically, for the basis level $\\ell = 0$ and each $d$-tuple $(i_1, \\ldots, i_d) \\in N^d$ of indices, the corresponding estimation is the coefficient $c_{i_1\\ldots i_d}$ of the monomial $x_{i_1}\\cdots x_{i_d}$ in the expansion of $p(\\vec{x})$.\nFor each level $\\ell$, $1 \\leq \\ell \\leq d-1$, and each $(d-\\ell)$-tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, given the level-$(\\ell-1)$ estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ of $p_{i_1\\ldots i_{d-\\ell} j}(\\overline{\\vec{s}})$, for all $j \\in N$, we compute the level-$\\ell$ estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of $p_{i_1\\ldots i_{d-\\ell}}(\\overline{\\vec{s}})$ from $\\vec{s}$ as follows:\n\\begin{equation}\\label{eq:estimation}\n \\rho_{i_1\\ldots i_{d-\\ell}} = c_{i_1\\ldots i_{d-\\ell}} +\n \\frac{n}{r} \\sum_{j \\in R} s_j \\rho_{i_1\\cdots i_{d-\\ell} j}\n\\end{equation}\nIn Algorithm~\\ref{alg:estimate}, $\\overline{\\vec{s}}$ is any vector in $\\{ 0, 1 \\}^n$ that agrees with $\\vec{s}$ on the variables of $R$. Given the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}j}$, for all $j \\in N$, we can also compute the absolute value estimations $\\rb_{i_1 \\ldots i_{d-\\ell}}$ at level $\\ell$. Due to the $\\beta$-smoothness property of $p(\\vec{x})$, we have that $|c_{i_1\\ldots i_{d-\\ell}}| \\leq \\beta n^\\ell$, for all levels $\\ell \\geq 0$. Moreover, we assume that $0 \\leq \\rb_{i_1\\ldots i_{d-\\ell}} \\leq \\ell\\beta n^{\\ell}$ and $|\\rho_{i_1\\ldots i_{d-\\ell}}| \\leq (\\ell+1)\\beta n^{\\ell}$, for all levels $\\ell \\geq 1$. This assumption is wlog. because due to $\\beta$-smoothness, any binary vector $\\vec{x}$ is feasible for ($d$-IP) with such values for the estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ and the absolute value estimations $\\rb_{i_1\\ldots i_{d-\\ell}}$\\,.\n\\begin{remark}\nFor simplicity, we state Algorithm~\\ref{alg:estimate} so that it computes, from $\\vec{s}$, an estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the value of a given degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ at $\\overline{\\vec{s}}$. So, we need to apply Algorithm~\\ref{alg:estimate} $O(n^{d-1})$ times, one for each polynomial that arises in the recursive decomposition, with the same sample $R$ and the same assignment $\\vec{s}$. We can easily modify Algorithm~\\ref{alg:estimate} so that a single call $\\mathrm{Estimate}(p(\\vec{x}), d, R, \\vec{s})$ computes the estimations of all the polynomials that arise in the recursive decomposition of $p(\\vec{x})$. Thus, we save a factor of $d$ on the running time. The running time of the simple version is $O(dn^d)$, while the running time of the modified version is $O(n^d)$.\n\\end{remark}\n\n\\subsection{Sampling Lemma}\n\\label{s:sampling}\n\nWe use the next lemma to show that if $\\vec{s} = \\vec{x}^\\ast_R$, the estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ computed by Algorithm~\\ref{alg:estimate} are close to $c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} x^\\ast_j \\rho_{i_1\\ldots i_{d-\\ell} j}$ with high probability.\n\\begin{lemma}\\label{l:sampling}\nLet $\\vec{x}$ be any binary vector and let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $\\rho_j \\in [0, (q+1)\\beta n^q]$, for all $j \\in N$. 
For all integers $d \\geq 1$ and for all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(d q \\beta\/(\\alpha_1^2 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n^{1-\\delta} \\ln n$ indices chosen uniformly at random with replacement from $N$, where $\\delta \\in (0, 1]$ is any constant. If $\\rho = (n \/ r) \\sum_{j \\in R} \\rho_{j} x_j$ and $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} x_j$, with probability at least $1 - 2\/n^{d+1}$,\n\\begin{equation}\\label{eq:pip_sample}\n (1-\\alpha_1)\\hat{\\rho} - (1-\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq \\rho \\leq\n (1+\\alpha_1)\\hat{\\rho} + (1+\\alpha_1)\\alpha_2 n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nTo provide some intuition, we observe that if $\\hat{\\rho} = \\Omega(n^{q+\\delta})$, we have $\\Omega(n^\\delta)$ values $\\rho_j = \\Theta(n^q)$. These values are well-represented in the random sample $R$, with high probability, since the size of the sample is $\\Theta(n^{1-\\delta} \\ln n)$. Therefore, $|\\hat{\\rho} - \\rho| \\leq \\alpha_1\\hat{\\rho}$, with high probability, by standard Chernoff bounds. If $\\hat{\\rho} = o(n^{q+\\delta})$, the lower bound in (\\ref{eq:pip_sample}) becomes trivial, since it is non-positive, while $\\rho \\geq 0$. As for the upper bound, we increase the coefficients $\\rho_j$ to $\\rho'_j \\in [0, (q+1)\\beta n^q]$, so that $\\hat{\\rho}' = \\alpha_2 n^{q+\\delta}$. Then, $\\rho' \\leq (1+\\alpha_1)\\hat{\\rho}' = (1+\\alpha_1)\\alpha_2 n^{q+\\delta}$, with high probability, by the same Chernoff bound as above. Now the upper bound of (\\ref{eq:pip_sample}) follows from $\\rho \\leq \\rho'$, which holds for any instantiation of the random sample $R$.\n\nWe proceed to formalize the idea above. For simplicity of notation, we let $B = (q+1)\\beta n^q$ and $a_2 = \\alpha_2\/((q+1)\\beta)$ throughout the proof. For each sample $l$, $l = 1, \\ldots, r$, we let $X_l$ be a random variable distributed in $[0, 1]$. For each index $j$, if the $l$-th sample is $j$, $X_l$ becomes $\\rho_{j} \/ B$, if $x_j = 1$, and becomes $0$, otherwise. Therefore, $\\Exp[X_l] = \\hat{\\rho} \/ (B n)$. We let $X = \\sum_{l = 1}^r X_l$. Namely, $X$ is the sum of $r$ independent random variables identically distributed in $[0, 1]$. Using that $r = \\gamma n^{1-\\delta} \\ln n$, we have that $\\Exp[X] = \\gamma \\hat{\\rho} \\ln n \/ (B n^\\delta)$ and that $\\rho = B n X\/r = B n^\\delta X \/ (\\gamma \\ln n)$.\n\nWe distinguish between the case where $\\hat{\\rho} \\geq a_2 B n^{\\delta}$ and the case where $\\hat{\\rho} < a_2 B n^{\\delta}$.\nWe start with the case where $\\hat{\\rho} \\geq a_2 B n^{\\delta}$. Then, by Chernoff bounds%\n\\footnote{\\label{foot:chernoff2}We use the following bound (see e.g., \\cite[Theorem~1.1]{DP09}): Let $Y_1, \\ldots, Y_k$ be independent random variables identically distributed in $[0, 1]$ and let $Y = \\sum_{j=1}^k Y_j$. Then for all $\\e \\in (0, 1)$, $\\Prob[|Y - \\Exp[Y]| > \\e\\, \\Exp[Y]] \\leq 2\\exp(-\\e^2\\,\\Exp[Y]\/3)$.},\n\\begin{eqnarray*}\n \\Prob[|X - \\Exp[X]| > \\alpha_1 \\Exp[X]] & \\leq &\n 2\\exp\\!\\left(-\\frac{\\alpha_1^2 \\gamma \\hat{\\rho} \\ln n}{3 B n^{\\delta} }\\right) \\\\\n & \\leq & 2\\exp(-\\alpha_1^2 a_2 \\gamma \\ln n \/ 3) \\leq 2\/n^{d+1}\n\\end{eqnarray*}\nFor the second inequality, we use that $\\hat{\\rho} \\geq a_2 B n^{\\delta}$. For the last inequality, we use that $\\gamma \\geq 3(d+1)\/(\\alpha_1^2 a_2) = 3(d+1)(q+1)\\beta\/(\\alpha_1^2 \\alpha_2)$, since $a_2 = \\alpha_2\/((q+1)\\beta)$. 
Therefore, with probability at least $1 - 2\/n^{d+1}$,\n\\[\n (1-\\alpha_1) \\frac{\\gamma \\hat{\\rho} \\ln n}{B n^\\delta} \\leq X \\leq\n (1+\\alpha_1) \\frac{\\gamma \\hat{\\rho} \\ln n}{B n^\\delta}\n\\]\nMultiplying everything by $B n \/ r = B n^\\delta \/(\\gamma \\ln n)$, we have that with probability at least $1-2\/n^{d+1}$, $(1-\\alpha_1) \\hat{\\rho} \\leq \\rho \\leq (1+\\alpha_1) \\hat{\\rho}$, which clearly implies (\\ref{eq:pip_sample}).\n\nWe proceed to the case where $\\hat{\\rho} < a_2 B n^{\\delta}$. Then, $(1-\\alpha_1)\\hat{\\rho} < (1-\\alpha_1) a_2 B n^{\\delta} = (1-\\alpha_1) \\alpha_2 n^{q+\\delta}$. Therefore, since $\\rho \\geq 0$, because $\\rho_j \\geq 0$, for all $j \\in N$, the lower bound of (\\ref{eq:pip_sample}) on $\\rho$ is trivial.\nFor the upper bound, we show that with probability at least $1-1\/n^{d+1}$, $\\rho \\leq (1+\\alpha_1) a_2 B n^{\\delta} = (1+\\alpha_1)\\alpha_2n^{q+\\delta}$. To this end, we consider a sequence $(\\rho'_j)_{j \\in N}$ so that $\\rho_j \\leq \\rho'_j \\leq (q+1)\\beta n^q$, for all $j \\in N$, and\n\\( \\hat{\\rho}' = \\sum_{j \\in N} \\rho'_{j} x_j = a_2 B n^{\\delta} \\).\nWe can obtain such a sequence by increasing an appropriate subset of the $\\rho_j$ up to $(q+1)\\beta n^q$ (if $\\vec{x}$ does not contain enough $1$'s, we may also change some $x_j$ from $0$ to $1$).\nFor the new sequence, we let $\\rho' = (n \/ r) \\sum_{j \\in R} \\rho'_{j} x_j$ and observe that $\\rho \\leq \\rho'$, for any instantiation of the random sample $R$.\nTherefore,\n\\[ \\Prob[\\rho > (1+\\alpha_1)\\alpha_2n^{q+\\delta}] \\leq\n \\Prob[\\rho' > (1+\\alpha_1)\\hat{\\rho}']\\,,\n\\]\nwhere we use that $\\hat{\\rho}' = a_2 B n^\\delta = \\alpha_2n^{q+\\delta}$.\nBy the choice of $\\hat{\\rho}'$, we can apply the same Chernoff bound as above and obtain that $\\Prob[\\rho' > (1+\\alpha_1)\\hat{\\rho}'] \\leq 1\/n^{d+1}$.\n\\qed\\end{proofsketch}\n\\qed\\end{proof}\nLemma~\\ref{l:sampling} is enough for \\MC\\ and graph optimization problems, where the estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ are non-negative. For arbitrary smooth polynomials, however, the estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ may also be negative. So, we need a generalization of Lemma~\\ref{l:sampling} that deals with both positive and negative estimations. To this end, given a sequence of estimations $( \\rho_j )_{j \\in N}$, with $\\rho_j \\in [-(q+1)\\beta n^q, (q+1)\\beta n^q]$, we let $\\rho^+_j = \\max\\{\\rho_j, 0\\}$ and $\\rho^-_j = \\min\\{ \\rho_j, 0\\}$, for all $j \\in N$. Namely, $\\rho^+_j$ (resp. $\\rho^-_j$) is equal to $\\rho_j$, if $\\rho_j$ is positive (resp. negative), and $0$, otherwise. 
Moreover, we let \n\\[ \\rho^+ = (n \/ r) \\sum_{j \\in R} \\rho^+_{j} x_j\\,,\\ \\ \n \\hat{\\rho}^+ = \\sum_{j \\in N} \\rho^+_{j} x_j\\,,\\ \\ \n \\rho^- = (n \/ r) \\sum_{j \\in R} \\rho^-_{j} x_j \\mbox{\\ \\ and\\ \\ } \n \\hat{\\rho}^- = \\sum_{j \\in N} \\rho^-_{j} x_j \n\\]\nApplying Lemma~\\ref{l:sampling} once for positive estimations and once for negative estimations (with the absolute values of $\\rho_j^-$, $\\rho^-$ and $\\hat{\\rho}^-$, instead), we obtain that with probability at least $1 - 4\/n^{d+1}$, the following inequalities hold:\n\\begin{eqnarray*}\n (1-\\alpha_1)\\hat{\\rho}^+ - (1-\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq & \\rho^+ & \\leq\n (1+\\alpha_1)\\hat{\\rho}^+ + (1+\\alpha_1)\\alpha_2 n^{q+\\delta} \\\\\n (1+\\alpha_1)\\hat{\\rho}^- - (1+\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq & \\rho^- & \\leq\n (1-\\alpha_1)\\hat{\\rho}^- + (1-\\alpha_1)\\alpha_2 n^{q+\\delta}\n\\end{eqnarray*}\nUsing that $\\rho = \\rho^+ + \\rho^-$ and that $\\hat{\\rho} = \\hat{\\rho}^+ + \\hat{\\rho}^-$, we obtain the following generalization of Lemma~\\ref{l:sampling}.\n\\begin{lemma}[Sampling Lemma]\\label{l:sampling_gen}\nLet $\\vec{x} \\in \\{0, 1\\}^n$ and let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $|\\rho_j| \\leq (q+1)\\beta n^q$, for all $j \\in N$. For all integers $d \\geq 1$ and for all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(d q \\beta\/(\\alpha_1^2 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n^{1-\\delta} \\ln n$ indices chosen uniformly at random with replacement from $N$, where $\\delta \\in (0, 1]$ is any constant. If $\\rho = (n \/ r) \\sum_{j \\in R} \\rho_{j} x_j$, $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} x_j$ and $\\rb = \\sum_{j \\in N} |\\rho_j|$, with probability at least $1 - 4\/n^{d+1}$,\n\\begin{equation}\\label{eq:pip_sample_gen}\n \\hat{\\rho} - \\alpha_1 \\rb - 2\\alpha_2 n^{q+\\delta} \\leq \\rho \\leq\n \\hat{\\rho} + \\alpha_1 \\rb + 2\\alpha_2 n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\nFor all constants $\\e_1, \\e_2 > 0$ and all constants $c$, we use Lemma~\\ref{l:sampling_gen} with $\\alpha_1 = \\e_1$ and $\\alpha_2 = \\e_2\/2$ and obtain that for $\\gamma = \\Theta(d q \\beta \/(\\e^2_1 \\e_2))$, with probability at least $1 - 4\/n^{d+1}$, the following holds for any binary vector $\\vec{x}$ and any sequence of estimations $( \\rho_j )_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} with $\\vec{s} = \\vec{x}_R$ (note that in Algorithm~\\ref{alg:estimate}, the additive constant $c$ is included in the estimation $\\rho$ when its value is computed from the estimations $\\rho_j$).\n\\begin{equation}\\label{eq:pip_sample2}\n \\overbrace{c+\\frac{n}{r}\\sum_{j \\in R} \\rho_j x_j}^{\\rho}\n - \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} - \\e_2 n^{q+\\delta} \\leq\n c + \\sum_{j \\in N} x_j \\rho_j \\leq\n \\overbrace{c+\\frac{n}{r}\\sum_{j \\in R} \\rho_j x_j}^{\\rho} \n + \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} + \\e_2 n^{q+\\delta}\n\\end{equation}\nNow, let us consider ($d$-IP) with the estimations computed by Algorithm~\\ref{alg:estimate} with $\\vec{s} = \\vec{x}^\\ast_R$ (i.e., with the optimal assignment for the variables in the random sample $R$). Then, using (\\ref{eq:pip_sample2}) and taking the union bound over all constraints, which are at most $2n^{d-1}$, we obtain that with probability at least $1-8\/n^2$, the optimal solution $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP). 
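The guarantee of Lemma~\\ref{l:sampling_gen} is easy to check empirically. The snippet below draws the multiset $R$, forms $\\rho$ and $\\hat{\\rho}$ exactly as in the lemma, and tests the two-sided bound (\\ref{eq:pip_sample_gen}); it is a sanity check for intuition only and plays no role in the algorithm.

\\begin{verbatim}
import math, random

def check_sampling_bound(rho_vals, x, q, delta, alpha1, alpha2, gamma):
    # rho_vals[j] plays the role of rho_j; x is the binary vector.
    n = len(rho_vals)
    r = max(1, round(gamma * n ** (1 - delta) * math.log(n)))
    R = [random.randrange(n) for _ in range(r)]        # uniform, with replacement
    rho = (n / r) * sum(rho_vals[j] * x[j] for j in R)
    rhohat = sum(rho_vals[j] * x[j] for j in range(n))
    rbar = sum(abs(v) for v in rho_vals)
    slack = alpha1 * rbar + 2 * alpha2 * n ** (q + delta)
    return rhohat - slack <= rho <= rhohat + slack
\\end{verbatim}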
So, from now on, we condition on the high probability event that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) and to ($d$-LP).\n\n\\subsection{The Value of Feasible Solutions to ($d$-LP)}\n\\label{s:pip_value}\n\nFrom now on, we focus on estimations $\\vec{\\rho}$ produced by $\\mathrm{Estimate}(p(\\vec{x}), d, R, \\vec{s})$, where $R$ is a random sample from $N$ and $\\vec{s} = \\vec{x}^\\ast_R$, and the corresponding programs ($d$-IP) and ($d$-LP). The analysis in Section~\\ref{s:pip_sampling} implies that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) (and to ($d$-LP)), with high probability.\n\nWe next show that for any feasible solution $\\vec{y}$ of ($d$-LP) and any polynomial $q(\\vec{x})$ in the decomposition of $p(\\vec{x})$, the value of $q(\\vec{y})$ is close to the value of $c + \\sum_j y_j \\rho_j$ in the constraint of ($d$-LP) corresponding to $q$. Applying Lemma~\\ref{l:approx}, we show below (see Lemma~\\ref{l:approx_gen}) that $p(\\vec{y})$ is close to $c+\\sum_{j \\in N} y_j \\rho_j$, i.e., to the objective value of $\\vec{y}$ in ($d$-LP) and ($d$-IP), for any feasible solution $\\vec{y}$.\n\nTo state and prove the following lemma, we introduce \\emph{cumulative absolute value estimations} $\\tb_{i_1 \\ldots i_{d-\\ell}}$\\,, defined recursively as follows:\nFor level $\\ell = 1$ and each tuple $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$, we let $\\tb_{i_1 \\ldots i_{d-1}} = \\rb_{i_1 \\ldots i_{d-1}} = \\sum_{j \\in N} |c_{i_1 \\ldots i_{d-1}j}|$.\nFor each level $\\ell \\geq 2$ of the decomposition of $p(\\vec{x})$ and each tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we let $\\tb_{i_1 \\ldots i_{d-\\ell}} = \\rb_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j}$. Namely, each cumulative absolute value estimation $\\tb_{i_1 \\ldots i_{d-\\ell}}$ is equal to the sum of all absolute value estimations that appear below the root of the decomposition tree of $p_{i_1 \\ldots i_{d-\\ell}}(\\vec{x})$.\n\\begin{lemma}\\label{l:approx}\nLet $q(\\vec{x})$ be any $\\ell$-degree polynomial appearing in the decomposition of $p(\\vec{x})$, let $q(\\vec{x}) = c+\\sum_{j \\in N} x_j q_j(\\vec{x})$ be the decomposition of $q(\\vec{x})$, let $\\rho$ and $\\{ \\rho_j \\}_{j \\in N}$ be the estimations of $q$ and $\\{ q_j \\}_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} and used in ($d$-LP), and let $\\tb$ and $\\{ \\tb_j \\}_{j \\in N}$ be the corresponding cumulative absolute value estimations. Then, for any feasible solution $\\vec{y}$ of ($d$-LP)\n\\begin{equation}\\label{eq:approx}\n\\rho - \\e_1 \\tb - \\ell \\e_2 n^{\\ell - 1+\\delta} \\leq q(\\vec{y}) \\leq\n\\rho + \\e_1 \\tb + \\ell \\e_2 n^{\\ell - 1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the degree $\\ell$. The basis, for $\\ell=1$, is trivial, because in the decomposition of $q(\\vec{x})$, each $q_j(\\vec{x})$ is a constant $c_j$. Therefore, Algorithm~\\ref{alg:estimate} outputs $\\rho_j = c_j$ and\n\\[ q(\\vec{y}) = c + \\sum_{j \\in N} y_j q_j(\\vec{x})\n = c + \\sum_{j \\in N} y_j c_j\n \\in \\rho \\pm \\e_1 \\tb \\pm \\e_2 n^{\\delta}\\,,\n \\]\nwhere the inclusion follows from the feasibility of $\\vec{y}$ for ($d$-LP). 
We also use that at level $\\ell = 1$, $\\tb = \\rb$ (i.e., cumulative absolute value estimations and absolute value estimations are identical).\n\nWe inductively assume that (\\ref{eq:approx}) is true for all degree-$(\\ell-1)$ polynomials $q_j(\\vec{x})$ that appear in the decomposition of $q(\\vec{x})$ and establish the lemma for $q(\\vec{x}) = c + \\sum_{j \\in N} x_j q_j(\\vec{x})$. We have that:\n\\begin{align*}\n q(\\vec{y}) = c + \\sum_{j \\in N} y_j q_j(\\vec{y}) & \\in\n c + \\sum_{j \\in N} y_j \\left( \\rho_j \\pm \\e_1 \\tb_j\n \\pm (\\ell-1) \\e_2 n^{\\ell-2+\\delta} \\right)\\\\\n &= \\left(c + \\sum_{j \\in N} y_j \\rho_j \\right)\n \\pm \\e_1 \\sum_{j \\in N} y_j \\tb_j\n \\pm (\\ell-1) \\e_2 \\sum_{j \\in N} y_j n^{\\ell-2+\\delta} \\\\\n &\\in \\left(\\rho \\pm \\e_1 \\rb \\pm \\e_2 n^{\\ell-1+\\delta}\\right)\n \\pm \\e_1 \\sum_{j \\in N} \\tb_j \\pm (\\ell-1) \\e_2 n^{\\ell-1+\\delta}\\\\\n &\\in \\rho \\pm \\e_1 \\tb \\pm \\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{align*}\nThe first inclusion holds by the induction hypothesis. The second inclusion holds because (i) $\\vec{y}$ is a feasible solution to ($d$-LP) and thus, $c + \\sum_{j \\in N} y_j \\rho_j$ satisfies the corresponding constraint; (ii) $\\sum_{j \\in N} y_j \\tb_j \\leq \\sum_{j \\in N} \\tb_j$; and (iii) $\\sum_{j \\in N} y_j \\leq n$. The last inclusion holds because $\\tb = \\rb + \\sum_{j \\in N} \\tb_j$, by the definition of cumulative absolute value estimations.\n\\qed\\end{proof}\nUsing Lemma~\\ref{l:approx} and the notion of cumulative absolute value estimations, we next show that $p(\\vec{y})$ is close to $c+\\sum_{j \\in N} y_j \\rho_j$, for any feasible solution $\\vec{y}$.\n\\begin{lemma}\\label{l:approx_gen}\nLet $p(\\vec{x}) = c+\\sum_{j \\in N} x_j p_j(\\vec{x})$ be the decomposition of $p(\\vec{x})$, let $\\{ \\rho_j \\}_{j \\in N}$ be the estimations of $\\{ p_j \\}_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} and used in ($d$-LP), and let $\\{ \\tb_j \\}_{j \\in N}$ be the corresponding cumulative absolute value estimations. Then, for any feasible solution $\\vec{y}$ of ($d$-LP)\n\\begin{equation}\\label{eq:approx_gen}\n p(\\vec{y}) \\in\n c+\\sum_{j \\in N} y_j \\rho_j \\pm \\e_1 \\sum_{j \\in N} \\tb_j \\pm (d-1)\\e_2 n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{l:approx}, for any polynomial $p_j$, $p_j(\\vec{y}) \\in \\rho_j \\pm \\e_1 \\tb_j \\pm (d-1) \\e_2 n^{d-2+\\delta}$. Therefore,\n\\begin{align*}\n p(\\vec{y}) = c + \\sum_{j \\in N} y_j p_j(\\vec{y}) & \\in\n c + \\sum_{j \\in N} y_j \\left( \\rho_j \\pm \\e_1 \\tb_j\n \\pm (d-1) \\e_2 n^{d-2+\\delta} \\right)\\\\\n &= c + \\sum_{j \\in N} y_j \\rho_j\n \\pm \\e_1 \\sum_{j \\in N} y_j \\tb_j\n \\pm (d-1) \\e_2 \\sum_{j \\in N} y_j n^{d-2+\\delta} \\\\\n &\\in c + \\sum_{j \\in N} y_j \\rho_j\n \\pm \\e_1 \\sum_{j \\in N} \\tb_j\n \\pm (d-1) \\e_2 n^{d-1+\\delta}\n\\end{align*}\nThe second inclusion holds because $y_j \\in [0,1]$ and $\\sum_{j \\in N} y_j \\leq n$.\n\\qed\\end{proof}\n\n\\subsection{Randomized Rounding of the Fractional Optimum}\n\\label{s:pip_rounding}\n\nThe last step is to round the fractional optimum $\\vec{y}^\\ast = (y^\\ast_1, \\ldots, y^\\ast_n)$ of ($d$-LP) to an integral solution $\\vec{z} = (z_1, \\ldots, z_n)$ that almost satisfies the constraints of ($d$-IP) and has an expected objective value for ($d$-IP) very close to the objective value of $\\vec{y}^\\ast$.\n\nTo this end, we use randomized rounding, as in \\cite{RT87}. 
In particular, we set independently each $z_j$ to $1$, with probability $y_j^\\ast$, and to $0$, with probability $1-y_j^\\ast$. The analysis is based on the following lemma, whose proof is similar to the proof of Lemma~\\ref{l:sampling}.\n\\begin{lemma}\\label{l:rounding}\nLet $\\vec{y} \\in [0, 1]^n$ be any fractional vector and let $\\vec{z} \\in \\{0, 1\\}^n$ be an integral vector obtained from $\\vec{y}$ by randomized rounding. Also, let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $\\rho_j \\in [0, (q+1)\\beta n^q]$, for all $j \\in N$. For all integers $k \\geq 1$ and for all constants $\\alpha, \\delta > 0$ (and assuming that $n$ is sufficiently large), if $\\rho = \\sum_{j \\in N} \\rho_{j} z_j$ and $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} y_j$, with probability at least $1 - 2\/n^{k+1}$,\n\\begin{equation}\\label{eq:rounding}\n (1-\\alpha)\\hat{\\rho} - (1-\\alpha)\\alpha n^{q+\\delta} \\leq \\rho \\leq\n (1+\\alpha)\\hat{\\rho} + (1+\\alpha)\\alpha n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe first note that $\\Exp[\\rho] = \\hat{\\rho}$. If $\\hat{\\rho} = \\Omega(n^{q} \\ln n)$, then $|\\rho - \\hat{\\rho}| \\leq \\alpha \\hat{\\rho}$, with high probability, by standard Chernoff bounds. If $\\hat{\\rho} = o(n^{q} \\ln n)$, the lower bound in (\\ref{eq:rounding}) becomes trivial, because $\\rho \\geq 0$ and $o(n^{q} \\ln n) < \\alpha n^{q+\\delta}$, if $n$ is sufficiently large. As for the upper bound, we increase the coefficients $\\rho_j$ to $\\rho'_j \\in [0, (q+1)\\beta n^q]$, so that $\\hat{\\rho}' = \\Theta(n^{q} \\ln n)$. Then, the upper bound is shown as in the second part of the proof of Lemma~\\ref{l:sampling}.\n\nWe proceed to the formal proof. For simplicity of notation, we let $B = (q+1)\\beta n^q$ throughout the proof. For $j = 1, \\ldots, n$, we let $X_j = z_j \\rho_j \/ B$ be a random variable distributed in $[0, 1]$. Each $X_j$ independently takes the value $\\rho_{j} \/ B$, with probability $y_j$, and $0$, otherwise. We let $X = \\sum_{j = 1}^n X_j$ be the sum of these independent random variables. Then, $\\Exp[X] = \\hat{\\rho} \/ B$ and $X = \\sum_{j \\in N} z_j \\rho_j \/ B = \\rho\/B$.\n\nAs in Lemma~\\ref{l:sampling}, we distinguish between the case where $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$ and the case where $\\hat{\\rho} < 3(k+1)B\\ln n\/\\alpha^2$.\nWe start with the case where $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$. Then, by Chernoff bounds (we use the bound in footnote~\\ref{foot:chernoff2}),\n\\[\n \\Prob[|X - \\Exp[X]| > \\alpha \\Exp[X]]\n \\leq 2\\exp\\!\\left(-\\frac{\\alpha^2 \\hat{\\rho} }{3 B }\\right)\n \\leq 2\\exp(-(k+1)\\ln n) \\leq 2\/n^{k+1}\\,,\n\\]\nwhere we use that $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$. Therefore, with probability at least $1 - 2\/n^{k+1}$,\n\\[\n (1-\\alpha) \\hat{\\rho} \/ B \\leq X \\leq (1+\\alpha) \\hat{\\rho} \/ B\n\\]\nMultiplying everything by $B$ and using that $X = \\rho \/ B$, we obtain that with probability at least $1 - 2\/n^{k+1}$, $(1-\\alpha) \\hat{\\rho} \\leq \\rho \\leq (1+\\alpha) \\hat{\\rho}$, which implies (\\ref{eq:rounding}).\n\nWe proceed to the case where $\\hat{\\rho} < 3(k+1)B\\ln n\/\\alpha^2$. Then, assuming that $n$ is large enough that $n^\\delta \/ \\ln n > 3(k+1)(q+1)\\beta \/ \\alpha^3$, we obtain that $(1-\\alpha)\\hat{\\rho} < (1-\\alpha) \\alpha n^{q+\\delta}$. 
Therefore, since $\\rho \\geq 0$, because $\\rho_j \\geq 0$, for all $j \\in N$, the lower bound of (\\ref{eq:rounding}) on $\\rho$ is trivial.\nFor the upper bound, we show that with probability at least $1-1\/n^{k+1}$, $\\rho \\leq (1+\\alpha) \\alpha n^{q+\\delta}$. To this end, we consider a sequence $(\\rho'_j)_{j \\in N}$ so that $\\rho_j \\leq \\rho'_j \\leq (q+1)\\beta n^q$, for all $j \\in N$, and\n\\[ \\hat{\\rho}' = \\sum_{j \\in N} \\rho'_{j} y_j = \\frac{3(k+1)B\\ln n}{\\alpha^2} \\]\nWe can obtain such a sequence by increasing an appropriate subset of the $\\rho_j$ up to $(q+1)\\beta n^q$ (if $\\sum_{j \\in N} y_j$ is not large enough, we may also increase some $y_j$ up to $1$).\nFor the new sequence, we let $\\rho' = \\sum_{j \\in N} \\rho'_{j} z_j$ and observe that $\\rho \\leq \\rho'$, for any instantiation of the randomized rounding (if some $y_j$ are increased, the inequality below follows from a standard coupling argument).\nTherefore,\n\\[ \\Prob[\\rho > (1+\\alpha)\\alpha n^{q+\\delta}] \\leq\n \\Prob[\\rho' > (1+\\alpha)\\hat{\\rho}']\\,,\n\\]\nwhere we use that $\\hat{\\rho}' = 3(k+1)B\\ln n \/ \\alpha^2$ and that $\\alpha n^\\delta > 3(k+1)(q+1)\\beta \\ln n \/ \\alpha^2$, which holds if $n$ is sufficiently large.\nBy the choice of $\\hat{\\rho}'$, we can apply the same Chernoff bound as above and obtain that $\\Prob[\\rho' > (1+\\alpha)\\hat{\\rho}'] \\leq 1\/n^{k+1}$.\n\\qed\\end{proof}\nLemma~\\ref{l:rounding} implies that if the estimations $\\rho_j$ are non-negative, the rounded solution $\\vec{z}$ is almost feasible for ($d$-IP) with high probability. But, as in Section~\\ref{s:pip_sampling}, we need a generalization of Lemma~\\ref{l:rounding} that deals with both positive and negative estimations. To this end, we work as in the proof of Lemma~\\ref{l:sampling_gen}. Given a sequence of estimations $( \\rho_j )_{j \\in N}$, with $\\rho_j \\in [-(q+1)\\beta n^q, (q+1)\\beta n^q]$, we define $\\rho^+_j = \\max\\{\\rho_j, 0\\}$ and $\\rho^-_j = \\min\\{ \\rho_j, 0\\}$, for all $j \\in N$. Moreover, we let $\\rho^+ = \\sum_{j \\in N} \\rho^+_{j} z_j$, $\\hat{\\rho}^+ = \\sum_{j \\in N} \\rho^+_{j} y_j$, $\\rho^- = \\sum_{j \\in N} \\rho^-_{j} z_j$ and $\\hat{\\rho}^- = \\sum_{j \\in N} \\rho^-_{j} y_j$. Applying Lemma~\\ref{l:rounding}, once for positive estimations and once for negative estimations (with the absolute values of $\\rho_j^-$, $\\rho^-$ and $\\hat{\\rho}^-$, instead), we obtain that with probability at least $1 - 4\/n^{k+1}$,\n\\begin{eqnarray*}\n (1-\\alpha)\\hat{\\rho}^+ - (1-\\alpha)\\alpha n^{q+\\delta} \\leq & \\rho^+ & \\leq\n (1+\\alpha)\\hat{\\rho}^+ + (1+\\alpha)\\alpha n^{q+\\delta} \\\\\n (1+\\alpha)\\hat{\\rho}^- - (1+\\alpha)\\alpha n^{q+\\delta} \\leq & \\rho^- & \\leq\n (1-\\alpha)\\hat{\\rho}^- + (1-\\alpha)\\alpha n^{q+\\delta}\n\\end{eqnarray*}\nUsing that $\\rho = \\rho^+ + \\rho^-$ and that $\\hat{\\rho} = \\hat{\\rho}^+ + \\hat{\\rho}^-$, we obtain the following generalization of Lemma~\\ref{l:rounding}.\n\\begin{lemma}[Rounding Lemma]\\label{l:rounding_gen}\nLet $\\vec{y} \\in [0, 1]^n$ be any fractional vector and let $\\vec{z} \\in \\{0, 1\\}^n$ be an integral vector obtained from $\\vec{y}$ by randomized rounding. Also, let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $|\\rho_j| \\leq (q+1)\\beta n^q$, for all $j \\in N$. 
For all integers $k \\geq 1$ and for all constants $\\alpha, \\delta > 0$ (and assuming that $n$ is sufficiently large), if $\\rho = \\sum_{j \\in N} \\rho_{j} z_j$, $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} y_j$ and $\\rb = \\sum_{j \\in N} |\\rho_j|$, with probability at least $1 - 4\/n^{k+1}$,\n\\begin{equation}\\label{eq:rounding_gen}\n \\hat{\\rho} - \\alpha\\rb - 2\\alpha n^{q+\\delta} \\leq \\rho \\leq\n \\hat{\\rho} + \\alpha\\rb + 2\\alpha n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\nFor all constants $\\e_1, \\e_2 > 0$ and all constants $c$, we can use Lemma~\\ref{l:rounding_gen} with $\\alpha = \\max\\{\\e_1, \\e_2\/2\\}$ and obtain that for all integers $k \\geq 1$, with probability at least $1 - 4\/n^{k+1}$, the following holds for the binary vector $\\vec{z}$ obtained from a fractional vector $\\vec{y}$ by randomized rounding.\n\\begin{equation}\\label{eq:pip_rounding2}\n c + \\sum_{j \\in N} y_j \\rho_j -\n \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} - \\e_2 n^{q+\\delta} \\leq\n c + \\sum_{j \\in N} z_j \\rho_j\n \\leq c + \\sum_{j \\in N} y_j \\rho_j +\n \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} + \\e_2 n^{q+\\delta}\n\\end{equation}\nUsing (\\ref{eq:pip_rounding2}) with $k = 2(d+1)$, the fact that $\\vec{y}^\\ast$ is a feasible solution to ($d$-LP), and the fact that ($d$-LP) has at most $2n^{d-1}$ constraints, we obtain that $\\vec{z}$ is an almost feasible solution to ($d$-IP) with high probability. Namely, with probability at least $1-8\/n^{d+4}$, the integral vector $\\vec{z}$ obtained from the fractional optimum $\\vec{y}^\\ast$ by randomized rounding satisfies the following system of inequalities for all levels $\\ell \\geq 1$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$ (for each level $\\ell \\geq 1$, we use $q = \\ell - 1$, since $|\\rho_{i_1\\ldots i_{d-\\ell}j}| \\leq \\ell \\beta n^{\\ell-1}$ for all $j \\in N$).\n\\begin{equation}\\label{eq:pip_deviation}\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j} \\in\n \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2\\e_1 \\rb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2\\e_2 n^{\\ell-1+\\delta}\n\\end{equation}\nHaving established that $\\vec{z}$ is an almost feasible solution to ($d$-IP), with high probability, we proceed as in Section~\\ref{s:cut_rounding}. By linearity of expectation, $\\Exp[ \\sum_{j \\in N} z_j \\rho_j ] = \\sum_{j \\in V} y^\\ast_j \\rho_j$. Moreover, the probability that $\\vec{z}$ does not satisfy (\\ref{eq:pip_deviation}) for some level $\\ell \\geq 1$ and some tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$ is at most $8\/n^{d+4}$ and the objective value of ($d$-IP) is at most $2(d+1)\\beta n^d$, because, due to the $\\beta$-smoothness property of $p(\\vec{x})$, $|p(\\vec{x}^\\ast)| \\leq (d+1)\\beta n^d$. Therefore, the expected value of a rounded solution $\\vec{z}$ that satisfies the family of inequalities (\\ref{eq:pip_deviation}) for all levels and tuples is least $\\sum_{j \\in V} y^\\ast_j \\rho_j - 1$ (assuming that $n$ is sufficiently large). Using the method of conditional expectations, as in \\cite{Rag88}, we can find in (deterministic) polynomial time an integral solution $\\vec{z}$ that satisfies the family of inequalities (\\ref{eq:pip_deviation}) for all levels and tuples and has $c + \\sum_{j \\in V} z_j \\rho_j \\geq c-1+\\sum_{j \\in V} y^\\ast_j \\rho_j$. 
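To make the derandomization step concrete, the following sketch applies the method of conditional expectations to the linear objective $c + \\sum_{j \\in N} z_j \\rho_j$ alone: it fixes the coordinates one by one, each time keeping the value whose conditional expectation (with the remaining coordinates left at $\\vec{y}^\\ast$) is no smaller. For a linear objective this rule degenerates to a sign test on $\\rho_j$; the actual argument of \\cite{Rag88} additionally folds the probability of violating (\\ref{eq:pip_deviation}) into a pessimistic estimator, which the sketch omits.

\\begin{verbatim}
def derandomize_rounding(y_star, rho, c=0.0):
    # Coordinates already fixed keep their 0/1 value; the rest stay at y*.
    # Constraint handling via a pessimistic estimator is intentionally omitted.
    z = list(y_star)
    n = len(z)
    for j in range(n):
        others = c + sum(z[i] * rho[i] for i in range(n) if i != j)
        exp_if_one, exp_if_zero = others + rho[j], others
        z[j] = 1 if exp_if_one >= exp_if_zero else 0
    return [int(v) for v in z]
\\end{verbatim}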
As in Section~\\ref{s:cut_rounding}, we sometimes abuse the notation and refer to such an integral solution $\\vec{z}$ (computed deterministically) as the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding.\n\nThe following lemmas are similar to Lemma~\\ref{l:approx} and Lemma~\\ref{l:approx_gen}. They use the notion of cumulative absolute value estimations and show that the objective value $p(\\vec{z})$ of the rounded solution $\\vec{z}$ is close to the optimal value of ($d$-LP).\n\\begin{lemma}\\label{l:approx2}\nLet $\\vec{y}^\\ast$ be an optimal solution of ($d$-LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then, for any level $\\ell \\geq 1$ in the decomposition of $p(\\vec{x})$ and any tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$,\n\\begin{equation}\\label{eq:approx2}\n p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) \\in\n \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2\\e_1 \\tb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2\\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the degree $\\ell$ and similar to the proof of Lemma~\\ref{l:approx}. The basis, for $\\ell=1$, is trivial, because in the decomposition of $p(\\vec{x})$, each $p_{i_1\\ldots i_{d}}(\\vec{x})$ is a constant $c_{i_1\\ldots i_{d}}$\\,. Therefore, $\\rho_{i_1\\ldots i_{d}} = c_{i_1\\ldots i_{d}}$ and\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{z}) =\n c + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-1}j}(\\vec{z})\n = c + \\sum_{j \\in N} z_j c_{i_1\\ldots i_{d-1}j}\n \\in \\rho_{i_1\\ldots i_{d-1}} \\pm 2\\e_1 \\tb_{i_1\\ldots i_{d-1}} \\pm 2\\e_2 n^{\\delta}\\,,\n \\]\nwhere the inclusion follows from the approximate feasibility of $\\vec{z}$ for ($d$-LP), as expressed by (\\ref{eq:pip_deviation}). We also use that at level $\\ell = 1$, $\\tb_{i_1\\ldots i_{d-1}} = \\rb_{i_1\\ldots i_{d-1}}$.\n\nWe inductively assume that (\\ref{eq:approx2}) is true for the values of all degree-$(\\ell-1)$ polynomials $p_{i_1\\ldots i_{d-\\ell}j}$ at $\\vec{z}$ and establish the lemma for $p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) = c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{z})$. We have that:\n\\begin{align*}\n p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) & =\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{z}) \\\\\n & \\in\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\left( \\rho_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 \\e_1 \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2(\\ell-1) \\e_2 n^{\\ell-2+\\delta} \\right)\\\\\n &= \\left(c_{i_1\\ldots i_{d-\\ell}} +\n \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j} \\right)\n \\pm 2 \\e_1 \\sum_{j \\in N} z_j \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 (\\ell-1) \\e_2 \\sum_{j \\in N} z_j n^{\\ell-2+\\delta} \\\\\n &\\in \\left(\\rho_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\e_1 \\rb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\e_2 n^{\\ell-1+\\delta}\\right)\n \\pm 2 \\e_1 \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 (\\ell-1) \\e_2 n^{\\ell-1+\\delta}\\\\\n &\\in \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2 \\e_1 \\tb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{align*}\nThe first inclusion holds by the induction hypothesis. 
The second inclusion holds because: (i) $\\vec{z}$ is an approximately feasible solution to ($d$-IP) and thus,\n$c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j}$\nsatisfies (\\ref{eq:pip_deviation}); (ii) $\\sum_{j \\in N} z_j \\tb_{i_1\\ldots i_{d-\\ell}j} \\leq \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}$; and (iii) $\\sum_{j \\in N} z_j \\leq n$. The last inclusion holds because $\\tb_{i_1\\ldots i_{d-\\ell}} = \\rb_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}$, by the definition of cumulative absolute value estimations.\n\\qed\\end{proof}\n\\begin{lemma}\\label{l:approx2_gen}\nLet $\\vec{y}^\\ast$ be an optimal solution of ($d$-LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then,\n\\begin{equation}\\label{eq:approx2_gen}\n p(\\vec{z}) \\in c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2\\e_1 \\sum_{j \\in N} \\tb_{j}\n \\pm 2 (d-1) \\e_2 n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{l:approx2}, for any polynomial $p_j$ appearing in the decomposition of $p(\\vec{x})$, we have that $p_j(\\vec{z}) \\in \\rho_j \\pm 2 \\e_1 \\tb_j \\pm 2 (d-1) \\e_2 n^{d-2+\\delta}$. Therefore,\n\\begin{align*}\n p(\\vec{z}) = c + \\sum_{j \\in N} z_j p_j(\\vec{z}) & \\in\n c + \\sum_{j \\in N} z_j \\left( \\rho_j \\pm 2 \\e_1 \\tb_j\n \\pm 2 (d-1) \\e_2 n^{d-2+\\delta} \\right)\\\\\n &= c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2 \\e_1 \\sum_{j \\in N} z_j \\tb_j\n \\pm 2 (d-1) \\e_2 \\sum_{j \\in N} z_j n^{d-2+\\delta} \\\\\n &\\in c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2 \\e_1 \\sum_{j \\in N} \\tb_j\n \\pm 2 (d-1) \\e_2 n^{d-1+\\delta}\n\\end{align*}\nThe second inclusion holds because $z_j \\in \\{ 0,1\\}$ and $\\sum_{j \\in N} z_j \\leq n$.\n\\qed\\end{proof}\n\n\\input{estimations}\n\n\\subsection{The Final Algorithmic Result}\\label{s:pip_together}\n\nWe are ready now to conclude this section with the following theorem.\n\\begin{theorem}\\label{th:pip_scheme}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. Then, for any $\\eps > 0$, we can compute, in time $2^{O(d^7 \\beta^3 n^{1-\\delta} \\ln n\/\\eps^3)}$ and with probability at least $1-8\/n^2$, a binary vector $\\vec{z}$ so that $p(\\vec{z}) \\geq p(\\vec{x}^\\ast) - \\eps n^{d-1+\\delta}$, where $\\vec{x}^\\ast$ is the maximizer of $p(\\vec{x})$.\n\\end{theorem}\n\\begin{proof}\nBased upon the discussion above in this section, for any constant $\\eps > 0$, if $p(\\vec{x})$ is an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial, the algorithm described in the previous sections computes an integral solution $\\vec{z}$ that approximately maximizes $p(\\vec{x})$. 
Specifically, setting $\\e_1 = \\eps\/(4 d(d-1)\\beta)$ $\\e_2 = \\eps\/(8(d-1))$, $p(\\vec{z})$ satisfies the following with probability at least $1-8\/n^2$\\,:\n\\begin{eqnarray*}\n p(\\vec{z}) & \\geq & \\left(c + \\sum_{j \\in N} y^\\ast_j \\rho_j\\right)\n - \\frac{\\eps}{2d(d-1)\\beta} \\sum_{j \\in N} \\tb_{j}\n - \\eps n^{d-1+\\delta} \/ 4\\\\\n & \\geq & \\left(c + \\sum_{j \\in N} y^\\ast_j \\rho_j\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & \\left(c + \\sum_{j \\in N} x_j^\\ast \\rho_j\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & \\left(p(\\vec{x}^\\ast) - \\frac{\\eps}{4d(d-1)\\beta} \\sum_{j \\in N} \\tb_{j}\n - \\eps n^{d-1+\\delta} \/ 8\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & p(\\vec{x}^\\ast) - \\eps n^{d-1+\\delta}\n\\end{eqnarray*}\nThe first inequality follows from Lemma~\\ref{l:approx2_gen}. The second inequality follows from the hypothesis that $p(\\vec{x})$ is $\\beta$-smooth and $\\delta$-bounded. Then Lemma~\\ref{l:cum_est} implies that $\\sum_{j \\in N} \\tb_{j} \\leq \\frac{d(d-1)}{2}\\beta n^{d-1+\\delta}$\\,. As in Section~\\ref{s:values}, we assume that the constant hidden in the definition of $p(\\vec{x})$ as a $\\delta$-bounded polynomial is $1$. If this constant is some $\\kappa\\geq 1$, we should also divide $\\e_1$ by $\\kappa$. The third inequality holds because $\\vec{y}^\\ast$ is an optimal solution to ($d$-LP) and $\\vec{x}^\\ast$ is a feasible solution to ($d$-LP). The fourth inequality follows from Lemma~\\ref{l:approx_gen}. For the last inequality, we again use Lemma~\\ref{l:cum_est}. This concludes the proof of Theorem~\\ref{th:pip_scheme}.\\qed\\end{proof}\n\n\\section{Notation and Preliminaries}\n\\label{s:prelim}\n\nAn $n$-variate degree-$d$ polynomial $p(\\vec{x})$ is \\emph{$\\beta$-smooth} \\cite{AKK99}, for some constant $\\beta \\geq 1$, if for every $\\ell \\in \\{ 0, \\ldots, d\\}$, the absolute value of each coefficient of each degree-$\\ell$ monomial in the expansion of $p(\\vec{x})$ is at most $\\beta n^{d - \\ell}$.\nAn $n$-variate degree-$d$ $\\beta$-smooth polynomial $p(\\vec{x})$ is \\emph{$\\delta$-bounded}, for some constant $\\delta \\in (0, 1]$, if for every $\\ell$, the sum, over all degree-$\\ell$ monomials in $p(\\vec{x})$, of the absolute values of their coefficients is $O(\\beta n^{d-1+\\delta})$. Therefore, for any $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$ and any $\\vec{x} \\in \\{ 0, 1\\}^n$, $|p(\\vec{x})| = O(d \\beta n^{d-1+\\delta})$.\n\nThroughout this work, we treat $\\beta$, $\\delta$ and $d$ as fixed constants and express the running time of our algorithm as a function of $n$, i.e., the number of variables in $p(\\vec{x})$.\n\n\\noindent{\\bf Optimization Problem.}\nOur approximation schemes for almost sparse instances of \\MC, \\kSAT, and \\kCSP\\ are obtained by reducing them to the following problem: Given an $n$-variate $d$-degree $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$, we seek a binary vector $\\vec{x}^\\ast \\in \\{0, 1\\}^n$ that maximizes $p$, i.e., for all binary vectors $\\vec{y} \\in \\{0, 1\\}^n$, $p(\\vec{x}^\\ast) \\geq p(\\vec{y})$.\n\n\\noindent{\\bf Polynomial Decomposition and General Approach.}\nAs in \\cite[Lemma~3.1]{AKK99}, our general approach is motivated by the fact that any $n$-variate $d$-degree $\\beta$-smooth polynomial $p(\\vec{x})$ can be naturally decomposed into a collection of $n$ polynomials $p_j(\\vec{x})$. 
Each of them has degree $d-1$ and at most $n$ variables and is $\\beta$-smooth.\n\\begin{proposition}[\\cite{AKK99}]\\label{pr:decomposition}\nLet $p(\\vec{x})$ be any $n$-variate degree-$d$ $\\beta$-smooth polynomial. Then, there exist a constant $c$ and degree-$(d-1)$ $\\beta$-smooth polynomials $p_j(\\vec{x})$ such that\n\\( p(\\vec{x}) = c + \\sum_{j = 1}^n x_j p_j(\\vec{x}) \\).\n\\end{proposition}\n\\begin{proof} The proposition is shown in \\cite[Lemma~3.1]{AKK99}. We prove it\nhere just for completeness. Each polynomial $p_j(\\vec{x})$ is obtained from\n$p(\\vec{x})$ if we keep only the monomials with variable $x_j$ and pull $x_j$\nout, as a common factor. The constant $c$ takes care of the constant term in\n$p(\\vec{x})$. Each monomial of degree $\\ell$ in $p(\\vec{x})$ becomes a monomial\nof degree $\\ell-1$ in $p_j(\\vec{x})$, which implies that the degree of\n$p_j(\\vec{x})$ is $d-1$. Moreover, by the $\\beta$-smoothness condition, the\ncoefficient $t$ of each degree-$\\ell$ monomial in $p(\\vec{x})$ has $|t| \\leq\n\\beta n^{d - \\ell}$. The corresponding monomial in $p_j(\\vec{x})$ has degree\n$\\ell-1$ and the same coefficient $t$ with $|t| \\leq \\beta n^{d - 1 -\n(\\ell-1)}$. Therefore, if $p(\\vec{x})$ is $\\beta$-smooth, each $p_j(\\vec{x})$\nis also $\\beta$-smooth. \\qed\\end{proof}\n\\noindent{\\bf Graph Optimization Problems.}\nLet $G(V, E)$ be a (simple) graph with $n$ vertices and $m$ edges. For each vertex $i \\in V$, $N(i)$ denotes $i$'s neighborhood in $G$, i.e., $N(i) = \\{ j \\in V: \\{i, j\\} \\in E\\}$. We let $\\deg(i) = |N(i)|$ be the degree of $i$ in $G$ and $\\Delta = 2|E|\/n$ denote the average degree of $G$.\nWe say that a graph $G$ is \\emph{$\\delta$-almost sparse}, for some constant $\\delta \\in (0, 1]$, if $m = \\Omega(n^{1+\\delta})$ (and thus, $\\Delta = \\Omega(n^\\delta)$).\n\nIn \\MC, we seek a partitioning of the vertices of $G$ into two sets $S_0$ and\n$S_1$ so that the number of edges with endpoints in $S_0$ and $S_1$ is\nmaximized. If $G$ has $m$ edges, the number of edges in the optimal cut is at\nleast $m\/2$.\n\nIn \\kDense, given an undirected graph $G(V, E)$, we seek a subset $C$ of $k$ vertices so that the induced subgraph $G[C]$ has a maximum number of edges. \n\n\\noindent{\\bf Constraint Satisfaction Problems.}\nAn instance of (boolean) \\kCSP\\ with $n$ variables consists of $m$ boolean constraints $f_1, \\ldots, f_m$, where each $f_j : \\{ 0, 1\\}^k \\to \\{0, 1\\}$ depends on $k$ variables and is satisfiable, i.e., $f_j$ evaluates to $1$ for some truth assignment. We seek a truth assignment to the variables that maximizes the number of satisfied constraints. \\kSAT\\ is a special case of \\kCSP\\ where each constraint $f_j$ is a disjunction of $k$ literals. An averaging argument implies that the optimal assignment of a \\kCSP\\ (resp. \\kSAT) instance with $m$ constraints satisfies at least $2^{-k} m$ (resp. $(1-2^{-k})m$) of them. We say that an instance of \\kCSP\\ is \\emph{$\\delta$-almost sparse}, for some constant $\\delta \\in (0, 1]$, if the number of constraints is $m = \\Omega(n^{k-1+\\delta})$.\n\nUsing standard arithmetization techniques (see e.g., \\cite[Sec.~4.3]{AKK99}), we can reduce any instance of \\kCSP\\ with $n$ variables to an $n$-variate degree-$k$ polynomial $p(\\vec{x})$ so that the optimal truth assignment for \\kCSP\\ corresponds to a maximizer $\\vec{x}^\\ast \\in \\{0, 1\\}$ of $p(\\vec{x})$ and the value of the optimal \\kCSP\\ solution is equal to $p(\\vec{x}^\\ast)$. 
Since each $k$-tuple of variables can appear in at most $2^k$ different constraints, $p(\\vec{x})$ is $\\beta$-smooth, for $\\beta \\in [1, 4^k]$, and has at least $m$ and at most $4^k m$ monomials. Moreover, if the instance of \\kCSP\\ has $m = \\Theta(n^{k-1+\\delta})$ constraints, then $p(\\vec{x})$ is $\\delta$-bounded and its maximizer $\\vec{x}^\\ast$ has $p(\\vec{x}^\\ast) = \\Omega(n^{k-1+\\delta})$.\n\n\\noindent{\\bf Notation and Terminology.}\nAn algorithm has \\emph{approximation ratio} $\\rho \\in (0, 1]$ (or is \\emph{$\\rho$-approximate}) if for all instances, the value of its solution is at least $\\rho$ times the value of the optimal solution.\n\nFor graphs with $n$ vertices or CSPs with $n$ variables, we say that an event $E$ happens with high probability (or whp.), if $E$ happens with probability at least $1-1\/n^c$, for some constant $c \\geq 1$.\n\nFor brevity and clarity, we sometimes write $\\alpha \\in (1\\pm \\e_1) \\beta \\pm \\e_2 \\gamma$, for some constants $\\e_1, \\e_2 > 0$, to denote that $(1-\\e_1)\\beta - \\e_2 \\gamma \\leq \\alpha \\leq (1+\\e_1)\\beta + \\e_2 \\gamma$.\n\n\\endinput\n\nExplain the difference between $O(n^{k-1+\\delta})$ and exactly $n^{k-1+\\delta}$, in particular for graphs. Either hidden in $\\beta$ or hidden in a tiny increase of $\\delta$.\n\n\nWe note that any (unweighted) binary constraint satisfaction problem with at most $d$ variables per constraint can be cast in the framework of smooth polynomial maximization. Several classical optimization problems, such as \\MC, {\\sc Max}-DICUT and \\kSAT\\ and {\\sc Max}-$k$-{\\sc Densest Subgraph}, reduce to smooth polynomial maximization (possibly under linear constraints).\n\n\n\nApplying the same idea recursively, we can further decompose each $(d-1)$-degree polynomial $p_{i_1}(\\vec{x})$ into $n$ $(d-2)$-degree polynomials $p_{i_1 j}(\\vec{x})$ such that\n$p_{i_1}(\\vec{x}) = c_{i_1} + \\sum_{j \\in N} x_j p_{i_1 j}(\\vec{x})$, etc.\nAt the basis of the recursion, we have polynomials $p_{i_1\\ldots i_{d-1}}(\\vec{x})$ of degree $1$, one for each $(d-1)$-tuple of indices $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$, which can be written as\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{x}) = c_{i_1\\ldots i_{d-1}} +\n \\sum_{j \\in N} x_j c_{i_1\\ldots i_{d-1} j}\\,,\n\\]\nwhere $c_{i_1\\ldots i_{d-1} j}$ are constants. Due to $\\beta$-smoothness, $|c_{i_1\\ldots i_{d-1} j}| \\leq \\beta$ and $|c_{i_1\\ldots i_{d-1}}| \\leq \\beta n$. Inductively, $\\beta$-smoothness implies that each polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ of degree $\\ell \\geq 1$ in this decomposition has $|p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq \\ell \\beta n^{\\ell}$ and $|c_{i_1\\ldots i_{d-\\ell}}+p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq (\\ell+1)\\beta n^{\\ell}$, for all vectors $\\vec{x} \\in \\{0, 1\\}^n$. This decomposition can be performed in a unique way if we insist that $i_1 < i_2 < \\cdots < i_{d-1}$, but this is not relevant for our analysis. \n\n\\section{Conclusions and Directions for Further Research}\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Opportunities and Challenges}\n\\label{sec:motivation}\n\nIn multi-programmed systems, NDA-integrated memory devices will be main memory for some processes and accelerators for other processes. Furthermore, the host and NDAs should access the same memory in parallel when collaboratively processing data. Under this scenario, it is necessary to effectively share memory between the host and NDAs. 
In this section, we identify opportunities to better utilize internal rank bandwidth compared to prior approaches and discuss four problems that we solve to exploit that bandwidth.\n\n\\subsection{Prior Approaches}\n\\label{subsec:prior_work}\n\nPrior work \\cite{farmahini2015nda} proposes two ways to share memory between the host and NDAs. First, the ownership of each rank is ping-ponged between the host and NDAs in a coarse-grain manner. Before ownership is exchanged, all the banks are precharged so that the next owner can start accessing memory from the initialized state. Since warming up memory takes time, ownership transitioning should be done at coarse granularity to amortize this overhead. However, coarse-grain ownership switching results in halving the performance of both owners compared to their ideal performance. \n\nThe second way is to partition ranks into two groups with each processor having exclusive ownership over one group of ranks. This approach eliminates the source of contention and ownership switching overhead. However, a large portion of memory capacity should be assigned to NDAs and the potential bandwidth gain of NDAs is limited by the number of ranks dedicated to NDAs. \n\n\\subsection{Opportunity: Rank Idle Periods}\n\\label{subsec:opportunities}\nBecause multiple ranks share the command and data buses within each channel, the host can access only one rank at a time per channel. In addition, to avoid the rank-switching penalty, memory controllers tend to minimize rank interleaving. As a result, ranks are often not accessed by the host for certain periods of time. \\fig{fig:motiv_rank_idle} shows the bandwidth utilization of rank internal buses when only host programs are executed. Our application mixes and the baseline configuration are summarized in Table \\ref{tab:eval_config}. Overall, about 60\\% of the internal-bus bandwidth is unused. However, the majority of idle periods are just $10-250$ cycles. \n\nBy opportunistically issuing NDA memory commands in these idle periods, we can better utilize internal rank bandwidth. Compared to the prior approaches, \\textit{fine-grain interleaving of NDA access minimally impacts the performance and memory capacity of the host while maximizing the utilization of rank bandwidth}. \n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\\caption{Rank idle-time breakdown vs. idleness granularity.}\n\t\\label{fig:motiv_rank_idle}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\subsection{Challenge 1: Fine-Grained Access Interleaving}\n\\label{subsec:challenges}\nTo opportunistically issue NDA commands, mechanisms for fine-grain mode switching are required, raising the following challenges. \n\n\\medskip\n\\noindent\\textbf{\\textit{Extra Bank Conflicts.}}\nSince the host and NDAs share banks, fine-grain access interleaving is likely to cause additional bank conflicts. Opening and closing rows incur overhead and hinder utilizing rank bandwidth within short idle periods. \n\n\\medskip\n\\noindent\\textbf{\\textit{Read\/Write Turnaround Time.}}\nAs discussed in Section \\ref{sec:background}, interleaving read and write operations to the same rank incurs extra overhead compared to issuing the same command type back to back \\cite{stuecheli2010virtual}. The host mitigates this overhead by buffering operations with caches and write buffers. 
However, without coordination, the host and NDAs may issue different types of transactions, which are then interleaved if both host and NDA run in parallel. Therefore, we need a mechanism to throttle NDA write transactions when needed and allow them to issue when the rank is idle for a long enough time. \n\n\\medskip\n\\noindent\\textbf{\\textit{Overhead of State Coordination.}}\nFor DIMM-type DRAM devices, each NDA needs its own memory controller to allow NDAs to utilize the untapped bandwidth. This results in two memory controllers managing the bank and timing state of each rank. To synchronize the state between the two memory controllers, prior work adopts the precharge-all (PREA) mechanism. However, fine-grain mode switching will incur significant overhead not only because of the PREA command overhead itself but also because of the warm-up time required after mode switching. \n\n\n\\medskip\n\\subsection{Challenge 2: Unified Data Layout for Collaboration}\nWhen either just the host or just the NDAs own a rank for a fairly long time, we can customize data layout and address mapping for each processor, possibly copying and laying out data differently when switching access modes. However, concurrent NDA and host access to the same data requires a single data layout and address mapping that works well for both the host and NDA at the same time. Otherwise, two copies of data with different layouts are necessary, incurring high capacity overhead. \n\n\n\n\n\\section{Background}\n\\label{sec:background}\n\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{DRAM Basics.}}\nA memory system is composed of memory channels that operate independently. In each memory channel, one or more memory modules (DIMMs) share the command\/address (C\/A) and data buses. A DIMM is usually composed of one or two physical ranks, where all chips in the same rank operate together. Each chip, and thus each rank, is composed of multiple banks whose states are independent. Each bank is either open or closed and, if open, its state also records which row is open. To access a certain row, the target row must be opened first. If another row is already open, it must be closed before the target row is opened, which is called a \\textit{bank conflict} and increases access latency. The DRAM protocol specifies the commands and timing parameters for accessing DRAM. These are managed by a per-channel memory controller.\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Address Mapping.}}\nThe memory controller translates OS-managed physical addresses into DRAM addresses, which are composed of indices to channel, rank, bank, row, and column. Typically, memory controllers follow these policies in their address mapping to minimize access latency: addresses are interleaved across channels at fine granularity because channels can be accessed independently of each other. On the other hand, ranks are interleaved at coarse granularity since switching to other ranks in the same channel incurs a penalty. In addition, XOR-based hash mapping functions are used when determining channel, rank, and bank addresses to maximally exploit bank-level parallelism. This also minimizes bank conflicts when multiple rows are accessed with the same access pattern since the hash function shuffles the bank address order \\cite{zhang2000permutation}. 
To accomplish this, some row address bits are used along with channel, rank, and bank address bits~\\cite{pessl2015reverse}.\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Write-to-Read Turnaround Time.}}\nIn general, interleaving read and write DRAM transactions incurs higher latency than issuing the same transaction type back to back. Issuing a read transaction immediately following a write suffers from particularly high penalty. The memory controller issues the write command and loads data to the bus after tCWL cycles. Then, data is transferred for tBL cycles to the DRAM device and written to the cells. The next read command can only be issued after tWTR cycles, which guarantees no conflict on the IO circuits in DRAM. The high penalty stems from the fact that the actual write happens at the end of the transaction whereas a read happens right after it is issued. For this reason, the opposite order, read to write, has lower penalty. \n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{NDA Basics.}}\nNear-data accelerators add processing elements near memory to overcome the physical constraints that limit host memory bandwidth. \\medel{Since memory channels are independent,} Host peak memory bandwidth is determined by the number of channels and peak bandwidth per channel. \\meadd{Any NDA accesses on the memory side of a channel can potentially increase overall system bandwidth.} \n\\meadd{For example. a memory module with multiple ranks offers more bandwidth in the module than available at the channel. Similarly, multiple banks on a DRAM die can also offer more bandwidth than available off of a DRAM chip.}\n\\medel{However, the number of ranks in the system does not affect the peak memory bandwidth of the host since only one rank per channel can transfer data to the host at any given time over the shared bus. On the other hand, near-data accelerators (NDAs) can access data internally without contending for the shared bus. This enables higher peak bandwidth than the host can achieve.}\nHowever, because NDAs only offer a BW advantage when they access data in their local memory, data layout is crucial for performance. A naive layout may result in frequent data movement among NDAs and with the host. \\medel{In this paper, we assume that inter-NDA communication is only done through the host (alternatives are discussed in~\\cite{kim2013memory,poremba2017there}).}\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Baseline NDA Architecture.}}\nOur work targets NDAs that are integrated within high-capacity memory modules such that their role as both main memory and as accelerators is balanced. Specifically, our baseline NDA devices are 3D-integrated within DRAM chips on a module (DIMM), similar to 3DS DDR4 \\cite{ddr43ds} yet a logic die is added. DIMMs offer high capacity and predictable memory access\\mcut{, which are the required features for main memory}. \\meadd{Designs with similar characteristics include on-DIMM PEs~\\cite{ibm_pim_dimm,alian2018nmp} and on-chip PEs within banks~\\cite{upmem}.}\nAlternatively, NDAs can utilize high-bandwidth devices, such as the hybrid memory cube (HMC) \\cite{pawlowski2011hybrid} or high bandwidth memory (HBM) \\cite{standard2013high}. These offer high internal bandwidth but have limited capacity and high cost due to numerous point-to-point connections to memory controllers \\cite{asghari2016chameleon}. HMC provides capacity scaling via a network but this results in high access latency and cost. HBM does not provide such solutions. 
As a result, HBM devices are better for standalone accelerators than for main memory. \n\n\\hpcacut{\n\\mattan{I copied this to intro. Probably need to rearrange some things here.}\n\\fig{fig:baseline_nda} illustrates our baseline NDA architecture. Each DIMM is composed of multiple chips, with one or more DRAM dice stacked on top of a logic die in each chip, using the low-cost commodity 3DS approach. Processing elements (PEs) and a memory controller are located on the logic die. Each PE can access memory internally through the NDA memory controller. However, this internal access cannot conflict with external accesses from the host CPU (host). Therefore, each rank is in either host or NDA access mode and only one can access it at any given time. The host uses chip-address memory-mapped registers to control the NDAs~\\cite{farmahini2015nda}. \n}\n\n\\begin{table}\\centering\n \\ra{1.2}\n \\small\n\\begin{tabular}{@{}llll@{}}\\toprule\nOperations & Description & Operations & Description \\\\\n\\midrule\nAXPBY & ${\\vec{z} = \\alpha \\vec{x} + \\beta \\vec{y}}$ & DOT & ${c = \\vec{x} \\cdot \\vec{y}}$ \\\\\nAXPBYPCZ & ${\\vec{w} = \\alpha \\vec{x} + \\beta \\vec{y} + \\gamma \\vec{z}}$ & \tNRM2 & ${c = \\sqrt{\\vec{x} \\cdot \\vec{x}}}$ \\\\\nAXPY & ${\\vec{y} = \\alpha \\vec{y} + \\vec{x}}$ & SCAL & ${\\vec{x} = \\alpha \\vec{x}}$ \\\\\nCOPY & ${\\vec{y} = \\vec{x}}$ & GEMV & ${\\vec{y} = A\\vec{x}}$ \\\\\nXMY & ${\\vec{z} = \\vec{x} \\odot \\vec{y}}$ & & \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Example NDA operations used in our case-study application. Chopim is not limited to these operations.}\n\\label{tab:nda_ops} \n\\vspace*{-4mm}\n\\end{table}\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Coherence.}}\nCoherence mechanisms between the host and NDAs have been studied in prior NDA work~\\cite{ahn2015pim,boroumand2019conda,boroumand2016lazypim} and can be used as is with Chopim. We therefore do not focus on coherence in this paper. In our experiments, we use the existing coherence approach of explicitly and infrequently copying the small amount of data that is not read-only using cache bypassing and memory fences. \n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Address Translation.}}\n\\meadd{Application use of NDAs requires virtual to physical address translation. Some prior work~\\cite{hsieh2016accelerating,hong2016accelerating,gao2015practical} proposes address translation within NDAs to enable independent NDA execution without host assist. This increases both NDA and system complexity. As an alternative, NDA operations can be constrained to only access data within a physical memory region that is contiguous in the virtual address space. Hence, translation is performed by the host when targeting an NDA command at a certain physical address. This has been proposed for both very fine-grain NDA operations within single cache lines~\\cite{ahn2015pim,ahn2016scalable,bssync,kim2017toward,nai2017graphpim} and NDA operations within a virtual memory page~\\cite{oskin1998active}.}\n\\medel{Before the host and\/or NDAs accesses memory, logical-to-physical address translation should be done. One possible approach is to make the host OS do the address translation for all host and NDA accesses. 
On the other hand, there are prior work \\cite{hsieh2016accelerating,hong2016accelerating} that attempts to do address translation with NDAs to enable independent NDA execution without host's assist.} In this paper, \\meadd{we use host-based translation because of its low complexity and only check bounds within the NDAs for protection.} \\medel{choose the first approach where the host has direct control over NDAs.}\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{NDA Workloads.}}\nWe focus on NDA workloads for which the host inherently cannot outperform an NDA. These exhibit low temporal locality and low arithmetic intensity and are bottlenecked by peak memory bandwidth. By offloading such operations to the NDA, we mitigate the bandwidth bottleneck by leveraging internal memory module bandwidth. Moreover, these workloads typically require simple logic for computation and integrating such logic within DRAM chips\/modules is practical because of the low area and power overhead. \n\n\nFundamental linear algebra matrix and vector operations satisfy these criteria. Dense \\meadd{vector and matrix-vector operations, which are prevalent in machine learning primitives,} are particularly good candidates because of their deterministic and regular memory access patterns and low arithmetic-intensity.\nFor example, prior work off-loads matrix and vector operations of deep learning workloads to utilize high near-memory BW~\\cite{kim2016neurocube,gao2017tetris}.\nAlso, Kwon et al. propose to perform element-wise vector reduction operations needed for a deep-learning-based recommendation system to NDAs~\\cite{kwon2019tensordimm}.\nIn this paper, we focus on accelerating the dense matrix and vector operations summarized in \\tab{tab:nda_ops}. We demonstrate and evaluate their use in the SVRG application in Section \\ref{sec:collaboration}. \\meadd{Note that we use these as a concrete example, but our contributions generalize to other NDA operations.}\n\n\n\nNDA execution of graph processing has also been proposed because graph processing can be bottlenecked by peak memory bandwidth because of low temporal and spatial locality~\\cite{nai2017graphpim,zhang2018graphp,song2018graphr,ahn2016scalable,ahn2015pim}. We do not consider graph processing in this paper because we do not innovate in this context. \\bcut{Prior work either relies on high inter-chip communication to support the irregular access patterns of graph applications, or focuses on fine-grain cache-block oriented NDA operations rather than coarse-grain operations. The former is incompatible with our economic main-memory context and our research offers nothing new if only fine-grain NDA operations are used.}\n\n\n\n\\hpcacut{\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{NDA Instruction Granularity.}}\n\nAddresses used in user program are mapped to DRAM address in two steps: OS's address translation and memory controller's address mapping. The granularity of address translation is a \\textit{page}, which is typically 4KB in conventional systems and more coarse-grain (2MB and 1GB) pages are used in the systems using huge-page policies. On the other hand, the granularity of address mapping is a cache block (CB), which is typically 64B in CPU systems. Since data within cache block is contiguous in both logical and DRAM address spaces, once the DRAM address of a CB is determined, the host and NDAs will have the same view on the data within the CB. 
Under direct host control on NDAs, this enables simple programming models for NDA operations and, for this reason, prior work \\cite{ahn2015pim} has adopted NDA instructions that operate on each CB, which we call \\textit{fine-grain NDA instruction}. However, as more NDA devices are connected to the shared bus, more NDA instructions should be sent through the bus and, eventually, NDA performance will be bottlenecked by command bandwidth limitation. This also affects the host performance as contention on the bus increases. \n\nTo solve this problem, our approach is to enable \\textit{coarse-grain NDA instructions}. Each NDA instruction results in longer execution time so that, with less instructions, NDAs can remain active. The main challenge is how to enable this without going through address decoding steps that are required to figure out the DRAM address that NDAs have to access next. \n}\n\n\n\n\\section{Evaluation}\n\\label{sec:evaluation}\n\n\nWe present evaluation results for the various Chopim mechanisms, analyzing:\n(1) the benefit of coarse-grain NDA operations; (2) how bank partitioning improves NDA performance; (3) how stochastic issue and next-rank prediction mitigate read\/write turnarounds; (4) the impact of NDA workload write intensity and load imbalance; (5) how Chopim compares with rank partitioning; (6) the benefits of collaborative and parallel CPU\/NDA processing; and (7) energy efficiency.\n\\meadd{All results rely on the replicated FSM to enable using DDR4.}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Coarse-grain NDA Operation.}}\n\\fig{fig:cgnda} demonstrates how overhead for launching NDA instructions can degrade performance of the host and NDAs as rank count increases. To prevent other factors, such as bank conflicts, bank-level parallelism, and load imbalance from affecting performance, we use our BP mechanism, the NRM2 operation (because we can precisely control its granularity), and asynchronous launch. We run the most memory-intensive application mix (mix1) on the host. When more CBs are processed by each NDA instruction, contention between host transactions and NDA instruction launches decreases and performance of both improves. In addition, as the number of ranks grows, contention becomes severe because more NDA instructions are necessary to keep all NDAs busy. These results show that our data layout that enables coarse-grain NDA operation is beneficial, especially in concurrent access situation.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\n Takeaway 1:\n Coarse-grain NDA operations are crucial for mitigating contention on the host memory channel.\n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{eval\/cgnda.pdf}\n\t\\caption{Impact of coarse-grain NDA operations. (X-axis: the number of cache blocks accessed per NDA instruction.)}\n\t\\label{fig:cgnda}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\medskip\n\\noindent\\textbf{\\textit{Impact of Bank Partitioning.}}\n\\fig{fig:eval_bpart_vs_bshar} shows performance when banks are shared or partitioned between the host and NDAs\\mereplace{}{ which access different data}. We emphasize the impact of write intensity of NDA operations by running the extreme DOT (read intensive) and COPY (write intensive) operations. 
\\meadd{While not shown, SVRG falls roughly in the middle of this range.} We compare each memory access mode with an idealized case where we assume the host accesses memory without any contention and NDAs can leverage all the idle rank bandwidth without considering transaction types and other overheads. \n\nOverall, accelerating the read-intensive DOT with concurrent host access does not affect host performance significantly even with our aggressive approach. However, contention with the shared access mode significantly degrades NDA performance. This is because of the extra bank conflicts caused by interleaving host and NDA transactions. On the other hand, accelerating the write-intensive COPY degrades host performance. This happens because, in the write phase of NDAs when the NDA write buffer drains, the host reads are blocked while NDAs keep issuing write transactions due to long write-to-read turnaround time. To mitigate this problem, we show the impact of our write throttling mechanisms below. Note that host performance of mix0 is the lowest, despite its doubled core count, because contention for LLC increases and memory performance dominates overall performance.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\n Takeaway 2: Bank partitioning increases row-buffer locality and substantially improves NDA performance, especially for read-intensive NDA operations.\n \n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{eval\/bpart_vs_bshar.pdf}\n\t\\caption{Concurrent access to different memory regions.}\n\t\\label{fig:eval_bpart_vs_bshar}\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Mitigating NDA Write Interference.}}\n\\fig{fig:eval_mech_nda_write} shows the impact of mechanisms for write-intensive NDA operations. In this experiment, the most write-intensive operation, COPY, is executed by NDAs and the mechanisms are applied only during the write phase of NDA execution. Stochastic issue is used with two probabilities, 1\/4 and 1\/16, which clearly shows the host-NDA performance tradeoff compared to next-rank prediction. \n\nFor stochastic issue, the tradeoff between host and NDA performance is clear. If NDAs issue with high probability, host performance degrades. The appropriate issue probability can be chosen with heuristics based on host memory intensity though we do not explore this in this paper. On the other hand, the next-rank prediction mechanism shows slightly better behavior than the stochastic approach. Compared to stochastic issue with probability 1\/16, both host and NDA performance are higher. Stochastic issue extends the tradeoff range and does not require signaling. \\meadd{We use the robust next-rank prediction approach for the rest of the paper.}\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\nTakeaway 3: Throttling NDA writes mitigates the large impact of read\/write turnaround interference on host performance; next-rank prediction is robust and effective while stochastic issue does not require additional signaling. 
\n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{eval\/mech_nda_write.pdf}\n\t\\caption{Stochastic issue and next-rank prediction impact.}\n\t\\label{fig:eval_mech_nda_write}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Impact of Write-Intensity and Input Size.}}\n\\fig{fig:eval_nda_workload} shows host and NDA performance when different types of NDA operations are executed with different input sizes. The host application mix with the highest memory intensity (mix1) and the next-rank prediction mechanism is used. In addition, to identify the impact of input size, three different vector sizes are used: small (8KB\/rank), medium (128KB\/rank), and large (8MB\/rank). We evaluate asynchronous launches with the small vector size. We evaluate GEMV with three matrix sizes, where the number of columns is equal to each of the three vector sizes and the number of rows fixed at 128.\n\nOverall, performance is inversely related to write intensity, and short execution time per launch results in low NDA performance. The NRM2 operation with the small input has the shortest execution time. Because of its short execution time, NRM2 is highly impacted by the launching overhead and load imbalance caused by concurrent host access. On the other hand, GEMV executes longer than other operations and it is impacted less by load imbalance and launching overhead. With the asynchronous launch optimization, the impact of load imbalance decreases and NDA bandwidth increases.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\nTakeaway 4: Asynchronous launch mitigates the load imbalance caused by short-duration NDA operations.\n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.46\\textwidth]{eval\/nda_workload_size.pdf}\n\t\\caption{Impact of NDA operations and operand size.}\n\t\\label{fig:eval_nda_workload}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Scalability Comparison.}}\n\\fig{fig:eval_scal} compares Chopim with the performance of rank partitioning (RP). For RP, we assume that ranks are evenly partitioned between the host and NDAs. Since read- and write-intensive NDA operations show different trends, we separate those two cases. Other application results (SVRG, CG, and SC) are shown to demonstrate that their performance falls between these two extreme cases.\\bcut{We do not evaluate SVRG with RP because it disallows sharing.} We use the most memory-intensive mix1 as the host workload. \nThe first cluster shows performance when the baseline DRAM system is used. For both the read- and write-intensive NDA workloads, Chopim performs better than rank partitioning. This shows that opportunistically exploiting idle rank bandwidth can be a better option than dedicating ranks for acceleration. The second cluster shows performance when the number of ranks is doubled. Compared to rank partitioning, Chopim shows better performance scalability. While NDA bandwidth with rank partitioning exactly doubles, Chopim more than doubles due to the increased idle time per rank. SVRG results fall between extreme DOT and COPY cases. 
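\n\nA simple back-of-envelope model (a simplifying assumption on our part, not a measured result) captures why Chopim scales this way. Suppose each channel has $R$ ranks, each rank provides internal bandwidth $B$, and the host keeps the channel busy a fraction $u \\leq 1$ of the time. Since at most one rank per channel serves the host at any instant, the NDAs can opportunistically use roughly $(R-u)B$ of internal bandwidth, whereas rank partitioning dedicates $(R\/2)B$. Doubling the rank count raises the opportunistic term from $(R-u)B$ to $(2R-u)B$, more than a $2\\times$ increase, while the partitioned bandwidth exactly doubles. This idealized view ignores bank conflicts and read\/write turnaround penalties; it only illustrates the intuition behind the takeaway below.\n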
\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\n Takeaway 5: Chopim scales better than rank partitioning because short issue opportunities grow with rank count.\n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.43\\textwidth]{eval\/scalability.pdf}\n\t\\caption{Scalability Chopim vs.~rank partitioning.}\n\t\\label{fig:eval_scal}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\medskip\n\\noindent\\textbf{\\textit{SVRG Collaboration Benefits.}}\n\\fig{fig:eval_svrg} shows the convergence results with and without NDA (8 NDAs). We use a shared memory region to enable concurrent access to the same data and the next-rank prediction mechanism is used. Compared to the host-only case, the optimal epoch size decreases from \\textit{N} to \\textit{N\/4} when NDAs are used. This is because the overhead of summarization decreases relative to the host-only case. Furthermore, SVRG with delayed updates gains additional performance demonstrating the benefits made possible by the concurrent host and NDA access when each processes the portion of the workload it is best suited for. Though the delayed update updates the correction term more frequently, the best performing learning rate is lower than ACC with epoch \\textit{N\/4}, which shows the impact of staleness on the delayed update.\n\nWhen NDA performance grows by adding NDAs (additional ranks), delayed-update SVRG demonstrates better performance scalability. \\fig{fig:eval_svrg_speedup} compares the performance of the best-tuned serialized and delayed-update SVRG with that of host-only with different number of NDAs. We measure performance as the time it takes the training loss to converge (when it reaches $1e-13$ away from optimum). Because more NDAs can calculate the correction term faster, its staleness decreases, consequently, a higher learning rate with faster convergence is possible.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\nTakeaway 6: Collaborative host-NDA processing on shared data speeds up SVRG logistic regression by 50\\%. \n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\subfloat [Convergence over time with and without NDA.] {\n\t\t\\includegraphics[width=0.43\\textwidth]{eval\/svrg.pdf}\n\t\t\\label{fig:eval_svrg}\n\t} \\\\\n\t\\subfloat [NDA speedup scaling (normalized to host only).] {\n\t\t\\includegraphics[width=0.37\\textwidth]{eval\/svrg_speedup.pdf}\n\t\t\\label{fig:eval_svrg_speedup}\n\t} \\\\\n\t\\caption{Impact of NDA summarization in SVRG with and without delayed update (HO: Host-Only, ACC: Accelerated with NDAs, ACC\\_Best: Best among all ACC options).}\n\t\\label{fig:eval_svrg_results}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\medskip\n\\noindent\\textbf{\\textit{Memory Power.}}\nWe estimate the power dissipation in the memory system under concurrent access. The theoretical maximum possible power of the memory system is 8W when only the host accesses memory. When the most memory-intensive application mixes are executed, the average power is 3.6W. The maximum power of NDAs is 3.7W and is dissipated when the scratchpad memory is maximally used in the average gradient computation. In total, up to 7.3W of power is dissipated in the memory system, which is lower than the maximum possible with host-only access. 
This power efficiency of NDAs comes from the low-energy internal memory accesses and because Chopim minimizes overheads.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\nTakeaway 7: Operating multiple ranks for concurrent access does not increase memory power significantly. \n\\end{minipage}}\n\n\n\n\\section{Conclusion} \n\\label{sec:conclusion}\n\nIn this paper, we introduced solutions to share ranks and enable concurrent access between the host and NDAs. Instead of partitioning memory in coarse-grain manner, both temporally and spatially, we interleave accesses in fine-grain manner to leverage the unutilized rank bandwidth. To maximize bandwidth utilization, Chopim enables coordinating state between the memory controllers of the host and NDAs in low overhead, to reduce extra bank conflicts with bank partitioning, to efficiently block NDA write transactions with stochastic issue and next-rank prediction to mitigate the penalty of read\/write turnaround time, and to have one data layout that allows the host and NDAs to access the same data and realize high performance. Our case study also shows that collaborative execution between the host and NDAs can provide better performance than using just one of them at a time. Chopim offers insights to practically enable NDA while serving main memory requests in real systems and enables more effective acceleration by eliminating data copies and encouraging tighter host-NDA collaboration.\n\n\n\\section{Introduction} \n\\label{sec:intro}\n\nProcessing data in or near memory using \\emph{near data accelerators} (NDAs) is attractive for applications with low temporal locality and low arithmetic intensity. NDAs help by \nperforming computation close to data, saving power and utilizing proximity to overcome the bandwidth bottleneck of a main memory ``bus'' (e.g.,~\\cite{stone1970pim,kogge1994execube,gokhale1995processing,kogge1997processing,patterson1997case,kang1999flexram,guo20143d,farmahini2015nda,ahn2015pim,ahn2016scalable,asghari2016chameleon,gao2017tetris,alian2018nmp,alian2019netdimm,liu2018processing,boroumand2019conda}).\nDespite decades of research and recent demonstration of true NDA technology~\\cite{upmem,alian2018nmp,ibm_pim_dimm,pawlowski2011hybrid,nair2015active}, many challenges remain for making NDAs practical, especially in the context of \\emph{main-memory NDA}.\n\nIn this paper we address several of these outstanding issues in the context of an NDA-enabled main memory. Our focus is on memory that can be concurrently accessed both as an NDA and as a memory. Such memory offers the powerful capability for the NDA and host processor to collaboratively process data without costly data copies. Prior research in this context is limited to fine-grained NDA operations of, at most, cache-line granularity. However, we develop techniques for coarse-grain NDA operations that amortize host interactions across processing entire DRAM rows.\nAt the same time, our NDA does not block host memory access, even when the memory devices are controlled directly by the host (e.g., a DDRx-like DIMM), which can reduce access latency and ease adoption.\n\n\n\n\n\\fig{fig:baseline_nda} illustrates an exemplary NDA architecture, which presents the challenges we address, and is similar to other recently-researched main-memory NDAs~\\cite{farmahini2015nda,asghari2016chameleon,alian2018nmp}. 
We choose a DIMM-based memory system because it offers the high capacity required for a high-end server's main memory.\nEach DIMM is composed of multiple chips, with one or more DRAM dice stacked on top of a logic die in each chip, using a low-cost commodity 3DS-like approach. Processing elements (PEs) and a memory controller are located on the logic die. Each PE can access memory internally through the NDA memory controller. These local NDA accesses must not conflict with external accesses from the host (e.g., a CPU). A rank that is being accessed by the host cannot at the same time serve NDA requests, though the bandwidth of all other ranks in the channel can be used by the NDAs. \nThere is no communication between PEs\nother than through the host. While not identical, recent commercial NDA-enabled memories exhibit similar overall characteristics~\\cite{upmem,ibm_pim_dimm}. \n\n\n\\meadd{Surprisingly, no prior work on NDA-enabled main memory examines the architectural challenges of simultaneous and concurrent access to memory devices from both the host and NDAs. In this work, we address two key challenges for enabling performance-efficient NDAs in a memory system that supports concurrent access from both a high-performance host and the NDAs.}\n\nThe first challenge is that interleaved accesses may hurt memory performance because they can both decrease row-buffer locality and introduce additional read\/write turnaround penalties. The second challenge is that each NDA can process kernels that consume entire arrays, though all the data that a single operation processes must be local to a PE (e.g., a memory chip). Therefore, enabling cooperative processing requires that host physical addresses are mapped to memory locations (channel, rank, bank, etc.) in a way that both achieves high host-access performance (through effective and complex interleaving) and maintains NDA locality across all elements of all operands of a kernel.\nWe note that these challenges exist when using either a packetized interface, where the memory-side controller interleaves accesses between NDAs and the host, or a traditional host-side memory controller that sends explicit low-level memory commands. \n\n\\begin{figure}[b!ht]\n\\centering\n\t\\includegraphics[width=0.35\\textwidth]{fig\/baseline_nda.pdf}\n\t\\caption{Exemplary NDA architecture.}\n\t\\label{fig:baseline_nda}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\n\n\\hpcacut{\n\\mattan{This paragraph is hard to parse. Should be more precise and direct about setting the context. The first and last sentences, in particular don't flow all that well.} While dedicated memory is used for a discrete NDA, integrating an NDA with main memory offers three significant advantages. First, this allows for economical high-capacity NDAs because the already large host memory is used. Second, copying data prior to acceleration is unnecessary, saving time and energy. Third, the integration enables the fine-grain collaboration between the host processor and the accelerators.\nPrior work on NDA has focused on accelerating kernels and benchmarks without evaluating collaborative processing across both host and NDAs. Our architecture enables such collaboration, and we demonstrate and evaluate its benefits.\n}\n\n\\hpcacut{\nRecent work has started to explore such an NDA, with processing elements and local memory controllers integrated within high-capacity DIMMs~\\cite{farmahini2015nda,asghari2016chameleon,alian2018nmp}. 
However, this prior work cannot realize the potential of fine-grain interactions between host and NDA---it places constraints on the host's use of memory while the accelerator operates because of how memory accesses are partitioned between host and NDA. \\mattan{Somehow the specific phrasing of the previous sentence with the 'while' isn't all that clear.}\nWe observe that the internal bandwidth of memory modules (with multiple ranks) is unutilized even under intensive host memory access. By opportunistically issuing NDA commands whenever internal bandwidth is available, NDAs can operate without impacting host performance and memory capacity. Exploiting this unutilized rank bandwidth requires fine-grain interleaving between host and NDA transactions because rank idle periods are short. This raises four challenges, which we address in this paper: (1) interleaving host and NDA accesses breaks row-buffer locality and reduces performance by more frequent bank conflicts, (2) access interleaving also increases write-to-read turnaround penalties, (3) data layout in memory must be simultaneously usable for both host and NDAs, and (4) if the host and NDAs attempt to control memory separately, their memory controllers must be coordinated. \\mattan{The previous list is supposed to be exciting and motivating, but it's somehow a bit ``blah''---I'll try to rewrite this at some point.}\n}\n\n\n\n\n\\emph{For the first challenge (managing concurrent access)}, we identify reduced row-buffer locality because of interleaved host requests as interfering with NDA performance. In contrast, it is the increased read\/write turnaround frequency resulting from NDA writes that mainly interfere with the host. We provide two solutions in this context. First, we develop a new bank-partitioning scheme that limits interference to just those memory regions that are shared by the host and NDAs, thus enabling colocating host-only tasks with tasks that use the NDAs. This new scheme is the first that is compatible with huge pages and also with the advanced memory interleaving functions used in recent processors.\nPartitioning mitigates interference from the host to the NDAs and substantially boosts their performance (by $1.5-2\\times$). \n\nSecond,\nwe control interference on shared ranks by opportunistically issuing NDA memory commands to those ranks that are even briefly not used by the host and curb NDA to host interference with mechanisms that can throttle NDA requests, either selectively when we predict a conflict (\\emph{next-rank prediction}) or stochastically\n\n\n\\emph{For the second challenge (NDA operand locality)}, we enable fine-grain collaboration by architecting a new data layout that preserves locality of operands within the distributed NDAs while simultaneously affording parallel accesses by the high-performance host. This layout requires minor modifications to the memory controller and utilizes coarse-grain allocations and physical-frame coloring in OS memory allocation. This combination allows large arrays to be shuffled across memory devices (and their associated NDAs) in a coordinated manner such that they remain aligned in each NDA. This is crucial for coarse-grain NDA operations that can achieve higher performance and efficiency than cacheline-oriented fine-grain NDAs (e.g.,~\\cite{ahn2015pim,kim2017toward,hsieh2016transparent}). 
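\n\nTo make this alignment property concrete, the sketch below models a toy XOR-hashed physical-to-DRAM mapping; the field widths, constants, and names are illustrative assumptions and not Chopim's exact mapping functions. When two operands are allocated to system rows whose hashed PFN bits (their color) are equal and start at the same row offset, every pair of corresponding elements maps to the same channel and rank, so each NDA finds both of its inputs locally:\n\\begin{verbatim}\n# Toy XOR-hashed mapping (illustrative, not Chopim's actual function)\nBLOCK_SHIFT = 6                 # 64-byte cache blocks\nSYSROW_BLOCKS = 1 << 15         # blocks per system row (assumed)\n\ndef chan_rank(paddr):\n    blk = paddr >> BLOCK_SHIFT\n    pfn, off = divmod(blk, SYSROW_BLOCKS)  # system-row number, offset\n    chan = (off ^ pfn) & 1                 # XOR-hashed channel bit\n    rank = ((off >> 1) ^ (pfn >> 1)) & 1   # XOR-hashed rank bit\n    return chan, rank\n\n# A and B live in system rows with equal hashed PFN bits (same color).\nA_base = (0 * SYSROW_BLOCKS) << BLOCK_SHIFT\nB_base = (4 * SYSROW_BLOCKS) << BLOCK_SHIFT\nfor i in range(16):\n    a = chan_rank(A_base + (i << BLOCK_SHIFT))\n    b = chan_rank(B_base + (i << BLOCK_SHIFT))\n    assert a == b   # aligned elements share channel and rank\n\\end{verbatim}\nThe sketch checks only the invariant; the allocation and coloring machinery that establishes it is described in Section \\ref{subsec:impl_data_layout}.\n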
\n\n\n\n\\emph{An additional and important challenge} exists in systems where the host maximizes its memory performance by directly controlling memory devices \\meadd{rather than relying on a packetized interface~\\cite{pawlowski2011hybrid,hadidi2018performance}}. Adding NDA capabilities requires providing local memory controllers near memory in addition to the host ones\\meadd{, which introduces a coordination challenge}. We coordinate memory controllers and ensure a consistent view of bank and timing state \\meadd{with only minimal signaling that does not impact performance by replicating the controller finite state machines (FSMs) at both the NDA and host sides of the memory channels}.\nReplicating the FSM requires all NDA accesses to be determined only by the NDA operation (known to the host controller) and any host memory operations. Thus, no explicit signaling is required from the NDAs back to the host. We therefore require that for non-packetized NDAs, each NDA operation has a deterministic access pattern for all its operands (which may be arbitrarily fine-grained). \n\nIn this paper, we introduce \\emph{Chopim}, a SW\/HW holistic solution that enables concurrent host and NDA access to main memory by addressing the challenges above with fine temporal access interleaving to physically-shared memory devices. We perform a detailed evaluation both when the host and NDA tasks process different data and when they collaborate on a single application. We demonstrate that Chopim enables high NDA memory throughput (up to 97\\% of unutilized bandwidth) while maintaining host performance. Performance and scalability are better than with prior approaches of partitioning ranks and only allowing coarse-grain temporal interleaving, or with only fine-grain NDA operations. \n\nWe demonstrate the potential of host and NDA collaboration by studying a machine-learning application (logistic regression with stochastic variance-reduced gradient descent~\\cite{johnson2013accelerating}). We map this application to the host and NDAs such that the host stochastically updates weights in a tight inner loop that utilizes the speculation and locality mechanisms of the CPU while NDAs concurrently compute a correction term across the entire input data that helps the algorithm converge faster. Collaborative and parallel NDA and host execution can speed up this application by $2\\times$ compared to host-only execution and $1.6\\times$ compared to non-concurrent host and NDA execution. 
We then evaluate the impact of colocating such an accelerated application with host-only tasks.\n\n\nIn summary, we make the following main contributions:\n\\begin{itemize}\n\\item We identify new challenges in concurrent access to memory from the host and NDAs: bank conflicts from host accesses curb NDA performance and read\/write-turnaround penalties from NDA writes lower host performance.\n\\item We reduce bank conflicts with a new bank partitioning architecture that, for the first time, is compatible with both huge pages and sophisticated memory interleaving.\n\\item To decrease read\/write-turnaround overheads, we throttle NDA writes with two mechanisms: \\textit{next-rank prediction} delays NDA writes to the rank actively read by the CPU; and \\textit{stochastic issue} throttles NDA writes randomly at a configurable rate.\n \n \n \n\\item We develop, also for the first time, a memory data layout that is compatible with both the host and NDAs, enabling them to collaboratively process the same data in parallel while maintaining high host performance with sophisticated memory address interleaving.\n\n\\item To show the potential of collaboratively processing the same data, we conduct a case study of an important ML algorithm that leverages the fast CPU for its main training loop and the high-BW NDAs for summarization steps that touch the entire dataset. We develop a variant that executes on the NDAs and CPU in parallel, which increases speedup to 2X.\n\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\\section{Motivation}\n\\label{sec:motiv}\nWe motivate our work with three key questions for a main-memory NDA, which we later answer with the Chopim architecture.\n\n\\medskip\n\\noindent\\textbf{\\textit{Q1: How can NDA-enabled DRAM be simultaneously used for both compute and host high-capacity memory?}} \n\nAs NDA-equipped memory devices are not only accelerators but also main memory, it is important to effectively manage the situation where memory is simultaneously accessed by both the host and NDAs. \\textit{The main challenge is how to maximize NDA performance and minimize host performance degradation while avoiding conflicts on the shared data path between the host and NDAs.}\nPrior work solves this in three directions: \\textit{unified memory interface}, \\textit{spatial partitioning}, and \\textit{time sharing}. In the first approach, every memory requests for the host and NDAs are managed in one place and issued through the same memory interface. However, as all the memory commands are transferred through the same bus, performance scalability along with the number of ranks is limited by the command-bus bandwidth. The next two approaches solve this by separating the memory interface of the host and NDAs. We also assume that each NDA can independently access memory with its own MC.\n\nThe second approach partitions memory into two independent groups (e.g. group of ranks) and allows the host and NDAs to exclusively access their own group of memory [?]. This approach requires a large fraction of memory capacity to be reserved for NDAs. Moreover, the potential bandwidth gain of NDAs is limited by the number of ranks dedicated to NDAs. In time-sharing, ownership of memory alternates between the host and NDAs and only one of them accesses memory [?]. Before the ownership switches, an existing mechanism initializes all the bank state which incurs overhead due to the initialization and warm-up overhead. Therefore, coarse-grain switching is required to amortize the overhead (see Section ? 
for more details). However, in this mechanism, how much time is allocated to each processor determines the performance of the host and NDAs. \n\nTo mitigate this strict performance tradeoff, our approach leverages, for NDA execution, the internal memory bandwidth that is left unutilized while the host accesses memory. \\fig{fig:motiv_rank_idle} shows the bandwidth utilization of rank internal buses when only memory-intensive host programs are executed. Our application mixes and the baseline configuration are summarized in Table \\ref{tab:eval_config}. Overall, about 60\\% of the internal-bus bandwidth is unused. However, the majority of idle periods are just $10-250$ cycles. Therefore, to utilize these idle periods for NDAs, mechanisms for fine-grain ownership switching are required. \n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\\caption{Rank idle-time breakdown vs. idleness granularity.}\n\t\\label{fig:motiv_rank_idle}\n\\end{figure}\n\n\n\n\\medskip\n\\noindent\\textbf{\\textit{How can the unique position of NDAs always sharing memory devices with the host be exploited?}}\n\nWe focus on the uniqueness of NDAs compared to other discrete accelerators. If many different accelerators exist in a system, what are the reasons to use NDAs? Unlike other accelerators, NDAs share high-capacity memory with the host and can access data with high peak bandwidth. This provides an opportunity for the host and NDAs to access the same data and process it collaboratively in a fine-grain manner, or even concurrently. The next question is whether any application can benefit from the collaboration enabled by this uniqueness of NDAs. Such an application should meet the following requirements: (1) it should effectively leverage the strengths of each processor, (2) coherence should be handled infrequently, and (3) both processors should work on large shared data. We introduce one use case that satisfies these requirements, \\textit{stochastic variance reduced gradient (SVRG)}, in Section \\ref{sec:collaboration}.\n\nSince the host and NDAs share the same data, we cannot customize the data layout for each processor. Prior work \\cite{farmahini2015nda,akin2015hamlet} reorganizes data between ownership switches, but this approach cannot enable fine-grain interleaving of host and NDA accesses to the shared data. Therefore, the main challenge is how to find a single data layout that serves both well. When either just the host or just the NDAs own memory for a fairly long time, we can customize the data layout for each processor, possibly copying and laying out data differently when switching the memory ownership. However, concurrent NDA and host access to the same data requires a single data layout that works well for both the host and NDA at the same time. Otherwise, two copies of the data with different layouts are necessary, incurring high capacity overhead.\n\nTo meet the data-layout requirements of both processors, the following must be considered. The host can maximally exploit bank-level parallelism when data is well-distributed across banks. To accomplish this, modern MCs use complex address hash functions for physical-to-DRAM address mapping. On the other hand, NDAs can only fully utilize internal peak bandwidth when operands are all in local memory. 
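\n\nThe tension can be illustrated with a small sketch (the hash below is an illustrative assumption, not a specific memory controller's function). An XOR-based mapping spreads consecutive cache blocks of a vector across channels and ranks, which is exactly what the host wants for bandwidth and bank-level parallelism, but it leaves no single rank holding all of the elements that one NDA operation would consume:\n\\begin{verbatim}\n# Toy hashed mapping: 2 channels x 2 ranks (illustrative assumption)\ndef chan_rank(block_index, pfn=0):\n    chan = (block_index ^ pfn) & 1\n    rank = ((block_index >> 1) ^ (pfn >> 1)) & 1\n    return chan, rank\n\nprint([chan_rank(b) for b in range(8)])\n# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)]\n# Good for the host: accesses spread over all channels and ranks.\n# Bad for an NDA: no single rank holds the whole vector.\n\\end{verbatim}\n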
\n\n\\medskip\n\\noindent\\textbf{\\textit{How can we practically address the above questions under direct host control for non-packetized DRAMs?}}\n \nTo enable near-data acceleration and still benefit from the deterministic and low memory latency of non-packetized DRAMs, the above problems should be resolved under direct host control. Though the host and NDAs seem to share only the internal DRAM bus, in fact they contend for the command, address, and data (DAC) bus that connects them, since the host must control the NDAs directly via the DAC bus. \n\nTo mitigate this contention and realize high host and NDA performance, the following two problems should be resolved without frequent host-NDA communication. First, each memory controller should efficiently track global memory-controller state to support fine-grain access interleaving. Tracking this global state without always initializing it before ownership switches would require both MCs to communicate with each other frequently, which must be avoided to limit contention on the DAC bus. \n\nSecond, to minimize the number of NDA instructions launched by the host, each NDA instruction should encode a large amount of work in a compact format (which we call a \\textit{macro NDA instruction}) so that a few NDA instructions can keep NDAs busy for a fairly long time. However, as operand addresses are determined by the OS and the host's memory controller, they would otherwise have to be calculated by the host and sent to the NDAs for each NDA cache-block access, as in PEI \\cite{ahn2015pim}. Therefore, to enable macro NDA instructions, NDAs should figure out the addresses of their next operands without the host's help. \n\n\\medskip\n\\noindent\\textbf{\\textit{Summary.}}\nTo support host-NDA concurrent access to the same memory devices, the following problems should be resolved. \n\n\\begin{itemize}[topsep=.5ex,itemsep=.5ex,partopsep=1ex,parsep=0ex]\n\t\\item Fine-grain ownership switching is required to efficiently share the same memory among independent host and NDA processes. \n\t\\item A single data layout is required for the data shared between the host and NDAs to avoid capacity and performance overhead while collaboratively processing it.\n\t\\item To solve the above problems under direct host control, the host and NDAs should communicate minimally while still supporting necessary functions such as state tracking and instruction launching. \n\\end{itemize}\n\n\n\n \n \n\n\n\n\\section{Host-NDA Collaboration}\n\\label{sec:collaboration}\n\nIn this section, we describe a case study to show the potential of concurrent host-NDA execution by collaboratively processing the same data. Our case study shows how to partition \\meadd{an ML training task between the host and NDAs such that each processor leverages its strengths. As is common to training and many data-processing tasks, the vast majority of shared data is read-only, simplifying parallelism.}\n\\medel{Also, our case study is a good example since infrequent and low-overhead operations are required to maintain coherence while the host and NDAs can independently access large and shared read-only data of which access time dominates the overall execution time.}\n\nWe use the machine-learning technique of logistic regression with stochastic variance reduced gradient (SVRG)~\\cite{johnson2013accelerating} as our case study. 
\\medel{SVRG is a machine learning technique that enables faster convergence by reducing variance introduced by sampling.} \\fig{fig:svrg} shows a simplified version of SVRG and the opportunity for collaboration.\n\\meadd{The algorithm consists of two main tasks within each outer-loop iteration. First, the entire large input matrix \\textit{A} is \\emph{summarized} into a single vector \\textit{g} (see \\fig{fig:impl_avg_grad_example_code} for pseudocode). This vector is used as a correction term when updating the model in the second task. This second task consists of multiple inner-loop iterations. In each inner-loop iteration, the learned model \\textit{w} is updated based on a randomly-sampled vector \\textit{a} from the large input matrix \\textit{A}, the correction term \\textit{g}, and a stored model \\textit{s}, which is updated at the end of the outer-loop iteration. }\n\\medel{\nA large input matrix, \\textit{A}, is evenly partitioned into multiple tiles and stored in memory. In every inner-loop iteration, the host samples a random element \\textit{a} within \\textit{A} to update the learned model \\textit{w}. Other than the large input, other data (\\textit{w, s,} and \\textit{g}) takes advantage of the CPU caches. The tight inner loop is therefore ideally suited for high-end CPU execution.\n}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/svrg.pdf}\n\t\\caption{Collaboration between host and NDAs in SVRG.}\n\t\\label{fig:svrg}\n\t\\vspace*{-4mm}\n\\end{figure}\n\nThe first task is an excellent match for the NDAs. \n\\medel{The SVRG algorithm periodically calculates a correction term, \\textit{g}, by \\textit{summarizing} the entire input data (example code in \\fig{fig:impl_avg_grad_example_code}). Because} The summarization operation is simple, exhibits little reuse, and traverses the entire large input data. \\medel{, it is ideally suited for the NDAs The term \\textit{g} is used for correcting error in the host workload, \\textit{f}. With Chopim,}\nIn contrast, the second task with its tight inner loop is well suited for the host. The host can maximally exploit locality captured by its caches while NDAs can leverage their high bandwidth for accessing the entire input data \\textit{A}. Note that in SVRG, an \\textit{epoch} refers to the number of inner-loop iterations.\n\nThe main tradeoff in SVRG is as follows. When summarization is done more frequently, the quality of the correction term increases and, consequently, the per-step convergence rate increases. On the other hand, the overhead of summarization also increases when it is performed more frequently, which offsets the improved convergence rate. Therefore, the \\textit{epoch} hyper-parameter, which determines the frequency of summarization, should be carefully selected to optimize this tradeoff.\n\n\\medskip\n\\noindent\\textbf{\\textit{Delayed-Update SVRG.}}\nAs Chopim enables concurrent access from the host and NDAs, we explore an algorithm change to leverage collaborative parallel processing. Instead of alternating between the summarization and model update tasks, we run them in parallel on the host and NDAs. Whenever the NDAs finish computing the correction term, the host and NDAs exchange the correction term and the most up-to-date weights before continuing parallel execution. While parallel execution is faster, it results in using stale \\textit{s} and \\textit{g} values from one epoch behind. 
The main tradeoff in \\textit{delayed-update SVRG} is that per-iteration time is improved by overlapping execution, whereas convergence rate per iteration degrades due to the staleness. Similar tradeoffs have been observed in prior work \\cite{bengio2003neural,langford2009slow,recht2011hogwild,dean2012large}. \\meadd{We later show that delayed-update SVRG can converge in $40\\%$ less time than when serializing the two main SVRG tasks.}\n\nTo avoid races for \\textit{s} and \\textit{g} in this delayed-update SVRG, we maintain private copies of these small variables and use a memory fence that guarantees completion of DRAM writes after the data-exchange step (which the runtime coordinates with polling). Note that we bypass caches when accessing data produced\/consumed by NDAs during the data-exchange step. Since \\textit{s} and \\textit{g} are small and copied infrequently, the overheads are small and amortized over numerous NDA computations. Whether delayed updates are used or not, the host and NDAs share the large data, \\textit{A}, without copies.\n\n\n\n\\section{Chopim}\n\\label{sec:chonda}\n\nIn this section, we present a set of solutions to enable concurrent access to the same memory and realize high performance.\n\n\n\\subsection{Opportunistic NDA Issue}\n\\label{subsec:opportunistic_nda_issue}\n\nThe basic policy for Chopim is to aggressively leverage the unutilized rank bandwidth by issuing NDA commands whenever possible. That is, if no incoming host request is detected, NDAs always issue their commands. Since the NDAs can issue whenever there is an opportunity, this maximizes bandwidth utilization.\nOne potential problem is that an NDA command issued in one cycle may delay a host command that could have issued in the next. Fortunately, NDA read transactions have a small impact on subsequent host commands, and ACT and PRE commands are issued infrequently by NDAs. \\mdel{However, if an NDA issues a write transaction in one cycle and the next host command is a read, the penalty of write-to-read turnaround is not negligible. We address this below.}\n\n\n\\subsection{Throttling NDA Writes}\n\\label{subsec:block_nda_write}\n\nWhen NDAs aggressively issue write commands, the host's read transactions are blocked by the write-to-read interleaving penalty, while NDA write transactions keep issuing because the rank appears idle. This degrades host performance while improving NDA performance. To avoid this starvation problem, Chopim provides mechanisms to throttle NDA write transactions. \n\nThe first mechanism is to issue NDA writes with a predefined probability, reducing the rate of NDA writes. We call this mechanism \\textit{stochastic NDA issue}. When NDAs detect rank idleness, they flip a coin and decide whether to issue a write transaction or not. By adjusting the probability, the performance of the host and NDAs can be traded off.\n\nThe second approach is to predict the rank that the host is going to access next and prevent NDA writes to that rank from issuing. The prediction is based on the requests waiting in the transaction queue. \nWith our baseline FRFCFS scheduler, we observe that the oldest request in the queue will be served next with high likelihood even though the scheduler can reorder memory operations. In other words, even a simple heuristic can be used to predict the memory scheduler's next decision about which rank the host will use. 
Because the host MC knows whether the NDA that accesses that rank is in write mode or not, it can decide to throttle the NDA. For the packetized memory interface, such as HMC, no interface change is required to enable this mechanism since one memory controller has all the required information and control over memory requests from both sides. However, for DDR interface, we assume a dedicated pin is available for signaling the NDA to block its next write.\n\n\n\n\n\\subsection{Shared vs.~Partitioned NDA\/Host Regions}\n\\label{subsec:tradeoff_cap_perf}\n\nChopim offers two approaches for utilizing main memory by NDAs. The first is to partition memory such that NDAs only access one subset of memory while regular host loads and stores access the complement subset. Accesses to memory from NDAs and the host are still concurrent. In the second mode, the host and NDAs concurrently access the same range of addresses. The partitioned mode provides isolation and reduces interference. However, partitioning removes physical addresses from the host and assigns them to the NDA. Hence, changing partitioning decisions should be done infrequently, reducing flexibility. \\madd{On the other hand,} Sharing provides flexibility in which data is processed by the NDA and eliminates the need for data copies. The two modes can be mixed with a portion of memory dedicated to NDA access and other regions that are shared, though we do not explore mixed isolation and sharing in this paper.\n\nWe address two main challenges to enable the two modes above. \nThe challenge with sharing addresses between host and NDAs is in how data is laid out across memory locations. The typical layout for the host spreads data across DRAM chips and ranks, but chip- and rank-locality is required for the NDAs. We resolve this data alignment issue by rearranging data at the host memory controller for chip-locality and relying on physical frame contiguity or page coloring by the OS.\nWe use \\textit{bank partitioning} to provide isolation and minimally impact host performance. We develop a new bank partitioning mechanism that permits the use of sophisticated physical-to-DRAM address mapping functions, \\madd{even when huge pages that span many banks are used}. \n\n\n\n\n\n\n\n\n\n\\mcut{Our simulation results \nThe second memory option is the large-shared memory where the host and NDA transactions contend for the same banks. As a result, extra bank conflicts occur while interleaving transactions and this significantly degrades NDA performance. On the other hand, NDAs can be used without worrying about the capacity limit in this large-shared memory region. However, to allow concurrent access to the same data in this region, a data alignment issue should be resolved.\nIn the large-shared memory, the baseline address mapping is used and its address hashing causes a data alignment issue for NDAs. Typically, address hashing is used to shuffle data across banks in the system but this hinders localizing operands of NDA operations.\n\\mattan{To here. Too long to make point.} \\benjamin{Done.} To solve this problem, we apply the same page coloring and remapping mechanism in a slightly different way. 
The detailed mechanism is explained in Section \\ref{subsec:impl_data_layout}.\n}\n\n\n\n\n\n\n\n\\subsubsection{Data Layout for Shared Addresses}\n\\label{subsec:impl_data_layout}\n\nData layout in a shared region is challenging because the host and NDAs have different constraints or preferences for data layout: the host prefers spreading addresses across chips, ranks, banks, and channels to maximize bandwidth and reduce bank conflict likelihood while the NDAs require contiguity within chips and ranks. To satisfy both, we focus on laying out data at the chip (device) and rank levels. \n\n\n\\medskip\n\\noindent\\textbf{\\textit{Data Layout Across DRAM Chips.}}\nIn the baseline system, each \\rev{4-byte} word is striped across multiple chips,\nwhereas in our approach each word is located in a single chip so that NDAs can access words from their local memory.\nBoth the host and NDAs can access memory without copying or reformatting data (as required by prior work~\\cite{farmahini2015nda}). Memory blocks still align with cache lines, so this layout change is not visible to software. This layout precludes the critical word first optimization from DRAM, but recent work concludes the impact is minimal because the relative latency difference in current memory systems is very small (e.g.,~\\cite{yoon2012dgms}). \n\n\n\\medskip\n\\noindent\\textbf{\\textit{Data Layout Across Ranks.}}\nA typical physical to DRAM address hash-based mapping rearranges contiguous addresses across multiple DRAM ranks and banks, breaking the locality needed for NDAs. \\fig{fig:rank_layout} shows an example of naive data layout by using the baseline address mapping (left) and the desired layout for shared regions (right). \\madd{In the naive layout}, the first element of vector ${A}$ and that of vector ${B}$ are located in different ranks, breaking NDA locality. That same data is in the same rank in the desired layout, while still satisfying host access requirements \\madd{for high bandwidth and low bank contention}. We achieve the desired behavior by mapping NDA operands that are used together to groups of frames that all have the same \\sout{shuffling} \\rev{interleaving} pattern across channels and ranks. In this way, as long as their initial column alignment is the same, all operands remain aligned even though elements are spread across banks and ranks.\n\n\n\\madd{Our current implementation achieves this using coarse-grain memory allocation and coloring. \nWe allocate memory such that operands are aligned at the granularity of one DRAM row for each bank in the system which we call a \\textit{system row} (e.g., 2MB for a DDR4 1TB system). This is simple with the common buddy allocator if allocation granularity is also a system row, and can use optimizations that already exist for huge pages~\\cite{yun2014palloc,kwon2016coordinated,gorman2004understanding}. The fragmentation overheads of coarse allocation are negligible because there is little point in attempting NDA execution for small operands.}\n\n\\madd{In our baseline address mapping~\\ref{fig:baseline_addr_map}, the rank and channel addresses are determined partly by the low-order bits that fall into the frame offset field and partly by the high-order bits that fall into the physical frame number (PFN) field. The frame offsets of operands are kept the same because of the above alignment. On the other hand, PFNs are determined by the OS. 
Therefore, to keep those high-order bits the same among operands, the Chopim runtime indicates a \\textit{shared color} when it requests memory from the OS and uses the same color for all operands of a specific NDA operation. The OS uses the color information to ensure that all operands with the same color (same shared region) follow the same channel and rank interleaving pattern. To do this, the OS needs to know which physical address bits are used to select ranks and channels by the host memory controller. Coloring limits the region size because the bits that determine rank and channel must have the same value within the region. Multiple regions can be allocated for the same process. \\rev{Though we focus on one address mapping here, we believe our approach of coarse-grain and address-mapping-aware allocation can be generalized and applied to other address mappings as well.}}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\begin{minipage}[t]{0.95\\textwidth}\n\t\t\\begin{minipage}[t]{0.6\\textwidth}\n\t\t\n\t\t\t\\subfloat [Data layout across ranks for concurrent access.] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{fig\/rank_layout.pdf}\n\t\t\t\t\\label{fig:rank_layout}\n\t\t\t}\n\t\t\\end{minipage} %\n\t\t\\quad %\n\t\t\\begin{minipage}[t]{0.4\\textwidth}\n\t\t\t\\subfloat [Baseline address mapping] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{..\/fig\/baseline_addr_map.pdf}\n\t\t\t\t\\label{fig:baseline_addr_map}\n\t\t\t} \\\\\n\t\t\t\\subfloat [Host-side address mapping for bank partitioning.] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{..\/fig\/hashing_addr_map.pdf}\n\t\t\t\t\\label{fig:hashing_addr_map}\n\t\t\t}\n\t\t\\end{minipage}\n\t\\end{minipage}\n\t\\caption{Data layout (shared region) and bank partitioning (partitioned region).}\n\t\\label{fig:addr_map}\n\n\\end{figure*}\n\n\\subsubsection{Bank Partitioning}\n\\label{subsec:impl_bpart}\n\nPrior work on bank partitioning relies on the OS to understand how physical addresses are mapped to banks~\\cite{mi2010bankpark,jeong2012balancing,liu2012software}. The OS colors pages to assign them to different bank partitions and then allocate frames that map to a specific set of banks to a specific set of colors. Memory accesses for different colors are then isolated to different banks. Colors can be assigned to different cores or threads, or in our case, for host and NDA isolated use. Unfortunately, advanced physical-to-DRAM address mapping functions and the use of 2MB pages prevent prior bank partitioning schemes from working because the physical frame number (PFN) bits that the OS can control can no longer specify arbitrary bank partitions. \n\n\\fig{fig:baseline_addr_map} shows an example of a modern physical address to DRAM address mapping \\cite{pessl2016drama}. One color bit in the baseline mapping belongs to the page offset field so bank partitioning can, at best, be done at two-bank granularity. More importantly, when huge pages are used (e.g., 2MB), this baseline mapping cannot be used to partition banks at all. \n\nTo overcome this limitation, we propose a new interface that partitions banks into two groups---host- and NDA-reserved banks---with flexible DRAM address mapping and any page size. 
Specifically, our mechanism only requires that the most significant physical address bits are only used to determine DRAM row address, as is common in recent hash mapping functions, as shown in \\fig{fig:hashing_addr_map}.\n\nWithout loss of generality, assume 2 banks out of 16 banks are reserved for the NDA. First, the OS splits the physical address space for host and NDA with the host occupying the bottom of the address space: $0-\\left(14\\times\\mathit{(bank\\_capacity)}-1\\right)$. The rest of the space (with the capacity of 2 banks) is reserved for the NDA and the OS does not use it for other purposes. This guarantees that the most significant bits (MSBs) of the host address are never b'111. In contrast, addresses in the NDA space always have b'111 in their MSBs.\n\nThe OS informs the memory controller that it reserved 2 banks (the topmost banks) for NDAs. Host memory addresses are mapped to DRAM locations using any hardware mapping function, which is not exposed to software and the OS. The idea is then to remap addresses that initially fall into NDA banks into the reserved address space that the host is not using. Additional simple logic checks whether the resulting DRAM address bank ID of the initial mapping is an NDA reserved bank. If they are not, the DRAM address is used as is. If the DRAM address is initially mapped to a reserved bank, the MSBs and the bank bits are swapped. Because the MSBs of a host address are never b'1110 or b'1111, the final bank ID will be one of the host bank IDs. Also, because the bank ID of the initial mapping result is 14 or 15, the final address is in a row the host cannot access with the initial mapping and there is no aliasing.\n\n\\medskip\n\\noindent\\textbf{\\textit{Host Access to NDA Region.}}\nThe OS does not allocate regular host addresses in the NDA region, but some host requests will read and write NDA data to provide initial inputs to NDA operations and read outputs. Requests with addresses that map to the NDA region use a mapping function that does not hash bank addresses and simply uses the address MSBs for banks. This ensures that NDA addresses only access NDA-reserved banks. Furthermore, the way other address bits are mapped to DRAM addresses is kept simple and exposed to software. \n\n\n\n\\subsection{Tracking Global Memory Controller State}\n\\label{subsec:track_gstate}\n\nUnlike conventional systems, Chopim also enables an architecture that has two memory controllers (MCs) managing the bank and timing state of each rank. \\madd{This is the case when the host continues to directly manage memory even when the memory itself is enhanced with NDAs}, which requires coordinating rank state information. Information about host transactions is easily obtained by the NDA MCs as they can monitor incoming transactions and update the state tables accordingly. However, the host MC cannot track NDA transactions due to command bandwidth limits. \n\nTo solve this problem, we leverage the deterministic execution flow of the NDA workloads that we focus on. Once the base address of operands and the operation type are determined, NDAs access memory deterministically and are controlled by microcode and a finite state machine (FSM). If the FSMs are replicated and located also in the host side, the host MC effectively tracks all NDA transactions. Also, the host MC knows its next transaction and target rank. With this information, the host MC deduces which NDA will be affected by its transactions and for how long. 
Thus, the host MC can track the global state in real time without any communication. \\madd{Stochastic NDA issue is still easily supported by replicating the pseudo-random number generator.}\n\nFigure \\ref{fig:repl_fsm} shows example operations of the replicated FSMs where one rank is being accessed by the host (left) and the other by an NDA (right). When the host accesses rank0, both host and NDA memory controller state is updated based on the issued host command. On the other hand, when the host does not access but NDA1 accesses rank1 (right), the host-side replicated FSM updates its state of rank1 in the host memory controller. This mechanism is not required when only a single memory controller exists for each rank, as with a packetized memory interface~\\cite{pawlowski2011hybrid,genz2017genz,kim2013memory}.\n\n\\medskip\n\\noindent\\textbf{\\textit{Discussion.}} \\madd{A current limitation of our replicated-FSM approach is that it applies to workloads with generally data-independent behavior \\rev{where execution flow is not determined by data values}. This includes a rich and important set of applications and kernels that are of practical importance. We leave extensions to more data-dependent workload behavior to future work. We also note that the particular synchronization problem does not exist in a packetized memory interface, while the other problems we address still do. We consider this work a starting point on illuminating and solving a set of problems for NDAs that share their physical memory with the host and where applications tightly collaborate across the host and NDAs.}\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.37\\textwidth]{fig\/repl_fsm.pdf}\n\t\\caption{Example operation of replicated FSMs.}\n\t\\label{fig:repl_fsm}\n\n\\end{figure}\n\n\n\n\n\n\n\n \n\n\n\\section{Methodology}\n\\label{sec:method}\n\nTable \\ref{tab:eval_config} summarizes our system configuration, DRAM timing parameters, energy components, benchmarks, and machine learning configurations. For bank partitioning, we reserve one bank per rank for NDAs and the rest for the host. We use Ramulator \\cite{kim2016ramulator} as our baseline DRAM simulator and add the NDA memory controllers and PEs to execute the NDA operations. We modify the memory controller to support the Skylake address mapping~\\cite{pessl2016drama} and our bank partitioning and data layout schemes. To simulate concurrent host accesses, we use gem5 \\cite{binkert2011gem5} with Ramulator. We choose host applications that have various memory intensity from the \\textit{SPEC2006} \\cite{henning2006spec} and \\textit{SPEC2017} \\cite{panda2018wait} benchmark suites and form 9 different application mixes with different combinations (Table \\ref{tab:eval_config}). Mix0 and mix8 represent two extreme cases with the highest and lowest memory intensity, respectively. Only mix0 is run with 8 cores to simulate under-provisioned bandwidth while other mixes use 4 cores to simulate a more realistic scenario. \\bcut{Since Chopim is important only when the host processor and PEs concurrently try to access memory, we only show the results of benchmarks with medium and high memory intensity. We also ran simulations with low memory intensity benchmarks and the performance impact due to contention is negligible.} For the NDA workloads, we use DOT and COPY operations to show the impact of extremely low and high write intensity. We use the average gradient kernel (\\fig{fig:impl_avg_grad_example_code}) to evaluate collaborative execution. 
The performance impact of other NDA applications falls between DOT and COPY and is well represented by SVRG {\\cite{johnson2013accelerating}}, conjugate gradient (CG) {\\cite{jacob2013eigen}} and streamcluster (SC) {\\cite{pisharath2005nu}}.\n\nFor the host workloads, we use Simpoint \\cite{hamerly2005simpoint} to find representative program phases and run each simulation until the instruction count of the slowest process reaches 200M instructions. If an NDA workload completes while the simulation is still running, it is relaunched so that concurrent access occurs throughout the simulation time. Since the number of instructions simulated is different, we measure instructions per cycle (IPC) for the host performance. To show how well the NDAs utilize bandwidth, we show bandwidth utilization and compare with the idealized case where NDAs can utilize all the idle rank bandwidth. \n\n\nWe estimate power with the parameters in Table~\\ref{tab:eval_config}. We use CACTI 6.5~\\cite{muralimanohar2009cacti} for the dynamic and leakage power of the PE buffer. A sensitivity study for PE parameters exhibits that their impact is negligible. We use CACTI-3DD~\\cite{chen2012cacti} to estimate the power and energy of 3D-stacked DRAM and CACTI-IO~\\cite{jouppi2015cacti} to estimate DIMM power and energy.\n\n\\begin{table}[!t]\n\t\\centering\n\t\\noindent\\resizebox{\\linewidth}{!}{\n\t\t\\tabulinesep=0.6mm\n\t\t\\begin{tabu}{c|c|[1.0pt]c|c|c}\n\t\t\t\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{System configuration} \\tabularnewline\n\t\t\t\\hline\n\t\t\n Processor & \\multicolumn{4}{c}{\\makecell{4-core OoO x86 (8 cores for mix0), 4GHz, Fetch\/Issue width (8), \\\\ LSQ (64), ROB (224)}} \\tabularnewline\n\t\t\t\\hline \n NDA & \\multicolumn{4}{c}{\\makecell{one PE per chip, 1.2GHz, fully pipelined, write buffer (128) (Section {\\ref{sec:implementation}})}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tTLB & \\multicolumn{4}{c}{I-TLB:64, D-TLB:64, Associativity (4)} \\tabularnewline\n\t\t\t\\hline \n\t\t\tL1 & \\multicolumn{4}{c}{\\makecell{32KB, Associativity (L1I: 8, L1D: 8), LRU, 12 MSHRs}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tL2 & \\multicolumn{4}{c}{\\makecell{256KB, Associativity (4), LRU, 12 MSHRs}} \\tabularnewline\n\t\t\t\\hline\n\t\t\tLLC & \\multicolumn{4}{c}{\\makecell{8MB, Associativity (16), LRU, 48 MSHRs, Stride prefetcher}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tDRAM & \\multicolumn{4}{c}{\\makecell{DDR4, 1.2GHz, 8Gb, x8, 2channels $\\times$ 2ranks, \\\\\n\t\t\tFR-FCFS, 32-entry RD\/WR queue, Open policy, \\\\\n\t\t\tIntel Skylake address mapping \\cite{pessl2016drama}}} \\tabularnewline\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{DRAM timing parameters} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{tBL=4, tCCDS=4, tCCDL=6, tRTRS=2, tCL=16, tRCD=16,\\\\\n\t\t\ttRP=16, tCWL=12, tRAS=39, tRC=55, tRTP=9, tWTRS=3,\\\\\n tWTRL=9, tWR=18, tRRDS=4, tRRDL=6, tFAW=26}} \\tabularnewline\n\t\t\t\\hline \n\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{Energy Components} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{Activate energy: 1.0nJ, PE read\/write energy: 11.3pJ\/b, \\\\\n\t\t\thost 
read\/write energy: 25.7pJ\/b, PE FMA: 20pJ\/operation, \\\\\n\t\t\tPE buffer dynamic: 20pJ\/access, PE buffer leakage power: 11mW \\\\\n\t\t\t(Energy\/power of scratchpad memory is same as PE buffer)}} \\tabularnewline\n\t\t\t\\hline \n\t\t\t\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{4}{c|}{Benchmarks} & MPKI\\tabularnewline\n\t\t\t\\hline\n mix0 & \\multicolumn{3}{c|}{\\makecell{mcf\\_r:lbm\\_r:omnetpp\\_r:gemsFDTD\\\\\n bwaves:milc:soplex:leslie3d}} & \\makecell{H:H:H:H\\\\\n H:M:M:M}\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix1 & \\multicolumn{3}{c|}{mcf\\_r:lbm\\_r:omnetpp\\_r:gemsFDTD} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix2 & \\multicolumn{3}{c|}{mcf\\_r:lbm\\_r:gemsFDTD:soplex} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix3 & \\multicolumn{3}{c|}{lbm\\_r:omnetpp\\_r:gemsFDTD:soplex} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix4 & \\multicolumn{3}{c|}{omnetpp\\_r:gemsFDTD:soplex:milc} & H:H:H:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix5 & \\multicolumn{3}{c|}{gemsFDTD:soplex:milc:bwaves\\_r} & H:H:M:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix6 & \\multicolumn{3}{c|}{soplex:milc:bwaves\\_r:leslie3d} & H:M:M:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix7 & \\multicolumn{3}{c|}{milc:bwaves\\_r:astar:cactusBSSN\\_r} & M:M:M:M\\tabularnewline\n\t\t\t\\hline \n mix8 & \\multicolumn{3}{c|}{leslie3d:leela\\_r:deepsjeng\\_r:xchange2\\_r} & M:L:L:L\\tabularnewline\n\t\t\t\\hline\n\n \\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{NDA Kernels} \\tabularnewline\n\t\t\t\\hline\n\n \\multicolumn{5}{c}{\\makecell{NDA basic operations (Table {\\ref{tab:nda_ops}}), SVRG (details below), \\\\\n CG (16K ${\\times}$ 16K), and SC (2M ${\\times}$ 128)}} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{Machine Learning Configurations} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{Logistic regression with ${\\ell2}$-regularization (10-class classification), ${\\lambda}$=1e-3,\\\\\n\t\t\tlearning rate=best-tuned, momentum=0.9, dataset=cifar10 (50000 ${\\times}$ 3072)}} \\tabularnewline\n\t\t\t\\hline \n\t\t\\end{tabu}\n\t}\n\t\\caption{Evaluation parameters.}\n\t\\label{tab:eval_config} \n\t\\vspace*{-5mm}\n\\end{table}\n\n\n\n\n\\section{Runtime and API}\n\\label{sec:implementation}\n\nChopim is general and helps whenever host\/NDA concurrent access is needed. To make the explanations and evaluation concrete, we use an exemplary \\meadd{interface} design as discussed below and summarized in \\fig{fig:impl_overview}. Command and address signals pass through the NDA memory controllers so that they can track host rank state. 
Processing elements (PEs) in the logic die access data by using their local NDA memory controller (\\fig{fig:baseline_nda}).\n\\bcut{We propose a similar API as other C++ math libraries~\\cite{sanderson2010armadillo,jacob2013eigen,iglberger2012high} for the example use case of accelerating linear algebra operations.} \\fig{fig:impl_avg_grad_example_code} shows example usage of our API for computing the average gradient used in the summarization task of SVRG.\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.46\\textwidth]{fig\/impl_overview.pdf}\n\t\\caption{Overview of NDA architecture.}\n\t\\label{fig:impl_overview}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\nThe Chopim runtime system manages memory allocations and launches NDA operations. NDA operations are blocking by default, but can also execute asynchronously. If the programmer calls an NDA operation with operands from different shared regions (colors), the runtime system inserts appropriate data copies. We envision a just-in-time compiler that can identify such cases and more intelligently allocate memory and regions to minimize copies. For this paper, we do not implement such a compiler. Instead, programs are written to directly interact with a runtime system that is implemented within the simulator.\n\nNDAs operate directly on DRAM addresses and do not perform address translation. To launch an operation, the runtime (with help from the OS) translates the origin of each operand into a physical address, which is then communicated \\meadd{along with a bound} to the NDAs by the NDA controller. The runtime is responsible for splitting a single API call into multiple primitive NDA operations. The NDA operations themselves proceed through each operand with a regular access pattern implemented as microcode in the hardware\\meadd{, which also checks the bound for protection}. \\bcut{DRAM addresses are computed by following the same physical-to-DRAM mapping function used by the host memory controller (\\sect{subsec:impl_data_layout}).} \n\n\n\\medskip\n\\noindent\\textbf{\\textit{Optimization for Load-Imbalance.}}\nLoad imbalance occurs when the host does not access ranks uniformly over short periods of time. The AXPY operation (launched repeatedly within the loop shown in \\fig{fig:impl_avg_grad_example_code}) is short and non-uniform access by the host leads to load imbalance among NDAs. A blocking operation waits for \\emph{all} NDAs to complete before launching the next AXPY, which reduces performance. \nOur API provides asynchronous launches similar to CUDA streams \\ykadd{or OpenMP parallel \\texttt{for} with a \\texttt{nowait} clause \\cite{dagum1998openmp}}. Asynchronous launches can overlap AXPY operations from multiple loop iterations. Any load imbalance is then only apparent when the loop ends. Over such a long time period, load imbalance is much less likely. We implement asynchronous launches using \\emph{macro NDA operation}. An example of a macro operation is shown in the loop of \\fig{fig:impl_avg_grad_example_code} and is indicated by the \\texttt{parallel\\_for} annotation. \n\n\\bcut{\n\\medskip\n\\noindent\\textbf{\\textit{Exploiting Inter-Iteration Locality.}}\nEach NDA PE includes a small 1 KB scratchpad memory (sized equal to a row buffer within the DRAM chip). The runtime leverages this to reorder operations within macro NDA operations. In the AXPY macro operation example, inter-iteration locality exists for vector ${\\vec{a}}$. 
If ${\\vec{a}}$ does not fit in the scratchpad memory, matrix ${X}$ is decomposed in the column direction and operations are launched for one column group after another. The locality captured by the scratchpad eliminates writing intermediate results back into DRAM. This also reduces write interference (write-to-read and read-to-write turnaround times).\n}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.38\\textwidth]{fig\/avg_grad_example_code.pdf}\n\t\\caption{Average gradient example code. This code corresponds to \\textit{summarization} in SVRG (see Section \\ref{sec:collaboration}).}\n\t\\label{fig:impl_avg_grad_example_code}\n\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Launching NDA Operations.}}\n\\label{subsec:impl_launch}\nNDA operations are launched similarly to Farmahini et al.~\\cite{farmahini2015nda}. A memory region is reserved for accessing control registers of NDAs. NDA packets access the control registers and launch operations. Each packet is composed of the type of operation, the base addresses of operands, the size of data blocks, and scalar values required for scalar-vector operations. On the host side, the \\textit{NDA controller} plays two main roles. First, it accepts acceleration requests, issues commands to the NDAs in the different ranks (in a round-robin manner), and notifies software when a request completes. Second, it extends the host memory controller to coordinate actions between the NDAs and host memory controllers and enables concurrent access. It maintains the replicated FSMs using its knowledge of issued NDA operations and the status of the host memory controller. \\bcut{The NDA controller is also responsible for throttling specific NDAs if necessary to maintain host performance.}\n\n\\medskip\n\\noindent\\textbf{\\textit{Execution Flow of a Processing Element.}}\n\\label{subsec:impl_pe}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.42\\textwidth]{fig\/ndp_axpy.pdf}\n\t\\caption{PE architecture and execution flow of AXPY.}\n\t\\label{fig:eflow_axpy}\n\t\\vspace*{-4mm}\n\\end{figure}\n\nOur exemplary PE is composed of two floating-point fused multiply-add (FPFMA) units, 5 scalar registers (up to 3 operand inputs and 2 for temporary values), a 1KB buffer for accessing memory, and the 1KB scratchpad memory. The memory access granularity is 8B per chip and the performance of the two FPFMAs per chip matches this data access rate. PEs may be further optimized to support lower-precision operations or specialized for specific use cases, but we do not explore these in this paper as we focus on the new capabilities of Chopim rather than NDA in general.\n\n\\fig{fig:eflow_axpy} shows the execution flow of a PE when executing the AXPY operation. Each vector is partitioned into 1KB batches, which is the same size as DRAM page size per chip. To maximize bandwidth utilization, the vector ${X}$ is streamed into the buffer. Then, the PE opens another row, reads two elements (8 bytes) of vector ${Y}$, and stores them to FP registers. While the next two elements of ${Y}$ are read, a fused multiply-add (FMA) operation is executed. The result is stored back into the buffer and execution continues such that the read-execute-write operations are pipelined. After the result buffer is filled, the PE either writes results back to memory or to the scratchpad. This flow for one 1KB batch is repeated over the rest of the batches. This entire process is stored in PE microcode as the AXPY operation. 
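\n\nThe per-batch flow just described can be captured by the following behavioral sketch; it ignores DRAM timing, row management, and the dual-FMA pipelining, and the 1KB batch size simply mirrors the per-chip row-buffer granularity.\n\\begin{verbatim}\n#include <algorithm>\n#include <cstddef>\n#include <vector>\n\n// Functional sketch of the per-PE AXPY microcode flow: y = alpha * x + y,\n// processed in 1KB batches (128 doubles) staged in the PE buffer.\nvoid pe_axpy(double alpha, const std::vector<double>& x,\n             std::vector<double>& y) {\n  constexpr std::size_t kBatchElems = 1024 / sizeof(double);  // 1KB batch\n  std::vector<double> buffer(kBatchElems);                    // PE buffer\n\n  for (std::size_t base = 0; base < x.size(); base += kBatchElems) {\n    std::size_t n = std::min(kBatchElems, x.size() - base);\n    // Stream a batch of x into the PE buffer (one open row per batch).\n    for (std::size_t i = 0; i < n; ++i) buffer[i] = x[base + i];\n    // Read y and fuse multiply-add; in hardware the next read overlaps\n    // with the FMA, here it is purely functional.\n    for (std::size_t i = 0; i < n; ++i)\n      buffer[i] = alpha * buffer[i] + y[base + i];\n    // Drain the result buffer back to memory (or to the scratchpad).\n    for (std::size_t i = 0; i < n; ++i) y[base + i] = buffer[i];\n  }\n}\n\\end{verbatim}\n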
\\meadd{Other operations (coarse or fine grained) are similarly stored and processed from microcode.}\n\n\\bcut{\nNote that, if we only have one NDA bank, changing the access order of two input vectors degrades the performance of AXPY. This is because if vector ${Y}$ is read first and vector ${X}$ next, the row for vector ${Y}$ is closed before it is updated, whereas the reverse order will guarantee the row remains opened. Also, one optimization is to close a row right after accessing the last column of the row when the row is no longer being used. In AXPY, we always apply this optimization for ${X}$ whereas we only apply it for ${Y}$ after writing is done.\n\nOther NDA operations (\\tab{tab:nda_ops}) follow a similar execution flow. NRM2 is a dot-product of one vector and itself. Therefore, the input to the PE should be written to the buffer and to registers at the same time. NRM2 and DOT require reductions at the end since two FPFMAs operate separately on their own accumulators; the reductions are performed by the runtime system on the host. The input of the SCAL operation is stored directly into the register and the results are written to the buffer.\n} \n\n\\medskip\n\\noindent\\textbf{\\textit{Inter-PE Communication.}}\n\\meadd{NDAs are only effective when they use memory-side bandwidth to amplify that of the host. In the DIMM- and chip-based NDAs, which we target in this paper, general inter-PE communication is therefore equivalent to communicating with the host. Communication in applications that match this NDA architecture is primarily needed for replicating data to localize operands or for global reduction operations, which follow local per-PE reductions.} \n\\medel{There are two types of communication in our case study: data replication and reduction.} In both communication cases, a global view of data layout is needed and, therefore, we enable communication only through the host. For instance, after the macro operation in \\fig{fig:impl_avg_grad_example_code}, \\meadd{a global reduction of the PE private copies (\\texttt{a\\_pvt}) accumulates the data for the final result (\\texttt{a}). The reduced result is used by the following NDA operation, requiring replication communication because its data layout has to meet NDA locality requirements with the other NDA operand (\\texttt{w}). Though communicating through the host is expensive, our coarse-grained NDA operations amortize this infrequent communication overhead. Importantly}, since this communication can be done as normal DRAM accesses by the host, no change to the memory interface is required.\n\n\n\\section{Related Work}\n\\label{sec:related_work}\n\nTo the best of our knowledge, this is the first work that proposes solutions for near data acceleration while enabling concurrent host and NDA access without data reorganization and in a non-packetized DRAM context. \\ykadd{Packetized DRAM, while scalable, may suffer from 2--4x longer latency than DDR-based protocols even under very low or no load \\cite{hadidi2018performance}. } In solving this unique problem, we have been influenced by many previous studies. \n\n\nNear-data acceleration has been studied in a wide range of contexts as the relative cost of data access has grown compared to that of computation itself. 
The nearest place for computation is in DRAM cells \\cite{seshadri2017ambit,li2017drisa,seshadri2015fast} or in crossbar cells built from emerging technologies \\cite{li2016pinatubo,chi2016prime,shafiee2016isaac,song2017pipelayer,song2018graphr,sun2017energy,chen2018regan,long2018reram}. Since the benefit of near-data acceleration comes from high bandwidth and low data transfer energy, the benefit becomes larger as computation moves closer to memory. However, area and power constraints are significant and restrict how much logic can be added. As a result, workloads with simple ALU operations are the main target of these studies. \n\n3D stacked memory devices enable more complex logic on the logic die and still exploit high internal memory bandwidth. Many recent studies build on such devices to accelerate diverse applications \\cite{gao2017tetris,kim2016neurocube,drumond2017mondrian,ahn2016scalable,ahn2015pim,guo20143d,hsieh2016transparent,hsieh2016accelerating,liu2017concurrent,pattnaik2016scheduling,zhang2014top,gao2015practical,nair2015active,hong2016accelerating,boroumand2016lazypim,liu2018processing,boroumand2019conda}. However, in these proposals, the main-memory role of these devices has received less attention than the acceleration itself. Some prior work \\cite{akin2015hamlet,sura2015data,akin2016data,boroumand2018google} attempts to support host and NDA access to the same data, but only with data reorganization and in a packetized DRAM context. Pattnaik et al. \\cite{pattnaik2016scheduling} show the potential of concurrently running both the host and NDAs on the same memory. However, they assume an idealized memory system in which there is no contention between NDA and host memory requests. We do not assume this ideal case. The main contributions of Chopim are precisely to provide mechanisms for mitigating interference.\n\nOn the other hand, \\textit{NDA} \\cite{farmahini2015nda}, Chameleon \\cite{asghari2016chameleon}, and MCN DIMM \\cite{alian2018nmp} are based on conventional DIMM devices and change the DRAM design to practically add PEs.\\bcut{\\textit{NDA} finds the places to add TSVs from commodity DDR3 devices and solves the data layout problem by shuffling. It also proposes solutions to switch mode between host and NDA (precharge-all method) and to avoid concurrent host access (rank partitioning). Chameleon finds an opportunity for near-data acceleration in Load-Reduced DIMM and places PEs in data buffer chips. To overcome the command bandwidth bottleneck, they split DQs and use a part of them for transferring memory commands for PEs. MCN DIMM realized DIMM-type NDAs by enabling the host and MCN processors to communicate with a network protocol over the DDR interface. Each MCN DIMM runs a light-weight OS and acts as a small independent computing node. Based on this prior work, we focus more on host-NDA concurrent access.} Unlike rank partitioning and coarse-grain mode switching used in the prior work, we let the host and PEs share ranks to maximize parallelism and partition banks to decrease contention. 
\n\n\n\n\n\n\n\n\n\\section{Chopim}\n\\label{sec:chonda}\n\nWe develop Chopim with four main connected goals that push the state of the art: (1) enable fine-grain interleaving of host and NDA memory requests to the same physical memory devices while mitigating the impact of their contention; (2) permit the use of coarse-grain NDA operations that process long vector instructions\/kernels; (3) simultaneously support the locality needed for NDAs and the sophisticated memory address interleaving required for high host performance; and (4) integrate with both a packetized interface and a traditional host-controlled DDRx interface.\nWe detail our solutions in this section after summarizing the need for a new approach.\n\n\\hpcacut{\nTwo Our approach on managing concurrent access on different data is for NDAs to opportunistically utilize even a brief moment that the host does not access memory. To leverage these short idle periods, overhead for memory ownership switching should be minimized. In this way, the internal memory bandwidth can be fully utilized yet gives negligible impact on the host performance. In this section, we present our mechanisms that enable fine-grain interleaving between host and NDA accesses. Note that we can allocate more time and allow an exclusive access for NDA executions based on certain policy but we do not explore this in this paper. \n\nIn addition, our approach on concurrent access to the single copy of shared data is to use memory controller's address mapping as is while localizing NDA operands at memory allocation and execution time. As no data reorganization is required between host and NDA access phases, this data layout enables concurrent and collaborative processing between the host and NDAs on the shared data. \n}\n\n\\medskip\n\\noindent\\textbf{\\textit{The need for fine-grain access interleaving with opportunistic NDA issue.}}\n\\label{subsec:nda_issue}\nAn ideal NDA opportunistically issues NDA memory requests whenever a rank is idle from the perspective of the host. This is simple to do in a packetized interface where a memory-side controller schedules all accesses, but is a challenge in a traditional memory interface because the host- and NDA-side controllers must be synchronized. Prior work proposed dedicating some ranks to NDAs and some to the host or coarse-grain temporal interleaving~\\cite{farmahini2015nda,asghari2016chameleon}. The former approach contradicts one of our goals as devices are not shared. The latter results in large performance overhead because it cannot effectively utilize periods where a rank is naturally idle due to the host access pattern. \\fig{fig:motiv_rank_idle} shows that for a range of multi-core application mixes (methodology in \\sect{sec:method}), the majority of idle periods are shorter than 100 cycles with the vast majority under 250 cycles. \\emph{Fine-grain access interleaving is therefore necessary. }\n\\vspace*{-2mm}\n\n\\begin{figure}[t!bh]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\\vspace*{-2mm}\n\t\\caption{Rank idle-time breakdown vs. idleness granularity.}\n\t\\label{fig:motiv_rank_idle}\n\t\\vspace*{-2mm}\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{The need for coarse-grain NDA vector\/kernel operations.}}\n\\label{subsec:launch_ovhd}\nFine-grain access interleaving is simple if each NDA command only addresses a single cache block region of memory. 
Such fine-grain NDA operations have indeed been discussed in prior work~\\cite{ahn2015pim,ahn2016scalable,bssync,nai2017graphpim}. One overhead of this fine-grain approach is that of issuing numerous NDA commands, with each requiring a full memory transaction that occupies both the command and data channels to memory. Issuing NDA commands too frequently degrades host performance, while infrequent issue underutilizes the NDAs.\nCoarse-grain NDA vector operations that operate on multiple cache blocks mitigate contention on the channel and improve overall performance. The vector width, ${N}$, is specified for each NDA instruction. As long as the operands are contiguous in the DRAM address space, one NDA instruction can process numerous data elements without occupying the channel. Coarse-grain NDA operations are therefore desirable, but \\emph{introduce the data layout, memory contention, and host--NDA synchronization challenges which Chopim solves}.\n\n\n\\subsection{Localizing NDA Operands while Distributing Host Accesses}\n\\label{subsec:data_layout}\nTo execute the N-way NDA vector instructions, all the operands of each NDA instruction must be fully contained in a single rank \\meadd{(single PE)}. If necessary, data is first copied from other ranks prior to launching an NDA instruction. If the reuse rate of the copied data is low, this copying overhead will dominate the NDA execution time and contention on the memory channel will increase due to the copy commands. \n\n\\emph{We solve this problem} in Chopim by laying out data such that all the operands are localized to each NDA at memory allocation time. Thus, copies are not necessary. This is challenging, however, because the host memory controller uses complex address interleaving functions to maximally exploit channel, rank, and bank parallelism for arbitrary host access patterns. Hence, arrays that are contiguous in the host physical address space are not contiguous in physical memory and are shuffled across ranks.\\medel{, possibly in a physical-address dependent manner.} \nThis challenge is illustrated in the left side of \\fig{fig:rank_layout}, where two operands of an NDA instruction are shuffled differently across ranks and banks. The layout resulting from our approach is shown at the right of the figure, where arrays (operands) are still shuffled, but both operands follow the same pattern and remain correctly aligned to NDAs without copy operations. Note that alignment is to rank because that corresponds to an NDA partition. \n\n\\medskip\n\\noindent\\textbf{\\textit{Data layout across ranks.}} \nWe rely on the NDA runtime and OS to use a combination of coarse-grain memory allocation and coloring to ensure all operands of an NDA instruction are interleaved across ranks the same way \\meadd{and are thus local to a PE}. First, the runtime allocates memory for NDA operands such that they are aligned at the granularity of one DRAM row for each bank in the system, which we call a \\textit{system row} (e.g., 2MiB for a DDR4 1TiB system). For all the address interleaving mechanisms we are aware of~\\cite{pessl2016drama,liu2018get}, this ensures that NDA operands are locally aligned, as long as ranks are also kept aligned. To maintain rank alignment, we rely on OS page coloring; a simplified allocation sketch follows. 
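\nTo make the granularity concrete, the sketch below shows how a runtime could size NDA allocations in units of system rows and tag them with a shared color; the geometry values and the \\texttt{nda\\_alloc} interface are assumptions for illustration, not our actual allocator.\n\\begin{verbatim}\n#include <cstddef>\n\n// Assumed DRAM geometry; a real runtime would query it from the system.\nstruct DramGeometry {\n  std::size_t row_bytes_per_chip = 1024;  // one DRAM row per chip\n  std::size_t chips_per_rank = 8;\n  std::size_t banks_per_rank = 16;\n  std::size_t ranks = 2;\n  std::size_t channels = 2;\n\n  // One row in every bank of the system (512KB for this geometry; larger\n  // systems give proportionally larger system rows).\n  std::size_t system_row_bytes() const {\n    return row_bytes_per_chip * chips_per_rank * banks_per_rank *\n           ranks * channels;\n  }\n};\n\n// Round an operand size up to whole system rows so that all operands of an\n// NDA instruction can stay aligned across ranks and channels.\nstd::size_t aligned_nda_size(std::size_t bytes, const DramGeometry& g) {\n  std::size_t sys_row = g.system_row_bytes();\n  return (bytes + sys_row - 1) / sys_row * sys_row;\n}\n\n// Assumed allocation entry point: the runtime asks the OS for system-row\n// aligned memory tagged with a shared color, so the OS picks frames whose\n// rank/channel-selecting PFN bits match across all operands of one color.\n// void* nda_alloc(std::size_t bytes, int shared_color);\n\\end{verbatim}\n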
We explain this feature below using the Intel Skylake address mapping~\\cite{pessl2016drama} as a concrete and representative interleaving mapping (\\fig{fig:baseline_addr_map}).\n\nIn this mapping, rank and channel addresses are determined partly by the low-order bits that fall into the frame offset field and partly by the high-order bits that fall into the physical frame number (PFN) field. Frame offsets are kept the same because of the coarse-grain alignment. The OS colors allocations such that the PFN bits that determine rank and channel are aligned for a particular color; which physical address bits select ranks and channels can be reverse engineered if necessary~\\cite{pessl2016drama}. The Chopim runtime indicates a \\textit{shared color} when it requests memory from the OS and specifies the same color for all operands of an instruction. The runtime can use the same color for many operands to minimize copies needed for alignment. In our baseline system, there are 8 colors and each color corresponds to a shared region of memory of $4$GiB. Multiple regions can be allocated for the same process. Though we focus on one address mapping here, our approach works with any linear address mapping described in prior work~\\cite{pessl2016drama,liu2018get} as well.\n\nNote that coarse-grain allocation is simple with the common buddy allocator if allocation granularity is also a system row, and can use optimizations that already exist for huge pages~\\cite{yun2014palloc,kwon2016coordinated,gorman2004understanding}. The fragmentation overheads of coarse allocation are similar to those with huge pages and we find that they are negligible because coarse-grain NDA execution works best when processing long vectors. \n\n\n\\medskip\n\\noindent\\textbf{\\textit{Data layout across DRAM chips.}}\nIn the baseline system, each 4-byte word is striped across multiple chips, whereas in our approach each word is located in a single chip so that NDAs can access words from their local memory. Both the host and NDAs can access memory without copying or reformatting data (as required by prior work~\\cite{farmahini2015nda}). Memory blocks still align with cache lines, so this layout change is not visible to software. \\bcut{This layout precludes the critical word first optimization from DRAM, but recent work concludes the impact is minimal because the relative latency difference in current memory systems is very small (e.g.,~\\cite{yoon2012dgms}).} Note that this data layout does not impact the host memory controller's ECC computation (e.g. Chip-kill~\\cite{dell1997white}) because ECC protects only bits and not how they are interpreted. For NDA accesses, we rely on in-DRAM ECC with its limited coverage. We do not innovate in this respect and leave this problem for future work.\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.52\\textwidth]{fig\/rank_layout.pdf}\n\t\\caption{Example data layout across ranks for concurrent access of the COPY operation (B[i] = A[i]). With naive data layout (left), elements with the same index are located in different ranks. With our proposed mechanism (right), elements with the same index are co-located. 
NDAs access contiguous columns starting from the base of each vector.}\n\t\\label{fig:rank_layout}\n\\end{figure}\n\n\\begin{figure}[t!]\n\\centering\n \\begin{minipage}[t]{0.4\\textwidth}\n\t\t\t\\subfloat [Baseline (Skylake~\\cite{pessl2016drama})] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{fig\/baseline_addr_map.pdf}\n\t\t\t\t\\label{fig:baseline_addr_map}\n\t\t\t} \\\\\n\t\t\t\\subfloat [Proposed (for bank partitioning)] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{fig\/hashing_addr_map.pdf}\n\t\t\t\t\\label{fig:hashing_addr_map}\n\t\t\t}\n\t\t\\end{minipage}\n\t\\caption{Baseline and proposed host-side address mapping.}\n\t\\label{fig:addr_map}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\n\\subsection{Mitigating Frequent Read\/Write Penalties}\n\\label{subsec:block_nda_write}\n\nThe basic memory access scheduling policy we use for Chopim is to always prioritize host memory requests, yet aggressively leverage unutilized rank bandwidth by issuing NDA requests whenever possible. That is, NDAs wait when incoming host requests are detected, but otherwise always issue their memory requests to maximize their bandwidth utilization and performance.\nOne potential problem is that an NDA request issued in one cycle may delay a host request that could have issued in one of the following cycles otherwise.\n\nWe find that NDAs infrequently issue row commands (ACT and PRE). We therefore prioritize host memory commands over any NDA row command to the same bank. This has negligible impact on NDA performance in our experiments.\n\nWe also find that read transactions of NDAs have only a small impact on following host commands. NDA write transactions, however, can have a large impact on host performance because of the read\/write-turnaround penalties that they frequently require. While the host mitigates turnaround overhead by buffering operations with caches and write buffers~\\cite{stuecheli2010virtual,ahn2006design}, the host and NDAs may interleave different types of transactions when accessing memory in parallel. We find that NDA writes interleaved with host reads degrade performance the most. \\emph{As a solution,} we introduce two mechanisms to selectively throttle NDA writes. \n\n\nOur first mechanism throttles the rate of NDA writes by issuing them with a predefined probability. We call this mechanism \\textit{stochastic NDA issue}. Before issuing a write transaction, the NDAs both detect if a rank is idle and flip a coin to determine whether to issue the write. By adjusting the coin weight, the performance of the host and NDAs can be traded off: higher write-issue probability leads to more frequent turnarounds while a lower probability throttles NDA progress. Deciding how much to throttle NDAs requires analysis or profiling, and we therefore propose a second approach as well. \n\nOur second approach does not require tuning, and we empirically find that it works well. In this \\emph{next rank prediction} approach, the memory controller inhibits NDA write requests when more host read requests are expected; the controller stalls the NDA in lieu of providing an NDA write queue. In a packetized interface, the memory controller schedules both host and NDA requests and is thus aware of potential required turnarounds. The traditional memory interface, however, is more challenging as the host controller must explicitly signal the NDA controller to inhibit its write request. 
This signal must be sent ahead of the regular host transaction because of bus delays.\n\nWe use a very simple predictor that inhibits NDA write requests in a particular rank when the oldest outstanding host memory request to that channel is a read to that same rank. Specifically, the NDA controller examines the target rank of the oldest request in the host memory controller transaction queue. Then, it signals to the NDAs in that rank to stall their writes. For now, we assume that this information is communicated over a dedicated pin and plan to develop other signaling mechanisms that can piggyback on existing host DRAM commands at a later time. Our experiments with an FRFCFS~\\cite{frfcfs} memory scheduler at the host show that this simple predictor works well and achieves performance that is comparable to a tuned stochastic issue approach.\n\n\n\n\n\\subsection{Partitioning into Host and Shared Banks}\n\\label{subsec:impl_bpart}\n\nIn addition to read\/write-turnaround overheads, concurrent access also degrades performance by decreasing DRAM row access locality. When the host and NDAs interleave accesses to different rows of the same bank, frequent bank conflicts occur. To avoid this bank contention, we propose using bank partitioning to limit bank interference to only those memory regions that must concurrently share data between the NDAs and the host. This is particularly useful in colocation scenarios when only a small subset of host tasks utilize the NDAs. However, existing bank partitioning mechanisms~\\cite{mi2010bankpark,jeong2012balancing,liu2012software} are incompatible with both huge pages and with sophisticated DRAM address interleaving schemes.\n\nBank partitioning relies on the OS to color pages where colors can be assigned to different cores or threads, or in our case, for banks isolated for the host and those that could be shared. The OS then maps pages of different color to frames that map to different banks. \n\\fig{fig:baseline_addr_map} shows an example of a modern physical address to DRAM address mapping \\cite{pessl2016drama}. One color bit in the baseline mapping belongs to the page offset field so prior bank partitioning schemes can, at best, be done at two-bank granularity. More importantly, when huge pages are used (e.g., 2MiB), this baseline mapping cannot be used to partition banks at all. \n\nTo overcome this limitation, we propose a new interface that partitions banks into two groups---host-reserved and shared banks---with flexible DRAM address mapping and any page size. Specifically, our mechanism only requires that the most significant physical address bits are only used to determine DRAM row address, as is common in recent hash mapping functions, as shown in \\fig{fig:hashing_addr_map} \\cite{pessl2016drama}.\n\nWithout loss of generality, assume 2 banks out of 16 banks are reserved for the shared data. First, the OS splits the physical address space for host-only and shared memory region with the host-only region occupying the bottom of the address space: $0-\\left(14\\times\\mathit{(bank\\_capacity)}-1\\right)$. The rest of the space (with the capacity of 2 banks) is reserved for the shared data and the OS does not use it for other purposes. This guarantees that the most significant bits (MSBs) of the address of host-only region are never b'111. In contrast, addresses in the shared space always have b'111 in their MSBs. \n\nThe OS informs the memory controller that it reserved 2 banks (the top-most banks) for shared memory region. 
Host-only memory addresses are mapped to DRAM locations using any hardware mapping function, which is not exposed to software and the OS. The idea is then to remap addresses that initially fall into shared banks into the reserved address space that the host is not using. Additional simple logic checks whether the resulting DRAM address bank ID of the initial mapping is a reserved bank for shared region. If they are not, the DRAM address is used as is. If the DRAM address is initially mapped to one of the reserved banks, the MSBs and the bank bits are swapped. Because the MSBs of a host address are never b'1110 or b'1111, the final bank ID will be one of the host-only bank IDs. Also, because the bank ID of the initial mapping result is 14 or 15, the final address is in a row the host cannot access with the initial mapping and there is no aliasing. Note that the partitioning decision can be adjusted, but only if all affected memory is first cleared. \n\n\\subsection{Tracking Global Memory Controller State}\n\\label{subsec:track_gstate}\n\nUnlike conventional systems, Chopim also enables an architecture that has two memory controllers (MCs) managing the bank and timing state of each rank. This is the case when the host continues to directly manage memory even when the memory itself is enhanced with NDAs. This requires coordinating rank state information between controllers. \\fig{fig:repl_fsm} shows how MCs on both sides of a memory channel track global memory controller state. Information about host transactions is easily obtained by the NDA MCs as they can monitor incoming transactions and update the state tables accordingly (left). However, the host MC cannot track all NDA transactions due to command bandwidth limits.\n\nTo solve this problem, we replicate the finite-state machines (FSMs) of NDAs and place them in the host-side NDA controller. When an NDA instruction is launched, the FSMs on both sides are synchronized. We rely on the already-synchronized DDR interface clock for FSM synchronization. Whenever an NDA memory transaction is issued, the host-side FSM also updates the state table in the host MC without communicating with the NDAs (right). If a host transaction blocks NDA transactions in one of the ranks, that transaction will be visible to both FSMs. Replicated FSMs track the NDA write buffer occupancy and detect when the write-buffer draining starts and ends to trigger write throttling. The area and power overhead of replicating FSMs are negligible (40-byte microcode store and 20-byte state registers per rank (i.e., per NDA)).\n\\meadd{\\emph{Our evaluation uses this approach to enable a DDR4-based NDA-enabled main memory and all our experiments rely on this.}}\n\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.42\\textwidth]{fig\/repl_fsm.pdf}\n\t\\caption{Global MC state tracking when the host (left) and NDAs (right) issues memory commands. The replicated FSMs are synchronized by using the DDR interface clock.}\n\t\\label{fig:repl_fsm}\n\t\\vspace*{-2mm}\n\\end{figure}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDespite over 100 years of intense experimental and theoretical efforts, the origin of Galactic cosmic rays (GCRs) has still not been unambiguously identified. 
At energies above a few tens of GeV, much progress has been made in the last couple of years, thanks to direct observations by high-precision, high-statistics experiments like AMS-02 or PAMELA and the study of gamma-rays by \\textit{Fermi}-LAT and Cherenkov telescopes~\\cite{gabici2019}. At lower energies, however, the situation is still very much unclear. Until recently, solar modulation, that is the suppression of intensities due to interactions with the magnetised solar wind, hampered the study of GCRs at energies around a GeV and below~\\cite{potgieter2013}. Modelling of the transport of these particles therefore essentially relied on extrapolations from higher energies.\n\nIn 2013, however, the first direct observations of interstellar spectra by Voyager~1 were published and it became clear that simple extrapolations from higher energies fail~\\cite{stone2013}. Specifically, in order to fit both Voyager~1 and \\mbox{AMS-02} data, simple diffusive transport models overpredict the intensities at Voyager energies (e.g.~\\cite{Vittino:2019yme}). While phenomenological models can add a break in the source spectra around a GeV in an \\emph{ad hoc} fashion, the physical interpretation of such a break is rather questionable~\\cite{cummings2016,orlando2018,boschini2018a,boschini2018b,johannesson2018,bisschoff2019}. In fact, we would maintain that no convincing explanation of such a break has been put forward to date.\n\nThis issue is far from academic since the energy range affected is important for a number of issues. \nIn fact, most of the energy density of GCRs is contributed in the energy range around a GeV and, depending on the spectrum, possibly below.\nCorrespondingly, different spectra imply different power requirements for the sources, which provide helpful clues on the nature of GCR acceleration \\cite{ginzburg1964,recchia2019}. Moreover, GCRs are the prime agent of ionisation in dense molecular clouds (MCs) and recently, the ionisation rates inferred from nearby MCs have been shown to be in strong tensions with the local interstellar spectra as measured by Voyager~1 \\cite{phan2018,silsbee2019,padovani2020}. Furthermore, diffuse emission in radio waves and MeV gamma-rays is sensitive to this energy range (e.g.~\\cite{orlando2018}). The diffuse radio background constitutes the dominant foreground for upcoming cosmological studies of the epoch of reionisation (e.g. \\cite{Rao:2016xre}) and diffuse gamma-rays for proposed MeV missions (eAstrogam~\\cite{DeAngelis:2016slk}, AMEGO~\\cite{2019BAAS...51g.245M}). Lastly, the current picture of GCRs is simply incomplete if one cannot explain cosmic rays at MeV energies.\n\nAn important effect for MeV GCRs that has been ignored in the literature is due to the discrete nature of sources. Instead, the distribution of sources in position and time is oftentimes modelled as smooth. That is, the predicted cosmic ray density $\\psi$ is the solution of the transport equation with a source term $q$ that is a smooth function of position ($r$ and $z$), energy $E$ and time $t$,\n\\begin{equation}\n\\frac{\\partial \\psi}{\\partial t}+\\frac{\\partial}{\\partial z}\\left(u \\psi \\right) -D\\nabla^2 \\psi + \\frac{\\partial }{\\partial E}\\left(\\dot{E}\\psi\\right)=q(r, z, E, t) \\, . 
\label{eq:transport}\n\end{equation}\nHere, $u=u(z)$ is the advection velocity profile with only a component perpendicular to the Galactic disk, $D=D(E)$ is the isotropic and homogeneous diffusion coefficient, and $\dot{E}$ describes the energy loss rate for GCRs both inside the Galactic disk and in the magnetized halo. Note that it might be more customary to formulate Eq.~\eqref{eq:transport} in terms of momentum (see \footnote{See Supplemental Material at \url{http:\/\/link.aps.org\/supplemental\/} for some discussions with supporting figures and tables, which includes Refs. \cite{strong1998,schlickeiser1999,mertsch2020}.} for the transformation to kinetic energy).\n\nEven though the sources are likely separate, discrete objects like supernova remnants (SNRs), the approximation of a smooth source density is admissible at GeV energies, since the transport distances and times exceed the typical source separations and ages. However, if energy losses reduce the propagation times and distances, this approximation breaks down and instead the discrete nature of the sources needs to be taken into account. This can be done by replacing the smooth source density above by a sum of individual delta-functions in distance and age,\n\begin{equation}\nq(r, z, E, t) = \sum_{i=1}^{N_\text{s}} Q(E)\frac{\delta(r - r_i)}{2\pi r_i}\delta(z-z_i)\delta(t - t_i) \, .\n\end{equation}\n$Q(E)$ denotes the spectrum that an individual source injects into the ISM. The total intensity from $N_{\text{s}}$ sources is then just the sum over the Green's function $\mathcal{G}(r, z, E; r_i, z_i, t-t_i)$ of Eq.~\eqref{eq:transport} at the position of the solar system,\n\begin{equation}\n\psi = \sum_i \mathcal{G}(r=0, z=z_\odot, E; r_i, z_i, t-t_i) \, , \label{eq:stochasticity}\n\end{equation}\nwhere $z=z_{\odot}\simeq 14$ pc is the vertical offset of the solar system from the Galactic mid-plane \cite{skowron2019}. An example where this approach has been followed is high-energy electrons and positrons at hundreds of GeV and above, which lose energy due to the synchrotron and inverse Compton processes~\cite{mertsch2011}, but ionisation losses also severely limit the propagation of MeV GCRs. Predicting their local intensities therefore requires rather precise knowledge of the ages and distances of the sources. While some young and nearby sources might be known, catalogues of such sources remain necessarily incomplete, in particular with respect to far away and old sources.\n\nInstead, the distribution of sources can be considered a statistical ensemble, thus opening the path towards a statistical modelling of GCR intensities. Operationally, one draws a set of source distances and ages from the statistical probability density function (PDF). Adding up their intensities results in a prediction for this given realisation of the sources. Repeating this procedure for a large number of realisations, one can estimate the distribution of intensities. The first moment and second central moment of this distribution are the expectation value and the variance. The expectation value $\langle \psi \rangle$, obtained by averaging over many realisations, approaches the solution of the GCR transport equation~\eqref{eq:transport} when the smooth source PDF, from which the individual source distances and ages are drawn, is used as the source term $q$. However, as it turns out, the statistics of the intensities are markedly non-Gaussian, with a divergent second moment. 
This is due to the long power-law tails of the intensity PDF. Its asymmetric shape renders the expectation value different from the median and from the maximum of the distribution \\cite{nolan2020}.\n\nIn this \\emph{letter}, we model the intensities of GCR protons and electrons between $1 \\, \\text{MeV}$ and $10 \\, \\text{GeV}$ taking into account the stochasticity induced by the discreteness of sources. Consequently, our predictions will be probabilistic. We will illustrate that the expectation value is a bad estimator for the intensities in individual realisations. For instance, for low enough energies the expectation value is outside the $68\\%$ uncertainty band. Furthermore, its spectral shape is markedly different than the intensity in any individual realisation. Finally, we stress that the expectation value does not reproduce the data either unless an artificial break is added to the source spectrum. Instead, we suggest considering the median of the intensity PDF as a better measure of what a ``typical'' intensity will look like, and the reference intensity around which the intensities from all realisations are distributed. Interestingly, the data for protons and electrons fall squarely within the uncertainty bands. We thus conclude that a model without artificial breaks is to be preferred in explaining the Voyager~1 and AMS-02 data as long as the stochasticity effect is taken into account.\n\n\n\\section{Modelling}\n\\label{sec:stochastic}\n\nEquation \\ref{eq:transport} is solved numerically assuming GCRs propagate within a finite cylindrical region with height $2L\\simeq 8$ kpc and radius $r_{max}\\simeq$ 10 kpc centering around the source. The other parameters of our model are chosen such that the most probable values of the intensity is compatible with the observational data. Specifically, the advection velocity is assumed to have the following profile $u(z)=u_0\\sgn(z)$ with $u_0=16$ km\/s, where $\\sgn(z)$ is the sign function. We assume also the diffusion coefficient of the form $D(E)\\sim \\beta\\gamma^{\\delta}$ as suggested in \\citep{schlickeiser2010} where $\\beta=v\/c$ is the ratio between the particle's speed and the speed of light and $\\gamma$ is the particle's Lorentz factor (note that assuming the diffusion coefficient to scale with rigidity might not qualitatively alter the results \\cite{Note1}). \nRecent analyses of GCRs seem to suggest slightly different values for $\\delta$ depending on whether or not the unstable isotope $^{10}$Be is taken into account \\cite{evoli2019,evoli2020,weinrich2020}. However, the overall results of the local spectra would remain qualitatively unchanged for different values of $\\delta$ if we slightly modified the injection spectra. In the following, we shall adopt $\\delta=0.63$ and normalize the diffusion coefficient such that $D(E=10\\textrm{ GeV})\\simeq 5\\times 10^{28}$ cm$^2$\/s for both species \\citep{evoli2019}. We caution that the diffusion coefficient in the disk and in the halo could in principle be different and so our parametrisation is to be regarded as a suitably defined average. \n\nLow-energy GCRs lose energy mostly due to ionisation interactions with the neutral gas in the disk as discussed above. There are also proton-proton interactions and radiative energy loss at high energies. All the energy loss mechanisms are effective only within the disk of size $2h\\simeq 300$ pc apart from synchrotron and inverse Compton processes. 
More importantly, the rate of energy loss depends also on the average number density of the hydrogen atoms in the disk. We adopt $n_\\text{H}=0.9$ cm$^{-3}$ corresponding to the surface density of 2 mg\/cm$^{2}$, which is roughly the observed value \\citep{ferriere2001}. The specific form of the energy loss rate are collected from \\citep{schlickeiser2002,mertsch2011,krakau2015,evoli2017} (see also \\cite{Note1}). \n\nWe take into account also the adiabatic energy loss due to advection with the approximation $|\\dot{E}_{ad}|=2pv u_0\\delta(z) \\simeq pv u_0\/(3h)$ \\cite{jaupart2018}. As for the injection spectrum, we shall adopt the following power-law form in momentum down to the kinetic energy of 1 MeV:\n\\begin{eqnarray}\nQ(E)=\\frac{\\xi_{CR}E_{SNR}}{(mc^2)^2\\Lambda\\beta}\\left(\\frac{p}{mc}\\right)^{2-\\alpha},\\label{eq:source_function}\n\\end{eqnarray} \nwhere $\\xi_{CR}=8.7\\%$ and $\\xi_{CR}=0.55\\%$ are the acceleration efficiencies of the source for GCR protons and electrons respectively, $E_{SNR}\\simeq 10^{51}$ erg is the total kinetic energy of the supernova explosion, $m$ is the mass of the GCR species of interest, and\n\\begin{eqnarray}\n\\Lambda=\\int^{p_{max}}_{p_{min}}\\left(\\frac{p}{mc}\\right)^{2-\\alpha}\\left[\\sqrt{\\left(\\frac{p}{mc}\\right)^2+1}-1\\right]\\frac{\\df p}{mc}.\\label{eq:Lambda_Q}\n\\end{eqnarray}\nWe shall take $\\alpha=4.23$ as suggested for the fit at high energies \\cite{evoli2019}. Such a power-law in momentum seems to be preferred from the commonly accepted theory of diffusive acceleration on SNR shocks \\citep{malkov2001,blasi2013}. Even though the extension of the spectrum down to 1 MeV seems questionable, there exist observational evidences of enhanced ionisation rates in the vicinity of SNRs indicating the presence of low-energy GCRs accelerated from these objects \\citep{vaupre2014,gabici2015,phan2020}. Note that we neglect stochastic re-acceleration for simplicity and this process might be examined in future works. \n\n\\begin{figure*}[htpb]\n\\centerline{\n\\includegraphics[width=3.5in, height=2.8in]{fg_jE_p_show_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_87_zu_39_in.png}\n\\includegraphics[width=3.5in, height=2.8in]{fg_jE_e_show_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_5_zu_39_in.png}\n}\n\\caption{Stochastic fluctuations of GCR protons (left panel) and electrons (right panel) in comparison with data from Voyager~1~\\cite{cummings2016} (blue) and AMS-02~\\cite{AMS2014,AMS2015} (green). The dotted and solid black curves are respectively the expectation values and the median of the intensities. The shaded regions are the 95\\% and 68\\% uncertainty ranges.}\n\\label{fg:stochastic}\n\\end{figure*}\n\nWe have built up a statistical ensemble by generating a large number $N_\\text{r}=2000$ of realisations, in each drawing a large number of sources $N_\\text{s}$ from the spatial distribution following a spiral pattern \\citep{vallee2005} with a radial modulation \\citep{case1998}, as employed in~\\cite{mertsch2011}, and with a homogeneous distributions for the time since injection and for the vertical position of sources. We limit ourselves to $r_i^{(n)}< r_{max}=10$ kpc and the time since injection $\\tau_{i}^{(n)}<\\tau_{max}=10^8$ yr since older and further sources would not contribute significantly. 
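Schematically, one realisation of this ensemble could be generated as in the following minimal sketch (hypothetical helpers: \texttt{sample\_radius} stands for the spiral-arm distribution with radial modulation, and \texttt{green} for the numerically tabulated Green's function of the transport equation):
\begin{verbatim}
import numpy as np

RATE_S, TAU_MAX = 0.03, 1.0e8        # source rate per yr, maximum age in yr
R_MAX, R_D, H_S = 10.0, 15.0, 0.040  # max radius, disk radius, source half-height h_s [kpc]

def draw_realisation(rng, energies, sample_radius, green):
    # 'sample_radius' and 'green' are stand-ins passed in by the caller.
    # Number of sources with r < R_MAX and age < TAU_MAX (cf. the estimate below)
    n_s = int(RATE_S * TAU_MAX * (R_MAX / R_D) ** 2)
    r   = sample_radius(rng, n_s, R_MAX)       # spiral arms + radial modulation
    z   = rng.uniform(-H_S, H_S, n_s)          # homogeneous over the source disk 2h_s
    tau = rng.uniform(0.0, TAU_MAX, n_s)       # homogeneous in time since injection
    # Superpose the single-source Green's functions, as in the sum over G above
    psi = sum(green(energies, ri, zi, ti) for ri, zi, ti in zip(r, z, tau))
    return psi                                 # multiply by v/(4*pi) for the intensity j
\end{verbatim}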
The total number of discrete sources in each realisation could be estimated roughly as $N_{s}=\mathcal{R}_{s}\tau_{max}r_{max}^2\/R_d^2\simeq 1.33\times 10^6$, where $\mathcal{R}_{s}\simeq 0.03$ yr$^{-1}$ is the source rate and $R_d\simeq 15$ kpc is the radius of the Galactic disk. We adopt $2h_s\simeq 80 \, \text{pc}$ for the vertical extension of sources expected for CCSN \citep{prantzos2011}.\n\nWe thus obtain an ensemble of intensities \mbox{$j^{(n)} = v\/(4 \pi) \psi^{(n)}$} for the individual source realisations $n$ that we can characterise statistically. For instance, a histogram of these intensities at a specific energy could serve as an estimate of the intensity PDF $p(j)$. Note that the expectation value of the intensity $\langle j \rangle = \int \mathrm{d} j \, j \, p(j)$ is equal to the intensity predicted for the smooth source density of Ref.~\cite{mertsch2011}. We have found $p(j)$ to be extremely non-Gaussian with power-law tails, e.g. $p(j) \propto j^{-2}$ for $j \gg \langle j \rangle$ at $E=1$ MeV. In fact, these distribution functions are not only asymmetric but they also do not have a well-defined second moment as shown for similar analyses at high energies \citep{mertsch2011,blasi2012,bernard2012,genolini2017}. We shall, therefore, specify the uncertainty intervals of the intensity using the percentiles as in \citep{mertsch2011}, e.g. $j_{a\%}$ is defined via $a\%=\int_0^{j_{a\%}} \df j \, p(j)$. The $68\%$ and $95\%$ uncertainty ranges of the intensity $j(E)$ are then $\mathcal{I}_{68\%}=\left[j_{16\%},j_{84\%}\right]$ and $\mathcal{I}_{95\%}=\left[j_{2.5\%},j_{97.5\%}\right]$.\n\n\section{Results and Discussion}\n\label{sec:results}\n\nWe present in Fig. \ref{fg:stochastic} the $95\%$ and $68\%$ uncertainty bands of the intensities for both GCR protons (left panel) and electrons (right panel) in the energy range from 1 MeV to about 10 GeV together with the expectation values of the intensities and data from Voyager~1~\citep{cummings2016} and AMS-02~\citep{AMS2014,AMS2015}. The uncertainty ranges above 100 MeV are quite narrow since the energy loss time and the diffusive escape time are sufficiently large such that the distribution of GCRs inside the Galactic disk becomes more or less uniform. We note that this will not remain true for GCR electrons of energy above 10 GeV since the energy loss rate for these particles becomes increasingly large in this energy range, which results in significant stochastic fluctuations \citep{atoyan1995,ptuskin2006,mertsch2011,mertsch2018,recchia2019b,manconi2020,evoli2021}.\n\nThe uncertainty ranges broaden for $E\lesssim100$ MeV until a characteristic energy $E^*$ below which the ratio between the upper and lower limit of the intensities becomes constant. \nSuch a feature emerges from the fact that the Green's function behaves as \mbox{$\mathcal{G}(r=0,z=z_\odot,E;r_i,z_i,\tau_i)\sim 1\/|\dot{E}|$} if the propagation time $\tau_i$ is much larger than the energy loss time ($\tau_i\gg \tau_l(E)= E\/|\dot{E}|$), which is easily fulfilled for particles of energy below a few tens of MeV. Since $\tau^{(n)}_i\gtrsim \tau_l(E\lesssim 10 \textrm{ MeV})$ for $i=\overline{1,N_{s}}$ in each realisation $n$, we expect from Eq. \ref{eq:stochasticity} that $j^{(n)}(E)\sim v\/|\dot{E}|$ for all realisations at sufficiently low energies and, thus, the limits of the uncertainty ranges should become parallel below a characteristic energy. 
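The origin of this scaling can be made plausible by a simple heuristic that neglects the spatial part of the Green's function: particle number is conserved along the cooling trajectory $E_0(E,\tau)$ defined by $\mathrm{d}E/\mathrm{d}t=-|\dot{E}|$, so an impulsive injection with spectrum $Q$ gives
\begin{equation*}
\mathcal{G}(E,\tau)\,\mathrm{d}E = Q(E_0)\,\mathrm{d}E_0
\quad\Rightarrow\quad
\mathcal{G}(E,\tau) = Q(E_0)\,\frac{|\dot{E}(E_0)|}{|\dot{E}(E)|}\, ,
\end{equation*}
and for $\tau\gg\tau_l(E)$ the initial energy $E_0$ is fixed by $\tau$ alone, so the prefactor $Q(E_0)\,|\dot{E}(E_0)|$ no longer depends on $E$ and $\mathcal{G}\propto 1/|\dot{E}(E)|$.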
The intensities of GCR protons for several realisations that lie within the 68\% uncertainty range are depicted in Fig.~\ref{fg:sample} to better illustrate the spectral behaviour at low energies. \n\n\begin{figure}[ht]\n\includegraphics[width=3.5in, height=2.8in]{fg_jE_p_show_sample_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_87_zu_39_in.png}\n\caption{Intensities of GCR protons for several realisations (dashed grey curves) around the 68\% uncertainty range (shaded region). Data points are as in Fig. \ref{fg:stochastic} and the solid black curve is the median of the intensities.}\n\label{fg:sample}\n\end{figure}\n\nNote that a uniform distribution of GCRs will be attained if the number of sources within the diffusion loss length $l_d(E)=\sqrt{4D(E)\tau_l(E)}$ in the disk is much larger than one,\n\begin{eqnarray}\n\mathcal{R}_s\tau_l(E) \frac{2 l_d^3(E)}{3 R_d^2 h_s}\gg 1 \, .\n\end{eqnarray}\nThe characteristic energy $E^*$ could be estimated by setting the LHS of the above inequality to one, which gives $E^*\simeq 10$ MeV for both species. \n\nInterestingly, apart from the deviation in the energy range below a few GeV due to solar modulation, the median corresponding to $j_{50\%}$, the 50th percentile of the PDF of the intensities, seems to provide a good fit to the data of Voyager~1 and AMS-02 for both GCR protons and electrons (see Fig.~\ref{fg:stochastic}). We note that neither the expectation values nor the median strictly correspond to the intensities of any particular realisation of sources. At low energies, however, the expectation value is dominated by a few rather unlikely realisations with extreme intensities, $j^{(n)}(E)> j_{84\%}(E)$, which lie outside the 68\% uncertainty range. Furthermore, the resulting $\langle j(E)\rangle$, which is also the intensity predicted for the smooth source density, as stressed above, has a different energy dependence than the \textit{universal} scaling $j^{(n)}(E)\sim v\/|\dot{E}|$ expected at low energies. The median, on the other hand, behaves as $j_{50\%}(E) \sim v\/|\dot{E}|$ and, in fact, the intensities in many realisations seem to closely resemble the spectral behaviour of the median both at low and high energies (see Fig. \ref{fg:sample}). It is for this reason that the median is to be preferred over the expectation value for the comparison with observational data. \n\nWe note also that the observed proton spectrum seems to have a broader peak than the median of the stochastic model, and the observed electron spectrum seems to exceed the median. It is clear, however, that the local ISM should be quite inhomogeneous, and that the observed spectra in an inhomogeneous ISM could be modelled as the weighted average of spectra for different gas densities to provide better agreement with data. We relegate the details of this to future work.\n\nIt is worth mentioning also that the model with the smooth source density could fit both the Voyager~1 and AMS-02 data under the assumption that the vertical extension of sources is $2h_{s}\simeq 600$ pc \citep{schlickeiser2014}, as expected for type \rom{1}a SN, but these events have a relatively low rate \citep{prantzos2011}. The stochastic model, however, predicts the observational data to be within the most probable range of the intensities for both GCR protons and electrons with $2h_s\simeq 80$ pc, comparable to the vertical extension of CCSN, which have a higher rate \cite{prantzos2011}. 
More importantly, there is no need to introduce ad hoc breaks both in the injection spectra and the diffusion coefficients. The stochastic model, therefore, seems to be a more appropriate framework for low-energy GCRs.\n\n\\section{Summary and outlook}\n\nIn this \\textit{letter} we have presented results of a modelling of proton and electron spectra between 1 MeV and 10 GeV. Before the advent of the Voyager~1 measurements outside the heliopause, this energy range had received relatively little attention previously due to the fact that solar modulation makes the inference of interstellar spectra difficult. All the models to date assume a smooth source distribution, however, these models do not reproduce the Voyager~1 data unless a spectral break is introduced in the source spectrum. From a microphysical point of view, such a break seems rather unmotivated. \n\nThe smooth approximation is, in fact, not justified since at low energies the energy loss distance becomes shorter than the average source separation. Unlike previous models we therefore considered the discrete nature of sources, modelling the distribution of intensities in a statistical ensemble. We note that the intensity prediction from a smooth density is the ensemble average of this distribution. However, we showed that the ensemble average is not representative of the distribution due to its long power-law tails. For instance, the spectral shapes of the predicted intensities in different realisations are the same below a critical energy. While the expectation value has a very different spectrum at the lowest energies, the median of the distribution does exhibit the same spectral shape. Furthermore, the expectation value is outside the $68\\%$ uncertainty range of the distribution at the lowest energies while the median is by definition always inside. We have shown that the Voyager~1 data fall squarely around the median of the distribution without the need for any unphysical breaks in the source spectra (see \\cite{Note1} for all model parameters).\n\nThe statistical model we have presented here might have interesting implications for other anomalies observed in low-energy GCRs. For instance, it has been shown recently~\\cite{phan2018} that the ionisation rate implied by the Voyager~1 data is much smaller than the ionisation rate directly inferred for a large number of molecular clouds. It would be interesting to see whether the inhomogeneities implied by our statistical model of discrete sources can alleviate this tension. In such a scenario, the Voyager~1 data would need to lie towards the lower edge of the uncertainty band while the molecular cloud measurements would be in regions of systematically higher GCR densities, possibly due to their spatial correlation with source regions. Thanks to our careful statistical model, we will be able to statistically quantify such a model in the future.\\\\\n\nThis project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement No 665850. VHMP is grateful to Marco Kuhlen, Nhan Chau, Ngoc Khanh Vu, and Quang Nam Dam for fruitful discussions and technical support. \n\n\\bibliographystyle{apsrev4-2}\n\n\\section*{The cosmic-ray transport equation}\nWe have adopted the cosmic-ray (CR) transport equation in terms of kinetic energy $E$ for the study of stochasticity. However, it might be more customary to formulate the CR transport equation in terms of momentum $p$. 
For definiteness, we now lay out the procedure for the transformation. The equation in terms of momentum is \cite{schlickeiser2002}: \n\begin{equation}\n\frac{\partial f}{\partial t}+\frac{\partial}{\partial z}\left(u f\right) -D\nabla^2 f + \frac{1}{p^2} \frac{\partial }{\partial p}\left(\dot{p}p^2f\right)=\Tilde{q}(r, z, p, t) \, , \label{eq:transport-p}\n\end{equation}\nwhere $f(r,z,p,t)$ is the phase space density of CRs, that is, the number of particles per unit volume in configuration and momentum space, $u=u(z)$ is the advection velocity with only a component perpendicular to the Galactic disk, $D=D(p)$ is the isotropic and homogeneous diffusion coefficient, and $\dot{p}$ is the momentum loss rate. The phase space density $f(r,z,p,t)$ is related to the cosmic-ray density $\psi(r,z,E,t)$, the number of particles per unit volume and energy, as $\psi(r,z,E,t)=4\pi p^2 f(r,z,p,t)\/v$ and, similarly, we have $q(r,z,E,t)=4\pi p^2 \tilde{q}(r,z,p,t)\/v$, where $v$ is the particle's speed. It is then clear that we could now transform Eq. \ref{eq:transport-p} into an equation for $\psi(r,z,E,t)$ by performing the change of variable from $p$ to $E$. We note also that $\dot{E}=\dot{p}v$ and, in fact, the standard literature mostly quotes the formulae for the energy loss rate (even when Eq. \ref{eq:transport-p} is adopted for the study of CRs \cite{strong1998,schlickeiser2002}).\n\n\section*{Energy loss rate}\n\nCosmic-ray protons lose energy mostly due to ionization and proton-proton interactions with the gas in the Galactic disk. The combined energy loss rate for these two processes could be written as \cite{schlickeiser2002,krakau2015}: \n\begin{eqnarray}\n&&\dot{E}\simeq \textrm{H}(h-|z|) 1.82\times 10^{-7}\left(\frac{n_{\textrm{H}}}{1\textrm{ cm}^{-3}}\right)\nonumber\\\n&&\qquad\times\left[(1+0.185\ln\beta)\frac{2\beta^2}{10^{-6}+2\beta^3}+2.115\left(\frac{E}{1\textrm{ GeV}}\right)^{1.28}\left(\frac{E}{1\textrm{ GeV}}+200\right)^{-0.2}\right]\textrm{ eV\/s},\n\end{eqnarray}\nwhere $H(h-|z|)$ is the Heaviside function, which indicates that these energy loss mechanisms are only effective within the height $h$ of the disk, $n_{\mathrm{H}}$ is the density of hydrogen atoms in the disk, $\beta$ is the ratio between the particle's speed and the speed of light, and $E$ is the kinetic energy of the particle. \n\nBelow a few GeV, the main mechanisms for energy loss of CR electrons are ionization interactions and bremsstrahlung radiation in the Galactic disk. At higher energies, these particles lose energy more effectively not only in the disk but also in the CR halo due to synchrotron radiation and inverse Compton scattering. The energy loss rate could then be parametrized as \cite{schlickeiser2002,mertsch2011,evoli2017}: \n\begin{eqnarray}\n&&\dot{E}\simeq 10^{-7}\left(\frac{E}{1\textrm{ GeV}}\right)^2+\textrm{H}(h-|z|) 1.02\times 10^{-8}\left(\frac{n_\mathrm{H}}{1\textrm{ cm}^{-3}}\right)\nonumber\\\n&&\qquad\qquad\qquad\qquad\qquad\qquad\times\left\{18.495+2.7\ln\gamma+0.051\gamma\left[1+0.137\left(\ln\gamma+0.36\right)\vphantom{^{\frac{^\frac{}{}}{}}}\right]\vphantom{^{\dfrac{}{}}}\right\}\textrm{ eV\/s},\n\end{eqnarray}\nwhere $\gamma$ is the Lorentz factor.\n\n\section*{Parameters for the stochastic model}\nIn Tab. \ref{tab:parameters}, we briefly summarise all the parameters adopted in order for the stochastic uncertainty bands to encompass the data from both Voyager 1 and AMS-02. 
Most of the parameters, including the diffusion coefficient and the injection spectra, are constrained from the fits at high energies \cite{evoli2019}. \n\nIn fact, the two parameters that only the low-energy spectra are sensitive to are the number density of hydrogen atoms and the advection speed perpendicular to the disk. The value of the advection speed has also been given in the fits at high energy, but it might vary slightly around 10 km\/s depending on the model and the species of CRs considered \cite{evoli2019,mertsch2020}. We note also that $n_{\mathrm{H}}$ is not completely free as the surface density of the disk for our Galactic neighborhood is externally constrained to be around 2 mg\/cm$^{2}$, which is quite consistent with the value adopted for our fits \cite{ferriere2001}. \n\n\begin{table}[h!]\n\centering\n\caption{Externally constrained and fitted parameters for the stochastic model for both CR protons and electrons in the case of the diffusion coefficient scaling with the Lorentz factor, as presented in the main text.}\n\n\n\t\label{tab:parameters}\n\t\begin{tabular}{|c|c|c|r|}\n\t\t\hline\n\t\t\hline\n\t\t\multirow{2}{*}{\shortstack{Fitted parameters\\ for low-energy CRs}} & $n_{\mathrm{H}}$ & Gas density in the disk & 0.9 cm$^{-3}$\\\n\t\t\cline{2-4}\n\t\t& $u_0$ & Advection speed & 16 km\/s\\\n\t\t\hline\n\t\t\multirow{10}{*}{\shortstack{Constrained parameters\\ from high-energy CRs}} & $R_d$ & $\qquad$ Radius of the Galactic disk $\qquad$ & 15 kpc \\\n\t\t\cline{2-4}\n\t\t& $H$ & Height of the CR halo & 4 kpc\\\n\t\t\cline{2-4}\n\t\t& $2h$ & Height of the gas disk for energy loss & 300 pc\\\n\t\t\cline{2-4}\n\t\t& $2h_s$ & Height of the disk of sources & 80 pc\\\n\t\t\cline{2-4}\n\t\t& $D(E=10\textrm{ GeV})$ & Diffusion coefficient at 10 GeV & $5\times 10^{28}$ cm$^2$\/s\\\n\t\t\cline{2-4}\n\t\t& $\delta$ & Index of the diffusion coefficient & 0.63\\\n\t\t\cline{2-4}\n\t\t& $\mathcal{R}_s$ & Source rate & 0.03 yr$^{-1}$\\\n\t\t\cline{2-4}\n\t\t& $\xi_{CR}^{(p)}$ & Proton acceleration efficiency & 8.7\%\\\n\t\t\cline{2-4}\n\t\t& $\xi_{CR}^{(e)}$ & Electron acceleration efficiency & 0.55\%\\\n\t\t\cline{2-4}\n\t\t& $\alpha$ & Index of the injection spectra & 4.23\\ \n\t\t\hline\n\t\t\hline\n\t\end{tabular}\n\end{table}\n\nWe note that the parameters in Tab. \ref{tab:parameters} have been obtained for the diffusion coefficient of the form presented in the main text, $D(E)\sim\beta \gamma^{\delta}$, which is expected when the magneto-static approximation is relaxed, meaning that the Alfv\'en speed is no longer negligible in comparison to the particle's speed in the resonance condition of wave-particle interaction (see e.g. \cite{schlickeiser1999,schlickeiser2010} for more technical details). In a broader sense, it is probably fair to admit that there remain significant uncertainties since there are currently no direct observations of the mean-free path in the interstellar medium. In order to bracket this uncertainty, we have also repeated our computation with a diffusion coefficient that has a power-law dependence on rigidity, $D(E)\sim \beta R^\delta$, where $R$ is the particle's rigidity. We have found that this would not qualitatively change our results since the break in $D(E)$ below roughly 1 GeV does not significantly affect the spectrum of CR protons at low energies as the transport in this regime is dominated by energy loss. 
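For reference, the two normalisations can be compared with the following minimal sketch for protons (illustrative only; kinetic energy and mass in GeV, both forms pinned to $D(E=10\textrm{ GeV})\simeq 5\times 10^{28}$ cm$^2$\/s):
\begin{verbatim}
import numpy as np

MP, DELTA, D0, E0 = 0.938, 0.63, 5.0e28, 10.0  # proton mass, index, normalisation at 10 GeV

def kinematics(E):                        # E: kinetic energy in GeV
    gamma = 1.0 + E / MP
    beta  = np.sqrt(1.0 - gamma**-2)
    rig   = np.sqrt(E**2 + 2.0 * E * MP)  # rigidity in GV for protons (Z = 1)
    return beta, gamma, rig

def D_lorentz(E):                         # D ~ beta * gamma**delta (main text)
    b, g, _ = kinematics(E); b0, g0, _ = kinematics(E0)
    return D0 * (b / b0) * (g / g0) ** DELTA

def D_rigidity(E):                        # D ~ beta * R**delta (rigidity scaling)
    b, _, r = kinematics(E); b0, _, r0 = kinematics(E0)
    return D0 * (b / b0) * (r / r0) ** DELTA

# At MeV energies gamma -> 1, so the first form is controlled by beta alone,
# while the rigidity form carries an extra falling factor R**DELTA.
\end{verbatim}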
For CR electrons, the rigidity or Lorentz factor dependent diffusion coefficients are roughly the same down to 1 MeV. We present also in Fig.~\\ref{fg:Drig} the fits for the case of a rigidity-dependent diffusion coefficient with slightly different values for the advection speed $u_0$ and the number density of hydrogen atoms $n_\\mathrm{H}$ (see Tab.~\\ref{tab:parameters2} for the complete list of parameter values in this case)\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=3.2in, height=2.7in]{fg_jE_p_show_SNR_rh=1000_vA=18_h=151_nH=70_zad=10_Drig_alpha=423_ep=87_zu=39_in.png}\n\\includegraphics[width=3.2in, height=2.7in]{fg_jE_e_show_SNR_rh=1000_vA=18_h=151_nH=70_zad=10_Drig_alpha=423_ep=5_zu=39_in.png}}\n\\caption{Stochastic fluctuations of GCR protons (left panel) and electrons (right panel) in comparison with data from Voyager~1~\\cite{cummings2016} (blue) and AMS-02~\\cite{AMS2014,AMS2015} (green) for the case of rigidity dependent diffusion coefficient. The dotted and solid black curves are respectively the expectation values and the median of the intensities. The shaded regions are the 95\\% and 68\\% uncertainty ranges.}\n\\label{fg:Drig}\n\\end{figure}\n\n\\begin{table}[h!]\n\\centering\n\\caption{Externally constrained and fitted parameters for the stochastic model for both CR protons and electrons in the case for the diffusion coefficient scaling with rigidity.}\n\n\n\t\\label{tab:parameters2}\n\t\\begin{tabular}{|c|c|r|}\n\t\t\\hline\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\shortstack{Fitted parameters\\\\ for low-energy CRs}} & $n_{\\mathrm{H}}$ & 0.7 cm$^{-3}$\\\\\n\t\t\\cline{2-3}\n\t\t& $u_0$ & 18 km\/s\\\\\n\t\t\\hline\n\t\t\\multirow{10}{*}{\\shortstack{Constrained parameters\\\\ from high-energy CRs}} & $R_d$ & $\\qquad$ 15 kpc \\\\\n\t\t\\cline{2-3}\n\t\t& $H$ & 4 kpc\\\\\n\t\t\\cline{2-3}\n\t\t& $2h$ & 300 pc\\\\\n\t\t\\cline{2-3}\n\t\t& $2h_s$ & 80 pc\\\\\n\t\t\\cline{2-3}\n\t\t& $D(E=10\\textrm{ GeV})$ & $5\\times 10^{28}$ cm$^2$\/s\\\\\n\t\t\\cline{2-3}\n\t\t& $\\delta$ & 0.63\\\\\n\t\t\\cline{2-3}\n\t\t& $\\mathcal{R}_s$ & 0.03 yr$^{-1}$\\\\\n\t\t\\cline{2-3}\n\t\t& $\\xi_{CR}^{(p)}$ & 8.7\\%\\\\\n\t\t\\cline{2-3}\n\t\t& $\\xi_{CR}^{(e)}$ & 0.55\\%\\\\\n\t\t\\cline{2-3}\n\t\t& $\\alpha$ & 4.23\\\\ \n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\bibliographystyle{apsrev4-2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}