diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcyma" "b/data_all_eng_slimpj/shuffled/split2/finalzzcyma" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcyma" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nAs early as in 1927~\\cite{dir27}, Paul A, M. Dirac considered the problem\nof extending Heisenberg's uncertainty relations to the\nLorentz-covariant world.\nIn 1945~\\cite{dir45}, he attempted to construct the Lorentz\ngroup using the Gaussian wave function.\nIn 1949~\\cite{dir49},\nDirac pointed out the task of constructing relativistic dynamics\nis to construct a representation of the inhomogeneous Lorentz group.\nHe then wrote down the ten generators of this group and their closed set\nof commutation relations. This set is known as the Lie algebra of\nthe Poincar\\'e group.\n\nIn 1963~\\cite{dir63}, Dirac considered two coupled harmonic oscillators\nand constructed an algebra leading to the Lie algebra for the $SO(3,2)$\nde Sitter group, which is the Lorentz group applicable to three space\ndimensions and two time-like variables.\n\nFrom the mathematical point of view, it is straight-forward to contract\none of those two time-like dimensions to construct $ISO(3,1)$ or the\nPoincar\\'e group. This is what we present in this paper. However,\nfrom the physical point of view, we are deriving the Poincar\\'e symmetry\nfor the Lorentz-covariant quantum world purely from the symmetries of\nHeisenberg's uncertainty relations.\n\nIn Sec.~\\ref{sp2}, it is noted that a one-dimensional uncertainty relation\ncontains the symmetry of the $Sp(2)$ group in the two-dimensional phase\nspace. It is pointed out that this group, with three generators, is isomorphic\nto the Lorentz group applicable to two space dimensions and one time variable.\nWe can next consider another set with three additional generators.\n\nIn Sec.~\\ref{2osc}, we write those Heisenberg uncertainty relations in\nterm of step-up and step-down operators in the oscillator system. It\nis then possible to consider the two coupled oscillator system\nwith the ten generators constructed by Dirac in 1963~\\cite{dir63}.\nIt is gratifying to note that this oscillator system can serve as the basic\nlanguage for the two-photon system of current interest~\\cite{yuen76,yurke86}.\n\nIn Sec.~\\ref{contraction}, we contract one of the time-like variables in\n$SO(3,2)$ to arrive at the inhomogeneous Lorentz group $ISO(3,1)$ or the\nPoincar\\'e group. In Sec.~\\ref{remarks}, we give some concluding\nremarks.\n\n\n\\section{Sp(2) Symmetry for the Single-variable Uncertainty Relation}\\label{sp2}\nIt is known that the symmetry of quantum mechanics and quantum field\ntheory is governed by the Poincar\\'e group~\\cite{dir49,dir62}. The\nPoincar\\'e group means the inhomogeneous Lorentz group which includes\nthe Lorentz group applicable to the four-dimensional Minkowskian\nspace-time, plus space-time translations~\\cite{wig39}.\n\nThe question is whether this Poincar\\'e symmetry is derivable from\nHeisenberg's uncertainty relation, which takes the familiar form\n\\begin{equation}\\label{101}\n \\left[x_{i}, p_{j}\\right] = i \\delta_{ij}.\n\\end{equation}\nThere are three commutation relations in this equation. Let us choose\none of them, and write it as\n\\begin{equation}\\label{102}\n [x, p] = i .\n\\end{equation}\nThis commutation relation possesses the symmetry of the Poisson bracket in\nclassical mechanics~\\cite{arnold78,stern84}. 
The best way to address this property\nis to use the Gaussian form for the Wigner function defined in the phase\nspace, which takes the\nform~\\cite{hkn88,kiwi90ajp,knp91}\n\\begin{equation}\\label{103}\nW(x,p) = \\frac{1}{\\pi} \\exp\\{-\\left(x^2 + p^2\\right)\\}.\n\\end{equation}\nThis distribution is concentrated in the circular region around the origin.\nLet us define the circle as\n\\begin{equation}\\label{105}\nx^{2} + p^{2} = 1.\n\\end {equation}\nWe can use the area of this circle in the phase space of $x$ and $p$ as the\nminimum uncertainty. This uncertainty is preserved\nunder rotations in the phase space:\n\\begin{equation}\\label{107}\n\\pmatrix{\\cos\\theta & -\\sin\\theta \\cr \\sin\\theta & \\cos\\theta }\n\\pmatrix{ x \\cr p },\n\\end{equation}\nas well as the squeeze of the form\n\\begin{equation}\\label{108}\n\\pmatrix{e^{\\eta} & 0 \\cr 0 & e^{-\\eta} }\n\\pmatrix{x \\cr p } .\n\\end{equation}\n\nThe rotation and the squeeze are generated by\n\\begin{equation}\\label{109}\nJ_{2} = - i\\left(x \\frac{\\partial}{\\partial p} - p\\frac{\\partial}{\\partial x } \\right),\n\\qquad\nK_{1} = -i \\left(x\\frac{\\partial}{\\partial x} - p \\frac{\\partial}{\\partial p} \\right),\n\\end{equation}\nrespectively.\nIf we take the commutation relation with these two operators, the result is\n\\begin{equation}\\label{111}\n\\left[J_{2}, K_{1}\\right] = -i K_{3},\n\\end{equation}\nwith\n\\begin{equation}\\label{113}\nK_{3} = -i \\left(x\\frac{\\partial}{\\partial p} +\np \\frac{\\partial}{\\partial x} \\right).\n\\end{equation}\nIndeed, these three generators form a closed set of commutation\nrelations:\n\\begin{equation}\\label{115}\n \\left[J_{2}, K_{1}\\right] = - iK_{3}, \\qquad\n \\left[J_{2}, K_{3}\\right] = iK_{1}, \\qquad\n \\left[K_{1}, K_{3}\\right] = iJ_{2}.\n\\end{equation}\nThis closed set is called the Lie algebra of the $Sp(2)$ group, isomorphic\nto the Lorentz group applicable to two space and one time dimensions.\n\nLet us consider the Minkowskian space of $(x, y, z, t)$. It is possible to\nwrite three four-by-four matrices satisfying the Lie algebra of Eq.(\\ref{115}).\nThe three four-by-four matrices satisfying this set\nof commutation relations are:\n\\begin{equation}\\label{119}\nJ_{2} = \\pmatrix{0 & 0 & i & 0 \\cr 0 & 0 & 0 & 0 \\cr i & 0 & 0 & 0 \\cr\n 0 & 0 & 0 & 0 },\\quad\nK_{1} = \\pmatrix{0 & 0 & 0 & i \\cr 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 \\cr\n i & 0 & 0 & 0 }, \\quad\n K_{3} = \\pmatrix{0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & i \\cr\n 0 & 0 & i & 0 } .\n\\end{equation}\nHowever, these matrices have null second rows and null second columns.\nThus, they can generate Lorentz transformations applicable only to the\nthree-dimensional space of $(x,z,t)$, while the $y$ variable remains invariant.\n\n\n\n\\section{Two-oscillator System}\\label{2osc}\nIn order to generate Lorentz transformations applicable to the full\nMinkowskian space, along with $J_{2},K_{1}$, and $K_{3}$ we need two\nmore Heisenberg commutation relations.\nIndeed, Paul A. M. 
Dirac started this program in 1963~\\cite{dir63}.\nIt is possible to write the two uncertainty relations using two harmonic\noscillators as\n\\begin{equation}\\label{121}\n\\left[a_{i}, a^{\\dag}_{j}\\right] = \\delta_{ij} .\n\\end{equation}\nwith\n\\begin{equation}\\label{123}\n a_{i} = \\frac{1}{\\sqrt{2}}\\left(x_{i} + ip_{i} \\right), \\qquad\n a^{\\dag}_{i} = \\frac{1}{\\sqrt{2}}\\left(x_{i} - ip_{i} \\right),\n\\end{equation}\nand\n\\begin{equation}\\label{125}\n x_{i} = \\frac{1}{\\sqrt{2}}\\left(a_{i} + a^{\\dag}_{i} \\right), \\qquad\n p_{i} = \\frac{i}{\\sqrt{2}}\\left(a^{\\dag}_{i} - a_{i} \\right),\n\\end{equation}\n where $i$ and $j$ could be 1 or 2.\n\nMore recently in 1986, this two-oscillator system was considered\nby Yurke {\\it et al.}~\\cite{yurke86} in their study of two-mode\ninterferometers. They considered first\n\\begin{equation}\\label{301}\n Q_{3} = \\frac{i}{2}\\left(a_{1}^{\\dag}a_{2}^{\\dag} - a_{1}a_{2}\\right),\n\\end{equation}\nwhich leads to the generation of the two-mode coherent state or the\nsqueezed state~\\cite{yuen76}.\n\nYurke {\\it et al.} then considered possible interferometers requiring\nthe following two additional operators.\n\\begin{equation}\\label{303}\nS_{3} = {1\\over 2}\\left(a^{\\dag}_{1}a_{1} + a_{2}a^{\\dag}_{2}\\right) ,\n\\qquad\nK_{3} = {1\\over 2}\\left(a^{\\dag}_{1}a^{\\dag}_{2} + a_{1}a_{2}\\right) .\n\\end{equation}\nThe three Hermitian operators from Eq.(\\ref{301}) and Eq.(\\ref{303})\nsatisfy the commutation relations\n\\begin{equation} \\label{305}\n\\left[K_{3}, Q_{3}\\right] = -iS_{3}, \\qquad\n\\left[Q_{3}, S_{3}\\right] = iK_{3}, \\qquad\n\\left[S_{3}, K_{3}\\right] = iQ_{3} .\n\\end{equation}\nThese relations are like those given in Eq.(\\ref{115}) for the Lorentz\ngroup applicable to two space-like and one time-like dimensions.\n\nIn addition, in the same paper~\\cite{yurke86}, Yurke {\\it et al.}\ndiscussed the possibility of constructing interferometers exhibiting\nthe symmetry generated by\n\\begin{equation}\\label{307}\n L_{1} = {1\\over 2}\\left(a^{\\dag}_{1}a_{2} + a^{\\dag}_{2}a_{1}\\right) , \\quad\n L_{2} = {1\\over 2i}\\left(a^{\\dag}_{1}a_{2} - a^{\\dag}_{2}a_{1}\\right), \\quad\n L_{3} = {1\\over 2}\\left(a^{\\dag}_{1}a_{1} - a^{\\dag}_{2}a_{2} \\right).\n\\end{equation}\nThese generators satisfy the closed set of commutation relations\n\\begin{equation}\\label{309}\n\\left[L_{i}, L_{j}\\right] = i\\epsilon_{ijk} L_{k} ,\n\\end{equation}\nand therefore define a Lie algebra which is the same as that for $SU(2)$\nor the three-dimensional rotation group.\n\nWe are then led to ask whether it is possible to construct a closed set\nof commutation relations with the six Hermitian operators from\nEqs.(\\ref{301},\\ref{303},\\ref{307}). It is not possible. 
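To see concretely why the set does not close, note that the commutator $[K_{3}, L_{1}]$ already produces a single-mode squeezing combination lying outside the span of the six operators above. The following sketch checks this numerically in a truncated two-mode Fock space; the cutoff, the Python representation, and the restriction to low-lying states (where the truncation is harmless) are illustrative choices only, and the operator $K_{2}$ used in the check is one of the four additional generators introduced below.
\\begin{verbatim}
import numpy as np

n = 12                                    # Fock cutoff per mode (illustrative)
a = np.diag(np.sqrt(np.arange(1, n)), 1)  # single-mode annihilation operator
I = np.eye(n)
a1, a2 = np.kron(a, I), np.kron(I, a)     # the two oscillator modes
ad1, ad2 = a1.conj().T, a2.conj().T

K3 = 0.5*(ad1 @ ad2 + a1 @ a2)
L1 = 0.5*(ad1 @ a2 + ad2 @ a1)
K2 = 0.25j*(ad1 @ ad1 - a1 @ a1 + ad2 @ ad2 - a2 @ a2)

comm = K3 @ L1 - L1 @ K3
# compare matrix elements between states with few quanta, where the
# finite cutoff does not distort the commutator
n1 = np.repeat(np.arange(n), n)
n2 = np.tile(np.arange(n), n)
low = (n1 + n2) <= n - 6
diff = (comm - 1j*K2)[np.ix_(low, low)]
print(np.max(np.abs(diff)))   # numerically zero, i.e. [K3, L1] = i K2
\\end{verbatim}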
We have to\nadd four additional operators, namely\n\\begin{eqnarray}\\label{311}\n&{}& K_{1} = -{1\\over 4}\\left(a^{\\dag}_{1}a^{\\dag}_{1} + a_{1}a_{1} -\n a^{\\dag}_{2}a^{\\dag}_{2} - a_{2}a_{2}\\right) , \\quad\nK_{2} = +{i\\over 4}\\left(a^{\\dag}_{1}a^{\\dag}_{1} - a_{1}a_{1} +\n a^{\\dag}_{2}a^{\\dag}_{2} - a_{2}a_{2}\\right) , \\nonumber \\\\[1ex]\n&{}& Q_{1} = -{i\\over 4}\\left(a^{\\dag}_{1}a^{\\dag}_{1} - a_{1}a_{1} -\n a^{\\dag}_{2}a^{\\dag}_{2} + a_{2}a_{2} \\right) , \\quad\nQ_{2} = -{1\\over 4}\\left(a^{\\dag}_{1}a^{\\dag}_{1} + a_{1}a_{1} +\n a^{\\dag}_{2}a^{\\dag}_{2} + a_{2}a_{2} \\right) .\n\\end{eqnarray}\n\nThere are now ten operators from Eqs.(\\ref{301},\\ref{303},\\ref{307},\\ref{311}).\nIndeed, these ten operators satisfy the following closed set of commutation\nrelations.\n\\begin{eqnarray}\\label{313}\n&{}& [L_{i}, L_{j}] = i\\epsilon _{ijk} L_{k} ,\\quad\n[L_{i}, K_{j}] = i\\epsilon_{ijk} K_{k} , \\quad\n[L_{i}, Q_{j}] = i\\epsilon_{ijk} Q_{k} , \\nonumber\\\\[1ex]\n &{}& [K_{i}, K_{j}] = [Q_{i}, Q_{j}] = -i\\epsilon _{ijk} L_{k} , \\quad\n [K_{i}, Q_{j}] = -i\\delta_{ij} S_{3}, \\nonumber\\\\[1ex]\n&{}&[L_{i}, S_{3}] = 0, \\quad [K_{i}, S_{3}] = -iQ_{i},\\quad\n[Q_{i}, S_{3}] = iK_{i} .\n\\end{eqnarray}\nAs Dirac noted in 1963~\\cite{dir63}, this set is the same as the Lie\nalgebra for the $SO(3,2)$ de Sitter group, with ten generators. This\nis the Lorentz group applicable to the three-dimensional space with\ntwo time variables. This group plays a very important role in\nspace-time symmetries.\n\n\nIn the same paper, Dirac pointed out that this set of commutation\nrelations serves as the Lie algebra for the four-dimensional\nsymplectic group commonly called $Sp(4)$, applicable to the systems\nof two one-dimensional particles, each with a two-dimensional phase\nspace.\n\n\nFor a dynamical system consisting of two pairs of canonical variables\n$x_{1}, p_{1}$ and $x_{2}, p_{2}$, we can use the four-dimensional\nspace with the coordinate variables defined as~\\cite{hkn95jmp}\n\\begin{equation}\\label{317}\n\\left(x_{1}, p_{1}, x_{2}, p_{2} \\right).\n\\end{equation}\nThen the four-by-four transformation matrix $M$ applicable to this\nfour-component vector is canonical if~\\cite{abra78,goldstein80}\n\\begin{equation}\\label{319}\nM J \\tilde{M} = J ,\n\\end{equation}\nwhere $\\tilde{M}$ is the transpose of the $M$ matrix,\nwith\n\\begin{equation}\\label{321}\nJ = \\pmatrix{0 & 1 & 0 & 0 \\cr -1 & 0 & 0 & 0 \\cr\n 0 & 0 & 0 & 1 \\cr 0 & 0 & -1 & 0 } .\n\\end{equation}\nAccording to this form of the $J$ matrix, the area of the phase space for\nthe $x_{1}$ and $p_{1}$ variables remains invariant, and the story is the\nsame for the phase space of $x_{2}$ and $p_{2}.$\n\nWe can then write the generators of the $Sp(4)$ group as\n\\begin{eqnarray}\\label{322}\n&{}& L_{1} = -\\frac{1}{2}\\pmatrix{0 & I\\cr I & 0 }\\sigma_{2} , \\quad\nL_{2} = \\frac{i}{2} \\pmatrix{0 & -I \\cr I & 0} I , \\quad\nL_{3} = \\frac{1}{2}\\pmatrix{-I & 0 \\cr 0 & I}\\sigma_{2} , \\nonumber\\\\[2ex]\n&{}& S_{3} = \\frac{1}{2}\\pmatrix{I & 0\\cr 0 & I} \\sigma_{2},\n\\end{eqnarray}\n\\noindent and\n\\begin{eqnarray}\\label{323}\n&{}&K_{1} = \\frac{i}{2}\\pmatrix{I & 0 \\cr 0 & -I } \\sigma_{1}, \\quad\nK_{2} = \\frac{i}{2} \\pmatrix{I & 0 \\cr 0 & I } \\sigma_{3}, \\quad\nK_{3} = -\\frac{i}{2}\\pmatrix{0 & I \\cr I & 0 } \\sigma_{1}, \\nonumber\\\\[2ex]\n&{}& Q_{1} = -\\frac{i}{2}\\pmatrix{I & 0 \\cr 0 & -I }\\sigma_{3}, \\quad\nQ_{2} = \\frac{i}{2}\\pmatrix{I & 0 \\cr 0 & I }\\sigma_{1} , 
\\quad\nQ_{3} = \\frac{i}{2}\\pmatrix{0 & I \\cr I & 0 } \\sigma_{3} ,\n\\end{eqnarray}\nwhere $I$ is the two-by-two identity matrix, while $\\sigma_{1},\n\\sigma_{2}$, and $\\sigma_{3}$ are the two-by-two Pauli matrices. The four matrices\ngiven in Eq.(\\ref{322}) generate rotations, while those of Eq.(\\ref{323}) lead to\nsqueezes in the four-dimensional phase space.\n\nAs for the difference in methods used in Secs.~\\ref{sp2} and~\\ref{2osc},\nlet us look at the ten four-by-four matrices given in Eqs.(\\ref{322}) and (\\ref{323}).\nAmong these ten matrices, six of them are diagonal. They are\n$S_{3}, L_{3}, K_{1}, K_{2} , Q_{1},$ and $Q_{2}$.\nIn the language of two harmonic oscillators, these generators do not\nmix up the first and second oscillators. There are six of them because\neach operator has three generators for its own $Sp(2)$ symmetry.\nLet us consider the three generators, $S_{3}, K_{2}$, and $Q_{2}$.\nFor each oscillator, the generators consist of\n\\begin{equation}\n \\sigma_{2}, \\quad i\\sigma_{1} \\quad \\mbox{and} \\quad i\\sigma_{3} .\n\\end{equation}\nThese separable generators thus constitute the Lie algebra of Sp(2) group\nfor the one-oscillator system, which we discussed in Sec.~\\ref{sp2}. Hence, the\none-oscillator system constitutes a subgroup of the two-oscillator system.\n\nThe off-diagonal matrix $L_{2}$ couples the first and second oscillators\nwithout changing the overall volume of the four-dimensional phase space.\nHowever, in order to construct the closed set of commutation relations,\nwe need the three additional generators: $L_{1} , K_{3},$ and $Q_{3}.$\nThe commutation relations given in Eq.(\\ref{313}) are clearly\nconsequences of Heisenberg's uncertainty relations.\n\n\n\n\n\n\n\\section{Contraction of SO(3,2) to ISO(3,1)}\\label{contraction}\n\n\nLet us next go back to the $SO(3,2)$ contents of this two-oscillator\nsystem~\\cite{dir63}. There are three space-like coordinates $(x, y, z)$\nand two time-like coordinates $s$ and $t$. It is thus possible to\nconstruct the five-dimensional space of $(x, y, z, t, s)$, and to consider\nfour-dimensional Minkowskian subspaces consisting of $(x, y, z, t)$\nand $(x, y, z, s)$.\n\nAs for the $s$ variable, we can make it longer or shorter, according\nto procedure of group contractions introduced first by In{\\\"o}n{\\\"u} and\nWigner~\\cite{inonu53}. In this five-dimensional space, the boosts along the\n$x$ direction with respect to the $t$ and $s$ variables are generated by\n\\begin{equation}\\label{333}\nA_{x} = \\pmatrix{0 & 0 & 0 & i & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr\n0 & 0 & 0 & 0 & 0 \\cr i & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 }, \\qquad\nB_{x} = \\pmatrix{0 & 0 & 0 & 0 & i \\cr 0 & 0 & 0 & 0 & 0 \\cr\n0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr i & 0 & 0 & 0 & 0 },\n\\end{equation}\nrespectively. 
The boost generators along the $y$ and $z$ directions\ntake similar forms.\n\nLet us then introduce the five-by-five contraction matrix~\\cite{kiwi87jmp,kiwi90jmp}\n\\begin{equation}\\label{335}\nC(\\epsilon) = \\pmatrix{1 & 0 & 0 & 0 & 0 \\cr 0 & 1 & 0 & 0 & 0 \\cr\n0 & 0 & 1 & 0 & 0 \\cr 0 & 0 & 0 & 1 & 0 \\cr 0 & 0 & 0 & 0 & \\epsilon } .\n\\end{equation}\nThis matrix leaves the first four columns and rows invariant, and the\nfour-dimensional Minkowskian sub-space of $(x, y, z, t)$ stays invariant.\n\nAs for the boost with respect to the $s$ variable, according to the procedure\nspelled out in Ref.~\\cite{kiwi87jmp,kiwi90jmp}, the contracted boost generator becomes\n\\begin{equation} \\label{341}\n B^{c}_{x} = \\lim_{\\epsilon\\rightarrow \\infty}\\frac{1}{\\epsilon}~\n \\left[C^{-1}(\\epsilon)~B_{x}~C(\\epsilon)\\right] =\n \\pmatrix{0 & 0 & 0 & 0 & i \\cr 0 & 0 & 0 & 0 & 0 \\cr\n0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 } .\n\\end{equation}\nLikewise, $B^{c}_{y}$ and $B^{c}_{z}$ become\n\\begin{equation}\\label{343}\n B^{c}_{y} = \\pmatrix{0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & i \\cr\n 0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 },\n \\qquad\n B^{c}_{z} = \\pmatrix{0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr\n 0 & 0 & 0 & 0 & i \\cr 0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0} ,\n\\end{equation}\nrespectively.\n\nAs for the $t$ direction, the transformation applicable to the $s$ and $t$\nvariables is a rotation, generated by\n\\begin{equation}\\label{345}\nB_{t} = \\pmatrix{0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr\n0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & i \\cr 0 & 0 & 0 & -i & 0 } .\n\\end{equation}\nThis matrix also becomes contracted to\n\\begin{equation}\\label{347}\nB^{c}_{t} = \\pmatrix{0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & 0 \\cr\n0 & 0 & 0 & 0 & 0 \\cr 0 & 0 & 0 & 0 & i \\cr 0 & 0 & 0 & 0 & 0 } .\n\\end{equation}\nThese contraction procedures are illustrated in Fig.~\\ref{contrac}.\n\n\n\n\\begin{figure\n\\centerline{\\includegraphics[scale=2.0]{contrac66.eps}}\n\\caption{Contraction of the $SO(3,2)$ group to the Poincar\\'e group.\nThe time-like $s$ coordinate is contracted with respect to the\nspace-like $x$ variable, and with respect to the time-like\nvariable $t$.}\\label{contrac}\n\\end{figure}\n\n\n\n\nThese four contracted generators lead to the five-by-five transformation\nmatrix\n\\begin{equation}\\label{349}\n\\exp\\left\\{-i\\left(aB^{c}_{x}+ bB^{c}_{y} + cB^{c}_{z} + dB^{c}_{t}\\right)\\right\\}\n= \\pmatrix{1 & 0 & 0 & 0 & a \\cr 0 & 1 & 0 & 0 & b \\cr\n0 & 0 & 1 & 0 & c \\cr 0 & 0 & 0 & 1 & d \\cr 0 & 0 & 0 & 0 & 1 } ,\n\\end{equation}\nperforming translations:\n\\begin{equation}\\label{351}\n \\pmatrix{1 & 0 & 0 & 0 & a \\cr 0 & 1 & 0 & 0 & b \\cr\n0 & 0 & 1 & 0 & c \\cr 0 & 0 & 0 & 1 & d \\cr 0 & 0 & 0 & 0 & 1 }\n\\pmatrix{x \\cr y \\cr z \\cr t \\cr 1 } =\n\\pmatrix{x + a \\cr y + b \\cr z + c \\cr t + d \\cr 1 } .\n\\end{equation}\nThis matrix leaves the first four rows and columns invariant. They are for\nthe Lorentz transformation applicable to the Minkowskian space of $(x, y, z, t)$.\n\nIn this way, the boosts along the $s$ direction become contracted to the\ntranslation. 
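The limiting procedure of Eq.(\\ref{341}) and the resulting translations of Eq.(\\ref{351}) can be made concrete with a short numerical sketch; the Python representation and the finite value of $\\epsilon$ below are illustrative choices only.
\\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def C(eps):                        # contraction matrix of Eq. (335)
    return np.diag([1., 1., 1., 1., eps])

Bx = np.zeros((5, 5), dtype=complex)
Bx[0, 4] = Bx[4, 0] = 1j           # boost generator B_x of Eq. (333)

eps = 1.0e6                        # large but finite (illustrative)
Bx_c = np.linalg.inv(C(eps)) @ Bx @ C(eps) / eps
print(np.round(Bx_c, 8))           # only the (1,5) entry survives, cf. Eq. (341)

# exponentiating the contracted generator translates the x coordinate
vec = np.array([1., 0., 0., 0., 1.])         # (x, y, z, t, 1) with x = 1
print((expm(-1j * 2.0 * Bx_c) @ vec).real)   # ~ [3, 0, 0, 0, 1]: x -> x + 2
\\end{verbatim}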
This means that the group $SO(3,2)$ derivable from Heisenberg's\nuncertainty relations becomes the inhomogeneous Lorentz group governing the\nPoincar\\'e symmetry for quantum mechanics and quantum field\ntheory in the Lorentz-covariant world~\\cite{dir63,dir62}.\n\nGroup contraction has a long history in physics, starting with the 1953\npaper by In{\\\"o}n{\\\"u} and Wigner~\\cite{inonu53}. It starts with a geometrical\nconcept. Our earth is a sphere, but it is convenient to consider a flat surface\ntangent to a given point on the spherical surface of the earth. This approximation\nis called the contraction of $SO(3)$ to $SE(2)$ or the two-dimensional Euclidean\ngroup with one rotational and two translational degrees of freedom.\n\nThis mathematical method was extended to the contraction of the $SO(3,1)$ Lorentz\ngroup to the three-dimensional Euclidean group. More recently, Kim and Wigner\nconsidered a cylindrical surface tangent to the sphere~\\cite{kiwi87jmp,kiwi90jmp}\nat its equatorial belt. This cylinder has one rotational degree of freedom and\none up-down translational degree of freedom. It was shown that the rotation and\ntranslation correspond to the helicity and gauge degrees of freedom for massless\nparticles.\n\nSince the Lorentz group $SO(3,1)$ is locally isomorphic to the $SL(2,c)$ group of two-by-two\nmatrices, we can ask whether it is possible to perform the same contraction\nprocedure in the regime of two-by-two matrices. It does not appear possible\nto represent the $ISE(3)$ (inhomogeneous Euclidean group) with two-by-two\nmatrices. Likewise, there seem to be difficulties in addressing the\nquestion of contracting $SO(3,2)$ to $ISO(3,1)$ within the framework of\nthe four-by-four matrices of $Sp(4)$.\n\n\n\n\n\\section{Concluding Remarks}\\label{remarks}\n\nSpecial relativity and quantum mechanics have served as the major\ntheoretical basis for modern physics for one hundred years.\nThey coexisted in harmony: quantum mechanics augmented by Lorentz\ncovariance when needed.\nIndeed, there have been attempts in the past to construct a\nLorentz-covariant quantum world by combining the Lorentz group\nwith the uncertainty relations~\\cite{dir27,dir45,dir49,dir62,yuka53}.\nThere are recent papers on this subject~\\cite{fkr71,lowe83,bars09}.\nThere are also papers on group contractions including contractions\nof the $SO(3,2)$ group~\\cite{gilmore74,bohm84,bohm85}.\n\nIt is about time for us to examine whether these two great\ntheories can be synthesized. The first step toward this process is\nto find the common mathematical ground. Before Newton, open orbits\n(comets) and closed orbits (planets) were treated differently, but\nNewton came up with one differential equation for both. Before\nMaxwell, electricity and magnetism were different branches of physics.\nMaxwell's equations synthesized these two branches into one. It is\nshown in this paper that the group $ISO(3,1)$ can be derived from\nthe algebra of quantum mechanics.\n\nIt is gratifying to note that the Poincar\\'e symmetry is derivable\nwithin the system of Heisenberg's uncertainty relations. The\nprocedure included two coupled oscillators resulting in the $SO(3,2)$\nsymmetry~\\cite{dir63}, and the contraction of this $SO(3,2)$ to the\ninhomogeneous Lorentz group $ISO(3,1)$.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\subsection{Experimental Procedure}\nWe study four experimental factors: the model, the batch size, the initialization point, and the learning rate. 
The model factor has three levels that correspond to the three models described above. The batch size will take the values corresponding to SGD-$1$, SGD-$200$, SGD-$500$, and Gradient Descent. The initialization point will be randomly selected from a uniform distribution on a ball whose radius will take values $\\lbrace 10^{-3},10^{-2},10^{-1}, 1 \\rbrace$ and is centered at either the sharpest minimizer or the flattest minimizer. The learning rates will be linear combinations of the upper and lower bounds. Specifically, if $u$ is the threshold for convergence and $l$ is the threshold for divergence, then the learning rates will be $1.5l$, $0.5(u+l)$, or $0.5u$. For each unique collection of the levels of the four factors, one hundred independent runs are executed with at most twenty iterations. For each run, the euclidean distances between the iterates and the minimizer near the initialization point are recorded. \n\n\\subsection{Results and Discussion}\nNote that the results Models 1, 2 and 3 are nearly identical for the purposes of our discussion, and so we will feature Model 1 only in our discussion below.\n\n\\begin{figure}[hbtp]\n\\centering \n\\includegraphics[width=\\textwidth]{st_fig1}\n\\caption[SGD-$k$ on Styblinski-Tang Near Flat Minimizer]{The behavior of SGD-$k$ on Model 1 for different batch sizes when initialized near the flat minimizer. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the flat minimizer for all runs of the specified batch size and specified learning rate.}\n\\label{figure-sgd:st-1}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering \n\\includegraphics[width=\\textwidth]{st_fig3}\n\\caption[SGD-$k$ on Styblinski-Tang Near Sharp Minimizer]{The behavior of SGD-$k$ on Model 1 for different batch sizes when initialized near the sharp minimizer. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the sharp minimizer for all runs of the specified batch size and specified learning rate.}\n\\label{figure-sgd:st-3}\n\\end{figure}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=\\textwidth]{st_fig2}\n\\caption[Initialization and SGD-$k$ on Styblinski-Tang Near Flat Minimizer]{The behavior of SGD-$k$ on Model 1 for different starting radii when initialized near the flat minimizer. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the flat minimizer for all runs of the specified starting radius and specified learning rate.}\n\\label{figure-sgd:st-2}\n\\end{figure}\n\n\\begin{figure}[hbtp]\n\\centering \n\\includegraphics[width=\\textwidth]{st_fig4}\n\\caption[Initialization and SGD-$k$ on Styblinski-Tang Near Sharp Minimizer]{The behavior of SGD-$k$ on Model 1 for different starting radii when initialized near the sharp minimizer. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the sharp minimizer for all runs of the specified starting radius and specified learning rate.}\n\\label{figure-sgd:st-4}\n\\end{figure}\n\nFigure \\ref{figure-sgd:st-1} shows the distance between SGD-$k$ iterates and the flat minimizer on Model 1 for different batch sizes and different learning rates when SGD-$k$ is initialized near the flat minimizer. \nWe note that, regardless of batch size, the iterates are diverging from the flat minimizer and are converging to the corners of the feasible region for learning rates $1.5l$ and $0.5(u+l)$. 
\nOn the other hand, for the learning rate $0.5u$, we see stability for $k=1,200,500$ about the minimizer and we see convergence for GD (i.e., $k=\\infty$). \nSimilarly, Figure \\ref{figure-sgd:st-3} shows the distance between SGD-$k$ iterates and the sharp minimizer on Model 1 for different batch sizes and different learning rates when SGD-$k$ is initialized near the sharp minimizer. \nWe note that, regardless of batch size, the iterates are diverging from the sharp minimizer and converging to the corners of the feasible region for learning rates $1.5l$ and $0.5(u+l)$. On the other hand, for the learning rate $0.5u$, we see stability for $k=1,200,500$ about the sharp minimizer and we see convergence for GD (i.e., $k = \\infty$). \n\nTaking the results in Figures \\ref{figure-sgd:st-1} and \\ref{figure-sgd:st-3} together, we see that we are able to use our deterministic mechanism to find learning rates that either ensure divergence from or stability about a minimizer if we know its local geometric properties. Consequently, we again have evidence that our deterministic mechanism can correctly predict the behavior of SGD-$k$ for nonconvex problems. Moreover, when divergence does occur, Figures \\ref{figure-sgd:st-1} and \\ref{figure-sgd:st-3} also display exponential divergence as predicted by our deterministic mechanism.\n\nFor a different perspective, Figure \\ref{figure-sgd:st-2} shows the distance between SGD-$k$ iterates and the flat minimizer on Model 1 for different starting radii and different learning rates when SGD-$k$ is initialized near the flat minimizer. We note that, regardless of the starting radius, the iterate are diverging from the flat minimizer and are converging to the corners of the feasible region for learning rates $1.5l$ and $0.5(u+l)$. On the other hand, for the learning rate $0.5u$, we see stability and even convergence for all of the runs regardless of the starting radius. \nSimilarly, Figure \\ref{figure-sgd:st-4} shows the distance between SGD-$k$ iterates and the sharp minimizer on Model 1 for different starting radii and different learning rates when SGD-$k$ is initialized near the sharp minimizer. We note that, regardless of the starting radius, the iterates are diverging from the sharp minimizer and are converging to the corners of the feasible region for learning rates $1.5l$ and $0.5(u+l)$. On the other hand, for learning rate $0.5u$, we see stability about and even convergence to the sharp minimizer. \n\nTaking the results in Figures \\ref{figure-sgd:st-2} and \\ref{figure-sgd:st-4} together, we see that we are able to use our deterministic mechanism to find learning rates that either ensure divergence from, or stability about, a minimizer if we know its local geometric properties. Consequently, we again have evidence that our deterministic mechanism can correctly predict the behavior of SGD-$k$ for nonconvex problems. Moreover, when divergence does occur, Figures \\ref{figure-sgd:st-1} and \\ref{figure-sgd:st-3} also display exponential divergence as predicted by our deterministic mechanism.\n\n\\subsection{Experimental Procedure} \nWe study four experimental factors: the model, the learning method, the initialization point, and the learning rate. \nThe model factor has four levels as given by the four model objective functions shown in Figure \\ref{figure-sgd:qc-models}. The learning method has two levels which are SGD-$1$ or GD. 
\nThe initialization point also has two levels: it is selected randomly with a uniform probability from a disc of radius $10^{-8}$ centered either about the circular basin minimizer or about the quadratic basin minimizer. The learning rate has levels that are conditional on the initialization point. \nIf the initialization point is near the minimizer of the circular basin, then the learning rate takes on values $10^{10}$, $5(10^{10})$, $10^{11}$, $5(10^{11})$, $10^{12}$, and $5(10^{12})$. \nIf the initialization point is near the minimizer of the quadratic basin, then the learning rate takes on values $1, 4, 16, 64, 256, 1024$. \n\nFor each unique collection of the levels of the four factors, one hundred independent runs are executed with at most twenty iterations. For each run, the Euclidean distance between the iterates and the circular basin minimizer and the Euclidean distance between the iterates and the quadratic basin minimizer are recorded. \n\n\\subsection{Results and Discussion}\nNote that the results of SGD-$k$ on Models 1 and 2 and Models 3 and 4 are similar when initialized around the circular basin minimizer, and the results of SGD-$k$ on Models 1 and 3 and Models 2 and 4 are similar when initialized around the quadratic basin minimizer. Hence, when we discuss the circular basin minimizer, we will compare Models 1 and 3, but we could have just as easily replaced Model 1 with Model 2 or Model 3 with Model 4 and the discussion would be identical. Similarly, when we discuss the quadratic basin minimizer, we will compare Models 1 and 2, but we could have replaced the results of Model 1 with Model 3 or Model 2 with Model 4 and the discussion would be identical.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=\\textwidth]{cq_fig1}\n\\caption[SGD-$1$ Near Circle Basin Minimizer]{The behavior of SGD-$1$ on Models 1 and 3 when initialized near the circular minimum. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the circular minimum for all runs of the specified model and the specified learning rate.}\n\\label{figure-sgd:cq-1}\n\\end{figure}\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=\\textwidth]{cq_fig2}\n\\caption[SGD-$1$ Near Quadratic Basin Minimizer]{The behavior of SGD-$1$ on Models 1 and 2 when initialized near the quadratic minimum. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the quadratic minimum for all runs of the specified model and the specified learning rate.}\n\\label{figure-sgd:cq-2}\n\\end{figure}\n\nFigure \\ref{figure-sgd:cq-1} shows the distance between the iterates and the circular basin minimizer for the one hundred independent runs of SGD-$1$ for the specified model and the specified learning rate when initialized near the circular basin minimizer. The learning rates that are displayed are the ones where a transition in the convergence-divergence behavior of the method occurs for the specific model. Specifically, SGD-$1$ begins to diverge for learning rates between $5 \\times 10^{10}$ and $10^{11}$ for Model 1 and between $5\\times 10^{11}$ and $10^{12}$ for Model 3. 
\nSimilarly, Figure \\ref{figure-sgd:cq-2} shows the distance between the iterates and the quadratic basin minimizer for the one hundred independent runs of SGD-$1$ for the specified model and the specified learning rate when initialized near the quadratic basin minimizer. SGD-$1$ begins to diverge for learning rates between $4$ and $16$ for Model 1 and between $256$ and $1024$ for Model 2. \n\nFrom these observations, we see that relatively flatter minimizers enjoy larger thresholds for divergence of SGD-$1$ in comparison to sharper minimizers. Moreover, while the bounds computed in Table \\ref{table:lb} are conservative (as we would expect), they are still rather informative, especially in the case of the quadratic basin. Thus, regarding questions (1) and (2) above, we see that while our thresholds are slightly conservative, we still correctly predict the expected behavior of the SGD-$k$ iterates. Moreover, regarding question (3), we observe that the iterates diverge from their respective minimizers in a deterministic way with an exponential rate. Again, this observations lends credence to our deterministic mechanism over the widely accepted stochastic mechanism.\n\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=\\textwidth]{cq_fig3}\n\\caption[GD Near Circle Basin Minimizer]{The behavior of GD on Models 1 and 3 when initialized near the circular basin minimizer. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the circular minimizer for all runs of the specified model and the specified learning rate.}\n\\label{figure-sgd:cq-3}\n\\end{figure}\n\nFigure \\ref{figure-sgd:cq-3} shows the distance between the iterates and the circular basin minimizer for the one hundred independent runs of gradient descent (GD) for the specified model and the specified learning rate when initialized near the circular basin minimizer. If we compare Figures \\ref{figure-sgd:cq-1} and \\ref{figure-sgd:cq-3}, we notice that, for Model 1 and learning rate $5 \\times 10^{10}$, the runs of GD converge whereas some of the runs for SGD-$1$ diverge. Although we do not report the results for all of the models or all of the learning rates, we note the boundary for divergence-convergence for GD are smaller than those of SGD-$1$. In light of our deterministic mechanism, this behavior is expected: as the batch-size, $k$, increases, the lower bound on the learning rates for divergence increases. Therefore, we should expect that at those boundary learning rates where some SGD-$1$ runs are stable and others diverge, for GD, we should only see stable runs, and, indeed, this is what the comparison of Figures \\ref{figure-sgd:cq-1} and \\ref{figure-sgd:cq-3} shows.\n\n\\begin{figure}[hbt]\n\\centering\n\\includegraphics[width=\\textwidth]{cq_fig4}\n\\caption[Comparing SGD-$1$ and GD Near Quadratic Basin Minimizer]{The behavior of SGD-$1$ and GD on Model 1 when initialized near the quadratic minimum for select learning rates. The $y$-axis shows the distance (in logarithmic scale) between the iterates and the circular minimizer for all runs of the specified method and the specified learning rate.}\n\\label{figure-sgd:cq-4}\n\\end{figure}\n\nFigure \\ref{figure-sgd:cq-4} shows the distance between the iterates and the circular basin minimizer for the one hundred independent runs of SGD-$1$ and the one hundred independent runs of GD for the specified method and the specified learning rate when initialized near the quadratic basin minimizer. 
In Figure \\ref{figure-sgd:cq-4}, we see that for the learning rates that lead to divergence from the quadratic basin minimizer (compare to the top three subplots of Figure \\ref{figure-sgd:cq-1}) are in some cases able to converge to the circular minimizer. Again, in light of our deterministic mechanism, this is expected: the learning rates shown in Figure \\ref{figure-sgd:cq-4} guarantee divergence from the quadratic minimizer and are sufficiently small that they can lead to convergence to the circular minimizer. However, we notice in Figure \\ref{figure-sgd:cq-4} that as the learning rate increases, even though the learning rate is below the divergence bound for the circular minimizer, the iterates for SGD-$1$ and GD are diverging from both the circular and quadratic minima and are converging to the corners of the feasible region.\n\nTo summarize, we see that convergence-divergence behavior of SGD-$k$ for the quadratic-circle sums nonconvex problem is captured by our deterministic mechanism. Although our estimated bounds are sometimes conservative, they generally divide the convergence and divergence regions of SGD-$k$. Moreover, as our deterministic mechanism predicts, we observed exponential divergence away from minimizers when the learning rate is above the divergence threshold. Again, this lends credence to our mechanism for divergence and provides evidence against the stochastic mechanism for SGD-$k$ iterates to ``escape'' sharp minimizers.\n\n\\subsection{A Generic Stochastic Quadratic Problem} \\label{subsection:stochastic-quadratic}\n\nWe will begin by defining the quadratic problem, derive some of its relevant properties, and discuss the two types of minimizers that can occur for the problem.\n\n\\begin{problem}[Quadratic Problem] \\label{problem-sgd:quadratic}\nLet $(\\Omega,\\mathcal{F},\\mathbb{P})$ be a probability space. Let $Q \\in \\mathbb{R}^{p \\times p}$ be a nonzero, symmetric, positive semidefinite random matrix and $r \\in \\mathbb{R}^p$ be a random vector in the image space of $Q$ (with probability one) such that (1) $\\E{Q} \\prec \\infty$, (2) $\\E{ Q \\E{Q} Q} \\prec \\infty$, (3) $\\E{Q \\E{Q} r} $ is finite, and (4) $\\E{r'Qr}$ is finite. Let $\\lbrace (Q_N,r_N): N \\in \\mathbb{N} \\rbrace$ be independent copies of $(Q,r)$. The quadratic problem is to use $\\lbrace (Q_N,r_N) : N \\in \\mathbb{N} \\rbrace$ to determine a $\\theta^*$ such that\n\\begin{equation} \\label{eqn-sgd-problem:quadratic-obj}\n\\theta^* \\in \\underset{\\theta \\in \\mathbb{R}^p}{\\mathrm{argmin}} \\frac{1}{2}\\theta'\\E{Q}\\theta + \\E{r}'\\theta.\n\\end{equation}\n\\end{problem}\n\nThere are several aspects of the formulation of Problem \\ref{problem-sgd:quadratic} worth highlighting. First, $Q$ is required to be symmetric and positive semidefinite, and $r$ is required to be in the image space of $Q$. Together, these requirements on $(Q,r)$ ensure that for each $\\omega$ in some probability one set in $\\mathcal{F}$, there exists a minimizer for $0.5 \\theta'Q\\theta + r'\\theta$. While this condition seems quite natural for the quadratic problem, it will cause some challenges for generalizing our results directly to the nonconvex case, which we will discuss further in \\S \\ref{subsection:deterministic-mechanism}. Second, we have required a series of assumptions on the moments of $(Q,r)$. As we will see in the analysis of SGD-$k$ on Problem \\ref{problem-sgd:quadratic}, these moment assumptions will be essential to the stability of the iterates. 
Finally, we have not required that the solution, $\\theta^*$, be unique. That is, we are not requiring a strongly convex problem. This generality is important as it will allow us to better approximate more exotic nonconvex problems. \n\nWe now establish some basic geometric properties about the quadratic problem. We first establish a standard, more convenient reformulation of the objective function in (\\ref{eqn-sgd-problem:quadratic-obj}). Then, we formulate some important geometric properties of the quadratic problem.\n\n\\begin{lemma} \\label{lemma-sgd:objective-reformulation}\nThe objective function in (\\ref{eqn-sgd-problem:quadratic-obj}) is equivalent to (up to an additive constant)\n\\begin{equation}\n\\frac{1}{2}(\\theta - \\theta^*)'\\E{Q}(\\theta - \\theta^*),\n\\end{equation}\nfor any $\\theta^*$ satisfying (\\ref{eqn-sgd-problem:quadratic-obj}). \n\\end{lemma}\n\\begin{proof}\nIf $\\theta^*$ is a minimizer of the objective in (\\ref{eqn-sgd-problem:quadratic-obj}), then $\\theta^*$ must be a stationary point of the objective function. This implies that $-\\E{Q}\\theta^* = \\E{r}$. Therefore, the objective in (\\ref{eqn-sgd-problem:quadratic-obj}) is, up to an additive constant,\n\\begin{equation}\n\\frac{1}{2} \\theta'\\E{Q}\\theta - \\theta' \\E{Q} \\theta^* + \\frac{1}{2}(\\theta^*)'\\E{Q} \\theta^* = \\frac{1}{2}(\\theta - \\theta^*)'\\E{Q}(\\theta - \\theta^*),\n\\end{equation} \n\\end{proof}\n\nBecause $Q$ is symmetric, positive semidefinite with probability one, $\\E{Q}$ will also be symmetric, positive semidefinite. That is, $m := \\texttt{rank}(\\E{Q}) \\leq p$. For future reference, we will denote the nonzero eigenvalues of $\\E{Q}$ by $\\lambda_1 \\geq \\cdots \\lambda_m > 0$. Beyond the eigenvalues, we will also need some higher order curvature information, namely, $s_Q$ and $t_Q$, which are given by\n\\begin{equation} \\label{eqn-sgd:tQ}\nt_Q = \\sup \\left\\lbrace \\frac{v'\\E{Q \\E{Q} Q}v - v' \\E{Q}^3 v}{v' \\E{Q} v} : v \\in \\mathbb{R}^p, ~ v'\\E{Q}v \\neq 0 \\right\\rbrace,\n\\end{equation}\nand\n\\begin{equation} \\label{eqn-sgd:sQ}\ns_Q = \\inf \\left\\lbrace \\frac{v'\\E{Q \\E{Q} Q}v - v' \\E{Q}^3 v}{v' \\E{Q} v} : v \\in \\mathbb{R}^p, ~ v'\\E{Q}v \\neq 0 \\right\\rbrace.\n\\end{equation}\nFrom the definitions in (\\ref{eqn-sgd:tQ}) and (\\ref{eqn-sgd:sQ}), it follows that $t_Q \\geq s_Q$. The next result establishes that for nontrivial problems, $s_Q > 0$. \n\n\\begin{lemma} \\label{lemma-sgd:higher-moment-bounds}\nLet $Q$ be as in (\\ref{problem-sgd:quadratic}). Then, the following properties hold.\n\\begin{enumerate}\n\\item If there is an $x \\in \\mathbb{R}^p$ such that $x' \\E{Q} x = 0$ then $x' \\E{ Q \\E{Q} Q} x = 0$. \n\\item $\\E{Q \\E{Q} Q} \\succeq \\E{Q}^3$. \n\\item If $\\Prb{Q = \\E{Q}} < 1$ then $s_Q > 0$. \n\\end{enumerate} \n\\end{lemma}\n\\begin{proof}\nFor (1), when $x=0$ the result follows trivially. Suppose then that there is an $x \\neq 0$ such that $x'\\E{Q}x = 0$. Then, since $Q \\succeq 0$, $x'Qx = 0$ almost surely. Therefore, $x$ is in the null space of all $Q$ on a probability one set. Therefore, $x'Q \\E{Q} Q x = 0$ with probability one. The conclusion follows.\n\nFor (2), note that for any $x \\in \\mathbb{R}^q$, the function $f(M) = x'M \\E{Q} Mx$ is convex over the space of all symmetric, positive semi-definite matrices. Therefore, by Jensen's inequality, $\\E{f(Q)} \\geq f(\\E{Q})$. 
Moreover, since $x$ is arbitrary, the conclusion follows.\n\nFor (3), note that Jensen's inequality holds with equality only if $f$ is affine or if $\\Prb{Q = \\E{Q}} = 1$. Therefore, if $\\Prb{Q = \\E{Q}} < 1$, then the second condition is ruled out. Moreover, if $\\Prb{Q = \\E{Q}} < 1$, then $\\Prb{Q = 0} < 1$. Thus, $\\exists x \\neq 0$ such that $f(Q) \\neq 0$, which rules out the first condition. \n\\end{proof}\n\nNow, if $\\Prb{Q = \\E{Q}} = 1$ then there exists a solution to (\\ref{eqn-sgd-problem:quadratic-obj}) such that $Q\\theta^* + r = 0$ with probability one. This phenomenon of the existence of a single solution for all pairs $(Q,r)$ with probability one will play a special role, and motivates the following definition.\n\n\\begin{definition}[Homogeneous and Inhomogeneous Solutions]\nA solution $\\theta^*$ satisfying (\\ref{eqn-sgd-problem:quadratic-obj}) is called homogeneous if\n\\begin{equation} \\label{eqn-sgd-definition:homogeneous}\n\\theta^* \\in \\mathrm{argmin}_{\\theta \\in \\mathbb{R}^p} \\frac{1}{2} \\theta'Q \\theta + r'\\theta \\text{ with probability one.}\n\\end{equation}\nOtherwise, the solution is called inhomogeneous.\n\\end{definition}\n\nIn the case of a homogeneous minimizer, the objective function in (\\ref{eqn-sgd-definition:homogeneous}) can be rewritten as (up to an additive constant)\n\\begin{equation}\n\\frac{1}{2} (\\theta - \\theta^*)' Q (\\theta - \\theta^*),\n\\end{equation}\nwhich follows from the same reasoning as in Lemma \\ref{lemma-sgd:objective-reformulation}. Importantly, SGD-$k$ will behave differently for problems with homogeneous minimizers and problems with inhomogeneous minimizers. As we will show, for homogeneous minimizers, SGD-$k$ behaves rather similarly to classical gradient descent in terms of convergence and divergence, whereas for inhomogeneous minimizers, SGD-$k$ will behave markedly differently. To show these results, we will first define SGD-$k$ for the quadratic problem.\n\n\\begin{definition}[SGD-$k$] \\label{definition-sgd:sgd-k}\nLet $\\theta_0 \\in \\mathbb{R}^p$ be arbitrary. For Problem \\ref{problem-sgd:quadratic}, SGD-$k$ generates a sequence of iterates $\\lbrace \\theta_N : N \\in \\mathbb{N} \\rbrace$ defined by\n\\begin{equation} \\label{eqn-definition:sgd-k}\n\\theta_{N+1} = \\theta_N - \\frac{C_{N+1}}{k}\\sum_{j=Nk+1}^{(N+1)k} \\left(Q_j\\theta_N + r_j\\right),\n\\end{equation} \nwhere $\\lbrace C_N: N \\in \\mathbb{N} \\rbrace$ is a sequence of scalars. \n\\end{definition}\n\nIn Definition \\ref{definition-sgd:sgd-k}, when $k = 1$, we have the usual notion of stochastic gradient descent and, as $k \\to \\infty$, we recover gradient descent. 
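For concreteness, the following sketch implements the recursion of Definition \\ref{definition-sgd:sgd-k} on a small instance of Problem \\ref{problem-sgd:quadratic} with a homogeneous minimizer; the Gaussian construction of $Q$, the choice $r = -Q\\theta^*$, and the dimension, batch size, and learning rate are illustrative assumptions rather than part of the analysis.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p, k, C = 5, 10, 0.1            # dimension, batch size, learning rate (illustrative)
theta_star = np.ones(p)         # minimizer used to generate the data

def sample_Qr():
    # one draw of (Q, r): Q is symmetric positive semidefinite and
    # r = -Q theta_star, so Q theta_star + r = 0 with probability one
    A = rng.standard_normal((p, p))
    Q = A @ A.T / p
    return Q, -Q @ theta_star

theta = rng.standard_normal(p)  # theta_0
for N in range(200):
    g = np.zeros(p)
    for _ in range(k):          # average the k sampled gradients Q_j theta + r_j
        Q, r = sample_Qr()
        g += Q @ theta + r
    theta = theta - (C / k) * g
print(np.linalg.norm(theta - theta_star))   # small: this learning rate is convergent
\\end{verbatim}
Choosing a much larger constant learning rate instead produces the exponential divergence analyzed below.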
Therefore, this notion of SGD-$k$ will allow us to generate results that explore the complete range of stochastic to deterministic methods, and bridge the theory between stochastic gradient descent and gradient descent.\n\n\\subsection{SGD-$k$ on the Stochastic Quadratic Problem} \\label{subsection:analysis}\n\n\\begin{table}\n\\tbl{Summary of Notation}\n{\n\\begin{tabular}{@{}ll@{}} \\toprule\nNotation & Value \\\\ \\midrule\n$M$ & $\\E{Q \\E{Q} Q} - \\E{Q}^3$ \\\\\n$e_N$ & $(\\theta_N - \\theta^*)'\\E{Q}(\\theta_N - \\theta^*)$ \\\\\n$e_{N,l}$ & $(\\theta_N - \\theta^*)'\\E{Q}^l(\\theta_N - \\theta^*)$ for $l \\in \\mathbb{N}_{\\geq 2}$ \\\\\n$e_{N,M}$ & $(\\theta_N - \\theta^*)'M (\\theta_N - \\theta^*)$ \\\\ \n$m$ & $\\texttt{rank}( \\E{Q} )$ \\\\\n$\\lambda_1 \\geq \\cdots \\geq \\lambda_m > 0$ & Nonzero eigenvalues of $\\E{Q}$ \\\\ \n$t_Q, s_Q$ & Higher-order Curvature Parameters of $Q$ \\\\\\bottomrule\n\\end{tabular}\n}\n\\label{table:summary-of-notation}\n\\end{table}\n\nTable \\ref{table:summary-of-notation} summarizes the notation that will be used throughout the remainder of this work. With this notation, we now analyze the iterates generated by SGD-$k$ for Problem \\ref{problem-sgd:quadratic}. Our first step is to establish a relationship between $e_{N+1}$ and $e_N$, which is a standard computation using conditional expectations.\n\n\\begin{lemma} \\label{lemma-sgd:quadratic-recursion}\nLet $\\lbrace \\theta_N : N \\in \\mathbb{N} \\rbrace$ be the iterates of SGD-$k$ from Definition \\ref{definition-sgd:sgd-k}. Let $\\theta^*$ be a solution to the quadratic problem. Then, with probability one, \n\\begin{equation}\n\\begin{aligned}\n\\cond{e_{N+1}}{\\theta_N} &= e_{N} - 2C_{N+1}e_{N,2} + C_{N+1}^2 e_{N,3} + \\frac{C_{N+1}^2}{k} e_{N,M} \\\\\n\t\t\t\t\t\t&+ 2\\frac{C_{N+1}^2}{k}(\\theta_N - \\theta^*)' \\E{ Q \\E{Q} (Q\\theta^* + r)} \\\\\n\t\t\t\t\t\t&+ \\frac{C_{N+1}^2}{k} \\E{ (Q\\theta^* + r)' \\E{Q}(Q \\theta^* + r)}.\n\\end{aligned}\n\\end{equation}\nMoreover, if $\\theta^*$ is a homogeneous minimizer, then, with probability one,\n\\begin{equation}\n\\begin{aligned}\n\\cond{e_{N+1}}{\\theta_N} &= e_{N} - 2C_{N+1} e_{N,2} + C_{N+1}^2 e_{N,3} + \\frac{C_{N+1}^2}{k} e_{N,M}.\n\\end{aligned}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe result is a straightforward calculation from the properties of SGD-$k$ and the quadratic problem. In the case of the homogeneous minimizer, recall that $Q\\theta^* +r = 0$ with probability one.\n\\end{proof} \n\nOur second step is to find bounds on $\\cond{e_{N+1}}{\\theta_N}$ in terms of $e_N$. While this is a standard thing to do, our approach differs from the standard approach in two ways. First, not only will we find an upper bound, but we will also find a lower bound. Second, our lower bounds will be much more delicate than the usual strong convexity and Lipschitz gradient based arguments \\cite[see][\\S 4]{bottou2016}. 
This second step can be accomplished for the homogeneous case with the following two lemmas.\n\n\\begin{lemma} \\label{lemma-sgd:technical-systematic-component}\nIn the setting of Lemma \\ref{lemma-sgd:quadratic-recursion} and for $\\alpha_j \\geq 0$ for $j=0,1,2,3$, \n\\begin{equation} \\label{eqn-sgd-lemma:technical-systematic-component}\n\\begin{aligned}\n&\\alpha_0e_N - \\alpha_1 C_{N+1} e_{N,2} + \\alpha_2 C_{N+1}^2 e_{N,3} + \\alpha_3 C_{N+1}^2 e_{N,M}\n\\end{aligned}\n\\end{equation}\nis bounded above by\n\\begin{equation}\ne_N\\left[ \\alpha_0 - \\alpha_1 C_{N+1} \\lambda_j + \\alpha_2 C_{N+1}^2 \\lambda_j^2 + \\alpha_3 C_{N+1}^2 t_Q \\right], \n\\end{equation}\nwhere\n\\begin{equation}\nj = \\begin{cases}\n1 & C_{N+1} > \\frac{\\alpha_1}{\\alpha_2} \\frac{1}{\\lambda_1 + \\lambda_m} \\text{ or } C_{N+1} \\leq 0 \\\\\nm & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nMoreover, (\\ref{eqn-sgd-lemma:technical-systematic-component}) is bounded below by\n\\begin{equation}\ne_N\\left[ \\alpha_0 - \\alpha_1 C_{N+1} \\lambda_j + \\alpha_2 C_{N+1}^2 \\lambda_j^2 + \\alpha_3 C_{N+1}^2 s_Q \\right],\n\\end{equation}\nwhere \n\\begin{equation}\nj = \\begin{cases}\n1 & C_{N+1} \\in \\left(0 ,\\frac{\\alpha_1}{\\alpha_2} \\frac{1}{\\lambda_1 + \\lambda_2}\\right] \\\\\nl ~(l \\in \\lbrace 2,\\ldots,m-1 \\rbrace) & C_{N+1} \\in \\left( \\frac{\\alpha_1}{\\alpha_2} \\frac{1}{\\lambda_{l-1} + \\lambda_l}, \\frac{\\alpha_1}{\\alpha_2} \\frac{1}{\\lambda_l + \\lambda_{l+1}} \\right] \\\\\nm & C_{N+1} > \\frac{\\alpha_1}{\\alpha_2} \\frac{1}{\\lambda_{m-1} + \\lambda_m} \\text{ or } C_{N+1} \\leq 0.\n\\end{cases}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWhen $e_N = 0 $, then $e_{N,j} =0$ for $j=2,3$ and $e_{N,M}=0$ by Lemma \\ref{lemma-sgd:higher-moment-bounds}. Therefore, if $e_N = 0$, the bounds hold trivially. So, we will assume that $e_N > 0$. \n\nLet $u_1,\\ldots,u_m$ be the orthonormal eigenvectors corresponding to eigenvalues $\\lambda_1,\\ldots,\\lambda_m$ of $\\E{Q}$. Then, \n\\begin{equation}\n\\frac{e_{N,j}}{e_N} = \\sum_{l=1}^m \\lambda_l^{j-1}\\frac{\\lambda_l (u_l'(\\theta_N - \\theta^*))^2}{e_N}.\n\\end{equation}\nDenote the ratio on the right hand side by $w_l$, and note that $\\lbrace w_l :l=1,\\ldots,m\\rbrace$ sum to one and are nonnegative. With this notation, we bound (\\ref{eqn-sgd-lemma:technical-systematic-component}) from above by\n\\begin{equation}\ne_N\\left[ \\alpha_0 - \\alpha_1 C_{N+1} \\sum_{l=1}^m \\lambda_l w_l + \\alpha_2 C_{N+1}^2 \\sum_{l=1}^m \\lambda_l^2 w_l + \\alpha_3 C_{N+1}^2 t_Q \\right], \n\\end{equation}\nand from below by the same equation but with $t_Q$ replaced by $s_Q$. By the properties of $\\lbrace w_l : l=1,\\ldots, m \\rbrace$, we see that the bounds are composed of convex combinations of quadratics of the eigenvalues. Thus, for the upper bound, if we assign all of the weight to the eigenvalue that maximizes the polynomial $-\\alpha_1 C_{N+1} \\lambda + \\alpha_2 C_{N+1}^2 \\lambda^2$, then we will have the upper bound presented in the result. To compute the lower bounds, we do the analogous calculation.\n\\end{proof}\n\nUsing these two lemmas, we can conclude the following for the homogeneous case.\n\n\\begin{theorem}[Homogeneous Minimizer] \\label{theorem-sgd:homogeneous}\nLet $\\lbrace \\theta_N : N \\in \\mathbb{N} \\rbrace$ be the iterates of SGD-$k$ from Definition \\ref{definition-sgd:sgd-k}. 
Let $\\theta^*$ be a homogeneous solution to the quadratic problem.\n\nIf $0 < C_{N+1}$ and \n\\begin{equation} \\label{eqn-sgd-theorem:homogeneous-ub}\nC_{N+1} < \\begin{cases}\n\\frac{2\\lambda_1}{\\lambda_1^2 + t_Q\/k} & k > t_Q\/(\\lambda_1 \\lambda_m) \\\\\n\\frac{2\\lambda_m}{\\lambda_m^2 + t_Q\/k} & k \\leq t_Q\/(\\lambda_1 \\lambda_m),\n\\end{cases}\n\\end{equation}\nthen $\\E{e_{N+1}} < \\E{e_N}$. Moreover, if $\\lbrace C_{N+1} \\rbrace$ satisfy (\\ref{eqn-sgd-theorem:homogeneous-ub}) and are uniformly bounded away from zero, then $\\E{e_{N}}$ decays exponentially to zero.\n\nMoreover, there exists a deterministic constant $\\rho_N > 1$ such that $\\cond{e_{N+1}}{\\theta_N} > \\rho_N e_N$ with probability one, if either $C_{N+1} < 0$ or \n\\begin{equation} \\label{eqn-sgd-theorem:homogeneous-lb}\nC_{N+1} > \\frac{2\\lambda_j}{\\lambda_j^2 + s_Q\/k},\n\\end{equation}\nwhere\n\\begin{equation}\nj = \\begin{cases}\n1 & k \\leq \\frac{s_Q}{\\lambda_1\\lambda_2} \\\\\nl ~(l \\in \\lbrace 2,\\ldots,m-1 \\rbrace) & \\frac{s_Q}{\\lambda_l\\lambda_{l-1}} < k \\leq \\frac{s_Q}{\\lambda_{l+1}\\lambda_l} \\\\\nm & k > \\frac{s_Q}{\\lambda_m \\lambda_{m-1}}.\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\nThe upper and lower bounds follow from Lemma \\ref{lemma-sgd:quadratic-recursion} and Lemma \\ref{lemma-sgd:technical-systematic-component}, and observations that $k > t_Q\/(\\lambda_1 \\lambda_m)$ implies\n\\begin{equation} \\label{eqn-sgd-theorem:homogeneous-proof-1}\n\\frac{2\\lambda_j}{\\lambda_j^2 + t_Q\/k} > \\frac{2}{\\lambda_1 + \\lambda_m},\n\\end{equation}\nfor $j = 1,m$. For convergence, when $C_{N+1} > 0$ and strictly less than expression on the left hand side of (\\ref{eqn-sgd-theorem:homogeneous-proof-1}), the upper bound in Lemma \\ref{lemma-sgd:technical-systematic-component} is strictly less than one. Thus, if $\\lbrace C_{N+1} \\rbrace$ are uniformly bounded away from zero, the exponential convergence rate of $\\E{e_N}$ follows trivially.\n\nFor the lower bound, we make note of several facts. First, if $k \\geq \\frac{s_Q}{\\lambda_l \\lambda_{l+1}}$ then $k \\geq \\frac{s_Q}{\\lambda_j \\lambda_{j+1}}$ for $j=1,2,\\ldots,l$. Moreover, when $k \\geq \\frac{s_Q}{\\lambda_l \\lambda_{l+1}}$, then \n\\begin{equation}\n\\frac{2\\lambda_l}{\\lambda_l^2 + s_Q\/k} \\geq \\frac{2}{\\lambda_l + \\lambda_{l+1}}.\n\\end{equation}\nBy Lemma \\ref{lemma-sgd:technical-systematic-component}, the left hand side of this inequality is the lower bound on $C_{N+1}$ that guarantees that $\\E{e_{N+1}} > e_N$ with probability one. Therefore, whenever $k \\geq \\frac{s_Q}{\\lambda_l\\lambda_{l+1}}$, $\\cond{e_N+1}{\\theta_N}$ is not guaranteed to diverge. On the other hand, if $k < \\frac{s_Q}{\\lambda_l \\lambda_{l-1}}$ then $k < \\frac{s_Q}{\\lambda_j \\lambda_{j-1}}$ for $j=l,l+1,\\ldots,m$. 
Moreover, if $k < \\frac{s_Q}{\\lambda_l \\lambda_{l-1}}$ then\n\\begin{equation}\n\\frac{2\\lambda_l}{\\lambda_l^2 + s_Q\/k} < \\frac{2}{\\lambda_l + \\lambda_{l-1}}.\n\\end{equation}\nTherefore, since the left hand side of this inequality is the lower bound on $C_{N+1}$ that guarantees that there exists a deterministic scalar $ \\rho_N > 1$ such that $\\cond{e_{N+1}}{\\theta_N} > \\rho_N e_N$ with probability one, the lower bound is guaranteed to diverge for $C_{N+1}$ larger than the right hand side.\n\nThus, by the monotonicty of the eigenvalues, for the $l \\in \\lbrace 2,\\ldots,m-1 \\rbrace$ such that $\\frac{s_Q}{\\lambda_l \\lambda_{l-1}} < k \\leq \\frac{s_Q}{\\lambda_l\\lambda_{l+1}}$, $\\cond{e_{N+1}}{\\theta_N} > \\rho_N e_N$ if\n\\begin{equation}\nC_{N+1} > \\frac{2\\lambda_l}{\\lambda_l^2 + s_Q\/k}.\n\\end{equation}\nNote, we can handle the edge cases similarly.\n\\end{proof}\n\\begin{remark}\nBecause $\\rho_N$ is a deterministic scalar quantity, we can conclude that $\\E{e_{N+1}} > \\rho_N \\E{e_N}$. \n\\end{remark}\n\nUsing Theorem \\ref{theorem-sgd:homogeneous}, as $k \\to \\infty$, we recover Theorem \\ref{theorem:gd}. That is, Theorem \\ref{theorem-sgd:homogeneous} contains gradient descent as a special case. Moreover, Theorem \\ref{theorem-sgd:homogeneous} also captures the exponential divergence property that we observed for gradient descent (see Figure \\ref{figure:divergence-gd}). Theorem \\ref{theorem-sgd:homogeneous} also contains additional subtleties. First, it captures SGD-$k$'s distinct phases for convergence and divergence, which depend on the batch size and expected geometry of the quadratic problem as represented by the eigenvalues, $t_Q$ and $s_Q$. Thus, as opposed to classical gradient descent, SGD-$k$ requires a more complex convergence and divergence statement to capture these distinct phases. \n\nWe now state the analogous statement for the inhomogeneous problem. However, we will need the following additional lemmas to bound the behavior of the cross term in Lemma \\ref{lemma-sgd:quadratic-recursion}. Again, our analysis below is more delicate than the standard approach for analyzing SGD. \n\n\\begin{lemma} \\label{lemma-sgd:technical-pseudo-inverse}\nLet $Q$ and $r$ be as in Problem \\ref{problem-sgd:quadratic}. Then, letting $(\\cdot)^\\dagger$ denote the Moore-Penrose pseudo-inverse, \n\\begin{equation}\n\\E{ Q \\E{Q}(Q \\theta^* + r )} = \\E{Q}\\E{Q}^\\dagger\\E{ Q \\E{Q}(Q \\theta^* + r )} .\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nNote that for any $x \\in \\texttt{row}(\\E{Q})$, $x = \\E{Q}\\E{Q}^\\dagger x$. Thus, we must show that $\\E{Q \\E{Q} (Q \\theta^* + r)}$ is in $\\texttt{row}(\\E{Q})$. Recall, that if there exists a $v \\in \\mathbb{R}^p$ such that $\\E{Q}v = 0$, then $v' \\E{Q} v = 0$. In turn, $v'Qv = 0$ with probability one, which implies that $Q v = 0$ with probability one. Hence, $\\texttt{null}(\\E{Q}) \\subset \\texttt{null}( Q)$ with probability one. Let the set of probability one be denoted by $\\Omega'$. Then, \n\\begin{equation}\n\\texttt{row}( \\E{Q}) = \\texttt{null}(\\E{Q})^\\perp \\supset \\bigcup_{\\omega \\in \\Omega'} \\texttt{null}(Q)^\\perp = \\bigcup_{\\omega \\in \\Omega'} \\texttt{col}( Q) \\supset \\bigcup_{\\omega \\in \\Omega'} \\texttt{col}( Q \\E{Q} ),\n\\end{equation}\nwhere $\\texttt{row}(Q) = \\texttt{col}(Q)$ by symmetry. Therefore, $Q \\E{Q} (Q\\theta^* + r) \\in \\texttt{row}(\\E{Q})$ with probability one. Hence, its expectation is in $\\texttt{row}(\\E{Q})$. 
\n\\end{proof}\n\n\\begin{lemma} \\label{lemma-sgd:technical-cross-term-bound}\nUnder the setting of Lemma \\ref{lemma-sgd:quadratic-recursion}, for any $\\phi > 0$ and $j \\in \\mathbb{N}$, \n\\begin{equation}\n\\begin{aligned}\n&2 \\left\\vert\\frac{C_{N+1}^2}{k} (\\theta_N - \\theta^*)' \\E{Q \\E{Q}(Q\\theta^* + r)}\\right\\vert \\\\\n&\\quad \\leq 2 \\frac{C_{N+1}^2}{k} \\norm{ \\E{Q}^{j\/2} (\\theta_N - \\theta^*)}_2 \\norm{(\\E{Q}^{j\/2})^\\dagger \\E{Q \\E{Q}(Q\\theta^* + r)}}_2 \\\\\n&\\quad \\leq \\frac{\\phi C_{N+1}^2}{k} (\\theta_N - \\theta^*)' \\E{Q}^j (\\theta_N - \\theta^*) \\\\\n&\\quad \\quad + \\frac{C_{N+1}^2}{\\phi k}\\E{(Q\\theta^* + r)' \\E{Q} Q} ( \\E{Q}^j)^\\dagger \\E{Q \\E{Q}(Q \\theta^* + r)},\n\\end{aligned}\n\\end{equation}\nfor any $N = 0,1,2,\\ldots$.\n\\end{lemma}\n\\begin{proof}\nWe will make use of three facts: Lemma \\ref{lemma-sgd:technical-pseudo-inverse}, H\\\"{o}lder's inequality, and the fact that for any $\\phi > 0$, $ \\varphi \\geq 0$ and $x \\in \\mathbb{R}$, $2 |\\varphi x| \\leq \\phi x^2 + \\phi^{-1} \\varphi^2.$ Using these facts, we have\n\\begin{equation}\n\\begin{aligned}\n&2 \\left\\vert\\frac{C_{N+1}^2}{k} (\\theta_N - \\theta^*)' \\E{Q \\E{Q}(Q\\theta^* + r)}\\right\\vert \\\\\n&\\quad = 2 \\left\\vert\\frac{C_{N+1}^2}{k} (\\theta_N - \\theta^*)'\\E{Q}^{j\/2}(\\E{Q}^{j\/2})^\\dagger \\E{Q \\E{Q}(Q\\theta^* + r)}\\right\\vert\\\\\n&\\quad \\leq 2 \\frac{C_{N+1}^2}{k} \\norm{ \\E{Q}^{j\/2} (\\theta_N - \\theta^*)}_2 \\norm{(\\E{Q}^{j\/2})^\\dagger \\E{Q \\E{Q}(Q\\theta^* + r)}}_2 \\\\\n&\\quad \\leq \\frac{\\phi C_{N+1}^2}{k} (\\theta_N - \\theta^*)' \\E{Q}^j (\\theta_N - \\theta^*) \\\\\n&\\quad + \\frac{C_{N+1}^2}{\\phi k}\\E{(Q\\theta^* + r)' \\E{Q} Q} ( \\E{Q}^j)^\\dagger \\E{Q \\E{Q}(Q \\theta^* + r)}.\n\\end{aligned}\n\\end{equation}\n\\end{proof}\n\n\\begin{lemma} \\label{lemma-sgd:quadratic-recursion-ineq}\nUnder the setting of Lemma \\ref{lemma-sgd:quadratic-recursion},\n\\begin{equation}\n\\begin{aligned}\n\\cond{e_{N+1}}{\\theta_N} &\\leq e_{N} - 2C_{N+1} e_{N,2} + C_{N+1}^2 e_{N,3} + \\frac{C_{N+1}^2}{k} e_{N,M} + \\frac{C_{N+1}^2}{k} e_{N,3} \\\\\n\t\t\t\t\t\t&+ \\frac{C_{N+1}^2}{k} \\E{ (Q\\theta^* + r)' \\E{Q}(Q \\theta^* + r)} \\\\\n\t\t\t\t\t\t&+ \\frac{C_{N+1}^2}{k} \\E{(Q\\theta^*+r)'\\E{Q} Q} ( \\E{Q}^3)^\\dagger \\E{Q \\E{Q}(Q \\theta^* + r)}\n\\end{aligned}\n\\end{equation}\nNow, for any $\\varphi > 0$, $\\exists \\psi \\in \\mathbb{R}$ such that \n\\begin{equation}\n\\begin{aligned}\n\\cond{e_{N+1}}{\\theta_N} &\\geq \\left(1 - \\frac{\\varphi}{k} \\right) e_{N} - 2C_{N+1}e_{N,2} + C_{N+1}^2 e_{N,3} + \\frac{C_{N+1}^2}{k} e_{N,M} \\\\\n\t\t\t\t\t\t&+ \\frac{C_{N+1}^2}{k} \\psi,\n\\end{aligned}\n\\end{equation}\nwhere $\\psi \\geq 0$ if\n\\begin{equation}\n\\varphi\\frac{\\E{ (Q\\theta^* + r)' \\E{Q}(Q \\theta^* + r)}}{\\E{(Q\\theta^*+r)'\\E{Q} Q} \\E{Q}^\\dagger \\E{Q \\E{Q}(Q \\theta^* + r)}} \\geq C_{N+1}^2.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWhen $C_{N+1} = 0,$ the statements hold by Lemma \\ref{lemma-sgd:quadratic-recursion}. Suppose $C_{N+1} \\neq 0$. Using Lemma \\ref{lemma-sgd:quadratic-recursion} and applying Lemma \\ref{lemma-sgd:technical-cross-term-bound} with $\\phi =1$ and $j=3$, the upper bound follows. 
Now, using Lemma \\ref{lemma-sgd:quadratic-recursion} and applying Lemma \\ref{lemma-sgd:technical-cross-term-bound} with $\\phi = C_{N+1}^{-2} \\varphi$ for $\\varphi > 0$ and $j=1$, we have\n\\begin{equation}\n\\begin{aligned}\n\\cond{e_{N+1}}{\\theta_N} &\\geq \\left(1 - \\frac{\\varphi}{k} \\right) e_{N} - 2C_{N+1} e_{N,2} + C_{N+1}^2 e_{N,3} + \\frac{C_{N+1}^2}{k} e_{N,M} \\\\\n\t\t\t\t\t\t&+ \\frac{C_{N+1}^2}{k} \\psi,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{aligned}\n\\psi &= - \\frac{C_{N+1}^2}{\\varphi} \\E{(Q\\theta^*+r)'\\E{Q} Q}\\E{Q}^\\dagger \\E{Q \\E{Q}(Q \\theta^* + r)} \\\\\n&\\quad + \\E{ (Q\\theta^* + r)' \\E{Q}(Q \\theta^* + r)}.\n\\end{aligned}\n\\end{equation}\nTherefore, if $\\varphi$ is given as selected in the statement of the result, then $\\psi \\geq 0$.\n\\end{proof}\n\n\\begin{theorem}[Inhomogeneous Minimizer] \\label{theorem-sgd:inhomogeneous}\nLet $\\lbrace \\theta_N : N \\in \\mathbb{N} \\rbrace$ be the iterates of SGD-$k$ from Definition \\ref{definition-sgd:sgd-k}. Suppose $\\theta^*$ is an inhomogeneous solution to the quadratic problem.\n\nIf $0 < C_{N+1}$ and\n\\begin{equation} \\label{eqn-theorem-sgd:inhomogeneous-ub}\nC_{N+1} < \\begin{cases}\n\\frac{2\\lambda_1^2}{(1 + 1\/k) \\lambda_1^2 + t_Q\/k } & k+1 > t_Q\/(\\lambda_1 \\lambda_m) \\\\\n\\frac{2\\lambda_m^2}{(1 + 1\/k) \\lambda_m^2 + t_Q\/k } & k+1 \\leq t_Q\/(\\lambda_1\\lambda_m), \n\\end{cases}\n\\end{equation}\nthen $\\E{e_{N+1}} < \\rho_N \\E{e_{N}} + C_{N+1}^2 \\frac{\\psi}{k}$, where $\\rho_N < 1$ and is uniformly bounded away from zero for all $N$, and $\\psi > 0$. Moreover, if $\\lbrace C_{N} : N \\in \\mathbb{N} \\rbrace$ are nonnegative, converge to zero and sum to infinity then $\\E{e_{N}} \\to 0$ as $N \\to \\infty$.\n\nFurthermore, suppose $C_{N+1} < 0$ or \n\\begin{equation} \\label{eqn-theorem-sgd:inhomogeneous-lb}\nC_{N+1} > \\frac{2(\\lambda_j + \\gamma)}{\\lambda_j^2 + s_Q\/k},\n\\end{equation} \nwhere $4\\gamma^2 \\in \\left(0, \\frac{s_Q}{k} \\right]$\nand \n\\begin{equation}\nj = \\begin{cases}\n1 & k \\leq \\frac{s_Q}{ \\lambda_1 \\lambda_2 + \\gamma(\\lambda_1 + \\lambda_2)} \\\\\nl ~(l \\in \\lbrace 2,\\ldots,m-1 \\rbrace) & \\frac{s_Q}{\\lambda_l\\lambda_{l-1} + \\gamma(\\lambda_l + \\lambda_{l-1})} < k \\leq \\frac{s_Q}{\\lambda_l \\lambda_{l+1} + \\gamma (\\lambda_l + \\lambda_{l+1})} \\\\\nm & k > \\frac{s_Q}{\\lambda_m \\lambda_{m-1} + \\gamma(\\lambda_m + \\lambda_{m-1})}.\n\\end{cases}\n\\end{equation}\nThen, $\\exists \\delta > 0$ such that if $\\norm{\\E{Q}(\\theta_{N}-\\theta^*)} < \\delta$ then $\\cond{e_{N+1}}{\\theta_N} \\geq \\rho_N e_{N} + C_{N+1}^2 \\psi$ with probability one, where $\\rho_N > 1$ and $\\psi > 0$ independently of $N$.\n\\end{theorem}\n\\begin{proof}\nFor the upper bound, we apply Lemmas \\ref{lemma-sgd:quadratic-recursion-ineq} and \\ref{lemma-sgd:technical-systematic-component} to conclude that\n\\begin{equation}\n\\cond{e_{N+1}}{\\theta_N} \\leq e_{N}\\left(1 - 2 C_{N+1}\\lambda_j + \\left(1 + \\frac{1}{k}\\right) C_{N+1}^2\\lambda_j^2 + C_{N+1}^2 \\frac{t_Q}{k} \\right) + C_{N+1}^2 \\frac{\\psi}{k},\n\\end{equation}\nwhere $\\psi > 0$ is independent of $N$ and\n\\begin{equation}\nj = \\begin{cases}\n1 & C_{N+1} \\leq 0 \\text{ or } \\frac{2}{(1 + 1\/k)(\\lambda_1 + \\lambda_m)} \\\\\nm & \\text{otherwise}.\n\\end{cases}\n\\end{equation}\nMoreover, the component multiplying $e_N$ is less than one if $C_{N+1} > 0$ and\n\\begin{equation} \\label{eqn-sgd-theorem:proof-inhomo-1}\nC_{N+1} < 
\\frac{2\\lambda_j}{\\left( 1 + \\frac{1}{k}\\right)\\lambda_j^2 + \\frac{t_Q}{k}}.\n\\end{equation}\nWhen $k + 1 > t_Q\/(\\lambda_1\\lambda_m)$, the right-hand side of (\\ref{eqn-sgd-theorem:proof-inhomo-1}) is larger than $\\frac{2}{(1 + 1\/k)(\\lambda_1 + \\lambda_m)}$. The upper bound follows. Convergence follows by applying Lemma 1.5.2 of \\cite{bertsekas1999}.\n\nFor the lower bound, note that since $\\theta^*$ is inhomogeneous, $\\Prb{Q = \\E{Q}} < 1$. Therefore, $s_Q > 0$. Now, we apply Lemmas \\ref{lemma-sgd:quadratic-recursion-ineq} and \\ref{lemma-sgd:technical-systematic-component} to conclude that for any $\\gamma > 0$ there exists $\\psi_N$ such that\n\\begin{equation}\n\\cond{e_{N+1}}{\\theta_N} \\geq e_{N}\\left(\\left(1 - \\frac{4\\gamma^2}{\\lambda_j^2 + \\frac{1}{k}s_Q} \\right) - 2 C_{N+1}\\lambda_j + C_{N+1}^2\\lambda_j^2 + C_{N+1}^2 \\frac{s_Q}{k} \\right) + C_{N+1}^2 \\frac{\\psi_N}{k},\n\\end{equation}\nwhere the $\\psi_N$ are nonnegative and uniformly bounded away from zero for $C_{N+1}$ sufficiently small, and\n\\begin{equation}\nj = \\begin{cases}\n1 & C_{N+1} \\in \\left(0, \\frac{2}{\\lambda_1 + \\lambda_2} \\right] \\\\\nl ~(l \\in \\lbrace 2,\\ldots,m-1 \\rbrace) & C_{N+1} \\in \\left( \\frac{2}{\\lambda_{l-1} + \\lambda_l}, \\frac{2}{\\lambda_l + \\lambda_{l+1}} \\right] \\\\\nm & C_{N+1} \\leq 0 \\text{ or } \\frac{2}{\\lambda_m + \\lambda_{m-1}} < C_{N+1}.\n\\end{cases}\n\\end{equation}\nThe term multiplying $e_N$ can be rewritten as\n\\begin{equation} \\label{eqn-sgd-theorem:proof-inhomo-2}\n1 - \\frac{4\\gamma^2}{\\lambda_j^2 + \\frac{1}{k}s_Q} - \\frac{\\lambda_j^2}{\\lambda_j^2 + \\frac{1}{k}s_Q} + \\left(\\lambda_j^2 + \\frac{1}{k}s_Q\\right)\\left( C_{N+1} - \\frac{\\lambda_j}{\\lambda_j^2 + \\frac{1}{k}s_Q} \\right)^2,\n\\end{equation}\nwhich is nonnegative when $4\\gamma^2$ is in the interval given in the statement of the result. Moreover, if\n\\begin{equation} \\label{eqn-sgd-theorem:proof-inhomo-3}\nC_{N+1} > \\frac{2(\\lambda_j + \\gamma)}{\\lambda_j^2 + \\frac{s_Q}{k}},\n\\end{equation}\nthen\n\\begin{equation}\nC_{N+1} > \\frac{\\lambda_j}{\\lambda_j^2 + \\frac{s_Q}{k}} + \\frac{\\sqrt{\\lambda_j^2 + 4\\gamma^2}}{\\lambda_j^2 + \\frac{s_Q}{k}},\n\\end{equation}\nwhich implies\n\\begin{equation}\n\\left(C_{N+1} - \\frac{\\lambda_j}{\\lambda_j^2 + \\frac{s_Q}{k}} \\right)^2 > \\frac{\\lambda_j^2 + 4\\gamma^2}{\\left(\\lambda_j^2 + \\frac{s_Q}{k}\\right)^2}.\n\\end{equation}\nFrom this inequality, we conclude that if (\\ref{eqn-sgd-theorem:proof-inhomo-3}) holds then (\\ref{eqn-sgd-theorem:proof-inhomo-2}) is strictly larger than $1$. Lastly, for any $\\tilde{j} \\neq j$, \n\\begin{equation}\n\\frac{s_Q}{\\lambda_j \\lambda_{\\tilde{j}} + \\gamma(\\lambda_j + \\lambda_{\\tilde{j}})} > k\n\\end{equation}\nis equivalent to\n\\begin{equation}\n\\frac{2}{\\lambda_j + \\lambda_{\\tilde{j}}} > \\frac{2(\\lambda_j + \\gamma)}{\\lambda_j^2 + \\frac{1}{k}s_Q}.\n\\end{equation}\nWith these facts, the conclusion for the lower bound follows just as in the proof of Theorem \\ref{theorem-sgd:homogeneous}.\n\\end{proof}\n\nComparing Theorems \\ref{theorem-sgd:homogeneous} and \\ref{theorem-sgd:inhomogeneous}, we see that inhomogeneity exacts an additional price. To be specific, call the term multiplying $e_N$ in the bounds the systematic component and the additive term the variance component. First, the inhomogeneous case contains a variance component that was not present for the homogeneous case. 
Of course, this variance component is induced by the inhomogeneity: at each update, SGD-$k$ is minimizing an approximation to the quadratic problem that has a distinct solution from $\\theta^*$ with nonzero probability, which requires decaying the step sizes to remove the variability of solutions induced by these approximations. Second, the inhomogeneity shrinks the threshold for step sizes that ensure a reduction in the systematic component, and inflates the threshold for step sizes that ensure an increase in the systematic component.\n\nContinuing to compare the results for the homogeneous and inhomogeneous cases, we also observe that the systematic component diverges exponentially when the expected gradient is sufficiently small (less than $\\delta$). Indeed, this is the precise region in which such a result is important: it says that for step sizes that are too large, any estimate near the solution set is highly unstable. In other words, if we initialize SGD-$k$ near the solution set, our iterates will diverge exponentially fast from this region for $C_N$ above the divergence threshold.\n\nTo summarize, our analysis of the stochastic quadratic problem defined in Problem \\ref{problem-sgd:quadratic} is a generalization of the standard quadratic problem considered in classical numerical optimization. Moreover, our analysis of SGD-$k$ (see Definition \\ref{definition-sgd:sgd-k}) on the stochastic quadratic problem generalizes the results and properties observed for gradient descent. In particular, our analysis (see Theorems \\ref{theorem-sgd:homogeneous} and \\ref{theorem-sgd:inhomogeneous}) derives a threshold for step sizes above which the iterates will diverge, and this threshold includes the threshold for classical gradient descent. Moreover, our analysis predicts an exponential divergence of the iterates from the minimizer. \n\n\\subsection{The Deterministic Mechanism} \\label{subsection:deterministic-mechanism}\n\nWe now use the insight developed from our analysis of the generic quadratic problem to formulate a deterministic mechanism for how SGD-$k$ ``escapes'' local minimizers. We will begin by stating the generic nonconvex problem, formalizing our deterministic mechanism, and discussing its implications for nonconvex optimization. We then briefly discuss the pitfalls of naive attempts to directly apply the theory developed in \\S \\ref{subsection:analysis} to arbitrary nonconvex problems.\n\n\\begin{problem}[Nonconvex Problem] \\label{problem:nonconvex}\nLet $(\\Omega,\\mathcal{F},\\mathbb{P})$ be a probability space, and \nlet $C^2(\\mathbb{R}^p,\\mathbb{R})$ be the set of twice continuously differentiable functions from $\\mathbb{R}^p$ to $\\mathbb{R}$.\nMoreover, let $f$ be a random quantity taking values in $C^2(\\mathbb{R}^p,\\mathbb{R})$. Furthermore, suppose that\n\\begin{enumerate}\n\\item $\\E{f(\\theta)}$ exists for all $\\theta \\in \\mathbb{R}^p$, and is denoted by $F(\\theta)$.\n\\item Suppose $F$ is bounded from below. \n\\item Suppose that $F \\in C^2(\\mathbb{R}^p,\\mathbb{R})$, and $\\nabla F(\\theta) = \\E{ \\nabla f(\\theta) }$ and $\\nabla^2 F(\\theta) = \\E{ \\nabla^2 f(\\theta)}$. \n\\end{enumerate}\nLet $\\lbrace f_N: N \\in \\mathbb{N} \\rbrace$ be independent copies of $f$. The nonconvex problem is to use $\\lbrace f_N : N \\in \\mathbb{N} \\rbrace$ to determine a $\\theta^*$ that is a minimizer of $F(\\theta)$. 
\n\\end{problem}\n\nNote that, while Problem \\ref{problem:nonconvex} focuses on twice differentiable functions, we often only have continuous functions in machine learning applications. However, for most machine learning applications, the continuous functions are only nondifferentiable at finitely many points, which can be approximated using twice continuously differentiable functions.\n\nBefore stating the deterministic mechanism, we will also need to extend our definition of SGD-$k$ to the generic nonconvex problem. Of course, this only requires replacing (\\ref{eqn-definition:sgd-k}) with \n\\begin{equation}\n\\theta_{N+1} = \\theta_N - \\frac{C_{N+1}}{k}\\sum_{j=Nk+1}^{(N+1)k} \\nabla f_j(\\theta_N).\n\\end{equation}\nWe now state our deterministic mechanism. In this statement, $B(\\theta,\\epsilon)$ denotes a ball around a point $\\theta \\in \\mathbb{R}^p$ of radius $\\epsilon > 0$.\n\n\\begin{definition}[Deterministic Mechanism] \\label{definition:deterministic mechanism}\nLet $\\lbrace \\theta_N : N \\in \\mathbb{N} \\rbrace$ be the iterates generated by SGD-$k$ with deterministic step sizes $\\lbrace C_N : N \\in \\mathbb{N} \\rbrace$ for the nonconvex problem defined in Problem \\ref{problem:nonconvex}. Suppose $\\theta^*$ is a local minimizer of Problem \\ref{problem:nonconvex}.\n\nFor an $\\epsilon > 0$ sufficiently small, let $\\lambda_1 \\geq \\cdots \\geq \\lambda_m > 0$ denote the nonzero eigenvalues of\n\\begin{equation} \n\\frac{1}{|B(\\theta^*,\\epsilon)|} \\int_{B(\\theta^*,\\epsilon)}\\nabla^2 F(\\theta) d\\theta,\n\\end{equation}\nand define $s_f$ to be the maximum between $0$ and \n\\begin{equation}\n\\frac{1}{|B(\\theta^*,\\epsilon)|}\\int_{B(\\theta^*,\\epsilon)}\\inf_{v'\\nabla^2 F(\\theta) v \\neq 0\n} \\left\\lbrace \\frac{v' \\left(\\E{ \\nabla^2 f(\\theta) \\nabla^2 F(\\theta) \\nabla^2 f(\\theta)} - (\\nabla^2 F(\\theta) )^3\\right) v}{v' \\nabla^2 F(\\theta) v} \\right\\rbrace d\\theta.\n\\end{equation}\nThere is a $\\delta > 0$ such that if $\\theta_N \\in B(\\theta^*,\\delta) \\setminus \\lbrace \\theta^* \\rbrace$ and either $C_{N+1} < 0$ or\n\\begin{equation}\nC_{N+1} > \\frac{2\\lambda_j}{\\lambda_j^2 + \\frac{1}{k}s_f},\n\\end{equation} \nwhere\n\\begin{equation}\nj = \\begin{cases}\n1 & k \\leq \\frac{s_f}{ \\lambda_1 \\lambda_2 } \\\\\nl ~(l \\in \\lbrace 2,\\ldots,m-1 \\rbrace) & \\frac{s_f}{\\lambda_l\\lambda_{l-1}} < k \\leq \\frac{s_f}{\\lambda_l \\lambda_{l+1}} \\\\\nm & k > \\frac{s_f}{\\lambda_m \\lambda_{m-1}},\n\\end{cases}\n\\end{equation}\nthen for some $\\rho_N > 1$, $\\cond{F(\\theta_{N+1}) - F(\\theta^*)}{\\theta_N} > \\rho_N ( F(\\theta_N) - F(\\theta^*) )$ with probability one.\n\\end{definition}\n\nFor nonconvex problems, our deterministic mechanism makes several predictions. First, suppose a minimizer's local geometry has several large eigenvalues. Then, for $k$ sufficiently small, SGD-$k$'s divergence from a local minimizer will be determined by these large eigenvalues. This prediction is indeed observed in practice: \\cite{keskar2016} noted that methods that were ``more stochastic'' (i.e., small $k$) diverged from regions that had only a fraction of relatively large eigenvalues. \n\nSecond, suppose for a given nonconvex function, we have finitely many local minimizers and, for each $k$, we are able to compute the threshold for each local minimizer (for some sufficiently small $\\epsilon > 0$, which may differ for each minimizer). 
Then, for identical choices of step sizes, $\\lbrace C_{N+1} : N \\in \\mathbb{N} \\rbrace$, the number of minimizers to which SGD-$k$ can converge is a nondecreasing function of $k$. Moreover, SGD-$k$ and gradient descent will only converge to minimizers that can support the step sizes $\\lbrace C_{N+1} : N \\in \\mathbb{N} \\rbrace$, which implies that SGD-$k$ will prefer flatter minimizers in comparison to gradient descent. Again, this prediction is indeed observed in practice: the main conclusion of \\cite{keskar2016} is that SGD-$k$ with smaller $k$ converges to flatter minimizers in comparison to SGD-$k$ with large $k$. \n\nThird, our deterministic mechanism states that whenever the iterates are within some $\\delta$-ball of $\\theta^*$ with step sizes that are too large, we will observe exponential divergence. Demonstrating that exponential divergence does indeed occur in practice is the subject of \\S \\ref{section:homogeneous} and \\S \\ref{section:inhomogeneous}.\n\nHowever, before discussing these experiments, it is worth noting the challenges of generalizing the quadratic analysis to the nonconvex case. In terms of convergence, recall that a key ingredient in the local quadratic analysis for classical numerical optimization is stability: once the iterates enter a basin of the minimizer, they will remain within this particular basin of the minimizer. However, for stochastic problems, this will not hold with probability one, especially when the gradient has a noise component with an unbounded support (e.g., linear regression with normally distributed errors). \n\nFortunately, attempts to generalize the divergence results will not require this stability property because divergence implies that the iterates will eventually escape from any such basin. The challenge for divergence, however, follows from the fact that the lower bounds are highly delicate, and any quadratic approximation will require somewhat unrealistic conditions on the remainder term in Taylor's theorem for the nonconvex problem. Thus, generalizing the quadratic case to the nonconvex case is something that we are actively pursuing. \n\\subsection{The Stochastic Mechanism}\n\nConsider using SGD on a one-dimensional stochastic nonconvex problem whose expected objective function is given by the graph in Figure \\ref{figure:one-dim-obj-A}. The stochasticity of the nonconvex problem is generated by adding independent symmetric, bounded random variables to the derivative evaluation at any point. That is, at any iteration, SGD's search direction is generated by the derivative of the expected objective function plus some symmetric, bounded random noise. \n\nIntuitively, the noise in the SGD search directions, by the size of the basins alone, will force the iterates to spend more time in the basin of the flatter minimizer on the left. That is, the noise in SGD's search directions will increase an iterate's probability of being in the flatter basin in comparison to the sharper basin. This is precisely the stochastic mechanism.\n\n\\begin{figure}[bh]\n\\centering\n\\input{figure\/sgd_stochastic_mechanism_A}\n\\caption{The expected objective function for a simple one-dimensional nonconvex problem. The minimizer on the left is the flatter minimizer. The minimizer on the right is the sharper minimizer.}\n\\label{figure:one-dim-obj-A}\n\\end{figure}\n\nFor another example, consider a similar one-dimensional stochastic nonconvex problem whose expected objective function is given by the graph in Figure \\ref{figure:one-dim-obj-B}. 
In this example, the minimizer on the left is the flatter minimizer while the minimizer on the right is the sharper minimizer. Now the stochastic mechanism would predict that the probability of an iterate being in the basin of the sharper minimizer is actually greater than that of the flatter minimizer. \n\n\\begin{figure}\n\\centering\n\\input{figure\/sgd_stochastic_mechanism_B}\n\\caption{The expected objective function for a different one-dimensional nonconvex problem. The minimizer on the left is the flatter minimizer. The minimizer on the right is the sharper minimizer.}\n\\label{figure:one-dim-obj-B}\n\\end{figure} \n\nThus, the stochastic mechanism does not explain why SGD prefers flatter minimizers; rather, it explains why SGD has a higher probability of being in larger basins. Moreover, while flatness and basin size might seem to be intuitively connected, as Figure \\ref{figure:one-dim-obj-B} shows, this is not necessarily the case. Therefore, according to this example, we might conclude that flatness is an inappropriate term, and that it is actually the volume of the minimizer's basin that is important (based on the stochastic mechanism) in determining to which minimizers the SGD iterates converge.\n\nHowever, ignoring the flatness or sharpness of a minimizer in favor of the size of its basin of attraction does not align with another experimental observation: in high-dimensional problems, SGD even avoids those sharper minimizers that had only a small fraction of their eigenvalues termed relatively ``large'' \\cite{keskar2016}. Importantly, in such high dimensions, the probability of the noise exploring those limited directions that correspond to the ``larger'' eigenvalues is low (e.g., consider isotropic noise in large dimensions). This suggests that, while the stochastic mechanism is intuitive and can explain why SGD may prefer large basins of attraction, it is insufficient to explain why SGD prefers flatter minimizers over sharper minimizers. Accordingly, what is an appropriate mechanism to describe how SGD iterates ``select'' minimizers?\n\n\\subsection{Main Contribution}\nIn this work, we derive and propose a deterministic mechanism based on the expected local geometry of a minimizer and the batch size of SGD to describe why SGD iterates will converge to or diverge from the minimizer. As a consequence of our deterministic mechanism, we can explain why SGD prefers flatter minimizers to sharper minimizers in comparison to gradient descent, and we can explain the manner in which SGD will ``escape'' from certain minimizers. Importantly, we verify the predictions of our deterministic mechanism with numerical experiments on two nontrivial nonconvex problems. \n\n\\subsection{Outline}\n\nIn \\S \\ref{section:classical}, we begin by reviewing the classical mechanism by which gradient descent diverges from a quadratic basin. In \\S \\ref{section:stochastic-quadratic}, we rigorously generalize the classical, deterministic mechanism to SGD-$k$ (i.e., SGD with a batch size of $k$) on a generic stochastic quadratic problem, and, using this rigorous analysis, we define our deterministic mechanism. In \\S \\ref{section:homogeneous} and \\S \\ref{section:inhomogeneous}, we numerically verify the predictions of our deterministic mechanism on two nontrivial, nonconvex problems. In \\S \\ref{section:conclusion}, we conclude this work. 
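\n\nTo make the divergence threshold behind the deterministic mechanism concrete, the following is a minimal numerical sketch in Python. It simply evaluates the threshold $2\\lambda_j\/(\\lambda_j^2 + s_Q\/k)$ of Theorem~\\ref{theorem-sgd:homogeneous} (the same form appears in Definition~\\ref{definition:deterministic mechanism} with $s_f$ in place of $s_Q$) for a made-up eigenvalue spectrum and a made-up value of $s_Q$; it is an illustration only, not part of the formal development.\n\\begin{verbatim}\nimport numpy as np\n\ndef divergence_threshold(eigs, s, k):\n    # Guaranteed-divergence step-size threshold 2*lam_j/(lam_j**2 + s/k),\n    # with j chosen according to the batch-size regimes of the theorem.\n    lam = np.sort(np.asarray(eigs, dtype=float))[::-1]  # lam_1 >= ... >= lam_m\n    m = len(lam)\n    if k <= s / (lam[0] * lam[1]):\n        j = 0            # j = 1 (0-indexed here)\n    elif k > s / (lam[m - 1] * lam[m - 2]):\n        j = m - 1        # j = m\n    else:\n        j = next((l for l in range(1, m - 1)\n                  if s / (lam[l] * lam[l - 1]) < k <= s / (lam[l + 1] * lam[l])),\n                 m - 1)\n    return 2.0 * lam[j] / (lam[j] ** 2 + s / k)\n\n# Toy spectrum with one sharp direction; s is made up.\neigs, s = [10.0, 1.0, 0.5, 0.1], 25.0\nfor k in (1, 10, 100, 10 ** 6):\n    print(k, round(divergence_threshold(eigs, s, k), 4))\n\\end{verbatim}\nUnder these made-up numbers, the threshold grows with the batch size $k$, so a step size that triggers guaranteed divergence (escape) for small-batch SGD-$k$ can still be supported at larger batch sizes; sharper minimizers (larger eigenvalues) lower the threshold further, which is the sense in which small batches prefer flatter minimizers.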
\n\n\n\\section{Introduction}\n\\input{section\/introduction}\n\n\\section{Classical Gradient Descent} \\label{section:classical}\n\\input{section\/classical_gd}\n\n\\section{A Deterministic Mechanism} \\label{section:stochastic-quadratic}\n\\input{section\/quadratic}\n\n\\section{Numerical Study of a Homogeneous Nonconvex Problem} \\label{section:homogeneous}\n\\input{section\/homogeneous}\n\n\\section{Numerical Study of an Inhomogeneous Nonconvex Problem} \\label{section:inhomogeneous}\n\\input{section\/inhomogeneous}\n\n\\section{Conclusion} \\label{section:conclusion}\n\\input{section\/conclusion}\n\n\\section*{Acknowledgements}\nWe would like to thank Rob Webber for pointing out the simple proof item 2 in Lemma \\ref{lemma-sgd:higher-moment-bounds}. We would also like to thank Mihai Anitescu for his general guidance throughout the preparation of this work.\n\n\\section*{Funding}\nThe author is supported by the NSF Research and Training Grant \\# 1547396.\n\n\\bibliographystyle{tfs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusion}\nIn this paper, we presented MasterGNN, a joint air quality and weather prediction model that explicitly models the correlations and interactions between two predictive tasks.\nSpecifically, we first proposed a heterogeneous recurrent graph neural network to capture spatiotemporal autocorrelation among air quality and weather monitoring stations. Then, we developed a multi-adversarial learning framework to resist observation noise propagation. In addition, an adaptive training strategy is devised to automatically balance the optimization of multiple discriminative losses in multi-adversarial learning. Extensive experimental results on two real-world datasets demonstrate that the performance of MasterGNN on both air quality and weather prediction tasks consistently outperforms seven state-of-the-art baselines.\n\\section{Experiments}\n\n\\begin{table}[t]\n\\centering\n\\begin{small}\n\\caption{Statistics of datasets.}\n\\label{table-data-stats}\n\\vspace{-6pt}\n\\begin{tabular}{c | c c} \\hline\n{Data description} & Beijing & Shanghai \\\\ \\hline \\hline\n\\# of air quality stations & 35 & 76 \\\\ \n\\hline\n\\# of weather stations & 18 & 11\\\\ \n\\hline\n\\# of air quality observations & 409,853 & 1,331,520 \\\\ \n\\hline\n\\# of weather observations & 210,824 & 192,720\\\\ \n\\hline\n\\# of POIs & 900,669 & 1,061,399\\\\ \n\\hline\n\\# of road segments & 812,195 & 768,336 \\\\ \n\\hline\n\\end{tabular}\n\\end{small}\n\\vspace{-7mm}\n\\end{table}\n\n\\begin{table*}[t]\n\\centering\n\\vspace{-10pt}\n\\begin{small}\n\\caption{Overall performance comparison of different approaches. 
A smaller value indicates a better performance.}\n\\vspace{-5pt}\n\\label{tab-eff}\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}%\n\\hline\n\\multirow{3}{*}{Model} & \\multicolumn{8}{|c}{MAE\/SMAPE}\\\\\n\\cline{2-9}\n\\multirow{3}{*}{} & \\multicolumn{4}{|c}{Beijing} & \\multicolumn{4}{|c}{Shanghai}\\\\\n\\cline{2-9}\n\\multirow{3}{*}{} & {AQI} & {Temperature} & {Humidity} & {Wind speed} & {AQI} & {Temperature} & {Humidity} & {Wind speed}\\\\\n\\hline\nARIMA & 54.97\/0.925 & 3.26\/0.475 & 16.42\/0.403 & 1.17\/0.637 & 34.36\/0.597 & 2.27\/0.394 & 11.96\/0.267 & 1.35\/0.653 \\\\\nLR & 45.27\/0.732 & 2.39\/0.426 & 9.58\/0.296 & 1.04\/0.543 & 27.68\/0.462 & 1.93\/0.342 & 9.53\/0.254 & 1.21\/0.568 \\\\\nGBRT & 38.36\/0.678 & 2.31\/0.397 & 8.43\/0.254 & 0.93\/0.489 & 24.13\/0.423 & 1.87\/0.329 & 8.91\/0.232 & 1.05\/0.497 \\\\\n\\hline\nDeepAir & 31.45\/0.613 & 2.21\/0.375 & 8.25\/0.246 & 0.82\/0.468 & 18.79\/0.297 & 1.79\/0.304 & 7.82\/0.209 & 0.93\/0.516 \\\\\nGeoMAN & 30.04\/0.595 & 2.05\/0.356 & 8.01\/0.223 & 0.71\/0.457 & 20.75\/0.348 & 1.56\/0.261 & 6.45\/0.153 & 0.82\/0.465 \\\\\nGC-DCRNN & 29.58\/0.586 & 2.14\/0.364 & 8.14\/0.235 & 0.73\/0.462 & 18.73\/0.305 & 1.61\/0.263 & 6.83\/0.186 & 0.78\/0.462 \\\\\nDUQ & 32.69\/0.624 & 2.02\/0.348 & 7.97\/0.217 & 0.69\/0.446 & 22.81\/0.364 & 1.42\/0.235 & 6.23\/0.136 & 0.67\/0.442 \\\\\n\\hline\\hline\nMasterGNN & \\textbf{27.45\/0.548} & \\textbf{1.87\/0.326} & \\textbf{7.25\/0.184}& \\textbf{0.64\/0.428} &\\textbf{16.51\/0.265} & \\textbf{1.25\/0.213} & \\textbf{5.64\/0.095}&\\textbf{0.61\/0.427}\\\\\n\\hline\n\\end{tabular}\n\\vspace{-8pt}\n\\end{small}\n\\end{table*}\n\n\\subsection{Experimental settings}\n\\subsubsection{Data description.}\nWe conduct experiments on two real-world datasets collected from Beijing and Shanghai, two metropolises in China. \nThe Beijing dataset is ranged from January 1, 2017, to April 1, 2018, and the Shanghai dataset is ranged from June 1, 2018, to June 1, 2020.\nBoth datasets include (1) air quality observations~(\\emph{i.e.},\\xspace PM2.5, PM10, O3, NO2, SO2, and CO), (2) weather observations and weather forecast~(\\emph{i.e.},\\xspace weather condition, temperature, pressure, humidity, wind speed and wind direction), and (3) urban contextual data~(\\emph{i.e.},\\xspace POI distribution and Road network distribution~\\cite{zhu2016days,liu2019hydra,liu2020polestar}). \nThe observation data of Beijing dataset is from KDD CUP 2018\\footnote{https:\/\/www.biendata.xyz\/competition\/kdd\\_2018\/data\/}, and the observation data of Shanghai dataset is crawled from China government websites\\footnote{http:\/\/www.cnemc.cn\/en\/}.\nAll air quality and weather observations are collected by every hour. \nWe associate POI and road network features to each station through an open API provided by an anonymized online map application.\nSame as existing studies~\\cite{yi2018deep,wang2019deep}, we focus on Air Quality Index~(AQI) for air quality prediction, which is derived by the Chinese AQI standard, and temperature, humidity and wind speed for weather prediction.\nWe split the data into training, validation, and testing data by the ratio of 8:1:1.\nThe statistics of two datasets are shown in Table~\\ref{tab-eff}.\n\n\\subsubsection{Implementation details.}\nOur model and all the deep learning baselines are implemented with PyTorch. All methods are evaluated on a Linux server with 8 NVIDIA Tesla P40 GPUs.\nWe set context-aware heterogeneous graph attention layers $l=2$.\nThe cell size of the GRU is set to $64$. 
\nWe set $\\epsilon=15$, and the hidden size of the multi-layer perceptron of each discriminator is fixed to $64$. All the learning parameters are initialized with a uniform distribution, and the model is trained by stochastic gradient descent with a learning rate $lr=0.00001$. The activation function used in the hidden layers is LeakyReLU ($\\alpha$=0.2).\nAll the baselines use the same input features as ours, except ARIMA, which excludes contextual features. \nEach numerical feature is normalized by Z-score. We set $T=72$ and the number of future time steps $\\tau=48$.\nFor a fair comparison, we carefully fine-tuned the hyper-parameters of each baseline, and each baseline additionally uses the other task's historical observations as a part of its input features.\n\n\n\\subsubsection{Evaluation metrics.}\nWe use Mean Absolute Error (MAE) and Symmetric Mean Absolute Percentage Error (SMAPE), two widely used metrics~\\cite{luo2019accuair}, for evaluation.\n\n\\subsubsection{Baselines.}\nWe compare the performance of MasterGNN with the following seven baselines:\n\n\\begin{itemize}\n \\item \\textbf{ARIMA}~\\cite{brockwell2002introduction} is a classic time series prediction method that utilizes moving average and auto-regressive components to estimate future trends.\n \\item \\textbf{LR} uses linear regression~\\cite{montgomery2012introduction} for joint prediction. We concatenate previous observations and contextual features as input.\n \n \\item \\textbf{GBRT} is a tree-based model widely used for regression tasks. We adopt an efficient implementation, XGBoost~\\cite{chen2016xgboost}, and the input features are the same as for LR.\n \\item \\textbf{GeoMAN}~\\cite{liang2018geoman} is a multi-level attention-based network that integrates spatial and temporal attention for geo-sensory time series prediction.\n \\item \\textbf{DeepAir}~\\cite{yi2018deep} is a deep learning method for air quality prediction. It combines spatial transformation and distributed fusion components to fuse various urban data.\n \\item \\textbf{GC-DCRNN}~\\cite{lin2018exploiting} leverages a diffusion convolution recurrent neural network to model spatiotemporal dependencies for air quality prediction.\n \\item \\textbf{DUQ}~\\cite{wang2019deep} is an advanced weather forecasting framework that improves numerical weather prediction by using uncertainty quantification.\n\\end{itemize}\n\n\\begin{figure*}[t]\n\t\\begin{minipage}{1.\\linewidth}\n\t\t\\centering\n\t\t\\vspace{0ex}\n\t\n\t\t\\subfigure[{\\small Effect of joint prediction.}]{\\label{copred}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/exp_copred.pdf}}\n\t\t\\subfigure[{\\small Effect of heterogeneous recurrent graph neural network.}]{\\label{heterst}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/exp_st.pdf}}\n\t\t\\subfigure[{\\small Effect of multi-adversarial learning.}]{\\label{adv}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/exp_adv.pdf}}\n\t\t\\subfigure[{\\small Effect of multi-task adaptive training.}]{\\label{opti}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/exp_ms.pdf}}\n\t\\end{minipage}\n\t\\vspace{-3ex}\n\t\\caption{Ablation study of MasterGNN on the Beijing dataset.} \n\t\\vspace{-7mm}\n\t\\label{ablation}\n\\end{figure*}\n\n\\subsection{Overall performance}\nTable~\\ref{tab-eff} reports the overall performance of MasterGNN and the compared baselines on two datasets with respect to MAE and SMAPE. 
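\n\nFor concreteness, the two metrics can be computed as in the following minimal sketch (in Python); this assumes the symmetric form of SMAPE that is common in air quality forecasting work, and the exact variant used for the reported numbers may differ slightly.\n\\begin{verbatim}\nimport numpy as np\n\ndef mae(y_true, y_pred):\n    # Mean Absolute Error.\n    return np.mean(np.abs(y_true - y_pred))\n\ndef smape(y_true, y_pred, eps=1e-8):\n    # Symmetric Mean Absolute Percentage Error (one common convention).\n    return np.mean(2.0 * np.abs(y_pred - y_true)\n                   / (np.abs(y_true) + np.abs(y_pred) + eps))\n\\end{verbatim}\nIn both cases, smaller values indicate better performance, matching the convention of Table~\\ref{tab-eff}.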
\nWe observe our approach consistently outperforms all the baselines using both metrics, which shows the superiority of MasterGNN.\nSpecifically, our model achieves (7.8\\%, 8.0\\%, 10.0\\%, 7.8\\%) and (6.9\\%, 6.7\\%, 17.9\\%, 4.2\\%) improvements beyond the best baseline on MAE and SMAPE on Beijing for (AQI, temperature, humidity, wind speed) prediction, respectively. Similarly, the improvement of MAE and SMAPE on Shanghai are (13.4\\%, 13.6\\%, 10.4\\%, 9.8\\%) and (12.1\\%, 10.3\\%, 43.2\\%, 3.5\\%), respectively. \nMoreover, we observe all deep learning based models outperform statistical learning based approaches by a large margin, which demonstrate the effectiveness of deep neural network for modeling spatiotemporal dependencies. \nRemarkably, GC-DCRNN performs better than all other baselines for air quality prediction, and DUQ consistently outperforms other baselines for weather prediction. \nHowever, they perform relatively poorly on the other task, which indicates their poor generalization capability and further demonstrate the advantage of joint prediction.\n\n\\begin{figure*}[t]\n\t\\begin{minipage}{1.\\linewidth}\n\t\t\\centering\n\t\t\\vspace{-10pt}\n\t\n\t\t\\subfigure[{\\small Effect of $T$.}]{\\label{length}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/para_t.pdf}}\n\t\t\\subfigure[{\\small Effect of $d$.}]{\\label{hidden_units}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/para_d.pdf}}\n\t\t\\subfigure[{\\small Effect of $\\epsilon$.}]{\\label{epsilon}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/para_epsilon.pdf}}\n\t\t\\subfigure[{\\small Effect of $\\tau$.}]{\\label{tau}\n\t\t\t\\includegraphics[width=0.24\\textwidth]{exp\/para_tau.pdf}}\n\t\\end{minipage}\n\t\\vspace{-3ex}\n\t\\caption{Parameter sensitivities on the Beijing dataset.} \n\t\\vspace{-3ex}\n\t\\label{parasensitivity}\n\\end{figure*}\n\n\\subsection{Ablation study}\nThen we study the effectiveness of each module in MasterGNN. Due to the page limit, we only report the results on Beijing by using SMAPE. The results on Beijing using MAE and on Shanghai using both metrics are similar.\n\n\\textbf{Effect of joint prediction}. To validate the effectiveness of joint prediction, we compare the following variants of MasterGNN:\n(1) SLE models two tasks independently without information sharing,\n(2) CXT models two tasks independently but uses the other observations as a part of contextual features. \nAs depicted in Figure \\ref{copred}, we observe a performance gain of CXT by adding the other observation features, but MasterGNN achieves the best performance.\nSuch results demonstrate the advantage of modeling the interactions between two tasks compared with simply adding the other observations as the feature input.\n\n\\textbf{Effect of heterogeneous recurrent graph neural network}. We evaluate three variants of HRGNN, (1) RS removes the spatial autocorrelation block from MasterGNN, (2) AP replaces GRU by average pooling, (3) RH handles air quality and weather observations homogeneously. As shown in Figure \\ref{heterst}, we observe MasterGNN achieves the best performance compared with other variants, demonstrating the benefits of incorporating spatial and temporal autocorrelations for joint air quality and weather prediction. \n\n\\textbf{Effect of multi-adversarial learning}.\nWe compare the following variants: (1) STD removes the macro discriminator, (2) SAD removes the micro temporal discriminator, (3) TAD removes the micro spatial discriminators. 
As shown in Figure \\ref{adv}, the macro discriminator is the most influential one, which indicates the importance of defending against noise from the global view.\nOverall, MasterGNN consistently outperforms STD, SAD, and TAD on all tasks by integrating all discriminators, demonstrating the benefits of developing multiple discriminators for joint prediction. \n\n\\textbf{Effect of multi-task adaptive training}.\nTo verify the effect of multi-task adaptive training, we develop (1) Average Loss Minimization (ALM), which uses the same weight for all discriminators, and (2) Fixed Loss Minimization~(FLM), which sets weighted but fixed importance based on the performance loss of MasterGNN when the corresponding discriminator is removed.\nAs shown in Figure \\ref{opti}, we observe performance degradation when each discriminator's weight is fixed, either uniformly (ALM) or in a weighted manner (FLM). \nOverall, adaptively re-weighting each discriminator's importance can guide the optimization direction of the generator and lead to better performance. \n\n\n\\subsection{Parameter sensitivity}\n\nWe further study the parameter sensitivity of MasterGNN.\nWe report SMAPE on the Beijing dataset. Each time we vary a parameter, we set the others to their default values.\n\nFirst, we vary the input length $T$ from 6 to 96. The results are reported in Figure~\\ref{length}. As the input length increases, the performance first increases and then gradually decreases. The main reason is that a short sequence cannot provide sufficient temporal periodicity and trend information, while a too long input may introduce noise for future prediction, leading to performance degradation.\n\nThen, we vary $d$ from 8 to 128. The results are reported in Figure~\\ref{hidden_units}. We can observe that the performance first increases and then remains stable. However, too large a $d$ leads to extra computation overhead. Therefore, setting $d=64$ is enough to achieve a satisfactory result.\n\nAfter that, to test the impact of the distance threshold in the HSG, we vary $\\epsilon$ from 6 to 20. The results are reported in Figure~\\ref{epsilon}.\nAs $\\epsilon$ increases, we observe that the performance first increases and then slightly decreases, which is perhaps because a large $\\epsilon$ integrates too many stations, which introduces more noise during spatial autocorrelation modeling.\n\nFinally, we vary the prediction step $\\tau$ from 3 to 48. The results are reported in Figure~\\ref{tau}. We observe that the performance drops rapidly as $\\tau$ grows. The reason is perhaps that the uncertainty increases when $\\tau$ is large, and it is difficult for the machine learning model to quantify such uncertainty.\n\n\\subsection{Qualitative study}\n\\begin{figure}\n \\centering\n \n \\vspace{-4pt}\n \\includegraphics[width=8cm]{exp\/casestudy.pdf}\n \\vspace{-4pt}\n \\caption{Visualization of the learned attention weight of neighboring air quality and weather stations.}\\label{fig:casestudy}\n \\vspace{-7mm}\n\\end{figure}\n\nFinally, we qualitatively analyze why MasterGNN can yield better performance on both tasks. \nFigure~\\ref{fig:casestudy}~(a) shows the distribution of air quality and weather monitoring stations in the Haidian district, Beijing. 
\nWe perform a case study from 3:00 May 4, 2017, to 3:00 May 5, 2017.\nFirst, we take the air quality station $S_0$ as an illustrative example to show how weather stations help air quality prediction.\nIn MasterGNN, the attention score reflects the relative importance of each neighboring station for prediction.\nFigure~\\ref{fig:casestudy}~(b) visualizes the attention weights of eight neighboring weather stations of $S_0$.\nAs can be seen, the attention weights of weather stations $S_1$, $S_2$ and $S_4$ abruptly increase because a strong wind blows through this area, which results in notable air pollutant dispersion in the next five hours.\nClearly, the weather stations provide additional knowledge for air quality prediction.\nThen, we take the weather station $S_2$ to depict air quality stations' impact on weather prediction.\nFigure~\\ref{fig:casestudy}~(c) shows the attention weights of neighbouring station of $S_2$.\nWe observe the nearest air quality station $S_3$ is the most influential station during 5:00 May 4 2017 and 10:00 May 4 2017, while $S_7$ plays a more important role during 15:00 May 4 2017 and 21:00 May 4 2017, corresponding to notable air pollution concentration changes during these periods.\nThe above observations further validate the additional knowledge introduced by the joint modeling of heterogeneous stations for both predictive tasks.\n\n\\eat{\nFigure~\\ref{fig:casestudy} presents a local map of Beijing, several air quality and weather stations are scattered in geographical space. We perform a case study from 3:00 May 4, 2017 to 3:00 May 5, 2017. \nTo show how weather conditions impacts air quality, we take the air quality station $S_0$ as an example to visualize the attention weights with eight neighboring weather stations in Figure~\\ref{fig:casestudy}. The attention scores semantically represent the relative importance of each weather station. According to the attention weight matrix, the stations are not correlated before 22:00 May 4, 2017. Then, there came a strong wind and has lasted for 5 hours, the air quality dropped rapidly in this time period. We observe the attention weights increase when the wind blows strongly. Note that the nearest weather station $S_1$ have the largest attention scores, showing biggest impact to the selected air quality station. The attention weights gradually decreases with increasing distance. The above example have shown that the attention weights generated by our model are indeed meaningful in real world and can be easily interpreted.\nNext, we further illustrate how air quality impacts meteorology. As shown in Figure ~\\ref{fig:casestudy}, the attention weights of nearby air quality stations are higher than remote stations due to severe air pollution in this region. Another observation is that the attention weight of air quality to weather is lower than that of weather to air quality, this shows the influence of weather on air quality is much more significant.\nThe case study indicates that our model successfully captures the complex spatial correlation between the air quality stations and weather stations.\n}\n\\section{Introduction}\n\n\\begin{figure}\n \\centering\n \n \\vspace{-4pt}\n \\includegraphics[width=8cm]{fig\/distribution.pdf}\n \\vspace{-5pt}\n \\caption{\n Spatial distribution of air quality and weather monitoring stations in Beijing. 
Two types of stations are monitoring exclusively different but correlated air quality and weather conditions in different city locations.}\\label{dist}\n \\vspace{-7mm}\n\\end{figure}\n\n\n\nWith the rapid development of the economy and urbanization, people are increasingly concerned about emerging public health risks and environmental sustainability.\nAs reported by the World Health Organization~(WHO), air pollution is the world's largest environmental health risk~\\cite{campbell2019climate}, and the changing weather profoundly affects the local economic development and people's daily life~\\cite{baklanov2018urban}.\nThus, accurate and timely air quality and weather predictions are of great importance to urban governance and human livelihood.\nIn the past years, massive sensor stations have been deployed for monitoring air quality~(\\emph{e.g.},\\xspace PM2.5 and PM10) or weather conditions~(\\emph{e.g.},\\xspace temperature and humidity) and many efforts have been made for air quality and weather predictions~\\cite{liang2018geoman,yi2018deep,wang2019deep}.\n\nHowever, existing air quality and weather prediction methods are designated for either air quality or weather prediction, perhaps with one another as a side input~\\cite{zheng2015forecasting,yi2018deep}, but overlook the intrinsic connections and interactions between two tasks.\nDifferent from existing studies, this work is motivated by the following two insights. First, air quality and weather predictions are two highly correlated tasks and can be mutually enhanced.\nFor example, accurately predicting the regional wind condition can help model the future dispersion and transport of air pollutants~\\cite{ding2017air}.\nAs another example, modeling the future concentration of aerosol pollutants~(\\emph{e.g.},\\xspace PM2.5 and PM10) also can help predict the local climate~(\\emph{e.g.},\\xspace temperature and humidity), since aerosol elements can scatter or absorb the incoming radiation~\\cite{hong2020weakening}.\nSecond, as illustrated in Figure~\\ref{dist}, the geospatially distributed air quality and weather monitoring stations provide additional hints to improve both predictive tasks.\nThe air quality and weather condition variations in different city locations reflect the urban dynamics and can be exploited to improve spatiotemporal autocorrelation modeling.\n\nIn this work, we investigate the joint prediction of air quality and weather conditions by explicitly modeling the correlations and interactions between two predictive tasks.\nHowever, two major challenges arise in achieving this goal.\n(1) \\emph{Observation heterogeneity}.\nAs illustrated in Figure \\ref{dist}, the geo-distributed air quality and weather stations are heterogeneous spatial objects that are monitoring exclusively different atmospheric conditions. 
\nExisting methods~\\cite{zheng2013u,wang2019deep} are initially designed to model homogeneous spatial objects~(\\emph{i.e.},\\xspace either air quality or weather stations), which are not suitable for joint air quality and weather predictions.\nTherefore, the first challenge is how to capture spatiotemporal autocorrelations among heterogeneous monitoring stations to mutually benefit air quality and weather prediction.\n(2) \\emph{Compounding observation error vulnerability}.\nIn practice, observations reported by monitoring stations are often noisy due to the sensor error and environmental interference~\\cite{yi2016st}.\nHowever, most existing prediction models~\\cite{zheng2015forecasting,liang2018geoman,yi2018deep} rely on spatiotemporal dependency modeling between stations, which is susceptible to local perturbations and noise propagation~\\cite{zugner2018adversarial,bengio2015scheduled}.\nMore severely, jointly modeling spatiotemporal autocorrelation among air quality and weather monitoring stations will further accumulate errors from both spatial and temporal domains.\nAs a result, it is challenging to learn robust representations to resist compounding observation error for joint air quality and weather predictions.\n\nTo tackle the above challenges, we propose the \\emph{\\textbf{M}ulti-\\textbf{a}dversarial \\textbf{s}patio\\textbf{te}mporal \\textbf{r}ecurrent \\textbf{G}raph \\textbf{N}eural \\textbf{N}etworks}~(\\textbf{MasterGNN}) for robust air quality and weather joint predictions.\nSpecifically, we first devise a heterogeneous recurrent graph neural network to simultaneously incorporate spatial and temporal autocorrelations among heterogeneous stations conditioned on dynamic urban contextual factors~(\\emph{e.g.},\\xspace POI distributions, traffics).\nThen, we propose a multi-adversarial learning framework to against noise propagation from both microscopic and macroscopic perspectives.\nBy proactively simulating perturbations and maximizing the divergence between target and fake observations, multiple discriminators dynamically regularize air quality and weather station representations to resist the propagation of observation noises~\\cite{durugkar2016generative}.\nMoreover, we introduce a multi-task adaptive training strategy to improve the joint prediction performance by automatically balancing the importance of multiple discriminators.\nFinally, we conduct extensive experiments on two real-world datasets collected from Beijing and Shanghai, and the proposed model consistently outperforms seven baselines on both air quality and weather prediction tasks.\n\\eat{\nTo sum up, our main contributions are as follows:\n\\begin{itemize}\n\t\\item We study the novel air quality and weather co-prediction problem by exploiting the inner-connection between air quality and weather conditions.\n\t\\item We propose a heterogeneous recurrent graph neural network to capture the spatiotemporal autocorrelation among heterogeneous monitoring stations.\n\t\\item We introduce a multi-adversarial learning framework coupled with a multi-objective optimization strategy to resist observation noises.\n\t\\item We conduct extensive experiments on two real-world datasets collected from Beijing and Shanghai. 
The proposed model outperforms seven baselines by at least 10.7\\% and 5.5\\% on air quality and weather prediction tasks, respectively.\n\\end{itemize}\n}\n\n\\section{Methodology}\nMasterGNN simultaneously makes air quality and weather predictions based on the following intuitions.\n\n\\textbf{Intuition 1: Heterogeneous spatiotemporal autocorrelation modeling}. The geo-distributed air quality and weather monitoring stations provide additional spatiotemporal hints for air quality and weather prediction. The model should be able to simultaneously incorporate spatial and temporal autocorrelations of such heterogeneous monitoring stations for joint air quality and weather prediction.\n\n\\textbf{Intuition 2: Robust representation learning}. \nThe spatiotemporal autocorrelation based joint prediction model is vulnerable to observation noise and suffers from compounding propagation errors. The model should be robust to error propagation from both the spatial and temporal domains.\n\n\\textbf{Intuition 3: Adaptive model training}. Learning robust representations introduces extra optimization objectives, which guide the predictive model to approximate the underlying data distribution from different semantic aspects. The model should be able to adaptively balance the optimization of multiple objectives for optimal prediction. \n\n\\subsection{Framework Overview}\n\n\\begin{figure}[t]\n \\centering\n \n \n \\includegraphics[width=7.5cm]{fig\/framework.pdf}\n \\vspace{-0.1cm}\n \\caption{An overview of MasterGNN.}\\label{framework}\n \\vspace{-0.6cm}\n\\end{figure}\n\nFigure \\ref{framework} shows the architecture of our approach, including three major tasks: \n(i) modeling the spatiotemporal autocorrelation among air quality and weather stations for joint prediction; (ii) learning robust station representations via multi-adversarial learning; (iii) exploiting an adaptive training strategy to ease model learning.\nSpecifically, in the first task, we propose a \\emph{Heterogeneous Recurrent Graph Neural Network}~(HRGNN) to jointly incorporate the spatial autocorrelation between heterogeneous monitoring stations and the past-current temporal autocorrelation of each monitoring station.\nIn the second task, we develop a \\emph{Multi-Adversarial Learning} framework to resist the propagation of observation noise via (i) microscopic discriminators against adversarial attacks from the spatial and temporal domains, respectively, and (ii) a macroscopic discriminator against adversarial attacks from a global view of the city.\nIn the third task, we introduce a \\emph{Multi-Task Adaptive Training} strategy to automatically balance the optimization of multiple discriminative losses in multi-adversarial learning.\n\n\n\\subsection{Heterogeneous Recurrent Graph Neural Network}\n\n\n\\subsubsection{The base model.}\nWe adopt the encoder-decoder architecture~\\cite{zhang2020tkde,zhang2020semi} as the base model.\nSpecifically, the encoder step projects the historical observation sequence of all stations $\\mathbfcal{X}$ to a hidden state $\\mathbf{H}^t=f_{encoder}(W\\mathbfcal{X}+b)$, where $f_{encoder}(\\cdot)$ is parameterized by a neural network.\nIn the decoder step, we employ another neural network $f_{decoder}(\\cdot)$ to generate air quality and weather predictions $(\\hat{\\mathbf{Y}}^{t+1},\\hat{\\mathbf{Y}}^{t+2},\\dots,\\hat{\\mathbf{Y}}^{t+\\tau})=f_{decoder}(\\mathbf{H}^t, \\mathbf{C})$, where $\\tau$ is the number of future time steps we aim to predict, and $\\mathbf{C}$ are contextual 
features~\\cite{liu2020incorporating}.\nSimilar with conventional air quality or weather prediction models~\\cite{liang2018geoman,li2020weather}, the heterogeneous recurrent graph neural network aims to minimize the \\emph{Mean Square Error}~(MSE) between the target observations and predictions,\n\\begin{equation}\\label{equ:mse}\n\\mathcal{L}_g=\\frac{1}{\\tau}\\sum^{\\tau}_{i=1}||\\hat{\\mathbf{Y}}^{t+i}-\\mathbf{Y}^{t+i}||^2_2.\n\\end{equation}\n\n\\subsubsection{Incorporating spatial autocorrelation.}\nIn the spatial domain, the regional concentration of air pollutants and weather conditions are highly correlated and mutually influenced.\nInspired by the recent variants~\\cite{wang2019heterogeneous,zhang2019heterogeneous,liu2021vldb} of graph neural network on handling non-Euclidean semantic dependencies on heterogeneous graphs, we devise a context-aware heterogeneous graph attention block (CHAT) to model spatial interactions between heterogeneous stations.\nWe first construct the heterogeneous station graph to describe the spatial adjacency of each station.\n\n\\begin{myDef}\n\\textbf{Heterogeneous station graph}. A heterogeneous station graph (HSG) is defined as $\\mathcal{G}=\\{\\mathcal{V},\\mathcal{E}, \\psi \\}$, where $\\mathcal{V}=S$ is the set of monitoring stations, $\\psi$ is a mapping function indicates the type of each station, and $\\mathcal{E}$ is a set of directed edges indicating the connectivity among monitoring stations, defined as\n\\begin{equation}\ne_{ij}=\\left\\{\n\\begin{array}{lr}\n1,\\quad d_{ij}<\\epsilon\\\\\n0,\\quad otherwise\n\\end{array}\n\\right.,\n\\end{equation}\nwhere $d_{ij}$ is the spherical distance between station $s_i$ and station $s_j$, $\\epsilon$ is a distance threshold.\n\\end{myDef}\n\nDue to the heterogeneity of monitoring stations, there are two types of vertices~(air quality station $s^a$, weather station $s^w$) and four types of edges, \\emph{i.e.},\\xspace $\\Psi=\\{s^a$-$s^a$, $s^w$-$s^w$, $s^a$-$s^w$, $s^w$-$s^a\\}$. \nWe use $\\psi(i)$ to denote the station type of $s_i$, and $r \\in \\Psi$ to denote the semantic type of each edge.\n\nFormally, given an observation $\\mathbf{x}_i$ at a particular time step, we first devise a type-specific transformation layer to project heterogeneous observations into unified feature space,\n$\\widetilde{\\mathbf{x}}_i = \\mathbf{W}^{\\psi(i)} \\mathbf{x}_i$,\nwhere $\\widetilde{\\mathbf{x}}_i\\in\\mathcal{R}^d$ is a low-dimensional embedding vector, $\\mathbf{W}^{\\psi(i)}\\in\\mathcal{R}^{|\\mathbf{x}_i|\\times d}$ is a learnable weighted matrix shared by all monitoring stations of type $\\psi(i)$.\n\nThen, we introduce a type-dependent attention mechanism to quantify the non-linear correlation between homogeneous and heterogeneous stations under different contexts. 
Given a station pair $(s_i,s_j)$ which are connected by an edge of type $r$, the attention score is defined as\n\\begin{equation}\na^r_{ij} =\\frac{Attn(\\widetilde{\\mathbf{x}}_i,\\widetilde{\\mathbf{x}}_j,\\mathbf{c}_i,\\mathbf{c}_j,d_{ij})}{\\sum_{k\\in\\mathcal{N}^r_{i}}Attn(\\widetilde{\\mathbf{x}}_i,\\widetilde{\\mathbf{x}}_k,\\mathbf{c}_i,\\mathbf{c}_k,d_{ik})},\n\\end{equation}\nwhere $Attn(\\cdot)$ is a concatenation based attention function, $\\mathbf{c}_i,\\mathbf{c}_j\\in\\mathbf{C}$ are contextual features of station $s_i$ and $s_j$, $d_{ij}$ is the spherical distance between $s_i$ and $s_j$, and $\\mathcal{N}^r_i$ is the set of type-specific neighbor stations of $s_i$ in $\\mathcal{G}$.\nBased on $a^r_{ij}$, we define the context-aware heterogeneous graph convolution operation to update the type-wise station representation,\n\\begin{equation}\n\\widetilde{\\mathbf{x}}_i^{r\\prime} = GConv(\\widetilde{\\mathbf{x}}_{i}, r) = \\sigma(\\sum_{j\\in \\mathcal{N}^r_i}\\alpha^r_{ij}\\mathbf{W}^r\\widetilde{\\mathbf{x}}_j),\n\\end{equation}\nwhere $\\widetilde{\\mathbf{x}}^{r\\prime}_{i}$ is the aggregated node representation based on edge type $r$, $\\sigma$ is a non-linear activation function, $\\mathbf{W}^r\\in\\mathcal{R}^{d \\times d}$ is a learnable weighted matrix shared over all edges of type $r$.\nFinally, we obtain the updated station representation of $s_i$ by concatenating type-specific representations,\n\\begin{equation}\n\\widetilde{\\mathbf{x}}^{\\prime}_i = ||_{r\\in\\Psi}GConv(\\widetilde{\\mathbf{x}}_{i}, r),\n\\end{equation}\nwhere $||$ is the concatenation operation.\nNote that we can stack $l$ graph convolution layers to capture the spatial autocorrelation between $l$-hop heterogeneous stations.\n\n\\eat{\nConsider an air quality observation $\\mathbf{x}^a_i$ and a weather observation $\\mathbf{x}^w_j$ at a particular time step, we first employ a type-specific linear transformation layer to project heterogeneous observations into a unified feature space,\n\\begin{equation}\n\\mathbf{h}^a_i = \\mathbf{W}^a \\mathbf{x}^a_i,~~\\mathbf{h}^w_j = \\mathbf{W}^w \\mathbf{x}^w_j,\n\\end{equation}\nwhere $\\mathbf{h}^a_i\\in\\mathcal{R}^d$ and $\\mathbf{h}^w_j\\in\\mathcal{R}^d$ are low-dimensional embedding vectors, $\\mathbf{W}^a\\in\\mathcal{R}^{|\\mathbf{x}^a_i|\\times d}$ and $\\mathbf{W}^w\\in\\mathcal{R}^{|\\mathbf{x}^w_j|\\times d}$ are learnable weighted matrices shared by air quality and weather monitoring stations, respectively.\n\nWe use $\\psi_i$ to denote the node type of $v_i$, and $\\psi_{ij}$ to denote the edge type from $v_i$ to $v_j$.\n\nConsider an atmospheric observation $\\mathbf{x}_i$ at a particular time step, we first employ a type-specific linear transformation layer to project heterogeneous observations into a unified feature space,\n\\begin{equation}\n\\mathbf{h}_i = \\mathbf{W}_{\\psi_i} \\mathbf{x}_i,\n\\end{equation}\nwhere $\\mathbf{h}_i\\in\\mathcal{R}^d$ is a low-dimensional embedding vector, $\\mathbf{W}_{\\psi_i}\\in\\mathcal{R}^{|\\mathbf{X}_i|\\times d}$ is a learnable weighted matrix shared by all same-typed monitoring stations.\n}\n\n\\subsubsection{Incorporating temporal autocorrelation.}\nIn the temporal domain, the air quality and weather conditions also depend on previous observations.\nWe further extend the Gated Recurrent Units (GRU), a simple variant of recurrent neural network~(RNN), to integrate the temporal autocorrelation among heterogeneous observation sequences.\nConsider a station $s_i$, given the learned spatial 
representation $\\widetilde{\\mathbf{x}}^t_i$ at time step $t$, we denote the hidden state of $s_i$ at $t-1$ and $t$ as $\\mathbf{h}^{t-1}_i$ and $\\mathbf{h}^t_i$, respectively. The temporal autocorrelation between $\\mathbf{h}^{t-1}_i$ and $\\mathbf{h}^t_i$ is modeled by\n\\begin{equation}\n\\left\\{\n\\begin{aligned}\n &\\mathbf{h}^t_i=(1-\\mathbf{z}^t_i)\\odot \\mathbf{h}^{t-1}_i+\\mathbf{z}^t_i\\odot \\widetilde{\\mathbf{h}}^t_i\\\\\n &\\mathbf{r}^{t}_i = \\sigma{(\\mathbf{W}^{\\psi(i)}_r[\\mathbf{h}^{t-1}_i\\parallel\\widetilde{\\mathbf{x}}^{t}_i]+\\mathbf{b}^{\\psi(i)}_r)}\\\\\n &\\mathbf{z}^{t}_i = \\sigma(\\mathbf{W}^{\\psi(i)}_z[\\mathbf{h}^{t-1}_i\\parallel\\widetilde{\\mathbf{x}}^{t}_i]+\\mathbf{b}^{\\psi(i)}_z)\\\\\n &\\widetilde{\\mathbf{h}}^{t}_i = \\tanh(\\mathbf{W}^{\\psi(i)}_{\\widetilde{h}}[\\mathbf{r}^{t}_i\\odot\\mathbf{h}^{t-1}_i\\parallel\\widetilde{\\mathbf{x}}^t_i]+\\mathbf{b}^{\\psi(i)}_{\\widetilde{h}})\n\\end{aligned},\n\\right.\n\\end{equation}\nwhere $\\mathbf{r}^t_i$ and $\\mathbf{z}^t_i$ denote the reset gate and the update gate at time step $t$, $\\mathbf{W}^{\\psi(i)}_{r}$, $\\mathbf{W}^{\\psi(i)}_{z}$, $\\mathbf{W}^{\\psi(i)}_{\\widetilde{h}}$, $\\mathbf{b}^{\\psi(i)}_{r}$, $\\mathbf{b}^{\\psi(i)}_{z}$, $\\mathbf{b}^{\\psi(i)}_{\\widetilde{h}}$ are trainable parameters shared by all same-typed monitoring stations, and $\\odot$ represents the Hadamard product. \n
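For illustration only, a recurrent cell implementing these type-specific updates might be sketched as follows; the hidden sizes and type keys are placeholders, and one such cell would be instantiated per station type so that all same-typed stations share its parameters:
\\begin{verbatim}
import torch
import torch.nn as nn

class TypeSpecificGRUCell(nn.Module):
    def __init__(self, d, hidden):
        super().__init__()
        self.reset = nn.Linear(hidden + d, hidden)    # W_r, b_r
        self.update = nn.Linear(hidden + d, hidden)   # W_z, b_z
        self.cand = nn.Linear(hidden + d, hidden)     # W_h, b_h

    def forward(self, h_prev, x_tilde):
        # h_prev: hidden state at t-1, x_tilde: spatial representation at t
        hx = torch.cat([h_prev, x_tilde], dim=-1)
        r = torch.sigmoid(self.reset(hx))
        z = torch.sigmoid(self.update(hx))
        cand = torch.tanh(self.cand(torch.cat([r * h_prev, x_tilde], dim=-1)))
        return (1 - z) * h_prev + z * cand

# one cell per station type, e.g. shared by all air quality stations
cells = nn.ModuleDict({"air": TypeSpecificGRUCell(32, 64),
                       "weather": TypeSpecificGRUCell(32, 64)})
h = torch.zeros(64)
h = cells["air"](h, torch.randn(32))  # updated hidden state of an air quality station
\\end{verbatim}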
\n\\subsection{Multi-Adversarial Learning}\n\n\\begin{figure}[t]\n \\centering\n \n \n \\includegraphics[width=8.5cm]{fig\/adver_train.pdf}\n \\vspace{-0.5cm}\n \\caption{Discriminators in multi-adversarial learning.}\\label{adver_train}\n \\vspace{-0.7cm}\n\\end{figure}\n\nWe further introduce the multi-adversarial learning framework to obtain robust station representations.\nBy proactively simulating and resisting noisy observations via a \\emph{minmax} game between the generator and discriminator~\\cite{goodfellow2014generative}, adversarial learning can make the generator more robust and generalize better to the underlying true distribution $p_{true}(\\mathbf{Y} | \\mathbfcal{X}, \\mathbf{C})$.\nCompared with standard adversarial learning, our multi-adversarial learning framework encourages the generator to make more consistent predictions from both spatial and temporal domains.\nSpecifically, we use the HRGNN, parameterized by $\\mathbf{\\phi}$, as the generator $\\hat{\\mathbf{Y}}=G(\\mathbfcal{X}, \\mathbf{C};\\mathbf{\\phi})$ for joint predictions.\nMoreover, we propose a set of distinct discriminators $\\{D_k(\\cdot;\\theta_k)\\}^K_{k=1}$ to distinguish the real observations and predictions from both microscopic and macroscopic perspectives, as detailed below.\n\n\\subsubsection{Microscopic discriminators.}\nMicroscopic discriminators aim at driving the generator to approximate the underlying spatial and temporal distributions, \\emph{i.e.},\\xspace pair-wise spatial autocorrelation and stepwise temporal autocorrelation.\n\n\\emph{Spatial discriminator}. From the spatial domain, the adversarial perturbation induces a compounding propagation error through the HSG, as illustrated in Figure~\\ref{adver_train}~(a). \nConsider a time step $t$. The spatial discriminator $D_s(\\mathbf{y}^t;\\theta_s)$ aims to maximize the accuracy of distinguishing city-wide real observations and predictions at the current time step,\n\\begin{equation}\\label{equ:dis1}\n\\mathcal{L}_{s}=\\log D_s(\\mathbf{y}^t;\\theta_s)+\\log(1-D_s(\\hat{\\mathbf{y}}^t;\\theta_s)),\n\\end{equation}\nwhere $D_s(\\mathbf{y}^t;\\theta_s)$ is parameterized by the context-aware heterogeneous graph attention block in HRGNN followed by a multi-layer perceptron, and $\\mathbf{y}^t$ and $\\hat{\\mathbf{y}}^t$ are the ground truth observations and predicted conditions of the city, respectively.\n\n\\emph{Temporal discriminator.} From the temporal domain, the adversarial perturbation induces accumulated errors for each station from previous time steps. \nAs depicted in Figure~\\ref{adver_train}~(b), the temporal discriminator $D_t(\\mathbf{y}_i;\\theta_t)$ outputs a probability indicating how likely an observation sequence $\\mathbf{y}_i$ of a particular station $s_i$ is from the real data distribution $p_{true}(\\mathbf{y}_i | \\mathbf{x}_i, \\mathbf{c}_i)$ rather than the generator $p_{\\phi}(\\mathbf{y}_i | \\mathbf{x}_i, \\mathbf{c}_i)$,\n\\begin{equation}\\label{equ:dis2}\n\\mathcal{L}_{t}=\\log D_t(\\mathbf{y}_i;\\theta_t)+\\log(1-D_t(\\hat{\\mathbf{y}}_i;\\theta_t)).\n\\end{equation}\nDifferent from the spatial discriminator, $D_t(\\mathbf{y}_i;\\theta_t)$ is parameterized by the temporal block in HRGNN followed by a multi-layer perceptron. \n$\\mathbf{y}_i$ is a sequence of observations over the past $T$ and future $\\tau$ time steps.\n\n\\subsubsection{Macroscopic discriminator.}\nAs illustrated in Figure~\\ref{adver_train}~(c), we further propose a macroscopic discriminator to capture the global underlying distribution, denoted by $D_m(\\mathbf{Y};\\theta_m)$. 
In particular, the macroscopic discriminator aims to maximize the accuracy of distinguishing the ground truth observations and the predictions generated by $G$ from a global view,\n\\begin{equation}\\label{equ:dis3}\n\\mathcal{L}_{m}=\\log D_m(\\mathbf{Y};\\theta_m)+\\log(1-D_m(G(\\mathbfcal{X},\\mathbf{C};\\mathbf{\\phi});\\theta_m)),\n\\end{equation}\nwhere $D_m(\\mathbf{Y};\\theta_m)$ is parameterized by a GRU followed by a multi-layer perceptron.\nNote that, for efficiency concerns, the input of $D_m(\\mathbf{Y};\\theta_m)$ is a simple concatenation of observations from all stations $S$ over the past $T$ and future $\\tau$ time steps, but other graph convolution operations are also applicable.\n\n\\eat{\nGiven the generator $G$ and discriminators $\\{D_i\\}^K_{i=1}$,\nthe minmax function of the multi-adversarial learning is\n\\begin{equation}\n\\begin{aligned}\n\\min_{G} \\max_{\\{D_i\\}^K_{i=1}} V(\\{D_i\\}^K_{i=1}&,G)=\\mathbb{E}_{\\mathbf{Y} \\sim p_{\\emph{true}}}[\\log D_i(\\mathbf{Y};\\theta_i)]+\\\\\n&\\mathbb{E}_{\\mathbfcal{X} \\sim p_{\\phi}}[\\log (1-D_i(G(\\mathbfcal{X},\\mathbf{C};\\phi)))].\n\\end{aligned}\n\\end{equation}\n}\n\n\\subsection{Multi-Task Adaptive Training}\nIt is widely recognized that adversarial training suffers from instability and mode collapse, where the generator is easily over-optimized for a particular discriminator~\\cite{salimans2016improved,neyshabur2017stabilizing}.\nTo stabilize the multi-adversarial learning and achieve an overall accuracy improvement, we introduce a multi-task adaptive training strategy to dynamically re-weight the discriminative losses and enforce the generator to perform well in various spatiotemporal aspects.\n\nSpecifically, the objective of MasterGNN is to minimize \n\\begin{equation}\\label{equ:all-loss}\n\\mathcal{L}=\\mathcal{L}_g+\\sum^K_{i=1}\\lambda_i \\mathcal{L}_{d_i},\n\\end{equation}\nwhere $\\mathcal{L}_g$~(Equation~\\ref{equ:mse}) is the predictive loss, $\\mathcal{L}_D=\\{\\mathcal{L}_{d_i}\\}^K_{i=1}$ (Equations~\\ref{equ:dis1}-\\ref{equ:dis3}) are discriminative losses, and\n$\\lambda_i$ is the importance of the corresponding discriminative loss.\n\nSuppose $\\mathbf{H}_{i}$ and $\\hat{\\mathbf{H}}_{i}$ are intermediate hidden states in discriminator $D_i$ based on real observations $\\mathbf{Y}$ and predictions $\\hat{\\mathbf{Y}}$.\nWe measure the divergence between $\\mathbf{H}_{i}$ and $\\hat{\\mathbf{H}}_{i}$ by $\\gamma_i=sim(\\sigma(\\mathbf{H}_{i}),\\sigma(\\hat{\\mathbf{H}}_{i}))$,\nwhere $sim(\\cdot,\\cdot)$ is a Euclidean distance based similarity function and $\\sigma$ is the sigmoid function. \nIntuitively, $\\gamma_i$ reflects how easily $D_i$ distinguishes the predictions from the real observations.\nIn each iteration, we re-weight the discriminative losses by \n$\\lambda_i = \\frac{\\exp(\\gamma_i)}{\\sum^{K}_{k=1}\\exp(\\gamma_k)}$.\nIn this way, the generator pays more attention to discriminators with a larger room for improvement, which results in better prediction performance.\n
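For illustration only, one round of this adaptive re-weighting could be sketched as follows; the divergence is taken here to be the plain Euclidean distance between the squashed hidden states, and all names and sizes are placeholders rather than the exact implementation:
\\begin{verbatim}
import torch
import torch.nn.functional as F

def discriminator_weights(h_real, h_fake):
    # h_real[k], h_fake[k]: hidden states of discriminator k on real
    # observations and on predictions; a larger divergence gamma_k gives
    # the k-th discriminative loss a larger weight lambda_k.
    gammas = []
    for hr, hf in zip(h_real, h_fake):
        gammas.append(torch.norm(torch.sigmoid(hr) - torch.sigmoid(hf)))
    return F.softmax(torch.stack(gammas), dim=0)

# toy usage with K = 3 discriminators and 16-dimensional hidden states
h_real = [torch.randn(16) for _ in range(3)]
h_fake = [torch.randn(16) for _ in range(3)]
lambdas = discriminator_weights(h_real, h_fake)   # non-negative, sums to 1
\\end{verbatim}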
\n\n\\eat{\nSpecifically, for the generative loss $\\mathcal{L}_g$ and $K$ discriminative losses $\\mathcal{L}_D=[\\mathcal{L}_1,\\mathcal{L}_2,...,\\mathcal{L}_K]$,\nwhere each $\\mathcal{L}_k=\\mathbb{E}_{\\mathbfcal{X} \\sim p_{\\phi}}[\\log(1-D_i(G(\\mathbfcal{X},\\mathbf{C};\\phi)))]$ is the loss provided by the i-th discriminator.\nWe use a dynamic weight $\\lambda_i$ to control the importance of discriminator $i$. A multi-task training strategy is designed to adaptively adjust $\\lambda_i$. \nSuppose $h_{i,f}$ and $h_{i,r}$ are the hidden state of discriminator $i$ derived from future predictions and ground truth observations, respectively. We first use a Sigmoid function to map the hidden state to the range [0,1]. Then we leverage a function to measure the similarity between $h_{i,f}$ and $h_{i,r}$, defined as follows\n\\begin{equation}\n\\gamma_i=sim(h_{i,f},h_{i,r}).\n\\end{equation}\nIn this paper, we use Euclidean Distance as similarity function. When $\\gamma_i$ becomes large, it is intuitive to conclude the discriminator $i$ can easily distinguish the predictions and ground truth observations, which means the generator performs poorly on discriminator $i$. We directly adopt $\\gamma_i$ as the weight to focus on gradients from discriminator $i$.\nWe employ a softmax function to convert the dynamic weights into a probability distribution.\n\\begin{equation}\n\\lambda_i = \\frac{exp(\\gamma_i)}{\\sum^{K}_{k=1}exp(\\gamma_k)}.\n\\end{equation}\nDuring training, our proposed strategy dynamically adjusts the influence of various discriminators. \nUpdates of generator are performed to minimize the following loss\n\\begin{equation}\n\\mathcal{L}=\\mathcal{L}_g+\\sum^K_{i=1}\\lambda_i \\mathcal{L}_{i},\n\\end{equation}\n\nWe update the multiple discriminators while keeping the parameters of the generator fixed. Each discriminator is minimized using the loss described in Eq.12.\n}\n\n\n\n\\section{Preliminaries}\n\n\n\nConsider a set of monitoring stations $S=S^a\\cup S^w$, where $S^a=\\{s^a_i\\}^m_{i=1}$ and $S^w=\\{s^w_j\\}^n_{j=1}$ are respectively air quality and weather station sets.\nEach station $s_i \\in S$ is associated with a geographical location $l_i$~(\\emph{i.e.},\\xspace latitude and longitude) and a set of time-invariant contextual features $\\mathbf{c}_i \\in \\mathbf{C}$.\n\n\\begin{myDef}\n\\textbf{Observations}. Given a monitoring station $s_i \\in S$, the observations of $s_i$ at time step $t$ are defined as $\\mathbf{x}^{t}_i$, which is a vector of air-quality or weather conditions, depending on the station type. \n\\end{myDef}\n\nNote that the observations of two types of monitoring stations are different~(\\emph{e.g.},\\xspace PM2.5 and CO in air quality stations, while temperature and humidity in weather stations) and the observation dimensionality of the air quality station and the weather station is also different. \nWe use $\\mathbfcal{X}=\\{\\mathbf{X}^{a,t}\\}^T_{t=1} \\cup \\{\\mathbf{X}^{w,t}\\}^T_{t=1}$ to denote time-dependent observations of all stations in a time period $T$, use $\\mathbf{X}^{a,t}=\\{\\mathbf{x}^{a,t}_1, \\mathbf{x}^{a,t}_2, \\dots, \\mathbf{x}^{a,t}_{m}\\}$ and $\\mathbf{X}^{w,t}=\\{\\mathbf{x}^{w,t}_1, \\mathbf{x}^{w,t}_2, \\dots, \\mathbf{x}^{w,t}_{n}\\}$ to respectively denote observations of air quality and weather stations at time step $t$. We use $\\mathbf{X}_i=\\{\\mathbf{x}^{1}_i, \\mathbf{x}^{2}_i, \\dots, \\mathbf{x}^{T}_i\\}$ to denote all observations of station $s_i$.\nIn the following, without ambiguity, we will omit the superscript and subscript.\n\n\\eat{\n\\begin{myDef}\n\\textbf{Heterogeneous Station Graph}. 
A heterogeneous station graph (HSG) is defined as $\\mathcal{G}=\\{\\mathcal{V},\\mathcal{E}, \\phi \\}$, where $\\mathcal{V}=S$ is the set of monitoring stations, $\\phi$ is a mapping set indicates the type of each station, and $\\mathcal{E}$ is a set of edges indicating the connectivity among monitoring stations, defined as\n\\begin{equation}\ne_{ij}=\\left\\{\n\\begin{array}{lr}\n1,\\quad dist(s_i,s_j)<\\epsilon\\\\\n0,\\quad otherwise\n\\end{array}\n\\right.,\n\\end{equation}\nwhere $dist(\\cdot,\\cdot)$ is the spherical distance between station $s_i$ and station $s_j$, $\\epsilon$ is a distance threshold.\n\\end{myDef}\n}\n\n\\begin{problem}\n\\textbf{Joint air quality and weather predictions}. \nGiven a set of monitoring stations $S$, contextual features $\\mathbf{C}$, and historical observations $\\mathbfcal{X}$, our goal at a time step $t$ is to simultaneously predict air quality and weather conditions for all $s_i \\in S$ over the next $\\tau$ time steps,\n\\begin{equation}\n(\\hat{\\mathbf{Y}}^{t+1},\\hat{\\mathbf{Y}}^{t+2},...,\\hat{\\mathbf{Y}}^{t+\\tau}) \\leftarrow \\mathcal{F}(\\mathbfcal{X}, \\mathbf{C}),\n\\end{equation}\nwhere $\\hat{\\mathbf{Y}}^{t+1}=\\hat{\\mathbf{Y}}^{a,t+1} \\cup \\hat{\\mathbf{Y}}^{w,t+1}$ is the estimated target observations of all stations at time step $t+1$, and $\\mathcal{F}(\\cdot)$ is the mapping function we aim to learn. Note that $\\hat{\\mathbf{y}}^{t+1}_i \\in \\hat{\\mathbf{Y}}^{t+1}$ is also station type dependent, \\emph{i.e.},\\xspace air quality or weather observations for corresponding stations.\n\\end{problem}\n\n\\section{Related work}\n\n\\textbf{Air quality and weather prediction}.\nExisting literature on air quality and weather prediction can be categorized into two classes.\n(1) \\emph{Numerical-based models} make predictions by simulating the dispersion of various air quality or weather elements based on physical laws~\\cite{lorenc1986analysis,vardoulakis2003modelling,liu2005prediction,richardson2007weather}.\n(2) \\emph{Learning-based models} utilize end-to-end machine learning methods to capture spatiotemporal correlations based on historical observations and various urban contextual data~(\\emph{e.g.},\\xspace POI distributions, traffics)~\\cite{chen2011comparison,zheng2013u,cheng2018neural}.\nRecently, many deep learning models~\\cite{yi2018deep,wang2019deep,lin2018exploiting} have been proposed to enhance the performance of air quality and weather prediction.\nBy leveraging the representation capacity of deep learning for spatiotemporal autocorrelation modeling, learning-based models usually achieve better prediction performance than numerical-based models.\nUnlike the above approaches, our method explicitly models the correlations and interactions between the air quality and weather prediction task and achieves a better performance.\n\n\n\\textbf{Adversarial Learning}.\nAdversarial learning~\\cite{goodfellow2014generative} is an emerging learning paradigm for better capturing the data distribution via a minmax game between a generator and a discriminator.\nIn the past years, adversarial learning has been widely applied to many real-world application domains, such as sequential recommendation~\\cite{yu2017seqgan} and graph learning~\\cite{Wang2018GraphGANGR}.\nRecently, we notice several multi-adversarial frameworks have been proposed for improving image generation tasks~\\cite{nguyen2017dual,hoang2018mgan,albuquerque2019multi}.\nInspired by the above studies, we extend the multi-adversarial learning paradigm to the 
environmental science domain and introduce an adaptive training strategy to improve the stability of adversarial learning.\n\n\\eat{\nleverages a deep distributed fusion network for air quality prediction. The study in ~\\cite{grover2015deep} model the statistics of a set of weather-related variables via a hybrid approach based on deep neural networks. More recently, DUQ~\\cite{wang2019deep} introduced a deep uncertainty quantification method for weather forecasting.\n\nare widely used in the earlier study, such as ARIMA \\cite{chen2011comparison}, SVM~\\cite{wang2014research}, and artificial neural networks~\\cite{brunelli2007two}, they fail to capture spatiotemporal dynamics from the original data. With the development of deep learning techniques, many deep learning models are proposed to solve air quality and weather forecasting tasks. For example, DeepAir~\\cite{yi2018deep} leverages a deep distributed fusion network for air quality prediction. The study in ~\\cite{grover2015deep} model the statistics of a set of weather-related variables via a hybrid approach based on deep neural networks. More recently, DUQ~\\cite{wang2019deep} introduced a deep uncertainty quantification method for weather forecasting.\n\n\n\\subsection{Graph neural network.}\nRecent years deep learning on graphs has gained much attention, which promotes the development of graph neural networks~\\cite{kipf2016semi}. For each node on the graph, graph neural networks learn a function to aggregate features of its neighbours and generate new node embedding, which encodes both feature information and local graph structure. For example, graph attention network (GAT)~\\cite{velivckovic2017graph} uses self-attention mechanism to select important neighbours adaptively and then aggregate them with different weights. Besides, researchers also extend graph neural networks on heterogeneous graphs~\\cite{zhang2019heterogeneous}. 
Graph neural networks are widely used in many applications, such as traffic flow prediction~\\cite{li2017diffusion}, ride-hailing demand forecasting~\\cite{geng2019spatiotemporal} and parking availability prediction~\\cite{zhang2020semi}.\n}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}\\setcounter{equation}{0}}\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\\newcommand{\\begin{equation}}{\\begin{equation}}\n\\newcommand{\\end{equation}}{\\end{equation}}\n\\newcommand{\\begin{eqnarray}}{\\begin{eqnarray}}\n\\newcommand{\\end{eqnarray}}{\\end{eqnarray}}\n\\newcommand{\\nonumber}{\\nonumber}\n\\newcommand{{\\underline{1}}}{{\\underline{1}}}\n\\newcommand{{\\underline{2}}}{{\\underline{2}}}\n\\newcommand{{\\underline{3}}}{{\\underline{3}}}\n\\newcommand{{\\underline{4}}}{{\\underline{4}}}\n\\newcommand{{\\underline{a}}}{{\\underline{a}}}\n\\newcommand{{\\underline{b}}}{{\\underline{b}}}\n\\newcommand{{\\underline{c}}}{{\\underline{c}}}\n\\newcommand{{\\underline{d}}}{{\\underline{d}}}\n\\newcommand{{\\underline{i}}}{{\\underline{i}}}\n\\newcommand{{\\underline{j}}}{{\\underline{j}}}\n\\newcommand{\\ku}{{\\underline{k}}} \n\\newcommand{{\\underline{l}}}{{\\underline{l}}}\n\\newcommand{{\\underline{I}}}{{\\underline{I}}}\n\\newcommand{{\\underline{J}}}{{\\underline{J}}}\n\n\\newcommand{{\\mathbb R}}{{\\mathbb R}}\n\\newcommand{{\\mathbb C}}{{\\mathbb C}}\n\\newcommand{{\\mathbb Q}}{{\\mathbb Q}}\n\\newcommand{{\\mathbb Z}}{{\\mathbb Z}}\n\\newcommand{{\\mathbb N}}{{\\mathbb N}}\n\n\\def\\dt#1{{\\buildrel {\\hbox{\\LARGE .}} \\over {#1}}} \n\n\\newcommand{\\bm}[1]{\\mbox{\\boldmath$#1$}}\n\n\\def\\double #1{#1{\\hbox{\\kern-2pt $#1$}}}\n\n\n\\newcommand{{\\hat{m}}}{{\\hat{m}}}\n\\newcommand{{\\hat{n}}}{{\\hat{n}}}\n\\newcommand{{\\hat{p}}}{{\\hat{p}}}\n\\newcommand{{\\hat{q}}}{{\\hat{q}}}\n\\newcommand{{\\hat{r}}}{{\\hat{r}}}\n\\newcommand{{\\hat{a}}}{{\\hat{a}}}\n\\newcommand{{\\hat{b}}}{{\\hat{b}}}\n\\newcommand{{\\hat{c}}}{{\\hat{c}}}\n\\newcommand{{\\hat{d}}}{{\\hat{d}}}\n\\newcommand{{\\hat{e}}}{{\\hat{e}}}\n\\newcommand{{\\hat{M}}}{{\\hat{M}}}\n\\newcommand{{\\hat{N}}}{{\\hat{N}}}\n\\newcommand{{\\hat{A}}}{{\\hat{A}}}\n\\newcommand{{\\hat{B}}}{{\\hat{B}}}\n\\newcommand{{\\hat{C}}}{{\\hat{C}}}\n\\newcommand{{\\hat{i}}}{{\\hat{i}}}\n\\newcommand{{\\hat{j}}}{{\\hat{j}}}\n\\newcommand{{\\hat{k}}}{{\\hat{k}}}\n\\newcommand{{\\hat{l}}}{{\\hat{l}}}\n\n\n\\newcommand{{\\hat{\\alpha}}}{{\\hat{\\alpha}}}\n\\newcommand{{\\hat{\\beta}}}{{\\hat{\\beta}}}\n\\newcommand{{\\hat{\\gamma}}}{{\\hat{\\gamma}}}\n\\newcommand{{\\hat{\\delta}}}{{\\hat{\\delta}}}\n\\newcommand{{\\hat{\\rho}}}{{\\hat{\\rho}}}\n\\newcommand{{\\hat{\\tau}}}{{\\hat{\\tau}}}\n\n\\newcommand{{\\dot\\gamma}}{{\\dot\\gamma}}\n\\newcommand{{\\dot\\delta}}{{\\dot\\delta}}\n\n\\newcommand{{\\tilde{\\sigma}}}{{\\tilde{\\sigma}}}\n\\newcommand{{\\tilde{\\omega}}}{{\\tilde{\\omega}}}\n\n\\renewcommand{\\Bar}{\\overline}\n\n\\newcommand{{\\underline{\\alpha}}}{{\\underline{\\alpha}}}\n\\newcommand{{\\underline{\\beta}}}{{\\underline{\\beta}}}\n\\newcommand{{\\underline{\\gamma}}}{{\\underline{\\gamma}}}\n\\newcommand{{\\underline{\\delta}}}{{\\underline{\\delta}}}\n\\newcommand{{\\underline{\\rho}}}{{\\underline{\\rho}}}\n\\newcommand{{\\underline{\\tau}}}{{\\underline{\\tau}}}\n\n\\newcommand{{\\underline{\\ad}}}{{\\underline{\\ad}}}\n\\newcommand{{\\underline{\\bd}}}{{\\underline{\\bd}}}\n\\newcommand{{\\underline{\\gd}}}{{\\underline{{\\dot\\gamma}}}}\n\\newcommand{{\\underline{\\dd}}}{{
\\underline{{\\dot\\delta}}}}\n\\newcommand{{\\underline{\\dot{\\rho}}}}{{\\underline{\\dot{\\rho}}}}\n\\newcommand{{\\underline{\\dot{\\tau}}}}{{\\underline{\\dot{\\tau}}}}\n\n\n\n\\newcommand{{\\underline{\\hal}}}{{\\underline{{\\hat{\\alpha}}}}}\n\\newcommand{{\\underline{\\hbe}}}{{\\underline{{\\hat{\\beta}}}}}\n\\newcommand{{\\underline{\\hga}}}{{\\underline{{\\hat{\\gamma}}}}}\n\\newcommand{{\\underline{\\hde}}}{{\\underline{{\\hat{\\delta}}}}}\n\\newcommand{{\\underline{\\hrh}}}{{\\underline{{\\hat{\\rho}}}}}\n\n\\newcommand{{\\nabla}}{{\\nabla}}\n\\newcommand{{\\bar{\\nabla}}}{{\\bar{\\nabla}}}\n\n\n\\newcommand{{\\bar{\\sigma}}}{{\\bar{\\sigma}}}\n\n\\newcommand{{\\theta}}{{\\theta}}\n\\newcommand{{\\bar{\\theta}}}{{\\bar{\\theta}}}\n\\newcommand{{\\bar{\\theta}}}{{\\bar{\\theta}}}\n\n\\newcommand{{\\dot{\\rho}}}{{\\dot{\\rho}}}\n\\newcommand{{{\\tau}}}{{{\\tau}}}\n\\newcommand{{\\dot{\\ta}}}{{\\dot{{{\\tau}}}}}\n\n\n\\newcommand{{(u^+u^-)}}{{(u^+u^-)}}\n\n\n\\newcommand{{{\\bar{\\zeta}}}}{{{\\bar{\\zeta}}}}\n\n\n\n\n\n\\newcommand{{\\bm L}}{{\\bm L}}\n\\newcommand{{\\bm R}}{{\\bm R}}\n\\newcommand{{\\boxplus}}{{\\boxplus}}\n\\newcommand{{\\boxminus}}{{\\boxminus}}\n\\newcommand{{\\oplus}}{{\\oplus}}\n\\newcommand{{\\ominus}}{{\\ominus}}\n\n\n\\newcommand{{\\overline{I}}}{{\\overline{I}}}\n\\newcommand{{\\overline{J}}}{{\\overline{J}}}\n\\newcommand{{\\overline{K}}}{{\\overline{K}}}\n\\newcommand{{\\overline{L}}}{{\\overline{L}}}\n\\newcommand{{\\overline{M}}}{{\\overline{M}}}\n\\newcommand{{\\overline{N}}}{{\\overline{N}}}\n\\newcommand{{\\overline{P}}}{{\\overline{P}}}\n\\newcommand{{\\overline{Q}}}{{\\overline{Q}}}\n\n\\newcommand{{\\underline{I}}}{{\\underline{I}}}\n\\newcommand{{\\underline{J}}}{{\\underline{J}}}\n\\newcommand{{\\underline{K}}}{{\\underline{K}}}\n\\newcommand{{\\underline{L}}}{{\\underline{L}}}\n\\newcommand{{\\underline{M}}}{{\\underline{M}}}\n\\newcommand{{\\underline{N}}}{{\\underline{N}}}\n\\newcommand{{\\underline{Q}}}{{\\underline{Q}}}\n\\newcommand{{\\underline{P}}}{{\\underline{P}}}\n\n\n\n\n\\newcommand{{\\mathbb D}}{{\\mathbb D}}\n\\newcommand{{\\mathbb \\DB}}{{\\mathbb \\bar{D}}}\n\\newcommand{{\\mathbb S}}{{\\mathbb S}}\n\n\\newcommand{{\\bf D}}{{\\bf D}}\n\\newcommand{{\\bar{\\bfD}}}{{\\bar{{\\bf D}}}}\n\n\\newcommand{{\\bm D}}{{\\bm D}}\n\\newcommand{{\\bar{\\bmD}}}{{\\bar{{\\bm D}}}}\n\n\\newcommand{\\bm{\\nabla}}{\\bm{\\nabla}}\n\\newcommand{\\bar{\\bm{\\nabla}}}{\\bar{\\bm{\\nabla}}}\n\n\n\\def\\mathsurround=0pt{\\mathsurround=0pt}\n\\def\\fracm#1#2{\\hbox{\\large{${\\frac{{#1}}{{#2}}}$}}}\n\n\\def\\eqalign#1{\\,\\vcenter{\\openup2\\jot \\mathsurround=0pt\n \\ialign{\\strut \\hfil$\\displaystyle{##}$&$\n \\displaystyle{{}##}$\\hfil\\crcr#1\\crcr}}\\,}\n\\newif\\ifdtup\n\\def\\panorama{\\global\\dtuptrue \\openup2\\jot \\mathsurround=0pt\n \\everycr{\\noalign{\\ifdtup \\global\\dtupfalse\n \\vskip-\\lineskiplimit \\vskip\\normallineskiplimit\n \\else \\penalty\\interdisplaylinepenalty \\fi}}}\n\\def\\eqalignno#1{\\panorama \\tabskip=\\humongous %\neqalignno\n \\halign to\\displaywidth{\\hfil$\\displaystyle{##}$\n \\tabskip=0pt&$\\displaystyle{{}##}$\\hfil\n \\tabskip=\\humongous&\\llap{$##$}\\tabskip=0pt\n \\crcr#1\\crcr}}\n\\def\\eqalignnotwo#1{\\panorama \\tabskip=\\humongous\n \\halign to\\displaywidth{\\hfil$\\displaystyle{##}$\n \\tabskip=0pt&$\\displaystyle{{}##}$\n \\tabskip=0pt&$\\displaystyle{{}##}$\\hfil\n \\tabskip=\\humongous&\\llap{$##$}\\tabskip=0pt\n \\crcr#1\\crcr}}\n\n\n\n\\def\\de{{\\nabla}} \n\\def{\\bar{\\nabla}}} % 
\\bar{del{{\\bar{\\nabla}}} \n\\def\\sp#1{{}^{#1}} \n\\def\\sb#1{{}_{#1}} \n\\def{\\cal M}{{\\cal M}}\n\\def{\\cal R}{{\\cal R}}\n\\def{\\cal Y}{{\\cal Y}}\n\\def{\\cal F}{{\\cal F}}\n\n\n\n\n\\def{\\bar{\\de}}{{\\bar{\\de}}}\n\n\n\\newcommand{\\begin{subequations}}{\\begin{subequations}}\n\\newcommand{\\end{subequations}}{\\end{subequations}}\n\\newcommand{\\boxedalign}[1]{%\n \\[\\fbox{%\n \\addtolength{\\linewidth}{-2\\fboxsep}%\n \\addtolength{\\linewidth}{-2\\fboxrule}%\n \\begin{minipage}{\\linewidth}\\vspace{-8pt}%\n \\begin{align}#1\\end{align}%\n \\end{minipage}\\nonumber%\n }\\]\n }\n\n\\newcommand{{\\bar i}}{{\\bar i}}\n\\newcommand{{\\bar j}}{{\\bar j}}\n\\newcommand{{\\bar k}}{{\\bar k}}\n\\newcommand{{\\bar l}}{{\\bar l}}\n\\newcommand{{\\bar p}}{{\\bar p}}\n\\newcommand{{\\bar q}}{{\\bar q}}\n\\newcommand{{\\bar 1}}{{\\bar 1}}\n\\newcommand{{\\bar 2}}{{\\bar 2}}\n\\newcommand{{\\bar 0}}{{\\bar 0}}\n\\newcommand{{\\bar n}}{{\\bar n}}\n\\newcommand{{\\bar m}}{{\\bar m}}\n\\newcommand{{\\bar 4}}{{\\bar 4}}\n\n\n\\newcommand{{\\rm L}}{{\\rm L}}\n\\newcommand{{\\rm R}}{{\\rm R}}\n\n\\newcommand{{{\\bm s}}}{{{\\bm s}}}\n\\newcommand{{{\\bar{\\mu}}}}{{{\\bar{\\mu}}}}\n\\newcommand{{{\\bm S}}}{{{\\bm S}}}\n\n\\newcommand{{\\bar{\\varphi}}}{{\\bar{\\varphi}}}\n\n\\newcommand{{\\bar{\\xi}}}{{\\bar{\\xi}}}\n\\newcommand{{\\bar{\\lambda}}}{{\\bar{\\lambda}}}\n\n\n\\newcommand{{\\bm{\\xi}}}{{\\bm{\\xi}}}\n\\newcommand{{\\bm{\\xb}}}{{\\bm{{\\bar{\\xi}}}}}\n\n\\newcommand{{\\bm{\\omega}}}{{\\bm{\\omega}}}\n\n\n\n\\newcommand{\\eqinbox}[1]{%\n \\[\\fbox{%\n \\addtolength{\\linewidth}{-2\\fboxsep}%\n \\addtolength{\\linewidth}{-2\\fboxrule}%\n \\begin{minipage}{\\linewidth}\\vspace{-8pt}%\n \\begin{align}#1\\end{align}%\n \\end{minipage}\\nonumber%\n }\\]\n }\n\n\\newcommand{\\notag \\\\}{\\notag \\\\}\n\\newcommand{\\varepsilon}{\\varepsilon}\n\n\\numberwithin{equation}{section}\n\\newcommand{{\\nabla}}{{\\nabla}}\n\\newcommand{{\\mathrm{c.c.}}}{{\\mathrm{c.c.}}}\n\n\\newcommand{{\\veps}}{{\\varepsilon}}\n\n\n\\newcommand{\\mathsf{Sp}}{\\mathsf{Sp}}\n\\newcommand{\\mathsf{SU}}{\\mathsf{SU}}\n\\newcommand{\\mathsf{SL}}{\\mathsf{SL}}\n\\newcommand{\\mathsf{GL}}{\\mathsf{GL}}\n\\newcommand{\\mathsf{SO}}{\\mathsf{SO}}\n\\newcommand{\\mathsf{U}}{\\mathsf{U}}\n\\newcommand{\\mathsf{S}}{\\mathsf{S}}\n\\newcommand{\\mathsf{PSU}}{\\mathsf{PSU}}\n\\newcommand{\\mathsf{PSL}}{\\mathsf{PSL}}\n\\newcommand{\\mathsf{OSp}}{\\mathsf{OSp}}\n\\newcommand{\\mathsf{Spin}}{\\mathsf{Spin}}\n\\newcommand{\\mathsf{Mat}}{\\mathsf{Mat}}\n\\newcommand{\\mathsf{ISO}}{\\mathsf{ISO}}\n\n\n\\begin{document}\n\n\\begin{titlepage}\n\\begin{flushright}\nOctober, 2020 \\\\\n\\end{flushright}\n\\vspace{5mm}\n\n\\begin{center}\n{\\Large \\bf \nMassless particles in five and higher dimensions}\n\\end{center}\n\n\\begin{center}\n\n{\\bf Sergei M. Kuzenko and Alec E. Pindur} \\\\\n\\vspace{5mm}\n\n\\footnotesize{\n{\\it Department of Physics M013, The University of Western Australia\\\\\n35 Stirling Highway, Perth W.A. 6009, Australia}}\n~\\\\\n\\vspace{2mm}\nEmail: \\texttt{ \nsergei.kuzenko@uwa.edu.au, 21504287@student.uwa.edu.au}\\\\\n\n\n\\end{center}\n\n\\begin{abstract}\n\\baselineskip=14pt\nWe describe a five-dimensional analogue of Wigner's operator equation \n${\\mathbb W}_a = \\lambda P_a$, where ${\\mathbb W}_a $ is the Pauli-Lubanski vector, \n$P_a$ the energy-momentum operator, and $\\lambda$ the helicity of a massless particle. 
\nHigher dimensional generalisations are also given.\n\\end{abstract}\n\\vspace{5mm}\n\n\n\n\\vfill\n\n\\vfill\n\\end{titlepage}\n\n\\newpage\n\\renewcommand{\\thefootnote}{\\arabic{footnote}}\n\\setcounter{footnote}{0}\n\n\n\\allowdisplaybreaks\n\n\n\\section{Introduction}\n\nThe unitary representations of the Poincar\\'e group in four dimensions were classified \nby Wigner in 1939 \\cite{Wigner}, see \\cite{Weinberg95} for a recent review. Our modern understanding of elementary particles is based on this classification. \n\nUnitary representations of the Poincar\\'e group $\\mathsf{ISO}_0(d-1, 1)$ in higher dimensions,\n$d>4$, have been studied in the literature, see, e.g., \\cite{BB}. However, there still remain some aspects that are not fully understood, see, e.g., \\cite{Weinberg2020}\nfor a recent discussion. In this note we analyse the \nirreducible massless representations of $\\mathsf{ISO}_0(4,1)$ with a finite (discrete) spin.\n\nWe recall that the Poincar\\'e algebra $\\mathfrak{iso} (d-1,1) $ in $d$ dimensions is characterised by the commutation relations\\footnote{We make use of the mostly plus Minkowski metric $\\eta_{ab}$ and normalise the Levi-Civita tensor $\\ve_{a_1 \\dots a_d}$ \nby $\\ve_{0 1\\dots d-1}=1$.} \n\\begin{subequations} \\label{PA}\n\\begin{eqnarray}\n \\big[P_{a}, P_{b}\\big] & = & 0 ~, \\\\\n \\big[J_{ab}, P_{c}\\big] & = & {\\rm i}\\eta_{ac}P_{b} - {\\rm i}\\eta_{bc}P_a ~,\\\\\n \\big[J_{a b}, J_{c d}\\big] & = & {\\rm i} \\eta_{a c}J_{b d} - {\\rm i} \\eta_{a d} J_{b c} + {\\rm i} \\eta_{b d} J_{a c} - {\\rm i} \\eta_{bc} J_{a d} ~.\n\\end{eqnarray}\n\\end{subequations}\nIn any unitary representation of (the universal covering group of) the Poincar\\'e group, \nthe energy-momentum operator\n$P_a$ and the Lorentz generators $J_{ab}$ are Hermitian.\nFor every dimension $d$, the operator $P^a P_a$ is a Casimir operator. Other Casimir operators are dimension dependent. \n\nIn four dimensions, the second Casimir operator is \n${\\mathbb W}^a{\\mathbb W}_a $,\nwhere \n\\begin{equation}\n{\\mathbb W}^a=\\frac12 \\ve^{abcd} { J}_{bc}{ P}_d\n\\label{PLv}\n\\end{equation}\nis the Pauli-Lubanski vector.\nUsing the commutation relations \n \\eqref{PA}, it follows that \n the Pauli-Lubanski vector is translationally invariant, \n\\begin{subequations}\n\\begin{eqnarray}\n\\big[ P_a, {\\mathbb W}_b \\big]=0~,\n\\end{eqnarray}\nand possesses the following properties:\n\\begin{eqnarray}\n{\\mathbb W}^a{ P}_a&=&0 ~, \\\\\n\\big[{ J}_{ab},{\\mathbb W}_c \\big]&=&{\\rm i} \\eta_{ac}{\\mathbb W}_b\n-{\\rm i} \\eta_{bc}{\\mathbb W}_a~,\n\\\\ \n\\big[ {\\mathbb W}_a,{\\mathbb W}_b \\big]&=& {\\rm i} \\ve_{abcd}{\\mathbb W}^c{P}^d~. \n\\end{eqnarray}\n\\end{subequations}\nThe irreducible massive representations are characterised by the conditions \n\\begin{subequations}\n\\begin{eqnarray}\n { P}^a{ P}_a &=&-m^2\\,{\\mathbbm 1}~, \\qquad m^2 > 0~, \n \\qquad \n {\\rm sign } \\,{ P}^0 >0~, \\label{1.4a} \\\\\n {\\mathbb W}^a{\\mathbb W}_a &=& m^2s(s+1)\\,{\\mathbbm 1}~, \n \\end{eqnarray} \n \\end{subequations}\nwhere the quantum number $s$ is called spin. Its possible values \nin different representations are \n$s= 0, 1\/2, 1, 3\/2 \\dots$. \nThe massless representations are characterised by the condition $P^aP_a =0$. \nFor the physically interesting massless representations, it holds that \n\\begin{eqnarray}\n{\\mathbb W}_a = \\lambda P_a~,\n\\label{1.5}\n\\end{eqnarray}\nwhere the parameter $\\lambda$ determines the representation and is called the helicity. 
Its possible values are $0, \\pm\\frac12, \\pm 1, $ and so on. The parameter $|\\lambda |$ is called the spin of a massless particle.\n\nIn this paper we present a generalisation of Wigner's equation \\eqref{1.5}\nto five and higher dimensions.\n\n\n\\section{Unitary representations of $\\mathsf{ISO}_0(4,1)$}\n\nThe five-dimensional analogue of \\eqref{PLv} is the Pauli-Lubanski tensor\n\\begin{equation}\n\\mathbb{W}^{ab} = \\frac{1}{2}\\ve^{abcde}J_{cd}P_e~.\n\\label{2.1}\n\\end{equation}\nIt is translationally invariant, \n\\begin{eqnarray}\n\\big[ {\\mathbb W}_{ab} , P_c \\big] =0~,\n\\end{eqnarray}\nand possesses the following properties:\n\\begin{subequations}\n\\begin{eqnarray}\n\\mathbb{W}_{ab} P^b &=& 0~, \\\\\n\\big[\\mathbb{W}_{ab}, J_{cd}\\big] &=& {\\rm i} \\eta_{ac}\\mathbb{W}_{bd} \n- {\\rm i} \\eta_{ad}\\mathbb{W}_{bc} - {\\rm i} \\eta_{bc}\\mathbb{W}_{ad} + {\\rm i} \\eta_{bd}\\mathbb{W}_{ac}~, \\\\\n \\big[\\mathbb{W}_{ab}, \\mathbb{W}_{cd}\\big] \n &=& {\\rm i}\\ve_{acdfg} {\\mathbb{W}_b}^f P^{g} - {\\rm i} \\ve_{bcdfg} {\\mathbb{W}_a}^f P^{g}~.\n\\end{eqnarray}\n\\end{subequations}\nMaking use of ${\\mathbb W}_{ab}$ allows one to construct two Casimir operators, which are \n\\begin{equation}\n\\mathbb{W}_{ab} \\mathbb{W}^{ab}~, \\qquad \n\\mathbb{H} := \\mathbb{W}^{ab}J_{ab}~.\n\\label{2.4}\n\\end{equation}\n\n\n\\subsection{Irreducible massive representations} \n\nThe irreducible massive representations of the Poincar\\'e group $\\mathsf{ISO}_0(4,1)$\nare characterised by two conditions\n\\begin{subequations}\\label{2.5}\n\\begin{eqnarray}\n \\frac{1}{8} \\Big( \\mathbb{W}^{ab}\\mathbb{W}_{ab} + m \\mathbb{H} \\Big) &=& \n m^2 s_1(s_1 + 1)\\mathbbm{1} ~, \\\\\n\\frac{1}{8} \\Big( \\mathbb{W}^{ab}\\mathbb{W}_{ab} - m\\mathbb{H} \\Big) &=& \nm^2 s_2(s_2 + 1)\\mathbbm{1} ~,\n\\end{eqnarray}\n\\end{subequations}\nin addition to \\eqref{1.4a}.\nHere $s_1$ and $s_2$ are two spin values corresponding to the two $\\mathsf{SU}(2)$ subgroups of the universal covering group \n$\\mathsf{Spin}(4) \\cong \\mathsf{SU}(2) \\times \\mathsf{SU}(2)$\nof the little group.\\footnote{The equations \\eqref{2.5} were independently derived during the academic year 1992-93 by Arkady Segal and David Zinger, who were undergraduates at Tomsk State University at the time.}\n\n\n\\subsection{Irreducible massless representations} \\label{section2.2}\n\nIt turns out that all irreducible massless representations of \n$\\mathsf{ISO}_0(4,1)$ with a finite spin are characterised by the condition \n\\begin{eqnarray}\n\\varepsilon_{abcde}P^{c}\\mathbb{W}^{de} =0 \\quad \\Longleftrightarrow \\quad\nP^{[a} {\\mathbb W}^{bc]} =0~.\n\\label{main}\n\\end{eqnarray}\nBoth Casimir operators \\eqref{2.4} are equal to zero in these representations, \n$\\mathbb{W}_{ab} \\mathbb{W}^{ab}=0$ and \n$ \\mathbb{W}^{ab}J_{ab} =0$.\n\n\nLet $ \\ket{p, \\sigma} $ be an orthonormal basis in the Hilbert space of one-particle states, where $p^a $ \ndenotes the momentum of a particle, $ P^a \\ket{p, \\sigma} = p^a \\ket{p, \\sigma} $,\nand $\\sigma$ stands for the spin degrees of freedom.\nFor a massless particle, we choose as our standard 5-momentum $k^{a} = (E,0,0,0,E)$. 
On this eigenstate:\n\\begin{equation}\n\\mathbb{W}^{ab} \\ket{k, \\sigma} = \\frac{1}{2}\\varepsilon^{abcde}J_{cd}P_{e} \\ket{k, \\sigma} = \\frac{E}{2} \\left(\\varepsilon^{abcd4}J_{cd} - \\varepsilon^{abcd0}J_{cd} \\right) \\ket{k, \\sigma}~.\n\\end{equation}\nRunning through the elements of $\\mathbb{W}^{ab}$, one finds:\n\\begin{equation} \\label{eq:eq5}\n\\begin{split}\n\\mathbb{W}^{01} = \\mathbb{W}^{41} = -EJ_{23} ~,\\qquad & \\mathbb{W}^{12} = E(J_{30} +J_{34}) ~,\\\\\n\\mathbb{W}^{02} = \\mathbb{W}^{42} = -EJ_{31} ~,\\qquad & \\mathbb{W}^{23} = E(J_{10} +J_{14}) ~,\\\\\n\\mathbb{W}^{03} = \\mathbb{W}^{43} = -EJ_{12}~, \\qquad & \\mathbb{W}^{31} = E(J_{20} +J_{24}) ~,\\\\\n\\mathbb{W}^{04} = & \\ 0 ~.\n\\end{split} \n\\end{equation}\nIf we rescale these generators and define:\n\\begin{equation}\\label{eq:eq7}\n\\begin{split}\n\\mathbb{R}_1 \\equiv \\frac{1}{E}\\mathbb{W}^{23}~, \\qquad \\mathbb{R}_{2} \\equiv & \\frac{1}{E}\\mathbb{W}^{31}~, \\qquad \\mathbb{R}_{3} \\equiv \\frac{1}{E}\\mathbb{W}^{12} ~,\\\\\n\\qquad \\cJ_{i} \\equiv &- \\frac{1}{E} \\mathbb{W}^{0i} ~,\n\\end{split}\n\\end{equation}\nthen these new operators satisfy:\n\\begin{equation}\n\\big[\\cJ_{i}, \\cJ_{j} \\big] = {\\rm i}\\varepsilon_{ijk}\\cJ_k~, \\qquad \n\\big[\\cJ_i , \\mathbb{R}_j \\big] = {\\rm i}\\varepsilon_{ijk} \\mathbb{R}_k~ , \n\\qquad \\big[\\mathbb{R}_i , \\mathbb{R}_j \\big] = 0~.\n\\end{equation}\nThese are the commutation relations for the three-dimensional Euclidean algebra, $\\mathfrak{iso}(3)$. The irreducible unitary representations of $\\mathfrak{iso}(3)$ are labelled by a continuous parameter $\\mu^2$, corresponding to the value taken by the Casimir operator $\\mathbb{R}^{i} \\mathbb{R}_{i}$. Since the $\\mathbb{R}_{i}$ commute among themselves, the operators can be simultaneously diagonalised, and the eigenvectors $\\ket{r_{i}}$ can be taken as a basis. However, the only restriction on these eigenvalues is that $r_i r^i = \\mu^2$, which for non-zero $\\mu^2$ permits a continuous basis and thus an infinite-dimensional representation. Because we want only finite-dimensional representations, we must take:\n\\begin{equation} \\label{eq:eq6}\n\\mu^2 = 0 \\quad \\Longrightarrow \\quad \\mathbb{R}_{i} = 0 \\quad \\Longleftrightarrow \\quad J_{0i} = -J_{4i}~.\n\\end{equation}\nWe are therefore restricted to those representations in which the translation component is trivial, and so only the generators $\\cJ_i$ remain, which generate the algebra $\\mathfrak{so}(3)$. The algebra of the little group on massless representations is thus $\\mathfrak{so}(3) $ which is isomorphic to $\\mathfrak{su}(2)$. \nAs stated previously, the irreducible representations of $\\mathfrak{su}(2)$ are labelled by a non-negative (half) integer $s$ and have a single Casimir operator $\\cJ^i \\cJ_i$ which takes the value $s(s+1)\\mathbbm{1}$. This analysis leads to \\eqref{main}. The two forms in \\eqref{main} indeed carry the same information: one has $\\ve_{abcde}P^{[c}\\mathbb{W}^{de]} = \\ve_{abcde}P^{c}\\mathbb{W}^{de}$, and contraction with the Levi-Civita tensor defines a nondegenerate pairing between totally antisymmetric rank-three and rank-two tensors in five dimensions, so either expression vanishes if and only if the other does.\n\nThe spin value of a massless representation can still be found using a `spin' operator.\nThe following relation holds on massless representations:\n\\begin{equation}\n \\mathbb{S}_a :=- \\frac{1}{4} \\varepsilon_{abcde} J^{bc} \\mathbb{W}^{de} \n = \\cJ^2 P_{a} = s(s+1)P_a~,\n \\label{2.12}\n\\end{equation}\nwhere $\\cJ^2 = \\cJ^{i}\\cJ_{i}$ is the Casimir operator for the $\\mathfrak{so}(3)$ generators in \\eqref{eq:eq7}. The parameter $s$ is the spin of a massless particle.\nIts possible values in different representations are \n$s= 0, 1\/2, 1, $ and so on. 
Equation \\eqref{2.12} naturally holds for massless spinor and vector fields \\cite{Pindur}.\n\n\n\nIn general, the operator ${\\mathbb S}_a$ is not translationally invariant,\n\\begin{equation}\n\\big[ \\mathbb{S}_b, P_{a} \\big] = \\frac{{\\rm i}}{2}\\varepsilon_{abcde}P^{c}\\mathbb{W}^{de} ~.\n\\end{equation}\nIt is only for the massless representations with finite spin that the quantity on the right vanishes so that the spin operator commutes with the momentum operators. \nEquation \\eqref{2.12} is the five-dimensional analogue of the operator equation \\eqref{1.5}. Its consistency condition is \\eqref{main}.\\footnote{The consistency condition for \\eqref{1.5} is $P^{[a} {\\mathbb W}^{b]} =0$, which is the four-dimensional counterpart of \\eqref{main}.}\n\n\n\\section{Generalisations} \n\nThe results of section \\ref{section2.2} can be generalised to $d>5$ dimensions. \nThe Pauli-Lubanski tensor \\eqref{2.1} turns into\n\\begin{eqnarray}\n\\mathbb{W}^{a_1 \\dots a_{d-3}} = \\frac{1}{2}\\ve^{a_1 \\dots a_{d-3} bc e}J_{bc}P_e~.\n\\end{eqnarray}\nThe condition \\eqref{main} is replaced with\n\\begin{eqnarray}\nP^{[a} {\\mathbb W}^{b_1 \\dots b_{d-3}]} =0~.\n\\label{maind}\n\\end{eqnarray}\nThis equation is very similar to another that has appeared in the literature\nusing the considerations of conformal invariance \n\\cite{Bracken,BrackenJ,Siegel,SiegelZ}. One readily checks that \\eqref{maind} is equivalent to \n\\begin{eqnarray}\nJ_{ab} P^2 + 2 J_{c[a} P_{b]} P^c =0 \\quad \\implies \\quad \nJ_{c[a} P_{b]} P^c =0 ~.\n\\label{maind2}\n\\end{eqnarray}\nThe latter is solved on the momentum eigenstates \nby $J_{ab} p^b \\propto p_a$, which is of the form considered\nin \\cite{Bracken,BrackenJ,Siegel,SiegelZ}.\\footnote{We are grateful to Warren Siegel for useful comments.}\n\n\nEquation \\eqref{maind} characterises all irreducible massless representations of \n$\\mathsf{ISO}_0(d-1,1)$ with a finite (discrete) spin. Finally, the spin equation \\eqref{2.12}\nturns into \n\\begin{eqnarray}\n \\mathbb{S}_a := \\frac{(-1)^d}{2 (d-3)!} \\varepsilon_{abc e_1 \\dots e_{d-3}} J^{bc} \\mathbb{W}^{e_1 \\dots e_{d-3}} \n = \\cJ^2 P_{a} ~,\n\\end{eqnarray}\nwhere $\\cJ^2 = \\frac12 \\cJ^{ij} \\cJ_{ij} $ is the quadratic Casimir operator of the algebra $\\mathfrak{so}(d-2)$, with $i,j=1, \\dots , d-2$.\nFor every irreducible massless representation of \n$\\mathsf{ISO}_0(d-1,1)$ with a finite spin, it holds that $\\cJ^2 \\propto {\\mathbbm 1}$. \n\nIt is possible to derive a five-dimensional analogue of the operator equation \ndefining the $\\cN=1$ superhelicity $\\kappa$ in four dimensions \\cite{BK}. 
\nThe latter has the form\\footnote{In the supersymmetric case, the conventions of \\cite{BK} are used, \nin particular the Levi-Civita tensor $\\ve_{abcd}$ is normalised by $\\ve_{0123}=-1$.} \n\\begin{eqnarray}\n{\\mathbb L}_a = \\Big( \\kappa +\\frac 14 \\Big) P_a~, \n\\label{3.4}\n\\end{eqnarray}\nwhere the operator ${\\mathbb L}_a$ is defined by \n\\begin{eqnarray}\n{\\mathbb L}_a = {\\mathbb W}_a - \\frac{1}{16} (\\tilde \\sigma_a)^{\\ad \\alpha} \\big[ Q_\\alpha , \\bar Q_\\ad\\big] ~.\n\\label{3.5}\n\\end{eqnarray}\nThe fundamental properties of the operator ${\\mathbb L}_a $\n(the latter differs from the supersymmetric Pauli-Lubanksi vector \\cite{SS})\nare that it is translationally invariant and commutes with the supercharges $Q_\\alpha$ and $\\bar Q_\\ad$ in the massless representations of the $\\cN=1$ super-Poincar\\'e group.\\footnote{The irreducible massless\n representation of superhelicity $\\kappa$ is the direct sum of two irreducible massless Poincar\\'e representations corresponding to the helicity values $\\kappa$ and $\\kappa+\\frac12$.} \nThe superhelicity operator \\eqref{3.5} was generalised to higher dimensions\nin \\cite{PZ,AMT}. Generalisations of \\eqref{3.4} to five and higher dimension will be discussed elsewhere.\n\\\\\n\n\\noindent\n{\\bf Acknowledgements:}\\\\\nWe thank Warren Siegel for pointing out important references, and Michael Ponds for comments on the manuscript. \nSMK is grateful to Ioseph Buchbinder for email correspondence, and to Arkady Segal for discussions. The work of SMK work is supported in part by the Australian \nResearch Council, project No. DP200101944.\n\n\n\\begin{footnotesize}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRecent advances on information technologies facilitate real-time message exchanges and decision-making among geographically dispersed strategic entities. This has boosted the emergence of new generation of networked systems; e.g., the smart grid and intelligent transportation systems. These networked systems share some common features: on one hand, the entities do not belong to a single authority and may pursue different or even competitive interests; on the other hand, each entity keeps private information which is unaccessible to others. It is of great interest to design practical mechanisms which allow for efficient coordination of self-interested entities and ensure network-wide performance. Game theory along with its distributed computation algorithms represents a promising tool to achieve the goal.\n\nIn many applications, distributed computation is executed in uncertain environments. For example, mobile robots are deployed in an operating environment where environmental distribution functions are unknown to robots in advance; e.g.,~\\cite{Stankovic.Johansson.Stipanovic:11,Zhu.Martinez:SICON13}. In traffic pricing, pricing policies of system operators may not be available to drivers. In optimal power flow control, the structural parameters of power systems are of national security interest and kept confidential from the public. The absence of such information makes game components; e.g., objective and constraint functions, inaccessible to players. 
Very recently, the informational constraint has been stimulating the investigation of \\emph{adaptive} algorithms, including~\\cite{Frihauf.Krstic.Basar:11,SL-MK:11,Stankovic.Johansson.Stipanovic:11} for continuous games and~\\cite{JRM-HPY-GA-JSS:06,Zhu.Martinez:SICON13} for discrete games.\n\n\\emph{Literature review.} Non-cooperative game theory has been widely used as a mathematical framework to reason about multiple selfish decision makers; see for instance~\\cite{Basar.Olsder:82}. These games\nhave found a variety of applications in economics, communication and robotics; see~\\cite{Altman.Basar:98,Chung.Hollinger.Isler:11,Dockner.Jorgensen.Long.Sorger:06,Frihauf.Krstic.Basar:11,Mitchell.Bayen.Tomlin:05}. In non-cooperative games, decision making of individuals is inherently distributed. Very recently, this attractive feature has been utilized to synthesize cooperative control schemes, and a partial reference list for this regard includes~\\cite{Arsie.Savla.ea:TAC09,Arslan.Marden.Shamma:07a,Li.Marden:11,Stankovic.Johansson.Stipanovic:11,Zhu.Martinez:SICON13}.\n\nThe set of papers more relevant to our work is concerned with generalized Nash games where strategy spaces are continuous and the actions of players are coupled through utility and constraint functions. Generalized Nash games are first formulated in~\\cite{Arrow.Debreu:54}. Since then, a great effort has been dedicated to studying the existence and structural properties of generalized Nash equilibria in; e.g.,~\\cite{Rosen:65} and the recent survey paper~\\cite{Facchinei.Kanzow:07}. A number of algorithms have been proposed to compute generalized Nash equilibria, including ODE-based methods~\\cite{SL-TB:87,Rosen:65}, nonlinear Gauss-Seidel-type approaches~\\cite{Pang.Scutari.Facchinei.Wang:08}, iterative primal-dual Tikhonov schemes~\\cite{Yin.Shanbhag.Mehta:11} and best-response dynamics~\\cite{Palomar.Eldar:10}.\n\nAs mentioned, the set of papers~\\cite{Frihauf.Krstic.Basar:11,SL-MK:11,JRM-HPY-GA-JSS:06,Stankovic.Johansson.Stipanovic:11,Zhu.Martinez:SICON13}\ninvestigates the \\emph{adaptiveness} of game theoretic learning algorithms. However, none of the papers mentioned in the last two paragraphs studies the \\emph{robustness} of the algorithms with respect to network unreliability; e.g., data transmission delays, quantization and dynamically changing topologies. In contrast, the robustness has been extensively studied for consensus and distributed optimization, including, to name a few,~\\cite{Jadbabaie.Lin.Morse:03,Nedic.Ozdaglar.Parrilo:08} for time-varying topologies,~\\cite{Rabbat.Nowak:05} for quantization and~\\cite{Munz.Papachristodoulou.Allgower:10} for time delays. Yet the adaptiveness issue has not been addressed in this group of papers.\n\n\n\\emph{Contributions.} In this paper, we aim to solve a class of generalized convex games over unreliable networks where the structures of component functions are unknown to the associated players. That is, we aim to simultaneously address the issues of adaptiveness and robustness for generalized convex games.\n\nIn the games, each player is associated with a convex objective function and subject to a private convex inequality constraint and a private convex constraint set. The component functions are assumed to be smooth and are unknown to the associated players. 
We investigate distributed first-order gradient-based computation algorithms for the following two scenarios:\n\n[Scenario One] The game map is pseudo-monotone and the maximum delay (equivalently, the maximum number of packet dropouts or link breaks) is bounded but unknown;\n\n[Scenario Two] The inequality constraints are absent, the (reduced) game map is strongly monotone and the maximum delay is known.\n\nInspired by simultaneous perturbation stochastic approximation for optimization in~\\cite{JS:03}, we utilize finite differences with diminishing approximation errors to estimate first-order gradients. We propose two distributed algorithms for the two scenarios and formally prove their asymptotic convergence. The comparison of the two proposed algorithms is given in Section~\\ref{sec:comparison}. The analysis integrates the tools from convex analysis, variational inequalities and simultaneous perturbation stochastic approximation. The algorithm performance is verified through demand response on the IEEE 30-bus Test System. A preliminary version of the current paper was published in~\\cite{Zhu.Frazzoli:CDC12} where the adaptiveness issue was not investigated.\n\n\n\\section{Problem formulation}\\label{sec:formulation}\n\nIn this section, we present the generalized convex game considered in the paper. It is followed by the notions and notations used throughout the paper.\n\n\\subsection{Generalized convex game}\n\nConsider the set of players $V \\triangleq \\{1,\\cdots,N\\}$ where the state of player~$i$ is denoted as $x^{[i]}\\in X_i\\subseteq \\mathds{R}^{n_i}$. The players are selfish and pursue different interests. In particular, given the joint state $x^{[-i]}\\in X_{-i} \\triangleq \\prod_{j\\neq i}X_j$ of its rivals\\footnote{We use the shorthand $-i\\triangleq V\\setminus\\{i\\}$ throughout the paper.}, each player~$i$ aims to solve the following program parameterized by $x^{[-i]}\\in X_{-i}$: \\begin{align}\\min_{x^{[i]}\\in X_i}f_i(x^{[i]},x^{[-i]}),\\quad {\\rm s.t.} \\quad G^{[i]}(x^{[i]},x^{[-i]})\\leq0,\\label{e10}\\end{align} where $f_i : \\mathds{R}^n\\rightarrow \\mathds{R}$ and $G^{[i]} : \\mathds{R}^n\\rightarrow\\mathds{R}^{m_i}$ with $n\\triangleq \\sum_{i\\in V}n_i$. In the remainder of the paper, we assume that the following properties about problem~\\eqref{e10} hold:\n\n\\begin{assumption} The maps $f_i$ and $G^{[i]}$ are smooth, and the maps $f_i(\\cdot,x^{[-i]})$ and $G^{[i]}(\\cdot,x^{[-i]})$ are convex in $x^{[i]}$. The set $X_i$ is convex and compact, and $X\\cap Y\\neq\\emptyset$ where $X \\triangleq \\prod_{i\\in V}X_i$ and $Y \\triangleq \\prod_{i\\in V}Y_i$ with $Y_i \\triangleq \\{x\\in X\\; |\\; G^{[i]}(x) \\leq 0\\}$.\\label{asm7}\n\\end{assumption}\n\n\n\nWe now proceed to provide an equivalent form of problem~\\eqref{e10}. To achieve this, we define the set-valued map $X_i^f : X_{-i} \\rightarrow 2^{X_i}$ as follows: \\begin{align*}X_i^f(x^{[-i]}) = \\{x^{[i]}\\in X_i \\; | \\; G^{[i]}(x^{[i]},x^{[-i]})\\leq 0\\}.\\end{align*} The set $X_i^f(x^{[-i]})$ represents the collection of feasible actions for player~$i$ when its opponents choose the joint state of $x^{[-i]}\\in X_{-i}$. With the map $X_i^f$, problem~\\eqref{e10} of player~$i$ is equivalent to the following one: \\begin{align}\\min_{x^{[i]}\\in X_i^f(x^{[-i]})}f_i(x^{[i]},x^{[-i]}).\\label{e11}\\end{align}\n\nGiven $x^{[-i]}\\in X_{-i}$, each player~$i$ aims to solve problem~\\eqref{e11}. 
The collection of such coupled optimization problems constitutes the \\emph{generalized convex game} (for short, CVX). For the CVX game, we adopt the \\emph{generalized Nash equilibrium} (for short, GNE) as the solution notion, from which none of the players is willing to unilaterally deviate:\n\n\\begin{definition} The joint state $\\tilde{x}\\in X\\cap Y$ is a generalized Nash equilibrium of the CVX game if the following holds: \\begin{align}&f_i(\\tilde{x})\\leq f_i(x^{[i]},\\tilde{x}^{[-i]}),\\quad \\forall x^{[i]}\\in X_i^f(\\tilde{x}^{[-i]}),\\quad \\forall i\\in V.\\nonumber\\end{align}\\label{def1}\n\\end{definition}\n\nDenote by $\\mathbb{X}_{\\rm CVX}$ the set of GNEs of the CVX game. The following lemma verifies the non-emptiness of $\\mathbb{X}_{\\rm CVX}$.\n\n\\begin{lemma} The set of generalized Nash equilibria of the CVX game is not empty, i.e., $\\mathbb{X}_{\\rm CVX} \\neq \\emptyset$.\\label{lem3}\n\\end{lemma}\n\n\\textbf{Proof:} Recall that $f_i$ is convex and $X\\cap Y$ is compact. Hence, $\\mathbb{X}_{\\rm CVX} \\neq \\emptyset$ is a direct result of~\\cite{Facchinei.Kanzow:07,Rosen:65}. \\oprocend\n\n\nIn the CVX game, the players desire to seek a GNE. It is noted that $f_i$, $G^{[i]}$, and $X_i$ are private information of player~$i$ and inaccessible to others. In order to compute a GNE, it becomes necessary that the players are interconnected and able to communicate with each other to exchange their partial estimates of GNEs. The interconnection between players will be represented by a directed graph ${\\mathcal{G}} = (V,\\mathcal{E})$ where\n$\\mathcal{E}\\subset V\\times V\\setminus {\\rm diag}(V)$ is the set of\nedges. The neighbor relation is determined by the dependency of $f_i$ and\/or $G^{[i]}$ on $x^{[j]}$. In particular, $(i,j)\\in\\mathcal{E}$ if and only if $f_i$ and\/or $G^{[i]}$ depend upon $x^{[j]}$. Denote by $\\mathcal{N}_i^{\\rm IN} \\triangleq \\{j\\in V\\;|\\; (i,j)\\in\\mathcal{E}\\}$ the set of in-neighbors of\nplayer~$i$.\n\nIn this paper, we aim to develop distributed algorithms which allow for the computation of GNEs in the presence of the following two challenges.\n\\begin{enumerate}\n\\item Data transmissions between the players in $V$ are unreliable, and subject to transmission delays, packet dropouts and\/or link breaks. We let $x^{[j]}(k - \\tau^{[i]}_j(k))$ with $\\tau^{[i]}_j(k)\\geq0$ be either $(i)$ the outdated state of player~$j$ received by player~$i$ at time~$k$ due to transmission delays; or $(ii)$ the latest state of player~$j$ received by player~$i$ by time~$k$ due to packet dropouts and\/or link breaks. For example, if the link from player $j$ to player $i$ is broken at time $k$, then player~$i$ cannot receive the message $x^{[j]}(k)$ from player $j$. In this case, player~$i$ uses the most recently received message sent from player $j$, which is $x^{[j]}(k-\\tau_j^{[i]}(k))$. Denote by $\\Lambda_i(k)\\triangleq \\{x^{[j]}(k-\\tau^{[i]}_j(k))\\}_{j\\in\\mathcal{N}^{\\rm IN}_i}$ the set of latest states of $\\mathcal{N}_i^{\\rm IN}$ received by player~$i$ at time~$k$. Let $\\tau_{\\max} \\triangleq \\sup_{k\\geq0}\\max_{i\\in V}\\max_{j\\in\\mathcal{N}_i^{\\rm IN}}\\tau^{[i]}_j(k)$ be the maximum delay or the maximum number of successive packet dropouts.\n\\item Each player~$i$ is unaware of the structures of $f_i$ and $G^{[i]}$ but can observe their realizations. 
That is, if the players input the joint state $x$ into $f_i$, then player~$i$ can observe the realized value of $f_i(x)$.\n\\end{enumerate}\n\nThe disclosure of the value $f_i(x)$ is case dependent. In~\\cite{Stankovic.Johansson.Stipanovic:11,Zhu.Martinez:SICON13}, mobile sensors are unaware of environmental distribution functions but they can observe induced utilities via on-site measurements. This is an example of engineering systems. Section~\\ref{sec:simulations} will provide a concrete example of demand response of power networks where the system operator discloses realized values via communication given the decisions of end-users. This is an example of social systems.\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\nIn the CVX game, each player is subject to an inequality constraint. In optimization literature, Lagrangian relaxation is a widely used approach to handle inequality constraints. Following this vein, we will perform Lagrangian relaxation on the CVX game and obtain the unconstrained convex (for short, UC) game.\n\nIn the UC game, there are two sets of players: the set of primal players $V$ and the set of dual players $V_m \\triangleq \\{1,\\cdots,N\\}$. Define the following private Lagrangian $\\mathcal{H}_i : \\mathds{R}^n\\times\\mathds{R}^{m_i}_{\\geq0}\\rightarrow\\mathds{R}$: $\\mathcal{H}_i(x,\\mu^{[i]}) \\triangleq f_i(x) + \\langle \\mu^{[i]}, G^{[i]}(x)\\rangle$, for primal player~$i\\in V$ and dual player~$i\\in V_m$. Given any $x^{[-i]}\\in X_{-i}$ and $\\mu^{[i]}\\in\\mathds{R}^{m_i}_{\\geq0}$, each primal player~$i\\in V$ aims to minimize $\\mathcal{H}_i$ over $x^{[i]}\\in X_i$; i.e., $\\min_{x^{[i]}\\in X_i}\\mathcal{H}_i(x^{[i]},x^{[-i]},\\mu^{[i]})$, and, instead, the objective of the dual player $i\\in V_m$ is to maximize $\\mathcal{H}_i$ over $\\mu^{[i]}\\in M_i\\subseteq\\mathds{R}^{m_i}_{\\geq0}$; i.e., $\\min_{\\mu^{[i]}\\in M_i}-\\mathcal{H}_i(x,\\mu^{[i]})$ where the set $M_i\\subseteq\\mathds{R}^{m_i}_{\\geq0}$ is convex, non-empty and will be introduced in the sequel. We let $\\eta \\triangleq (x,\\mu)$ and the set $K \\triangleq X\\times M$ with $M \\triangleq \\prod_{i\\in V}M_i$. The above game among the players in $V\\cup V_m$ is referred to as the UC game parameterized by the set $M$. The set of $M$ will play an important role in determining the properties of the UC game and we will discuss the choice of $M$ later. The solution concept for the UC game parameterized by $M$ is the standard notion of Nash equilibrium given below:\n\n\\begin{definition} The joint state of $(\\tilde{x},\\tilde{\\mu}) \\in X\\times M$ is a Nash equilibrium of the UC game parameterized by $M$ if the following holds for all $i\\in V$: $\\mathcal{H}_i(\\tilde{x},\\tilde{\\mu}^{[i]})\\leq \\mathcal{H}_i(x^{[i]},\\tilde{x}^{[-i]},\\tilde{\\mu}^{[i]}),\\quad \\forall x^{[i]}\\in X_i$, and the following holds for all $i\\in V_m$: $\\mathcal{H}_i(\\tilde{x},\\mu^{[i]})\\leq \\mathcal{H}_i(\\tilde{x},\\tilde{\\mu}^{[i]}),\\quad \\forall \\mu^{[i]}\\in M_i$.\\label{def2}\n\\end{definition} Denote by $\\mathbb{X}_{\\rm UC}(M)$ the set of NEs of the UC game parameterized by $M$. We now proceed to illustrate the relation between the UC game and the CVX game. 
Before doing so, let us state the following boundedness assumption on the dual solutions:\n\n\\begin{assumption} There is a vector $\\vartheta \\triangleq (\\vartheta_i)_{i\\in V}\\in\\mathds{R}^N_{>0}$ such that for any $(\\tilde{x},\\tilde{\\mu})\\in \\mathbb{X}_{\\rm UC}(\\mathds{R}^m_{\\geq0})$, $\\|\\tilde{\\mu}^{[i]}\\| \\leq \\vartheta_i$ for $i\\in V$.\\label{asm1}\n\\end{assumption}\n\n\\begin{remark} We would like to make a remark on Assumption~\\ref{asm1}. For convex optimization, it is shown in~\\cite{Hiriart.Lemarechal:96} that the Lagrangian multipliers are uniformly bounded under the standard Slater's condition. This boundedness property is used in~\\cite{Nedic.Ozdagalr:08} and further in~\\cite{Zhu.Martinez:09tac,Zhu.Martinez:10tac} to solve convex and non-convex programs in centralized and distributed manners. In this paper, Assumption~\\ref{asm1}, on one hand, ensures the existence of GNEs, and on the other hand, guarantees the boundedness of estimates and gradients for the case of unknown $\\tau_{\\max}$. We will discuss the verification of Assumption~\\ref{asm1} in Section~\\ref{sec:discussion}.\\oprocend\\label{rem6}\n\\end{remark}\n\nThe following proposition characterizes the relations between the UC game and the CVX game.\n\n\\begin{proposition} The following properties hold:\n\\begin{enumerate}\n\\item[(P1)] Consider any $(\\tilde{x}, \\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}(M)$. We have $\\tilde{x}\\in \\mathbb{X}_{\\rm CVX}$ if the following properties hold:\n\\begin{itemize}\n\\item feasibility; i.e., $G^{[i]}(\\tilde{x})\\leq0$ for all $i\\in V$;\n\\item slackness complementarity; i.e., $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle = 0$ for all $i\\in V$.\n\\end{itemize}\n\\item[(P2)] Suppose Assumption~\\ref{asm1} holds. Consider the UC game parameterized by $M$ with $M_i \\triangleq \\{\\mu^{[i]}\\in\\mathds{R}^{m_i}_{\\geq0}\\;|\\;\\|\\mu^{[i]}\\|\\leq \\vartheta_i+r_i\\}$ for some $r_i>0$. Then $\\mathbb{X}_{\\rm UC}(M)\\neq\\emptyset$. In addition, for any $(\\tilde{x}, \\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}(M)$, it holds that $\\tilde{x}\\in \\mathbb{X}_{\\rm CVX}$.\n\\end{enumerate} \\label{pro5}\n\\end{proposition}\n\n\\textbf{Proof:} The proof of (P1) is a slight extension of the results in~\\cite{Bertsekas:09} to our game setup. For the sake of completeness, we summarize the analysis here. Since $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle = 0$, we have the following relation: \\begin{align}f_i(\\tilde{x}) = \\mathcal{H}_i(\\tilde{x},\\tilde{\\mu}^{[i]}) - \\langle\\tilde{\\mu}^{[i]},G^{[i]}(\\tilde{x})\\rangle = \\mathcal{H}_i(\\tilde{x},\\tilde{\\mu}^{[i]}).\\label{e7}\\end{align} Since $(\\tilde{x},\\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}(M)$, then the following relation holds for all $x^{[i]}\\in X_i$: \\begin{align}\\mathcal{H}_i(\\tilde{x},\\tilde{\\mu}^{[i]})\n-\\mathcal{H}_i(x^{[i]},\\tilde{x}^{[-i]},\\tilde{\\mu}^{[i]}) \\leq 0.\\label{e17}\\end{align} Substitute~\\eqref{e7} into~\\eqref{e17}, and it renders the following for any $x^{[i]}\\in X_i$:\n\\begin{align}f_i(\\tilde{x}) \\leq f_i(x^{[i]},\\tilde{x}^{[-i]}) + \\langle\\tilde{\\mu}^{[i]},G^{[i]}(x^{[i]},\\tilde{x}^{[-i]})\\rangle,\\nonumber\\end{align} and thus $f_i(\\tilde{x}) \\leq f_i(x^{[i]},\\tilde{x}^{[-i]}),\\quad \\forall x^{[i]}\\in X_i^f(\\tilde{x}^{[-i]})$, by noting that $\\langle\\tilde{\\mu}^{[i]},G^{[i]}(x^{[i]},\\tilde{x}^{[-i]})\\rangle \\leq 0$ for any $ x^{[i]}\\in X_i^f(\\tilde{x}^{[-i]})$. 
Recall that $G^{[i]}(\\tilde{x})\\leq0$ for all $i\\in V$. The above arguments hold for all $i\\in V$, and thus it establishes that $\\tilde{x}\\in\\mathbb{X}_{\\rm CVX}$.\n\nWe now proceed to show (P2). Since $\\mu^{[i]}\\geq0$, $\\mathcal{H}_i$ is a positive combination of convex functions of $x^{[i]}$. Thus $\\mathcal{H}_i$ is convex in $x^{[i]}$. It is easy to see that $-\\mathcal{H}_i$ is convex (actually affine) in $\\mu^{[i]}$. Since $X$ and $M$ are convex and compact, it follows that $\\mathbb{X}_{\\rm UC}(M) \\neq \\emptyset$ by the results on generalized convex games in; e.g.,~\\cite{Facchinei.Kanzow:07,Rosen:65}. Pick any $(\\tilde{x},\\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}(M)$ and then $\\|\\tilde{\\mu}^{[i]}\\| \\leq \\vartheta_i$ by Assumption~\\ref{asm1}. We then have the following relation: \\begin{align}{\\mathcal{H}}_i(\\tilde{x},\\tilde{\\mu}^{[i]})\n -{\\mathcal{H}}_i(\\tilde{x},\\mu^{[i]}) \\geq 0,\\quad \\forall\\mu^{[i]}\\in M_i,\\nonumber\\end{align} and thus,\n\\begin{align}\\langle \\mu^{[i]}-\\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle \\leq 0,\\quad \\forall\\mu^{[i]}\\in M_i.\\label{e6}\\end{align}\n\nWe now proceed to verify by contradiction the feasibility of $G^{[i]}(\\tilde{x}) \\leq 0$ and the complementary slackness of $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle = 0$. Assume that $G^{[i]}(\\tilde{x})_{\\ell} > 0$. We choose $\\mu^{[i]}$ such that $\\mu^{[i]}_{\\ell} = \\tilde{\\mu}^{[i]}_{\\ell} + \\pi r_i$ and $\\mu^{[i]}_{\\ell'} = \\tilde{\\mu}^{[i]}_{\\ell'}$ for $\\ell' \\neq \\ell$ where $\\pi>0$ is sufficiently small such that $\\mu^{[i]}\\in M_i$. Then it follows from~\\eqref{e6} that $r_i G^{[i]}(\\tilde{x})_{\\ell} \\leq 0$, which is a contradiction. Hence, we have the feasibility of $G^{[i]}(\\tilde{x})\\leq 0$. Combine it with $\\tilde{\\mu}^{[i]}\\geq0$, and it renders that $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle \\leq 0$. On the other hand, we let $\\mu^{[i]} = 0$ in the relation~\\eqref{e6} and have $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle \\geq 0$. Hence, it renders that $\\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\tilde{x})\\rangle = 0$. We reach the desired result by using the property (P1). \\oprocend\n\n\\subsection{Notations and notions}\\label{subsection:notations}\n\nThe vector $\\textbf{1}_n$ represents the column vector with $n$ ones. The vector $e^{[\\ell]}_{n}$ is the one in $\\mathds{R}^n$ whose $\\ell$-th coordinate is one and whose other coordinates are all zero. For any pair of vectors $a,b\\in\\mathds{R}^p$, the relation $a\\leq b$ means $a_{\\ell}\\leq b_{\\ell}$ for all $1\\leq\\ell\\leq p$. Since the function $f_i$ is continuous and $X$ is compact, the following quantities are well-defined:\n$\\sigma_{i,\\min} = \\inf_{x\\in X}f_i(x),\\quad \\sigma_{i,\\max} = \\sup_{x\\in X}f_i(x),\\quad\n\\sigma_{\\min} = \\inf_{x\\in X}\\sum_{i\\in V}f_i(x),\\quad \\sigma_{\\max} = \\sup_{x\\in X}\\sum_{i\\in V}f_i(x)$. Given a non-negative scalar sequence $\\{\\alpha(k)\\}_{k\\geq0}$, it is \\emph{summable} if $\\sum_{k=0}^{\\infty}\\alpha(k)<+\\infty$ and \\emph{square summable} if $\\sum_{k=0}^{\\infty}\\alpha(k)^2<+\\infty$.\n\nIn the remainder of the paper, we will use some notions about \\emph{monotonicity}, and the readers are referred to~\\cite{Aubin.Frankowska:09,Facchinei.Pang:03} for detailed discussion. 
The mapping $F : Z \\rightarrow Z'$ is \\emph{strongly monotone} with constant $\\rho > 0$ over $Z$ if for each pair of $\\eta,\\eta'\\in Z$, the following holds: \\begin{align*}\\langle F(\\eta) - F(\\eta'), \\eta - \\eta' \\rangle \\geq \\rho \\|\\eta - \\eta'\\|^2.\\end{align*} The mapping $F : Z \\rightarrow Z'$ is \\emph{monotone} over $Z$ if for each pair of $\\eta,\\eta'\\in Z$, the following holds: \\begin{align*}\\langle F(\\eta) - F(\\eta'), \\eta - \\eta' \\rangle \\geq 0.\\end{align*} The mapping $F : Z \\rightarrow Z'$ is \\emph{pseudo-monotone} over $Z$ if for each pair of $\\eta,\\eta'\\in Z$, it holds that $\\langle F(\\eta'), \\eta - \\eta' \\rangle \\geq 0$ implies $\\langle F(\\eta), \\eta - \\eta' \\rangle \\geq 0$. It is known that strong monotonicity implies monotonicity, and monotonicity implies pseudo-monotonicity~\\cite{Facchinei.Pang:03}.\n\nGiven the non-empty, convex, and closed set $Z\\subseteq {\\mathds{R}}^n$, the\nprojection operator onto $Z$, $P_Z: \\mathds{R}^n \\rightarrow Z$, is\ndefined as $P_Z[z] = {\\rm argmin}_{x\\in Z}\\|x-z\\|$. The following lemma states the non-expansiveness property of the projection operator; e.g.,~\\cite{Bertsekas:09}.\n\n\\begin{lemma} Let $Z$ be a non-empty, closed and convex set in ${\\mathds{R}}^n$. For any $z\\in{\\mathds{R}}^n$, the following holds for any $y\\in Z$: $\\|P_Z[z] - y\n \\|^2 \\leq \\| z - y \\|^2 - \\| P_Z[z] - z\\|^2$. \\label{lem6}\n\\end{lemma}\n\nWe define $\\nabla_{x^{[i]}}\\mathcal{H}_i(x,\\mu^{[i]})$ as the gradient of the convex function $\\mathcal{H}_i(\\cdot,x^{[-i]},\\mu^{[i]})$ at $x^{[i]}$, and $\\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x,\\mu^{[i]}) = G^{[i]}(x)$ as the gradient of the concave (actually affine) function $\\mathcal{H}_i(x,\\cdot)$ at $\\mu^{[i]}$. Define $\\nabla \\Theta$ as the map of partial gradients of the players' objective functions: \\begin{align*}\\nabla \\Theta(\\eta) &\\triangleq [\\nabla_{x^{[1]}}\\mathcal{H}_1(x,\\mu^{[1]})^T\\cdots \\nabla_{x^{[N]}}\\mathcal{H}_N(x,\\mu^{[N]})^T\\\\ &-\\nabla_{\\mu^{[1]}}\\mathcal{H}_1(x,\\mu^{[1]})^T\\cdots -\\nabla_{\\mu^{[N]}}\\mathcal{H}_N(x,\\mu^{[N]})^T]^T.\\end{align*} The map $\\nabla \\Theta$ is referred to as the \\emph{game map}. It is well-known that $\\nabla\\Theta$ is monotone (resp. strongly monotone) over $K$ if and only if its Jacobian $\\nabla^2\\Theta(\\eta)$ is positive semi-definite (resp. positive definite) for all $\\eta\\in K$.\n\nWhen the constraints $G^{[i]}(x)\\leq0$ are absent, we define $\\nabla_{x^{[i]}}f_i(x)$ as the gradient of the convex function $f_i(\\cdot,x^{[-i]})$ at $x^{[i]}$. Define $\\nabla \\Theta^r$ as the map of partial gradients of the players' objective functions: \\begin{align*}\\nabla \\Theta^r(x) \\triangleq [\\nabla_{x^{[1]}}f_1(x)^T\\cdots \\nabla_{x^{[N]}}f_N(x)^T]^T.\\end{align*} The map $\\nabla \\Theta^r$ is referred to as the \\emph{(reduced) game map} for the case where the constraints $G^{[i]}(x)\\leq0$ are absent.\n\n\nFor any convex and closed set $Z$, we also write $\\mathbb{P}_Z$ for the projection operator $P_Z$ onto the set $Z$.\n\nThe following lemma is a direct result of the smoothness of the component functions and the compactness of the constraint sets.\n\n\\begin{lemma} The image set of the game map $\\nabla\\Theta$ is uniformly bounded over $K$. In addition, the game map $\\nabla\\Theta$ (resp. $\\nabla\\Theta^r$) is Lipschitz continuous over $K$ with constant $L_{\\Theta}$ (resp. $L_{\\Theta^r}$). 
\\label{lem4}\n\\end{lemma}\n\n\\section{Distributed computation algorithms}\\label{sec:algorithm}\n\nIn this section, we will study the scenarios mentioned in the introduction. Proposition~\\ref{pro5} reveals that it suffices to compute a Nash equilibrium in $\\mathbb{X}_{\\rm UC}$ if Assumption~\\ref{asm1} holds. In the remainder of this section, we will suppose Assumption~\\ref{asm1} holds, and then each dual player~$i$ can define the set of $M_i \\triangleq \\{\\mu^{[i]}\\in\\mathds{R}^{m_i}_{\\geq0}\\;|\\;\\|\\mu^{[i]}\\|\\leq \\vartheta_i+r_i\\}$ with $r_i>0$. Assumption~\\ref{asm1} will be justified in Section~\\ref{sec:assumption}.\n\n\\subsection{Scenario one: pseudo-monotone game map and unknown $\\tau_{\\max}$}\n\nIn this section, we synthesize a distributed first-order algorithm for the case where the quantity $\\tau_{\\max}$ is unknown and the game map $\\nabla \\Theta$ is merely pseudo-monotone.\n\nAlgorithm~\\ref{ta:algo1} is based on projected primal-dual gradient methods where each primal or dual player~$i$\\footnote{In practical implementation, the update rules of primal player~$i$ and dual player~$i$ are both executed by real player~$i$.} updates its own estimate by moving along its partial gradient with a certain step-size and projecting the estimate onto the local constraint set. Recall that $f_i$ and $G^{[i]}$ are unknown to player~$i$. Then player~$i$ cannot compute partial gradient $\\nabla_{x^{[i]}}\\mathcal{H}_i$. Inspired by simultaneous perturbation stochastic approximation in; e.g.,~\\cite{JS:03}, we use finite differences to approximate the partial gradient $\\nabla_{x^{[i]}}\\mathcal{H}_i$; i.e., for $\\ell = 1,\\cdots,n_i$, \\begin{align}&[\\nabla_{x^{[i]}}\\mathcal{H}_i(x(k),\\mu^{[i]}(k))]_{\\ell}\\nonumber\\\\\n&\\approx \\frac{1}{2c_i(k)}\\big(\\mathcal{H}_i(x^{[i]}(k)+c_i(k)e^{[\\ell]}_{n_i},x^{[-i]}(k),\\mu^{[i]}(k))\\nonumber\\\\\n&-\\mathcal{H}_i(x^{[i]}(k)-c_i(k)e^{[\\ell]}_{n_i},x^{[-i]}(k),\\mu^{[i]}(k))\\big),\\label{e51}\\end{align}\nwhere $-c_i(k)e^{[\\ell]}_{n_i}$ and $c_i(k)e^{[\\ell]}_{n_i}$ are two-way perturbations. In addition, $\\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x(k),\\mu^{[i]}(k)) = G^{[i]}(x(k))$. In Algorithm~\\ref{ta:algo1}, $\\mathcal{D}^{[i]}_x(k)$ (resp. $\\mathcal{D}^{[i]}_{\\mu}(k)$) is the right-hand side of~\\eqref{e51} (resp. $\\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x,\\mu^{[i]})$) with delayed estimates. Here we assume that player~$i$ can observe the \\emph{values} associated with $f_i$ and $G^{[i]}$ and thus $\\mathcal{D}^{[i]}_x(k)$ and $\\mathcal{D}^{[i]}_{\\mu}(k)$.\n\nThe scalar $c_i(k)>0$ is the perturbation magnitude. When it is small, the finite-difference approximation is close to the partial gradient. In order to asymptotically eliminate the error induced by the finite-difference approximation, $c_i(k)$ needs to be diminishing. The decreasing rate of $c_i(k)$ should match that of computation step-sizes $\\alpha(k)$. Otherwise, the convergence to Nash equilibrium may be prevented. This is captured by (A4) of Theorem~\\ref{the1}.\n\n\\begin{algorithm}[htbp]\\caption{Distributed gradient-based algorithm for Scenario one} \\label{ta:algo1}\n\\textbf{Require:} Each primal player $i\\in V$ chooses the initial state $x^{[i]}(0)\\in X_i$. And each dual player~$i\\in V_m$ defines the set $M_i$ and chooses the initial state $\\mu^{[i]}(0)\\in M_i$.\n\n\\textbf{Ensure:} At each $k \\ge 0$, each player in $V\\cup V_m$ executes the following steps:\n\n1. 
Each primal player $i\\in V$ updates its state according to the following rule:\\begin{align*} x^{[i]}(k+1) = \\mathbb{P}_{X_i}[x^{[i]}(k) - \\alpha(k)\\mathcal{D}^{[i]}_x(k)],\n\\end{align*} where the approximate gradient $\\mathcal{D}^{[i]}_x(k)$ is given by \\begin{align}&[\\mathcal{D}^{[i]}_x(k)]_{\\ell} = \\frac{1}{2c_i(k)}\n\\{f_i(x^{[i]}(k)+c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\nonumber\\\\\n&+ \\langle\\mu^{[i]}(k),G^{[i]}(x^{[i]}(k)+c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\rangle\\nonumber\\\\\n&-f_i(x^{[i]}(k)-c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\nonumber\\\\\n&- \\langle\\mu^{[i]}(k),G^{[i]}(x^{[i]}(k)\n-c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\rangle\\},\\label{e38}\\end{align} for $\\ell = 1,\\cdots,n_i$.\n\n2. Each dual player $i\\in V_m$ updates its state according to the following rule:\\begin{align*}\n\\mu^{[i]}(k+1) = \\mathbb{P}_{M_i}[\\mu^{[i]}(k) + \\alpha(k)\\mathcal{D}^{[i]}_{\\mu}(k)],\n\\end{align*} where the gradient $\\mathcal{D}^{[i]}_{\\mu}(k)$ is given by $\\mathcal{D}^{[i]}_{\\mu}(k) = \\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x^{[i]}(k),\\Lambda_i(k),\\mu^{[i]}(k)) = G^{[i]}(x^{[i]}(k),\\Lambda_i(k))$.\n\n3. Repeat for $k = k+1$.\n\\end{algorithm}\n\nThe following theorem demonstrates that Algorithm~\\ref{ta:algo1} is able to achieve a GNE from any initial state in $K$.\n\n\\begin{theorem} Suppose the following hold:\n\\begin{enumerate}\n\\item[(A1)] the quantity $\\tau_{\\max}$ is finite;\n\\item[(A2)] Assumptions~\\ref{asm7} and~\\ref{asm1} hold;\n\\item[(A3)] the sequence of $\\{\\alpha(k)\\}$ is positive, not summable but square summable;\n\\item[(A4)] the sequence of $\\displaystyle{\\{\\alpha(k)\\max_{i\\in V}c_i(k)\\}}$ is summable;\n\\item[(A5)] the game map $\\nabla\\Theta$ is pseudo-monotone over $K$.\n\\end{enumerate}\nFor any initial state $\\eta(0)\\in K$, the sequence of $\\{\\eta(k)\\}$ generated by Algorithm~\\ref{ta:algo1} converges to some $(\\tilde{x},\\tilde{\\mu})\\in{\\mathbb{X}}_{\\rm UC}$ where $\\tilde{x}\\in\\mathbb{X}_{\\rm CVX}$.\\label{the1}\n\\end{theorem}\n\n\\textbf{Proof:} First of all, it is noted that $K$ is compact, since each $X_i$ is compact and each $M_i$ is compact by Assumption~\\ref{asm1} and (P2) in Proposition~\\ref{pro5}; moreover, $\\nabla\\Theta$ is uniformly bounded over $K$ by Lemma~\\ref{lem4}.\n\nThen we write Algorithm~\\ref{ta:algo1} in the following compact form:\n\\begin{align}\\eta(k+1) = \\mathbb{P}_K[\\eta(k)-\\alpha(k)\\mathcal{D}(k)],\\label{e19}\n\\end{align} where $\\mathcal{D}(k) \\triangleq ((\\mathcal{D}^{[i]}_x(k)^T)^T_{i\\in V},(-\\mathcal{D}^{[i]}_{\\mu}(k)^T)^T_{i\\in V_m})^T$ is subject to time delays and perturbations. Then pick any $\\eta\\in K$. 
It follows from the non-expansiveness of the projection operator $\\mathbb{P}_K$, Lemma~\\ref{lem6}, that for any $\\eta\\in K$, the following relation holds:\n\\begin{align}&\\|\\eta(k+1)-\\eta\\|^2\\nonumber\\\\\n&\\leq \\|\\eta(k)-\\alpha(k)\\mathcal{D}(k)-\\eta\\|^2-\\|\\eta(k+1)-\\eta(k)+\\alpha(k)\\mathcal{D}(k)\\|^2\\nonumber\\\\\n&\\leq \\|\\eta(k)-\\alpha(k)\\mathcal{D}(k)-\\eta\\|^2\\nonumber\\\\\n&= \\|\\eta(k)-\\eta\\|^2 - 2\\alpha(k)\\langle \\mathcal{D}(k), \\eta(k)-\\eta\\rangle\\nonumber\\\\\n&+ \\alpha(k)^2\\|\\mathcal{D}(k)\\|^2.\\label{e25}\n\\end{align}\n\nIt follows from~\\eqref{e25} that\n\\begin{align}&2\\alpha(k)\\langle \\mathcal{D}(k), \\eta(k)-\\eta\\rangle\\leq \\|\\eta(k) - \\eta\\|^2\\nonumber\\\\\n&- \\|\\eta(k+1) - \\eta\\|^2 +\\alpha(k)^2\\|\\mathcal{D}(k)\\|^2.\\label{e23}\\end{align}\n\nFor any $k\\geq0$ and $i$, we define the following: \\begin{align}&\\hat{\\mathcal{D}}^{[i]}_x(k) \\triangleq \\nabla_{x^{[i]}}\\mathcal{H}_i(x^{[i]}(k),\\{x^{[j]}(k)\\}_{j\\in\\mathcal{N}^{\\rm IN}_i},\\mu^{[i]}(k)),\\nonumber\\\\\n&\\hat{\\mathcal{D}}^{[i]}_{\\mu}(k) \\triangleq \\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x^{[i]}(k),\\{x^{[j]}(k)\\}_{j\\in\\mathcal{N}^{\\rm IN}_i},\\mu^{[i]}(k)),\\nonumber\\\\\n&\\hat{\\mathcal{D}}(k) \\triangleq ((\\hat{\\mathcal{D}}^{[i]}_x(k)^T)^T_{i\\in V},(-\\hat{\\mathcal{D}}^{[i]}_{\\mu}(k)^T)^T_{i\\in V_m})^T,\\label{e16}\\end{align} which are the gradients evaluated at the delay-free states. So, the quantity $\\hat{\\mathcal{D}}(k)$ is free of time delays and perturbations.\n\nSimilarly, for any $k\\geq0$ and $i$, we define the following: \\begin{align}&\\tilde{\\mathcal{D}}^{[i]}_x(k) \\triangleq \\nabla_{x^{[i]}}\\mathcal{H}_i(x^{[i]}(k),\\Lambda_i(k),\\mu^{[i]}(k)),\\nonumber\\\\\n&\\tilde{\\mathcal{D}}^{[i]}_{\\mu}(k) \\triangleq \\nabla_{\\mu^{[i]}}\\mathcal{H}_i(x^{[i]}(k),\\Lambda_i(k),\\mu^{[i]}(k)),\\nonumber\\\\\n&\\tilde{\\mathcal{D}}(k) \\triangleq ((\\tilde{\\mathcal{D}}^{[i]}_x(k)^T)^T_{i\\in V},(-\\tilde{\\mathcal{D}}^{[i]}_{\\mu}(k)^T)^T_{i\\in V_m})^T,\\nonumber\\end{align} which are the gradients evaluated at the delayed states. Then the quantity $\\tilde{\\mathcal{D}}(k)$ is free of perturbations but subject to time delays.\n\nWith the above notations at hand, the relation~\\eqref{e23} implies the following: \\begin{align}&2\\alpha(k)\\langle \\hat{\\mathcal{D}}(k), \\eta(k)-\\eta\\rangle \\leq \\alpha(k)^2\\|\\mathcal{D}(k)\\|^2 + \\|\\eta(k) - \\eta\\|^2\\nonumber\\\\&- \\|\\eta(k+1) - \\eta\\|^2 + 2\\alpha(k)\\langle\\tilde{\\mathcal{D}}(k)-\\mathcal{D}(k),\\eta(k)-\\eta\\rangle\\nonumber\\\\\n&+ 2\\alpha(k)\\langle\\hat{\\mathcal{D}}(k)-\\tilde{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\\nonumber\\\\\n&\\leq \\alpha(k)^2\\|\\mathcal{D}(k)\\|^2 + \\|\\eta(k) - \\eta\\|^2\\nonumber\\\\&- \\|\\eta(k+1) - \\eta\\|^2 + 2\\alpha(k)\\|\\tilde{\\mathcal{D}}(k)-\\mathcal{D}(k)\\| \\|\\eta(k)-\\eta\\|\\nonumber\\\\\n&+ 2\\alpha(k)\\|\\hat{\\mathcal{D}}(k)-\\tilde{\\mathcal{D}}(k)\\| \\|\\eta(k)-\\eta\\|.\\label{e30}\\end{align}\n\nNow let us examine the term with $\\|\\hat{\\mathcal{D}}(k)-\\tilde{\\mathcal{D}}(k)\\|$ on the right-hand side of~\\eqref{e30}. 
Since $\\eta(k)\\in K$ and $K$ is convex and closed, it then follows from the non-expansiveness of the projection operator $\\mathbb{P}_K$ that \\begin{align}\\|\\eta(k+1)-\\eta(k)\\|\n&=\\|\\mathbb{P}_K[\\eta(k)-\\alpha(k)\\mathcal{D}(k)]-\\mathbb{P}_K[\\eta(k)]\\|\\nonumber\\\\\n&\\leq\\alpha(k)\\|\\mathcal{D}(k)\\|.\\label{e14}\\end{align} Consequently,\nit follows from the Lipschitz continuity of the game map $\\nabla\\Theta$ and~\\eqref{e30},~\\eqref{e14} that \\begin{align} &\\|\\hat{\\mathcal{D}}(k)-\\tilde{\\mathcal{D}}(k)\\|\\nonumber\\\\\n&\\leq \\sum_{i\\in V}(\\|\\hat{\\mathcal{D}}^{[i]}_x(k)-\\tilde{\\mathcal{D}}^{[i]}_x(k)\\| + \\|\\hat{\\mathcal{D}}^{[i]}_{\\mu}(k)-\\tilde{\\mathcal{D}}^{[i]}_{\\mu}(k)\\|)\\nonumber\\\\\n&\\leq 2N L_{\\Theta}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\|\\eta(\\tau+1)-\\eta(\\tau)\\|\\nonumber\\\\\n&\\leq 2N L_{\\Theta}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\alpha(\\tau)\\|\\mathcal{D}(\\tau)\\|.\\label{e45}\\end{align}\n\nSince $K$ is compact and the image of $\\nabla\\Theta$ is uniformly bounded, relation~\\eqref{e45} implies that there is $\\Upsilon > 0$ such that \\begin{align}&2\\alpha(k)\\|\\hat{\\mathcal{D}}(k)-\\tilde{\\mathcal{D}}(k)\\|\\|\\eta(k)-{\\eta}\\|\\nonumber\\\\\n&\\leq 2NL_{\\Theta}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}2\\alpha(k)\\alpha(\\tau)\n\\|\\mathcal{D}(\\tau)\\|\\|\\eta(k)-{\\eta}\\|\\nonumber\\\\\n&\\leq 2N L_{\\Theta}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\big(\\alpha(k)^2+\\alpha(\\tau)^2\n\\|\\mathcal{D}(\\tau)\\|^2\\|\\eta(k)-{\\eta}\\|^2\\big)\\nonumber\\\\\n&\\leq 2N L_{\\Theta}\\tau_{\\max}\\alpha(k)^2 + \\Upsilon\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\alpha(\\tau)^2.\\label{e46}\\end{align}\n\nNow consider the term with $\\|\\tilde{\\mathcal{D}}(k)-\\mathcal{D}(k)\\|$ on the right-hand side in~\\eqref{e30}. Recall that $K$ is compact. By the Taylor expansion, we reach that \\begin{align}&\\mathcal{D}^{[i]}_x(k) =\\nabla_{x^{[i]}}f_i(x^{[i]}(k),\n\\Lambda_i(k))\\nonumber\\\\\n&+\\sum_{\\ell=1}^{m_i}\\mu^{[i]}_{\\ell}(k)\\nabla_{x^{[i]}}\nG^{[i]}_{\\ell}(x^{[i]}(k),\\Lambda_i(k))+O(c_i(k))\\textbf{1}_{n_i}.\\label{e21}\\end{align}\n\nWith the above relation, we have the following for $\\tilde{\\mathcal{D}}^{[i]}(k)-\\mathcal{D}^{[i]}(k)$:\n\\begin{align}\\tilde{\\mathcal{D}}^{[i]}_{\\mu}(k)-\\mathcal{D}^{[i]}_{\\mu}(k) = 0,\\quad\n\\tilde{\\mathcal{D}}^{[i]}_x(k)-\\mathcal{D}^{[i]}_x(k) = O(c_i(k))\\textbf{1}_{n_i}.\\label{e12}\\end{align}\n\nNotice that $\\|\\mathcal{D}(k)\\|$ is uniformly bounded. Substitute~\\eqref{e46} and~\\eqref{e12} into~\\eqref{e30}, sum over $[0,T]$, and it renders that there are some $\\Upsilon', \\Upsilon'' > 0$ such that the following estimate holds: \\begin{align}&2\\sum_{k=0}^T\\alpha(k)\\langle \\hat{\\mathcal{D}}(k), \\eta(k)-\\eta\\rangle\\nonumber\\\\\n&\\leq \\|\\eta(0) - \\eta\\|^2 - \\|\\eta(T+1) - \\eta\\|^2 + \\Upsilon'\\sum_{k=0}^T\\alpha(k)^2\\nonumber\\\\\n&+ \\Upsilon''\\sum_{k=0}^T\\alpha(k)\\max_{i\\in V}c_i(k).\\label{e26}\\end{align}\n\nSince $\\{\\alpha(k)^2\\}$ and $\\{\\alpha(k)\\max_{i\\in V}c_i(k)\\}$ are summable, then the right-hand side of~\\eqref{e26} is finite when we let $T\\rightarrow+\\infty$. We now show the following by contradiction: \\begin{align}\\liminf_{k\\rightarrow+\\infty}\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\\leq0.\\label{e34}\\end{align}\nAssume that there are $k_0\\geq0$ and $\\epsilon>0$ such that $\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle \\geq \\epsilon$ for all $k\\geq k_0$. 
Since $\\alpha(k) > 0$ and $\\{\\alpha(k)\\}$ is not summable, then we have the following: \\begin{align}&\\sum_{k=0}^{+\\infty}\\alpha(k)\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\\nonumber\\\\\n&\\geq \\sum_{k=0}^{k_0}\\alpha(k)\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\\nonumber\\\\\n&+ \\sum_{k=k_0+1}^{+\\infty}\\alpha(k)\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\\nonumber\\\\\n&\\geq \\sum_{k=0}^{k_0}\\alpha(k)\\langle \\hat{\\mathcal{D}}(k),\\eta(k)-\\eta\\rangle\n+ \\epsilon\\sum_{k=k_0+1}^{+\\infty}\\alpha(k)\\nonumber\\\\\n&= +\\infty.\\nonumber\\end{align}\nWe then reach a contradiction, and thus~\\eqref{e34} holds. Equivalently, the following relation holds: \\begin{align*}\\limsup_{k\\rightarrow+\\infty}\\langle \\hat{\\mathcal{D}}(k),\\eta-\\eta(k)\\rangle\\geq0.\\end{align*} Since $\\nabla\\Theta$ is continuous, the above relation implies that there is a limit point $\\tilde{\\eta}\\in K$ of the sequence $\\{\\eta(k)\\}$ such that the following holds: $\\langle \\nabla\\Theta(\\tilde{\\eta}),\\eta-\\tilde{\\eta}\\rangle\\geq0, \\quad \\forall\\eta\\in K$. Since $\\nabla\\Theta$ is pseudo-monotone, then we have the following relation: \\begin{align}\\langle \\nabla\\Theta(\\eta),\\eta-\\tilde{\\eta}\\rangle\\geq0, \\quad \\forall\\eta\\in K.\\label{e22}\\end{align}\n\nWe now set out to show the following by contradiction: \\begin{align}\\langle \\nabla\\Theta(\\tilde{\\eta}),\\eta-\\tilde{\\eta}\\rangle\\geq0,\\quad \\forall \\eta\\in K.\\label{e32}\\end{align} Assume that there is $\\hat{\\eta}\\in K$ such that the following holds: \\begin{align}\\langle \\nabla\\Theta(\\tilde{\\eta}),\\hat{\\eta}-\\tilde{\\eta}\\rangle < 0.\\label{e33}\\end{align} Now choose $\\varepsilon\\in(0,1)$, and define $\\eta_{\\varepsilon} \\triangleq \\hat{\\eta} + \\varepsilon(\\tilde{\\eta}-\\hat{\\eta})$. Since $K$ is convex, then we have $\\eta_{\\varepsilon}\\in K$. The following holds: \\begin{align}\\langle \\nabla\\Theta(\\eta_{\\varepsilon}),(1-\\varepsilon)(\\hat{\\eta}-\\tilde{\\eta})\\rangle = \\langle \\nabla\\Theta(\\eta_{\\varepsilon}),\\eta_{\\varepsilon}-\\tilde{\\eta}\\rangle\\geq0,\\label{e28}\n\\end{align} where in the inequality we use~\\eqref{e22}. It follows from~\\eqref{e28} that the following relation holds for any $\\varepsilon\\in(0,1)$: \\begin{align}\\langle \\nabla\\Theta(\\eta_{\\varepsilon}),\\hat{\\eta}-\\tilde{\\eta}\\rangle\\geq0.\\label{e29}\n\\end{align} Since $\\nabla\\Theta$ is continuous, letting $\\varepsilon\\rightarrow1$ in~\\eqref{e29} gives that: $\\langle \\nabla\\Theta(\\tilde{\\eta}),\\hat{\\eta}-\\tilde{\\eta}\\rangle\\geq0$, which contradicts~\\eqref{e33}. As a result, the relation~\\eqref{e32} holds. By~\\cite{Palomar.Eldar:10}, the relation~\\eqref{e32} implies that $\\tilde{\\eta}\\in\\mathbb{X}_{\\rm UC}$. 
We replace $\\eta$ with $\\eta(k)$ in~\\eqref{e32}, and establish the following:\n\\begin{align}\\langle \\nabla\\Theta(\\tilde{\\eta}),\\eta(k)-\\tilde{\\eta}\\rangle\\geq0.\\label{e31}\\end{align} Since $\\nabla\\Theta$ is pseudo-monotone, it follows from~\\eqref{e31} that\\begin{align}\\langle \\hat{\\mathcal{D}}(k), \\eta(k)-\\tilde{\\eta}\\rangle \\geq 0.\\label{e27}\\end{align} Replace $\\eta$ with $\\tilde{\\eta}$ in~\\eqref{e23}, apply~\\eqref{e27}, and it renders: \\begin{align}\\|\\eta(k+1) - \\tilde{\\eta}\\|^2 -\\|\\eta(k) - \\tilde{\\eta}\\|^2\n\\leq\\alpha(k)^2\\|\\mathcal{D}(k)\\|^2.\\label{e35}\\end{align} Sum up~\\eqref{e35} over $[s,k]$ and we have \\begin{align}\\|\\eta(k) - \\tilde{\\eta}\\|^2\n\\leq \\|\\eta(s) - \\tilde{\\eta}\\|^2 + \\sum_{\\tau=s}^{k-1}\\alpha(\\tau)^2\\|\\mathcal{D}(\\tau)\\|^2.\\label{e36}\\end{align}\nWe take the limits on $k$ first and then $s$ on both sides of~\\eqref{e36}. Since $\\{\\alpha(k)\\}$ is square summable and $\\{\\mathcal{D}(k)\\}$ is uniformly bounded, we have \\begin{align}\\limsup_{k\\rightarrow\\infty}\\|\\eta(k) - \\tilde{\\eta}\\|^2\n\\leq \\liminf_{s\\rightarrow\\infty}\\|\\eta(s) - \\tilde{\\eta}\\|^2.\\nonumber\\end{align} It implies that $\\{\\|\\eta(k) - \\tilde{\\eta}\\|\\}$ converges. Since $\\tilde{\\eta}$ is a limit point of $\\{\\eta(k)\\}$, $\\{\\eta(k)\\}$ converges to $\\tilde{\\eta}\\in\\mathbb{X}_{\\rm UC}$. Furthermore, we have $\\tilde{x}\\in\\mathbb{X}_{\\rm CVX}$ by (P2) in Proposition~\\ref{pro5}.\\oprocend\n\n\\subsection{Scenario two: strongly monotone reduced game map and known $\\tau_{\\max}$}\n\nIn the previous subsection, the convergence of Algorithm~\\ref{ta:algo1} is ensured under the mild assumptions: $\\tau_{\\max}$ is unknown and the game map $\\nabla\\Theta$ is merely pseudo-monotone. This set of assumptions requires the usage of diminishing step-sizes. Note that diminishing step-sizes may cause a slow convergence rate. The shortcoming can be partially addressed by choosing a constant step-size when the inequality constraints are absent and (A5) in Theorem~\\ref{the1} is strengthened to the following one: \\begin{assumption} The map $\\nabla\\Theta^r$ is strongly monotone over $X$ with constant $\\rho$.\\label{asm5}\\end{assumption}\n\n\\begin{algorithm}[htbp]\\caption{The distributed gradient-based algorithm for Scenario two} \\label{ta:algo2}\n\n\\textbf{Require:} Each player in $V$ chooses the initial state $x^{[i]}(0)\\in X_i$.\n\n\\textbf{Ensure:} At each $k \\ge 0$, each player~$i\\in V$ executes the following steps:\n\n1. Each player $i\\in V$ updates its state according to the following rule:\\begin{align*} x^{[i]}(k+1) = \\mathbb{P}_{X_i}[x^{[i]}(k) - \\alpha D^{[i]}(k)],\\end{align*} where the approximate gradient $D^{[i]}(k)$ is given by: \\begin{align*}[D^{[i]}(k)]_{\\ell} &= \\frac{1}{2c_i(k)}\n\\{f_i(x^{[i]}(k)+c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\nonumber\\\\\n&-f_i(x^{[i]}(k)-c_i(k)e^{[\\ell]}_{n_i},\\Lambda_i(k))\\},\\end{align*} for $\\ell = 1,\\cdots,n_i$.\n\n2. Repeat for $k = k+1$.\n\\end{algorithm}\n\nAlgorithm~\\ref{ta:algo2} is proposed to address Scenario two. In particular, Algorithm~\\ref{ta:algo2} is similar to Algorithm~\\ref{ta:algo1}, but has two distinctions. Firstly, Algorithm~\\ref{ta:algo2} excludes the inequality constraints, because the game map $\\nabla \\Theta$ cannot be strongly monotone due to $\\mathcal{H}_i$ being affine in $\\mu^{[i]}$. Secondly, thanks to the strong monotonicity of $\\nabla \\Theta^r$, a constant step-size replaces diminishing step-sizes. 
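To make the update rule concrete, the following Python sketch (an illustration under stated assumptions, not part of the formal algorithm) implements one iteration of Algorithm~\\ref{ta:algo2} from the viewpoint of a single player; the local objective is accessed only through its realized values, the delayed neighbor states play the role of $\\Lambda_i(k)$, and $X_i$ is assumed to be a box so that the projection reduces to componentwise clipping.\n\\begin{verbatim}\nimport numpy as np\n\ndef algo2_step(x_i, Lambda_i, f_i, alpha, c, lo, hi):\n    # x_i: player i's current state; Lambda_i: delayed neighbor states\n    # f_i(x_i, Lambda_i): observed realization of player i's objective\n    n_i = x_i.size\n    D_i = np.zeros(n_i)\n    for ell in range(n_i):\n        e = np.zeros(n_i)\n        e[ell] = 1.0\n        # two-point finite-difference approximation of the partial gradient\n        D_i[ell] = (f_i(x_i + c * e, Lambda_i)\n                    - f_i(x_i - c * e, Lambda_i)) / (2.0 * c)\n    # projected step with constant step-size alpha onto the box X_i = [lo, hi]\n    return np.clip(x_i - alpha * D_i, lo, hi)\n\\end{verbatim}\nThe same finite-difference construction, combined with a diminishing step-size $\\alpha(k)$ and a projected ascent step for the dual variables, yields one iteration of Algorithm~\\ref{ta:algo1}.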
The following theorem summarizes the convergence properties of Algorithm~\\ref{ta:algo2}.\n\n\\begin{theorem} Suppose the following hold:\n\\begin{enumerate}\n\\item[(B1)] the quantity $0\\leq\\tau_{\\max}<\\frac{\\rho}{4NL_{\\Theta^r}}$ is known;\n\\item[(B2)] Assumptions~\\ref{asm7} and~\\ref{asm1} hold;\n\\item[(B3)] the step-size $\\alpha\\in(0,\\frac{\\rho-4NL_{\\Theta^r}\\tau_{\\max}}{L^2_{\\Theta^r}(2+16N^2\\tau_{\\max})})$;\n\\item[(B4)] the sequence $\\{\\max_{i\\in V}c_i(k)\\}$ is summable;\n\\item[(B5)] Assumption~\\ref{asm5} holds.\n\\end{enumerate}\nFor any initial state $x(0)\\in X$, the sequence of $\\{x(k)\\}$ generated by Algorithm~\\ref{ta:algo2} converges to $\\tilde{x}\\in\\mathbb{X}_{\\rm CVX}$.\\label{the2}\n\\end{theorem}\n\n\\textbf{Proof:} First of all, for any $k\\geq0$ and $i$, we define $\\hat{D}^{[i]}(k) \\triangleq \\nabla_{x^{[i]}}f_i(x^{[i]}(k),\\{x^{[j]}(k)\\}_{j\\in\\mathcal{N}^{\\rm IN}_i})$, which is the gradient evaluated at the delay-free states. So, the quantity $\\hat{D}(k)$ is free of time delays and perturbations. Similarly, for any $k\\geq0$ and $i$, we define $\\tilde{D}^{[i]}(k) \\triangleq \\nabla_{x^{[i]}}f_i(x^{[i]}(k),\\Lambda_i(k))$ which is the gradient evaluated at the delayed states. Then the quantity $\\tilde{D}(k)$ is free of perturbations but subject to time delays.\n\nWe then write Algorithm~\\ref{ta:algo2} in the following compact form: \\begin{align}x(k+1) = \\mathbb{P}_X[x(k)-\\alpha D(k)].\\label{e39}\n\\end{align} It is noticed that $\\tilde{x}\\in{\\mathbb{X}}_{\\rm CVX}$ is a fixed point of the operator $\\mathbb{P}_X[\\cdot-\\alpha \\nabla\\Theta^r(\\cdot)]$ for any $\\alpha > 0$; i.e., it holds that $\\tilde{x} = \\mathbb{P}_X[\\tilde{x}-\\alpha \\nabla\\Theta^r(\\tilde{x})]$. By this, we have the following relations: \\begin{align}&\\|x(k+1)-\\tilde{x}\\|^2\\nonumber\\\\\n&= \\|\\mathbb{P}_X[x(k)-\\alpha D(k)] - \\mathbb{P}_X[\\tilde{x}-\\alpha \\nabla\\Theta^r(\\tilde{x})]\\|^2\\nonumber\\\\\n&\\leq \\|(x(k) - \\tilde{x})-\\alpha(D(k)-\\nabla\\Theta^r(\\tilde{x}))\\|^2\\nonumber\\\\\n&= \\|x(k) - \\tilde{x}\\|^2-2\\alpha\\langle x(k) - \\tilde{x}, \\hat{D}(k)-\\nabla\\Theta^r(\\tilde{x})\\rangle\\nonumber\\\\\n&+ \\alpha^2\\|(D(k)-\\tilde{D}(k))+(\\tilde{D}(k)-\\hat{D}(k))+(\\hat{D}(k)-\\nabla\\Theta^r(\\tilde{x}))\\|^2\\nonumber\\\\\n&-2\\alpha\\langle x(k) - \\tilde{x}, D(k)-\\tilde{D}(k)\\rangle\\nonumber\\\\\n&-2\\alpha\\langle x(k) - \\tilde{x}, \\tilde{D}(k)-\\hat{D}(k)\\rangle\\nonumber\\\\\n&\\leq (1-2\\rho\\alpha+4L_{\\Theta^r}^2\\alpha^2)\\|x(k)-\\tilde{x}\\|^2\\nonumber\\\\\n&+ 4\\alpha^2(\\|D(k)-\\tilde{D}(k)\\|^2+\\|\\tilde{D}(k)-\\hat{D}(k)\\|^2)\\nonumber\\\\\n&+2\\alpha\\|x(k) - \\tilde{x}\\|(\\|D(k)-\\tilde{D}(k)\\|+\\|\\tilde{D}(k)-\\hat{D}(k)\\|),\\label{e42}\\end{align}\nwhere we use the non-expansiveness property of the projection operator $\\mathbb{P}_X$ in the first inequality, and the strong monotonicity and the Lipschitz continuity of $\\nabla\\Theta^r$ in the last inequality. 
For the term of $\\|\\hat{D}(k)-\\tilde{D}(k)\\|$ in~\\eqref{e42}, one can derive the following relations: \\begin{align}&\\|\\hat{D}(k)-\\tilde{D}(k)\\| \\leq NL_{\\Theta^r}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\|x(\\tau+1)-x(\\tau)\\|\\nonumber\\\\\n&\\leq NL_{\\Theta^r}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\big(\\|x(\\tau+1)-\\tilde{x}\\|\n+\\|x(\\tau)-\\tilde{x}\\|\\big)\\nonumber\\\\\n&\\leq 2NL_{\\Theta^r}\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\|x(\\tau)-\\tilde{x}\\|.\\label{e47}\\end{align}\n\nSimilar to~\\eqref{e12}, we have \\begin{align}\\tilde{D}^{[i]}(k)-D^{[i]}(k) = O(c_i(k))\\textbf{1}_{n_i}.\\label{e13}\\end{align}\n\nSubstituting~\\eqref{e47} and~\\eqref{e13} into~\\eqref{e42} yields: \\begin{align}&\\|x(k+1)-\\tilde{x}\\|^2\\nonumber\\\\\n&\\leq (1-2\\rho\\alpha+4L_{\\Theta^r}^2\\alpha^2+4NL_{\\Theta^r}\\alpha\\tau_{\\max})\n\\|x(k)-\\tilde{x}\\|^2\\nonumber\\\\\n&+ (32N^2L_{\\Theta^r}^2\\alpha^2+4NL_{\\Theta^r}\\alpha)\n\\sum_{\\tau=k-\\tau_{\\max}}^{k-1}\\|x(\\tau)-\\tilde{x}\\|^2\\nonumber\\\\\n&+4\\alpha^2(\\max_{i\\in V}c_i(k))^2 + 2\\alpha\\max_{i\\in V}c_i(k)\\|x(k)-\\tilde{x}\\|,\\label{e48}\\end{align}\nwhere we use the relation $2ab \\leq a^2 + b^2$.\n\nTo study the convergence of $x(k)$ in~\\eqref{e48}, we define the following notation: \\begin{align*}&\\psi(k) \\triangleq x(k) - \\tilde{x},\\quad\\|\\chi(k)\\| \\triangleq \\max_{k-\\tau_{\\max}\\leq \\tau \\leq k-1}\\|\\psi(\\tau)\\|,\\nonumber\\\\\n&e(k)\\triangleq 4\\alpha^2(\\max_{i\\in V}c_i(k))^2 + 2\\alpha\\max_{i\\in V}c_i(k)\\|x(k)-\\tilde{x}\\|.\\end{align*} Then~\\eqref{e48} is rewritten as follows: \\begin{align}\\|\\psi(k+1)\\|\\leq a\\|\\psi(k)\\|+b\\|\\chi(k)\\|+e(k),\\label{e43}\\end{align} where $a \\triangleq (1-2\\rho\\alpha+4L_{\\Theta^r}^2\\alpha^2+4NL_{\\Theta^r}\\alpha\\tau_{\\max})$, $b \\triangleq \\tau_{\\max}(32N^2L_{\\Theta^r}^2\\alpha^2+4NL_{\\Theta^r}\\alpha)$ and the sequence $\\{e(k)\\}$ is diminishing. From the recursion of~\\eqref{e43}, we can derive the following relation for any pair of $k>s\\geq0$: \\begin{align}&\\|\\psi(k)\\| \\leq a^{k-s}\\|\\psi(s)\\| + \\sum_{\\tau=s}^{k-1}a^{k-\\tau}b\\|\\chi(\\tau)\\| + \\sum_{\\tau=s}^{k-1}e(\\tau)\\nonumber\\\\\n&\\leq a^{k-s}\\|\\psi(s)\\| + b\\sup_{s\\leq\\tau\\leq k}\\|\\chi(\\tau)\\|\\sum_{\\tau=s}^{k-1}a^{k-\\tau} + \\sum_{\\tau=s}^{k-1}e(\\tau)\\nonumber\\\\\n&\\leq a^{k-s}\\|\\psi(s)\\| + \\frac{b}{1-a}\\sup_{s\\leq\\tau\\leq k}\\|\\chi(\\tau)\\| + \\sum_{\\tau=s}^{k-1}e(\\tau).\\label{e44}\\end{align} Recall that $\\{\\max_{i\\in V}c_i(k)\\}$ and thus $\\{(\\max_{i\\in V}c_i(k))^2\\}$ are summable. Take the limits on $k$ and $s$ on both sides of~\\eqref{e44}, and it gives the following relation: \\begin{align}\\limsup_{k\\rightarrow+\\infty}\\|\\psi(k)\\| \\leq \\frac{b}{1-a}\\limsup_{k\\rightarrow+\\infty}\\|\\chi(k)\\|.\\label{e49}\\end{align} On the other hand, one can see that \\begin{align}\\limsup_{k\\rightarrow+\\infty}\\|\\chi(k)\\| = \\limsup_{k\\rightarrow+\\infty}\\|\\psi(k)\\|.\\label{e50}\\end{align}\n\nSince $\\alpha\\in(0,\\frac{\\rho-4NL_{\\Theta^r}\\tau_{\\max}}{L^2_{\\Theta^r}(2+16N^2\\tau_{\\max})})$, then $a+b<1$ and thus $\\frac{b}{1-a}<1$. The combination of~\\eqref{e49} and~\\eqref{e50} renders that \\begin{align}\\limsup_{k\\rightarrow+\\infty}\\|\\psi(k)\\| = 0.\\label{e40}\\end{align} Apparently, \\begin{align}\\liminf_{k\\rightarrow+\\infty}\\|\\psi(k)\\| \\geq 0.\\label{e41}\\end{align} The combination of~\\eqref{e40} and~\\eqref{e41} establishes the convergence of $\\{x(k)\\}$ to $\\tilde{x}$. 
It completes the proof.\\oprocend\n\n\\section{Discussions}\\label{sec:discussion}\n\n\\subsection{Comparison of two scenarios}\\label{sec:comparison}\n\nThe two proposed algorithms are complementary. Algorithm~\\ref{ta:algo1} can address inequality constraints, does not need to know $\\tau_{\\max}$ and merely requires the game map to be pseudo monotone. It comes with the price of potentially slow convergence due to the utilization of diminishing step-sizes. In contrast, Algorithm~\\ref{ta:algo2} cannot deal with inequality constraints, needs to know $\\tau_{\\max}$ and requires the game map to be strongly monotone. It comes with the benefit of potentially fast convergence due to the utilization of a constant step-size.\n\n\n\\subsection{Discussion on Assumption~\\ref{asm1}}\\label{sec:assumption}\n\nThe proposed algorithms rely upon Assumption~\\ref{asm1}. In what follows, we will provide two sufficient conditions for Assumption~\\ref{asm1}. Recall that $\\sigma_{i,\\min}$, $\\sigma_{i,\\max}$,\n$\\sigma_{\\min}$ and $\\sigma_{\\max}$ are defined in Section~\\ref{subsection:notations}.\n\n\\subsubsection{Global Slater vectors}\n\n\\begin{assumption} There exist $\\bar{x}\\in X$ and $\\sigma' > 0$ such that $\\|-G^{[i]}(\\bar{x})\\|_{\\infty} \\geq \\sigma'$ for all $i\\in V$.\\label{asm3}\n\\end{assumption}\n\nThe vector $\\bar{x}$ that satisfies Assumption~\\ref{asm3} is referred to as the \\emph{global} Slater vector. The existence of the global Slater vector ensures the boundedness of $\\mathbb{X}_{\\rm UC}(M)$ as follows.\n\n\\begin{lemma} If Assumption~\\ref{asm3} holds and $G^{[i]}_{\\ell}$ is convex in $x$, then for any $(\\tilde{x},\\tilde{\\mu})\\in \\mathbb{X}_{\\rm UC}(\\mathds{R}^m_{\\geq0})$, it holds that $\\|\\tilde{\\mu}^{[i]}\\|_{\\infty} \\leq \\frac{\\sigma_{\\max}-\\sigma_{\\min}}{\\sigma'}$ for $i\\in V$.\\label{lem2}\n\\end{lemma}\n\n\\textbf{Proof:} Pick any $\\tilde{\\eta}\\triangleq(\\tilde{x},\\tilde{\\mu})\\in \\mathbb{X}_{\\rm UC}(\\mathds{R}^m_{\\geq0})$. It holds that \\begin{align}\\langle \\nabla\\Theta(\\tilde{\\eta}),\\tilde{\\eta}-\\eta\\rangle\\leq0.\\label{e55}\\end{align}\nChoose $\\eta = (\\bar{x}^T,0^T)^T$ in~\\eqref{e55}, and we have\n\\begin{align}&0\\geq\\langle \\nabla\\Theta^r(\\tilde{x}),\\tilde{x}-\\bar{x}\\rangle - \\sum_{i\\in V}\\langle \\tilde{\\mu}^{[i]},G^{[i]}(\\tilde{x})\\rangle\\nonumber\\\\\n&+ \\sum_{i\\in V}\\sum_{\\ell=1}^{m_i}\\tilde{\\mu}_{\\ell}^{[i]}\\langle \\nabla_{x^{[i]}} G^{[i]}_{\\ell}(\\tilde{x}),\\tilde{x}^{[i]}-\\bar{x}^{[i]}\\rangle\\nonumber\\\\\n&\\geq \\langle \\nabla\\Theta^r(\\tilde{x}),\\tilde{x}-\\bar{x}\\rangle - \\sum_{i\\in V}\\langle \\tilde{\\mu}^{[i]},G^{[i]}(\\bar{x})\\rangle,\\label{e56}\n\\end{align} where the last inequality uses $\\tilde{\\mu}^{[i]}_{\\ell}\\geq0$ and Taylor theorem and the convexity of $G^{[i]}_{\\ell}$ in $x$. It follows from~\\eqref{e56} and Assumption~\\ref{asm3} that \\begin{align}\\|\\tilde{\\mu}^{[i]}\\|_{\\infty}\\leq\\frac{-\\langle \\nabla\\Theta^r(\\tilde{x}),\\tilde{x}-\\bar{x}\\rangle}{\\sigma'}\\leq \\frac{\\sigma_{\\max}-\\sigma_{\\min}}{\\sigma'}.\\nonumber\\end{align}\n\\oprocend\n\n\n\\subsubsection{Private Slater vectors}\n\nIn Lemma~\\ref{lem2}, each player has to access the global information of $\\bar{x}$, $\\sigma'$, $\\sigma_{\\min}$ and $\\sigma_{\\max}$. 
In what follows, we will derive a sufficient condition which only requires private information of each player~$i$ to determine an upper bound on $\\mu^{[i]}$.\n\n\\begin{assumption} For each $i\\in V$, there exist $\\sigma_i > 0$ and $\\bar{x}^{[i]}\\in X_i$ such that for any $(\\tilde{x},\\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}$, $\\|-G^{[i]}(\\bar{x}^{[i]},\\tilde{x}^{[-i]})\\|_{\\infty} \\geq \\sigma_i$ holds.\\label{asm2}\n\\end{assumption}\n\nThe vector $\\bar{x}^{[i]}$ that satisfies Assumption~\\ref{asm2} is referred to as the \\emph{private} Slater vector. One case where Assumption~\\ref{asm2} holds is that $G^{[i]}$ only depends upon $x^{[i]}$ and, for each $i\\in V$, there is $\\bar{x}^{[i]}\\in X_i$ such that $\\|-G^{[i]}(\\bar{x}^{[i]})\\|_{\\infty} \\geq \\sigma_i$ holds for some $\\sigma_i > 0$.\n\n\\begin{lemma} If Assumption~\\ref{asm2} holds, then for any $(\\tilde{x},\\tilde{\\mu})\\in \\mathbb{X}_{\\rm UC}(\\mathds{R}^m_{\\geq0})$, it holds that $\\|\\tilde{\\mu}^{[i]}\\|_{\\infty} \\leq \\frac{\\sigma_{i,\\max}-\\sigma_{i,\\min}}{\\sigma_i}$ for $i\\in V$.\\label{lem5}\n\\end{lemma}\n\n\\textbf{Proof:} Pick any $(\\tilde{x},\\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}(\\mathds{R}^m_{\\geq0})$. By Assumption~\\ref{asm2}, there is $\\bar{x}^{[i]}\\in X_i$ such that \\begin{align*}\\|-G^{[i]}(\\bar{x}^{[i]},\\tilde{x}^{[-i]})\\|_{\\infty} \\geq \\sigma_i.\\end{align*} Since $(\\tilde{x},\\tilde{\\mu})\\in\\mathbb{X}_{\\rm UC}$, we then have the following: \\begin{align}&\\mathcal{H}_i(\\tilde{x}^{[i]},\\tilde{x}^{[-i]},0)\\leq\n\\mathcal{H}_i(\\tilde{x}^{[i]},\\tilde{x}^{[-i]},\\tilde{\\mu}^{[i]})\\nonumber\\\\\n&\\leq\n\\mathcal{H}_i(\\bar{x}^{[i]},\\tilde{x}^{[-i]},\\tilde{\\mu}^{[i]})\\nonumber\\end{align} where in the first inequality we use $0\\in M_i$ and in the second inequality we use $\\bar{x}^{[i]}\\in X_i$. The above relations further render the following: \\begin{align}f_i(\\tilde{x})\\leq f_i(\\bar{x}^{[i]},\\tilde{x}^{[-i]}) + \\langle \\tilde{\\mu}^{[i]}, G^{[i]}(\\bar{x}^{[i]},\\tilde{x}^{[-i]})\\rangle.\\label{e24}\\end{align} Recall $\\|-G^{[i]}(\\bar{x}^{[i]},\\tilde{x}^{[-i]})\\|_{\\infty} \\geq \\sigma_i$. Then the relation~\\eqref{e24} implies $\\|\\tilde{\\mu}^{[i]}\\|_{\\infty} \\leq \\frac{\\sigma_{i,\\max}-\\sigma_{i,\\min}}{\\sigma_i}$.\\oprocend\n\n\\subsection{Future directions}\n\nThe current paper imposes several assumptions: (1) the compactness of $X_i$; (2) the smoothness of component functions; (3) identical $\\alpha(k)$ or $\\alpha$ for all the players; (4) the convexity of component functions and constraints. In particular, the compactness of $X_i$ ensures the uniform boundedness of partial gradients. The convexity guarantees that local optima are also globally optimal in the sense of Nash equilibrium. It is of interest to relax these assumptions.\n\n\\section{Numerical simulations}\\label{sec:simulations}\n\nIn this section, we will provide a set of numerical simulations to verify the performance of our proposed algorithms. The procedure ${\\rm sample}(A)$ returns a uniform sample from the set $A$.\n\n\\subsection{Algorithm~\\ref{ta:algo1}}\\label{sec:simulations-alg1}\n\nConsider a power network which can be modeled as an interconnected graph $\\mathcal{G}_p\\triangleq \\{\\mathcal{V}_p,\\mathcal{E}_p\\}$ where each node $i\\in \\mathcal{V}_p$ represents a bus and each link in $\\mathcal{E}_p$ represents a branch. 
The buses in $\\mathcal{V}_p$ are indexed by $1,\\cdots,N$ and bus $1$ denotes the feeder which has a fixed voltage and flexible power injection. A lossless DC model is used to characterize the relation between power generations and loads at different buses and power flows across various branches. Each bus is connected to either a power load or a power supply, and each load is associated with an end-user. The set of load buses is denoted by $\\mathcal{V}_l\\subseteq \\mathcal{V}_p$ and the set of supply buses is denoted by $\\mathcal{V}_s\\subseteq \\mathcal{V}_p$.\n\nThe maximum available power supply at bus $i\\in \\mathcal{V}_s$ is denoted by $S_i\\geq0$, and the intended power load at bus $i\\in \\mathcal{V}_l$ is denoted by $L_i\\geq0$. If the total power supply exceeds the total intended power load, then all the intended loads can be satisfied. Otherwise, some loads need to be reduced. The reduced load of end-user~$i$ is denoted by $R_i\\in[0,L_i]$. The objective function of each end-user~$i$ is given by $f_i(R) = c_i R_i + p(\\textbf{1}^T_{|\\mathcal{V}_l|}(L-R))(L_i-R_i) - u_i(L_i-R_i)$. The quantity $c_iR_i$ represents the disutility induced by load reduction $R_i$ with $c_i>0$. The scalar $p(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))$ is the charged price given the total actual load $\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R)$. The value $u_i(L_i-R_i)$ stands for the benefit produced by load $L_i-R_i$.\n\nEach end-user~$i$ aims to minimize its own objective function $f_i(R)$ by adjusting $R_i$. Such decision making is subject to the physical constraints of the power grid. The first constraint is that the total actual load cannot exceed the maximum available supply; i.e., \\begin{align}\n\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R) \\leq \\textbf{1}_{|\\mathcal{V}_s|}^TS.\\label{e52}\\end{align} Another set of constraints is induced by the limitations of power flows on branches. Let $H_g\\in[-1,1]^{|\\mathcal{E}_p|\\times|\\mathcal{V}_s|}$ (resp. $H_l\\in[-1,1]^{|\\mathcal{E}_p|\\times|\\mathcal{V}_l|}$) be the generation (resp. load) shift factor matrix. For $H_g$, the $(\\ell,i)$ entry represents the power that is distributed on line $\\ell$ when $1$~MW is injected into bus $i$ and withdrawn at the reference bus. Denote by $f_e^{\\max}$ the maximum capacity of branch $e$. The power flow constraint for branch $e$ can be expressed as: \\begin{align}-f^{\\max}\\leq H_g S - H_l (L-R)\\leq f^{\\max},\\label{e53}\\end{align} where $f^{\\max}\\triangleq{\\rm col}[f_e^{\\max}]_{e\\in \\mathcal{E}_p}$, $S\\triangleq {\\rm col}[S_i]_{i\\in\\mathcal{V}_s}$, $L\\triangleq {\\rm col}[L_i]_{i\\in\\mathcal{V}_l}$ and $R\\triangleq {\\rm col}[R_i]_{i\\in\\mathcal{V}_l}$.\n\nThe above description defines a non-cooperative game among end-users in $\\mathcal{V}_l$ as follows:\n\\begin{align}&\\min_{R_i\\in [0,L_i]}c_i R_i + p(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))(L_i-R_i) - u_i(L_i-R_i)\\nonumber\\\\\n&{\\rm s.t.}\\quad \\textbf{1}_{|\\mathcal{V}_l|}^T(L-R) \\leq \\textbf{1}_{|\\mathcal{V}_s|}^TS\\nonumber\\\\\n&\\quad\\quad H_g S - H_l (L-R) - f^{\\max}\\leq 0\\nonumber\\\\\n&\\quad\\quad - H_g S + H_l (L-R) -f^{\\max}\\leq 0.\\label{e54}\n\\end{align}\n\nIn game~\\eqref{e54}, $p$ is the pricing policy of the load serving entity (LSE). This is confidential information of the LSE and should not be disclosed to end-users. So the objective function $f_i$ is partially unknown to end-user~$i$. In addition, the numerical values of generation and load shift factor matrices are of national security interest and are kept confidential from the public. 
Therefore, the power flow constraints are unknown to end-users. Assume that all the end-users can communicate with the LSE. Given $R$, the LSE broadcasts the price value $p(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))$, the power imbalance $\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R) - \\textbf{1}_{|\\mathcal{V}_s|}^TS$, and power limit violations $H_g S - H_l (L-R) - f^{\\max}$ and $- H_g S + H_l (L-R) -f^{\\max}$. In this way, end-users can access the values of $f_i$ and $G^{[i]}$ in Algorithm~\\ref{ta:algo1} without knowing their structures.\\footnote{Many networked engineering systems operate in a hierarchical structure; e.g., Internet, power grid and transportation systems, where a central system operator is placed at the top layer and end-users are placed at the bottom layer~\\cite{MC-SHL-ARC-JCD:07a,Wu.Moslehi.Bose:89}. Here we assume that the LSE can communicate with all the end-users. This assumption is widely used in; e.g., network flow control~\\cite{LL-99}.}\n\nWe choose $u_i(L_i-R_i) = -\\frac{1}{2}a_i(L_i-R_i)^2+b_i(L_i-R_i)$ with $a_i>0$. For the pricing mechanism, we use (4) in~\\cite{Bulow.Peiderer:83}; i.e., $p(w) = \\beta w^{\\frac{1}{\\eta}}$ with $\\beta>0$, and $\\eta\\in(0,1)$. So $f_i(R) = c_iR_i + (L_i-R_i)\\beta (\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))^{\\frac{1}{\\eta}} +\\frac{1}{2}a_i(L_i-R_i)^2-b_i(L_i-R_i)$. One can compute the following: \\begin{align}\\frac{\\partial^2f_i}{\\partial R_i^2}&=a_i-2\\beta\\frac{1}{\\eta}\n(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))^{\\frac{1}{\\eta}-1}\\nonumber\\\\\n&-\\beta (L_i-R_i)\\frac{1}{\\eta}(\\frac{1}{\\eta}-1)(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))^{\\frac{1}{\\eta}-2},\\nonumber\\\\\n\\frac{\\partial^2f_i}{\\partial R_i \\partial R_j}&=-\\beta\\frac{1}{\\eta}\n(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))^{\\frac{1}{\\eta}-1}\\nonumber\\\\\n&-\\beta (L_i-R_i)\\frac{1}{\\eta}(\\frac{1}{\\eta}-1)(\\textbf{1}_{|\\mathcal{V}_l|}^T(L-R))^{\\frac{1}{\\eta}-2}.\\nonumber\\end{align}\n\nChoose $a_i$ such that $a_i > (N-2)\\beta\\frac{1}{\\eta}\n(\\textbf{1}_{|\\mathcal{V}_l|}^TL)^{\\frac{1}{\\eta}-1}+(N-1)\\beta L_i\\frac{1}{\\eta}(\\frac{1}{\\eta}-1)(\\textbf{1}_{|\\mathcal{V}_l|}^TL)^{\\frac{1}{\\eta}-2}>0$. Then the monotonicity property holds. Assume that there is a global Slater vector. Figure~\\ref{fig1} shows the simulation results for the IEEE $30$-bus Test System~\\cite{PSTCA}, and the system parameters are adopted from MATPOWER~\\cite{MATPOWER}. The delays at each iteration are randomly chosen from $0$ to $10$.\n\n\\begin{figure*}[bh]\n \\centering\n \\includegraphics[width = \\linewidth]{DemandResponse}\n \\caption{The estimates of Algorithm~\\ref{ta:algo1} at $30$ buses for $\\tau_{\\max} = 10$.}\\label{fig1}\n\\end{figure*}\n\n\n\\subsection{Algorithm~\\ref{ta:algo2}}\\label{sec:simulations-alg2}\n\nConsider $10$ players whose component functions are defined as follows: \\begin{align*}f_i(x) = \\frac{1}{2}((Q^{[i]})^Tx)^2 + (q^{[i]})^Tx,\\end{align*} where\n\\begin{itemize}\n\\item $X_i = [\\psi_i^m\\;\\;\\psi_i^M]$ with $\\psi_i^m = {\\rm sample}([-10\\;\\;-5])$ and $\\psi_i^M = {\\rm sample}([5\\;\\;10])$;\n\\item $Q^{[i]}\\in\\mathds{R}^{10}$ with $Q^{[i]}_i = 1$ and $Q^{[i]}_j = \\frac{1}{10}$ for $j\\neq i$;\n\\item $q^{[i]}\\in\\mathds{R}^{10}$ with $q^{[i]}_j = {\\rm sample}([-50\\;\\;50])$ for $j\\in V$.\n\\end{itemize}\n\nOne can verify that $\\rho = \\frac{1}{10}$ and $L_{\\Theta^r} = 2$ in Theorem~\\ref{the2}. By (B4), we have $\\varrho_b>1$. In the simulation, we choose $\\varrho_b = 3$. 
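For concreteness, a minimal sketch of how this game instance can be generated is given below; the particular random-number generator and seed are assumptions of this sketch and are not part of the original setup.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)  # fixed seed, assumed for reproducibility\nN = 10\n\n# X_i = [psi_i^m, psi_i^M] with uniformly sampled endpoints\npsi_m = rng.uniform(-10.0, -5.0, size=N)\npsi_M = rng.uniform(5.0, 10.0, size=N)\n\n# Q^[i] has a one in the i-th entry and 1/10 elsewhere; q^[i] is sampled\nQ = np.full((N, N), 0.1) + 0.9 * np.eye(N)\nq = rng.uniform(-50.0, 50.0, size=(N, N))\n\ndef f(i, x):\n    # component function f_i(x) = 0.5*((Q^[i])^T x)^2 + (q^[i])^T x\n    return 0.5 * (Q[i] @ x) ** 2 + q[i] @ x\n\\end{verbatim}\nEach player then runs Algorithm~\\ref{ta:algo2} on this instance with a constant step-size $\\alpha$ and perturbation magnitudes $c_i(k)$ chosen in accordance with (B3) and (B4).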
Figure~\\ref{fig4} presents the estimate evolution of the players.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width = \\linewidth]{Algorithm2-x}\n \\caption{The estimates of Algorithm~\\ref{ta:algo2} for $\\tau_{\\max} = 10$.}\\label{fig4}\n\\end{figure}\n\n\\section{Conclusions}\\label{sec:conclusions}\n\nWe have studied a set of distributed robust adaptive computation algorithms for a class of generalized convex games. We have shown their asymptotic convergence to Nash equilibrium in the presence of network unreliability and the uncertainties of component functions and illustrated the algorithm performance via numerical simulations.\n\n\n\n\\bibliographystyle{plain}\n\n\n