diff --git a/.gitattributes b/.gitattributes index dbf18c387a97738a6f3a5fe94d502d2dbf45e842..c80c1d244965d5f7772a08bd32ff0d8905074963 100644 --- a/.gitattributes +++ b/.gitattributes @@ -233,3 +233,4 @@ data_all_eng_slimpj/shuffled/split/split_finalaa/part-04.finalaa filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalad/part-14.finalad filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-13.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-16.finalaa filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalaa/part-12.finalaa filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalaa/part-12.finalaa b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-12.finalaa new file mode 100644 index 0000000000000000000000000000000000000000..a78208cdeb7bebeb538e969fdb50ba9d503839c4 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-12.finalaa @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e8ed87f8bf9e7fc91e75b0dd3c64825a6467e7df947fb42fff8cb2ab7c1d410 +size 12577390075 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzamml b/data_all_eng_slimpj/shuffled/split2/finalzzamml new file mode 100644 index 0000000000000000000000000000000000000000..d00e00face8ef96374270a0653e80728e1976c4b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzamml @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe vertical structure of eddying flow in the oceanic mesoscale is a fundamental\nproblem in geophysical fluid dynamics, one that has been reinvigorated by the\nneed to interpret altimetric observations of surface velocity fields\n\\citep{ScottWang05,LapeyreJPO09}. Energy cascades and self-organization processes\nare fairly well understood and characterized in two-dimensional turbulence, but\nhow these results generalize to rotating, continuously stratified flows is still\nan open question. We ask in this paper what is the final organization of a freely\nevolving inviscid quasi-geostrophic flow with continuous stratification. In\nparticular, we examine the precise conditions for the oft-cited\nbarotropization process to occur. Barotropization refers to the tendency of a\nquasi-geostrophic flow to reach the gravest vertical mode \\citep{Charney71,\nRhines77}. Because the gravest vertical mode is the barotropic one, i.e. a depth\nindependent flow, ``barotropization'' means a tendency toward the formation of\ndepth independent flows. We study in particular the key role played by the beta\neffect (the existence of planetary vorticity gradients) in such barotropization\nprocesses. It has been previously noticed that the presence of these large scale\nplanetary vorticity gradients favors barotropization, see e.g. \\citet{SmithVallis01}, but this phenomenon remains to our knowledge unexplored and unexplained.\n\n\nContinuously stratified quasi-geostrophic flows take place in three dimensions,\nbut their dynamics are quasi-two-dimensional because the non-divergent advecting\nvelocity field has only horizontal components. The dynamics admits conservation laws\nsimilar to those of the two-dimensional Euler equations, including among others\nthe total energy and the enstrophy of each layer. These conservation laws imply\nan inverse energy cascade toward large scales, and weak energy dissipation in\nthe presence of weak viscosity\n\\citep{Kraichnan_Motgommery_1980_Reports_Progress_Physics}. 
This situation\ncontrasts with three dimensional turbulence where the energy cascades forward\ntoward small scales, giving rise to finite energy dissipation no matter how weak\nthe viscosity.\n\nJust as for two-dimensional Euler flows, inviscid, freely evolving continuously stratified quasi-geostrophic flows are disordered and an unstable initial condition rapidly evolves into a strongly disordered fine-grained flow with filaments being stretched, folded, and consequently cascading toward smaller and smaller scales. However, the inverse energy cascade leads to the formation of robust, large scale coherent structures filling the domain in which the flow takes place, contrasting strongly with the analogous three dimensional flow with its energy transfer to small scales.\n\nThe aim of this paper is to provide a prediction for the vertical structure of the large scale flow organization resulting from inviscid, freely evolving continuously stratified quasi-geostrophic dynamics using a statistical mechanics approach. The goal of the statistical mechanics approach is to drastically reduce the complexity of the problem of determining the large scale flow organization to the study of a few key parameters given by the dynamical invariants, as for instance the total energy and the fine-grained enstrophy in each layer. Such a theory has been independently proposed by \\cite{Robert:1990_CRAS,SommeriaRobert:1991_JFM_meca_Stat} and \\cite{Miller:1990_PRL_Meca_Stat}, and henceforth will be referred to as MRS theory. It is an equilibrium theory, developed in the case without forcing and dissipation. Our contribution will be to apply this theory to the explicit computation of a class of equilibrium states of the stratified quasi-geostrophic equations.\n\n{A crucial ingredient of the theory comes from the constraints given by dynamical invariants. In the present paper, we will focus on the role played by the energy and the enstrophy constraints. In the framework of the MRS theory, it is justified in a low energy limit, and it is only a first step before more comprehensive studies including the effects of additional invariants, see e.g \\cite{MajdaWangBook,CorvellecBouchet2010}.}\n\n{The phenomenology of two-dimensional and geostrophic turbulence is often explained by considering the energy and enstrophy constraints only, see e.g \\citet{Kraichnan_Motgommery_1980_Reports_Progress_Physics,Salmon_1998_Book}. Actually, earlier statistical mechanics approaches were developed for Galerkin-truncated Euler or quasi-geostrophic dynamics, which allowed for a simpler theoretical treatment than in the continuous dynamics case, see e.g. \\cite{Kraichnan_1967PhFl...10.1417K,SalmonHollowayHendershott:1976_JFM_stat_mech_QG}. In the case of Galerkin-truncated models, only the energy and the enstrophy are conserved quantities, so the statistical mechanics of these systems were called energy-enstrophy theory.}\n\n{A first reason to consider MRS theory rather than earlier energy-enstrophy statistical mechanics approach is that real flows are continuous, not Galerkin-truncated. Because real flows are also forced-dissipated, our first working hypothesis is that the computation of the MRS equilibrium states of the continuous dynamics is a necessary step before taking into account the effect of weak forcing and dissipation. 
This is for instance the approach followed by \\cite{BouchetSimonnetPRL09}, who found that in the presence of weak forcing and dissipation, the dynamics were close to MRS equilibrium states which were different from energy-enstrophy equilibrium states.}\n\n{One could argue that numerical models often at best conserve energy and enstrophy, so there would be at first sight no reason to consider higher order invariants to predict the simulated self-organization. However, the truncated dynamics in numerical models may lead to the formation of quasi-stationary states having a lifetime that tends to infinity as the resolution is refined. These quasi-stationary states are distinct from the energy-enstrophy equilibria, and thus require more invariants to describe using a statistical theory. The formation of such quasi-stationary states is a characteristic of long-range interacting systems (e.g., \\cite{CampaDauxoisRuffo} and references therein). Given these results, our second working hypothesis is that the quasi-stationary states found in the truncated dynamics of numerical simulations may be close to a state predicted by MRS theory.}\n\n{There are only a few studies of statistical equilibria of the continuously stratified quasi-geostrophic dynamics. \\cite{Merryfield98JFM} obtained and discussed a class of solutions in the framework of energy-enstrophy statistical theory above topography, for which there is a linear relation between potential vorticity $q$ and streamfunction $\\psi$. The solutions obtained were critical points of the theory, but the actual entropy maxima were not selected. Here we consider the particular class of MRS equilibrium states characterized by a linear $q-\\psi$ relation, so we describe the same class of equilibrium states, except that we do not take into account bottom topography. The effect of bottom topography will be addressed in a future work. The novelty of our work is to relate each critical point to the energy and fine-grained enstrophy constraints and to select the actual entropy maxima in different limit cases.}\n\n\\citet{SchecterPRE03} provided a direct comparison between numerical simulations\nand numerical computation of MRS statistical equilibria, focusing on the\nparticular case of initial conditions given by a random field statistically\nhomogeneous in space, as in \\citet{McWilliams94}. {This corresponds to a particular case for which the enstrophy profile is only weakly varying in the vertical.} We will outline in this paper the important consequence of the conservation of enstrophy at each depth. This important role played by the enstrophy constraint allows us to discuss how the beta effect can favor barotropization. For a given initial streamfunction field, the beta effect modifies the initial distribution of potential vorticity, and therefore modifies the vertical profile of fine-grained enstrophy. More precisely, increasing the beta effect tends to increase the depth independent part of the fine-grained enstrophy profile. We will show that statistical equilibria associated with depth independent fine-grained enstrophy profiles are barotropic, which means that increasing beta favors barotropization, according to statistical mechanics predictions.\n\nThe paper is organized as follows: the continuously stratified quasi-geostrophic equations are introduced in section \\ref{sec:csqg}. The MRS equilibrium statistical mechanics theory of such systems is presented in section 3. 
Here it is shown that the class of MRS statistical equilibria characterized by a linear relation between potential vorticity and streamfunction are solutions of a minimum enstrophy problem, which is solved in various limit cases, first without beta effect and then with beta effect. The theory predicts an increase of barotropization with increasing value of beta, which is tested against numerical simulations in section 4. A summary of the main results and a discussion of their application to the ocean is given section 5.\n \n\n\\section{Continuously stratified quasi-geostrophic model} \\label{sec:csqg}\n\n\\subsection{Equations of motion}\n\nWe consider an initial value problem for a freely evolving, unforced,\nundissipated, quasi-geostrophic flow with continuous stratification\n$N(z)$ (see e.g., \\citet{VallisBook}, section 5.4):\n\n\\begin{equation}\n\\partial_{t}q+J(\\psi,q)=0\\ ,\\label{eq:FullDynamics}\\end{equation}\n \\begin{equation}\nq=\\Delta\\psi+\\frac{\\partial}{\\partial z}\\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi\\right)+\\beta y\\ ,\\label{eq:PV_definition}\\end{equation}\n with $J(\\psi,q)=\\partial_{x}\\psi\\partial_{y}q-\\partial_{y}\\psi\\partial_{x}q$\n, and $f=f_{0}+\\beta y$ is the Coriolis parameter with the $\\beta$-plane\napproximation.\nNeglecting buoyancy and topography variations, the boundary condition at the bottom ($z=-H$, where $H$ is a constant) is given by \n\\begin{equation}\n \\partial_{z}\\psi|_{z=-H}=0.\t\n\\end{equation}\nThe boundary condition at the surface (defined as $z=0$,\nusing the rigid lid approximation), is given by the advection of buoyancy\n\\begin{equation}\n \\partial_{t}b_{s}+J(\\psi,b_{s})=0, \\quad \n \\frac{f_{0}^{2}}{N^{2}}\\partial_{z}\\psi|_{z=0}=b_{s}.\n\\label{eq:FullDynamicsB}\n\\end{equation}\n\n\n{The reason we focus on quasi-geostrophic dynamics is that for these flows, there is no direct cascade of energy and no dissipative anomaly, see e.g. \\cite{Salmon_1998_Book}, which makes possible a straightforward generalization of statistical mechanics theories developed initially in the framework of two-dimensional Euler equations. As far as the ocean is concerned, quasi-geostrophic dynamics is relevant to describe ``mesoscale turbulence'', but does not account for ``sub-mesoscale turbulence'', which involve ageostrophic effects, with part of the energy that may cascade forward, see e.g. \\cite{LeithThomas,FerrariWunsch09} and references therein.}\n\n\\subsection{Modal decomposition \\label{sub:Vertical-and-horizontal-eigenmodes}}\n\n\\subsubsection{Laplacian and baroclinic eigenmodes}\n\nWe consider the case of a doubly periodic domain $\\mathcal{D}=[-\\pi\\ \\pi]\\times[-\\pi\\ \\pi]$.\nThe Fourier modes $e_{k,l}(x,y)$ are a complete orthonormal basis\nof the 2D Laplacian $\\Delta e_{k,l}=-K^{2}e_{k,l}$\nwhere $(k,l)$ are the wavenumbers in each directions and $K^2 = k^2 + l^2$.\nIn the vertical, the complete orthonormal basis of barotropic-baroclinic\nmodes is defined by the solutions $F_{m}(z)$ of the Sturm-Liouville\neigenvalue problem \n\\begin{equation}\n\\frac{\\partial}{\\partial z}\\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}F_{m}\\right)=-\\lambda_{m}^{2}F_{m}\\quad\\text{with}\\quad F^{\\prime}(0)=F^{\\prime}(-H)=0,\n\\label{eq:VerticalModes}\n\\end{equation}\n where $m\\ge0$ is an integer, $\\lambda_{m}^{2}$ is the eigenvalue\nassociated with mode $m$, which defines the $m$$^{th}$-baroclinic\nRossby radius of deformation $\\lambda_{m}^{-1}$. 
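{For constant stratification $N$, for instance, the solutions of (\\ref{eq:VerticalModes}) take the familiar form\n\\[\nF_{m}(z)=\\cos\\left(\\frac{m\\pi z}{H}\\right),\\quad\\lambda_{m}=\\frac{m\\pi f_{0}}{NH},\n\\]\nso that the $m^{th}$ deformation radius is $\\lambda_{m}^{-1}=NH\/(m\\pi f_{0})$.} 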
The barotropic mode\nis the (depth independent) mode $F_{0}$, associated with $\\lambda_{0}=0$.\n \n\n\n\\subsubsection{Surface quasi-geostrophic (SQG) modes \\label{sub:SQG_modes}}\n\nIn the following, it will be convenient to distinguish the interior\ndynamics from the surface dynamics. For a given potential vorticity\nfield $q(x,y,z,t)$ and surface buoyancy field $b_{s}(x,y,t)$ , the\nstreamfunction can be written $\\psi=\\psi_\\text{int}+\\psi_\\text{surf}$, where $\\psi_\\text{int}$ is the solution of \n\\begin{equation}\n \\Delta\\psi+\\frac{\\partial}{\\partial z} \n \\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi\\right)=q-\\beta y\\quad\\text{with}\\quad\\partial_{z}\\psi|_{z=0}=\\partial_{z}\\psi|_{z=-H}=0,\n\\label{eq:PV_interior}\n\\end{equation}\n and where $\\psi_\\text{surf}$ is the solution of \n\\begin{equation}\n\\Delta\\psi+\\frac{\\partial}{\\partial z} \\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi\\right)=0, \\text{~with ~~} \\partial_{z}\\psi|_{z=0}=\\left(\\frac{N(0)}{f_{0}}\\right)^2 b_{s},\n \\quad\\partial_{z}\\psi|_{z=-H}=0 \\ .\\label{eq:PV_surface}\n\\end{equation}\n\nEach projection of $\\psi_\\text{surf}$ on a (horizontal) Laplacian eigenmode\n$(l,k)$ defines a surface quasi-geostrophic eigenmode (SQG mode hereafter). When the stratification $N$ is constant and when $f_{0}\/(NK)\\ll H$, these SQG modes are decreasing exponential functions \n\\begin{equation}\n\\widehat{\\psi}_{\\text{surf},k,l}=A_{k,l}e^{zNK\/f_{0}} + O\\left(\\frac{h_{K}}{H}\\right),\n\\label{eq:SQGmode_ExpDecay}\n\\end{equation}\nwhere the coefficients $A_{k,l}$ are determined using the boundary\ncondition at the surface. When the stratification is not constant,\nbut when $f_{0}\/N(0)K\\ll H,$ the SQG modes are still characterized\nby a decreasing exponential, with an $e$-folding depth $h_{K}=f_{0}\/KN(0)$.\n\nThe distinction between surface and interior streamfunction is sometimes\nuseful to describe the dynamics , see e.g. \\citet{LapeyreKlein06},\nbut a shortcoming of such a decomposition is that $\\psi_\\text{int}$ and\n$\\psi_\\text{surf}$ are not orthogonal.\n\n\\subsection{Delta-sheet approximation and SQG-like modes \\label{sub:SQGlike_modes}}\n\nThe boundary condition (\\ref{eq:FullDynamicsB}) can be formally replaced by the condition of no buoyancy variation ($ \\partial_z \\psi = 0 $ at $z=0$), provided that surface buoyancy anomalies are interpreted as a thin sheet of potential vorticity just below the rigid lid \\citep{Bretherton66}. For this reason, and without loss of generality, we will consider that $b_s=0$ {in the remainder of this paper}. \n\nLet us now consider a case with no surface buoyancy, but with a surface intensified potential vorticity field defined as $q=q_\\text{surf}(x,y)\\Theta(z+H_{1})+q_\\text{int}(x,y,z)$, with $H_{1}q_\\text{surf}\\gg Hq_\\text{int}$, $H_{1}\\ll H$ and $\\Theta$ the Heaviside function. Using the linearity of (\\ref{eq:PV_interior}), one can still decompose the streamfunction field into $\\psi=\\psi_\\text{surf}+\\psi_\\text{int}$ where $\\psi_\\text{surf}$ is the flow induced by the PV field $q=q_\\text{surf}\\Theta(z+H_{1})$, obtained by inverting (\\ref{eq:PV_definition}), and $\\psi_\\text{int}$ the flow induced by $q_\\text{int}$. For $z<-H_{1}$, the streamfunction $\\psi_\\text{surf}$ satisfies equation (\\ref{eq:PV_surface}), just as SQG modes. This is why we call it an SQG-like mode. 
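{Before turning to the statistical mechanics theory, we note that the vertical modes of subsection \\ref{sub:Vertical-and-horizontal-eigenmodes} are in general not known in closed form for an arbitrary stratification $N(z)$ and must be computed numerically. A minimal numerical sketch of the discretized Sturm-Liouville problem (\\ref{eq:VerticalModes}) is given below; the finite-volume discretization, the resolution and the parameter values (a constant stratification with $f_{0}^{2}\/N^{2}=0.1$) are choices made for this illustration only, and are not those of the model used in section \\ref{sec:Numerical-simulations}.}\n\\begin{verbatim}\nimport numpy as np\n\n# Uniform vertical grid with nz cells between z=-H and z=0;\n# constant stratification with f0^2\/N^2 = 0.1 is used as an example.\nnz, H, f0, N = 200, 1.0, 1.0, 10.0 ** 0.5\ndz = H \/ nz\nc = (f0 ** 2 \/ N ** 2) * np.ones(nz - 1)   # f0^2\/N^2 at the interior faces\n\n# Finite-volume matrix of d\/dz(f0^2\/N^2 d\/dz) with dF\/dz = 0 at z = 0, -H\nA = np.zeros((nz, nz))\nfor i in range(nz - 1):\n    A[i, i] -= c[i] \/ dz ** 2\n    A[i, i + 1] += c[i] \/ dz ** 2\n    A[i + 1, i + 1] -= c[i] \/ dz ** 2\n    A[i + 1, i] += c[i] \/ dz ** 2\n\n# Eigenvalues of -A are lambda_m^2; columns of F are the modes F_m(z)\nlam2, F = np.linalg.eigh(-A)\nprint(np.sqrt(np.abs(lam2[:4])))   # approx. m*pi*f0\/(N*H): 0, 0.99, 1.99, 2.98\n\\end{verbatim}\n{For a non-constant $N(z)$, the same script applies with $f_{0}^{2}\/N^{2}$ evaluated at the interior cell faces.} 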
\n\n\n\\section{Equilibrium statistical mechanics of stratified quasi-geostrophic\nflow}\\label{sec:Equilibrium-statistical-mechanics}\n\nIn this section we briefly introduce the MRS equilibrium statistical mechanics theory, and\nintroduce a general method to compute the MRS equilibria. This allows for explicit computation\nof statistical equilibria on the $f-$plane in subsection\n\\ref{sub:Computation-of-the-RSM-f-plane}, and on the $\\beta$-plane in subsection\n\\ref{sub:Including-the-beta}.\n\n\n\\subsection{Theory\\label{sub:Theory}}\n\nLet us consider an initial (fine-grained) PV field $q_{0}(x,y,z)$.\nThe dynamics (\\ref{eq:FullDynamics}) induced by this PV field stirs\nthe field itself, which rapidly leads to its filamentation at smaller\nand smaller scale, with the concomitant effect of stretching and folding.\nThere are two (related) consequences, one in phase space, one in physical\nspace.\n\nIn physical space, the filamentation toward small scales implies that\nafter a sufficiently long time evolution, the PV field can be described\nat a macroscopic level by a probability field\n$\\rho(x,y,z,\\sigma,t)\\mathrm{d}\\sigma$ to measure a given PV level\nbetween $\\sigma$ and $\\sigma+\\mathrm{d}\\sigma$ at a given point\n($x,y,z$). This probability distribution function (pdf hereafter)\n$\\rho$ is normalized locally \\begin{equation}\n\\forall ~ x,y,z\\ \\mathcal{N}[\\rho]=\\int_{\\Sigma}\\mathrm{d}\\sigma\\ \\rho=1,\\label{eq:normalization}\\end{equation}\nwhere the integral is performed over the range $\\Sigma$ of PV levels,\nwhich is prescribed by the initial condition. The field $\\rho$ is\na macrostate because it is associated with a huge number of fine-grained\nPV configurations $q(x,y,z)$ (the microscopic states), in\nthe sense that many realizations of the (microscopic, or fine-grained)\nPV field $q$ leads to the same macroscopic state $\\rho$. \n\nThe phase space is defined by all configurations of the field $q(x,y,z)$.\nThe cornerstone of the theory is to assume that stirring\nallows the system to explore evenly each possible configuration of\nthe phase space that satisfies the constraints of the dynamics, so\nthat these configurations can be considered equiprobable. When this assumption fails, as in the linear regime discussed in section 4.4, the statistical theory fails. \n\nThe main idea of the theory is then to count the number of microstate\nconfigurations associated with a macrostate (the pdf field $\\rho$),\nand to say that the equilibrium state is the most probable macrostate\nthat satisfies the constraints of the dynamics, i.e. the one that\nis associated with the largest number of microstates satisfying the\nconstraints. One can then show that an overwhelming number of microstates\nare associated with this most probable macrostate \\citep{Michel_Robert_LargeDeviations94C}. 
An important physical consequence is that an observer who wants to see the actual equilibrium macrostate would neither need to perform ensemble average nor time average, but would simply have to wait a sufficiently long time.\n\n\nNote that the output of the theory is the pdf field $\\rho$, but the\nquantity of interest to describe the flow structure is the coarse-grained PV field defined as \n\\begin{equation}\n\\overline{q}=\\int_{\\Sigma}\\mathrm{d}\\sigma\\ \\sigma\\rho.\n\\label{eq:q_coarse_grained}\n\\end{equation}\nIn practice, counting the number of microstates $q$ associated with\na macrostate $\\rho$ is a difficult task, because $q$ is a continuous\nfield, and it requires the use of large deviation theory{, see e.g. \\cite{BouchetCorvellec10}}. However,\nit can be shown that classical counting arguments \ndo apply here, and that the most probable state $\\rho$ maximizes\nthe usual Boltzmann mixing entropy\n\\begin{equation}\n\\mathcal{S}=-\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\int_{-H}^{0}\\mathrm{d}z\\ \\int_{\\Sigma}\\mathrm{d}\\sigma\\ \\rho\\ln\\rho\\ ,\n\\label{eq:entropy_mixing}\n\\end{equation}\nsee \\citet{BouchetCorvellec10} and references therein for further details.\n\n\n\nLet us now collect the constraints that $\\rho$ must satisfy, in addition\nto the local normalization (\\ref{eq:normalization}). \nAn infinite number of constraints are given by conservation\nof the global distribution of PV levels $\\mathcal{P}(\\sigma,z)$, which is prescribed by the initial global distribution of PV levels $P_{0}(\\sigma,z)$: \n\\begin{equation}\n\\mathcal{P}(\\sigma,z)[\\rho]=\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\rho=P_{0}(\\sigma,z).\\label{eq:PV_constraint}\\end{equation}\nThese conservation laws include the conservation of the fine-grained enstrophy\n\\begin{equation}\n\\int_{\\mathcal{D}} \\mathrm{d} x \\mathrm{d} y \\int_{\\Sigma} \\mathrm{d} \\sigma\\ \\sigma^2 \\rho =Z_0 \\ . \\label{eq:Z0} \n\\end{equation}\nAnother constraint is given by the conservation of the total energy\n \\begin{equation}\n\\mathcal{E}=\\frac{1}{2}\\int_{-H}^{0}\\mathrm{d}z \\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\left(\\left(\\nabla\\psi\\right)^{2}+\\frac{f_{0}^{2}}{N^{2}}\\left(\\partial_{z}\\psi\\right)^{2}\\right) \\ \n\\label{eq:Energie_definition}\\end{equation}\nBecause an overwhelming number of microstates are close to the equilibrium state $\\rho$, the energy of the fluctuations are negligible with respect to the energy due to the coarse-grained PV field $\\overline{q}$ \\citep{Michel_Robert_LargeDeviations94C}. 
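Explicitly, using the boundary conditions $\\partial_{z}\\psi|_{z=0,-H}=0$ (recall the convention $b_{s}=0$ of subsection \\ref{sub:SQGlike_modes}) and the horizontal periodicity, an integration by parts of (\\ref{eq:Energie_definition}) gives\n\\[\n\\mathcal{E}=-\\frac{1}{2}\\int_{-H}^{0}\\mathrm{d}z\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\psi\\left(\\Delta\\psi+\\frac{\\partial}{\\partial z}\\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi\\right)\\right)=\\frac{1}{2}\\int_{-H}^{0}\\mathrm{d}z\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\left(\\beta y-q\\right)\\psi.\n\\] 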
Integrating \\ref{eq:Energie_definition} by parts and replacing $q$ by $\\overline{q}$ gives the constraint $E_{0}=\\mathcal{E}[\\rho]$, where $E_{0}$ is the energy of the initial condition, and where \n\\begin{equation}\n\\mathcal{E}=\\frac{1}{2} \\int_{-H}^{0}\\mathrm{d}z \\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y \\left(\\beta y-\\overline{q} \\right)\\psi=\\frac{1}{2} \\int_{-H}^{0}\\mathrm{d}z\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y \\int_{\\Sigma}\\mathrm{d} \\sigma \\left( \\beta y-\\sigma \\right)\\psi\\rho\\ .\\label{eq:energy_functional_rho} \n\\end{equation}\nFinally, the MRS theory provides a variational problem\n\\begin{equation}\nS\\left(E_{0},P_{0}(\\sigma,z)\\right)=\\max_{\\rho,\\mathcal{N}[\\rho]=1}\\left\\{ \\mathcal{S}\\left[\\rho\\right]\\ |\\ \\mathcal{E}\\left[\\rho\\right]=E_{0}\\ \\&\\ \\mathcal{P}\\left[\\rho\\right]=P_{0}(\\sigma,z)\\right\\} \\,,\\label{eq:RSM}\\end{equation}\nwhich means that the equilibrium state is the density probability field\n$\\rho_\\text{rsm}$ that satisfies the local normalization constraint (\\ref{eq:normalization}), the energy constraint $\\mathcal{E}\\left[\\rho\\right]=E_{0}$ with $\\mathcal{E}$ given by (\\ref{eq:energy_functional_rho}), the incompressibility constraint $\\mathcal{P}(\\sigma,z)\\left[\\rho\\right]=P_{0}(\\sigma,z)$ with $\\mathcal{P}$ given by (\\ref{eq:PV_constraint}), and that maximizes the entropy functional $\\mathcal{S}$ given by (\\ref{eq:entropy_mixing}). We see that the parameters of this problem are the values of the constraints, which are prescribed by the initial condition $q_{0}(x,y,z)$.\n\n\n\\subsection{MRS equilibrium states characterized by a linear $q-\\psi$ relation \\label{sub:Energy-enstrophy-equilibrium-states}}\n\n\n\\subsubsection{Simplification of the variational problem \\label{sub:Simplification-of-the}}\n\nIn practice, MRS equilibrium states are difficult to compute, because\nsuch computations involve the resolution of a variational problem\nwith an infinite number of constraints. Computing the critical points\nof (\\ref{eq:RSM}) is straightforward, see e.g. equation (\\ref{eq:rhoChar})\nof appendix A, but showing that second order variations of the entropy\nfunctional are negative is in general difficult. However, recent results\nhave allowed considerable simplifications of the analytical computations\nof these equilibrium states \\citep{Bouchet:2008_Physica_D}.\nThe idea is first to prove equivalence (usually\nfor a restricted range of parameters) between the complicated variational\nproblem (\\ref{eq:RSM}) and other variational problems more simple\nto handle, involving less constraints than the complicated problem,\nbut characterized by the same critical points, and then to explicitly\ncompute solutions of the simpler variational problem. The equivalence\nbetween the variational problems ensures that solution of the\nsimple problem are also solutions of the more complicated one.\nWe will follow this approach in the following. 
\n\n\\subsubsection{Minimum enstrophy variational problem \\label{sub:MRS-linear}}\n\nStarting from the MRS variational (\\ref{eq:RSM}), it is shown in appendix A that any MRS equilibrium state characterized by a linear $\\overline{q}-\\psi$ relation is a solution of the variational problem\n\\begin{equation}\nZ_\\text{cg}^\\text{tot}\\left(E_{0},Z_{0}(z)\\right)=\\min_{\\overline{q}}\\left\\{ \\mathcal{Z}_\\text{cg}^\\text{tot}\\left[\\overline{q}\\right]\\ |\\ \\mathcal{E}\\left[\\overline{q}\\right]=E_{0}\\right\\} \\label{eq:MinimumEnstrophyVP}\\end{equation}\nwith\n\\begin{equation}\n\\mathcal{Z}_\\text{cg}^\\text{tot}\\left[\\overline{q}\\right]=\\int_{-H}^{0}\\mathrm{d}z\\ \\frac{\\mathcal{Z}_{cg}}{Z_{0}},\\quad\\mathcal{Z}_{cg}=\\frac{1}{2}\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\overline{q}^{2},\\label{eq:Enstrophy_coarse-grained}\n\\end{equation}\nwhere $Z_{0}(z)$ is the fine-grained enstrophy profile, $E_{0}$\nthe energy. This variational problem amounts to find the minimizer\n$\\overline{q}_{m}$ (a coarse grained PV field) of the coarse-grained\nenstrophy $\\mathcal{Z}_{cg}^{tot}$ (with $\\mathcal{Z}_{cg}^{tot}\\left[\\overline{q}_{m}\\right]=Z_{cg}^{tot}\\left(E_{0},Z_{0}(z)\\right)$), among all the fields $\\overline{q}$ satisfying the constraint $\\mathcal{E}[\\overline{q}]=E_{0}$ given by (\\ref{eq:Energie_definition}).%\n\nCritical points of the variational problem (\\ref{eq:MinimumEnstrophyVP})\nare computed by introducing a Lagrange multiplier $\\beta_{t}$ associated\nwith the energy constraint, and by solving\\[\n\\delta\\mathcal{Z}_{cg}^{tot}+\\beta_{t}\\delta\\mathcal{E}=0,\\]\n where first variations of the functionals are taken with respect\nto the field $\\overline{q}$. Because $\\beta_{t}$ is the Lagrange\nparameter associated with the energy constraint, it is called inverse\ntemperature%\n\\footnote{The notation $\\beta$ is traditionally used in thermodynamics for\ninverse temperature. We use here the convention $\\beta_{t}$ for this\ninverse temperature, since $\\beta=\\partial_{y}f$ refers to the beta\neffect, i.e. to the variation of the Coriolis parameter with latitude. %\n}. Using $\\delta\\mathcal{Z}_{cg}^{tot}=\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ \\int_{-H}^{0}\\mathrm{d}z\\ \\overline{q}\\delta\\overline{q}\/Z_{0}$\nand $\\delta\\mathcal{E}=-\\int\\mathrm{d}x\\mathrm{d}ydz\\ \\psi\\delta\\overline{q}$,\none finds that critical points of this problem are \\begin{equation}\n\\overline{q}=\\beta_{t}Z_{0}\\psi.\\label{eq:q-psi_relation_with_coeff_determined}\\end{equation}\n The flow structure of these critical points can then be computed\nby solving\n\n\\begin{equation}\n\\Delta\\psi+\\frac{\\partial}{\\partial z}\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi+\\beta y=Z_{0}\\beta_{t}\\psi\\quad\\text{with}\\quad\\partial_{z}\\psi|_{z=0,z=-H}=0.\\label{eq:CriticalPoints}\\end{equation}\nCritical points can then be classified according to the value of their\ninverse temperature $\\beta_{t}$. \nAll the problem is then to find which of these critical point are\nactual minima of the coarse-grained enstrophy $\\mathcal{Z}_{cg}^{tot}[\\overline{q}]$ for a given energy $\\mathcal{E}[\\overline{q}]=E_{0}$. %\n\nMRS equilibrium states characterized by linear $\\overline{q}-\\psi$\nrelations are expected either in a low energy limit or in a strong\nmixing limit, see \\citet{BouchetVenaillePhysRep11} and references\ntherein. \nLet us estimate when the low energy limit is valid. 
For a given distribution\n$P_{0}(\\sigma,z)$ of PV levels $\\sigma$, one can compute the maximum\nenergy $E_{max}$ among all the energies of the PV fields characterized\nby the distribution $P_{0}(\\sigma,z)$. Low energy states characterized\nby the same distribution $P_{0}(\\sigma,z)$ are therefore those characterized\nby $E\\ll E_{max}$. \nThe strong mixing limit, which can in some case overlap the low energy\nlimit, is obtained by assuming $\\beta_{t}\\sigma\\psi\\ll1$ in the expression\n(\\ref{eq:rhoChar}) of the critical points of the MRS variational\nproblem (\\ref{eq:RSM}), see \\citep{ChavanisSommeria:1996_JFM_Classification}. {Note that these results would remain valid in the presence of bottom topography. The only change would be the expression of the energy, and the bottom boundary condition.}\n\n\\subsubsection{Link with the phenomenological principle of enstrophy minimization and other equilibrium states}\n\nAt a phenomenological level, the variational problem (\\ref{eq:MinimumEnstrophyVP}) can be interpreted as a generalization of the minimum enstrophy principle\nof \\citet{BrethertonHaidvogel} to the continuous stratified flow:\nthe equilibrium state minimizes the vertical integral of the coarse-grained\nenstrophy normalized by the fine-grained enstrophy in each layer,\nwith a global constraint given by energy conservation. This normalization\nis necessary to give a measure of the degree of mixing comparable\nfrom one depth $z$ to another. \n\nIt has been shown by \\cite{Carnevale_Frederiksen_NLstab_statmech_topog_1987JFM} that solutions of the minimum enstrophy problem of \\cite{BrethertonHaidvogel} for one layer quasi-geostrophic flows, are energy-enstrophy equilibrium states of the Galerkin-truncated dynamics of these models, in the limit when the high-wave number cut-off goes to infinity. These energy-enstrophy equilibrium states were first introduced in the framework of 2D truncated Euler flows by \\cite{Kraichnan_1967PhFl...10.1417K}. \\cite{SalmonHollowayHendershott:1976_JFM_stat_mech_QG} generalized the theory to one or few layers truncated quasi-geostrophic flows. \\cite{FrederiksenSawford} applied the theory for a barotropic flow on a sphere, and \\cite{fredericksen91GAFDa} generalized these results for two-layers quasi-geostrophic flow on a sphere. In this latter case, interesting stability results were obtained by \\cite{FrederiksenB}.\n\nThe case of continuously stratified flow above topography was addressed by \\citet{Merryfield98JFM} for truncated dynamics. \\citet{Merryfield98JFM} computed critical points of the statistical mechanics variational problem, found $\\overline{q}=\\mu_t(z) \\psi$ and discussed the vertical structures of these flows depending on $\\mu_t$. We showed in the previous subsection that the parameter $\\mu_t$ is related to the fine-grained enstrophy profile through $\\mu_t=\\beta_t Z_0(z)$. In addition, the critical states described by \\citet{Merryfield98JFM} could either be saddle, minima or maxima of the entropy functional. We show in the next section that we can select which of them are actual entropy maxima (or equivalently coarse-grained enstrophy minima), in different limit cases.\n\n\\subsubsection{Transients, mean and sharp equilibrium states}\n\n{In the framework of energy-enstrophy theory, a distinction is often made between ``transients'' and ``mean'' energy, see e.g. \\cite{SalmonHollowayHendershott:1976_JFM_stat_mech_QG,fredericksen91GAFDa,MajdaWangBook}, among others. 
Statistical equilibria are called ``sharp'' when the ``transients'' vanish. In earlier works, the ``mean'' quantities were often computed by summing over all the microstates distributed according to the canonical measure, which was implicitly assumed to be equivalent to the microcanonical measure. In the absence of topography and beta plane, the ``mean'' states defined in this way are always zero in a doubly periodic domain, whatever the resolution of the truncated model, and all the energy is in the ``transients''.}\n\n{By contrast, the equilibrium states computed in the MRS framework in the \\textit{microcanonical ensemble} are always sharp: the energy of the fluctuations necessarily vanishes, which can be shown with large deviation theory, see \\cite{BouchetCorvellec10} and references therein. This apparent paradox is solved when one realizes that microcanonical and canonical ensembles are, in general, not equivalent for long-range interacting systems, see e.g. \\cite{Ellis00} for general considerations and \\cite{VenailleBouchetJSP11} for applications to two-dimensional and geophysical flows.} \n\n{In the case of a doubly periodic domain without bottom topography and beta effect, the MRS equilibrium states (or the energy-enstrophy equilibria) are degenerate because of the symmetries of the domain; see \\cite{BouchetVenaillePhysRep11} for more details and for a discussion of how to remove the degeneracy, and see \\cite{Corentin} for the case of a spherical domain. In practice, the dynamics breaks the system symmetry by selecting one of the degenerate equilibrium states. Of course, if one considers the average potential vorticity field over all the degenerate equilibria, then one recovers zero, and this is what happens when computing the ``mean'' state in the canonical ensemble. We stress that for either quasi-geostrophic flows or truncated quasi-geostrophic flows in the limit of infinite resolution, the freely evolving dynamics itself cannot jump from one state to another once the symmetry is broken: in the microcanonical ensemble, there are no ``transients''. We also stress that the physically relevant statistical ensemble for an isolated system such as a freely evolving flow is the microcanonical ensemble.}\n\n\\subsection{Enstrophy minima on an $f$-plane \\label{sub:Computation-of-the-RSM-f-plane}}\n\nIn the previous subsection, we have shown that MRS equilibrium states\ncharacterized by a linear $\\overline{q}-\\psi$ relationship are solutions\nof a minimum coarse-grained enstrophy variational problem (\\ref{eq:MinimumEnstrophyVP}). Such MRS equilibrium states will be referred to as ``coarse-grained\nenstrophy minima'' in the following. We found in the previous section\nthat the linear relation characterizing these coarse-grained enstrophy minima\nis of the form $\\overline{q}=\\beta_{t}Z_{0}\\psi$, with\na coefficient proportional to the fine-grained enstrophy $Z_{0}$,\nand that these states are solutions of (\\ref{eq:CriticalPoints}). The critical points\nof the variational problem (\\ref{eq:MinimumEnstrophyVP}) are any\nstates $\\overline{q}$ satisfying these conditions. 
The aim of this\nsubsection is to find which of these critical point $\\overline{q}$\nare the actual coarse-grained enstrophy minima, solutions of variational\nproblem (\\ref{eq:MinimumEnstrophyVP}), when there is no $\\beta-$effect,\nbut for arbitrary stratification $N$.\n\nInjecting (\\ref{eq:q-psi_relation_with_coeff_determined}) in (\\ref{eq:Enstrophy_coarse-grained}),\nand using the expression (\\ref{eq:energy_functional_rho}) with the\nconstraint $E_{0}=\\mathcal{E}[\\rho]$, one finds that the coarse-grained\nenstrophy of each critical point $\\overline{q}$ is given by \\begin{equation}\n\\mathcal{Z}_\\text{cg}^\\text{tot}=-\\beta_{t}E_{0}\\ .\\label{eq:enstrophy_critical_point_simple_case}\n\\end{equation}\n Remarkably, the coarse grained enstrophy of a critical point $\\overline{q}$\ndepends only on the inverse temperature $\\beta_{t}$ . We conclude\nthat coarse-grained enstrophy minimum states are the solutions of\n(\\ref{eq:CriticalPoints}) associated with the largest value $\\beta_{t}$.\n\n \nProjecting (\\ref{eq:CriticalPoints}) on the Laplacian eigenmode\n$e_{l,k}(x,y)$ gives \\begin{equation}\n\\frac{\\partial}{\\partial z}\\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\widehat{\\psi}_{k,l}\\right)=\\left(\\beta_{t}Z_{0}+K^{2}\\right)\\widehat{\\psi}_{k,l}\\ ,\n\\label{eq:VertStruc}\n\\end{equation}\nwith\n\\[\n\\partial_{z}\\widehat{\\psi}_{k,l}|_{z=0,-H}=0,\\quad K^{2}=k^{2}+l^{2},\\quad \\psi=\\sum_{k,l}\\widehat{\\psi}_{k,l}e_{kl}.\\]\nWe see that each critical point is characterized by a given wavenumber modulus $K$. Its vertical structure and the corresponding value of $\\beta_t$ must be computed numerically in the case of arbitrary profiles $Z_0(z)$. \nLet us consider the example shown in Fig.\\ \\ref{fig:e-folding_depth}, for a two-step fine-grained enstrophy profile \n\\begin{equation}\nZ_{0}=Z_\\text{surf}\\Theta\\left(z+H_{1}\\right)+Z_\\text{int}\\Theta\\left(-z-H_{1}\\right),\\quad H_{1}\\ll H,\\label{eq:Z0_profile_PieceWise_TwoLayers}\n\\end{equation}\nwhere $\\Theta$ is the Heaviside function, and for $Z_\\text{int}\/Z_\\text{surf}$\nvarying between $0$ and $1$. \nWe find that the minimum coarse-grained\nenstrophy states are always characterized by the gravest horizontal\nmode on the horizontal ($K=1$). As for the vertical structure, we\nobserve Fig.\\ \\ref{fig:e-folding_depth} a tendency toward more barotropic\nflows when the ratio $Z_\\text{int}\/Z_\\text{surf}$ tends to one.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{z0psi_xfig}\n\\caption{Left panel: three different fine grained enstrophy profiles. Right\npanel: corresponding vertical structure of statistical equilibrium\nstates ($\\psi(z)\/\\psi(0)$ on the left panel), in the case of constant\nstratification ( $f_{0}^{2}\/N^{2}=0.1$). The $e$-folding depth in\ncase A is $h=f_{0}\/NK$ (with here $K=1$ for the statistical equilibrium\nstate). 
\n\\label{fig:e-folding_depth}}\n\\end{center}\n\\end{figure}\nIt is instructive to consider the limit cases $Z_\\text{surf}=Z_\\text{int}$\nand $Z_\\text{int}=0$, corresponding respectively to case A and C of Fig.\\ \\ref{fig:e-folding_depth},\nfor which minimum enstrophy can be explicitly solved.\nWhen the enstrophy profile is depth independent ($Z_{0}(z)=Z_\\text{int}$,\ncase C of figure \\ref{fig:e-folding_depth}), solutions of (\\ref{eq:VertStruc})\nare given by the modes $F_{m}(z)e_{K}(x,y)$ defined in subsection\n(\\ref{sub:Vertical-and-horizontal-eigenmodes}), and are associated\nwith inverse temperatures $\\beta_{t}=-\\left(\\lambda_{m}^{2}+K^{2}\\right)\/Z_\\text{int}$. We see that the maximum value of $\\beta_{t}$ is reached for the gravest\nhorizontal mode ($K=1$) and the gravest vertical mode $(\\mbox{\\ensuremath{\\lambda}}_{0}=0)$.\nIt means that the coarse-grained enstrophy minimum state is barotropic.\n\nWhen $Z_\\text{int}=0$ (case A of Fig. \\ref{fig:e-folding_depth}), it\nis straightforward to show that critical points, the solutions of\n(\\ref{eq:VertStruc}), are SQG-like modes (see subsection \\ref{sub:SQGlike_modes})\nof $e$-folding depth $h=f_{0}\/N(0)K$, associated with $\\beta_{t}=-N(0)K\/(H_{1}Z_\\text{surf}f_{0})$\nwith corrections of order $H_{1}\/H$ and of order $h\/H$. The coarse-grained\nenstrophy minimum is therefore the SQG-like mode associated with $K=1$\nand with the largest $e-$folding depth $h=Lf\/2\\pi N$, where $L$\nis the domain length scale.\n\nThese examples show the importance of the conservation of fine-grained\nenstrophy to the vertical structure of the equilibrium state. The\nmain result is that statistical mechanics predicts a tendency for\nthe flow to reach the gravest Laplacian mode on the horizontal. The\nvertical structure associated with this state is fully prescribed\nby solving (\\ref{eq:VertStruc}) with $K=1$. Because the barotropic\ncomponent of such flows are larger than solutions of (\\ref{eq:VertStruc})\nwith $K>1$, we can say that the inverse cascade on the horizontal\nis associated with a tendency to reach the gravest vertical mode compatible\nwith the vertical fine-grained enstrophy profile $Z_{0}$. This means\na tendency toward barotropization, although in general, the fact that\nthe profile $Z_{0}$ is non constant prevents complete barotropization.%\n\n{Note that the previous results are presented in the case of a doubly periodic domain, but it would be straightforward to generalize them to channel geometries, or to any other domains with boundaries. The only change would be the spatial structure of the Laplacian eigenmodes. For these domain geometries, one would also need to discuss the effect of the circulation values at the boundaries.}\n\n\\subsection{Including the $\\beta$-effect \\label{sub:Including-the-beta}.}\n\nWe now discuss how the results from the previous section generalize in the presence of beta. For a given initial condition $\\psi_{0}(x,y,z)$, increasing $\\beta$ increases the contribution of the (depth independent) available potential enstrophy defined as \n\\begin{equation}\nZ_{p}=\\beta^{2}\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ y^{2},\\label{eq:Potential_Enstrophy}\n\\end{equation}\nto the total fine-grained enstrophy profile $Z_{0}(z)=\\int_{\\mathcal{D}}\\mathrm{d}x\\mathrm{d}y\\ q_{0}^{2}$, where $q_{0}$ is the initial PV field that can be computed by injecting $\\psi_{0}$ in (\\ref{eq:PV_definition}). 
For sufficiently large values of $\\beta$, the PV field is dominated by the beta effect ($q_0 \\approx \\beta y $), and $Z_{0}$ therefore tends to $Z_{p}$ and becomes depth independent. Because statistical equilibria computed in the previous subsection were fully barotropic when the fine-grained enstrophy $Z_{0}$ was depth-independent, we expect a tendency toward barotropization by increasing $\\beta$.\n\nConsider for instance the case where $\\psi_{0}$ is an SQG-like mode\nsuch that $q_{0}=\\beta y$ for $z<-H_{1}$, with $H_{1}\\ll H$ and\nassociated with a surface fine-grained enstrophy $Z_{0}=Z_\\text{surf}$\nfor $z>-H_{1}$. The initial profile of fine-grained enstrophy $Z_{0}$\nis thus given by (\\ref{eq:Z0_profile_PieceWise_TwoLayers}), with\nthe interior enstrophy $Z_\\text{int}=Z_{p}$ prescribed by the available\npotential enstrophy (\\ref{eq:Potential_Enstrophy}). The question\nis then to determine whether the results obtained previously without\nbeta effect still hold in its presence.\n\n{In order to apply MRS theory properly in the presence of beta, one must consider southern and northern boundaries. Indeed, MRS theory relies on the conservation of the global fine-grained PV distribution. Strictly speaking, this distribution is not conserved in the doubly periodic configuration on a beta plane, since any fluid particle that travels $2\\pi$ in the $y$ direction gains a value $-2 \\pi \\beta$, and one cannot know a priori how many times a fluid particle will go around the domain in the $y$ direction before the flow reaches equilibrium. In practice, the theory can still be used in the doubly periodic case to make qualitative predictions, as explained in the next section.}\n\n{In the channel configuration, the streamfunction is a constant at $y= \\pm \\pi$, and we can compute the statistical equilibria in two limit cases. First, when $\\beta=0$, we recover the configuration with surface intensification of the fine grained enstrophy profile $Z_{0}$, for which we found that statistical equilibria were the SQG-like modes associated with the gravest Laplacian horizontal modes, see subsection \\ref{sub:Computation-of-the-RSM-f-plane}. Second, with $\\beta$ sufficiently large, such that $Z_{p}\\gg Z_\\text{surf}$, the statistical equilibria can be computed by considering the case of a constant fine-grained enstrophy profile $Z_{0}$. In the general case, the values of the streamfunction at the northern and southern boundaries are different, and are determined using constraints given by mass conservation and circulation conservation along each boundary, see e.g. \\cite{PedloskyBook}. Here we consider the simple case $\\psi(x,y=\\pm\\pi,z)=0$, with zero total circulation ($\\int_{\\mathcal{D}} \\mathrm{d} x \\mathrm{d} y q=0$) at each depth. 
It is necessary to take into account the conservation of linear momentum (associated with translational invariance of the problem in the $x$ direction), which provides an additional constraint $\\mathcal{L}[\\overline{q}]=L_{0}$, with}\n\\begin{equation}\n\\mathcal{L}[\\overline{q}]= \\int_{-H}^0 \\mathrm{d} z \\ \\int_{\\mathcal{D}} \\mathrm{d}x\\mathrm{d}y\\ \\overline{q}y.\\label{eq:Momentum_y} \\end{equation}\n{Critical points of the variational problem (\\ref{eq:MinimumEnstrophyVP}) with this additional constraint satisfy}\n\\begin{equation}\n\\overline{q}=\\left(\\beta_{t} \\psi -\\mu y\\right)Z_{0}, \\label{eq:q-psi_with_linear_momentum}\n\\end{equation}\n{where $\\mu$ and $\\beta_t$ are respectively the Lagrange multipliers associated with the linear momentum conservation and the energy conservation. Equilibrium states therefore satisfy} \n\\begin{equation}\nZ_{0}\\left(\\beta_{t} \\psi - \\mu y\\right)=\\Delta\\psi+\\frac{\\partial}{\\partial z}\\left(\\frac{f_{0}^{2}}{N^{2}}\\frac{\\partial}{\\partial z}\\psi \\right) +\\beta y\\ . \\label{eq:critical_points_channel}\n\\end{equation}\n{Taking $\\mu=-\\beta\/Z_{0}$, we recover the $f$-plane case of the previous subsection: solutions of Eq. (\\ref{eq:critical_points_channel}) are barotropic-baroclinic modes on the vertical with eigenvalue $-\\lambda_m^2$, and are Laplacian eigenmodes in the channel (with eigenvalue denoted by $-K^2$), with $\\beta_t Z_0=-\\left(K^2+\\lambda_m^2\\right)$. These states characterized by $\\beta_{t}=-\\left(K^2+\\lambda_m^2\\right)\/Z_{0}$, $\\mu=-\\beta\/Z_{0}$ are drifting towards the west at the speed of Rossby waves: $U_{drift}=-\\beta\/\\left(\\lambda_m^2+K^2\\right)$. The enstrophy minimizer among these states is the one associated with the smallest value of $\\left|\\beta_{t}\\right|$, namely the barotropic mode associated with the gravest Laplacian eigenmode consistent with the circulation and momentum constraints. When $\\mu \\ne -\\beta\/Z_0$, Eq. (\\ref{eq:critical_points_channel}) can be inverted, and $\\psi$ is barotropic since $(\\beta+\\mu Z_0) y$ is depth-independent. The computation of the minimum enstrophy state is therefore equivalent to computing the minimum enstrophy state of a barotropic flow in a channel. A complete discussion of minimum enstrophy states in a barotropic channel is presented in \\cite{CorvellecPhD}, see also \\cite{MajdaWangBook}, with detailed computations including the effect of bottom topography and of non-zero circulations at the boundaries. The important point here is that in the large beta limit, the statistical equilibrium states are barotropic.}\n\n\\section{Numerical simulations \\label{sec:Numerical-simulations}}\n\n\\subsection{Experimental set-up\\label{sub:Numerical-settings}}\n\nWe consider in this section the final state organization of an initial SQG-like\nmode (defined in subsection \\ref{sub:SQGlike_modes}), varying the values of\n$\\beta$ in a doubly periodic square domain of size $L=2\\pi$. More precisely, we\nconsider a vertical discretization and a horizontal Galerkin truncation of the\ndynamics (\\ref{eq:FullDynamics}), for an initial potential vorticity field\n$q_{0}=q_{0,surf}(x,y)\\Theta\\left(z+H_{1}\\right)+\\beta y$, such that $q_{0}=\\beta\ny$ in the interior ($-H<z<-H_{1}$). The surface PV $q_\\text{surf}(x,y)$ is a random field with random\nphases in spectral space, and a Gaussian power spectrum peaked at wavenumber\n$K_{0}=5$, with variance $\\delta K_{0}=2$, and normalized such that the total\nenergy is equal to one ($E_{0}=1$). 
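{A minimal sketch of one way such a random surface field can be generated is given below; the Gaussian ring spectrum, the random phases and the grid size follow the description above, while the final rescaling to $E_{0}=1$ (which requires inverting (\\ref{eq:PV_definition}) for the streamfunction) is only indicated schematically and may differ from the procedure actually used in the simulations.}\n\\begin{verbatim}\nimport numpy as np\n\nn = 512                                  # horizontal resolution\nK0, dK0 = 5.0, 2.0                       # peak wavenumber and spectral width\nkx = np.fft.fftfreq(n, d=1.0 \/ n)        # integer wavenumbers on [-pi,pi]^2\nky = np.fft.fftfreq(n, d=1.0 \/ n)\nK = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)\n\n# Gaussian spectrum peaked at K0 with width dK0, multiplied by random phases\namp = np.exp(-((K - K0) ** 2) \/ (2.0 * dK0 ** 2))\nphase = np.exp(2j * np.pi * np.random.rand(n, n))\nq_hat = amp * phase\nq_hat[0, 0] = 0.0                        # zero horizontal mean\nq_surf = np.real(np.fft.ifft2(q_hat))    # keep the real part of the field\n\n# q_surf would then be rescaled so that the total energy of the resulting\n# streamfunction equals E0 = 1 (inversion of the PV is not shown here).\n\\end{verbatim} 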
This initial condition corresponds to the case discussed in\nsubsection (\\ref{sub:Including-the-beta}), with a vertical profile of fine-grained\nenstrophy given by (\\ref{eq:Z0_profile_PieceWise_TwoLayers}), where the interior\nenstrophy is given by the available potential enstrophy\n(\\ref{eq:Potential_Enstrophy}): for $z<-H_{1}$, $Z_{0}=4\\pi^{4}\\beta^{2}\/3$.\n\nWe perform simulations of the dynamics by considering a vertical discretization\nwith $10$ layers of equal depth, a horizontal discretization of $512^{2}$, $H=1$,\nand $F=(Lf_{0}\/HN)^{2}=1$, using a pseudo-spectral quasi-geostrophic model for the horizontal dynamics\n without small scale dissipation \\citep{SmithVallis01}. We choose $H_{1}=H\/10$ for the initial condition, so that there is nonzero enstrophy only in the upper layer in the absence of a beta\neffect. We also perform experiments in the presence of small scale dissipation\n(Laplacian viscosity or hyperviscosity), and find no qualitative differences as\nfar as the large scale horizontal structure and the vertical structure of the flow\nwere concerned. The only difference in the presence of small scale dissipation is the\ndecay of the enstrophy in each layer, because small scale dissipation smooths out\nPV filaments. Because of the inverse energy cascade, the energy remains constant at lowest\norder even in the presence of small scale dissipation.\n\nTime integration of the freely evolving dynamics proceeds for about $250$ eddy turnover times.\nTypically, i) the unstable initial condition leads to a strong turbulent stirring and the flow\nself-organizes rapidly into a few vortices, which takes a few eddy turnover times, ii) same sign\nvortices eventually merge together on a longer time scale (a few dozen eddy turnover times),\nand iii) the remaining dipole evolves very slowly, on a time scale of hundreds of\neddy turnover times. In the absence of the $\\beta$-effect, a good indicator of the convergence is given\nby the convergence of the $q-\\psi$ relation to a single functional relation. As we shall see in\nthe following, this phenomenology is complicated by the presence of beta.\n\n{We saw in the previous section that a proper application of the MRS theory in the presence of a beta plane would require considering northern and southern boundaries, since the fine-grained PV distribution is not conserved in the doubly-periodic domain case. However, we can still use the theory to interpret the simulations in the doubly periodic case qualitatively. For small values of $\\beta$, planetary vorticity gradients are negligible with respect to relative vorticity and stretching terms in the PV expression, in which case one can check that the fine grained PV distribution remains nearly the same during the flow evolution, and MRS theory can be applied. For large values of $\\beta$, the fluid particles cannot escape in the meridional direction: they are confined in zonal bands having a width of the order of the Rhines scale\\footnote{Here $E_{0}=1$ is the initial energy, so that $E_{0}^{1\/2}$ is a good metric for typical velocities of the flow (assuming it becomes barotropic).} $L_R \\sim 2 \\pi E_0^{1\/4}\/\\beta^{1\/2}$, and thus the fine-grained PV distribution remains close to the initial one, and MRS theory can again be applied.} \n\n\\subsection{SQG-like dynamics ($\\beta = 0$) \\label{sub:SQG-like-dynamics-(small-beta)}}\n\nThe case with $\\beta=0$ is presented in Fig. 
\\ref{fig:beta2}.\nThe initial PV and streamfunction fields have horizontal structures\naround wave number $K_{0}=5$, which is associated with an SQG-like\nmode of typical e-folding depth $f_{0}\/(NK_{0})\\approx0.1$. \n\nAn inverse cascade in the horizontal leads to flow structures with an horizontal wavenumber $K$ decreasing with time, associated with a tendency toward barotropization: the $e$-folding depth of the SQG-like mode increases as $f_{0}\/2NK$. According to statistical mechanics arguments of the previous section, the flow should tend to the ``gravest'' horizontal SQG-like mode ($K=1$), which is also the ``gravest'' vertical SQG mode, i.e. the one associated with the largest $e$-folding depth. The concomitant horizontal inverse cascade (most of the kinetic energy is in the gravest horizontal mode $K=1$ at the end of the simulation) and the increase of the $e$-folding depth are observed on Fig. \\ref{fig:beta2}, showing good qualitative agreement between statistical mechanics and numerical simulations.\n\n {There are however quantitative differences between the observed final state organization and the prediction of MRS equilibria characterized by a linear $\\overline{q}-\\psi$ relation. As seen on Fig. \\ref{fig:q-psiBETA0}, the $\\overline{q}-\\psi$ relation is well defined, but it is not linear. Taking into account higher order invariants in the MRS theory would allow to better describe this $\\sinh$-like relation, see e.g. \\citep{BouchetSimonnetPRL09}, but it may not be sufficient to describe the ``unmixed'' cores of the remaining vortex in the final dipole \\citep{SchecterPRE03}.}\n\nThe presence of these ``unmixed'' PV blobs implies the existence of structures smaller than wavenumber $K=1$. Because each projection of the PV field on a different wavenumber is associated with an SQG-like mode associated with a different $e$-folding depth, the vertical structure of the flow is actually a linear combination of SQG-like modes associated with different wavenumbers. A consequence is that the effective e-folding depth of the kinetic energy is $f_{0}\/(2NK^\\text{eff})$. We estimated the coefficient $K_\\text{eff}\\approx3.5$ by considering the linear $q-\\psi$ relation passing through the extremal points of the observed $q-\\psi$ relation of Fig. \\ref{fig:q-psiBETA0}. {Let us conclude on this case by discussing the relaxation towards equilibrium: we observed that the $q-\\psi$ relation did not change at all between $150$ and $250$ eddy turnover times, proving that the system reached a stationary state. In addition, we see in Fig. \\ref{fig:Emodes}-a that the kinetic energy of each vertical mode (and therefore of each layer) reaches a plateau after a few eddy turnover times. As expected from the MRS theory, the statistical equilibrium is sharp, there is no transient.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{FigF1BETA0_xfig}\n\\caption{case $\\beta=0$ (SQG-like). Only the fields in upper, middle and lower\nlayer are shown.\n \\label{fig:beta2}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{scat3beta0}\n\\caption{$q-\\psi$ relation in the upper layer for $t=250$ eddy turnover time.\n\\label{fig:q-psiBETA0}}\n\\end{center}\n\\end{figure}\n\n\\subsection{Effect of beta ($\\beta \\sim 1$)}\n\nWe now consider the free evolution of the same initial condition $\\psi_{0}(x,y,z)$\nas in the previous subsection, but on a beta plane. 
Because\n$\\beta\\ne0$, the initial condition for potential vorticity is different\nfrom that in the previous section: the total energy and initial velocity\nfields are the same as before, but the vertical profile of fine grained\nenstrophy is different, since there is now available potential enstrophy\nin the interior. As explained in subsection \\ref{sub:Computation-of-the-RSM-f-plane}, this contribution from interior fine-grained enstrophy should increase\nthe relative barotropic component of the statistical equilibrium state.\nThis is what is actually observed in the final state organization\nof Fig. \\ref{fig:beta3}. We conclude that in this regime, the beta\neffect is a catalyst\\footnote{By catalyst we do not mean a dynamical effect, as in chemistry, but simply the fact that barotropization in the final state is enhanced in the presence of a beta plane.} of barotropization, as predicted by statistical mechanics.\n\n{Additionally, the $q-\\psi$ relation presented in Fig. \\ref{fig:q-psiBETA1} shows very good agreement with the prediction of our computation of barotropic MRS equilibria carried out in subsection \\ref{sub:Including-the-beta}: we observe a linear relation between $(q-\\beta y)$ and $\\psi$ with a slope $\\beta_{t}Z_{0}=-1$. This means that the dipole structure observed in the streamfunction field is the actual gravest horizontal Laplacian eigenmode $K=1$, and that this eigenmode is drifting westward at a speed $\\beta\/K^{2}$. In addition, we see in Fig. \\ref{fig:Emodes} that after a few eddy turnover times, almost all the kinetic energy is in the barotropic mode, and that the energy levels in each vertical mode reach a plateau: there are no transients.}\n\n{We conjecture that the good agreement with MRS theory occurs because the initial distribution of PV levels and the initial energy place the flow in the low energy regime considered in the Appendix. In addition, for intermediate values of $\\beta$, the Rhines scale $L_R=2\\pi E_0^{1\/4}\/\\beta^{1\/2}$ is of the order of the domain scale. The beta effect limits the meridional displacements of fluid particles to this scale; this is why the computations performed in Appendix 2 in a channel geometry are qualitatively relevant to this case.} \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{FigF1BETA2p1_xfig}\n\\caption{Case $\\beta=2.1$. See Fig.\\ \\ref{fig:beta2} for the legend. Note that\nthe color scales are changed at each time; this is why the interior ``beta plane'' is not visible in the upper panel of potential\nvorticity. \n\\label{fig:beta3}}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth]{scat3beta2p1}\n\\caption{{Relation between $q-\\beta y$ and the streamfunction in the upper layer, at $t=100$ eddy turnover times, for $\\beta=2.1$. The fact that this relation is close to linear shows that the small energy limit considered in this paper is relevant to this case. The coefficient of the slope is approximately one, meaning that the energy is condensed in the gravest horizontal Laplacian eigenmode, which is propagating westward at the speed of barotropic waves. This is consistent with the computation of minimum enstrophy states on a beta plane, see subsection \\ref{sub:Including-the-beta}}. 
\\label{fig:q-psiBETA1}}\n\\end{center}\n\\end{figure}\n\n\\subsection{Effect of strong Rossby waves ($\\beta\\gg1$) \\label{sub:High-values-of-beta}}\n\nFor large values of beta there is still an early stage, lasting a few eddy turnover times, of turbulent stirring of the unstable initial condition. But this initial stirring is limited to scales smaller than the Rhines scale $L_{R}=2\\pi E_{0}^{1\/4}\/\\beta^{1\/2}$, which eventually leads to a zonal PV field perturbed by Rossby waves around this scale, see Fig. \\ref{fig:beta4}. The dynamics is then dominated by Rossby waves, just as in a single layer model \\citet{Rhines77,Rhines75}. \n\n\nBecause the perturbation is confined in the upper layer, irreversible turbulent stirring occurs in the upper layer only, but not at the bottom. We did check that the global distribution of coarse-grained PV levels did not change in the lower layer through time, while this distribution was changed in the upper layer. In this case, the fundamental assumption that the system explores evenly the available phase space through turbulent stirring is not valid. One might reasonably argue that the statistical mechanic theory is incomplete, for it cannot identify the phase space that can be explored. In any case, we conclude that large values of beta prevent the convergence toward this equilibrium, leaving the question of the convergence itself to future work. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{FigF1BETA10_xfig}\n\n\\caption{case $\\beta=10$, see Fig. \\ref{fig:beta2} for legend. \\label{fig:beta4} }\n\\end{center}\n\\end{figure}\n\n\n\n\\subsection{Turbulent stirring vs wave dynamics \\label{sub:The-key-role-of-beta}}\n\nWe summarize the effect of planetary vorticity gradients on Fig.\\\n\\ref{fig:Ebaro_vs_beta} by considering the ratio $E_\\text{baro}\/E_{0}$\nof barotropic energy to total energy, after $250$ eddy turnover times,\nfor varying values of $\\beta$. The plain red line represents statistical\nmechanics predictions. In order to allow comparison with the numerical\nexperiments, for which the horizontal PV field did present structures\nsmaller than the gravest mode $K=1$ for low values of $\\beta$ (see\nsubsection \\ref{sub:SQG-like-dynamics-(small-beta)}), we computed\nthe vertical structure with equation (\\ref{eq:VertStruc}), using\n$K_\\text{eff}=3.5$, and using the enstrophy profile $Z_{0}(\\beta)$ of\nthe initial condition $\\psi_{0}$. Strictly speaking, this effective\nhorizontal wavenumber $K_\\text{eff}$ should vary with beta between\n$K_\\text{eff}=3.5$ for $\\beta=0$ and $K_\\text{eff}=1$ for $\\beta>1$, but taking into account this variations would not change much the shape of the red plain curve.\n\nThe critical value of $\\beta$ between the turbulent stirring regime for which statistical mechanics predictions are useful and the wave regime can be estimated by considering the case in which the Rhines scale $L_{Rh}\\sim2\\pi\\left(E_{0}^{1\/2}\/\\beta\\right)^{1\/2}$ is of the order of the domain scale $L=2\\pi$. 
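{As a quick numerical check of this estimate (a minimal Python sketch; the nondimensional values $E_{0}=1$ and $L=2\\pi$ are those of the simulations discussed here):\n\\begin{verbatim}\nimport numpy as np\n\nE0 = 1.0           # total energy of the runs (nondimensional)\nL  = 2 * np.pi     # domain scale\n\ndef rhines_scale(beta):\n    # L_R = 2*pi*E0**(1\/4)\/beta**(1\/2)\n    return 2 * np.pi * E0**0.25 \/ np.sqrt(beta)\n\n# critical beta obtained by setting L_R = L\nprint(4 * np.pi**2 * np.sqrt(E0) \/ L**2)      # -> 1.0\n\nfor beta in (2.1, 10.0, 100.0):\n    print(beta, rhines_scale(beta) \/ L)       # L_R\/L ~ 0.69, 0.32, 0.10\n\\end{verbatim}}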
Here the total energy is $E_{0}=1$, which renders the critical value $\\beta\\sim1$ in our simulations.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{baro_vs_betaF1_xfig}\n\\caption{{Variation of the Barotropic energy $E_{baro}$ normalized by the total energy $E_0$, varying the ratio of the domain scale with the Rhines scale $L\/L_R=\\beta^{1\/2} E_{0}^{-1\/4}L\/2\\pi$, at the end of each run, for the same initial condition $\\psi_{0}$ (here $L=2\\pi$, $E_{0}=1$, so only $\\beta$ varies). Red continuous line: predictions from statistical mechanics. The vertical structure of the equilibrium state is computed with equation (\\ref{eq:VertStruc}), using $K_\\text{eff}=3.5$, and using the enstrophy profile $Z_{0}(\\beta)$ of the initial condition $\\psi_{0}$. The dashed-dotted line separates the wave regime from the turbulent stirring regime, in which we expect the statistical mechanics predictions to be valid.} \\label{fig:Ebaro_vs_beta}}\n\\end{center}\n\\end{figure}\n\nBecause the ratio $E_\\text{baro}\/E_{0}$ gives incomplete information on\nthe flow structure, and because dynamical information is useful to\nunderstand where statistical mechanics fails to predict\nthe vertical structure, we represent in Fig. \\ref{fig:Emodes} the\ntemporal evolution of the energies $E_{m}$ of the baroclinic modes%\n\\footnote{Note that with this notation, $E_{m=0}=E_\\text{baro}$ is the energy of\nthe barotropic mode, which is in general different from the total\nenergy of the initial condition $E_{0}$.%\n} defined in (\\ref{eq:VerticalModes}).\n\nFor small values of beta (see the first panel of Fig. \\ref{fig:Emodes}),\nthe dynamics reaches the equilibrium after stirring during\nfew eddy turnover times, and the contribution of each mode $E_{m}$\nsimply reflects the SQG-like structure of the final state. \nFor values of beta of order one (see the second panel of Fig. \\ref{fig:Emodes}),\nthe same stirring mechanism leads to a barotropic flow after\na few eddy turnover times.\nFor values of $\\beta$ larger than one, there is still an initial\n stirring regime, but the duration of this regime tends to\ndiminish with larger values of beta. We observe a slow transfer of\nenergy from first baroclinic to the barotropic mode in the case $\\beta=10$\n(see the third panel of Fig. \\ref{fig:Emodes}), but it is unclear\nif it will eventually lead to a condensation of the energy in the\nbarotropic mode. In the case $\\beta=100$ (see the fourth panel of\nFig. \\ref{fig:Emodes}), there is in average almost no energy transfers\nbetween the vertical modes, after the initial stirring regime. Again,\nit is unclear if interactions between waves would eventually condense\ntheir energy into the barotropic mode in a longer numerical run. The\nstrong temporal fluctuations of the contribution of each vertical\nmode visible on the wave regime of figure Fig. \\ref{fig:Emodes} may\nexplain why they are large variations of $E_\\text{baro}\/E_{tot}$ (in addition\nto the mean tendency of a decay with $\\beta$) in the wave regime\n figure \\ref{fig:Ebaro_vs_beta}. \n\n{To conclude, in the turbulent stirring regime (i.e. $L_R \\sim L$ or $L_R>L$) MRS statistical mechanics predictions are qualitatively correct, and there are no ``transients'' (no temporal fluctuations) once the flow is self-organized. By contrast, in the wave regime (i.e. 
$L_R \\ll L$) , there are strong temporal fluctuations in the mode amplitudes, and we believe that statistical mechanics (either MRS of energy-enstrophy equilibria for the truncated dynamics) can not account for these observations.}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{modes_xfig}\n\n\\caption{Temporal evolution of the contribution of the energy of each baroclinic\nmode to the total energy, for different values of $\\beta$. The modes\nare indexed by $m$, with $m=0$ the barotropic mode and $m=1$ the\nfirst baroclinic mode. \\label{fig:Emodes}}\n\\end{center}\n\\end{figure}\n\n\n\\section{Discussions and conclusions \\label{sec:Conclusion}}\n\nWe have used statistical mechanics to predict the\nflow structure resulting from the self-organization of freely evolving\ninviscid stratified quasi-geostrophic flows. The only assumption of\nthe theory is that the flow explores evenly the available phase space.\nIn order to compute explicitly statistical equilibria, and to discuss\nbasic physical properties of these equilibria, we have focused on\nMRS equilibria characterized by linear relation between potential\nvorticity and streamfunction. For such MRS equilibrium states, we need the total energy and the fine-grained enstrophy in each layer, although all the invariants are implicitly taken into account.\n\nWe explained that these states are expected in a low energy limit\nor in a strong mixing limit. By applying a method proposed by \\citep{Bouchet:2008_Physica_D}, we have shown that MRS equilibria characterized by a linear $q-\\psi$ relation are solutions of a minimum coarse-grained enstrophy variational\nproblem, which generalizes the Bretherton-Haidvogel minimum enstrophy\nprinciple to the stratified case. \n\n\nThe central result of the paper is the elucidation of the physical\nconsequences of the conservation of the fine-grained enstrophy $Z_{0}(z)$\non the structure of the equilibrium states. We showed first that the\naffine coefficients between the coarse-grained PV field and the streamfunction\nare proportional to the fine-grained enstrophy: $\\overline{q}=\\beta_{t}Z_{0}(z)\\psi$.\nThis relation allowed to compute explicitly the statistical equilibria,\nand to discuss their structure depending on the profile $Z_{0}(z)$.\nWhen this profile $Z_{0}$ is depth independent, the equilibrium state\nis fully barotropic, whatever the stratification profile $N$. When\nthis profile $Z_{0}$ is surface intensified, the equilibrium state\nis an SQG-like mode, characterized by an $e$-folding depth $\\sim f_{0}\/NK$,\nwhere $1\/K$ is bounded by the domain scale. Since the $e$-folding\ndepth increases with $K^{-1}$, larger horizontal flow structures\nimply ``more barotropic'' flows. Since statistical equilibria\nare associated with the gravest Laplacian horizontal modes ($K=1$),\nthe equilibrium state is the {}``most barotropic one'' given an\nenstrophy profile $Z_{0}$. \n\nWe conclude that the dynamics leads to the \\emph{gravest\nhorizontal mode on the horizontal (inverse cascade), associated with\nthe gravest vertical structure consistent with the conservation of\nthe fine grained enstrophy. The flow becomes fully barotropic when\nthe fine-grained enstrophy profile is depth independent. }\n\n\nThese results can be used to understand the role of beta in barotropization in the following way: consider an initial SQG-like flow, with $\\beta =0$, prescribed by its streamfunction; its enstrophy profile is surface intensified, and so will be the equilibrium sate. 
But when one switches on the beta effect, with the same initial streamfunction, the contribution of the depth independent part of the fine-grained enstrophy increases, which means a tendency toward a more barotropic equilibria, according to the previous conclusion. This result reflects the fact that in physical space, the initial SQG-like mode stirs the interior PV field (initially a beta plane), which in turn induces an interior flow, which stirs even more the interior PV field, and so on.\n\nWhen the beta effect becomes large enough, the PV\ndistribution at each depth $z$ becomes prescribed by the initial\nbeta plane, so the most probable state should be also characterized\nby a depth independent PV field, i.e. by a barotropic flow. \n\nTo reach the equilibrium state, the PV field must be stirred\nenough to explore the phase space. Yet large values of beta stabilize the initial condition, the system is trapped in a stable state different from the equilibrium state, and the flow dynamics becomes dominated by the interaction between Rossby waves rather than by turbulent stirring. The reason for this trapping is that the dynamics can not provide by itself sufficient perturbations to escape from the stable state and to explore other regions of the phase space, which would attract the system toward the equilibrium state. We observed in numerical simulations that this wave regime did not lead to a barotropization of the flow after a few hundred turnover times. We do not know if still longer numerical runs would lead to more barotropization. We estimated the critical value of beta between the turbulent stirring regime and the Rossby wave regime as the value such that the Rhines scale is of the order of the domain scale, which gives $\\beta=4\\pi^{2}E_{0}^{1\/2}\/L^{2}$.\n\n{In this paper, we focused on the effect of $\\beta$ on barotropization process. More generally, our results show that vertical transfers of energy and momentum are favored by the presence of any lateral potential vorticity gradients, since these gradients provide a source of available perturbation enstrophy. These lateral PV gradients may for instance be due to large scale mean flows set by external process (large scale wind pattern for the oceans, and large scale temperature gradients for the atmosphere). A similar situation occurs in the framework of interactions between waves (or eddies) and mean flows: horizontal momentum can be transfered vertically by inviscid form drag if there are horizontal fluxes of potential vorticity, which require themselves the presence of large scale potential vorticity gradients, see e.g. \\cite{VallisBook}.}\n\n{Another source of fine-grained potential vorticity would be provided by the addition of bottom topography. This should in this case play against barotropization, since the topography induces potential fine-grained enstrophy in the lower layer only. In fact, an initially surface intensified flow may evolve towards a bottom trapped current above the topographic anomaly, which can be explained by the statistical mechanics arguments presented in this paper, see \\cite{Venaille12}.}\\\\\n\nWe now discuss how these results may apply to the mesoscale ocean, and in what respect they may provide interesting eddy parametrization. First, we have considered the relaxation of an initial condition by an inviscid flow. Real oceanic flows are forced dissipated. 
Statistical mechanics predictions should apply if a typical time scale for self-organization is smaller than typical time scale of forcing and dissipation. Issues of forcing and dissipation lead to another difficulty: what is the domain in which the flow self-organizes ? Strictly speaking, it should be the oceanic basin. But forcing and dissipation become dominant at basin scale (with, for instance, the Sverdrup balance setting the gyre structures). In practice, on a phenomenological basis, one could still apply the statistical mechanics predictions by considering an artificial domain scale with prescribed scale of arrest for the inverse cascade, governed say by dissipation processes, but which could not be predicted by statistical mechanics itself.\n\nWe have also neglected bottom friction, because equilibrium statistical mechanics apply for non-dissipative systems. It is generally believed that bottom friction plays a key role in the vertical structure of oceanic flows \\citet{ArbicFriction,ThompsonYoung2006}. On a phenomenological basis, quasi-statistical equilibria could be computed in the high bottom friction limit by adding a constraint of zero velocity at the bottom, for instance by considering $\\psi(-H)=0$ in equation (\\ref{eq:VertStruc}).\n\nThird, equilibrium statistical mechanics predict the final state organization\nof the freely evolving inviscid dynamics, but neither predicts\nthe route toward this state nor the corresponding typical time scales\nfor the convergence toward this state. Various parametrizations that relax\nthe flow toward the equilibrium state, following a path of maximum\nentropy production have been proposed, see \\citet{KazantsevSommeria98,RobertSommeria92}. These parametrization satisfy basic physical constraints satisfied by the dynamics (PV distribution conservation and energy conservation), but the actual dynamics may in some cases follow a different path than one of maximum entropy production. \n\n{For instance, \\citet{SmithVallis01,FuFlierl80} reported the existence of two time scales for the flow evolution in a freely evolving quasi-geostrophic turbulent flow with surface intensified stratification. On a fast time scale, the dynamics lead to a inverse cascade of vertical modes toward the first baroclinic modes, which is surface intensified, and observed the tendency toward barotropization at much larger time scale. The existence of these two time scales is a dynamical effect that can not be accounted by statistical mechanics.} \n\nThe results presented in this paper have highlighted the important\nrole of the fine-grained enstrophy profile, for this imposes strong constraints\nfor the eddy structures. On a practical point of view, from the perspective\nof eddy parametrizations, our result suggest that rather than assuming\na vertical structure of eddies given by SQG modes, or given by combination\nof barotropic and first baroclinic modes, one could compute the eddy\nstructure with equation (\\ref{eq:VertStruc}), assuming that $K$\nis the scale of arrest, and $Z_{0}$ an enstrophy profile that could\nbe deduced from the resolved flow. For instance, $Z_{0}$ could be\nset by the structure of the most unstable mode of local baroclinic\ninstabilities.\n\n\nObservations indicate the presence of long-lived surface intensified\neddies in the ocean \\citep{chelton,eddy1,eddy2}, and we here speculate as to how these eddies may be interpreted according to a statistical \nmechanical theory. 
First, the eddies may be interpreted as local\nstatistical equilibrium states of the continuously stratified dynamics\nassociated with a surface intensified fine-grained enstrophy profile,\nmuch as we considered in the present paper. Second, they could be\ninterpreted as statistical equilibrium states of a 1.5 layer\nquasi-geostrophic model assuming that only the upper layer of the\nocean is active, see e.g. \\cite{VenailleBouchetJPO10}. Third, they\ncould be far from equilibrium states driven by physical mechanisms\npreventing convergence towards statistical equilibrium (dynamical\neffects due to non-uniform stratification, bottom friction, large\nscale forcing). Understanding which hypothesis is relevant to explain\nthe formation of long-lived surface intensified ocean vortices will\nrequire further research. \n\n\\paragraph{Acknowledgments}\n This work was supported by DoE grant DE-SC0005189 and NOAA grant NA08OAR4320752. The authors sincerely thank Peter Rhines and two other reviewers for their very helpful comments provided during the review process. The authors also warmly thank F. Bouchet and J. Sommeria for fruitful discussions, and K.S. Smith for sharing his QG code. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION} \nLithium is of fundamental astrophysical importance. This fragile \nlight element is quickly destroyed in stars when \nexposed to $T{\\ge}2.5{\\times}10^6$ K, making it useful in studying \nmatter transport in stars and providing observational feedback on our \nunderstanding of stellar structure and evolution \n\\citep[for a review, see][]{pinsonneaultarap}. In addition, stellar Li \nabundances have cosmological applications. Combining the \nA(Li)\\footnote{${\\rm A}(Li){\\equiv}{\\log}N({\\rm Li})$ on the usual scale\nwhere the logarithmic number abundance of hydrogen is given by \n${\\log}{N}({\\rm H}){\\equiv}12$.} \nin \nold stars with a) accurate stellar evolution models to account for stellar \ndestruction, and b) accurate chemical evolution models to account for Galactic \nproduction, should lead to the primordial A(Li), which provides \nconstraints on Big Bang nucleosynthesis \\citep{BS85} and an independent check \non cosmological parameters determined by WMAP \\citep{WMAPp}. However, \nthe vulnerability of Li to destruction that makes it a good probe of stellar \ninteriors also complicates the derivation of {\\it initial} halo \ndwarf A(Li)--our best connection to the Big Bang A(Li).\n\nThe astrophysical usefulness of Li, therefore, strongly depends on having \ncorrect stellar models that accurately trace the {\\it in situ} history of Li. \nStandard\\footnote{That is, spherically symmetric models that ignore \nrotation, diffusion, mass loss, magnetic fields, and other physics \nthat could affect real stars \\citep{DDK90,pinsonneaultarap}.} \nstellar models (SSMs) suggest that, for stars of about 1.2$M_\\odot$, \nthe observed surface A(Li) can go down only, \na) during the early pre-main sequence (PMS) when \nsome of the interior Li is actually destroyed, and b) during subgiant \nevolution as the deepening surface convection zone (SCZ) dilutes it. Standard \ntheory predicts no surface Li depletion during the main sequence (MS). \nMore specifically, since Li burns at relatively low temperatures, \nanytime the SCZ reaches deep enough into the star's \ninterior, visible Li depletion will occur at the surface. 
This occurs during \nthe PMS phase and, for stars of around 1.2$M_\\odot$, the SCZ is not deep \nenough during the MS to carry Li into the interior regions where it can be \nburned appreciably. Even during late PMS convective burning, such depletion \nis minimal. For a 1.2$M_\\odot$ star with [Fe\/H]${\\le}-0.1$, the \nmaximum expected depletion on both the MS and the PMS is $<0.1$ dex \n\\citep{pinsonneaultarap}. Therefore, little Li depletion in stars of this \nmass (or higher) is expected on the PMS, and virtually none on the MS, based \non the predictions from SSMs \\citep{conferencep}. \n\nIn contradistinction, striking Li \\citep[and Be;]{BK02} depletion is observed \nin disk MS stars with $6300{\\leq}T_{\\mathrm{eff}}\\leq6900$ K. In the Hyades, \\citet{boesgaardp} \ndiscovered that stars with $6500{\\leq}T_{\\mathrm{eff}}\\leq6850$ K were \ndepleted by ${\\geq}0.5-2$ dex as compared to stars merely 200--300 K hotter \nor cooler. This abrupt F-star Li(Be) depletion (the Li dip) occurs mainly \n{\\it during the MS}, since much younger clusters such as the Pleiades \n(${\\sim}100$ Myr) show a nearly flat Li-$T_{\\mathrm{eff}}$\\ relation \\citep{BBR88,DRS93}. \nThe Li dip presents a challenge to SSMs, which predict negligible \nLi depletion on the MS, and casts doubt on their ability to \neither predict or backtrack Li abundances to initial values. Instead, \nstellar models are required that can explain the Li dip \naccurately by modeling Li abundances over a star's lifetime. \nA number of refinements to SSMs have been proposed to explain the Li dip \n\\citep{balachandranp,Deli2000,SR2005}, but here we focus on \nrotationally-induced mixing (RIM) \\citep{RIMp,RIM2p}. RIM models propose that \na loss of angular momentum, as stars spin down on the MS, could cause slow \nmixing between the photosphere and the hotter denser region beneath the SCZ, \nresulting in the depletion of surface Li.\n\nTo test this theory, \\citet{Litestp} suggest that Li abundances of \nshort-period tidally-locked binaries (SPTLBs) be measured. RIM and stellar tidal \ntheory \\citep{ZBp} together predict that a SPTLB loses most of its angular momentum \nduring the very early PMS, when interior densities and temperatures are too low to \nburn Li. Hence, lacking significant angular momentum to lose and drive future\nrequisite mixing, neither does it suffer MS Li depletion responsible for the Li dip. \nThus, one method to test if angular momentum transfer is indeed the causal force \nbehind the Li dip is to find SPTLBs whose ages and temperatures should place them in \nthe Li dip, and measure their Li. A value of A(Li) higher than otherwise similar Li dip \nstars would support RIM models. V505 Persei, a Pop I intermediate-age short-period binary \nsystem, is made up of two F stars of nearly equal mass, radius, and $T_{\\mathrm{eff}}$\\ \n\\citep{marschallp, tomasellap}. Both masses, $T_{\\mathrm{eff}}$, and ${\\sim}4$ day period \nmake them good candidates to test if an early PMS loss of angular momentum can \nlater preserve Li compared to otherwise similar non-SPTLB stars. We measure \nthe A(Li)s of V505 Per and compare them in the A(Li)\nversus zero-age MS (ZAMS) $T_{\\mathrm{eff}}$\\ plane with stars of intermediate-age open \nclusters, showing in the process that the components of V505 Per have \nA(Li)s higher than single stars of their ZAMS $T_{\\mathrm{eff}}$. 
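{For orientation on the abundance scale defined above, an abundance of, say, A(Li)$=2.7$ corresponds to a number ratio $N({\\rm Li})\/N({\\rm H})=10^{{\\rm A(Li)}-12}\\approx5\\times10^{-10}$, a reminder that Li is a trace element even before any stellar depletion.}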
\n\n\\section{DATA AND ANALYSIS}\n\\subsection{Observations}\nHigh-resolution ($R=45,000$) spectroscopy of HD 14384 was obtained on 30 \nand 31 August 1997 using Keck\/HIRES. \nThree nightly exposures totalled 8 and 5.5 \nminutes on the first and second nights, respectively. \nDebiasing, flat-fielding, scattered light removal, order \ntracing\/extraction, and wavelength calibration were carried out with standard \nroutines in IRAF.\n\nGiven the short integration spans relative to the 4.2 d period, \nwe co-added the 3 nightly spectra. \nThe resulting ${\\lambda}6707$ continuum level Poisson SNR\/pixel \nis 850 and 580 on the first and second \nnights. Spectra were fit with a low order polynomial to perform \na continuum normalization. Figure 1 shows our co-added, normalized \nspectra. The physically similar components of the SB2 are \ndistinguished and identified both by slight \ndifferences in the strengths of the ${\\lambda}6717$ CaI feature and the \nrelative Doppler shifts expected from the orbital ephemeris of \n\\citet{tomasellap}. The relative Doppler shifts of the components in each \nnight's coadded spectrum, needed for the spectrum synthesis described next, \nwere measured from the centroids of the CaI features.\n\n\\subsection{Syntheses and Comparison}\nTo derive Li and Fe abundances, we conducted spectrum synthesis of the \n${\\lambda}$6707 \\ion{Li}{1} region to account for the line blending between \nthe stellar components. Such synthesis requires knowledge of $T_{\\mathrm{eff}}$, ${\\log} g$, \nradius ($R$), and microturbulent velocity ($\\xi$) of each star. The first\nof three of these parameters are taken from \\citet{tomasellap}, who determine radii \nand associated uncertainties from a \\citet{WD71} code-based solution with\nupdated model stellar atmospheres \\citep{MSK92}; these radii agree with \nalternate solutions derived from the radial velocities and $BV$ photometry\nalone to within 0.05 $R_{\\odot}$, which is well within their stated ${\\pm}0.11$ \n$R_{\\odot}$ uncertainty adopted here. The $T_{\\mathrm{eff}}$ values and uncertainties \nof \\citet{tomasellap}, $6512{\\pm}21$ and $6462{\\pm}12$ K for the A and B components, \nare derived from multiparameter ${\\chi}^2$ fitting of 700 {\\AA} of their own high-resolution \n($R{\\sim}$20,000) spectra against synthetic spectra grids. Encouragingly, the $T_{\\mathrm{eff}}$ {\\ }differences \nof the components they derive as part of the orbital solutions agree to within 18 K of the \nspectroscopically-derived value. \n\nThe \\citet{tomasellap} masses and radii yield ${\\log} g=4.33{\\pm}0.02$ for both components.\nWe determined $\\xi$ (1.73 and 1.70 km s$^{-1}$) using the $T_{\\mathrm{eff}}$- and ${\\log}g$-dependent calibration \nof \\citet{microturbulencep}. We note, though, the derived Li abundances are insensitive to\nthe value of log $g$ and $\\xi$. . While [Fe\/H] was eventually determined from a comparison of \nthe observed and synthetic spectra, we adopted an initial metallicity of [Fe\/H]$=-0.35$ from the \nphotometric determination of \\citet{nordstromp}. \n\nWe used the $T_{\\mathrm{eff}}$, log g, and [Fe\/H] values to \ninterpolate model atmospheres from the grids of \\citet{kuruczp}. MOOG\\footnote{http:\/\/as.utexas.edu\/~chris\/moog.html} \nwas used to create a synthetic spectrum for each star using the ${\\lambda}6707$ linelist of \n\\citet{kingp}. The spectra were smoothed by convolving with a Gaussian with FWHM measured \nfrom clean, weak lines in the observed spectra. 
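{The smoothing step is standard; for concreteness, a minimal Python sketch of the operation (the numerical values and the file name below are placeholders for illustration, not the values used in our analysis):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.ndimage import gaussian_filter1d\n\n# assumed Gaussian FWHM (Angstrom) measured from clean, weak lines,\n# and assumed wavelength step (Angstrom per point) of the synthesis\nfwhm, dlam = 0.15, 0.01\nsigma_pix = fwhm \/ (2.0 * np.sqrt(2.0 * np.log(2.0))) \/ dlam\n\nflux_syn = np.loadtxt('synthetic_6707.txt')   # hypothetical MOOG output\nflux_smooth = gaussian_filter1d(flux_syn, sigma_pix)\n\\end{verbatim}}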
Each component's synthetic spectrum was Doppler \nshifted and then combined, using the product of the square radii and the Planck function value at \n6707{\\AA} (a pseudo-monochromatic luminosity) as a weighting factor. This correction for flux dilution\nis appropriate for the first night's observations (orbital phase of 0.74) since each star's \nundiminished flux contributes to the spectrum. The orbital phase (0.97) of second night's observations \nplaces the system on the cusp of primary eclipse; however, the eclipses are very sharp, and the \ntotal system flux diminishment in $B$ at this phase is $\\lesssim$0.01 mag of the ${\\sim}0.5$ mag total at primary eclipse. \n\nFinally, we compared the synthetic spectra to the observed spectra using $\\chi^2$ minimization methods.\nWe used the ${\\lambda}6705$ \\ion{Fe}{1} line to determine [Fe\/H], forcing both \nstars to have an assumed identical Fe abundance. Once a best-fit value of [Fe\/H] \nhad been determined, we moved on to A(Li), which was allowed to differ in \nthe two components to allow for possible differing Li depletion in the two stars. \nThese synthetic spectra are compared to the observed data in Figure 1.\n\n\\subsection{Results}\nThe analysis of the first night's data yielded [Fe\/H]$=-0.15{\\pm}0.03$. The \nquoted error is due to the 1${\\sigma}$ level fitting uncertainties alone; even \nso, our metallicity estimate is in good agreement with \n[M\/H]$=-0.12{\\pm}0.03$ from \\citet{tomasellap}. The metallicity from \nthe second night was not calculated due to the unfortunate placement of the \n${\\lambda}6705$ FeI feature in the secondary star with a \ndetector\/reduction artifact that can be seen in Figure 1; in carrying out the \nLi syntheses, we assumed the [Fe\/H] value from the first night's data. \n\nThe Li syntheses (Figure 1) yield average A(Li) of \n$2.67{\\pm}0.1$ and $2.42{\\pm}0.2$ for the primary and secondary components, \nrespectively. While the best-fit Li abundances differ by only a few hundredths \nof a dex, the larger quoted uncertainties in A(Li) are \ndominated by uncertainties in the continuum location. Contributions from \nfitting uncertainties in the $\\chi^2$ minimization and $T_{\\mathrm{eff}}$\\ uncertainties \namount to only ${\\pm}0.03-0.06$ dex and ${\\pm}0.01-0.02$ dex, respectively; \ncontributions from uncertainties in ${\\xi}$ and log $g$ are similarly small \nor smaller and are ignored. Abundance uncertainties arising from uncertainties \nin the flux dilution factors of each component, which in turn arise from uncertainties \nin $T_{\\mathrm{eff}}$\\ and the \\citet{tomasellap} stellar radii, are $0.01-0.02$ dex.\n\n\\section{DISCUSSION}\n\n\\subsection{Li versus ZAMS $T_{\\rm eff}$}\nInterpreting the A(Li) of our SPTLB components requires that \nthey be placed in the context of the Li dip morphology defined by other single \n(or non-SPTLB) stars. In her comparisons of the Li dip in various open \nclusters, \\citet{balachandranp} found that the $T_{\\mathrm{eff}}$\\ at which the Li dip \noccurs varies based on metallicity, but that the ZAMS $T_{\\mathrm{eff}}$\\ at which the Li \ndip is located does not; this provides a means by which the Li dip \nmorphology of different populations of disk stars can be compared. \nAdditionally, \\citet{balachandranp} found that the morphology of the cool side \nof the Li dip is age-dependent. Some evidence suggests that the Li dip may \nbegin to form as early as an age of 150 Myr \\citep{SD2004} or even 100 Myr \n\\citep{Marg2007}. 
Clearly, comparing our stars with bona fide Li dip stars \nrequires knowledge of our SPTLB components' ZAMS $T_{\\mathrm{eff}}$\\ and age. \n\nFor consistency, we followed the approach of \\citet{balachandranp} to find the \nZAMS $T_{\\mathrm{eff}}$\\ of each star by looking at differences implied by isochrones (and \ntheir assumed color transformations) between our stars in their current \nevolutionary state and on the ZAMS. This required that we first determine the \nage of our SPTLB stars, which we did by placing the components in the radius \nversus mass plane and comparing these locations with sequences from \n[m\/H]$=-0.14$ Yonsei-Yale isochrones \\citep{Y2isop}. As shown in Figure 2, \nthis implies an age of $1.15{\\pm}0.15$ Gyr for the system; the majority of the \nage uncertainty comes from uncertainty in the radii estimates. \n\nWe then used the Yonsei-Yale isochrones with the Green color-temperature relations, \nwhich are also employed by the Revised Yale Isochrones \\citep{isochronep} used \nby \\citet{balachandranp}, to determine the difference \nbetween the $T_{\\mathrm{eff}}$\\ at 1.15 Gyr and on the ZAMS. This $T_{\\mathrm{eff}}$\\ \ndifference was then applied to our current $T_{\\mathrm{eff}}$\\ value from \\citet{tomasellap}, yielding \nZAMS $T_{\\mathrm{eff}}$\\ values of 6483 and 6432K for the primary and secondary \ncomponents. The $1{\\sigma}$ level uncertainties in our interpolation of the \n$T_{\\mathrm{eff}}$-mass relations of the isochrones are ${\\le}12$ K.\n\nFigure 3 presents the Li-ZAMS $T_{\\mathrm{eff}}$\\ diagram containing: {\\ }the V505 Per \ncomponents; the literature data reanalyzed by \\citet{balachandranp} for the \nopen cluster NGC\\,752, having a $1.45$ Gyr age \\citep{AT2009} and \n[Fe\/H]$=-0.15$ \\citep{NGC752p}; the Li data for the 1.75 Gyr, \n[Fe\/H]$=-0.08$ cluster NGC\\,3680\\ \\citep{AT2009}; and the Li data of \n\\citet{balachandranp} for the 650 Myr [Fe\/H]$=+0.13$ Hyades cluster. We \ndetermined the NGC\\,3680\\ object masses using a Legendre polynomial relation \nto map the dual $V$ magnitude and mass abscissas of Figure 4 of \\cite{AT2009}. \nWe then used the Yonsei-Yale isochrones as described above to determine the $T_{\\mathrm{eff}}$\\ \ndifference between the ZAMS and at 1.75 Gyr at a given mass. This difference \nwas then applied to the {AT2009} $T_{\\mathrm{eff}}$\\ values to yield ZAMS $T_{\\mathrm{eff}}$\\ values. \n\n\\subsection{v505 Per versus the Hyades}\n\nIf the v505 Per $T_{\\rm eff}$ values and uncertainties of \\citet{tomasellap} are reliable, \nand if the {\\it relative\\\/} uncertainties in the to-ZAMS $T_{\\rm eff}$ corrections for the \nv505 Per components compared to those of the open cluster comparison stars are not \nseveral times the size of the corrections themselves (${\\sim}30$ K for our binary components \nand ${\\sim}40$ K for similar mass stars in the slightly older NGC 752 cluster), then \nFigure 3 indicates that both our SPTLB components \nare positioned inside the Li dip that is well defined by the Hyades data \\citep[or those of the\nor Praesepe data not shown here; see Figure 12 of][]{balachandranp}. The larger Li abundances \nin the v505 Per components compared to nearly all of the younger and more metal-rich Hyades \ndata in Figure 3 is especially notable given the metallicity difference and the age difference, \nwhich we discuss in turn, between the two. 
\n\nFirst, comparisons of the v505 Per Li abundances with those in cluster stars are most meaningful \nif some account of initial Li abundance differences can be made. An empirical \napproach to parameterize Galactic disk Li production in terms of Fe production is to\nuse the upper envelope of the Li-Fe relation exhibited by large samples of field stars. The field \nstar data over the range $-1{\\leq}\\mathrm{[Fe\/H}{\\leq}0$ in Figure 7 of \n\\citet{LHE} suggest an initial Li-to-Fe (logarithmic by number) relation in \nthe local disk having slope ${\\sim}1$ dex\/dex. For comparison, the Galactic \nchemical evolution model in Figure 9 of \\citet{Travagliop} produces a slope of \n${\\sim}0.7$ dex\/dex over the same [Fe\/H] range, though this slope may be too \nsmall since it is unable to reproduce the initial solar Li abundance. Indeed, determinations\nof the slope of the Li-to-Fe relation using Li abundances on the G-dwarf Li peak of various open \nclusters are significantly larger at 1.4 dex\/dex \\citep{Boes91} and 1.0 dex\/dex \\citep{Cumm11}. \n\nThe Hyades and Praesepe have super-solar metallicities: [Fe\/H]${\\sim}+0.10$ to $+0.15$ \n\\citep{Boes89,BF90}. The local disk field and \\citet{Cumm11} open cluster Li-Fe relation \nthus implies the initial Hyades and Praesepe Li abundances were a factor of 2 larger than \nfor the v505 Per components; the \\citet{Boes91} open cluster Li-Fe relation implies initial \nLi abundances a factor of 2.6 larger than for the v505 Per components. Accounting for initial \nLi differences in this way makes the observed present-day difference between v505 Per and \nHyades Li abundances even more remarkable. \n\nSecond, the red side of the Li dip is known to flatten with increasing age, which may be due\nto increasing Li depletion in the dip stars with age \\citep{balachandranp}. Nevertheless, \ndespite their older age, our SPTLB components exhibit Li abundances a factor of 2 {\\it larger} \nthan nearly all Li detections or upper limits at similar ZAMS {$T_{\\mathrm{eff}}$} in the younger ($\\sim$650 Myr) \nHyades and Praesepe. Once reaching the v505 Per age, stars on the red side of the dip \nin these clusters would presumably have even lower Li abundance than at present. \n\n\\subsection{Comparison with other data}\n \nFigure 3 also compares V505 Per with stars of more similar [Fe\/H]: those in NGC\\,752\\ \nand NGC 3680. The A(Li) of our SPTLB components are a factor of 2-5 larger than the upper \nlimits for the NGC\\,752\\ and NGC 3680 Li dip stars (one 3680 star on the steep blue side \nof the dip is within the error bar of our primary component). Note that \\cite{SRP94} argue \nthat their high-resolution spectroscopy of solar-type dwarfs in NGC\\,752\\ suggests \n[Fe\/H]$=+0.01$ for this cluster, 0.15 dex larger than the canonical value quoted by \\citet{NGC752p}. \nIf so, the above discussion of the initial Li abudnance versus [Fe\/H] relation suggests an even \nlarger difference between the Li depletion factors of our SPTLB components and the NGC\\,752\\ Li \ndip stars.\n\nFinally, it is important to recall that the preservation of Li by SPTLBs \noccurs if angular momentum loss occurs sufficiently early during the PMS, when \ninterior temperatures and densities are too low to burn Li. This early \nsynchronization is predicted to occur in systems with periods below some \ncritical period that is a function of stellar mass and metallicity \n\\citep{Zahn94}. 
The models of \\citet{Zahn94} for Galactic disk stars of mass \n1.2M$_{\\odot}$ indicate this critical period is 6 days. The 4.2d period of \nV505 Per falls under this critical period, though the 1.25 and \n1.27M$_{\\odot}$ components slightly exceed the 1.2M$_{\\odot}$ maximum mass \nconsidered by the modeling of \\citet{Zahn94}. The Hyades SPTLB vB 34 resides \nwithin the blue region of the Hyades Li gap, but does not clearly demonstrate \na Li abundance larger than similar single stars \\citep[see, e.g., Figure 1 of][]{Litestp}. \nHowever, Yonsei-Yale isochrones ([Fe\/H]$=+0.13$, 650 Myr for the Hyades; \n[Fe\/H]$=-0.14$, 1.15Gyr for v505 Per) indicate that the vB 34 masses are 0.20-0.23M$_{\\odot}$ \nlarger than for v505 Per; thus, the 3.1d period of vB 34 may not be below a necessary \ncritical period for its larger mass components. \n\n\\section{CONCLUSIONS}\nLithium is important for probing stellar interiors and evolution. It can be used to \nstudy transport of matter in the stars, Galactic chemical evolution, and BBN. Such uses \ndepend heavily on accurate predictions of Li abundance evolution within stars. \nIt is known that SSMs are unable to explain the Li dip in disk mid-F dwarfs. \nModified stellar models that include the action of rotationally-induced slow \nmixing can explain the Li dip as the result of slow Li mixing in stars \nthat are currently undergoing angular momentum loss and are sufficiently \nmassive that interior temperatures and densities are sufficiently large to \nburn Li as a result of such mixing. \n\nAlthough other types of physical mechanisms have also been proposed as \nexplanations of the Li dip, including diffusion, mass loss, and other types of \nmixing, a variety of observational evidence favors, often quite strongly, the \nRIM-type models over other mechanisms. This evidence includes (but is not \nlimited to): a) the Li\/Be depletion correlation where {\\it both} elements are \ndepleted, but Li more severely \\citep{Stephensp,Deli98,BAKDSp} {\\,}b) the \nBe\/B depletion correlation \\citep{Boes2005} {\\,}c) subgiants in M67 revealing \nthe size and shape of the (MS) stellar preservation region as they evolve \nout of the cool side of the Li dip \\citep{SD2000} {\\,}d) and the early \nMS formation of the Li dip.\n\nHere, we present an independent observational test of the RIM explanation of \nthe Li dip. The $T_{\\rm eff}$ values and uncertainties of \\citet{tomasellap} \nindicate that the components of the mildly metal-poor \n([Fe\/H]${\\sim}-0.15$) intermediate-age (${\\sim}1.1$Gyr) short-period \ntidally-locked binary V505 Per both reside in the Pop\\,I Li dip defined by \nopen cluster observations (assuming the differential ${\\sim}10$ K to-ZAMS\n$T_{\\rm eff}$ corrections suggested by isochrones for our \nstars relative to the younger Hyades and older NGC 752 and NGC 3680 clusters \nare not, in fact, an order of magnitude larger). If angular momentum loss in such a system occurred \nvery early during the pre-main sequence phase, then it would suffer no or \nreduced RIM during the MS compared to non-SPTLB stars occupying the Li dip \nregion; as a result, the V505 Per components would exhibit larger Li \nabundances than otherwise similar stars in the Li gap. 
\n\nWe find that \nthe V505 Per components' Li abundances are at least 2-5 times \nlarger than both: a) the Li upper limits in the ${\\sim}1.5-2$Gyr and \nsimilarly mildly metal-poor ({Fe\/H}${\\sim}-0.15$ and $-0.08$) clusters \nNGC\\,752\\ and 3680, and b) the upper limits and Li detections in the younger \nmetal-rich Hyades and Praesepe clusters. If there exists an initial Li-Fe \nrelation of positive slope, as field star data and open cluster observations and Galactic chemical evolution \nmodels each independently suggest, and initial Li abundances indeed scale with [Fe\/H] in the \nrecent disk, then the Li overabundance of V505 Per is even more dramatic in \nthe case of the younger clusters (which presumably started with a higher \ninitial Li abundance) and perhaps in NGC\\,752\\ if one assumes the higher [Fe\/H] \nvalue of \\citet{SRP94}. \n\nOur results suggest, independently, that angular momentum evolution on the \nMS is responsible for the Li dip, confirming the conclusions drawn from the \nvariety of observational evidence listed above, involving both field and \ncluster dwarfs. SPTLBs with higher-than-normal Li have been found in the \n650-Myr old Hyades \\citep[][see Soderblom et al. 1990 for a related \nidea]{THDP93}, the 4.5Gyr-old M67 \\citep{Deli94}, and in other contexts \n\\citep{Litestp}; and, \nsignificantly, SPTLBs have {\\it normal} Li in clusters, such as the Pleiades, \nwhich are too young for much RIM-related depletion to have occurred \n\\citep{Litestp}. Our results for V505 Per complement these previous findings \nin that the V505 Per SPTLB stars are the hottest high-Li SPTLBs discovered so \nfar, and are indeed very close to the limiting $T_{\\mathrm{eff}}$\\ beyond which the models \nof \\citet{Zahn94} can no longer synchronize both components sufficiently early \nduring the pre-MS to prevent Li destruction. The SCZ of hotter \nSPTLBs is too shallow and the their Hayashi paths too short for the components \nto grab onto each other sufficiently effectively so as to cause \ntidal locking during that same evolutionary phase. \n\nWhile the Li abundances in our V505 Per components likely reflect some Li \ndepletion from a plausible initial abundance in the range \n$A(Li)=3.0-3.3$, which is not unexpected \n\\citep[see section 2.2.1 of][]{Litestp}, they nevertheless suggest that early \ntidal circularization can be efficient in mid-F stars in the Li dip and \nmitigate the effects of RIM on Li depletion. Identification of additional Li \ndip SPTLBs, analysis of their A(Li), and comparison with that of \nsingle stars of similar Li dip position, metallicity, and age would be a \nprofitable means to extend these conclusions. \n\n\\acknowledgments\nThis work was supported by NSF awards AST-0239518 and AST-0908342 to JRK, \nand AST-0607567 to CPD. We thank Bruce Twarog for providing us with the \nNGC\\,3680\\ data. \n\n{\\it Facility:} \\facility{KECK}\n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Background and motivation}\n\\label{sec:background}\n\nIn this section, we review the network by Zhao et al.~ \\cite{Zhao2020JointSR}, since our method relies on it. Then, we discuss the motivation of our method.\n\\subsection{Background}\n\nWe assume the lighting and viewing directions to be identical, as it happens when the flash light is close to the camera lens. 
\nWe generate four maps to represent the SVBRDF: the diffuse albedo $\\rho_{d} \\in \\mathbb{R}^{3}$, specular albedo $\\rho_{s} \\in \\mathbb{R}$, roughness $\\alpha \\in \\mathbb{R}$, and surface normal $\\mathbf{n} \\in \\mathbb{R}^{3}$. Our goal is to compute $\\mathbf{u} = {[\\rho_{d},\\rho_{s},\\alpha,\\mathbf{n}]}$. We use the Cook-Torrance reflectance model \\cite{cook1982reflectance} for rendering.\n\n\\textbf{Network architecture.}\nZhao et al.~\\cite{Zhao2020JointSR} proposed a generative adversarial network for SVBRDF recovery and synthesis. The network is unsupervised, thus no dataset is required for training. With a stationary image as an input, the network produces four SVBRDF maps, by training on different cropped tiles from the input image. The network consists of a \\emph{two-stream generator} and a \\emph{patch discriminator}. \n\nThe generator includes an encoder and two decoders, where two groups of maps of the tiles are generated separately: normal and roughness, diffuse and specular. The maps are then used to render an image. Both this rendered image and the tile from the input image are fed to the discriminator to determine the correctness of the generated SVBRDF maps. Regarding the network structures, please see Zhao et al.~\\cite{Zhao2020JointSR} for more details.\n\n\n\\textbf{Loss function.}\nThe loss function in Zhao et al.~\\cite{Zhao2020JointSR} consists of a guessed diffuse map loss and the adversarial loss:\n\\begin{equation}\\label{Lzhao}\n\\mathcal{L}_\\mathrm{\\added{Zhao}} = \\lambda \\mathcal{L}_\\mathrm{GAN}(G,D) + \\mathcal{L}_{d}(G),\n\\end{equation}\n\\begin{equation}\\label{ganlossu}\n\\mathcal{L}_\\mathrm{GAN}(G,D) = \\mathbb{E}[\\log {Dis}(\\mathbf{x})] + \\mathbb{E}[\\log (1-{Dis}(\\mathbf{y})],\n\\end{equation}\n\\begin{equation}\\label{initd}\n\\mathcal{L}_{d}(G) = \\mathbb{E}\\left[\\|\\tilde{\\mathbf{\\rho_{d}}}-\\mathbf{\\rho_{d}}\\|_{1}\\right].\n\\end{equation}\nThe guessed diffuse map $\\tilde{\\mathbf{\\rho_{d}}}$ is obtained via normalizing the input image, and considering the statistical distribution, which is used as the ground truth of the diffuse map, since the ground truth maps are absent. \n\n\\subsection{Analysis}\nWhen highlights exist in the input image, Zhao et al.~\\cite{Zhao2020JointSR} fail to recover satisfactory SVBRDF maps due to the ambiguous highlight spot. The high intensity region is often classified as part of the specular albedo, resulting in wrong results for the roughness and specular maps (see Figure~\\ref{fig:synthetic-brick}). This is an issue for all existing SVBRDF acquisition methods, due to the ambiguity between illumination and material. \n\nIn this paper, we introduce extra information to resolve the ambiguity: the material maps we wish to recover are stationary, that is they repeat themselves after a certain period. As a consequence, the recovered maps should also be stationary. We enforce the stationarity of the recovered maps with a new loss function, based on their Fourier transform. We focus on the acquisition part of Zhao et al.~\\cite{Zhao2020JointSR} and ignore the texture synthesis part, although it can be easily included.\n\nZhao et al.\\cite{Zhao2020JointSR} also has an issue with computation time: as the network is trained from scratch on each individual image, processing can take up to 4 hours for a single image. We solve this issue with a two-stage training strategy, using a pretrained model for initialization and a fine-tuning stage. The computation time is down to 30~mn for each image. 
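\n{For reference, the rendering step mentioned above (the generated maps are used to render an image that is compared against the input tile by the discriminator) can be sketched as follows. This is a minimal NumPy illustration of Cook-Torrance-style shading under the collocated light\/view assumption stated at the beginning of this section; the specific GGX normal distribution, Schlick Fresnel and Smith shadowing terms, the roughness convention, and all array names are assumptions made for illustration, not a transcription of the implementation of Zhao et al.~\\cite{Zhao2020JointSR}:\n\\begin{verbatim}\nimport numpy as np\n\ndef render_collocated(rho_d, rho_s, alpha, n, w, light=1.0):\n    # rho_d: (H,W,3) diffuse albedo, rho_s: (H,W,1) specular albedo,\n    # alpha: (H,W,1) roughness, n: (H,W,3) unit normals,\n    # w: (H,W,3) unit direction toward the collocated camera\/light.\n    ndw = np.clip(np.sum(n * w, axis=-1, keepdims=True), 1e-6, 1.0)\n\n    # GGX normal distribution, evaluated at n.h = n.w since h = w here\n    d = alpha**2 \/ (np.pi * ((ndw**2) * (alpha**2 - 1.0) + 1.0)**2)\n\n    # Schlick Fresnel: h.v = 1 in the collocated case, so F = rho_s\n    f = rho_s\n\n    # Smith shadowing\/masking with identical light and view factors\n    g1 = 2.0 * ndw \/ (ndw + np.sqrt(alpha**2 + (1.0 - alpha**2) * ndw**2))\n    g = g1 * g1\n\n    f_s = d * f * g \/ (4.0 * ndw * ndw)\n    f_d = rho_d \/ np.pi\n    return light * (f_d + f_s) * ndw\n\\end{verbatim}\nA convenient consequence of the collocated assumption is visible in the sketch: the half vector coincides with the viewing direction, so the Schlick Fresnel term reduces to the specular albedo.}\n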
\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we improve the unsupervised SVBRDF recovery generative adversarial network, by introducing two new loss functions: a Fourier loss function and a perceptual loss function to enforce the stationarity of SVBRDFs, yielding better quality for the SVBRDF maps, especially for input images with intense highlights. Then, we propose a two-stage training strategy to reduce the training time with 8$\\times$ speedup. In the end, our method is able to generate high-quality SVBRDFs and produce more plausible rendering results, compared with the state-of-the-art methods.\n\n\\section{Introduction}\n\nThe reconstruction of real world material appearance is a long standing problem in computer graphics and vision. The reflectance parameters of opaque materials can be modeled by the 6D spatially-varying bi-directional reflectance distribution function (SVBRDF). It is difficult to recover the SVBRDFs of a real world material because of its high dimensionality and the inherent ambiguity of the unknown parameters: color variations could be caused by changes in any material parameter: albedo, roughness or normal. Several previous methods required complex acquisition equipments to densely sample materials in different light and view directions. These can faithfully capture the appearance parameters of a material, but are also very expensive and time-consuming, limiting the accessibility. \n\nRecent works have shown that it is possible to recover the SVBRDFs from a few photos, or even a single image, of the material~\\cite{deschaintre2018single, li2018materials, gao2019deep, Guo2020MaterialGANRC, Guo2021HighlightawareTN}. These lightweight methods used deep neural networks to capture four SVBRDF maps (diffuse and specular albedo, normal map and roughness parameters) from photographs of a material. They usually rely on convolutional neural networks (CNNs), trained on synthetic images and corresponding SVBRDF maps, to model the appearance of real world materials. \n\nThese deep learning-based methods are {\\em supervised} and require large training datasets. These datasets are difficult to acquire~\\cite{li2017modeling, deschaintre2018single, aittala2015two}. Existing methods either need professional designers to generate procedural models, or rely on numerous samples of real world materials. A concurrent approach relies on generative adversarial network (GAN)~\\cite{goodfellow2014generative} to avoid heavy work in dataset collection. Zhao et al.~\\cite{Zhao2020JointSR} proposed the first approach to exploit GAN architecture for unsupervised SVBRDF maps recovery. They rely on a two-stream generator to train the SVBRDF maps (diffuse, specular, normal, roughness) and calculate the adversarial loss. Their network is able to predct plausible SVBRDF maps from a single input image, and does not require any dataset. They also provide high quality texture synthesis through a well-designed encoder-decoder structure. \n\nWhen the input image includes an intense specular highlight, it is difficult for acquisition methods to separate between albedo and illumination. As the specular highlight has a large intensity, it gets a strong priority in the learning process, often resulting in bright spots at the center of the albedo maps. To solve this issue, we need to introduce a specific constraint: we make the hypothesis that the material we acquire is stationary (its features repeat themselves after a certain period). 
We enforce stationarity in the reconstructed SVBRDF maps using a new loss function based on the Fourier transforms of the SVBRDF maps. Also, the rendering loss function based on pixel-wise comparisons do not work well when the exact positions of the camera and the light source are not well known. We introduce a new loss function, based on the perceptual difference between the input image and the reconstructed image~\\cite{johnson2016perceptual}. Combined together, these new loss functions generate high-quality SVBRDF maps from a single input image. In particular, when there are overexposed highlights in the input photograph, our method generates more reasonable SVBRDF maps compare to previous works, leading to more realistic results when re-rendering with different viewing and illumination.\n\nTo speed-up the reconstruction process, we also introduce a two-stage training strategy: we pretrain our network on a single material, which provides the starting parameters for the training on each input image. This pretraining strategy makes the treatment of an image 8 times faster. \n\n\nIn summary, our contributions include:\n\\begin{itemize}\n\\item a Fourier loss function to enforce stationarity in the SVBRDFs, which produces more plausible results when the input image has intense highlights,\n\\item a perceptual loss function that measure the semantic similarity between the input image and re-rendering result, and \n\\item a two-stage training strategy to speedup the train process without quality degradation.\n\n\\end{itemize}\n\nThe rest of the paper is organized as follows. In Sec.~\\ref{sec:related}, we review previous works on SVBRDFs recovery. As our method builds on Zhao et al.\\cite{Zhao2020JointSR}, we present this method in depth in Sec.~\\ref{sec:background}. We present our method and implementation in Sec.~\\ref{sec:recover}. We show and discuss our results in Sec.~\\ref{sec:results}, and conclude in Sec.~\\ref{sec:conclusion}.\n\n\n\\section{Our method}\n\\label{sec:recover}\n\nIn this section, we propose a novel loss for the SVBRDF GAN~\\cite{Zhao2020JointSR} to enforce the stationarity of SVBRDF maps and relax the pixel-wise connection between the guessed diffuse map and the input image. Then we present a two-stage training strategy to reduce the training time cost. Lastly, we show the implementation details.\n\n\n\\subsection{Stationarity-aware loss function}\n\\label{sec:loss}\n\n\n\nWe propose a joint loss, including a Fourier loss and a perceptual loss, where Fourier loss enforces the stationarity in SVBRDF maps and perceptual loss makes the rerendering result more plausible.\nIn Figure~\\ref{fig:newframework}, we show the difference between our method and Zhao et al.~\\cite{Zhao2020JointSR}.\n\n\n\n\\textbf{Fourier loss.} With a single image, it is difficult to separate between the color changes due to the material and those due to illumination. Without guidance, the network tends to place the highlights as part of the albedo or normal map. We introduce an extra constraint: the material should be stationary. In a stationary texture, variations are high-frequency and illumination effects are low-frequency. We introduce a new loss function based on Fourier analysis to enforce this stationarity: we compute the Fourier transform of the guessed diffuse map $\\tilde{\\mathbf{\\rho_{d}}}$ and of the predicted SVBRDF maps $\\mathbf{u}$(${[\\rho_{d},\\rho_{s},\\alpha,\\mathbf{n}]}$). 
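{In code, this comparison amounts to a per-map 2D FFT followed by an $L_{1}$ penalty on the spectra, as formalized in the equation that follows; a minimal NumPy-style sketch (array names and the single-channel shapes are illustrative only, not our actual TensorFlow implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef fourier_loss(maps, rho_d_guess):\n    # maps: list of predicted single-channel maps, each (H, W)\n    # rho_d_guess: (H, W) guessed diffuse map used as the reference\n    ref = np.fft.fft2(rho_d_guess)\n    diffs = [np.abs(np.fft.fft2(m) - ref) for m in maps]\n    return np.log(np.mean(diffs))\n\\end{verbatim}}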
We compute the Fourier loss function as the $L_1$ loss in the logarithmic domain:\n\t\\begin{equation}\\label{Lf}\n\t\t\\mathcal{L}_{F}(\\mathbf{u},\\tilde{\\rho_{d}}) = \\log\\mathbb{E}\\left[\\| \\mathrm{FFT}(\\mathbf{u})-\\mathrm{FFT}(\\tilde{\\rho_{d}})\\|_{1}\\right].\n\t\\end{equation}\nWe use the fact that, after normalization, the guessed diffuse map has a stationary distribution of gray scale. \nWith Fourier loss as a guidance on the frequency domain, the predicted SVBRDF maps will be less affected by the illumination and have similar variations as guessed diffuse map.\n\n\n\\textbf{Perceptual loss.} The exact lighting and viewing directions are sometimes unknown, especially for captured photographs. Using loss functions based on pixel-wise difference between input images and rendered images produces poor results, because we cannot guarantee the consistency of our rendering parameters: the rendered image is not rendered with exactly the same viewing and lighting condictions as the input image. To solve this issue, we use a perceptual loss function, to measure the semantic similarity between the input $I$ and the re-rendered result $R$, via a pretrained VGG-19 network:\n\t\\begin{equation}\\label{Lt}\n\t\t\\mathcal{L}_{P}(I,R) = \\mathbb{E}\\left[\\| \\mathrm{VGG}(I)-\\mathrm{VGG}(R)\\|_{1}\\right].\n\t\\end{equation}\nWith this perceptual rendering loss, the re-rendering result of predicted SVBRDF maps is more realistic and reliable.\n\n\\textbf{Summary.} We present a joint loss function combining these three losses:\n\\begin{equation}\\label{Lfinal}\n\\mathcal{L}_\\mathrm{final} = \\mathcal{L}_\\mathrm{Zhao} + \\lambda_{1}\\mathcal{L}_{F}(\\mathbf{u},\\tilde{\\rho_{d}}) + \\lambda_{2}\\mathcal{L}_{P}(I,R)\n\\end{equation}\nTrained with this joint loss function, the network can achieve better recovery result of SVBRDFs, leading to more reasonable rendering results with novel viewing\/lighting directions, especially when handling images with intense highlights. We will show our recovery results in Sec.~\\ref{sec:recovresult}.\n\n\\textbf{Discussion:} our loss function assumes that all maps being computed for the current SVBRDF have the similarly structured patterns with the same frequency, and that non-stationarity comes from illumination. This is a reasonable expectation for a large class of materials (leather, fabrics, wallpapers), but can be wrong for other materials, with local patterns. We also assume that all maps have the same frequency in their patterns, and that we can use the guessed diffuse map as a guide for learning in the other maps.\n\n\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{synthetic-brick.pdf}\n\t\\caption{\\label{fig:synthetic-brick}%\n\t SVBRDF maps recovered from synthetic images of $1024 \\times 1024$, compared with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. Note how the specular highlight at the center of the image is challenging for all acquisition methods, resulting in dark or bright areas in the specular albedo map, and flattened areas in the normal map. We use the network of~\\cite{deschaintre2018single} as initialization for ~\\cite{gao2019deep} and set the number of input images to one. 
Guo et al.~\\cite{Guo2020MaterialGANRC} can only produce $256 \\times 256$ maps, so we upscale them to the proper size for comparison.\n }\n\\end{figure*}\n\\subsection{Two-stage training strategy}\n\nTraining the network from scratch for each input image is time-consuming: all parameters in the network have to be initialized with random values and trained over and over again for each new input. It can take several hours for the network to converge.\n\nTo improve this process, we propose a two-stage training strategy. We first train our network on an image (e.g. red book) for 10,000 steps to get a pretrained model. For a new input (e.g. brick), we use this pretrained model as an initialization of the network parameters, and then train on this model for another 3,000 iterations to get ``plausible'' SVBRDF maps. The key insight is that, after training for 10,000 steps, the generator acts as prior knowledge about what the four maps look like in general:\n\\begin{itemize}\n \\item the RGB values of the normal map are close to light blue, due to the planarity of the material sample,\n \\item the roughness map looks ``grey'',\n \\item the specular map is ``dark'', and\n \\item the color of the diffuse map mostly depends on the color of the input image.\n\\end{itemize}\n Moreover, as shown in Figure~\\ref{fig:pretrain}, without any extra training, the generator already recovers texture information from the new input image in the SVBRDF maps (although the colors are not ``correct''). With the pretrained parameters as a good initialization, it becomes much easier for the network to reach a plausible recovery of the SVBRDF maps, compared to training from scratch.\n\nWe have tried pretrained models trained on different input images and found little difference in the final results. We provide some results in Figure~\\ref{fig:diffmodel}.\n\n \\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{realphoto-leather.pdf}\n\t\\caption{\\label{fig:photo-leather}%\n SVBRDF maps recovered from real photos of $1024 \\times 1024$ pixels, compared with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. Note how the specular highlight at the center of the image is challenging for all acquisition methods. Our method produces more stationary SVBRDF maps and more plausible rendering results, compared to previous works.\n }\n\\end{figure*}\n\\subsection{Training and implementation}\n\nWe implemented our framework in TensorFlow. The generator and discriminator were trained using Adam optimizers with a fixed learning rate of 2e-5. We set the hyper-parameters $\\lambda$, $\\lambda_{1}$ and $\\lambda_{2}$ to 0.1, 0.1 and 0.2, respectively.\nAt the first stage, we train our network on an arbitrary input image for 10,000 iterations from scratch to obtain a pretrained model. Then, with a new image as input, we fine-tune the pretrained network for another 3,000 iterations to get a plausible result. It takes about 2 hours to get the pretrained model, which we then use for any new input image. It then takes about 30 minutes to train the model on a new image, using an RTX 2080Ti GPU. Note that training from scratch for a new input image, rather than using the pretrained model, requires 20,000 iterations and about 4 hours on the same GPU. Hence, the pretrained model provides an approximately 8-fold speedup.\n\n\n\n\n\\section{Related work}\n\\label{sec:related}\n\nThe problem of appearance capture has been extensively researched. 
Please refer to Guarnera et al.~\\cite{guarnera2016brdf} and Gao et al.~\\cite{gao2019deep} for a more comprehensive introduction. In this paper, we focus on light-weight appearance capture, which can be grouped into multi-image methods and single-image methods according to the number of input images.\n\n\n\\subsection{Multi-image appearance modeling} \n\n\\textbf{Non-learning based methods.} With multiple images as input, previous works capture the SVBRDF through optimization, usually with some domain-specific priors or assumptions, e.g. known illumination~\\cite{chandraker2014shape,hui2015dictionary,riviere2016mobile} or sparsity in some domain~\\cite{hui2017reflectance,dong2014appearance,Xia:2016:Shape}. Aittala et al.~\\cite{aittala2015two} used two photographs (one with flash and one without) to recover the reflectance, assuming the maps are stationary. Xu et al.~\\cite{Xu2016NearField} used two images from a near-field perspective camera, and assumed spatial relationships for reflectance recovery.\n\n\\textbf{Learning based methods.} Recently, deep learning has been widely used in appearance modeling. Deschaintre et al.~\\cite{Valentin2019Flexible} extracted features from each input image via a single-image appearance modeling network (similar to~\\cite{deschaintre2018single}) and then fused the features for SVBRDF recovery, to support an arbitrary number of input images. Gao et al.~\\cite{gao2019deep} proposed an auto-encoder to extract a latent space from SVBRDFs as a material 'prior', and then optimized material maps in this latent space to better leverage the inherent connections between maps. However, their method needs an initial guess of the SVBRDF maps. Guo et al.~\\cite{Guo2020MaterialGANRC} trained a MaterialGAN to produce plausible material maps from a small number (3-7) of images. They used three optimization strategies for the intermediate vector and noise vector in the latent space to learn the correlations in SVBRDF parameters. In order to tackle the shape\/SVBRDF ambiguity, Boss et al.~\\cite{Boss2020-TwoShotShapeAndBrdf} designed a cascaded network for shape, illumination and SVBRDF estimation, using two images captured by a cellphone with the flash on and off.\n\n\n\\begin{figure*}[htb]\n\t\\centering\n\t\\includegraphics[width=0.993\\linewidth]{newframework.pdf}\n\t\\caption{\\label{fig:newframework}%\n\t\t Our method is similar to Zhao et al.~\\cite{Zhao2020JointSR}, with several key differences: we reduce the number of de-conv layers in the decoder to let the $Generator$ output SVBRDF maps of the same resolution as the input. The predicted maps are used to calculate the diffuse loss $L_{d}$ and the Fourier loss $L_{F}$. The input tile and the re-rendered image are fed into the discriminator to get the adversarial loss $L_{adv}$, and into a pretrained VGG-19 network to get the perceptual loss $L_{p}$. Our new loss terms are shown in red in the architecture.\n\t}\n\\end{figure*}\n\n\n\n\\subsection{Single-image appearance modeling} \n\nAnother group of works uses only a single image as input. Aittala et al.~\\cite{aittala2016reflectance} proposed a convolutional neural network (CNN) to extract a neural Gram-matrix texture descriptor from a single image to estimate the reflectance properties of stationary textured materials. Under the same assumption, Zhao et al.~\\cite{Zhao2020JointSR} proposed an unsupervised generative adversarial network for joint SVBRDF recovery and synthesis. 
When the input image has intense highlights, their method confuses material properties and tends to produce maps with a bright spot for the specular albedo. It also takes a long time to process each input image. Our method addresses both issues. \n\nLi et al.~\\cite{li2017modeling} trained a CNN with a novel self-augmentation training strategy, which requires only a small number of labeled SVBRDF training pairs, to learn from a large number of unlabeled photos of spatially varying materials. Ye et al.~\\cite{ye2018single} improved on this method and completely eliminated the need for a labeled training dataset. Deschaintre et al.~\\cite{deschaintre2018single} proposed a secondary network to extract global features from each stage of a U-net architecture. They also introduced a rendering loss that improves the estimated reflectance parameters by comparing the appearance rendered from the predicted SVBRDF maps with the input image. Li et al.~\\cite{li2018materials} designed an in-network rendering layer to regress SVBRDF maps from a single image, and a material classifier to constrain the latent representation of a CNN. They also utilized a densely-connected conditional random fields module to further refine the results. A recent work by Guo et al.~\\cite{Guo2021HighlightawareTN} proposed a new convolution variant called highlight-aware convolution (HA-convolution). They train the HA-convolution to ``guess'' the saturated pixels (the specular highlight area) from the surrounding unsaturated area, making the extracted features more uniform. Their work achieves state-of-the-art performance on single-image SVBRDF acquisition and handles images with intense highlights well. Compared to all these works, our method avoids the need for a large training dataset and learns the maps for each input individually, under the stationarity assumption. \n\nTo remove the restriction to planar materials, Li et al.~\\cite{li2018learning} proposed a cascaded network architecture to recover shape and SVBRDF simultaneously from a single image. This method was further extended to handle complex indoor scenes~\\cite{li2020inverse}. \n \nIn terms of predicting procedural texture parameters, Hu et al.~\\cite{hu2019novel} introduced a novel framework for inverse procedural texture modeling: they trained an unsupervised clustering model to select the most appropriate procedural model and then used a CNN pool to map images to material parameters. \n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.993\\linewidth]{pretrainmodel.pdf}\n\t\\caption{\\label{fig:pretrain}%\n\t\tOur two-stage training strategy. At the first stage, we train the network on one image to get the pretrained model. For any new input image, we use this pretrained model as an initialization to start our second-stage training. Note that at the second stage, texture information already exists in the initialized SVBRDF maps, without extra training. \n\t}\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{synthetic-book.pdf}\n\t\\caption{\\label{fig:synthetic-book}%\n\t\tSVBRDF maps recovered from synthetic images of $1024 \\times 1024$ pixels, compared with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. \n\t\tNote how the specular highlight at the center of the image is challenging for all acquisition methods, resulting in dark or bright areas in the specular albedo map, and flattened areas in the normal map. 
We use the network of~\\cite{deschaintre2018single} as initialization for~\\cite{gao2019deep} and set the number of input images to one. Guo et al.~\\cite{Guo2020MaterialGANRC} can only produce $256 \\times 256$ maps, so we upscale them to the proper size for comparison.\n\t}\n\\end{figure*}\n\n\n\n\n\\section{Results and discussion}\n\\label{sec:results}\n\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{realphoto-plastic.pdf}\n\t\\caption{\\label{fig:photo-plastic}%\n\tSVBRDF maps recovered from real photos of $1024 \\times 1024$ pixels, compared with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. Note how the specular highlight at the center of the image is challenging for all acquisition methods. Our method produces more stationary SVBRDF maps and more plausible rendering results, compared to previous works.\n }\n\\end{figure*}\n\n\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.95\\linewidth]{lossabl-leather.pdf}\n\t\\caption{\\label{fig:lossleather}%\n\tAblation study validating the impact of the Fourier loss and the perceptual loss in our SVBRDF recovery network. }\n\\end{figure*}\n\n\\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=0.9\\linewidth]{differentFFT.pdf}\n\t\\caption{\\label{fig:diffFFT}%\n\tInfluence of the Fourier loss on different maps. Without the Fourier loss on the roughness map (R) or the specular map (S), the bright spot still exists.}\n\\end{figure*}\n\n \\begin{figure*}[htbp]\n\t\\centering\n\t\\includegraphics[width=1\\linewidth]{limitation.pdf}\n\t\\caption{\\label{fig:limitation}%\n\t\tFailure case. Images with sharp contrast may not be handled well, since our method fails to obtain a stationary ``guessed'' diffuse map.\n\t}\t\n\\end{figure*}\n\n\\begin{figure}[htbp]\n\t\\centering\n\t\\includegraphics[width=1.0\\linewidth]{differentModel-bookbluegray.pdf}\n\t\\caption{\\label{fig:diffmodel}%\n\tComparison of different pretrained models and different training strategies. There is little difference between the three results. }\n\\end{figure}\n\n\n\n\nWe first compare the results of our network with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR} on both synthetic images and captured photos (Sec.~\\ref{sec:recovresult}). Then we show the influence of the different loss terms through an ablation study (Sec.~\\ref{sec:ablationStudy}) and the effects of our two-stage training strategy (Sec.~\\ref{sec:twoStageTraining}).\n\n\n\\subsection{Comparison with previous works}\n\\label{sec:recovresult}\n\nWe ran our experiments on images with strong highlights to show the effectiveness of our approach. The input real photos and reference maps are from the two-shot dataset~\\cite{aittala2015two} and a free material website$\\footnotemark$\\footnotetext[1]{https:\/\/texturehaven.com}. For the two-shot dataset, we cropped the $3264 \\times 2448$ SVBRDF maps to $1600 \\times 1600$ and resized them to $1024 \\times 1024$ resolution. 
We render the input synthetic images using the Cook-Torrance reflectance model~\\cite{cook1982reflectance} with a point light and camera right above the center of the plane, consistent with our re-rendering process in the network.\n\n\n\\textbf{Comparison on synthetic images.} \nIn Figure~\\ref{fig:synthetic-book} and Figure~\\ref{fig:synthetic-brick}, we compare our results on synthetic images with Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. For Gao et al.~\\cite{gao2019deep}, we use the network of Deschaintre et al.~\\cite{deschaintre2018single} as an initialization and set the number of inputs to one for a fair comparison. Although \\cite{gao2019deep} achieves plausible rendering results through optimization steps, strong artifacts still exist in the recovered SVBRDF maps, leading to poor performance in novel-view rendering. Guo et al.~\\cite{Guo2020MaterialGANRC} also produce unsatisfactory SVBRDF maps and novel-view renderings. Zhao et al.~\\cite{Zhao2020JointSR} preserve some details in the SVBRDF maps but still suffer from the highlight regions, especially in the roughness and specular maps. Our method recovers the stationarity of the SVBRDF maps, which are comparable to the reference maps, thus leading to a more plausible appearance in novel-view renderings. More results are shown in the supplemental materials.\n\n\n\n\n\n\n\\textbf{Comparison on captured photos.} \nIn Figures~\\ref{fig:photo-leather} and \\ref{fig:photo-plastic}, we validate our method on three captured photos, compared to Gao et al.~\\cite{gao2019deep}, Guo et al.~\\cite{Guo2020MaterialGANRC}, Guo et al.~\\cite{Guo2021HighlightawareTN} and Zhao et al.~\\cite{Zhao2020JointSR}. The photos are from the two-shot dataset~\\cite{aittala2015two}, cropped so that the brightest part of the image is at the center. There are no reference SVBRDFs for the captured photos. As shown in Figure~\\ref{fig:photo-leather}, Gao et al.~\\cite{gao2019deep} and Guo et al.~\\cite{Guo2021HighlightawareTN} produce ``polluted'' SVBRDF maps that are highly affected by the highlight regions, while Guo et al.~\\cite{Guo2020MaterialGANRC} produce blurred results that lack detailed structure in the SVBRDF maps. Zhao et al.~\\cite{Zhao2020JointSR} produce plausible diffuse maps, but still suffer from the highlight regions in the other maps. All the previous works fail to handle the ambiguity between reflectance and illumination, yielding unsatisfactory novel-view rendering results. Our method produces more stationary SVBRDF maps and more plausible rendering results under novel views. As shown in Figure~\\ref{fig:photo-plastic}, our method recovers detailed variations in the normal map and suppresses the bright spots in the other three maps. Thus, our method better handles the ambiguity between reflectance and illumination, producing more reasonable rendering results under novel views.\n\n\n \n\\subsection{Ablation study}\n\\label{sec:ablationStudy}\n\nThere are two important components in our SVBRDF recovery network: the Fourier loss and the perceptual loss. We ran an ablation study to validate the impact of these components in Figure~\\ref{fig:lossleather}. We compare Zhao et al.~\\cite{Zhao2020JointSR}, our model (with the Fourier loss $\\mathcal{L}_{F}$ only), our model (with both the Fourier loss $\\mathcal{L}_{F}$ and the perceptual loss $\\mathcal{L}_{P}$) and the reference. 
\n\nZhao et al.~\\cite{Zhao2020JointSR} (first row) suffer from bright spots in the roughness and specular maps, and an over-flattened normal map, leading to obvious differences from the reference SVBRDF maps, where the material properties are stationary. By introducing the Fourier loss, the predicted maps (second row) become more uniform and have fewer artifacts at the center. The novel-view rendering results have more pronounced variations in illumination due to the more uniform SVBRDF maps, thus producing a more plausible appearance compared to Zhao et al.~\\cite{Zhao2020JointSR}. However, we found that the texture variation is blurred in the highlight region. Further introducing the perceptual loss (third row) solves this issue by measuring the semantic similarity between the input image and the re-rendered result. The joint loss function with both the Fourier loss and the perceptual loss produces the best results. The bright spots in the SVBRDF maps have been removed, decoupling them from the illumination in the input images and thus leading to more plausible re-rendering results under different viewing\/lighting directions. More results are shown in the supplemental materials.\n\nWe also tried using the Fourier loss on only some of the SVBRDF maps, such as the roughness or specular maps. The results of this study are shown in Figure~\\ref{fig:diffFFT}. The maps that were computed without the Fourier loss are highly affected by the bright spot, while the other maps are not. We therefore use the Fourier loss on all four maps to ensure stationarity in all SVBRDF maps.\n\n\n\\subsection{Validation of the two-stage training strategy}\n\\label{sec:twoStageTraining}\nIn Figure~\\ref{fig:diffmodel}, we compare the SVBRDF maps recovered with the two-stage training strategy and with a one-stage training strategy (without pretraining). For the two-stage training strategy, we show the results recovered from two pretrained models which are trained with different images (book-red and brick). For the one-stage training strategy (without pretraining), the training is performed on the input image from scratch, with 20,000 iterations. By comparison, we find that the difference between the one-stage and two-stage training strategies is subtle, and the difference between different pretrained models is also not obvious. Thus, we believe that our two-stage training strategy greatly reduces the training time without any quality degradation, and that the pretrained model can be trained on an arbitrary single image under our assumptions.\n\n\n\\subsection{Limitations}\n\n\nWe have identified three main limitations of our method. First, we assume that all maps have the same frequency content in their patterns, and that we can use the guessed diffuse map as a guide for learning the other maps. If the SVBRDF maps have different frequency content, our Fourier loss will provide the wrong guidance. Second, our method does not work well for input images with sharp contrast, as shown in Figure~\\ref{fig:limitation}. The highlights are so strong that the texture information in this region is almost obscured, which prevents us from getting a plausible guessed diffuse map. Third, although the training time has been reduced significantly compared to previous work, each image still needs about 30 minutes of training. 
Further reducing the training time would be valuable, and we leave it for future work.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMagnetic fields generated in laser-matter interactions are of primary interest in high energy density physics \\cite{laserplasma}. For example, magnetic fields generated by the Weibel instability can explain the collisionless shocks that are found in young galaxies and other astrophysical systems \\cite{weibel, astro}. In inertial confinement fusion, magnetic fields are used in one approach to reduce heat losses and thus improve the performance of implosions \\cite{magicf}, and in another approach (using cylindrical implosions) as a necessary criterion to reach ignition \\cite{icfcylindrical}. Also, magnetic reconnection is a commonly studied process which converts some of the magnetic energy of a system into heat, and understanding the heating mechanism well could lead to better hohlraum design for inertial confinement fusion \\cite{magneticreconnection}.\n\nProton radiography is an extensively used technique that characterizes electric and magnetic fields in plasmas over a wide range of field strengths \\cite{roth}. A polyenergetic proton beam, with typical energies on the order of 10 MeV, is usually produced by high-intensity laser interaction with solid targets \\cite{hedfrontiers}. This beam then interacts with an object of interest (such as plasmas or shock-compressed matter) and gets deflected as a result of the Lorentz force or collisions with atoms \\cite{radiographyscatter}. The outgoing beam is captured on a radiochromic film (RCF) stack which can resolve both spatial and energy profiles \\cite{rcf}. \n\nVarious methods have been developed for analyzing proton radiographs. Using the principles of differential scattering and stopping, density profiles of dense matter can be retrieved from radiographs \\cite{laserdriven}. Via scaling laws, field strengths of electric and magnetic fields can be estimated \\cite{measuringeb,quantitative1,quantitative2}. Also, radiographs can be used to qualitatively understand electric and magnetic field structures \\cite{qualitative2,qualitative3,qualitative4}. Furthermore, radiographs can be simulated numerically in order to identify features found in experimental radiographs \\cite{levy,compare}.\n\nIt is only recently that techniques have been developed to reconstruct fields. The relations between the field structures and proton radiographs have been established by Kugland $\\textit{et al}.$ \\cite{Kugland} under certain simplifying assumptions, allowing one to obtain the line-integrated transverse magnetic field from a radiograph by solving a 2-D Poisson equation. Graziani $\\textit{et al.}$ \\cite{morphology} and Kasim $\\textit{et al.}$ \\cite{kasim} provided extensions to this technique, under similar assumptions. As such, radiographs of systems which do not satisfy the assumptions in \\cite{Kugland,morphology,kasim} can only be analyzed qualitatively. \n\nMachine learning, a field of study which enables the performance of a computer (with respect to a certain task) to increase with its experience, has seen many applications in artificial intelligence problems such as image recognition, recommender systems and speech-to-text \\cite{LeCun}. 
Due to their ability to discover structures in high-dimensional data, artificial neural networks (one example of machine learning) have seen many applications in physics, such as analyzing particle accelerator data \\cite{particle}, reconstructing images in optical tomography \\cite{Kamilov:15} and retrieving 3-D potentials in electron scattering \\cite{dynamical}. The flexible nature of artificial neural networks and the prevalence of their use in image recognition problems prompt us to propose their use for imaging 3-D magnetic field structures without a need for simplifying assumptions, addressing the gaps found in existing radiograph inversion techniques.\n\nIn this paper, we first review existing work on inverting proton radiographs. We then introduce key ideas of artificial neural networks and review their applications in physics. Next, we outline the new method of using artificial neural networks to reconstruct magnetic fields and retrieve field parameters such as characteristic lengths. Via simulations, we show a proof of concept for the above ideas, and discuss how noise and the selection of training data affect our results. Using an example, we highlight the need for proton tomography. Finally, we compare the artificial neural network technique with the existing methods of radiograph inversion and suggest a variety of extensions to our research.\n\n\\section{Theory}\n\n\\subsection{Existing methods of retrieving magnetic fields from radiographs}\n\\label{subsec:existingmethod}\n\nIn this subsection, we will outline the foundational work on proton radiograph inversion by Kugland $\\textit{et al.}$, move on to discuss Graziani $\\textit{et al.}$ and Kasim $\\textit{et al.}$'s extensions, and conclude with the gaps in these methods.\n\nFirst, we go through Kugland $\\textit{et al.}$'s \\cite{Kugland} definitions: The coordinates are defined such that the object is placed at $z=0$ (object plane), and $(x, y)$ refers to the coordinates on the image plane (see Fig. \\ref{fig:setup}). At the object plane, the proton's coordinates are denoted as $(x_0,y_0)$. The distance between the proton source and the object is $l$, while the distance from the object to the image plane (radiochromic film stack) is $L$. $a$, the characteristic length of the object, is assumed to be much smaller than $l$ (paraxial limit) and $L\\gg l$ for high magnification.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/setup.pdf}\n\\caption{Diagram of a typical proton radiography setup. A point source a distance $l$ away from the object emits a beam of protons moving generally in the $z$-direction. $L$ is the distance between the object and the image plane.}\n\\label{fig:setup}\n\\end{figure}\n\nIn order to get a tractable result, Kugland $\\textit{et al.}$ have made some simplifying assumptions. We start off with those relating to the proton source: (i) The source can be treated as a point source; otherwise, the radiograph will be blurred and the resolution of field structures will be affected. (ii) The protons deviate from their straight-line trajectories solely due to the Lorentz force interaction with the object, and we can ignore space-charge effects because the beam is charge-neutral as a result of co-moving electrons \\cite{comovingelectrons}. (iii) The angular width of the beam is much greater than $a\/l$ so that intensity variations in the image plane are due to proton interactions with the object, and not the angular distribution of the proton beam. 
\n\nConsider the dimensionless parameter\n\n\\begin{eqnarray}\n\\mu\\equiv \\frac{l\\beta}{a},\n\\end{eqnarray}\n\n\\noindent where $a$ is a characteristic length of the electromagnetic field, and $\\beta$ is a characteristic deflection angle. One core assumption in Kugland $\\textit{et al.}$ is that $\\mu\\ll1$ (hence known as the linear regime), where the spatial variation of the intensity on the screen is small. This is in contrast to the non-linear regime ($\\mu$ on the order of 1 or more) where the intensity variations are large, leading to non-linear features. One example of non-linear features is caustics, which occurs when the Jacobian determinant\n\n\\begin{eqnarray}\n \\left|\\frac{\\partial (x,y)}{\\partial (x_0,y_0)}\\right|=0,\n\\end{eqnarray}\n\n\\noindent resulting in features of high intensity (usually multiples of the background intensity).\n\nFurthermore, assuming that the velocity of the proton $\\mathbf{v}$ is approximately constant while the proton is within the object, (trajectories are not perturbed within the plasma so $\\mathrm{dt=d}z_0\/v$), the only relevant component of the magnetic potential is the one in the $z$-direction, $A_z$. Defining the line-integrated potential as\n\n\\begin{eqnarray}\n\\Phi(x_0,y_0)= \\int_{-\\infty}^{\\infty}A_z(x_0,y_0,z_0)\\mathrm{d}z_0,\n\\end{eqnarray}\n\n\\noindent then with all the assumptions listed above, Kugland $et\\:al.$'s formula for radiograph inversion reads:\n\n\\begin{eqnarray}\n \\mathbf{\\nabla_\\perp^2}\\Phi(x_0,y_0) = \\frac{\\sqrt{2m_p K}}{el} \\Big( 1-\\frac{I}{I_0}\\frac{L^2}{l^2}\\Big),\n\\label{eqn:poisson}\n\\end{eqnarray}\n\n\\noindent where $\\nabla_\\perp$ is the gradient with respect to the transverse coordinates $(x_0,y_0)$, $m_p$ is the mass of the proton, $K$ is the (non-relativistic) kinetic energy of the proton, $e$ is the charge of an electron, $I$ is the proton intensity distribution at the image plane and $I_0$ is the proton intensity distribution in the object plane. As such, given the intensity profile at the object plane $I_0(x_0, y_0)$ and radiograph intensity profile $I(x, y)$ (which can be transformed to $I(x_0,y_0)$ via the mapping $x=\\frac{L}{l}x_0,y=\\frac{L}{l}y_0$) in the regime $\\mu\\ll 1$, one can solve a 2-D Poisson equation to get the line-integrated potential $\\Phi(x_0,y_0)$, thereby allowing one to reconstruct the line-integrated transverse magnetic field.\n\nUsing a series of perturbations, and assuming the linear regime $\\mu \\ll 1$, Graziani $\\textit{et al.}$ \\cite{morphology} proposed a correction term in the right hand side of equation (20) in Kugland $\\textit{et al.}$ (equation (\\ref{eqn:poisson}) in this paper) which leads to a second-order non-linear partial differential equation. The authors then conducted a simulation of their proposed equation, and found that their method reconstructed the line-integrated magnetic field accurately at locations near the peak field strength, but was inaccurate at locations where the field strengths are at least 3 orders of magnitude less than the peak field strength. Also, Graziani $\\textit{et al.}$ briefly sketched a method to retrieve the line-integrated magnetic field in the non-linear regime, assuming that the direction of the proton trajectory within the object is nearly constant. \n\nAnother method, based on computational geometry, was implemented by Kasim $\\textit{et al.}$ \\cite{kasim}. 
This method works well in the beginning of the caustic regime (early part of the non-linear regime where $\\mu\\textgreater 1$), but the relative errors start to become very large in the regime of branching caustics (later part of the non-linear regime). Also, Kasim $\\textit{et al.}$ demonstrated the large errors that come with solving the Poisson equation in Kugland $\\textit{et al.}$ for a system in the non-linear regime. \n\nSo far, we have seen that existing methods of magnetic field reconstruction require simplifying assumptions in order to get an equation which, when solved, gives the line-integrated transverse magnetic field. This highlights two gaps: (i) In later parts of the non-linear regime (e.g. branching caustics regime), there is no known reconstruction method despite the fact that non-linear features do occur in some experimental radiographs \\cite{nonlinear1,nonlinear2,nonlinear3}. In this regime, experimental radiographs are analyzed by comparison to simulated radiographs of a hypothesized magnetic field structure. (ii) In both regimes, there is no reconstruction method that can give the 3-D magnetic field. As we will demonstrate in the next few subsections, the proposed artificial neural network method can address both gaps.\n\n\\subsection{Artificial neural networks (ANN)}\n\\input{chapters\/ann}\n\n\\section{Methods}\n\\input{chapters\/methods}\n\n\\section{Results and discussion}\n\nWe have simulated special cases of equation (\\ref{eqn:parameters}) as a proof of concept of the idea in section \\ref{subsec:recon}. All results shown here come from applying a trained artificial neural network on the testing set (which is not used in training the artificial neural network), and is indicative of the performance when tested on experimental data. Some of the simulation parameters used in the following subsections can be found in table \\ref{table:params}. \n\n\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\multicolumn{1}{|l|}{} & Value in subsection A-E & Value in subsection F \\\\ \\hline\nEllipsoidal blob parameter $a$\/mm & 0.1 & 0.7 \\\\ \\hline\nEllipsoidal blob parameter $b$\/mm & \\multicolumn{2}{c|}{1} \\\\ \\hline\nFlux rope height\/mm & 2 & 0.3 \\\\ \\hline\nFlux rope parameter $a$\/mm & 0.8 & 0.5 \\\\ \\hline\nDistance between proton source and object $l$\/mm & \\multicolumn{2}{c|}{7} \\\\ \\hline\nDistance between object and screen $L$\/mm & \\multicolumn{2}{c|}{93} \\\\ \\hline\nNumber of neurons in input layer & 2500 & 5000 \\\\ \\hline\nNumber of neurons in output layer & \\multicolumn{2}{c|}{2 for subsections A, B, D, F and 1 for subsections C, E } \\\\ \\hline\nVelocity of protons in the $z$ direction\/ms$^{-1}$ & \\multicolumn{2}{c|}{$10^6$} \\\\ \\hline\nVelocity of protons in the $x,\\:y$ direction\/ms$^{-1}$ & Ranges from -5$\\times 10^4$ to 5$\\times 10^4$ & Ranges from -6.9$\\times 10^5$ to 6.9$\\times 10^5$\\\\ \\hline\n\\end{tabular}\n\\caption{Parameters for simulations in the following subsections. 
The proton velocities for subsection F refer to protons in the beam fired in the $z$ direction.}\n\\label{table:params}\n\\end{table*}\n\n\\subsection{Reconstructing magnetic fields, a proof of concept}\n\nConsider the following two fields: (a) a magnetic ellipsoidal blob, representative of fields generated by the Weibel instability \\cite{weibel}, that can be written as\n\n\\begin{eqnarray}\nB_\\phi = B_0\\frac{r_0}{a}\\mathrm{exp}(-(\\frac{r_0^2}{a^2}+\\frac{z_0^2}{b^2})),\n\\label{eqn:blob}\n\\end{eqnarray}\n\n\\noindent where $B_0$ is proportional to the peak field strength, $r_0$ is the distance to the center in the $xy$ plane, $z_0$ is the distance to the center along the $z$-axis, and $a,\\:b$ are characteristic lengths of the ellipsoid; (b) a magnetic flux rope of Gaussian cross section, representative of fields due to laser generated plasma flows \\cite{fluxrope}, can be written as\n\n\\begin{eqnarray}\nB_y = B_0\\mathrm{exp}(-\\frac{x_0^2+z_0^2}{a^2}),\n\\end{eqnarray}\n\n\\noindent where $B_0$ is the peak field strength, $x_0$ and $z_0$ are the distances to the center along the $x$- and $z$- axes respectively, and $a$ is a characteristic length of the Gaussian.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/matlabfig\/89b_reg_hist}\n\\caption{Error histograms for the $\\alpha$ coefficients of two basis fields, using an artificial neural network with 1 hidden layer consisting of 10 neurons. The mean errors are 0.34$\\%$ and 2.74$\\%$ while the median errors are 0.20$\\%$ and 1.29$\\%$ for $\\alpha_1$ (ellipsoidal blob) and $\\alpha_2$ (flux rope) respectively. More parameters can be found in table \\ref{table:params}.}\n\\label{fig:twoalphas_reg}\n\\end{figure}\n\nIn terms of equation (\\ref{eqn:parameters}), we assign $\\alpha_1$ to $B_0$ of the magnetic ellipsoidal blob and $\\alpha_2$ to $B_0$ of the magnetic flux rope. $\\alpha_1$ was varied from 5 to 6 T (defocusing) in steps of 0.01 T while $\\alpha_2$ was varied from 2.01 to 3 T in steps of 0.03 T, and all other parameters were kept constant. Radiographs of 50 by 50 pixels were generated for each configuration. As mentioned earlier, 70$\\%$ of these radiographs were randomly chosen to train the artificial neural network, 15$\\%$ were randomly assigned to the validation set to prevent overfitting, and the trained artificial neural network was used to predict the $\\alpha_1$ and $\\alpha_2$ values on the remainder $15\\%$ of the radiographs. The errors, defined as $\\left|\\frac{\\mathrm{predicted\\:value}-\\mathrm{actual\\:value}}{\\mathrm{actual\\:value}}\\right|$, are plotted in Fig. \\ref{fig:twoalphas_reg}. We see that nearly all the errors are less than $5\\%$, suggesting that the full scale implementation outlined in section \\ref{subsec:recon} will work given enough basis fields. Though there are some undesirable outliers in $\\alpha_2$, it is likely to be a result of inadequate data rather than a flaw in the artificial neural network method. This will be discussed in section \\ref{subsec:data_accuracy}.\n\n\\subsection{Obtaining $\\mathbf{B}$ field parameters from a magnetic ellipsoidal blob}\n\\label{subsec:blobparams}\n\n\\begin{figure}\n\\centering\n\\subfloat[ ]{\\includegraphics[width=0.26\\textwidth]{images\/88_radiograph_noncaustic_17aug2016}}\n\\subfloat[ ]{\\includegraphics[width=0.26\\textwidth]{images\/88_radiograph_caustic_16aug2016}}\n \\caption{(a) Radiograph for a magnetic ellipsoidal blob at B = 0.1 T, $\\sigma$ = 1. 
This is in the non-caustic regime, where the ring around the center is smeared out. (b) Radiograph for a magnetic ellipsoidal blob at B = 0.3 T, $\\sigma$ = 1. This is in the caustic regime, where most of the protons fall into a very thin ring. The scales are in arbitrary units.}\n\\label{fig:caustic}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/matlabfig\/88_reg_hist}\n\\caption{Error histograms for $\\alpha$ and $\\sigma$ for a magnetic ellipsoidal blob, using an artificial neural network with 1 hidden layer consisting of 50 neurons. The mean errors are 0.26$\\%$ and 0.05$\\%$ while the median errors are 0.20$\\%$ and 0.04$\\%$ for $\\alpha$ and $\\sigma$ respectively. More parameters can be found in table \\ref{table:params}.}\n\\label{fig:alphasigma_reg}\n\\end{figure}\n\nIn this subsection, we demonstrate that (i) the artificial neural network method works in the non-linear regime, and (ii) the parameter retrieval concept in section \\ref{subsec:retrieval} can be done. Here, we retrieve the field strength coefficient $\\alpha$ and the scaling factor $\\sigma$.\n\nRadiographs for a magnetic ellipsoidal blob were generated with $\\alpha$ (representing $B_0$ in equation (\\ref{eqn:blob})) ranging from 0.1 to 0.3 T (defocusing) in steps of $2\\times 10^{-4}$ T and $\\sigma$ ranging from 0.9 to 1 in steps of 0.02. This spectrum of $\\alpha$ spans both the caustic and non-caustic regime, as can be seen by the radiographs plotted in Fig. \\ref{fig:caustic}. The histogram of errors are plotted in Fig. \\ref{fig:alphasigma_reg}. We can see that the average errors are well below $1\\%$, suggesting that artificial neural networks can be used for parameter retrieval, an alternative to reconstructing entire magnetic fields. We also see that this method works in the non-linear regime, where $\\mu\\approx 2$.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/matlabfig\/92_branchingprofile}\n\\caption{Horizontal profile of a radiograph for a magnetic ellipsoidal blob at 1.5 T, the branching caustics regime. Notice that there are two maxima in the intensity profile, instead of one in the case of the caustic regime.}\n\\label{fig:branchingprofile}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/matlabfig\/92_reg_hist}\n\\caption{Error histogram of $\\alpha$ for a magnetic ellipsoidal blob spanning the linear, caustic and branching caustic regime, using an artificial neural network with 1 hidden layer consisting of 10 neurons. The mean error is 0.06$\\%$ while the median error is 0.04$\\%$. More parameters can be found in table \\ref{table:params}.}\n\\label{fig:branching}\n\\end{figure}\n\n\\subsection{Branching caustics}\n\\label{subsec:branching_caustics}\n\nThe power diagram method \\cite{kasim} gives relative errors of more than $10\\%$ in the branching caustics regime. Here, we show that the artificial neural network method is flexible enough to accommodate this scenario. We extend the range of field strengths in section \\ref{subsec:blobparams} to range from 0.1 T to 1.5 T in steps of $2\\times 10^{-4}$ T, spanning the linear, caustic and branching caustic regime. As an illustration, the horizontal profile of the radiograph at 1.5 T (branching caustics regime) is plotted in Fig. \\ref{fig:branchingprofile}. The error histogram is plotted in Fig. 
\\ref{fig:branching} and we can see that all the errors are well below $1\\%$.\n\n\\subsection{Effect of noise on accuracy}\n\n\\begin{figure*}\n\\makebox[\\textwidth][c]{\\includegraphics[width=1.25\\textwidth]{images\/matlabfig\/90_reg_hist}}\n\\caption{Error histograms of $\\alpha$ and $\\sigma$ when 5, 10, 20, and 30 percent noise is introduced into the radiographs for a magnetic ellipsoidal blob. The artificial neural network configurations are: 8 hidden layers with 80 neurons per layer, 5 hidden layers with 50 neurons per layer, 6 hidden layers with 70 neurons per layer and 7 hidden layers with 100 neurons per layer for 5, 10, 20, and 30 percent noise respectively. For $\\alpha$, the mean errors are 0.92$\\%$, 1.14$\\%$, 1.49$\\%$ and 1.91$\\%$ while the median errors are 0.73$\\%$, 0.82$\\%$, 1.19$\\%$ and 1.46$\\%$ for 5, 10, 20, and 30 percent noise respectively. For $\\sigma$, the mean errors are 0.18$\\%$, 0.24$\\%$, 0.31$\\%$ and 0.41$\\%$ while the median errors are 0.14$\\%$, 0.18$\\%$, 0.25$\\%$ and 0.31$\\%$ for 5, 10, 20, and 30 percent noise respectively. Notice that it takes an increase from 5$\\%$ noise to 30$\\%$ noise in order to roughly double the mean and median errors, indicating that the artificial neural network method is robust to noise. More parameters can be found in table \\ref{table:params}.}\n\\label{fig:alphasigma_noise}\n\\end{figure*}\n\nSo far we have shown that artificial neural networks trained on noise-free radiographs can retrieve quantities from noise-free radiographs with a high accuracy. We proceed to explore the changes in accuracy when noise is introduced into all the radiographs (training, validation and testing sets). Suppose a pixel in the radiograph has a value of $\\chi$ and we want to introduce random noise of $x\\%$. Then each pixel is replaced by a random value from a Gaussian distribution with a mean of $\\chi$ and a standard deviation of $\\chi\\times x\\%$. This was done for $x$ = 5, 10, 20 and 30 percent on the radiographs in section \\ref{subsec:blobparams}, and the entire process of training and prediction was repeated. The error histograms for $\\alpha$ and $\\sigma$ are plotted in Fig. \\ref{fig:alphasigma_noise}. We notice that for both quantities, it takes an increase from 5$\\%$ to 30$\\%$ noise in order to roughly double the mean and median errors. This demonstrates the robustness of artificial neural networks to noise, although noise does occasionally cause very high errors. It is worth noting that the right model to use is Poisson noise, but that model approximates Gaussian noise for a large number of particles per pixel, which is true in our case. The relationship between errors and input noise for various configurations of artificial neural networks is further explored in \\cite{nwnoise1,nwnoise2,nwnoise3,sizenoise}.\n\n\\subsection{Effect of the amount of training data on accuracy}\n\\label{subsec:data_accuracy}\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/matlabfig\/93_reg_hist}\n\\caption{Error histogram in $\\alpha$ for a magnetic ellipsoidal blob (more parameters in table \\ref{table:params}) when the step size is increased by a factor of 5, leading to less data. 
The mean error is 0.20$\\%$ and the median error is 0.16$\\%$, using an artificial neural network with 7 hidden layers with 40 neurons per layer.}\n\\label{fig:stepsize}\n\\end{figure}\n\nWhile the artificial neural network method seems promising so far, it is reliant on the large amounts of training data (specifically, the amount of information in the data, or information entropy) for its accuracy. To elucidate this fact, we generated data for a magnetic ellipsoidal blob and varied only $\\alpha$ between the values 0.1 to 1.5 T, similar to the scenario in section \\ref{subsec:branching_caustics}, except with a larger step size of $10^{-3}$ T. The error histogram is plotted in Fig. \\ref{fig:stepsize}. In comparison to Fig. \\ref{fig:branching}, we see that having a larger step size and thus having less information leads to an increase in errors. This, combined with the universality of the artificial neural network proved in \\cite{hornik}, suggests that extreme outliers in errors can be overcome by generating more data that increases the information entropy of the data set and re-training the neural network. The relationship between errors and size of data set for various configurations of artificial neural networks is further explored in \\cite{nwsize1,nwsize2,nwsize3,sizenoise}.\n\n\\subsection{Limitations of proton radiography and the need for proton tomography}\n\nProton radiographs do not necessarily form one-to-one relationships with field structures: Suppose that at the edge of a plasma that is facing the proton beam, there is a very strong $\\mathbf{B}$ field that deflects the incoming protons before these protons could penetrate further. Then the radiograph formed is independent of the $\\mathbf{B}$ fields in the remainder of the plasma, because no protons interact with it. Due to the lack of information in the radiographs, no method can fully reconstruct the $\\mathbf{B}$ fields. As such, there is a need to modify the experimental set-up to capture more information from the $\\mathbf{B}$ field.\n\nOne possible way to capture more information is to include more probe beams in different directions (tomography). As an example, consider two adjacent field structures, the ellipsoidal blob (field strength parameter assigned to $\\alpha_1$, ranging from 9 to 9.25 T in steps of 0.005 T) and the flux rope (field strength parameter assigned to $\\alpha_2$, ranging from 0.3 to 0.4 T in steps of 0.002 T), with the former obscuring the latter in the $z$ direction by 0.5 mm. When the artificial neural network method was used on radiographs due to a beam fired in the $z$ direction, the errors in the retrieved field strength of the flux rope are very high (top panel, Fig. \\ref{fig:tomography}) due to the lack of protons probing the field structure. When another probe beam was fired in the $x$ direction (with $x$ velocity of $10^6$ ms$^{-1}$ and $y,\\:z$ velocities ranging from -2$\\times 10^5$ ms$^{-1}$ to 2$\\times 10^5$ ms$^{-1}$) and the radiographs were used in addition to the ones from the probe beam in the $z$ direction, errors for both field strengths decrease by at least an order of magnitude (bottom panel, Fig. \\ref{fig:tomography}). This demonstrates two facts: (i) the artificial neural network method (and any other method) cannot fully reconstruct magnetic fields if the radiographs carry insufficient information; (ii) including more information decreases errors, even if the field structure is not obscured, as can be seen by the reduction in errors for $\\alpha_1$ in Fig. 
\\ref{fig:tomography}. We hope this will inspire future work on theoretical error bounds in artificial neural networks given the lack of information in the data set.\n\n\n\\begin{figure}\n\\makebox[\\columnwidth][c]{\\includegraphics[width=1.15\\columnwidth]{images\/matlabfig\/97_reg_hist}}\n\\caption{Error histograms of $\\alpha_1$ and $\\alpha_2$ for two scenarios. Top panel: Using radiographs generated by proton beams in the $z$ direction as training data, the mean errors are 1.20$\\times 10^{-2}\\%$ and $3.48\\%$ while the median errors are 3.59$\\times 10^{-3}\\%$ and $2.50\\%$ for $\\alpha_1$ and $\\alpha_2$ respectively, using an artificial neural network with 1 hidden layer consisting of 30 neurons. (more parameters in table \\ref{table:params}). The errors for $\\alpha_2$ (field strength of the flux rope) are high because the field structure associated with $\\alpha_1$ (ellipsoidal blob) is deflecting many protons away from the flux rope, causing a lack of information in the resulting radiographs. Bottom panel: Using radiographs generated by proton beams in the $z$ and $x$ directions as training data, the mean errors are $1.81\\times 10^{-4}\\%$ and $0.11\\%$ while the median errors are 1.41$\\times 10^{-4}\\%$ and $8.68\\times 10^{-2}\\%$ for $\\alpha_1$ and $\\alpha_2$ respectively, using an artificial neural network with 1 hidden layer consisting of 110 neurons. We see that upon including data from the proton beam in the $x$ direction, more information for both field structures is added to the data set and errors reduce by at least an order of magnitude.}\n\\label{fig:tomography}\n\\end{figure}\n\n\\subsection{Comparison with the existing radiograph inversion techniques}\n\nThe artificial neural network method addresses the two gaps in existing reconstruction techniques, by being able to work in the non-linear regime (such as the branching caustic regime), and being able to produce 3-D reconstruction of the magnetic field. While existing inversion techniques rely on the paraxial limit for simplicity, the artificial neural network technique does not rely on such a limit, and in fact would benefit more if the paraxial limit was not used--The protons should ideally have non-zero velocities in the $x$ and $y$ directions so that it will be deflected by $\\mathbf{B_\\textit{z}}$, allowing the artificial neural network to capture more information and thus reconstruct the magnetic fields more accurately. \n\nAlso, existing techniques assume that the protons move in a straight line within the plasma, but this assumption does not hold when the $\\mathbf{B}$ field is so strong that deflection occurs within the plasma. As a result, the existing techniques will inevitably fail in the limit of extremely large $\\mathbf{B}$ fields. In comparison, the neural network method will work because it does not require this assumption. \n\nOne major benefit of using artificial neural networks is the long-run computational cost savings. Generating each set of 50 by 50 pixel radiographs (one radiograph for each variation of parameters) takes on the order of hours\/days using 16 cores on one node of the Arcus Phase B supercomputer \\cite{ARC}. Training the artificial neural network takes on the order of minutes\/hours when using a single GPU on the Arcus Phase B, for neural networks with up to 10 hidden layers, with each layer consisting of up to 150 neurons. Reconstruction takes on the order of seconds without using any parallel processing\/GPU. 
If this project were to go full-scale, we can see that most of the computational cost is in the generation of training data and the training of the artificial neural network, which is a one-off cost. In comparison, existing methods of reconstruction have a recurring cost. As such, over the long run, if the artificial neural network is used to invert sufficiently many radiographs, the artificial neural network method is computationally more efficient.\n\nHowever, the artificial neural network method has some drawbacks. For example, the overall accuracy of the artificial neural network can only be determined empirically, whereas error-propagation can be performed for existing techniques. While the artificial neural network method will allow for computational cost savings over the long run, the minimal start-up computational cost to get it working for a non-trivial field structure is quite high, because the artificial neural network must be trained with many basis fields before it can be used. This is in contrast to existing techniques, where any field, as long as the assumptions are met, can be imaged with the computational cost of solving a differential equation.\n\n\\section{Conclusions and future work}\n\\label{sec:future}\n\nIn conclusion, we have reviewed existing techniques on analyzing $\\mathbf{B}$ fields from proton radiography, and the basics of artificial neural networks. Using the fact that artificial neural networks are highly flexible function approximators, we proposed for the first time the idea of using artificial neural networks to reconstruct arbitrary $\\mathbf{B}$ fields and retrieve important field parameters.\n\nVia simulations, we showed that an artificial neural network can reconstruct $\\mathbf{B}$ fields that can be expressed as linear combinations of two fields, and retrieve useful quantities of $\\mathbf{B}$ fields such as characteristic lengths. We also explored the effects of noise and size of data set on the accuracy of the artificial neural network, and found that artificial neural networks are robust to noise. Artificial neural networks can accommodate a wide variety of scenarios and assumptions which existing techniques cannot, such as the branching caustics part of the non-linear regime. We also highlighted the need for proton tomography as certain field structures cannot be reconstructed fully due to the lack of information from a single radiograph.\n\nAs the usage of artificial neural networks in diagnosing $\\mathbf{B}$ fields in high energy density plasmas is new, there are many avenues where this work can be developed further. There are at least three ways to improve the accuracy of the artificial neural network: (i) experiment with other types of artificial neural network architecture. For example, convolutional neural networks are a type of feedforward artificial neural network where the connections between neurons are inspired by the animal visual cortex \\cite{convnet} and as such, perform very well in image recognition. Since radiograph inversion involves image recognition, convolutional neural networks could offer better performance than the fully connected (dense) feedforward neural network used in this paper. Recurrent neural networks are a class of artificial neural networks where the neuron connections form directed cycles, and such architecture has advantages in analyzing time series data. 
We could use recurrent neural networks on a time series of proton radiographs to shed light on the dynamics of the $\\mathbf{B}$ field and hence the plasma. (ii) Include energy-resolved radiographs. In our simulations, we only looked at the spatial distribution of the protons, so including extra information on the proton energies could improve accuracy. (iii) Study the effects of the number of pixels on accuracy. It is interesting to note that promising results were obtained despite the low resolution of the radiographs (50 by 50 pixels). Understanding the effects of discretization noise could help us determine the quality of radiographs to be generated in order to train an artificial neural network to a specific accuracy.\n\nFurthermore, the artificial neural network approach can be extended to similar systems, such as diagnosing electric fields in plasmas or characterizing micromagnetic patterns in magnetic media via electron scattering \\cite{micromag}. Finally, a full scale implementation of an artificial neural network that can reconstruct any $\\mathbf{B}$ field is a possibility we can look forward to.\n\n\\begin{acknowledgements}\n\nThe authors would like to acknowledge the support from the plasma physics HEC Consortium EPSRC grant number EP\/L000237\/1, as well as the Hartree Centre, Daresbury Laboratory, Central Laser Facility, and the Scientific Computing Department at the Rutherford Appleton Laboratory. The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work \\cite{ARC}. N.C. acknowledges financial support from the Singapore government. M.F.K. gratefully thanks the Indonesian Endowment Fund for Education for its support. M.C.L. thanks the Royal Society Newton International Fellowship for support. The authors acknowledge support from OxCHEDS and P.A.N. for his William Penney Fellowship with AWE plc. \n\n\\end{acknowledgements}\n\n\n\n\\subsection{Reconstruction of an arbitrary $\\mathbf{B}$ field}\n\\label{subsec:recon}\n\n\\begin{figure*}\n\\includegraphics[width=1\\textwidth]{images\/process.pdf}\n\\caption{Schematic of the prediction and reconstruction process.}\n\\label{fig:predictreconstructprocess}\n\\end{figure*}\n\nIn this section, we outline the steps to using an artificial neural network to reconstruct any arbitrary $\\mathbf{B}$ field. First, we expand the magnetic field \\textbf{B(r)} as a linear combination\n\n\\begin{eqnarray}\n\\mathbf{B(r)} =&& \\alpha_1 \\mathbf{B_1}(\\mathbf{\\frac{r-r_1}{\\sigma_1}}) + \\alpha_2 \\mathbf{B_2}(\\mathbf{\\frac{r-r_2}{\\sigma_2}}) + \\ldots \\\\\n=&&\\sum_{n=1}^{N} \\alpha_n \\mathbf{B_\\textit{n}}(\\mathbf{\\frac{r-r_\\textit{n}}{\\sigma_\\textit{n}}}),\n\\label{eqn:expansion}\n\\end{eqnarray}\n\n\\noindent where $N$ is the number of terms used in the expansion, $\\alpha_n$ is a scalar coefficient for the $n^\\textrm{th}$ term, $\\mathbf{B_\\textit{n}}$ is a `basis' magnetic field, $\\mathbf{r_\\textit{n}}$ is the position offset of the field and $\\sigma_\\textit{n}$ is a scaling factor. While not necessary, these basis fields should be chosen such that most magnetic fields in plasmas can be represented with as few basis fields as possible, so that we require less training data to train the artificial neural network. One possible way to achieve this is to use principal components analysis (PCA) \\cite{pca} on a large dataset of known $\\mathbf{B}$ fields in plasmas. 
Principal components analysis looks at a large set of multidimensional vectors and first finds the direction of highest variance in the data (the first principal component), and then finds a set of vectors orthogonal to the first principal component that explains the remainder of the variance. While $\\mathbf{B(r)}$ is a vector field, it can be converted into a vector $\\mathbf{c}$ for principal components analysis by concatenating the magnetic fields at various different points, e.g. for a grid that runs from 0-9 in the $x$, $y$ and $z$ directions, we can write\n\n\\begin{eqnarray}\n\\mathbf{c} = \n\\begin{bmatrix}\n\\mathbf{B(r_{000})} \\\\ \\mathbf{B(r_{001})} \\\\ \\vdots \\\\ \\mathbf{B(r_{999})}\n\\end{bmatrix},\n\\label{eqn:vectorc}\n\\end{eqnarray}\n\n\\noindent where $\\mathbf{r_\\textit{xyz}}$ is the vector ($x, y, z$). If such convenient basis fields cannot be determined, we can use the fact that all magnetic fields can be written in the form of equation (\\ref{eqn:vectorc}), and let each element of the vector correspond to a basis field (i.e. $\\mathbf{B_1}$ corresponds to $\\mathbf{B_x(r_{000})}$, $\\mathbf{B_2}$ corresponds to $\\mathbf{B_y(r_{000})}$ and so on).\n\nNext, generate training data by creating variations of the parameters $\\alpha_n, \\sigma_n \\text{ and } \\mathbf{r_\\textit{n}}$ in the form\n\n\\begin{eqnarray}\n\\mathbf{g} = \n\\begin{bmatrix}\n\\alpha_1 \\\\ \\mathbf{r_1} \\\\ \\sigma_1 \\\\ \\vdots \\\\ \\alpha_N \\\\ \\mathbf{r_\\textit{N}} \\\\ \\sigma_N\n\\end{bmatrix},\n\\label{eqn:parameters}\n\\end{eqnarray}\n\n\\noindent and then conducting numerical simulations (e.g. using software packages mentioned in \\cite{levy,compare}) to obtain the radiograph for each variation of the parameters. The radiograph is expressed as a vector where each element represents the intensity of the protons at a specific pixel.\n\nThen, using the radiograph pixel values as inputs and $\\mathbf{g}$ as the targets, apply backpropagation and optimization algorithms to train the feedforward neural network. After training, the artificial neural network is ready to reconstruct $\\mathbf{B}$ fields: Input the radiograph into the artificial neural network to obtain the predicted parameters (in the form of equation (\\ref{eqn:parameters})), and insert these values into equation (\\ref{eqn:expansion}). See Fig. \\ref{fig:predictreconstructprocess} for a schematic of the training and reconstruction process.\n\n\\subsection{Assumptions, practical considerations and implementation}\n\nIn our simulations, we have made some assumptions for simplicity, but these assumptions are not crucial in the success of our approach. We assumed that the probe beam only interacts with the plasma via the $\\mathbf{B}$ field (no electric fields or collisions with matter). We also assumed that the proton source is a point source, and the probe beam is a planar sheet (velocities in the $z$ direction, before deflection from the plasma, are uniform). As feedforward artificial neural networks are universal function approximators, in principle the technique outlined in the previous section will still work even if the assumptions are violated (e.g. protons interact with the electric field of the plasma, protons collide with the plasma, proton source is of finite size, probe beam follows a specific angular distribution), as long as we include these effects during the production of training data (radiographs). 
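\n\nTo make the training-data generation step described above concrete, the following minimal sketch (an illustration only, not the exact pipeline used in this work) draws random parameter vectors $\\mathbf{g}$ and pairs each one with a simulated radiograph; the function \\texttt{simulate\\_radiograph} is a hypothetical toy stand-in for the particle-tracing calculation discussed below, and the parameter ranges are arbitrary.\n\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN_PIX = 50  # radiographs are N_PIX x N_PIX images, stored as flat vectors\n\ndef simulate_radiograph(g):\n    # Toy stand-in for the particle-tracing code: it only produces a smooth\n    # blob whose position, width and contrast depend on the parameters in g,\n    # so that the sketch runs end to end.  It is NOT a physical model.\n    alpha, x0, y0, sigma = g[0], g[1], g[2], g[4]\n    x, y = np.meshgrid(np.linspace(-1, 1, N_PIX), np.linspace(-1, 1, N_PIX))\n    image = 1.0 + alpha * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))\n    return image.ravel()\n\ndef sample_parameters(n_terms=1):\n    # One block (alpha_n, r_n, sigma_n) per basis field; ranges are illustrative.\n    g = []\n    for _ in range(n_terms):\n        g.append(rng.uniform(-0.5, 0.5))          # alpha_n\n        g.extend(rng.uniform(-0.5, 0.5, size=3))  # r_n\n        g.append(rng.uniform(0.05, 0.3))          # sigma_n\n    return np.array(g)\n\ndef build_training_set(n_samples, n_terms=1):\n    inputs, targets = [], []\n    for _ in range(n_samples):\n        g = sample_parameters(n_terms)\n        inputs.append(simulate_radiograph(g))  # radiograph pixel values (inputs)\n        targets.append(g)                      # parameter vector g (targets)\n    return np.array(inputs), np.array(targets)\n\nX, y = build_training_set(200)\n\\end{verbatim}\n\nIn the real application, \\texttt{simulate\\_radiograph} would be replaced by the proton-tracing simulation described next, and the resulting pairs would form the inputs and targets used to train the artificial neural network.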
\n\nTo obtain the radiographs we start off with the Lorentz force equation for $\\mathbf{B}$ fields only, given by\n\n\\begin{eqnarray}\n\\frac{\\mathrm{d}\\mathbf{v}}{\\mathrm{d}t}=\\frac{e}{m_p}\\mathbf{v}\\times\\mathbf{B}.\n\\end{eqnarray}\n\n\\noindent This equation, along with $\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t}=\\mathbf{v}$ was numerically integrated given the initial conditions of $\\mathbf{r}$ and $\\mathbf{v}$ to get the final positions of the protons on the screen. These final coordinates are then binned in order to produce radiographs.\n\n\nWe used a fully connected (dense) feedforward artificial neural network for simplicity, and we will discuss the possibilities of using other types of artificial neural networks in section \\ref{sec:future}. Scaled conjugate gradient was the optimization algorithm of choice during training because: (i) it is not RAM intensive (this is an important factor because in order to get more accurate results, training with more complicated $\\mathbf{B}$ fields and higher resolution radiographs are required, resulting in an increase in the number of weights in the artificial neural network. If the optimization algorithm does not scale well, an impractical amount of RAM will be required); (ii) it can take advantage of parallel CPU and GPU computing, allowing it to run effectively on supercomputing clusters.\n\nBefore the training process, the entire data set is scaled so that each feature of the input and target (e.g. $\\sigma_1$, $\\alpha_1$, the proton intensity in pixel 1 etc.) falls in the range [-1,1] to prevent features of small magnitude from converging slowly during optimization \\cite{efficientbackprop}, and the scaling is undone afterwards. The objective function was chosen to be the mean squared error (MSE) between the artificial neural network output and the target. \n\nDue to the flexibility of artificial neural networks, overfitting (accidental modeling of noise) is an issue so early stopping and neural network regularization are implemented. In early stopping, training is halted when the errors starts increasing on a data set that was not used in the training process \\cite{earlystopping}. This is done by first splitting the entire simulated data set into training, validation and testing sets at random in the ratio 70\/15\/15. The artificial neural network is applied to the training set, and during each iteration the mean squared error for the validation set is calculated. Initially, after each iteration, the artificial neural network becomes better at modeling the physical phenomenon and the validation mean squared error will decrease. There will come to a point when the artificial neural network starts to model the noise in the training set, and the validation mean squared error will stop decreasing and eventually start increasing (See Fig. \\ref{fig:tvt} for an illustration). Training is halted after a specified number of iterations fail to decrease the validation mean squared error. In neural network regularization \\cite{weightdecay}, the objective function is modified by adding a term proportional to the mean squared weight, and the constant of proportionality is known as the regularization parameter (chosen via cross-validation). This penalizes the neural network for having large weights or too many neurons, thus encouraging simpler models. 
\n\nAt this point, the artificial neural network is used to predict quantities on the testing set, and the testing errors are indicative of the artificial neural network's overall performance. For each simulated data set, the training process is run with the number of neurons in the hidden layer ranging from 10 to 100 in steps of 10, and the configuration with the lowest value of the objective function (mean squared error plus regularization term) is initially picked. If the value of the objective function is still decreasing when 100 neurons are used, then the search is extended in steps of 10 till 150 neurons. Once a single layer configuration is picked, another search is performed with multiple hidden layers (in increments of one layer), up to 5 hidden layers. Similarly, if the value of the objective function is still decreasing when 5 hidden layers are used, the search is extended to 10 hidden layers. After this search, the configuration with the lowest value of the objective function is picked and reported in the results section.\n\n\\begin{figure}\n\\includegraphics[width=0.5\\textwidth]{images\/tvt.pdf}\n\\caption{Typical curves of training and validation errors with respect to training iteration. Beyond a certain point, the artificial neural network starts to model noise, causing the validation error to increase. Training should be halted when this happens.}\n\\label{fig:tvt}\n\\end{figure}\n\n\\subsection{Retrieval of specific parameters}\n\\label{subsec:retrieval}\n\nThe idea presented in section \\ref{subsec:recon} requires large amounts of data and processing power, and might be more than necessary if the user only intends to retrieve certain parameters of the $\\mathbf{B}$ field, such as the peak field strength or the full width half max (FWHM) of a Gaussian magnetic flux rope, instead of reconstructing the entire field. This assumes that the user already knows the remainder of the parameters beforehand. For example, if the user only wants to retrieve the peak $\\mathbf{B}$ field strength (proportional to the $\\alpha_i$ coefficient), then the model of the $\\mathbf{B}$ field, the offset $\\mathbf{r_i}$ and the scaling factor $\\sigma_i$ must be known. In this case, the user can repeat the procedure in section \\ref{subsec:recon}, except with the following changes: (i) data is generated by varying only the parameter(s) of interest; (ii) the target vector consists of only the parameter(s) of interest. In fact, this can be applied to parameters other than $\\alpha_i, \\mathbf{r_i}$ and $\\sigma_i$. For example, in an ellipsoidal magnetic blob (which is a spheroid), there are two characteristic lengths, one characterizing the length in the $xy$ plane $a$ and the other characterizing the length along the $z$-axis $b$. If the user knows all other parameters and wants to retrieve $a$ and $b$, then an artificial neural network trained on simulated radiographs which variation is only due to varying values of $a$ and $b$ will do the job.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn human history, violence is continuously with us, despite our optimistic belief \nthat it is less and less widespread. In our minds, violence of armed against disarmed \npeople is particularly repulsive. However, still it happens in numerous places on earth. \nPerhaps a new element is that all of us are more conscious of the situation, than ever \nbefore. The question is if victims of the violence - treated as a community - can accept \nit. 
If yes, the situation will remain stable; if not, they will resist, and the violence is expected to spread. This duality - to resist or not - is especially inevitable in a ghetto, where an escape is not possible or at least very difficult. Here we are going to attack this problem by sociophysical methods, i.e. by constructing an appropriate model.\\\\\n\nAs a model base, we propose two elements. The first is the Maslow theory \\cite{maslow}. Its basic assumption is that people are going to satisfy their needs one after another, in order from most basic to most sophisticated. Our central question - the decision of the victims about resistance - is to be considered in the context of their subsequent decisions on how to satisfy their needs. In other words, each vital decision in the situation of violence is to be made in relation to this violence. The second element of the model is the mean field approach, as applied to a strike by Galam, Gefen and Shapir \\cite{galam}. This approach profits from the analogy to the ferro-paramagnetic transition in the Ising model. In this description, the ferromagnetic phase with a given orientation of spins is equivalent to a given decision (say, to take part in a strike or not), made by the majority as a consequence of social interactions. \\\\\n\nThis author sees the goal of this paper as twofold. The first is a reconstruction of the chains of subsequent decisions made by people. We apply the decision tree - a concept from game theory \\cite{straffin}. This concept is modified here in the sense that there is only one player. Still, each decision selects a branch, and getting there defines a new situation. In some cases, estimation of expected payoffs allows us to apply the pruning technique: as the player decides so as to get the larger payoff, some branches with very low payoff can be {\\it a priori} eliminated. The second goal is to use the obtained scheme to discuss the problem of resistance in a ghetto. \\\\\n\nHistorically, the \"ghetto\" is the area of an iron foundry in Venice, where an enclosed neighborhood was created for Jews in 1516 to protect them against persecution from the Roman Catholic Church. More recently, the term 'ghetto' was explicitly assigned to bounded areas in Warsaw, Lodz, Riga, Budapest, Cluj, Terezin and many others under Nazi rule, where Jewish people were gathered prior to the Holocaust. In the social sciences, the meaning of the term also includes the Jewish diaspora in early modern Europe, quarters of black Americans in large cities and some ethnic communities in Africa and East Asia \\cite{smelser}. Although this meaning remains under dispute \\cite{smelser}, attempts to describe ghettos in the current world should at least be noted \\cite{hannerz, ron}, without pretending to completeness. Social processes leading to the formation of ghettos were simulated by \\cite{lling} and more recently by \\cite{meyort,sch}. The present work concerns the dynamics of decisions in a ghetto community. For our purposes, two traits are to be distinguished: {\\it i)} an attempt by an inhabitant of a ghetto to leave the area makes his situation worse, {\\it ii)} human laws, as understood by the inhabitants, are broken by an external power. This wide definition applies to refugee camps as well as settlements in countries controlled by the army, as in Palestine, Tibet, Chechnya and Darfur. 
Although in most cases ghettos are inhabited by ethnic minorities, here we do not need to emphasize this ethnic trait.\\\\\n\nTo refer to game theory, below we adopt the abbreviations $C$ and $D$ (cooperate or defect), although perhaps withdraw or resist would be more appropriate. In the two subsequent sections we introduce the model and apply it to the case of a ghetto. The last section is devoted to discussion.\n\n\\section{The model}\n\\subsection{Hierarchy of human needs}\nAccording to the theory of Maslow, human needs are arranged hierarchically, from physiological needs, safety and belongingness to esteem and self-actualization \\cite{maslow}. People become interested in their safety to the extent to which their physiological needs are satisfied; being safe, they start to struggle for belongingness, and so on. This author and maybe this reader happened to be born in a milieu where the first three needs were satisfied from the very beginning until adulthood. However, in numerous cases the situation is less fortunate. More often than not, a human unit has to determine a strategy to realize his\/her most basic needs in one way or another. In such a strategy, one of the most important decisions is which limitations of human needs are to be accepted \\cite{kepinski}. This problem appears to be even more crucial in a ghetto, where the above-mentioned limitations are particularly painful.\\\\\n\nTrying to satisfy its needs in any milieu, a human unit has to consider at each step the context of the situation. In particular, in a ghetto the problem with any action is whether it is legal, or - in other words - whether this action is allowed by the ruling power. This remains true when we ask about actions taken up in order to satisfy human needs at all levels. At the physiological level, to cooperate is equivalent to joining common life within the frame of society, using money, sleeping at home and eating food bought in a market. An alternative is to look for a desolate place in a forest or a desert, or to form a small community outside of, or at least at the border of, say, normal civilization. To continue, at the safety level the problem is whether or not to accept the law. Whereas in our white-collar world this alternative is concentrated around the payment of taxes, in a ghetto the defection can include uprisings, riots, guerrilla warfare or terror. At the level of belongingness, we have to select our group of reference. Again, in a ghetto the world is sharply divided into two: \"we\" and \"they\". The power is with \"them\", and the question is with whom to identify. Further, at the level of esteem the problem is in which group this esteem is sought. Here we guess that this choice is strongly correlated with the previous one, and our analysis will be simpler because of this correlation. Finally, having reached esteem, our human unit tends toward self-actualization. In principle, this again can be expressed as a social action directed against the power or supporting it. However, the consequences of these decisions are usually less crucial; human behavior at this level is much more individualized, and it is often affective and expressive rather than aim-oriented \\cite{weber}. That is why the level of self-actualization is difficult for sociophysical modelling.\n\n\\subsection{Tree of decisions}\nThose who defect at the very root, i.e. at the physiological level, place themselves outside the frame of society. It is very hard to give up public access to water, shops and houses. 
The alternative is to live in the wild, in the forest. Yet this choice does happen in several places on Earth, where climatic circumstances allow it at least temporarily and some strong obstacles prevent living in accordance with the law. This kind of defection happens on a large social scale only in societies in the deepest crisis, e.g. during a civil war. Although these situations are of central importance, they will not be discussed here. In a ghetto, there is no possibility of fighting in an open way; the main splitting of human behavior happens at higher levels. Then, for the sake of our subject, all decisions discussed here start from $C$ (cooperate).\\\\\n\nIn the same way we are going to comment on further decisions, which form chains such as the one presented in Table 1. This particular chain will be denoted as $CCDDD$ from now on. In this notation, $CD$ means that we discuss the decision $D$ (at the safety level) of those who decided $C$ at the physiological level. \\\\\n\\bigskip\n\n\\begin{tabular}{|l|l|}\n\\hline\nphysiology&C\\\\\nsafety&C\\\\\nbelongingness&D\\\\\nesteem&D\\\\\nself-actualization&D\\\\\n\\hline\n\\end{tabular}\n\n\\bigskip\n\nTable 1. A chain of subsequent decisions of a human unit (man or woman): defect ($D$) or cooperate ($C$) with the power. \n\n\\bigskip\nIn Fig. 1 a part of the resulting tree of decisions is shown. There, the whole branch starting from $D$ at the physiological level is omitted, except its beginning. Also, the decisions $D$ or $C$ at the level of self-actualization are not shown, for clarity of the figure. \\\\\n\nOmitting the physiological needs, we are going to concentrate on the level of safety. The population considered here selected $C$ as their first choice, i.e. they decided to live within the community and to profit from its facilities. Now, as at each level, their decision is $C$ or $D$, i.e. they wonder whether their path is to be $CC$ or $CD$. The probabilities of these paths depend on the expected payoffs. Then the choice these people face is to decide whether they will be safer when cooperating with the power or when defecting.\\\\\n\n\n\\begin{figure} \n\\vspace{0.3cm} \n{\\par\\centering \\resizebox*{10cm}{8cm}{\\rotatebox{0}{\\includegraphics{drzewo.eps}}} \\par} \n\\vspace{0.3cm} \n\\caption{Right half of the tree of decisions. The last level (self-actualization) is not shown. Being at the root and selecting $C$, one is placed at node $C$; selecting $D$ at the next level, one is placed at node $CD$, and so on. Then, nodes at the physiology level are indexed with one label, $D$ and $C$ from left to right; nodes at the safety level by two labels ($DD$ or $DC$ not shown, $CD$ or $CC$ shown from left to right), and so on. At the esteem level, the first node from the left is indexed as $CDDD$.} \n\\end{figure}\n\nAs indicated by Maslow, people are able to struggle for their safety to the extent to which their physiological needs are satisfied. Further, their search for belongingness is limited by their lack of safety, and so on. Maslow gives an example with numbers: an average citizen could have his successive needs satisfied at 85, 70, 50, 40 and 10 percent, in the order given in Table 1. If, for example, somebody's safety needs are not satisfied at all, he will not bother about belongingness, not to speak of esteem or self-actualization. It is not clear how the effort for a need at the next level depends on the satisfaction of the need at the previous level; the Maslow theory is formulated in words, not in numbers. 
The area is open to speculation, with the only condition that any proposed mathematical formulation reflects the above-mentioned rules. On the other hand, it is obvious that the validity of any numbers we can get is limited to statistical considerations. It seems that for this kind of problem, the fundamental equations \\cite{vank} can provide a proper tool. \\\\\n\n\\subsection{Mathematical formulation}\n\nFrom these equations, we expect to obtain the probabilities that people in a given situation (read: at a given node of the tree) select this or that decision. External conditions met by a given community can be introduced as the set of payoffs $\\alpha_X$, describing the maximal possible percentage of satisfaction of needs at node $X$ of the decision tree. As explained in the caption to Fig. 1, the node index $X$ is equivalent to the chain of decisions leading to that node. The root is treated as the chain of zeroth length. Simultaneously, $\\alpha_X$ is the maximal fraction of people who are able to struggle for the satisfaction of higher needs at nodes $XC$ and $XD$. Both of these maxima refer to a virtual case in which the payoff is limited neither by the parameters of previous nodes nor by human decisions at these nodes.\\\\\n\nKeeping the above example of the chain $CCDDD$ as an individual path, the value of satisfaction $s_X$ of a human unit - a member of the community - at node $C$ (at the physiological level) is then $s_C=\\alpha_C$. At higher levels $s_X$ fulfils the iterative equation\n\n\\begin{equation}\ns_{Y}=s_X \\alpha_{Y},\n\\end{equation}\nfor $Y=XC,XD$. Provided that the set $s_X$ of satisfactions of our human unit accords with the above exemplary values given by Maslow, we obtain at the five successive nodes of the path $\\alpha_C=0.85$, $\\alpha_{CC}=0.7\/0.85\\approx 0.82$, $\\alpha_{CCD}=0.5\/0.7\\approx 0.71$, $\\alpha_{CCDD}=0.4\/0.5=0.8$ and $\\alpha_{CCDDD}=0.1\/0.4=0.25$. These values of $\\alpha_X$ allow us to reproduce {\\it via} Eq. 1 the above-given exemplary chain of individual satisfactions: $s_C=0.85$, $s_{CC}=0.7$, $s_{CCD}=0.5$, $s_{CCDD}=0.4$ and $s_{CCDDD}=0.1$. \\\\ \n\n\nThe above vital path consists of successive decisions, in our example $CCDDD$, as in Table 1. In reality, these decisions are much more detailed than cooperation with or defection from the ruling power. Actually, the decision can be as specific as to marry one particular member of a group of revolutionists - or not to marry. However, having defined our issue - to withdraw or to resist - we are interested not in the decision to select a particular person but - averaging out over different possibilities - in the decision to become involved in a revolutionary group, which is equivalent to satisfying some needs by the choice $D$. \\\\\n\nUp to now, we have dealt with an individual path. Now we can introduce the conditional probabilities $p(C\\mid X)=p(XC)\/p(X)$ and $p(D\\mid X)=p(XD)\/p(X)$ that, leaving node $X$, a human unit goes to $C$ or $D$. In this case the normalization condition should be $p(C\\mid X)+p(D\\mid X)+1-\\alpha_X=1$. This should not be confused with a statement that a human unit stays at node $X$ with probability $1-\\alpha_X$. Such a formulation would disagree with the original interpretation of Maslow. Instead, the factor $1-\\alpha_X$ measures the amount of effort spent inefficiently at node $X$, in the same way as was assumed for an individual path. 
In the latter case, either $p(C\\mid X)=0$ or $p(D\\mid X)=0$, \nand the path\nwas fully determined by subsequent individual decisions. Then, individual satisfaction at \nsubsequent levels depends only on the payoffs $\\alpha_X$, as explained in the second paragraph \nof this section. Instead of using the conditional probabilities, it is simpler to use individual \neffort $w_X$ and averaged effort $W_X$. In the above example of individual path, $CCDDD$, the \nset of individual efforts is: $w_C=1$ at the root, $w_{CC}=1$ at the node $C$, $w_{CCD}=1$ at\nthe node $CC$, $w_{CCDD}=1$ at the node $CCD$ and $w_{CCDDD}=1$ at the node $CCDD$. Other efforts\nare zero, either along the decision (as $w_{CD}$) or because a given node was not reached\nby a given human unit (as $w_{DD})$. \\\\ \n \n\nAveraging over individual paths, we get a set of average amounts of effort $W_X$ \nat all nodes $X$. Then for the physiological level we have $W_C+W_D=1$. The average satisfactions\nat the physiological level, $X=C,D$, are $S_C=W_C\\alpha_C$, and $S_D=W_D\\alpha_D$. Considering \nthe safety level we take into account that efforts to reach the nodes $CC$ and $CD$ \nare reduced because $\\alpha_C\\le 1$. Then, $W_{CC}+W_{CD}=W_C\\alpha_C$, and similarly \n$W_{DC}+W_{DD}=W_D\\alpha_D$. As a rule,\n \n\\begin{equation}\nW_{XC}+W_{XD}=W_X\\alpha_X,\n\\end{equation}\nwhere $XC$ and $XD$ are nodes available from node $X$ by decision $C$ or $D$.\nThe whole set $W_X$ is equivalent to a map of social efforts, put into various ways of attempts\nof satisfying the needs. At each node, the average satisfaction $S_X=W_X\\alpha_X\\le W_X$.\nSatisfaction is less or equal than effort, for individual paths as well as in the average.\\\\\n\n\nIn a deterministic picture, people are expected to select always the nodes with larger payoff. \nHowever, it is clear even for a physicist that in reality people have their individual preferences,\nand a common payoff for everybody can be introduced only for a statistical description. This\nintuition on individual character of payoffs is confirmed by the utility theory \\cite{straffin}.\nWorking in statistical physics, we are tempted to use some noise as a measure of, say, lack\nof information of the community members. Then we expect that the ratio $W_{XC}\/W_{XD}$ in stationary \nstate depends on $\\beta(\\alpha_{XC}-\\alpha_{XD})$, where $\\beta=0$ for absolute lack of information\non the payoffs, and $\\beta$ is large when the information is well accessible. From this point, it \nis only one step to mimic the statistical mechanics, writing the stationary probability of \nselecting $C$ from node $X$\n\n\\begin{equation}\np(XC)_{eq}=\\frac{W_{XC}}{\\alpha_X W_X}\\propto \\exp{[\\beta(\\alpha_{XC}-\\alpha_{XD})]},\n\\end{equation}\nand to postulate a dynamic description in the form of fundamental (or Master) equation\n\n\\begin{equation}\n\\frac{dp(XC,t)}{dt}=-r(XC)p(XC,t)+r(XD)[1-p(XC,t)],\n\\end{equation}\nwhere $r(XC)\\propto p(XC)_{eq}$. Here, the constant of proportionality determines the \ntimescale of the dynamics. The dynamics of the probabilities $p(X)$ is equivalent to the dynamics \nof efforts $W_X$.\\\\\n\nIn, say, a standard society the information on the payoffs is well accessible and the \nsuccessive selections are almost deterministic. 
Then, people who decide to live in a wild forest are rare exceptions in society: almost everybody selects $C$ at the physiological level. It is less clear if the payoffs for those who break the law are indeed smaller than for the others. In any case, a great effort is made to assure the population that sooner or later this payoff will be strongly reduced. Because of this effort, the statistical data on the choice of $CD$ are usually less reliable. Looking for belongingness and needs of higher order is not directly connected with our issue; anyway, in democratic systems we are partially involved in the ruling power, which cannot then be treated as external and is maybe not entirely against our will. Summarizing this section, this author believes that the concept of the decision tree, as an adaptation of the Maslow hierarchy, can be useful for many issues.\n\n\\section{The case of ghetto}\n\nAs should be clear from the definition of a ghetto accepted above, the key point of the decision tree is the node $C$, where the crucial decision is to be taken: $CC$ or $CD$. The reason is as follows. All that we know about ghettos confirms that there it is almost impossible to satisfy the safety need. The payoff of the useful solution $CC$ is drastically reduced with respect to other communities. Examples of this painful truth fill newspapers and TV or, even worse, remain unknown if information is prohibited. To bring these examples here, although justified from the point of view of the subject, would drive us too far from sociophysics. Instead, let us consider the consequences for the payoff.\\\\\n\nImagine that safety is strongly reduced in an initially normal society. The reason can be war or revolution, or another abrupt fall of the political system. It is clear that the accessibility of information deteriorates, and in this situation many people do not know what to do. What is the payoff if I withdraw? if I resist? who will win? what will be the consequences for me? my family? my assets? and so on. As a rule, a remarkable percentage of people resist, just because of - a physicist would say - the large entropy in the system. This thermodynamic formulation should not be offensive to anyone. Obviously, it does not capture individual decisions, which are sometimes dramatic and full of unanswered questions. It is a common experience that we decide without knowing the final results; in the most difficult situations, the amount of information is too low to allow for logical reasoning. This experience is encoded in sociology as the law of unforeseen consequences \\cite{oxf}. However, here we consider the case when finally some power, external to the ghetto inhabitants, prevails and the information on payoffs becomes clearer. But the above-mentioned group keeps resisting; despite the variety of their motivations, their effort can be translated into numbers and handled by statistical tools. They fight against the external power and its supporters - a mechanism known all too well, indeed. Relaxing to the stationary state, the system finds that the payoff of the choice $CC$ is reduced by the expected repression from the resisting group. The ruling power tries to balance this repression by defeating the resistance fighters. Soon, the level of aggression on both sides becomes equivalent; both find convenient justifications. \\\\\n\nThis author believes that what can be said mathematically can also be said - although at greater length - in words. Here we try the opposite way. 
Violence breeds violence - this sentence is short. In sociophysical language, the same content can be expressed as the stability of the solution of Eq. 4, characterized by the condition $\\alpha_{CC}<\\alpha_{CD}$. This stability relies on the following premises: {\\it i)} the payoff $\\alpha_{CC}$ is drastically lowered by the repressive actions of the resistant group, {\\it ii)} struggling for their safety, people are not motivated to select $CC$ instead of $CD$ if $\\alpha_{CC}<\\alpha_{CD}$, {\\it iii)} selection of $CD$ on a social scale reinforces the resistant group. As we see, this closed circle does not rely on a particular choice of the functional dependence of the effort $w$ on the payoff $\\alpha$. In fact, the resistant group can be compared to a nucleation center, which initiates the new phase. However, the nucleation process cannot be described within the simplest version of the mean field theory used here.\\\\\n\nAs a result, the whole tree becomes degenerate. For those who decided to resist, it is not possible to look for belongingness or esteem outside the resistant group. On the other hand, those who select $CC$ remain in fear of, on the one side, being accused of treason and, on the other side, the blind actions of the ruling power. Not being able to obtain safety, they fall back on solidarity with the resistant group when they look for belongingness and esteem. As a rule, when safety is at risk, no effort can be put into the struggle for higher needs. In effect, the upper branches disappear. \\\\\n\nTrying to illustrate the above processes with some simulations, we need the values of several parameters, such as the payoffs at the nodes, etc. Measurement of these parameters, or at least a thorough discussion of their values, far exceeds the scope of this work - in the social sciences, this is almost a euphemism. Instead, we can present the qualitative consequences of an abrupt change of ruling power. This event is a special and particularly simple example of what was discussed before. We limit the calculations to the safety level. In the formalism developed above, the dynamics of this level does not depend on the parameters of higher levels. In the calculations, the difference of the payoffs consists of two factors: an external one, $\\Delta$ (provided by the ruling power, old or new), and an internal one, due to the interaction between the community members. The latter is proportional to the actual difference of efforts, $W_{CD}-W_{CC}$. This proportionality encodes the above-discussed positive feedback between the value of the difference of efforts and its time derivative. In fact, this positive feedback is at the core of the mean field theory \\cite{binder}.\\\\\n\nTo simulate the change of the ruling power (for example, from a well-established one to an external one), two elements cannot be omitted: a strong decrease of $\\alpha_C$, which is a direct consequence of unavoidable war, and a change of sign of $\\Delta$. Simultaneously, the up-to-now cooperators become defectors and vice versa. We keep the node $D$ unoccupied ($W_C=1$ and $W_D=0$); this reflects the assumed fact that nobody can leave the ghetto. For simplicity we keep the parameter $\\beta$ constant in time, although this is almost surely not realistic; still, we are left with three parameters. The value of $\\alpha_C$ before the political overthrow is assumed to be unity. Its value after the overthrow, kept constant in time, is one of the parameters. The remaining parameters are $\\beta$ and $\\Delta$. 
The initial ratio of the variables $W_{CC}$ and $W_{CD}$ is taken as their ratio at equilibrium before the overthrow. As a rule, $W_{CD}>W_{CC}$ at the initial time, because most people supported the former regime before the overthrow; what was cooperation is now treated as defection, and vice versa.\n\n\n\\begin{figure} \n\\vspace{0.3cm} \n{\\par\\centering \\resizebox*{10cm}{8cm}{\\rotatebox{270}{\\includegraphics{resistance.eps}}} \\par} \n\\vspace{0.3cm} \n\\caption{The effort $W_{CD}$ put into resistance at the safety level, plotted against the parameter $\\alpha_C$.} \n\\end{figure} \n\nIn Fig. 2 we show the effort $W_{CD}$ put into resistance, plotted against the satisfaction $\\alpha_C$ of the physiological needs at node $C$. These data are for the stationary state. As remarked above, we assume that all the social effort at the root is put into satisfying physiological needs within the community. However, these needs can be satisfied only partially. The parameter $\\alpha_C$ measures the level of this satisfaction. Further, it also measures the effort which can be put into the struggle for safety, in one ($CC$) way or the other ($CD$). The parameters for the plot are $\\beta=3.0$ and $\\Delta=0.2$. \\\\\n\nAs we see in Fig. 2, there is a jump in $W_{CD}$ near $\\alpha_C=0.7$. Below this value, the effort put into resistance is negligible. Above this value, it is close to its maximal value $\\alpha_C$. This means that the initial state of resistance is stable. The results are typical, as long as $\\beta$ is not too small and $\\Delta$ is not too large. Within the magnetic analogy, the results mean that the metastable phase is possible as long as the field ($\\Delta$) and the temperature ($1\/\\beta$) are not too large. Within the sociophysical picture, it means that it is advantageous for the ruling power to keep the whole ghetto community at the limit of starvation, i.e. with a small value of the satisfaction $\\alpha_C$ of physiological needs. Then, instead of fighting, they are kept in a queue for water and flour, provided by the army. Then, the best thing is to make a movie and show it on the TV news; those who get water are happy. Please do not blame this author for the invention - it has been known for a long time.\n\n\n\\section{Discussion}\n\nOur conclusions are to be divided into three parts. The first is sociophysical. Our mathematical description is equivalent to the mean-field theory of the ferromagnetic phase, where two stable solutions coexist \\cite{binder}. This model is well established in applications of physics to social sciences \\cite{galam,weidlich}. It is known that the stability of the ferromagnetic phase is overestimated by mean field theory; in fact, it depends on the structure. Here we are faced with the question of what is a realistic structure of a community. Much effort has been made by sociologists to advance our knowledge on the subject; however, even the characteristic size of social networks remains under dispute \\cite{kill,mars}. On the other hand, the stability of the ordered Ising phase at low temperatures has been found in computer simulations for most of the investigated structures \\cite{holyst,tadic,makowiec,malarz}, with directed Albert-Barab\\'asi networks \\cite{sum,lima} and one-dimensional chains and related models \\cite{novot} as exceptions. Actually, the time dependence of the persisting opinion of a resistant group was discussed recently by \\cite{aydiner} on the basis of the one-dimensional Ising model. 
\n(We note that the condition of low temperature\nis equivalent to large value of the uncertainty factor $\\beta$ in our considerations.) For social \napplications, the condition of an eternal stability of the ferromagnetic phase can be \nsubstituted by a weaker condition of appearance of long-living ordered domains. We can conclude \nthat sociophysical arguments work for this hypothesis, and not against it.\\\\\n\nThe second conclusion is aimed to be sociological. The results of our analysis indicate, that when\nan external power struggles for control of an isolated community, \nthe problem of safety remains crucial. Obvious aim of the power must be: to guarantee the safety\nfor still neutral part of the community. If this is not possible, war becomes eternal, without\nwinners. Not so rarely, the responsability for safety of isolated communities remains in hands \nof army, without\ncontrol of civil agencies or free press. This is precisely what eliminates the possibility\nof a peaceful solution; army people are trained to fight, not to bring safety.\nIn sociology, the role of safety is known for a long time: first edition of the Maslow's\nbook \\cite{maslow} appeared in 1954. The advantage of this work is to express it in more\nformalized language. One could ask if such a formulation is worthwhile. On the other hand,\nstill some powerful people seem not to recognize the validity of the conclusions of Maslow theory.\nMaybe they will be convinced by mathematics.\\\\\n\nIn my last word I declare to share the opinion that ghettos are shameful for human \ncivilisation. Nevertheless this respectable and rather common opinion, such places exist, \nas we are mercilessly informed by free media. Some people even claim, that some \nof these places are established to protect our laws to free life, where at the last level \nof the tree of decisions we can write our sociophysical papers. If this is done without\ncare about safety of the ghetto inhabitants, the way is destructive and mindless. \\\\\n\n{\\bf Acknowledgements.} The author thanks Ma{\\l}gorzata Krawczyk and Francis Tuerlinckx \nfor their kind help. Thanks are due also to Dietrich Stauffer for helpful criticism and \nreference data.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzamyh b/data_all_eng_slimpj/shuffled/split2/finalzzamyh new file mode 100644 index 0000000000000000000000000000000000000000..559edfc27e5d21a1d618bca903ed67bab184bc26 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzamyh @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n The distribution of cometary orbital elements in the Oort cloud depends on the dynamical evolution of the solar system's planetesimal disk and the environment in which the solar system formed. Unfortunately, the vast majority of Oort cloud comets are unobservable, usually being seen only when they are perturbed onto orbits with perihelion $\\lesssim 5$ AU. Furthermore, the orbits of visible comets have generally been modified by poorly understood non-gravitational forces \\citep{Yeomans04}. For both reasons it is difficult to infer the properties of the Oort cloud from the statistical distribution of comet orbits.\n\\par \nThis paper describes the expected distribution of orbital elements of nearly isotropic comets (NICs). We define these to be comets that have been perturbed into the planetary region from the Oort cloud. 
Theoretical models of the distribution of NICs have been constructed by others e.g., \\citet{Wiegert99}, \\citet{Levison01} and \\citet{Fouchard1, Fouchard2, Fouchard3}; the novel feature of the present study is that we focus on much larger heliocentric distances (up to 45 AU) in anticipation of deep wide-angle sky surveys that are currently under development. We use a simple physical model that assumes a static spherical Oort cloud with comets uniformly distributed on a surface of constant energy in phase space. Models of the formation of the Oort cloud \\citep[e.g.,][]{Dones04} suggest that this approximation is reasonable except perhaps at the smallest semi-major axis we examine, $a=5,\\!000$ AU, where the cloud is somewhat flattened. Furthermore, we assume that these comets evolve solely under the influence of the Galactic tide and perturbations from the giant planets. We do not consider the stochastic effects of passing stars, as they have effects qualitatively similar to the effects of the Galactic tide \\citep{Heisler87, Collins10}; see \\citet{Fouchard1, Fouchard2, Fouchard3} for a detailed comparison of the influence of these two agents on the Oort Cloud. We follow the evolution of these comets through N-body simulations for up to 4.5 Gyr and use the results of these orbit integrations to construct a simulated comet catalog. \n\\par\nThis topic is of special interest now as the Large Synoptic Survey Telescope (LSST) will likely see many of these NICs in the outer solar system. LSST will survey 20,000 square degrees of sky (48\\% of the sphere) about 2,000 times over 10 years, down to an $r$-band magnitude of 24.5 \\citep{LSST09}. LSST has a flux limit 3.2 magnitudes fainter than, and more than three times the area of, the current leader in finding faint distant solar system objects --- the Palomar Distant Solar System Survey \\citep{Schwamb10}. It is expected to find tens of thousands of trans-Neptunian objects \\citep{LSST09}; however we are not aware of predictions made specifically for objects originating in the Oort cloud. \n\\par\n\\citet{Francis05} studied the long-period ($P > 200$ years) comet population using the Lincoln Near-Earth Asteroid Research (LINEAR) survey \\citep{Stokes00}. Most observed long-period comets likely originated in the Oort cloud. He found a sample of 51 long-period comets which were either discovered by LINEAR or would have been, had they not previously been discovered by another group. Fifteen of these had perihelion distances beyond 5 AU, but none beyond 10 AU. He used this sample to estimate properties of the Oort cloud. He found a``suggestive\" discrepancy between the distribution of cometary perihelion distances in the observed sample and in theoretical models \\citep{Tsujii92, Wiegert99}, but cautioned that the difference could be the result of a poor understanding of the rate at which comets brighten as they approach the Sun due to cometary activity. LSST will address this question by observing many comets at large heliocentric radii where they are inactive (see Section \\ref{sect:disrupt}).\n\\par\n\\citet{Hills81} proposed that the apparent inner edge of the Oort cloud at around 10,000 AU is not due to a lack of comets at smaller semi-major axes, but rather because the perihelion distances of those comets evolve slowly, so they are ejected or evolve to even smaller semi-major axes due to perturbations from the outer planets before they become visible from Earth. 
In contrast, comets with semi-major axis $a \\gtrsim$ 10,000 AU have their perihelion distance changed by more than the radius of Saturn's orbit in one orbital period, so they are able to jump across the dynamical barrier of the outer planets, and be seen in the inner solar system \\citep{Hills81}. This barrier is not 100\\% leak-proof, but as is demonstrated later in the paper, one expects the number density of comets with initial semi-major axes of 10,000 AU to decline by over two orders of magnitude interior to 10 AU. LSST should detect NICs at distances $>10\\,$-15 AU and so will enable us to estimate the population of this inner Oort cloud directly, because we will be able to see NICs outside the region of phase space from which they are excluded by the giant planets. The properties of this cloud may contain information about the density and mass distribution in the Sun's birth cluster \\citep{Brasser12}. \n\\par\nObserving NICs far from the Sun also probes in unique ways the parts of the Oort cloud that do send comets near Earth. Non-gravitational forces due to outgassing when the comet comes near the Sun are the primary source of error in determining the original orbits of these comets \\citep{Yeomans04}. It is somewhat uncertain at what radius outgassing begins, but a reasonable estimate would be around 5 AU (see discussion in Section \\ref{sect:disrupt}). Therefore, astrometric observations of comets beyond $\\sim \\!10$ AU should allow much more precise determination of their original orbits (see discussion at the end of Section \\ref{sect:orbel}).\n\\section{SIMULATION DESCRIPTION}\n\\label{desc}\nWe divide phase space into three regions, based on the perihelion distance of the cometary orbit. We define the ``visibility region\" as containing orbits with perihelion distance $q$ in the range $0 \\; {\\rm AU} < q < 45$ AU. A ``buffer region\" includes orbits with $45 \\; {\\rm AU} < q < 60\\; {\\rm AU}$. All other orbits are defined to be in the ``Oort region\". \n \\par\n We simulated orbits with the Rebound package, developed by \\citet{Rein12}. We used their IAS15 integrator, a 15th order adaptive-timestep integrator that is sufficiently accurate to be symplectic to double precision \\citep{Rein15}.\n \\par\n Our goal is to model the steady-state distribution of NICs with perihelia within 45 AU of the Sun that are produced from an Oort cloud with orbital elements uniformly distributed on an energy surface in phase space (so $dN \\sim \\sin{I} dI de^2$, where $I$ and $e$ are the cometary inclination and eccentricity). This approximation assumes that perturbations from the Galactic tide, passing stars or molecular clouds over the last four Gyr have been sufficient to isotropize comets both in position space (seen from the Sun) and velocity space at a fixed position. This has been shown to be roughly true for comets with semi-major axes greater than 2,000 AU \\citep{Duncan87}. \n \\par\n To initialize the simulation, we generated comets at random from this phase-space distribution for four discrete values of initial semi-major axis, $a_i =$ 5,000, 10,000, 20,000, and 50,000 AU, with perihelion distances in the range 60 AU to 60 + $\\Delta$ AU. Then, using an analytic approximation to the torque from the Galactic tide (see appendix \\ref{Rtorque}), we determined an upper bound on the time $\\tau_{\\rm entry}$ (as a function of $a_i$) such that no comet from outside (60 + $\\Delta$) AU could enter the buffer or visibility regions within the next $\\tau_{\\rm entry}$ years. 
We chose $\\Delta = 5$, but it is straightforward to see that the results do not depend on this choice. We picked $\\tau_{\\rm entry} = 10^7, 2.5 \\cdot 10^6, 6.25 \\cdot 10^5$ and $10^5$ years, for $a_i$ = 5,000, 10,000, 20,000 and 50,000 AU respectively. These numbers satisfy our upper bound. \n \\par\nWe then evolved the comets under the influence of the Sun, the Galactic tide, and the four outer planets for $\\tau_{\\rm entry}$. After $\\tau_{\\rm entry}$ had elapsed, we removed any comet with perihelion greater than 60 AU from the simulation. The remaining comets were allowed to evolve under the influence of the Galactic tide and gravity from the four giant planets and the Sun. At fixed intervals $\\tau_{\\rm sample}$ (taken to be 10 years), we recorded the position and velocity of any comet that was within 45 AU of the Sun in a catalog. This procedure gives us the same expected comet count and distribution of orbital elements as if we had allowed the system to evolve to steady state, and then catalogued the comets visible within 45 AU at an instant in time, and multiplied the flux by $\\tau_{\\rm entry}\/\\tau_{\\rm sample}$.\n\\par\nComets are removed from the simulation if they collide with a planet, come within 0.1 AU of the Sun, move outside 200,000 AU, or are perturbed back into the Oort region ($q >$ 60 AU)\\footnote{Because the boundary between the buffer and Oort region at 60 AU corresponds to a perihelion distance twice the semi-major axis of Neptune's orbit, we expect the planets to have a negligible effect on the orbits of comets in the Oort region. Therefore, it is reasonable to assume that a comet with an orbit aligned such that the Galactic tide pulls it from the buffer region into the Oort region will not return for a long time. }. \n\\subsection{Orbital Elements}\n\\label{sect:orbel}\nThe treatment of orbital elements for highly eccentric orbits that pass through the orbits of massive planets is somewhat subtle. A comet that is having a close encounter with one of the giant planets will undergo large short-term perturbations to its orbital elements that do not reflect changes to its orbit that will last longer than the duration of the encounter. Short-term perturbations from such encounters are more serious for comets with large semi-major axes because the energy of the comet in the planetary potential well can equal or exceed the total orbital binding energy of the comet. At distance $r_p$ from a planet with mass $M_p$, the fractional change in energy due to the potential energy of the planet is \n\\begin{equation}\n\\label{perturbation}\n\\frac{\\Delta E}{E} = \\frac{2a_c M_p}{r_p M_\\odot} = 0.19\\; \\frac{a_c}{100 \\; {\\rm AU}} \\frac{1 \\,{\\rm AU}}{r_p} \\frac{M_p}{M_{\\rm Jupiter}},\n\\end{equation}\nwhere $a_c$ is the semi-major axis of the comet prior to the close encounter. We stress that $a_c$ and $a_i$ are not the same quantity: $a_c$ is the current semi-major axis of the comet, whereas $a_i$ is the semi-major axis of the comet when the simulation was initialized. \n\\par\nTo lessen the short-term planetary perturbations to cometary orbital elements, we adopt the following procedure. For comets with $a_c < 100$ AU, we simply report heliocentric orbital elements. These comets have large enough binding energy that they would have to pass close to a planet (within 2.0 AU for Jupiter) to obtain enough extra kinetic energy for $a_c$ to vary by more than 10\\% during the close passage (see Equation \\eqref{perturbation}). 
In order to prevent very close planetary approaches from contaminating our results, we discarded any observation in which the comet is currently close enough to a planet that the specific potential energy due to the planet is more than 10\\% of the specific binding energy in an orbit around the Sun with semi-major axis of 100 AU. This occurs for only $0.004\\%$ of all catalog entries, or $0.3\\%$ of all catalog entries with $R<10$ AU.\n\\par\nComets in the visibility region with $a_c > 100$ AU often have potential energies due to the planets which are comparable to their binding energies. For this reason, we report the barycentric orbital elements of the comet the last time it was in the range [90 AU, 110 AU]. These elements are well-behaved, since they are calculated far outside the orbits of the giant planets where it is appropriate to represent the solar system as having all its mass located at the barycenter. \n\\par\nThe ease with which LSST can determine orbital elements for slowly moving nearly unbound objects is also of interest to this study. To address this question, we searched the JPL small body database\\footnote{\\url{http:\/\/ssd.jpl.nasa.gov\/?horizons}} for objects with semi-major axis greater than 300 AU and perihelion distance greater than 10 AU. It listed seven objects with a data-arc longer than 5 years. The estimated errors in $x = 1\/a$ ranged from $1.5\\cdot 10^{-6}$ AU$^{-1}$ to $1.5 \\cdot 10^{-5}$ AU$^{-1}$ for these objects. It therefore seems reasonable to expect orbits to generally be determined to at least this level of accuracy purely from 10 years of LSST data.\n\\subsection{Disrupted Comets}\n\\label{sect:disrupt}\nThere is a substantial body of evidence suggesting that comets ``fade\" over time (e.g., \\citealt{Fernandez81, Wiegert99}). A number of processes have been proposed to explain this phenomenon \\citep{Weissman80}: a comet could run out of volatile material, it could have its surface covered with a crust that prevents volatiles from escaping, or it could physically be broken apart by outgassing or tidal stress. \n\\par\n\\citet{Fernandez05} gives 3 AU as a likely cut-off to comet activity based on the sublimation temperature of water, but cautions that many comets experience some activity outside 3 AU due to the sublimation of more volatile elements. Comet 67P\/Churyumov-Gerasimenko first showed signs of activity when it was 4.3 AU from the Sun \\citep{Snodgrass13}. Comet Hale-Bopp showed substantial activity on its approach to perihelion as far out as 7.2 AU \\citep{Weaver97}, and at 27.2 AU after perihelion passage \\citep{Kramer14}. When we calculate numbers of visible NICs, we restrict ourselves to NICs further than 5 AU from the Sun. For this reason, we do not consider the effect of comet activity on magnitude, and just calculate the magnitude from the size and albedo of the bare nucleus, see Section \\ref{sect:vis}. We believe that our assumption that there is negligible activity beyond 5 AU is reasonable, though not certain, given existing observations. In any event this assumption gives us a conservative estimate of the number of comets that a survey like LSST will discover.\n\\par\nBecause comet activity does not affect brightness in our model, we are only sensitive to physical disruption of comets, not loss of volatiles. 
For this reason, throughout this paper, we refer to ``disruption\", rather than ``fading\".\n\nIn the results that follow, we remove a comet after it has made 10 apparitions in the catalog with radius $R < 3$ AU (corresponding to a total exposure to the Sun at $R < 3$ AU of about 100 years, since $\\tau_{\\rm sample} = 10$ years). Comets are also removed if they ever travel within 0.1 AU of the Sun (even if they do not appear so close in the catalogue) or if they suffer a collision with one of the gas giants.\n\\section{COMET LIFETIMES IN THE SIMULATION REGION}\n\\label{lifetime}\nFigure \\ref{Occurrence} shows the fraction of NICs in our simulations with $a_c > 34.2$ AU (period $>$ 200 years) appearing in the region with $R<45$ AU for more than $t$ years, as a function of $t$. Different curves correspond to different values of the initial semi major axis $a_i$. In this plot we terminate each orbit integration after 4.5 Gyr. The error bars are derived from the re-sampling procedure described in Section \\ref{sect:concentration}. In this and all subsequent plots, only comets with periods greater than 200 years (corresponding to semi-major axes greater than 34.2 AU) are counted.\n\\par\n\\citet{Yabushita79} argued using a simple random walk model that the number of NICs surviving more than $N_{\\rm peri}$ perihelion passages should scale as $P(>\\! \\!N_{\\rm peri}) \\propto N_{\\rm peri}^{-1\/2}$. Assuming for the sake of argument that NICs spend a fixed amount of time in the visibility region per perihelion passage, then the number of NICs having a given number of catalog entries is proportional to the number of NICs surviving for more than a given number of perihelion passages. We should therefore recover the same power-law as \\citet{Yabushita79} if his model is a good approximation to the full physics captured by the simulation. This plot largely confirms the predictions of \\citet{Yabushita79}, but the exponent of the power-law seems to be slightly steeper than his value of $-1\/2$. \n \\par\n Deviations from power-law behavior at short times occur because the visibility region is larger than the region of influence of the planets, so there is some delay before NICs that have entered the visibility region interact with the planets. Comets with smaller values of $a_i$ experience less torque due to the Galactic tide, so the delay is larger. They also have more binding energy that must be overcome prior to ejection. This explains the trend seen in Figure \\ref{Occurrence} that NICs with smaller $a_i$ take longer to be ejected.\n\\par\nThe fact that some of our simulated particles survive for longer than 1 Gyr leads to concern about the physical validity of our assumption that there is a static Oort cloud. Likely, many of the NICs that will be observed with LSST exited the Oort cloud more than 1 Gyr in the past, when it may have had different physical properties.\n\\par \nEven if the properties of the Oort cloud have not changed over 4.5 Gyr, long-lived comets may bias our simulations, as the following argument shows. 
Suppose, as seems reasonable from Figure \\ref{Occurrence}, that the true distribution of time $t$ that a comet spends in the visibility region is given by a power law, i.e.\n\\begin{equation}\n\\label{simpledndt}\ndp\/dt = \\left\\{\n\\begin{array}{ll}\n\\frac{(\\alpha-1) t^{-\\alpha}}{t_{\\rm min}^{1-\\alpha}} & t > t_{\\rm min} \\\\\n0 & t \\leq t_{\\rm min}\n\\end{array}\\right.\n\\end{equation}\n\\begin{figure}\n\\centering\n\\caption{Fraction of NICs appearing in the region with $R<45$ AU for more than $t$ years, as a function of $t$. Different curves correspond to different values of the initial semi-major axis $a_i$, and error bars are derived from the re-sampling procedure described in Section \\ref{sect:concentration}. Only comets with $a_c > 34.2$ AU (corresponding to long-period comets --- comets with period $>$ 200 years) were counted.}\n\\label{Occurrence}\n\\vspace{-.05cm}\n\\end{figure} \n\\par\nIn the following sections we present simulation data for different values of $t_{\\rm cutoff}$. $t_{\\rm cutoff}$ is defined relative to the time when the comet first enters within 45 AU {\\it except} when $t_{\\rm cutoff} = 4.5$ Gyr, in which case it is defined relative to the start of the simulation. We believe that a value of a few Gyr is most observationally relevant, and most of the following discussion is based on such cutoff times; however, given the limitations discussed above, we show results for shorter cutoff times as well.\n\\section{CONCENTRATION OF NICS DUE TO THE GIANT PLANETS}\nIn this section we describe how the distribution of NICs is affected by the planets.\n\\subsection{Expected Distribution of $R$ and $q$ with no Planets}\nWe first calculate the expected distribution of orbital radius $R$ and perihelion distance $q$ in the absence of perturbations from the planets, constructing what we call the zero-planet model. We assume a uniform distribution of cometary orbital elements in phase space at four fixed energies. These results provide a natural normalization to the plots in the following subsection, which show the distributions of $R$ and $q$ in our simulated catalog.\n\\subsubsection{Distribution in Radius and Perihelion at a Snapshot in Time}\nLet there be $N_0$ comets distributed uniformly on the energy surface corresponding to semi-major axis $a$. There is no need to distinguish between the initial semi-major axis $a_i$ and the current semi-major axis $a_c$ here, since the semi-major axis is not changed in the absence of planetary perturbations. Approximating the orbits as parabolic in the visibility region, we find that the radial velocity at radius $R$ of an orbit with perihelion distance $q$ is\n\\begin{equation}\n\\label{vr}\nv_R(q, R) = \\sqrt{\\frac{2GM_\\odot(1-q\/R)}{R}}.\n\\end{equation}\nSince we assume a uniform distribution of comets on the energy surface, the probability density of the squared eccentricity $e^2$ is \n\\begin{equation}\nN(e^2) de^2 = N_0de^2.\n\\end{equation}\nThen, since $q=a(1-e)$, the number of comets per unit perihelion distance $N(q)$ is given by \n\\begin{equation}\n\\label{Nq}\nN(q)dq = 2N_0\\left(1-\\frac{q}{a}\\right )\\frac{dq}{a}\\simeq \\frac{2N_0}{a} dq,\n\\end{equation}\nwhere the final approximation holds because we are interested in comets with $q \\ll a$. A comet on a near-parabolic orbit with perihelion $q$ spends a fraction of its time $f(R,q)dR$ in the radial interval between $R$ and $R+dR$, where \n\\begin{equation}\n\\label{frq}\nf(R, q)dR = \\frac{2 dR}{Pv_R(q, R)},\n\\end{equation}\nand $P$ is the period of the orbit. 
Then, using Equations \\eqref{Nq} and \\eqref{frq}, we can solve for the number of comets in a radial interval $N(R)dR$:\n\\begin{equation}\n\\label{Nr}\nN(R) = \\int_{q=0}^R N(q) f(R, q) dq = \\frac{2 \\sqrt{2} N_0 R^{3\/2}}{\\pi a^{5\/2}}.\n\\end{equation}\n\\par\nThe number of comets with perihelion in the range $q$ to $q + dq$ expected to be present out to a maximum value of $R$ is given by \n\\begin{equation}\nN(q|R_{\\rm max})dq = N(q)dq \\int_q^{R_{\\rm max}} f(R', q) dR'.\n\\end{equation}\nThis evaluates to \n\\begin{equation}\n\\label{Nqexpect}\nN(q|R_{\\rm max})dq = \\frac{2 \\sqrt{2}}{3 \\pi a^{5\/2}} \\left(\\frac{R_{\\rm max}}{q}-1\\right)^{1\/2} \\left(2 + \\frac{R_{\\rm max}}{q}\\right)q^{3\/2} dq.\n\\end{equation}\nAs mentioned in Section \\ref{desc}, because of the way we have set up our simulation, and the fact that we sample every $\\tau_{\\rm sample}$ years, we would expect our catalog to contain a number of comets with orbital elements $\\psi$ equal to \n\\begin{equation}\n\\label{sampleToActual}\nN_{\\rm cat}(\\psi|R_{\\rm max})= \\frac{\\tau_{\\rm entry}}{\\tau_{\\rm sample}} N(\\psi|R_{\\rm max}),\n\\end{equation}\nif we had not included any planets in the simulation.\n\\subsubsection{Concentration Factors}\n\\label{sect:concentration}\nIn this subsection we compare our catalog to the one which would be produced had we not included planets in our simulations. In Figure \\ref{Rconcentration}, we compare our simulated comet catalog with the zero-planet model (Equations \\eqref{Nr} and \\eqref{sampleToActual}). We have plotted a ``concentration factor\" --- the ratio of comets appearing in our catalog within a given radial interval to the number calculated from Equations \\eqref{Nr} and \\eqref{sampleToActual}, assuming the same density of comets in the Oort cloud as was used to initialize our simulations. As stated previously, only comets with periods greater than 200 years are counted. Each panel corresponds to a different value of the initial cometary semi-major axis $a_i$. Different colored lines correspond to different values of $\\tau_{\\rm cutoff}$ as shown in the legend. \n\\par\nThese data represent the results from following $N_{\\rm sim}$ comets where $N_{\\rm sim} =$ 10,000, 16,000, 40,000, and 420,000 for $a_i =$ 5,000, 10,000, and 20,000 AU, and 50,000 AU respectively. Note that fewer than 10\\% of these ever enter the region $R < 45$ AU (955, 1548, 3949, and 29215 respectively). Most comets do not evolve to $q < 60$ AU within the entry period and are therefore discarded at the end of the entry period. Some of those that do come within $q < 60$ AU never reach $q < 45$ AU (if the angular momentum is nearly perpendicular to the torque).\n\\par \nWe would like to know the true distribution of orbital elements in the limit that we simulate a very large number of comets. To estimate our random error, we employed re-sampling. For each point on the curve, we drew $N_{\\rm sim}$ comets with replacement from our $N_{\\rm sim}$ simulated comets. The points are the mean of the re-sampled distribution, and the error bars correspond to the 16$^{\\rm th}$ and 84$^{\\rm th}$ percentiles of the re-sampled distribution (if the distribution were normally distributed, these would be 1-sigma error bars). The error bars on nearby points are highly correlated in most of our plots because the same comet contributes to several bins in the course of its evolution, hence the low point-to-point scatter relative to the error bars. 
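\n\\par\nAs a concrete illustration of this re-sampling (the snippet is ours and is not part of the simulation pipeline), suppose the hypothetical array \\texttt{per\\_comet\\_counts} stores one row per simulated comet, with each row holding that comet's contribution to every histogram bin. The quoted points and error bars can then be obtained as follows.\n\\begin{verbatim}\nimport numpy as np\n\ndef bootstrap_band(per_comet_counts, n_resamples=1000, seed=0):\n    # Draw N_sim comets with replacement and rebuild the binned totals,\n    # then report the mean curve and the 16th and 84th percentile envelope.\n    rng = np.random.default_rng(seed)\n    n_sim, n_bins = per_comet_counts.shape\n    totals = np.empty((n_resamples, n_bins))\n    for k in range(n_resamples):\n        idx = rng.integers(0, n_sim, size=n_sim)\n        totals[k] = per_comet_counts[idx].sum(axis=0)\n    return totals.mean(axis=0), np.percentile(totals, [16, 84], axis=0)\n\\end{verbatim}\n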
\n\\par\nAlthough these data reflect the orbits of thousands of comets, the statistical errors are large in many cases, since often the majority of the contribution to a particular bin comes from only one or two long-lived comets (see the discussion in \\S \\ref{lifetime}).\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{Rconcentration.pdf}\n\\caption{Number of comets in the simulated catalog at radius $R$, normalized by the expected number assuming no planets (Equations \\eqref{Nr}, \\eqref{sampleToActual}). Different curves correspond to different values of the cutoff time $\\tau_{\\rm cutoff}$. Errors were determined via bootstrapping (see Section \\ref{sect:concentration} for details). Points and error bars from curves corresponding to different cutoff times have been horizontally displaced slightly for clarity. }\n\\label{Rconcentration}\n\\vspace{-.05cm}\n\\end{figure} \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{Qconcentration.pdf}\n\\caption{Number of comets in the catalog at perihelion $q$, normalized by the expected number assuming no planets (Equations \\eqref{Nqexpect}, \\eqref{sampleToActual}). Different curves correspond to different values of the cutoff time $\\tau_{\\rm cutoff}$.}\n\\label{Qconcentration}\n\\vspace{-.05cm}\n\\end{figure} \nFigure \\ref{Qconcentration} is similar to Figure \\ref{Rconcentration}, except that instead of plotting the number of appearances in our catalog in bins of heliocentric radius $R$, we plot appearances in bins of perihelion $q$. The distribution of $q$ provides more information about the NIC orbits: a sharp feature in the perihelion distribution is smoothed out when one looks at the distribution of heliocentric radius, since the same orbit can be observed at a range of values of $R$. \n \\par\nIn Figures \\ref{Rconcentration} and \\ref{Qconcentration}, and in most of the subsequent plots, we have horizontally offset the blue, green, yellow, and red curves by $-4.5\\%$, $-1.5\\%$, $1.5\\%$, and $4.5\\%$ respectively in order to make the curves distinguishable in regions where they overlap.\n\\par\nThe following qualitative features of Figures \\ref{Rconcentration} and \\ref{Qconcentration} are straightforward to explain:\n\\begin{itemize}\n\\item The flux of comets with $a_i \\lesssim 20,\\!000$ AU shows a sharp drop-off (in both $N(q)$ and $N(R)$) interior to 10 AU. This is because comets originating at small semi-major axes are subjected to weak Galactic tides and change their perihelia slowly. The majority of kicks given to a comet with $q<10$ AU are large enough to either unbind a comet infalling from outside a few thousand AU, or to reduce $a_c$ to the point that the timescale to change the perihelion distance is much longer than an orbital period. We therefore expect to see a jump in the number of comets appearing inside 5 AU at values of $a_i$ exceeding that at which a comet can go from perihelion $\\gtrsim 15$ AU (largely unaffected by Jupiter and Saturn) to perihelion $\\lesssim 5$ AU in one orbit. This occurs at approximately 30,000 AU. Therefore, in order for a comet starting from inside $\\sim 30,\\!000$ AU to appear in the inner solar system, it needs to have a lucky orientation with respect to Jupiter and Saturn\\footnote{Or a kick from a star that passes unusually close to the Sun, a rare event not included in our model.}. 
This lucky orientation can either yield multiple small energy kicks on subsequent perihelion passages, or yield a kick that increases the semi-major axis so that the comet receives a larger torque from the Galaxy \\citep{KaibQuinn09}. \n \\par \n Thus we conclude that comets with $a_i \\lesssim 20,\\!000$ AU are mostly ejected by interactions with the outer planets before they reach small heliocentric radii. \\citet{Hills81} arrives at a similar result considering the effects of passing stars discretely rather than as a smooth Galactic tidal potential. \\citet{Collins10} provide a discussion of when it is appropriate to treat the influence of the Galaxy as being due to a smooth tidal field, and when it should be modeled as discrete stellar encounters. \n \\par\n The lower concentration factors for comets with smaller values of $a_i$ do not mean that we will see fewer NICs for a given number of Oort cloud comets at that energy. This is because the fraction of the total comets that are in the visibility region at a given time in the zero-planet model scales with $a^{-2.5}$ (see Equation \\eqref{Nr}). \n\\item \nThe difference between the green curves and the orange and red curves grows with increasing $q$. This is because the kicks from the planets are smaller at large $q$, so the comets survive longer. The exception to this is the curve for $\\tau_{\\rm cutoff} = 100$ Myr and $a_i =$ 5,000 AU. In this case, comets have mostly had insufficient time to be torqued to small values of $q$. It should be noted however, that the time to reach a given perihelion distance is not completely determined by the initial semi-major axis because a comet could be scattered to larger $a_i$ by an early encounter with Neptune, and subsequently evolve more rapidly. \n\\par\nThere is little difference between the results for $t_{\\rm cutoff} = 1$ Gyr, and $t_{\\rm cutoff} = 4.5$ Gyr for comets with $a_i \\geq 20,\\!000$ AU. As discussed previously, this is likely an artifact of our simulations including insufficiently many comets with these values of $a_i$ to capture the tail of the lifetime distribution (see Section \\ref{lifetime}).\n\\item\nThe concentration factors for large cutoff time in Figure \\ref{Qconcentration} approach unity as $q$ approaches 45 AU, however the concentration factors in Figure \\ref{Rconcentration} are still on the order of 10 at 45 AU. This is because even a concentration of comets with $q \\ll 45$ AU affects the distribution of comets at $R=45$ AU. \n\\item\nWe do not expect $N(q)$ to drop exactly to unity as soon as $q$ is larger than the extent of the planetary perturbations, because NICs which have interacted with the planets may be systematically carried away from the planetary region by the tide at a different rate than they were carried in (due to a change in orientation or semi-major axis). \n\\end{itemize}\n\\section{DISTRIBUTION OF ORBITAL ELEMENTS FOR VISIBLE COMETS}\nThe above analysis shows the degree to which the giant planets concentrate NICs in the outer solar system and exclude them from inside the orbit of Jupiter. In this section, we use an estimate of the size distribution of NICs, the relationship between magnitude, size and heliocentric distance, and the concentration effect due to interactions with the planets to calculate the number of NICs expected to be seen in an all-sky survey as a function of the limiting magnitude $m_{\\rm lim}$. 
\n\\subsection{Size Distribution}\nComets have so far been observed primarily within a few AU of the Sun, where their brightness is influenced strongly by their activity. At the larger distances that we focus on here, comets are believed to be generally inactive (see discussion in Section \\ref{sect:disrupt}), so their brightness is determined solely by their size, distance, and albedo. Let $H$ be the apparent magnitude of a comet 1 AU from the Sun and 1 AU from the observer, seen from zero phase angle. Based on a sample of long-period comets from about $H = 5$ to $H = 9$, \\citet{Sosa11} derive a relation for active comets between the radius $r$ (in kilometers) and $H$:\n\\begin{equation}\n\\label{rH}\n\\ln{r} = \\alpha+\\beta H,\n\\end{equation}\nwhere $\\alpha$ = 2.072 and $\\beta = -0.2993$. We caution that our use of this formula requires substantial extrapolation: the largest comet used to determine the formula has a radius of 1.8 km, more than an order of magnitude smaller than the smallest comets detectable at 30 AU in a survey with the limiting magnitude of LSST (see Section \\ref{sect:vis}). \\citet{Sosa11} note that the relation in Equation \\eqref{rH} predicts a radius (13 km) for comet Hale-Bopp that is somewhat below other estimates (mostly falling in the 20--35 km range). This discrepancy suggests that Equation \\eqref{rH} may underpredict the radii of large comets, in which case our estimates of the observable comet population will be conservative. Note that by using Equation \\eqref{rH} we are assuming that long-period comets are mostly the same population as NICs (or at least have the same size distribution).\n\\par\n\\citet{Hughes01} finds that the number $N_{\\rm peri}$ of long-period comets with brightness $H < 6.5$ passing through perihelion in the inner solar system per year per AU of perihelion distance is given by \n\\begin{equation}\n\\frac{dN_{\\rm peri}}{dH} = c_0 e^{\\gamma H},\n\\end{equation}\nwith $c_0 = 2.047 \\cdot 10^{-3}$ and $\\gamma = 0.827$. We can then transform variables to $r$ using Equation \\eqref{rH}. We find that\n\\begin{equation}\n\\label{dNdr}\ndN_{\\rm peri}\/dr = -\\frac{c_0}{\\beta} \\mathlarger{\\mathlarger{e^{-\\gamma \\alpha\/\\beta} r^{\\gamma\/\\beta - 1}}} = 2.09\\cdot r^{-3.76}.\n\\end{equation}\n\\par\nThis distribution holds down to $r(H = 6.5) = 1.1$ km, however, for simplicity we extrapolate it down to $0.9$ km --- the smallest comet visible at 5 AU in our model (see next section). This size distribution leads to a weak divergence in total mass at the large end of the spectrum. Nevertheless, we assume that the power-law behavior holds up to several tens of kilometers. The size distribution in Equation \\eqref{dNdr} is steeper than the relation ($dN_{\\rm peri}\/dr \\sim r^{-2.79}$) estimated in \\citet{Hughes01} because he uses a different relation between $H$ and $r$. It is also substantially steeper than the relation ($dN_{\\rm peri}\/dr \\sim r^{-2.92}$) found in \\citet{Snodgrass11} for the Jupiter-family comets. If the size distribution is shallower than we have estimated at large radii, then our estimates of the observable comet population will be conservative. \n\\subsection{Visibility Model}\n\\label{sect:vis}\nIn this section we describe our model for determining how likely a given simulated comet is to be visible. We assume that comets have an $r$-band geometric albedo $A_g$ of 0.04 as suggested in \\citet{Lamy04}. 
We find that the magnitude $m$ of an inactive comet is \n\\begin{align}\n\\label{mag}\nm &= -27.08 - 2.5 \\log{\\left(\\frac{r^2 A_g {\\rm (AU)}^2}{R^4}\\right)} \n\\nonumber\\\\\n&= 24.28 - 2.5 \\log{(A_g\/0.04}) -5 \\log{(r\/{\\rm 1km})} + 10 \\log{(R\/{\\rm 5AU})},\n\\end{align}\nwhere we have used $-27.08$ as the apparent $r$-band magnitude of the Sun. Equation \\eqref{mag} is only valid for comets far from the Sun, since we have assumed that $R_{\\rm Sun, comet} = R_{\\rm Earth, comet}$, that the phase angle is zero, and most importantly, that we are seeing the bare nucleus of an inactive comet. \\citet{Lamy04} state that magnitude drops off at about a rate of 0.04 magnitudes\/degree of phase angle, meaning that error due to this nonzero phase angle is limited to at most 0.23 magnitudes for a comet at 10 AU. Similarly, at 10 AU, the largest possible error arising from the approximation that the Sun-comet distance is equal to the Earth-comet distance also corresponds to a magnitude error of $\\Delta m = 0.23$.\n\\subsection{Weighting of Observations}\nUsing Equation \\eqref{mag} we can solve for $r_{\\rm min}(R)$, the radius in kilometers of the smallest comet visible at distance $R$ in a survey with limiting magnitude $m_{\\rm lim}$, assuming $A_g = 0.04$:\n\\begin{equation}\n\\label{rR}\nr_{\\rm min}(R) = 0.903 \\cdot 10^{0.2(24.5-m_{\\rm lim})} \\left(\\frac{R}{\\rm 5\\; AU}\\right)^2.\n\\end{equation}\nThe comet size is not explicitly tracked in our simulations. We assume that the sizes of comets in our simulation are drawn from the distribution in Equation \\eqref{dNdr}, with a lower cutoff radius of $0.903$ km --- the smallest comet visible at 5 AU in our model. To account for the fact that not all comets are visible at all orbital radii, we weight a simulated comet appearance at radius $R$ by the fraction of comets that would be visible at the observed value of $R$ given the assumed size distribution in the simulation. An appearance at high $R$ will receive a low weight, since most comets would not be visible so far away. We assign weight 1 to observations at $R = 5$ AU, as all comets in our assumed size distribution would be visible at 5 AU. Then at general $R$, we assign weight \n\\begin{equation}\nW(R) = \\frac{\\int_{r_{\\rm min}(R)}^\\infty r^{-3.76} dr}{\\int_{r_{\\rm min}(5 \\; {\\rm AU})}^\\infty r^{-3.76} dr} = \\left(\\frac{5 \\; {\\rm AU}}{R}\\right)^{5.52}.\n\\end{equation}\n\\par \nWe quote numbers of comets with a given set of orbital elements per $10^{11}$ comets larger than one kilometer in a spherical distribution at the assumed initial semi-major axis. 
To achieve this normalization, we multiply our counts by\n\\begin{equation}\n\\frac{10^{11}f_{60-65}(a_i)}{N_{\\rm init}} \\frac{\\tau_{\\rm sample}}{\\tau_{\\rm entry}} \\frac{\\int_{0.903}^\\infty N(r) dr}{\\int_1^\\infty N(r)dr},\n\\end{equation}\nwhere $f_{60-65}(a_i)$ is the fraction of the phase space of orbits with semi-major axis $a_i$ that consists of orbits with perihelia between 60 and 65 AU, given by \n\\begin{equation}\nf_{60-65}(a_i) = \\frac{(a_i - 60 \\; {\\rm AU})^2 -(a_i - 65 \\; {\\rm AU})^2}{a_i^2},\n\\end{equation}\nand $N_{\\rm init}$ is the number of comets we initialize between 60 and 65 AU.\n\n\n\n\n\n\\subsection{Distribution of Visible Comets}\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{Rvisible.pdf}\n\\caption{Number of NICs expected to be seen per logarithmic interval in $R$ at a snapshot of time in an all-sky survey with limiting $r$-band magnitude $m_{\\rm lim} = 24.5$. This assumes there are $10^{11}$ comets with $r > 1$ km at the value of initial semi-major axis $a_i$ specified in each panel. Errors were determined via bootstrapping (see Section \\ref{sect:concentration} for details). Different curves correspond to different values of $t_{\\rm cutoff}$.}\n\\label{Rvisible}\n\\vspace{-.05cm}\n\\end{figure} \n \\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\textwidth]{Qvisible.pdf}\n\\caption{Number of NICs expected to be seen per logarithmic interval in $q$ at a snapshot of time. This assumes there are $10^{11}$ comets with $r > 1$ km at the value of $a_i$ specified in each panel. Different curves correspond to different values of $t_{\\rm cutoff}$.}\n\\label{Qvisible}\n\\vspace{-.05cm}\n\\end{figure} \n\\\nIn this section we present results from our simulations showing how many NICs are visible over the whole sky at a given snapshot of time as a function of $R$ and $q$, for observations taken between $R = 5 \\; {\\rm AU}$ and $R = 45 \\; {\\rm AU}$. In all cases, we assume a limiting $r$-band magnitude $m_{\\rm lim}$ of 24.5, equivalent to the one-exposure limit for LSST \\citep{LSST09}. The number of distant NICs expected to be discovered by LSST differs from the results presented here for two reasons. First, LSST is expected to operate for 10 years, so it should see more than just the comets visible in a snapshot, particularly in the case of the closer comets where $R\/v$ is less than 10 years. Second, LSST will only survey 48\\% of the sky, so will only see about half of the comets that would be visible in an equivalent all-sky survey. Comets move slowly enough that trailing losses will be insignificant given the 30 second exposure time. Using Equation (8) from \\citet{Ivezic08}, we estimate a comet at 10 AU will have a limiting magnitude only 0.06 magnitudes brighter due to trailing losses.\n\\par\nFigures \\ref{Rvisible} and \\ref{Qvisible} show the number $N$ of NICs expected to be visible outside $5$ AU per unit of $\\ln{R}$ and $\\ln{q}$ respectively, per $10^{11}$ comets with $r$ greater than 1 km at the labeled initial semi-major axis in the Oort cloud. The shapes of the curves are substantially different for different values of $a_i$, particularly in the region between 5 and 10 AU, where the statistics are the best. In the $a_i =$ 5,000 AU case, the expected count {\\it increases} by a factor of 5 from $R=$ 5 AU to $R=$ 10 AU for $t_{\\rm cutoff} \\geq 1$ Gyr. In the $a_i =$ 50,000 AU case it {\\it decreases} by a factor of $\\approx 5$. 
Observations of comets in this regime will therefore allow us to observationally constrain the distribution of $a_i$. The peak of $RdN\/dR$ moves smoothly from around 15 AU for $a_i = 5,\\!000$ AU to less than 5 AU for $a_i = 50,\\!000$ AU.\n\\par\nAs shown in Appendix \\ref{sect:isovis}, we expect $RdN\/dR$ and $qdN\/dq$ to decline as $R^{-3.02}$ and $q^{-3.02}$ respectively in the zero-planet model. Deviations from this behavior are due to variation in the concentration factor as shown in Figures \\ref{Rconcentration} and \\ref{Qconcentration}.\n \\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{largea.pdf}\n\\caption{Number of NICs with $a > 300$ AU expected to be seen per logarithmic interval in $q$ at a snapshot of time. This assumes there are $10^{11}$ comets with $r > 1$ km at the value of $a_i$ specified in each panel.}\n\\label{largea}\n\\vspace{-.05cm}\n\\end{figure} \n\\par\nWe also examined what happened if we broke up the sample into two groups depending on the current semi-major axis of the comet. Figure \\ref{largea} is identical to Figure \\ref{Qvisible} except that we have only considered those comets that have semi-major axes greater than 300 AU. The error bars are smaller, because the comets with the most appearances tend to diffuse to smaller values of $a_c$, leaving a population with less spread in number of appearances. For this reason, this subset of comets, although smaller in number, has more power to discriminate between Oort cloud models.\n \\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{smalla.pdf}\n\\caption{Number of NICs with $a < 300$ AU expected to be seen per logarithmic interval in $q$ at a snapshot of time. This assumes there are $10^{11}$ NICs with $r > 1$ km at the value of $a_i$ specified in each panel.}\n\\label{smalla}\n\\vspace{-.05cm}\n\\end{figure} \nIn Figure \\ref{smalla} we plot $qdN\/dq$ for only the comets in Figure \\ref{Qvisible}, but not Figure \\ref{largea}, i.e., those comets whose orbits have $a_c<300$ AU. We see that $qdN\/dq$ declines more sharply with $q$ than in the whole sample of long-period comets. This is because it is difficult for a comet to attain $a_c < 300$ AU at large perihelion, because the kicks are too small. We also note that the overall numbers are larger by a factor of a few for comets with $a_c < 300$ AU.\n\n\n\\subsection{Distribution in Semi-major Axis}\n\\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{avisible.pdf}\n\\caption{Number of NIC appearances per logarithmic interval in semi-major axis for different values of perihelion distance (different color curves) and different values of the initial semi-major axis (different panels). Each panel assumes that the Oort cloud contains $10^{11}$ comets with $r > 1$ km at the specified value of $a_i$.}\n\\label{avisible}\n\\vspace{-.05cm}\n\\end{figure} \nFigure \\ref{avisible} shows the semi-major axis distribution (number of appearances per unit logarithmic interval in semi-major axis) for all the NICs in a given perihelion bin (denoted by the color of the curve) and initial semi-major axis (panel). The error bars are generally larger for the points at small semi-major axis, implying that the statistics in these bins are dominated by a few comets. 
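\n\\par\nThe conversion from simulated appearances to the expected visible counts shown in Figures \\ref{Rvisible} through \\ref{avisible} uses only Equation \\eqref{mag}, Equation \\eqref{rR}, and the weight $W(R)$. The short Python snippet below (ours, purely for illustration) makes this weighting explicit under the assumptions already stated in the text: $A_g = 0.04$ and a limiting $r$-band magnitude $m_{\\rm lim} = 24.5$.\n\\begin{verbatim}\nimport numpy as np\n\nA_G, M_LIM = 0.04, 24.5   # assumed albedo and limiting r-band magnitude\n\ndef apparent_mag(r_km, R_au, albedo=A_G):\n    # Eq. (mag) of the text: bare-nucleus magnitude, zero phase angle,\n    # Sun-comet distance taken equal to Earth-comet distance\n    return (24.28 - 2.5*np.log10(albedo\/0.04)\n            - 5*np.log10(r_km) + 10*np.log10(R_au\/5.0))\n\ndef r_min(R_au, m_lim=M_LIM):\n    # Eq. (rR) of the text: smallest nucleus radius (km) visible at distance R\n    return 0.903 * 10**(0.2*(24.5 - m_lim)) * (R_au\/5.0)**2\n\ndef weight(R_au):\n    # fraction of the assumed size distribution (dN\/dr ~ r^-3.76) visible at R,\n    # normalized to unity at R = 5 AU\n    return (5.0\/R_au)**5.52\n\nprint(weight(10.0))   # an appearance at 10 AU carries weight ~0.02\n\\end{verbatim}\n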
\n\\par\n \\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{inversea.pdf}\n\\caption{Number of NIC appearances per linear interval in inverse semi-major axis $x = 1\/a$ for different values of perihelion distance (different colors) and different values of $a_i$ (different panels). Each panel assumes that the Oort cloud contains $10^{11}$ comets with $r > 1$ km at the specified value of $a_i$. }\n\\label{inversea}\n\\vspace{-.05cm}\n\\end{figure} \n\nWe find it more illuminating to make this plot in the coordinate $x = 1\/a$, which is proportional to the energy. Doing so allows us to test the predictions of random walk models such as those in \\citet{Yabushita79} and \\citet{Everhart76}. In their simplest form, one can imagine a comet starting with energy $E = -\\epsilon$. Every perihelion passage, it takes a step of size $\\epsilon$ towards higher or lower energy. It is removed if $E = 0$ (it becomes unbound), or if $E < E_{sp}$, where $E_{sp}$ is the critical energy level to be re-classified as a short-period comet. We ignore the possibility that a short-period orbit could be perturbed back to the long-period regime. Comets are injected near $x = 0$. In the limit that $-E_{sp}\/\\epsilon \\gg 1$, one can show that a steady-state distribution of comet energies normalized by the rate of perihelion passage is given by a linear function of the form\n\\begin{equation}\n\\label{NE}\nN(E) = k (E - E_{sp}),\n\\end{equation}\nwhere $k$ is a constant depending on the comet injection rate and the size of the kick. \n\\par \nWe see some support for Equation \\eqref{NE} in Figure \\ref{inversea}. We have only considered comets with $a_c > 1,\\!000$ AU ($x < 0.001$ AU$^{-1}$). This is a small enough range that we would expect the curves to be nearly flat if Equation \\eqref{NE} were correct (since $E_{\\rm sp}$ is much more negative than the energies shown in Figure \\ref{inversea}). The curves are generally flat for perihelion distances greater than 5 AU and $a_c > 1,\\!000$ AU. For comets with $a_c < 1,\\!000$ AU, we find prograde fractions of $0.42 \\pm 0.15, 0.26 \\pm 0.13, 0.32 \\pm 0.09$, and $0.40 \\pm 0.09$. These data are consistent with the random walk model.\n\\par\n While the simulation data agree with the random walk model, they contradict the observations. There is only a slight preference in the observational data for retrograde comets with high perihelion. 64 out of 110 comets (58\\%) with period greater than 200 years and perihelion greater than 5 AU in the database at \\url{http:\/\/ssd.jpl.nasa.gov\/dat\/ELEMENTS.COMET} are retrograde.\n\\subsection{Size Distribution}\nA bigger telescope enables us to see rare large comets because it can search more volume. It is impossible to say exactly what size distribution of comets to expect in the observed sample; however, we can make an estimate based on extrapolation of the size distribution from \\citet{Fernandez12}. It is instructive to first consider the zero-planet model with a fixed power-law for the size distribution. \n \\begin{figure}\n\\centering\n\\includegraphics[width=.5\\textwidth]{sizePlot.pdf}\n\\caption{Number of predicted detections of bodies at heliocentric distances beyond 5 AU in the zero-planet model, assuming a fixed power-law size distribution.}\n\\label{sizePlot}\n\\vspace{-.05cm}\n\\end{figure} \n\nWhen the number of clusters is greater than $1$ but $\\forall i \\in M, m_i = 1$, our problem reduces to the well-studied ``concurrent open shop'' problem.\n\nUsing Graham et al.'s taxonomy, the concurrent open shop problem is written as $PD||\\sum w_j C_j$.
Three groups \\cite{Chen2000, Garg2007, llp} independently discovered an LP-based 2-approximation for $PD||\\sum w_j C_j$ using the work of Queyranne \\cite{Queyranne1993}. The linear program in question has an exponential number of constraints, but can still be solved in polynomial time with a variant of the Ellipsoid method. Our ``strong'' algorithm for concurrent cluster scheduling refines the techniques contained therein, as well as those of Schulz \\cite{Schulz1996, Schulz2012} (see Section \\ref{sec:lpAlg}).\n\nMastrolilli et al. \\cite{mqssu} developed a primal-dual algorithm for $PD || \\sum w_j C_j$ that does not use LP solvers. ``MUSSQ''\\footnote{A permutation of the authors' names: Mastrolilli, Queyranne, Schulz, Svensson, and Uhan.} is significant for both its speed and the strength of its performance guarantee: it achieves an approximation ratio of 2 in only $O(n^2 + nm)$ time. Although MUSSQ does not require an LP solver, its proof of correctness is based on the fact that it finds a feasible solution to the dual of a particular linear program. Our ``fast'' algorithm for concurrent cluster scheduling uses MUSSQ as a subroutine (see Section \\ref{sec:TSPT}).\n\nHung, Golubchik, and Yu \\cite{HGY} presented a framework designed to improve scheduling across geographically distributed data centers. The scheduling framework had a centralized scheduler (which determined a job ordering) and local dispatchers which carried out a schedule consistent with the controller's job ordering. Hung et al. proposed a particular algorithm for the controller called ``SWAG.'' SWAG performed well in a wide variety of simulations where each data center was assumed to have the same number of identical parallel machines. We adopt a framework similar to that of Hung et al., but we show in Section \\ref{subsec:swagDegenerate} that SWAG has no constant-factor performance guarantee.\n\n\\subsection{Paper Outline \\& Algorithmic Results}\\label{subsec:outlineAndResults}\n\nAlthough only one of our algorithms requires \\textit{solving} a linear program, both algorithms use the same linear program in their proofs of correctness; we introduce this linear program in Section \\ref{sec:introduceLP} before discussing either algorithm. Section \\ref{sec:listSched} establishes how an ordering of jobs can be processed to completely specify a schedule. This is important because the complex work in both of our algorithms is to generate an ordering of jobs for each cluster.\n\nSection \\ref{sec:lpAlg} introduces our ``strong'' algorithm: CC-LP. CC-LP can be applied to any instance of concurrent cluster scheduling, including those with non-zero release times $r_{ji}$. A key to CC-LP's strong performance guarantees lies in the fact that it allows different permutations of subjobs for different clusters. When additional structure is imposed on the problem (while still maintaining a generalization of concurrent open shop), CC-LP becomes a 2-approximation. This is significant because it is NP-Hard to approximate concurrent open shop (and by extension, our problem) with ratio $2-\\epsilon$ for any $\\epsilon > 0$ \\cite{nphard2}.\n\nOur combinatorial algorithm (``CC-TSPT'') is presented in Section \\ref{sec:TSPT}. 
The algorithm is fast, provably accurate, and has the interesting property that it can schedule all clusters using the same permutation of jobs.\\footnote{We call such schedules ``single-$\\sigma$ schedules.'' As we will see later on, CC-TSPT serves as a constructive proof of existence of near-optimal single-$\\sigma$ schedules for all instances of $CC||\\sum w_j C_j$, \\textit{including} those instances for which single-$\\sigma$ schedules are strictly sub-optimal. This is addressed in Section \\ref{sec:discAndConc}.} After considering CC-TSPT in the general case, we show how fine-grained approximation ratios can be obtained in the ``fully parallelizable'' setting of Zhang et al. \\cite{zwl}. We conclude with an extension of CC-TSPT that maintains performance guarantees while offering improved empirical performance.\n\nThe following table summarizes our results for approximation ratios. For compactness, condition $Id$ refers to identical machines (i.e. $v_{\\ell i}$ constant over $\\ell$), condition $A$ refers to $r_{ji} \\equiv 0$, and condition $B$ refers to $p_{jit} \\text{ constant over } t \\in T_{ji}$.\n\\begin{center}\n\\begin{tabular}{l| cccccc}\n\\hline\n \t\t& $(Id,A,B)$ & $(Id, \\neg A, B)$ & $(Id,A,\\neg B)$ & $(Id,\\neg A, \\neg B)$ & $(\\neg Id, A)$ & $(\\neg Id, \\neg A)$ \\\\ \\hline\nCC-LP\t&\t2\t & 3 & 3 & 4 & $2+R$ & $3+R$ \\\\\nCC-TSPT & 3 & - & 3 & - & $2+R$ & - \\\\ \\hline\n\\end{tabular}\n\\end{center}\nThe term $R$ is the maximum over $i$ of $R_i$, where $R_i$ is the ratio of the fastest machine speed to the \\textit{average} machine speed at cluster $i$.\n\nThe most surprising aspect of these results is that our scheduling algorithms are remarkably simple. The first algorithm solves an LP,\nand then the scheduling can be done easily on each cluster. The second algorithm is again a rather surprisingly simple reduction to the\ncase of one machine per cluster (the well-understood concurrent open shop problem) and yields a simple combinatorial algorithm. The proof\nof the approximation guarantee is somewhat involved, however.\n\nIn addition to algorithmic results, we demonstrate how our problem subsumes that of minimizing total weighted lateness on a bank of identical parallel machines (see Section \\ref{sec:relationshipsBetweenProbs}). Section \\ref{sec:discAndConc} provides additional discussion and highlights our more novel technical contributions.\n\n\n\\section{The Core Linear Program }\\label{sec:introduceLP}\n\n\n\nOur linear program has an unusual form. Rather than introduce it immediately, we conduct a brief review of prior work on similar LPs. All the LPs we discuss in this paper have objective function $\\sum w_j C_j$, where $C_j$ is a decision variable corresponding to the completion time of job $j$, and $w_j$ is a weight associated with job $j$. \n\n\\textit{For the following discussion only, we adopt the notation in which job $j$ has processing time $p_j$. In addition, if multiple machine problems are discussed, we will say that there are $\\mathsf{m}$ such machines (possibly with speeds $s_i, i \\in \\{1,\\ldots, \\mathsf{m}\\}$).} \n\nThe earliest appearance of a similar linear program comes from Queyranne \\cite{Queyranne1993}. In his paper, Queyranne presents an LP relaxation for sequencing $n$ jobs on a single machine where all constraints are of the form $\\sum_{j \\in S} p_j C_j \\geq \\frac{1}{2}\\left[\\left(\\sum_{j \\in S} p_j \\right)^2 + \\sum_{j \\in S} p_j^2\\right]$ where $S$ is an arbitrary subset of jobs. 
Once a set of optimal $\\{C_j^\\star\\}$ is found, the jobs are scheduled in increasing order of $\\{C_j^\\star\\}$. These results were primarily theoretical, as it was known at the time of his writing that sequencing $n$ jobs on a single machine to minimize $\\sum w_j C_j$ can be done optimally in $O(n \\log n)$ time.\n\nQueyranne's constraint set became particularly useful for problems with \\textit{coupling} across distinct machines (as occurs in concurrent open shop). Four separate groups \\cite{Chen2000,Garg2007,llp, mqssu} saw this and used the following LP in a 2-approximation for concurrent open shop scheduling.\n\\begin{equation}\n(\\text{LP0}) ~~ \\min \\sum_{j \\in N} w_j C_j ~~ \\text{s.t.} ~~ \\textstyle\\sum_{j \\in S} p_{ji} C_j \\geq \n\t\t\\frac{1}{2}\n\t\t\\left[ \n\t\t\t\\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2 + \\left(\\textstyle\\sum_{j \\in S} p_{ji}^2\\right) \n\t\t\\right] ~ \\forall ~ \\substack{S \\subseteq N \\\\ i \\in M }\\nonumber\n\\end{equation}\nIn view of its tremendous popularity, we sometimes refer to the linear program above as the \\textit{canonical relaxation} for concurrent open shop.\n\nAndreas Schulz's Ph.D. thesis developed Queyranne's constraint set in greater depth \\cite{Schulz1996}. As part of his thesis, Schulz considered scheduling $n$ jobs on $\\mathsf{m}$ identical parallel machines with constraints of the form $\\sum_{j \\in S} p_j C_j \\geq \\frac{1}{2\\mathsf{m}} \\left(\\sum_{j \\in S} p_j \\right)^2 + \\frac{1}{2}\\sum_{j \\in S} p_j^2$. In addition, Schulz showed that the constraints $\\sum_{j \\in S} p_j C_j \\geq \\left[2 \\sum_{i=1}^{\\mathsf{m}} s_i \\right]^{-1}\\left[\\left(\\sum_{j \\in S} p_j \\right)^2 + \\sum_{j \\in S} p_j^2\\right]$ are satisfied by any schedule of $n$ jobs on $\\mathsf{m}$ uniform machines. In 2012, Schulz refined the analysis for several of these problems \\cite{Schulz2012}. For constructing a schedule from the optimal $\\{C_j^\\star\\}$, Schulz considered scheduling jobs by increasing order of $\\{C_j^\\star\\}$, $\\{C_j^\\star - p_j\/2\\}$, and $\\{C_j^\\star - p_j\/(2\\mathsf{m})\\}$.\n\n\n\\subsection{Statement of LP1}\n\nThe model we consider allows for more fine-grained control of the job structure than is indicated by the LP relaxations above. Inevitably, this comes at some expense of simplicity in LP formulations. In an effort to simplify notation, we define the following constants, and give verbal interpretations for each. \n\\begin{equation}\n \\mu_{i} \\doteq \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} \\qquad q_{ji} \\doteq \\min{\\lbrace|T_{ji}|, m_i\\rbrace} \\qquad \\mu_{ji} \\doteq \\textstyle\\sum_{\\ell = 1}^{q_{ji}} v_{\\ell i} \\qquad p_{ji} \\doteq \\textstyle\\sum_{t \\in T_{ji}} p_{jit}\n\\end{equation}\nFrom these definitions, $\\mu_i$ is the processing power of cluster $i$. For subjob $(j,i)$, $q_{ji}$ is the maximum number of machines that could process the subjob, and $\\mu_{ji}$ is the maximum processing power that can be brought to bear on that subjob. Lastly, $p_{ji}$ is the total processing requirement of subjob $(j,i)$. 
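\nThese constants are inexpensive to compute directly from the instance data. The short Python sketch below is ours (it is not part of the original formulations above); it assumes a simple dictionary layout in which \\texttt{speeds[i]} lists the machine speeds of cluster $i$ in non-increasing order (so that the first entry is the fastest machine, as in the text) and \\texttt{tasks[j][i]} lists the task processing requirements of subjob $(j,i)$.\n\\begin{verbatim}\ndef lp1_constants(speeds, tasks):\n    # mu[i]       : total processing power of cluster i\n    # q[j][i]     : most machines that subjob (j,i) could occupy\n    # mu_ji[j][i] : power of the q_{ji} fastest machines of cluster i\n    # p[j][i]     : total processing requirement of subjob (j,i)\n    mu = {i: sum(v) for i, v in speeds.items()}\n    q, mu_ji, p = {}, {}, {}\n    for j, row in tasks.items():\n        q[j], mu_ji[j], p[j] = {}, {}, {}\n        for i, T in row.items():\n            q[j][i] = min(len(T), len(speeds[i]))\n            mu_ji[j][i] = sum(speeds[i][:q[j][i]])\n            p[j][i] = sum(T)\n    return mu, q, mu_ji, p\n\\end{verbatim}\n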
In these terms, the core linear program, LP1, is as follows.\n\\begin{align*}\n\t\\text{(LP1) } \\min &\\textstyle\\sum_{j \\in N} w_j C_j \\\\\n\ts.t.\\quad (1A) \\quad & \\textstyle\\sum_{j \\in S} p_{ji} C_j \n\t \\geq \\frac{1}{2} \n\t \\left[ \n\t \\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2\/\\mu_i\n + \\textstyle\\sum_{j \\in S} p_{ji}^2\/\\mu_{ji} \n \\right] \\qquad ~\\forall S \\subseteq N, i \\in M \\\\\n\t(1B) \\quad\t& C_j \\geq p_{jit}\/v_{1i} + r_{ji} \\qquad ~\\forall i \\in M,~ j \\in N,~ t \\in T_{ji}\\\\\n\t(1C) \\quad\t& C_j \\geq p_{ji}\/\\mu_{ji} + r_{ji} \\qquad ~\\forall j \\in N,~ i \\in M \t \n\\end{align*} \n\nConstraints ($1A$) are more carefully formulated versions of the polyhedral constraints introduced by Queyranne \\cite{Queyranne1993} and developed by Schulz \\cite{Schulz1996}. The use of the $\\mu_{ji}$ term is new and allows us to provide stronger performance guarantees for our framework where subjobs are composed of \\textit{sets} of tasks. As we will see, this term is one of the primary factors that allow us to parametrize results under varying machine speeds in terms of maximum to \\textit{average} machine speed, rather than maximum to \\textit{minimum} machine speed. Constraints ($1B$) and ($1C$) are simple lower bounds on job completion time. \n\n\nThe majority of this section is dedicated to proving that LP1 is a valid relaxation of $CC|r|\\sum w_jC_j$. Once this is established, we prove that LP1 can be solved in polynomial time by providing a separation oracle for use in the Ellipsoid method. Both of these proofs use techniques established in Schulz's Ph.D. thesis \\cite{Schulz1996}. \n\n\\subsection{Proof of LP1's Validity}\n\n\n\n\nThe lemmas below establish the basis for both of our algorithms. Lemma \\ref{lem:sumOfSquaresDiffSpeeds} generalizes an inequality used by Schulz \\cite{Schulz1996}. Lemma \\ref{lem:feasForLP1} relies on Lemma \\ref{lem:sumOfSquaresDiffSpeeds} and cites an inequality mentioned in the preceding section (and proven by Queyranne \\cite{Queyranne1993}). \n\\begin{lemma}\nLet $\\{a_1,\\ldots, a_z\\}$ be a set of non-negative real numbers. We assume that $k \\leq z$ of them are positive. Let $b_1 \\geq b_2 \\geq \\cdots \\geq b_z$ be positive real numbers. Then\n\\begin{center}\n$ \\sum_{i = 1}^z a_i^2 \/ b_i \\geq \\left(\\sum_{i = 1}^z a_i \\right)^2 \/ \\left(\\sum_{i = 1}^k b_i\\right)$.\n\\end{center}\n\\label{lem:sumOfSquaresDiffSpeeds}\n\\end{lemma}\n\\begin{proof}\\footnote{The proceedings version of this paper stated that the proof cites the AM-GM inequality and proceeds by induction from $z=k=2$. We have opted here to demonstrate a different (simpler) proof that we discovered only after the proceedings version was finalized.}\nWe only show the case where $k=z$. Define $\\mathbf{a} = [a_1,\\ldots,a_k] \\in \\mathbb{R}^k_+$, $\\mathbf{b} = [b_1, \\ldots, b_k ] \\in \\mathbb{R}^k_{++}$, and $\\mathbbm{1}$ as the vector of $k$ ones. Now, set $\\mathbf{u} = \\mathbf{a} \/ \\sqrt{\\mathbf{b}}$ and $\\mathbf{w} = \\sqrt{\\mathbf{b}}$ (element-wise), and note that $\\langle \\mathbf{a}, \\mathbbm{1} \\rangle = \\langle \\mathbf{u}, \\mathbf{w} \\rangle$. 
In these terms, it is clear that $(\\sum_{i=1}^k a_i)^2 = \\langle \\mathbf{u}, \\mathbf{w} \\rangle^2$.\n\nGiven this, one need only cite Cauchy-Schwarz (namely, $\\langle \\mathbf{u}, \\mathbf{w} \\rangle^2 \\leq \\langle \\mathbf{u}, \\mathbf{u} \\rangle \\cdot \\langle \\mathbf{w}, \\mathbf{w} \\rangle$) and plug in the definitions of $\\mathbf{u}$ and $\\mathbf{w}$ to see the desired result.\n\\end{proof}\n\n\n\\begin{lemma}[Validity Lemma]\nEvery feasible schedule for an instance $I$ of $CC|r|\\sum w_jC_j$ has completion times that define a feasible solution to LP1($I$). \\label{lem:feasForLP1}\n\\end{lemma}\n\\begin{proof}\nAs constraints ($1B$) and ($1C$) are clear lower bounds on job completion time, it suffices to show the validity of constraint ($1A$). Thus, let $S$ be a non-empty subset of $N$, and fix an arbitrary but feasible schedule ``$F$'' for $I$. \n\nDefine $C^{F}_{ji}$ as the completion time of subjob $(j,i)$ under schedule $F$. Similarly, define $C^{F}_{ji\\ell}$ as the first time at which tasks of subjob $(j,i)$ scheduled on machine $\\ell$ of cluster $i$ are finished. Lastly, define $p^{\\ell}_{ji}$ as the total processing requirement of job $j$ scheduled on machine $\\ell$ of cluster $i$. Note that by construction, we have $C^{F}_{ji} = \\max_{\\ell \\in \\{1,\\ldots,m_i\\}}{C^{F}_{ji\\ell}}$ and $C^F_j = \\max_{i \\in M}{C^F_{ji}}$. \nSince $p_{ji} = \\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji}$, we can rather innocuously write\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} = \\textstyle\\sum_{j \\in S}\\left[ \\textstyle\\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji} \\right] C^{F}_{ji} . \n\\end{equation} \nBut using $C^{F}_{ji} \\geq C^{F}_{ji\\ell}$, we can lower-bound $\\sum_{j \\in S} p_{ji} C^{F}_{ji}$. Namely,\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} \\geq \\textstyle\\sum_{j \\in S}\\textstyle\\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji} C^{F}_{ji\\ell} = \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i}\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell} \\label{eq:specificMachines}\n\\end{equation}\nThe next inequality uses a bound on $\\textstyle\\sum_{j \\in S}\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell}$ proven by Queyranne \\cite{Queyranne1993} for any subset $S$ of $N$ jobs with processing times $\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]$ to be scheduled on a single machine.\\footnote{Here, our machine is machine $\\ell$ on cluster $i$.}\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell} \\geq \\frac{1}{2} \\left[\\left(\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 + \\textstyle\\sum_{j \\in S} \\left(\n\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 \\right]\\label{eq:queyranne}\n\\end{equation}\nCombining inequalities \\eqref{eq:specificMachines} and \\eqref{eq:queyranne}, we have the following.\n\\begin{align}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} &\\geq \\frac{1}{2} \\textstyle\\sum_{\\ell=1}^{m_i} v_{\\ell i} \\left[\\left(\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 + \\textstyle\\sum_{j \\in S} \\left(\n\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 \\right] \\\\\n& \\geq \\frac{1}{2} \\left[\\textstyle\\sum_{\\ell = 1}^{m_i}\\left(\\sum_{j \\in S} p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} + \\sum_{j \\in S} \\sum_{\\ell = 1}^{m_i} \\left(p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} \\right] 
\\label{eq:differentBoundForLP1}\n\\end{align}\nNext, we apply Lemma \\ref{lem:sumOfSquaresDiffSpeeds} to the right-hand side of inequality \\eqref{eq:differentBoundForLP1} a total of $|S|+1$ times.\n\\begin{align}\n&\\textstyle\\sum_{\\ell = 1}^{m_i} \\left(\\textstyle\\sum_{j \\in S} p^\\ell_{j i}\\right)^2 \/v_{\\ell i} \\geq \\left(\\textstyle\\sum_{\\ell = 1}^{m_i}\\textstyle\\sum_{j \\in S} p^\\ell_{j i}\\right)^2\/ \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} = \\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2 \/ \\mu_i \\\\\n&\\textstyle\\sum_{\\ell = 1}^{m_i} \\left(p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} \\geq \\left(\\sum_{\\ell = 1}^{m_i}p^\\ell_{j i}\\right)^2 \/ \\textstyle\\sum_{\\ell = 1}^{q_{j i}} v_{\\ell i} = p_{j i}^2\/\\mu_{j i} ~~\\forall~ j \\in S\n\\end{align}\nCiting $C^{F}_{j} \\geq C^{F}_{ji}$, we arrive at the desired result.\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{j} \\geq \\frac{1}{2}\\left[\\left(\\textstyle\\sum_{j \\in S} p_{ji} \\right)^2\/\\mu_i + \\textstyle\\sum_{j \\in S}p_{ji}^2\/\\mu_{ji}\\right] \\qquad \\text{``constraint }(1A)\\text{''}\n\\end{equation}\n\n\\end{proof}\n\n\n\n\\subsection{Theoretical Complexity of LP1}\\label{subsec:thankGodPolyTime}\nAs the first of our two algorithms requires solving LP1 directly, we need to address the fact that LP1 has $m \\cdot (2^n - 1) + n$ constraints. \nLuckily, it is still possible to solve such linear programs in polynomial time with the Ellipsoid method; we introduce the following separation oracle for this purpose.\n\n\\begin{definition}[Oracle LP1]\nDefine the \\textit{violation}\n\\begin{equation}\nV(S,i) = \\frac{1}{2} \\left[ \n\t \\left(\\textstyle\\sum_{j \\in S} p_{j i}\\right)^2\/\\mu_i \n + \\textstyle\\sum_{j \\in S} p_{j i}^2\/\\mu_{j i} \n \\right] - \\textstyle\\sum_{j \\in S} p_{j i} C_{j}\n\\end{equation}\nLet $\\{C_j\\} \\in \\mathbb{R}^n$ be a \\textit{potentially} feasible solution to LP1. Let $\\sigma_i$ denote the ordering when jobs are sorted in increasing order of $C_j - p_{j i}\/(2\\mu_{ji})$. Find the most violated constraint in $(1A)$ for $i \\in M$ by searching over $V(S_i,i)$ for $S_i$ of the form $\\{\\sigma_i(1),\\ldots,\\sigma_i(j-1),\\sigma_i(j)\\},~ j \\in \\{1,\\ldots,n\\}$. If any of the maximal values $V(S_i^*,i)$ is positive, then return the corresponding $(S_i^*,i)$ as a violated constraint for ($1A$). Otherwise, check the remaining $n$ constraints $((1B)$ and $(1C))$ directly in linear time.\n\\end{definition} \n\nFor fixed $i$, Oracle-LP1 finds the subset of jobs that maximizes ``violation'' for cluster $i$. That is, Oracle-LP1 finds $S_i^*$ such that $V(S_i^*,i) = \\max_{S \\subset N} V(S,i)$. We prove the correctness of Oracle-LP1 by establishing a necessary and sufficient condition for a job $j$ to be in $S_i^*$.\n\n\\begin{lemma}\nFor $\\mathbb{P}_i(A) \\doteq \\textstyle\\sum_{j \\in A} p_{ji}$, we have $x \\in S_i^* \\Leftrightarrow$ $ C_x - p_{xi}\/(2\\mu_{xi}) \\leq \\mathbb{P}_i(S_i^*)\/\\mu_i$. \n\\label{lem:separation}\n\\end{lemma}\n\\begin{proof}\nFor given $S$ (not necessarily equal to $S_i^*$), it is useful to express $V(S,i)$ in terms of $V(S\\cup x, i)$ or $V(S\\setminus x, i)$ (depending on whether $x \\in S$ or $x \\in N \\setminus S$). Without loss of generality, we restrict our search to $S : x \\in S \\Rightarrow p_{xi} > 0$.\n\nSuppose $ x \\in S$. 
By writing $\\mathbb{P}_i(S) = \\mathbb{P}_i(S\\setminus x) + \\mathbb{P}_i(x)$, and similarly decomposing the sum $\\textstyle\\sum_{j \\in S} p_{j i}^2\/(2\\mu_{ji})$, one can show the following.\n\\begin{align}\nV(S, i) = & V(S\\setminus x, i) + p_{xi}\\left(\\frac{1}{2}\\left(\\frac{2\\mathbb{P}_i(S) - p_{xi}}{\\mu_i} + \\frac{p_{xi}}{\\mu_{xi}}\\right) - C_x \\right) \\label{eq:xInS}\n\\end{align}\nNow suppose $ x \\in N\\setminus S$. In the same strategy as above (this time writing $ \\mathbb{P}_i(S) = \\mathbb{P}_i(S\\cup x) - \\mathbb{P}_i(x)$), one can show that\n\\begin{align}\nV(S, i) =& V(S\\cup x, i) + p_{xi}\\left(C_x - \\frac{1}{2}\\left(\\frac{2\\mathbb{P}_i(S) + p_{xi}}{\\mu_i} + \\frac{p_{xi}}{\\mu_{xi}}\\right) \\right). \\label{eq:xNotInS}\n\\end{align}\nNote that Equations \\eqref{eq:xInS} and \\eqref{eq:xNotInS} hold for all $S$, including $S = S_i^*$. Turning our attention to $S_i^*$, we see that $x \\in S_i^*$ implies that the second term in Equation \\eqref{eq:xInS} is non-negative, i.e. \n\\begin{equation}\nC_x - p_{xi}\/(2\\mu_{xi}) \\leq \\left(2\\mathbb{P}_i(S_i^*) - p_{xi}\\right)\/(2\\mu_i) < \\mathbb{P}_i(S_i^*)\/\\mu_i.\n\\end{equation}\nSimilarly, $x \\in N \\setminus S_i^*$ implies the second term in Equation \\eqref{eq:xNotInS} is non-negative.\n\\begin{equation}\nC_x - p_{x i}\/(2\\mu_{x i}) \\geq \\left(2\\mathbb{P}_i(S_i^*) + p_{x i}\\right)\/(2\\mu_i) \\geq \\mathbb{P}_i(S_i^*)\/\\mu_i\n\\end{equation}\nIt follows that $x \\in S_i^*$ iff $C_x - p_{x i}\/(2\\mu_{x i}) < \\mathbb{P}_i(S_i^*)\/\\mu_i$.\n\\end{proof}\n\nGiven Lemma \\ref{lem:separation}, It is easy to verify that sorting jobs in increasing order of $C_x - p_{xi}\/(2\\mu_{xi})$ to define a permutation $\\sigma_i$ guarantees that $S_i^*$ is of the form $\\{\\sigma_i(1),\\ldots,\\sigma_i(j-1),\\sigma_i(j)\\}$ for some $j \\in N$. This implies that for fixed $i$, Oracle-LP1 finds $S_i^*$ in $O(n \\log(n))$ time. This procedure is executed once for each cluster, leaving the remaining $n$ constraints in $(1B)$ and $(1C)$ to be verified in linear time. Thus Oracle-LP1 runs in $O(mn\\log(n))$ time.\n\nBy the equivalence of separation and optimization, we have proven the following theorem:\n\\begin{theorem}\nLP1($I$) is a valid relaxation of $I \\in \\Omega_{CC}$, and is solvable in polynomial time. \\label{thm:LP1feasAndSolve}\n\\end{theorem}\n\nAs was explained in the beginning of this section, linear programs such as those in \\cite{Chen2000, Garg2007, llp, Queyranne1993, Schulz1996, Schulz2012} are processed with an appropriate sorting of the optimal decision variables $\\{C^\\star_j\\}$. It is important then to have bounds on job completion times for a particular ordering of jobs. We address this next in Section \\ref{sec:listSched}, and reserve our first algorithm for Section \\ref{sec:lpAlg}.\n\\section{List Scheduling from Permutations}\\label{sec:listSched}\n\n\n\n\n\n\n\n\nThe complex work in both of our proposed algorithms is to generate a \\textit{permutation} of jobs. The procedure below takes such a permutation and uses it to determine start times, end times, and machine assignments for every task of every subjob.\n\\vspace{1em}\n\n\\noindent \\textbf{List-LPT} : Given a single cluster with $m_i$ machines and a permutation of jobs $\\sigma$, introduce $\\text{List}(a,i) \\doteq (p_{ai1}, p_{ai2},\\ldots,p_{ai|T_{ai}|})$ as an ordered set of tasks belonging to subjob $(a,i)$, ordered by longest processing time first. 
Now define $\\text{List}(\\sigma) \\doteq \\text{List}(\\sigma(1),i) \\oplus \\text{List}(\\sigma(2),i) \\oplus \\cdots \\oplus \\text{List}(\\sigma(n),i)$, where $\\oplus$ is the concatenation operator. \n\nPlace the tasks of $\\text{List}(\\sigma)$ in order --- from the largest task of subjob $(\\sigma(1),i)$, to the smallest task of subjob $(\\sigma(n),i)$. When placing a particular task, assign it the machine and start time that result in the task being completed as early as possible (without moving any tasks which have already been placed). Insert idle time (on all $m_i$ machines) as necessary if this procedure would otherwise start a job before its release time.\n\\vspace{1em}\n\nThe following lemma is essential to bound the completion time of a set of jobs processed by List-LPT. The proof is adapted from Gonzalez et al. \\cite{Gonzalez1977}.\n\\begin{lemma}\nSuppose $n$ jobs are scheduled on cluster $i$ according to List-LPT($\\sigma$). Then for $ \\bar{v_i} \\doteq \\mu_i\/m_i$, the completion time of subjob $(\\sigma(j),i)$ $($denoted $C_{\\sigma(j)i}$ $)$ satisfies\n\\begin{align}\n&C_{\\sigma(j)i} \\leq \\max_{1\\leq k \\leq j}{r_{\\sigma(k)i}} + p_{\\sigma(j)i1}\/\\bar{v_i} + \\left(\\textstyle\\sum_{k=1}^{j} p_{\\sigma(k)i} - p_{\\sigma(j)i1}\\right)\/\\mu_i \\label{eq:generalGonzalezLemma}\n\\end{align} \\label{lem:CompTimesOnUniformMachines}\n\\end{lemma}\n\\begin{proof}\nFor now, assume all jobs are released at time zero. Let the last task of subjob $(\\sigma(j),i)$ to finish be denoted $t^*$. If $t^*$ is not the task in $T_{\\sigma(j)i}$ with least processing time, then construct a new set $T'_{\\sigma(j)i} = \\{ t : p_{\\sigma(j)it^*} \\leq p_{\\sigma(j)it} \\} \\subset T_{\\sigma(j)i}$. Because the tasks of subjob $(\\sigma(j),i)$ were scheduled by List-LPT (i.e. longest-processing-time-first), the sets of potential start times and machines for task $t^*$ (and hence the set of potential completion times for task $t^*$) are the same regardless of whether subjob $(\\sigma(j),i)$ consisted of tasks $T_{\\sigma(j)i}$ or the subset $T'_{\\sigma(j)i}$. Accordingly, reassign $T_{\\sigma(j)i} \\leftarrow T'_{\\sigma(j)i}$ without loss of generality.\n\nLet $D_{\\ell}^j$ denote the total demand for machine $\\ell$ (on cluster $i$) once all tasks of subjobs $(\\sigma(1),i)$ through $(\\sigma(j-1),i)$ and all tasks in the set $T_{\\sigma(j)i}\\setminus \\{t^*\\}$ are scheduled. Using the fact that $C_{\\sigma(j)i}v_{\\ell i} \\leq ({D}_{\\ell}^{j} + p_{\\sigma(j)i t^*}) \\forall \\ell \\in \\{1,\\ldots,m_i\\}$, sum the left- and right-hand sides over $\\ell$. This implies $C_{\\sigma(j)i}\\left( \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} \\right) \\leq ~ m_i p_{\\sigma(j) i t^*} + \\textstyle\\sum_{\\ell = 1}^{m_i} {D}_{\\ell}^{j}$. Dividing by the sum of machine speeds and using the definition of $\\mu_i$ yields\n\\begin{equation}\nC_{\\sigma(j)i} \n\t~ \\leq ~ m_i p_{\\sigma(j)i t^*}\/\\mu_i + \\textstyle\\sum_{\\ell = 1}^{m_i} {D}_{\\ell}^j \/\\mu_i ~ \\leq ~ p_{\\sigma(j)i 1}\/\\bar{v_i} + \\left(\\textstyle\\sum_{k = 1}^{j} p_{\\sigma(k)i} - p_{\\sigma(j)i1}\\right)\/\\mu_i \\label{eq:mainGonzalezLemma}\n\\end{equation}\nwhere we estimated $p_{\\sigma(j)i t^*}$ upward by $p_{\\sigma(j)i 1}$. Inequality \\eqref{eq:mainGonzalezLemma} completes our proof in the case when $r_{ji} \\equiv 0$. \n\nNow suppose that some $r_{ji} > 0$. 
We take our policy to the extreme and suppose that all machines are left idle until every one of jobs $\\sigma(1)$ through $\\sigma(j)$ are released; note that this occurs precisely at time $\\max_{1 \\leq k \\leq j} r_{\\sigma(k)i}$. It is clear that beyond this point in time, we are effectively in the case where all jobs are released at time zero, hence we can bound the remaining time to completion by the right hand side of Inequality \\ref{eq:mainGonzalezLemma}. As Inequality \\ref{eq:generalGonzalezLemma} simply adds these two terms, the result follows.\n\\end{proof}\n\nLemma \\ref{lem:CompTimesOnUniformMachines} is cited directly in the proof of Theorem \\ref{thm:uniformLP1} and Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT}. Lemma \\ref{lem:CompTimesOnUniformMachines} is used implicitly in the proofs of Theorems \\ref{thm:identLP}, \\ref{thm:identLP_2appxWithConstantTasks}, and \\ref{thm:tspt_unit_tasks}.\n\\section{An LP-based Algorithm}\\label{sec:lpAlg}\n\n\n\nIn this section we show how LP1 can be used to construct near optimal schedules for concurrent cluster scheduling both when $r_{ji} \\equiv 0$ and when some $r_{ji} > 0$. Although solving LP1 is somewhat involved, the algorithm itself is quite simple:\n\\vspace{0.5em}\n\n\\noindent \\textbf{Algorithm CC-LP} : Let $I = (T, r, w, v)$ denote an instance of $CC | r | \\sum w_j C_j$. Use the optimal solution $\\{C_j^\\star\\}$ of LP1($I$) to define $m$ permutations $\\{\\sigma_i : i \\in M\\}$ which sort jobs in increasing order of $C^\\star_j - p_{ji}\/(2\\mu_{ji})$. For each cluster $i$, execute List-LPT($\\sigma_i$).\n\\vspace{0.5em}\n\nEach theorem in this section can be characterized by how various assumptions help us cancel an additive term\\footnote{``$+p_{xit^*}$''; see associated proofs.} in an upper bound for the completion time of an arbitrary subjob $(x,i)$. Theorem \\ref{thm:uniformLP1} is the most general, while Theorem \\ref{thm:identLP_2appxWithConstantTasks} is perhaps the most surprising.\n\n\\subsection{CC-LP for Uniform Machines}\\label{sec:unifLP}\n\\begin{theorem}\nLet $\\hat{C}_j$ be the completion time of job $j$ using algorithm CC-LP, and let $R$ be as in Section \\ref{subsec:outlineAndResults}. If $r_{ji} \\equiv 0$, then $ \\textstyle\\sum_{j \\in N} w_j \\hat{C}_j \\leq \\left(2 + R\\right)OPT $. Otherwise, $ \\textstyle\\sum_{j \\in N} w_j \\hat{C}_j \\leq \\left(3 + R\\right)OPT$.\n\\label{thm:uniformLP1}\n\\end{theorem}\n\\begin{proof}\nFor $y \\in \\mathbb{R}$, define $y^+ = \\max\\{y,0\\}$. Now let $x \\in N$ be arbitrary, and let $i \\in M$ be such that $p_{xi} > 0$ (but otherwise arbitrary). Define $t^*$ as the last task of job $x$ to complete on cluster $i$, and let $j_i$ be such that $\\sigma_i(j_i) = x$. 
Lastly, denote the optimal LP solution $\\{C_j\\}$.\\footnote{We omit the customary $\\star$ to avoid clutter in notation.} Because $\\{C_j\\}$ is a feasible solution to LP1, constraint $(1A)$ implies the following (set $S_i = \\{\\sigma_i(1),\\ldots,\\sigma_i(j_i - 1),x\\}$)\n\\begin{align}\n\\frac{\\left( \\textstyle\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i} \\right)^2}{2\\mu_i}\n\t&\\leq \\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i}\\left(C_{\\sigma_i(k)} - \\frac{p_{\\sigma_i(k)i}}{2\\mu_{\\sigma_i(k)i}}\\right) \\leq \\left(C_{x} - \\frac{p_{xi}}{2\\mu_{xi}}\\right)\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i} \\label{eq:compTimeInUnifLP}\n\\end{align}\nwhich in turn implies $\\textstyle\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i}\/\\mu_i \\leq 2C_x - p_{xi}\/\\mu_{xi}$.\n\nIf all subjobs are released at time zero, then we can combine this with Lemma \\ref{lem:CompTimesOnUniformMachines} and the fact that $p_{xit^*} \\leq p_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit}$ to see the following (the transition from the first inequality to the second uses $C_x \\geq p_{xit^*}\/v_{1i}$ and $R_i = v_{1i}\/\\bar{v}_i$).\n\\begin{align}\n\\hat{C}_{xi} \n\t&\\leq 2C_x - \\frac{p_{xi}}{\\mu_{xi}} + \\frac{p_{xit^*}}{\\bar{v}_i} - \\frac{p_{xit^*}}{\\mu_i} \\leq \n\t\tC_x(2 + \\left[R_i(1 - 2\/m_i)\\right]^+) \\label{eq:generalCompTimeWithOUTReleaseUnif} \n\\end{align}\n\nWhen one or more subjobs are released after time zero, Lemma \\ref{lem:CompTimesOnUniformMachines} implies that it is sufficient to bound $\\displaystyle\\max_{1 \\leq k \\leq j_i}{\\left\\lbrace r_{\\sigma_i(k)i} \\right\\rbrace}$ by some constant multiple of $C_x$. Since $\\sigma_i$ is defined by increasing $L_{ji} \\doteq C_j - p_{ji}\/(2\\mu_{ji})$, $L_{\\sigma_i(a)i} \\leq L_{\\sigma_i(b)i}$ implies\n\\begin{align}\n&r_{\\sigma_i(a)i} + \\frac{p_{\\sigma_i(a)i}}{2\\mu_{\\sigma_i(a)i}} + \\frac{p_{\\sigma_i(b)i}}{2\\mu_{\\sigma_i(b)i}} \\leq C_{\\sigma_i(a)} - \\frac{p_{\\sigma_i(a)i}}{2\\mu_{\\sigma_i(a)i}} + \\frac{p_{\\sigma_i(b)i}}{2\\mu_{\\sigma_i(b)i}} \\leq C_{\\sigma_i(b)} ~\\forall~ a \\leq b\n\\end{align}\nand so $\\max_{1 \\leq k \\leq j_i}{\\left\\lbrace r_{\\sigma_i(k)i} \\right\\rbrace} + p_{xi}\/(2\\mu_{xi}) \\leq C_{x}$. As before, combine this with Lemma \\ref{lem:CompTimesOnUniformMachines} and the fact that $p_{xit^*} \\leq p_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit}$ to yield the following inequalities\n\\begin{align}\n\\hat{C}_{xi} \n\t&\\leq 3C_x - \\frac{3p_{xi}}{2\\mu_{xi}} + \\frac{p_{xit^*}}{\\bar{v}_i} - \\frac{p_{xit^*}}{\\mu_i} \\leq C_x(3 + \\left[R_i(1 - 5\/(2m_i))\\right]^+) \\label{eq:generalCompTimeWithReleaseUnif} \n\\end{align}\nwhich completes our proof.\n\\end{proof}\n\\subsection{CC-LP for Identical Machines}\\label{sec:identLP}\n\\begin{theorem}\nIf machines are of unit speed, then CC-LP yields an objective that is...\n\\begin{center}\n\\begin{tabular}{l | c c}\n\\hline\n & $r_{ji} \\equiv 0$ & some $r_{ji} > 0$ \\\\ \n \\hline\nsingle-task subjobs & $\\leq$ 2 $OPT$ & $\\leq$ 3 $OPT$ \\\\\nmulti-task subjobs & $\\leq$ 3 $OPT$ & $\\leq$ 4 $OPT$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{thm:identLP}\n\\end{theorem}\n\\begin{proof}\nDefine $[\\cdot]^+$, $x$, $C_x$, $\\hat{C}_x$, $i$, $\\sigma_i$, and $t^*$ as in Theorem \\ref{thm:uniformLP1}. 
When $r_{ji} \\equiv 0$, one need only give a more careful treatment of the first inequality in \\eqref{eq:generalCompTimeWithOUTReleaseUnif} (using $\\mu_{ji} = q_{ji}$).\n\\begin{align}\n\\hat{C}_{x,i} \n\t&\\leq 2C_x + p_{xit^*} - p_{xit^*}\/m_i - p_{xi}\/q_{xi} \n\t\\leq C_x(2 + \\left[1 - 1\/m_i -1\/q_{xi} \\right]^+) \\label{eq:GeneralIdentCompTimeBound}\n\\end{align}\nSimilarly, when some $r_{ji} > 0$, the first inequality in \\eqref{eq:generalCompTimeWithReleaseUnif} implies the following.\n\\begin{align}\n\\hat{C}_{x,i}\n\t&\\leq 3C_x + p_{xit^*} - p_{xit^*}\/m_i - 3p_{xi}\/(2q_{xi})\n\t\\leq C_x(3 + \\left[1 - 1\/m_i - 3\/(2q_{xi})\\right]^+) \\label{eq:forCstTimeThmWithRelease}\n\\end{align}\n\\end{proof}\nThe key in the refined analysis of Theorem \\ref{thm:identLP} lay in how $-p_{xi}\/q_{xi}$ is used to annihilate $+p_{xit^*}$. While $q_{xi} = 1$ (i.e. single-task subjobs) is sufficient to accomplish this, it is not strictly \\textit{necessary}. The theorem below shows that we can annihilate the $+p_{xit^*}$ term whenever all tasks of a given subjob are of the same length. Note that the tasks need not be \\textit{unit}, as the lengths of tasks across different subjobs can differ.\n\\begin{theorem}\nSuppose $v_{\\ell i} \\equiv 1$. If $p_{jit}$ is constant over $t \\in T_{ji}$ for all $j \\in N$ and $i \\in M$, then algorithm CC-LP is a 2-approximation when $r_{ji} \\equiv 0$, and a 3-approximation otherwise. \\label{thm:identLP_2appxWithConstantTasks}\n\\end{theorem}\n\\begin{proof}\nThe definition of $p_{xi}$ gives $p_{xi}\/q_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit} \/ q_{xi}$. Using the assumption that $p_{jit}$ is constant over $t \\in T_{ji}$, we see that $p_{xi}\/q_{xi} = (q_{xi} + |T_{xi}| - q_{xi})p_{xit^*} \/ q_{xi} $, where $|T_{xi} |\\geq q_{xi}$. Apply this to Inequality \\eqref{eq:GeneralIdentCompTimeBound} from the proof of Theorem \\ref{thm:identLP}; some algebra yields \n\\begin{align}\n\\hat{C}_{xi} \n\t\\leq& 2C_x - p_{xit^*}\/m_i - p_{xit^*}\\left(|T_{xi}| - q_{xi}\\right)\/q_{xi} \\leq 2C_x.\n\\end{align}\nThe case with some $r_{ji} > 0$ uses the same identity for $p_{xi}\/q_{xi}$.\n\\end{proof}\nSachdeva and Saket \\cite{nphard2} showed that it is NP-Hard to approximate $CC|m_i \\equiv 1|\\sum w_j C_j$ with a constant factor less than 2. \nTheorem \\ref{thm:identLP_2appxWithConstantTasks} is significant because it shows that CC-LP can attain the same guarantee for \\textit{arbitrary} $m_i$, provided $v_{\\ell i} \\equiv 1$ and $p_{jit}$ is constant over $t$.\n\n\\section{Combinatorial Algorithms}\\label{sec:TSPT}\n\n\n\n\n\n\nIn this section, we introduce an extremely fast combinatorial algorithm with performance guarantees similar to CC-LP for ``unstructured'' inputs (i.e. those for which some $v_{\\ell i} > 1$, or some $T_{ji}$ have $p_{jit}$ non-constant over $t$). \nWe call this algorithm \\textit{CC-TSPT}. \nCC-TSPT uses the MUSSQ algorithm for concurrent open shop (from \\cite{mqssu}) as a subroutine. 
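\nTo fix ideas before the formal development, the following is a minimal illustrative sketch of the whole pipeline in Python. The data layout and helper names are ours, MUSSQ (from \\cite{mqssu}) is treated as a black-box argument, and the transformation is only previewed here; it is defined precisely in Section \\ref{subsec:fastreduction}.\n\\begin{verbatim}\n# Illustrative sketch of the CC-TSPT pipeline (all names are ours).\n# p[j][i] : list of task lengths of subjob (j,i)\n# v[i]    : list of machine speeds in cluster i\n# w[j]    : weight of job j\n# mussq   : any routine mapping a concurrent open shop instance (x, w)\n#           to a permutation of jobs; list_lpt : any per-cluster list scheduler\n\ndef tspt(p, v):\n    # collapse each cluster to one machine: x[j][i] = sum(p[j][i]) \/ mu_i\n    mu = {i: sum(speeds) for i, speeds in v.items()}\n    return {j: {i: sum(tasks) \/ mu[i] for i, tasks in subjobs.items()}\n            for j, subjobs in p.items()}\n\ndef cc_tspt(p, v, w, mussq, list_lpt):\n    x = tspt(p, v)        # concurrent open shop image of the instance\n    sigma = mussq(x, w)   # permutation of jobs returned by MUSSQ\n    return {i: list_lpt(p, v, i, sigma) for i in v}  # schedule every cluster\n\\end{verbatim}\n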
As SWAG (from \\cite{HGY}) motivated development of CC-TSPT, we first address SWAG's worst-case performance.\n\n\\subsection{A Degenerate Case for SWAG}\\label{subsec:swagDegenerate}\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\t\\vspace{-0.45cm}\n \\centering\n \\begin{minipage}{0.50\\textwidth}\n\t\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\\Procedure{SWAG}{$N,M,p_{ji}$}\n\t\\State $J\\gets \\emptyset$\n\t\\State $q_i\\gets 0,\\forall i\\in M$\n\t\\While{$|J|\\not=|N|$}\n\t\\State mkspn$_j\\gets$ max$_{i\\in M}\\left(\\frac{q_i+p_{ji}}{m_i}\\right)$\n\t\\item[] \\qquad \\qquad $\\forall j\\in N\\setminus J$\n\t\\State nextJob $\\gets$ argmin$_{j \\in N \\setminus J}\\ $mkspn$_j$\n\t\\State $J.$append$($nextJob$)$\n\t\\State $q_i\\gets q_i+ p_{\\text{nextJob},i},\\ \\forall i\\in M$\n\t\\EndWhile\n\t\\State \\textbf{return} $J$\n\t\\EndProcedure\n\t\\end{algorithmic}\n\t\\end{algorithm}\n \\end{minipage}\n \\vspace{-0.45cm}\n\\end{wrapfigure}\nAs a prerequisite for addressing the worst-case performance of an existing algorithm, we provide pseudocode and an accompanying verbal description for SWAG.\n\nSWAG computes queue positions for every subjob of every job, supposing that each job was scheduled next. \nA job's potential makespan (``mkspn'') is the largest of the potential finish times of all of its subjobs (considering current queue lengths $q_i$ and each subjob's processing time $p_{ji}$). \nOnce potential makespans have been determined, the job with smallest potential makespan is selected for scheduling. \nAt this point, all queues are updated. \nBecause queues are updated, potential makespans will need to be re-calculated at the next iteration. \nIterations continue until the very last job is scheduled. Note that SWAG runs in $O(n^2m)$ time.\n\n\\begin{theorem}\nFor an instance $I$ of $PD || \\sum C_j$, let $SWAG(I)$ denote the objective function value of SWAG applied to $I$, and let $OPT(I)$ denote the objective function value of an optimal solution to $I$. \nThen for all $L \\geq 1$, there exists an $I \\in \\Omega_{PD || \\sum C_j}$ such that $SWAG(I) \/ OPT(I) > L$.\n\\label{thm:swagBad}\n\\end{theorem}\n\\begin{proof}\nLet $L \\in \\mathbb{N}^+$ be a fixed but arbitrary constant. \nConstruct a problem instance $I_L^m$ as follows: \n\n$N = N_1 \\cup N_2$ where $N_1$ is a set of $m$ jobs, and $N_2$ is a set of $L$ jobs. \nJob $j \\in N_1$ has processing time $p$ on cluster $j$ and zero on all other clusters. \nJob $j \\in N_2$ has processing time $p(1-\\epsilon)$ on all $m$ clusters. \n$\\epsilon$ is chosen so that $\\epsilon < 1\/L$\n(see Figure \\ref{fig:swag1}).\n\n\\begin{figure}[ht]\n\\includegraphics[width=\\linewidth]{swag.png}\n\\centering\n\\caption{At left, an example input for SWAG with $m=3$ and $L=2$. At right, SWAG's resulting schedule, and an alternative schedule.}\n\\label{fig:swag1}\n\\end{figure}\n\nIt is easy to verify that SWAG will generate a schedule where all jobs in $N_2$ precede all jobs in $N_1$ (due to the savings of $p \\epsilon$ for jobs in $N_2$). \nWe propose an \\textit{alternative} solution in which all jobs in $N_1$ precede all jobs in $N_2$. 
\nDenote the objective value for this alternative solution $ALT(I_L^m)$, noting $ALT(I_L^m) \\geq OPT(I_L^m)$.\n\nBy symmetry, and the fact that all clusters have a single machine, we can see that $SWAG(I_L^m)$ and $ALT(I_L^m)$ are given by the following\n\\begin{align}\nSWAG(I_L^m) &= p(1-\\epsilon)L(L+1)\/2 + p(1-\\epsilon)L m + p m \\\\\nALT(I_L^m) &= p(1-\\epsilon)L(L + 1)\/2 + pL + p m\n\\end{align}\nSince $L$ is fixed, we can take the limit with respect to $m$.\n\\begin{align}\n\\lim_{m \\rightarrow \\infty}{\\frac{SWAG(I_L^m)}{ALT(I_L^m)}} \n\t&= \\lim_{m \\rightarrow \\infty}{\\frac{p(1-\\epsilon)L m + p m}{p m}} = L(1-\\epsilon) + 1 > L\n\\end{align}\nThe above implies the existence of a sufficiently large number of clusters $\\overline{m}$, such that $m \\geq \\overline{m}$ implies $ SWAG(I_L^{m})\/OPT(I_L^{m}) > L $. This completes our proof.\n\\end{proof}\nTheorem \\ref{thm:swagBad} demonstrates that although SWAG performed well in simulations, it may not be reliable. \nThe rest of this section introduces an algorithm not only with superior runtime to SWAG (generating a permutation of jobs in $O(n^2 + nm)$ time, rather than $O(n^2m)$ time), but also a constant-factor performance guarantee.\n\n\\subsection{CC-TSPT : A Fast 2 + R Approximation}\\label{subsec:fastreduction}\nOur combinatorial algorithm for concurrent cluster scheduling exploits an elegant transformation to concurrent open shop. \nOnce we consider this simpler problem, it can be handled with MUSSQ \\cite{mqssu} and List-LPT. \nOur contributions are twofold: (1) we prove that this intuitive technique yields an approximation algorithm for a decidedly more general problem, and (2) we show that a \\textit{non-intuitive} modification can be made that maintains theoretical bounds while improving empirical performance. \nWe begin by defining our transformation.\n\n\\begin{definition}[The Total Scaled Processing Time (TSPT) Transformation]\nLet $\\Omega_{CC}$ be the set of all instances of $CC || \\sum w_j C_j$, \nand let $\\Omega_{PD}$ be the set of all instances of \n$PD || \\sum w_j C_j$. Note that $\\Omega_{PD} \\subset \\Omega_{CC}$.\nThen the Total Scaled Processing Time Transformation is a mapping\n\\begin{align*}\nTSPT: ~\\Omega_{CC} \\to \\Omega_{PD} \\quad \\text{ with } \\quad (T, v, w) &\\mapsto (X, w) ~:~ x_{ji} = \\textstyle\\sum_{t \\in T_{ji}} p_{jit} \/ \\mu_i\n\\end{align*}\ni.e., $x_{ji}$ is the total processing time required by subjob $(j,i)$, scaled by the sum of machine speeds at cluster $i$. \nThroughout this section, we will use $I = (T, v, w)$ to denote an arbitrary instance of $CC || \\sum w_j C_j$, and $I' = (X, w)$ as the image of $I$ under TSPT. \nFigure \\ref{fig:tspt} shows the result of TSPT applied to our baseline example. \n\\end{definition}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{tspt.png}\n\\caption{An instance $I$ of $CC||\\sum w_j C_j$, and its image $I' = TSPT(I)$. The schedules were constructed with List-LPT using the same permutation for $I$ and $I'$. }\n\\label{fig:tspt}\n\\end{figure}\n\nWe take the time to emphasize the simplicity of our reduction. Indeed, the TSPT transformation is perhaps the first thing one would think of given knowledge of the concurrent open shop problem. What is surprising is how one can attain constant-factor performance guarantees even after such a simple transformation.\n\n\\vspace{1em}\n\\noindent \\textbf{Algorithm CC-TSPT} : Execute MUSSQ on $I'= TSPT(I)$ to generate a permutation of jobs $\\sigma$. 
List schedule instance $I$ by \n$\\sigma$ on each cluster according to List-LPT.\n\\vspace{1em}\n\nTowards proving the approximation ratio for CC-TSPT, we will establish a critical inequality in Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT}. \nThe intuition behind Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} requires thinking of every job $j$ in $I$ as having a corresponding representation in $j'$ in $I'$. \nJob $j$ in $I$ will be scheduled in the $CC$ environment, while job $j'$ in $I'$ will be scheduled in the $PD$ environment. \nWe consider what results when the same permutation $\\sigma$ is used for scheduling in both environments. \n\nNow the definitions for the lemma: let $C^{CC}_{\\sigma(j)}$ be the completion time of job $\\sigma(j)$ resulting from List-LPT on an arbitrary permutation $\\sigma$. \nDefine $C^{CC\\star}_{\\sigma(j)}$ as the completion time of job $\\sigma(j)$ in the $CC$ environment in the optimal solution. \nLastly, define $C^{PD,I'}_{\\sigma(j')}$ as the completion time of job $\\sigma(j')$ in $I'$ when scheduling by List-LPT($\\sigma$) in the $PD$ environment.\n\n\\begin{lemma}\nFor $I' = TSPT(I)$, let $j'$ be the job in $I'$ corresponding to job $j$ in $I$. \nFor an arbitrary permutation of jobs $\\sigma$, we have $C^{CC}_{\\sigma(j)} \\leq C^{PD,I'}_{\\sigma(j')} + R\\cdot C^{CC\\star}_{\\sigma(j)}$. \\label{lem:TSPTboundInTermsOfPDandOPT}\n\\end{lemma}\n\\begin{proof}\nAfter list scheduling has been carried out in the $CC$ environment, we may determine $C^{CC}_{\\sigma(j)i}$ - the completion time of subjob $(\\sigma(j),i)$. \nWe can bound $C^{CC}_{\\sigma(j)i}$ using Lemma \\ref{lem:CompTimesOnUniformMachines} (which implies \\eqref{eqInLem:TSPTbound1}), and the serial-processing nature of the $PD$ environment (which implies \\eqref{eqInLem:TSPTbound2}).\n\\begin{align}\n& C^{CC}_{\\sigma(j)i} \\leq p_{\\sigma(j)i1}\\left(1\/\\bar{v} - 1\/\\mu_i\\right) + \\textstyle\\sum_{\\ell = 1}^{j} p_{\\sigma(\\ell)i}\/\\mu_i \\label{eqInLem:TSPTbound1} \\\\\n& \\textstyle\\sum_{\\ell = 1}^j p_{\\sigma(\\ell)i}\/\\mu_i \\leq C^{PD,I'}_{\\sigma(j')} \\quad \\forall ~i \\in M \\label{eqInLem:TSPTbound2}\n\\end{align}\nIf we relax the bound given in Inequality \\eqref{eqInLem:TSPTbound1} and combine it with Inequality \\eqref{eqInLem:TSPTbound2}, we see that $C^{CC}_{\\sigma(j)i} \\leq C^{PD,I'}_{\\sigma(j')} + p_{\\sigma(j)i1}\/\\bar{v}$. \nThe last step is to replace the final term with something more meaningful. Using $p_{\\sigma(j)1}\/\\bar{v} \\leq R \\cdot C^{CC\\star}_{\\sigma(j)}$ (which is immediate from the definition of $R$) the desired result follows.\n\\end{proof}\nWhile Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} is true for arbitrary $\\sigma$, now we consider $\\sigma = MUSSQ(X, w)$. \nThe proof of MUSSQ's correctness established the first inequality in the chain of inequalities below. \nThe second inequality can be seen by substituting $p_{ji} \/ \\mu_{i}$ for $x_{ji}$ in LP0($I'$) (this shows that the constraints in LP0($I'$) are weaker than those in LP1($I$)). 
\nThe third inequality follows from the Validity Lemma.\n\\begin{equation}\n\\textstyle\\sum_{j \\in N} w_{\\sigma(j)} C^{PD,I'}_{\\sigma(j)} \n\t\\leq 2 \\textstyle\\sum_{j \\in N} w_j C^{\\text{LP0}(I')}_j\n\t\\leq 2 \\textstyle\\sum_{j \\in N} w_j C^{\\text{LP1}(I)}_j\n\t\\leq 2 OPT(I) \\label{eq:tsptCore}\n\\end{equation}\nCombining Inequality \\eqref{eq:tsptCore} with Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} allows us to bound the objective in a way that does not make reference to $I'$.\n\\begin{equation}\n\\textstyle\\sum_{j \\in N} w_{\\sigma(j)}C^{CC}_{\\sigma(j)} \n\t\\leq \\textstyle\\sum_{j \\in N} w_{\\sigma(j)}\\left[C^{PD,I'}_{\\sigma(j)} + R\\cdot C^{CC\\star}_{\\sigma(j)}\\right] \\leq ~ 2 \\cdot OPT(I) + R \\cdot OPT(I) \\label{eq:dontReferenceIPrime}\n\\end{equation}\nInequality \\eqref{eq:dontReferenceIPrime} completes our proof of the following theorem.\n\\begin{theorem}\nAlgorithm CC-TSPT is a $2 + R$ approximation for $CC || \\sum w_j C_j$.\n\\label{thm:algCCTspt}\n\\end{theorem}\n\n\\subsection{CC-TSPT with Unit Tasks and Identical Machines}\\label{subsec:tsptOnUnitTasks}\nConsider concurrent cluster scheduling with $v_{\\ell i} = p_{jit} = 1$ (i.e., all processing times are unit, although the size of the collections $T_{ji}$ are unrestricted). In keeping with the work of Zhang, Wu, and Li \\cite{zwl} (who studied this problem in the single-cluster case), we call instances with these parameters ``fully parallelizable,'' and write $\\beta = fps$ for Graham's $\\alpha|\\beta|\\gamma$ taxonomy.\n\nZhang et al. showed that scheduling jobs greedily by ``Largest Ratio First'' (decreasing $w_j \/ p_{j}$) results in a 2-approximation, where 2 is a tight bound. \nThis comes as something of a surprise since the Largest Ratio First policy is \\textit{optimal} for $1||\\sum w_j C_j~$- which their problem very closely resembles. \nWe now formalize the extent to which $P|fps|\\sum w_j C_j$ resembles $1||\\sum w_j C_j~$: define the \\textit{time resolution} of an instance $I$ of $CC |fps| \\sum w_jC_j$ as $ \\rho_I = \\min_{j \\in N, i \\in M}{\\big\\lceil{p_{ji}\/m_i}\\big\\rceil}$. \nIndeed, one can show that as the time resolution increases, the performance guarantee for LRF on $P | fps | \\sum w_j C_j$ approaches that of LRF on $1||\\sum w_j C_j$. \nWe prove the analogous result for our problem. \n\\begin{theorem}\nCC-TSPT for $CC |fps| \\sum w_jC_j$ is a $(2 + 1\/\\rho_I)-$approximation.\n\\label{thm:tspt_unit_tasks}\n\\end{theorem}\n\\begin{proof}\nApplying techniques from the proof of Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} under the hypothesis of this theorem, we have $C^{CC}_{\\sigma(j), i} \\leq C^{PD,I'}_{\\sigma(j)} + 1$. \nNext, use the fact that for all $j \\in N$, $C^{CC,OPT}_{\\sigma(j)} \\geq \\rho_I$ by the definition of $\\rho_I$. These facts together imply\n$C^{CC}_{\\sigma(j), i} \\leq C^{PD,I'}_{\\sigma(j)} + C^{CC,OPT} \/ \\rho_I$. Thus\n\\begin{align}\n\\textstyle\\sum_{j \\in N} w_j C^{CC}_{\\sigma(j)} \n\t&\\leq \\textstyle\\sum_{j \\in N} w_j \\left[C^{PD,I'}_{\\sigma(j)} + C^{CC,OPT} \/ \\rho_I\\right] \\leq 2 \\cdot OPT + OPT \/ \\rho_I.\n\\end{align}\n\\end{proof}\n\n\\subsection{CC-ATSPT : Augmenting the LP Relaxation}\nThe proof of Theorem \\ref{thm:algCCTspt} appeals to a trivial lower bound on $C^{CC\\star}_{\\sigma(j)}$, namely $p_{\\sigma(j)1}\/\\bar{v} \\leq R \\cdot C^{CC\\star}_{\\sigma(j)}$. 
We attain constant-factor performance guarantees in spite of this, but it is natural to wonder how the \\textit{need} for such a bound might come hand-in-hand with empirical weaknesses. Indeed, TSPT can make subjobs consisting of many small tasks look the same as subjobs consisting of a single very long task.\nAdditionally, a cluster hosting a subjob with a single extremely long task might be identified as a bottleneck by MUSSQ, even if that cluster has more machines than it does tasks to process.\n\nWe would like to mitigate these issues by introducing the simple lower bounds on $C_j$ as seen in constraints $(1B)$ and $(1C)$. This is complicated by the fact that MUSSQ's proof of correctness only allows constraints of the form in $(1A)$. For $I \\in \\Omega_{PD}$ this is without loss of generality, since $|S| = 1$ in LP0 implies $C_j \\geq p_{ji}$, but since we apply LP0 to $I' = TSPT(I)$, $C_j \\geq x_{ji}$ is equivalent to $C_j \\geq p_{ji}\/\\mu_i$ (a much weaker bound than we desire). \n\nNevertheless, we can bypass this issue by introducing additional clusters and appropriately defined subjobs. We formalize this with the ``Augmented Total Scaled Processing Time'' (ATSPT) transformation. \nConceptually, ATSPT creates $n$ ``imaginary clusters'', where each imaginary cluster has nonzero processing time for exactly one job.\n\\begin{definition}[The Augmented TSPT Transformation]\nLet $\\Omega_{CC}$ and $\\Omega_{PD}$ be as in the definition for TSPT. Then the Augmented TSPT Transformation is likewise a mapping\n\\begin{align*}\nATSPT: ~\\Omega_{CC} \\to \\Omega_{PD} \\quad \\text{ with } \\quad (T, v, w) &\\mapsto (X, w) ~:~ X = \\big[\\begin{array}{c|c} X_{TSPT(I)} & D \\end{array}\\big].\n\\end{align*}\nWhere $D \\in \\mathbb{R}^{n \\times n}$ is a diagonal matrix with $d_{jj}$ as any valid lower bound on the completion time of job $j$ (such as the right hand sides of constraints ($1B$) and ($1C$) of LP1).\n\\end{definition}\nGiven that $d_{jj}$ is a valid lower bound on the completion time of job $j$, it is easy to verify that for $I' = ATSPT(I)$, LP1($I'$) is a valid relaxation of $I$. \nBecause MUSSQ returns a permutation of jobs for use in list scheduling by List-LPT, these ``imaginary clusters'' needn't be accounted for beyond the computations in MUSSQ.\n\n\\section{A Reduction for Minimizing Total Weighted Lateness on Identical Parallel Machines }\\label{sec:relationshipsBetweenProbs}\nThe problem of minimizing total weighted lateness on a bank of identical parallel machines is typically denoted $P || \\sum w_jL_j$, where the lateness of a job with deadline $d_j$ is $L_j \\doteq \\max{\\{C_j - d_j, 0\\}}$. The reduction we offer below shows that $P || \\sum w_j L_j$ can be stated in terms of $CC || \\sum w_jC_j$ \\textit{at optimality}. Thus while a $\\Delta$ approximation to $CC || \\sum w_jC_j$ does not imply a $\\Delta$ approximation to $P || \\sum w_j L_j$, the reduction below nevertheless provides new insights on the structure of $P || \\sum w_j L_j$.\n\n\\begin{definition}[Total Weighted Lateness Reduction]\nLet $I = (p, d, w, m)$ denote an instance of $P || \\sum w_j L_j$. \n$p$ is the set of processing times, $d$ is the set of deadlines, $w$ is the set of weights, \nand $m$ is the number of identical parallel machines. \nGiven these inputs, we transform $I \\in \\Omega_{P || \\sum w_j L_j}$ \nto $I' \\in \\Omega_{CC}$ in the following way.\n\nCreate a total of $n + 1$ clusters. Cluster 0 has $m$ machines. 
Job $j$ has processing time $p_j$ on this cluster, and $|T_{j0}| = 1$. Clusters 1 through $n$ each consist of a single machine. Job $j$ has processing time $d_j$ on cluster $j$, and zero on all clusters other than cluster 0 and cluster $j$. Denote this problem $I'$.\n\\end{definition}\nWe refer the reader to Figure \\ref{fig:probstm3and4} for an example output of this reduction.\n\\begin{theorem}\nLet $I$ be an instance of $P || \\textstyle\\sum w_j L_j$. Let $I'$ be an instance of $CC|| \\sum w_j C_j$ resulting from the transformation described above. Any list schedule $\\sigma$ that is optimal for $I'$ is also optimal for $I$.\n\\end{theorem}\n\\begin{proof}\nIf we restrict the solution space of $I'$ to single permutations (which we may do without loss of generality), then any schedule $\\sigma$ for $I$ or $I'$ produces the same value of $\\sum_{j \\in N} w_j(C_j - d_j)^+$ for $I$ and $I'$.\nThe additional clusters we added for $I'$ ensure that $C_j \\geq d_j$. Given this, the objective for $I$ can be written as $\\sum_{j \\in N} w_j d_j + w_j(C_j - d_j)^+$. Because $w_j d_j$ is a constant, any permutation to solve $I'$ optimally also solves $\\sum_{j \\in N} w_j (C_j - d_j)^+$ optimally. Since $\\sum_{j \\in N} w_j (C_j - d_j)^+ = \\sum_{j \\in N} w_j L_j$, we have the desired result.\n\\end{proof}\n\\section{Closing Remarks}\\label{sec:discAndConc}\nWe now take a moment to address a subtle issue in the concurrent cluster problem: what price do we pay for using the same permutation on all clusters (i.e. single-$\\sigma$ schedules)? For concurrent open shop, it has been shown (\\cite{Sris1993, mqssu}) that single-$\\sigma$ schedules may be assumed without loss of optimality. As is shown in Figure \\ref{fig:singleVsMultiPerm}, this does \\textit{not} hold for concurrent cluster scheduling in the general case. In fact, that is precisely why the strong performance guarantees for algorithm CC-LP rely on clusters having possibly unique permutations.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{singleVsMultiPerm.png}\n\\caption{An instance of $CC||\\sum C_j$ (i.e. $w_j \\equiv 1$) for which there does not exist a single-$\\sigma$ schedule which attains the optimal objective value. In the single-$\\sigma$ case, one of the jobs necessarily becomes delayed by one time unit compared to the multi-$\\sigma$ case. As a result, we see a 20\\% optimality gap even when $v_{\\ell i } \\equiv 1$.}\\label{fig:singleVsMultiPerm}\n\\centering\n\\end{figure}\n\nOur more novel contributions came in our analysis for CC-TSPT and CC-ATSPT. First, we could not rely on the processing time of the last task for a job to be bounded above by the job's completion time variable $C_j$ in LP0($I'$), and so we appealed to a lower bound on $C_j$ that was not stated in the LP itself. The need to incorporate this second bound is critical in realizing the strength of algorithm CC-TSPT, and uncommon in LP rounding schemes. Second, CC-ATSPT is novel in that it introduces constraints that would be redundant for LP0($I$) when $I \\in \\Omega_{PD}$, but become relevant when viewing $LP0(I')$ as a relaxation for $I \\in \\Omega_{CC}$. This approach has potential for more broad applications since it represented effective use of a limited constraint set supported by a known primal-dual algorithm.\n\nWe now take a moment to state some open problems in this area. 
One topic of ongoing research is developing a factor 2 purely combinatorial algorithm for the special case of concurrent cluster scheduling considered in Theorem \\ref{thm:identLP_2appxWithConstantTasks}. In addition, it would be of broad interest to determine the worst-case loss to optimality incurred by assuming single-permutation schedules for $CC|v\\equiv 1|\\sum w_j C_j$. The simple example above shows that an optimal single-$\\sigma$ schedule can have objective 1.2 times the globally optimal objective. Meanwhile, Theorem \\ref{thm:algCCTspt} shows that there always exists a single-$\\sigma$ schedule with objective no more than 3 times the globally optimal objective. Thus, we know that the worst-case performance ratio is in the interval $[1.2,3]$, but we do not know its precise value. As a matter outside of scheduling theory, it would be valuable to survey primal-dual algorithms with roots in LP relaxations to determine which have constraint sets that are amenable to implicit modification, as in the fashion of CC-ATSPT.\n\n\\subparagraph*{Acknowledgments.}\n\nSpecial thanks to Andreas Schulz for sharing some of his recent work with us \\cite{Schulz2012}. His thorough analysis of a linear program for $P||\\sum w_j C_j$ drives the LP-based results in this paper. Thanks also to Chien-Chung Hung and Leana Golubchik for sharing \\cite{HGY} while it was under review, and to Ioana Bercea and Manish Purohit for their insights on SWAG's performance. Lastly, our sincere thanks to William Gasarch for organizing the REU which led to this work, and to the 2015 CAAR-REU cohort for making the experience an unforgettable one; in the words of Rick Sanchez, \\textit{wubalubadubdub!}\n\n\n\\section{Introduction}\\label{sec:intro}\n\nIt is becoming increasingly impractical to store full copies of large datasets on more than one data center \\cite{Hajjat2012}. As a result, the data for a single job may be located not on multiple machines, but on multiple \\textit{clusters} of machines. \nTo maintain fast response-times and avoid excessive network traffic, it is advantageous to perform computation for such jobs in a completely distributed fashion \\cite{HGY}.\nIn addition, commercial platforms such as AWS Lambda and Microsoft's Azure Service Fabric are demonstrating a trend of centralized cloud computing frameworks in which the user manages neither data flow nor server allocation \\cite{AWSLambda, Azure}. \nIn view of these converging issues, the following scheduling problem arises:\n\n\\textit{If computation is done locally to avoid excessive network traffic, how can individual clusters on the broader grid coordinate schedules for maximum throughput? }\n\nThis was precisely the motivation for Hung, Golubchik, and Yu in their 2015 ACM Symposium on Cloud Computing paper \\cite{HGY}. \nHung et al. modeled each cluster as having an arbitrary number of identical parallel machines, and chose an objective of average job completion time. \nAs such a problem generalizes the NP-Hard concurrent open shop problem, they proposed a heuristic approach. \nTheir heuristic (called ``SWAG'') runs in $O(n^2m)$ time and performed well on a variety of data sets. Unfortunately, SWAG offers poor worst-case performance, as we show in Section \\ref{sec:TSPT}.\n\nOur contributions to this problem are to extend the model considered by Hung et al. and to introduce the first constant-factor approximation algorithms for this general problem. 
\nOur extensions of Hung et al.'s model are (1) to allow different machines within the same cluster to operate at different speeds, (2) to incorporate pre-specified ``release times'' (times before which a subjob cannot be processed), and (3) to support \\textit{weighted} average job completion time.\nWe present two algorithms for the resulting problem.\nOur combinatorial algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster, where the problem can be approximated in $O(n^2 + nm)$ time. We also present an LP-rounding approach with strong performance guarantees. For example, we obtain a 2-approximation when machines are of unit speed and subjobs are divided into equally sized (but not necessarily \\textit{unit}) tasks.\n\n\\subsection{Formal Problem Statement}\\label{subsec:ps}\n\\begin{definition}[Concurrent Cluster Scheduling]{\\color{white} . } \\hfill \n\\vspace{2pt}\n\\begin{itemize}\n\\item There is a set $M$ of $m$ clusters, and a set $N$ of $n$ jobs. \nFor each job $j \\in N$, there is a set of $m$ ``subjobs'' (one for each cluster).\n\n\\item Cluster $i \\in M$ has $m_i$ parallel machines, and machine $\\ell$ in cluster $i$ has speed $v_{\\ell i}$. \nWithout loss of generality, assume $v_{\\ell i}$ is decreasing in $\\ell$.\n\\footnote{Where we write ``decreasing'', we mean ``non-increasing.'' Where we write ``increasing'', we mean ``non-decreasing''.} \n\n\\item The $i^{\\text{th}}$ subjob for job $j$ is specified by a set of tasks to be performed by machines in cluster $i$; denote this set of tasks $T_{ji}$. \nFor each task $t \\in T_{ji}$, we have an associated processing time $p_{jit}$ (again w.l.o.g., assume $p_{jit}$ is decreasing in $t$). \nWe will frequently refer to ``the subjob of job $j$ at cluster $i$'' as ``subjob $(j,i)$.''\n\n\\item Different subjobs of the same job may be processed concurrently on different clusters. \n\\item Different tasks of the same subjob may be processed concurrently on different machines within the same cluster.\n\\item A subjob is complete when all of its tasks are complete, and a job is complete when all of its subjobs are complete. We denote a job's completion time by ``$C_j$''.\n\\item The objective is to minimize weighted average job completion time (job $j$ has weight $w_j$).\n\\item For the purposes of computing approximation ratios, it is equivalent to minimize $\\sum w_j C_j$. We work with this equivalent objective throughout this paper.\n\\end{itemize}\n\\end{definition}\n\nA machine is said to operate at \\textit{unit speed} if it can complete a task with processing requirement ``$p$'' in $p$ units of time. More generally, a machine with speed ``$v$'' ($v \\geq 1$) processes the same task in $p\/ v$ units of time. Machines are said to be \\textit{identical} if they are all of unit speed, and \\textit{uniform} if they differ only in speed.\n\nIn accordance with Graham et al.'s $\\alpha|\\beta|\\gamma$ taxonomy for scheduling problems \\cite{Graham1979}, we take $\\alpha = CC$ to refer to the concurrent cluster environment, and denote our problem by $CC|| \\sum w_jC_j$.\\footnote{ A problem $\\alpha|\\beta|\\gamma$ implies a particular environment $\\alpha$, objective function $\\gamma$, and optional constraints $\\beta$.} Optionally, we may associate a release time $r_{ji}$ to every subjob. 
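\nFor concreteness, the data of an instance can be bundled as follows. This is an illustrative sketch only; the Python container names are ours and are not part of the formal model, and the optional release times just mentioned are included with a default of zero.\n\\begin{verbatim}\n# Illustrative encoding of a CC | r | sum w_j C_j instance (names are ours).\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Instance:\n    speeds: dict    # speeds[i]   = machine speeds v_{li} of cluster i, decreasing\n    tasks: dict     # tasks[j][i] = task lengths p_{jit} of subjob (j,i), decreasing\n    weights: dict   # weights[j]  = w_j\n    release: dict = field(default_factory=dict)  # release.get((j,i), 0) = r_{ji}\n\n# A toy instance: two clusters (2 and 3 unit-speed machines), two jobs.\ntoy = Instance(\n    speeds={0: [1, 1], 1: [1, 1, 1]},\n    tasks={0: {0: [2], 1: [1, 1]}, 1: {0: [3], 1: [2]}},\n    weights={0: 1, 1: 2},\n)\n\\end{verbatim}\nNothing in what follows depends on this particular encoding; it is only meant to make the roles of $T_{ji}$, $v_{\\ell i}$, $w_j$, and $r_{ji}$ concrete.\n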
If any subjobs are released after time zero, we write $CC| r | \\sum w_jC_j$.\n\n\\subsubsection{Example Problem Instances}\\label{subsubsec:introToModel}\n\nWe now illustrate our model with several examples (see Figures \\ref{fig:probstm1and2} and \\ref{fig:probstm3and4}). The tables at left have rows labeled to identify jobs, and columns labeled to identify clusters; each entry in these tables specifies the processing requirements for the corresponding subjob. The diagrams to the right of these tables show how the given jobs might be scheduled on clusters with the indicated number of machines. \n\\begin{figure}[ht!]\n\\centering\n \\includegraphics[width=0.8\\linewidth]{example.png}\n\t\\caption{Two examples of our scheduling model. \\textbf{Left}: Our baseline example. There are 4 jobs and 2 clusters. Cluster 1 has 2 identical machines, and cluster 2 has 3 identical machines. Note that job 4 has no subjob for cluster 1 (this is permitted within our framework). In this case every subjob has at most one task. \\textbf{Right}: Our baseline example with a more general subjob framework : subjob (2,2) and subjob (3,1) both have two tasks. The tasks shown are unit length, but our framework \\textit{does not} require that subjobs be divided into equally sized tasks. }\n \\label{fig:probstm1and2}\n\\end{figure}\n\\begin{figure}[ht!]\n \\includegraphics[width=\\linewidth]{figure2.png}\n\t\\caption{Two additional examples of our model. \\textbf{Left}: Our baseline example, with variable machine speeds. Note that the benefit of high machine speeds is only realized for tasks assigned to those machines in the final schedule. \\textbf{Right}: A problem with the peculiar structure that (1) all clusters but one have a single machine, and (2) most clusters have non-zero processing requirements for only a single job. We will use such a device for the total weighted lateness reduction in Section \\ref{sec:relationshipsBetweenProbs}.}\n \\label{fig:probstm3and4}\n\\end{figure}\n\n\\subsection{Related Work}\n\nConcurrent cluster scheduling subsumes many fundamental machine scheduling problems. For example, if we restrict ourselves to a single cluster (i.e. $m = 1$) we can schedule a set of jobs on a bank of identical parallel machines to minimize makespan ($C_{\\max}$) or total weighted completion time ($\\sum w_j C_j$). With a more clever reduction, we can even minimize \\textit{total weighted lateness} ($\\sum w_j L_j$) on a bank of identical parallel machines (see Section \\ref{sec:relationshipsBetweenProbs}). Alternatively, with $m > 1$ but $\\forall i \\in M, m_i = 1$, our problem reduces to the well-studied ``concurrent open shop'' problem.\n\nUsing Graham et al.'s taxonomy, the concurrent open shop problem is written as $PD||\\sum w_j C_j$. Three groups \\cite{Chen2000, Garg2007, llp} independently discovered an LP-based 2-approximation for $PD||\\sum w_j C_j$ using the work of Queyranne \\cite{Queyranne1993}. The linear program in question has an exponential number of constraints, but can still be solved in polynomial time with a variant of the Ellipsoid method. Our ``strong'' algorithm for concurrent cluster scheduling refines the techniques contained therein, as well as those of Schulz \\cite{Schulz1996, Schulz2012} (see Section \\ref{sec:lpAlg}).\n\nMastrolilli et al. \\cite{mqssu} developed a primal-dual algorithm for $PD || \\sum w_j C_j$ that does not use LP solvers. 
``MUSSQ''\\footnote{A permutation of the authors' names: Mastrolilli, Queyranne, Schulz, Svensson, and Uhan.} is significant for both its speed and the strength of its performance guarantee: it achieves an approximation ratio of 2 in only $O(n^2 + nm)$ time. Although MUSSQ does not require an LP solver, its proof of correctness is based on the fact that it finds a feasible solution to the dual of a particular linear program. Our ``fast'' algorithm for concurrent cluster scheduling uses MUSSQ as a subroutine (see Section \\ref{sec:TSPT}).\n\nHung, Golubchik, and Yu \\cite{HGY} presented a framework designed to improve scheduling across geographically distributed data centers. The scheduling framework had a centralized scheduler (which determined a job ordering) and local dispatchers which carried out a schedule consistent with the controller's job ordering. Hung et al. proposed a particular algorithm for the controller called ``SWAG.'' SWAG performed well in a wide variety of simulations where each data center was assumed to have the same number of identical parallel machines. We adopt a similar framework to Hung et al., but we show in Section \\ref{subsec:swagDegenerate} that SWAG has no constant-factor performance guarantee.\n\n\\subsection{Paper Outline \\& Algorithmic Results}\\label{subsec:outlineAndResults}\n\nAlthough only one of our algorithms requires \\textit{solving} a linear program, both algorithms use the same linear program in their proofs of correctness; we introduce this linear program in Section \\ref{sec:introduceLP} before discussing either algorithm. Section \\ref{sec:listSched} establishes how an ordering of jobs can be processed to completely specify a schedule. This is important because the complex work in both of our algorithms is to generate an ordering of jobs for each cluster.\n\nSection \\ref{sec:lpAlg} introduces our ``strong'' algorithm: CC-LP. CC-LP can be applied to any instance of concurrent cluster scheduling, including those with non-zero release times $r_{ji}$. A key in CC-LP's strong performance guarantees lies in the fact that it allows different permutations of subjobs for different clusters. By providing additional structure to the problem (but while maintaining a generalization of concurrent open shop), CC-LP becomes a 2-approximation. This is significant because it is NP-Hard to approximate concurrent open shop (and by extension, our problem) with ratio $2-\\epsilon$ for any $\\epsilon > 0$ \\cite{nphard2}.\n\nOur combinatorial algorithm (``CC-TSPT'') is presented in Section \\ref{sec:TSPT}. The algorithm is fast, provably accurate, and has the interesting property that it can schedule all clusters using the same permutation of jobs.\\footnote{We call such schedules ``single-$\\sigma$ schedules.'' As we will see later on, CC-TSPT serves as a constructive proof of existence of near-optimal single-$\\sigma$ schedules for all instances of $CC||\\sum w_j C_j$, \\textit{including} those instances for which single-$\\sigma$ schedules are strictly sub-optimal. This is addressed in Section \\ref{sec:discAndConc}.} After considering CC-TSPT in the general case, we show how fine-grained approximation ratios can be obtained in the ``fully parallelizable'' setting of Zhang et al. \\cite{zwl}. We conclude with an extension of CC-TSPT that maintains performance guarantees while offering improved empirical performance.\n\nThe following table summarizes our results for approximation ratios. For compactness, condition $Id$ refers to identical machines (i.e. 
$v_{\\ell i}$ constant over $\\ell$), condition $A$ refers to $r_{ji} \\equiv 0$, and condition $B$ refers to $p_{jit} \\text{ constant over } t \\in T_{ji}$.\n\\begin{center}\n\\begin{tabular}{l| cccccc}\n\\hline\n \t\t& $(Id,A,B)$ & $(Id, \\neg A, B)$ & $(Id,A,\\neg B)$ & $(Id,\\neg A, \\neg B)$ & $(\\neg Id, A)$ & $(\\neg Id, \\neg A)$ \\\\ \\hline\nCC-LP\t&\t2\t & 3 & 3 & 4 & $2+R$ & $3+R$ \\\\\nCC-TSPT & 3 & - & 3 & - & $2+R$ & - \\\\ \\hline\n\\end{tabular}\n\\end{center}\nThe term $R$ is the maximum over $i$ of $R_i$, where $R_i$ is the ratio of the fastest machine speed to the \\textit{average} machine speed at cluster $i$.\n\nPerhaps the most surprising aspect of these results is that our scheduling algorithms are remarkably simple. The first algorithm solves an LP,\nand then the scheduling can be done easily on each cluster. The second algorithm is again a surprisingly simple reduction to the\ncase of one machine per cluster (the well-understood concurrent open shop problem) and yields a simple combinatorial algorithm. The proof\nof the approximation guarantee is somewhat involved, however.\n\nIn addition to algorithmic results, we demonstrate how our problem subsumes that of minimizing total weighted lateness on a bank of identical parallel machines (see Section \\ref{sec:relationshipsBetweenProbs}). Section \\ref{sec:discAndConc} provides additional discussion and highlights our more novel technical contributions.\n\n\n\\section{The Core Linear Program }\\label{sec:introduceLP}\n\n\n\nOur linear program has an unusual form. Rather than introduce it immediately, we conduct a brief review of prior work on similar LPs. All the LPs we discuss in this paper have objective function $\\sum w_j C_j$, where $C_j$ is a decision variable corresponding to the completion time of job $j$, and $w_j$ is a weight associated with job $j$. \n\n\\textit{For the following discussion only, we adopt the notation in which job $j$ has processing time $p_j$. In addition, if multiple machine problems are discussed, we will say that there are $\\mathsf{m}$ such machines (possibly with speeds $s_i, i \\in \\{1,\\ldots, \\mathsf{m}\\}$).} \n\nThe earliest appearance of a similar linear program comes from Queyranne \\cite{Queyranne1993}. In his paper, Queyranne presents an LP relaxation for sequencing $n$ jobs on a single machine where all constraints are of the form $\\sum_{j \\in S} p_j C_j \\geq \\frac{1}{2}\\left[\\left(\\sum_{j \\in S} p_j \\right)^2 + \\sum_{j \\in S} p_j^2\\right]$ where $S$ is an arbitrary subset of jobs. Once a set of optimal $\\{C_j^\\star\\}$ is found, the jobs are scheduled in increasing order of $\\{C_j^\\star\\}$. These results were primarily theoretical, as it was known at the time of his writing that sequencing $n$ jobs on a single machine to minimize $\\sum w_j C_j$ can be done optimally in $O(n \\log n)$ time.\n\nQueyranne's constraint set became particularly useful for problems with \\textit{coupling} across distinct machines (as occurs in concurrent open shop). 
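\nQueyranne's inequalities are easy to check numerically for a single-machine permutation schedule; the short sketch below does exactly that on an arbitrary random instance (it is illustrative only and plays no role in our results).\n\\begin{verbatim}\n# Check sum_{j in S} p_j C_j >= 0.5*[(sum_{j in S} p_j)^2 + sum_{j in S} p_j^2]\n# for every subset S, where C_j comes from a permutation schedule with no idle time.\nimport itertools, random\n\nrandom.seed(0)\nn = 6\np = [random.randint(1, 10) for _ in range(n)]   # arbitrary processing times\norder = random.sample(range(n), n)               # an arbitrary permutation\n\nC, t = {}, 0\nfor j in order:            # completion times on a single machine\n    t += p[j]\n    C[j] = t\n\nfor r in range(1, n + 1):\n    for S in itertools.combinations(range(n), r):\n        lhs = sum(p[j] * C[j] for j in S)\n        rhs = 0.5 * (sum(p[j] for j in S) ** 2 + sum(p[j] ** 2 for j in S))\n        assert lhs >= rhs\nprint('Queyranne inequalities verified for this schedule.')\n\\end{verbatim}\n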
Four separate groups \\cite{Chen2000,Garg2007,llp, mqssu} built on Queyranne's constraints and used the following LP in a 2-approximation for concurrent open shop scheduling.\n\\begin{equation}\n(\\text{LP0}) ~~ \\min \\sum_{j \\in N} w_j C_j ~~ \\text{s.t.} ~~ \\textstyle\\sum_{j \\in S} p_{ji} C_j \\geq \n\t\t\\frac{1}{2}\n\t\t\\left[ \n\t\t\t\\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2 + \\left(\\textstyle\\sum_{j \\in S} p_{ji}^2\\right) \n\t\t\\right] ~ \\forall ~ \\substack{S \\subseteq N \\\\ i \\in M }\\nonumber\n\\end{equation}\nIn view of its tremendous popularity, we sometimes refer to the linear program above as the \\textit{canonical relaxation} for concurrent open shop.\n\nAndreas Schulz's Ph.D. thesis developed Queyranne's constraint set in greater depth \\cite{Schulz1996}. As part of his thesis, Schulz considered scheduling $n$ jobs on $\\mathsf{m}$ identical parallel machines with constraints of the form $\\sum_{j \\in S} p_j C_j \\geq \\frac{1}{2\\mathsf{m}} \\left(\\sum_{j \\in S} p_j \\right)^2 + \\frac{1}{2}\\sum_{j \\in S} p_j^2$. In addition, Schulz showed that the constraints $\\sum_{j \\in S} p_j C_j \\geq \\left[2 \\sum_{i=1}^{\\mathsf{m}} s_i \\right]^{-1}\\left[\\left(\\sum_{j \\in S} p_j \\right)^2 + \\sum_{j \\in S} p_j^2\\right]$ are satisfied by any schedule of $n$ jobs on $\\mathsf{m}$ uniform machines. In 2012, Schulz refined the analysis for several of these problems \\cite{Schulz2012}. For constructing a schedule from the optimal $\\{C_j^\\star\\}$, Schulz considered scheduling jobs by increasing order of $\\{C_j^\\star\\}$, $\\{C_j^\\star - p_j\/2\\}$, and $\\{C_j^\\star - p_j\/(2\\mathsf{m})\\}$.\n\n\n\\subsection{Statement of LP1}\n\nThe model we consider allows for more fine-grained control of the job structure than is indicated by the LP relaxations above. Inevitably, this comes at some expense of simplicity in LP formulations. In an effort to simplify notation, we define the following constants, and give verbal interpretations for each. \n\\begin{equation}\n \\mu_{i} \\doteq \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} \\qquad q_{ji} \\doteq \\min{\\lbrace|T_{ji}|, m_i\\rbrace} \\qquad \\mu_{ji} \\doteq \\textstyle\\sum_{\\ell = 1}^{q_{ji}} v_{\\ell i} \\qquad p_{ji} \\doteq \\textstyle\\sum_{t \\in T_{ji}} p_{jit}\n\\end{equation}\nFrom these definitions, $\\mu_i$ is the processing power of cluster $i$. For subjob $(j,i)$, $q_{ji}$ is the maximum number of machines that could process the subjob, and $\\mu_{ji}$ is the maximum processing power that can be brought to bear on it. Lastly, $p_{ji}$ is the total processing requirement of subjob $(j,i)$. In these terms, the core linear program, LP1, is as follows.\n\\begin{align*}\n\t\\text{(LP1) } \\min &\\textstyle\\sum_{j \\in N} w_j C_j \\\\\n\ts.t.\\quad (1A) \\quad & \\textstyle\\sum_{j \\in S} p_{ji} C_j \n\t \\geq \\frac{1}{2} \n\t \\left[ \n\t \\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2\/\\mu_i\n + \\textstyle\\sum_{j \\in S} p_{ji}^2\/\\mu_{ji} \n \\right] \\qquad ~\\forall S \\subseteq N, i \\in M \\\\\n\t(1B) \\quad\t& C_j \\geq p_{jit}\/v_{1i} + r_{ji} \\qquad ~\\forall i \\in M,~ j \\in N,~ t \\in T_{ji}\\\\\n\t(1C) \\quad\t& C_j \\geq p_{ji}\/\\mu_{ji} + r_{ji} \\qquad ~\\forall j \\in N,~ i \\in M \t \n\\end{align*} \n\nConstraints ($1A$) are more carefully formulated versions of the polyhedral constraints introduced by Queyranne \\cite{Queyranne1993} and developed by Schulz \\cite{Schulz1996}. 
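\nTo make constraint ($1A$) concrete, the sketch below evaluates both of its sides for a chosen subset $S$ and cluster $i$, computing $\\mu_i$, $q_{ji}$, $\\mu_{ji}$, and $p_{ji}$ exactly as defined above (the data layout is an assumption of ours, made only for illustration).\n\\begin{verbatim}\n# Evaluate constraint (1A) for a subset S of jobs on cluster i (names are ours).\n# speeds[i]   : machine speeds of cluster i, sorted in decreasing order\n# tasks[j][i] : task lengths of subjob (j,i)\n# C[j]        : candidate completion time of job j\n\ndef constraint_1A_holds(S, i, speeds, tasks, C):\n    mu_i = sum(speeds[i])\n    def p(j):    # p_{ji}: total processing requirement of subjob (j,i)\n        return sum(tasks[j].get(i, []))\n    def mu(j):   # mu_{ji}: total speed of the q_{ji} fastest machines\n        q_ji = min(len(tasks[j].get(i, [])), len(speeds[i]))\n        return sum(speeds[i][:q_ji])\n    lhs = sum(p(j) * C[j] for j in S)\n    rhs = 0.5 * (sum(p(j) for j in S) ** 2 \/ mu_i\n                 + sum(p(j) ** 2 \/ mu(j) for j in S if p(j) > 0))\n    return lhs >= rhs\n\\end{verbatim}\n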
The use of the $\\mu_{ji}$ term is new and allows us to provide stronger performance guarantees for our framework where subjobs are composed of \\textit{sets} of tasks. As we will see, this term is one of the primary factors that allows us to parametrize results under varying machine speeds in terms of maximum to \\textit{average} machine speed, rather than maximum to \\textit{minimum} machine speed. Constraints ($1B$) and ($1C$) are simple lower bounds on job completion time. \n\n\nThe majority of this section is dedicated to proving that LP1 is a valid relaxation of $CC|r|\\sum w_jC_j$. Once this is established, we prove that LP1 can be solved in polynomial time by providing a separation oracle for use in the Ellipsoid method. Both of these proofs use techniques established in Schulz's Ph.D. thesis \\cite{Schulz1996}. \n\n\\subsection{Proof of LP1's Validity}\n\n\n\n\nThe lemmas below establish the basis for both of our algorithms. Lemma \\ref{lem:sumOfSquaresDiffSpeeds} generalizes an inequality used by Schulz \\cite{Schulz1996}. Lemma \\ref{lem:feasForLP1} relies on Lemma \\ref{lem:sumOfSquaresDiffSpeeds} and cites an inequality mentioned in the preceding section (and proven by Queyranne \\cite{Queyranne1993}). \n\\begin{lemma}\nLet $\\{a_1,\\ldots a_z\\}$ be a set of non-negative real numbers. We assume that $k \\leq z$ of them are positive. Let $b_1 \\geq b_2 \\geq \\cdots \\geq b_z$ be positive real numbers. Then\n\\begin{center}\n$ \\sum_{i = 1}^z a_i^2 \/ b_i \\geq \\left(\\sum_{i = 1}^z a_i \\right)^2 \/ \\left(\\sum_{i = 1}^k b_i\\right)$.\n\\end{center}\n\\label{lem:sumOfSquaresDiffSpeeds}\n\\end{lemma}\n\\begin{proof}\\footnote{The proceedings version of this paper stated that the proof cites the AM-GM inequality and proceeds by induction from $z=k=2$. We have opted here to demonstrate a different (simpler) proof that we discovered only after the proceedings version was finalized.}\nWe only show the case where $k=z$. Define $\\mathbf{a} = [a_1,\\ldots,a_k] \\in \\mathbb{R}^k_+$, $\\mathbf{b} = [b_1, \\ldots, b_k ] \\in \\mathbb{R}^k_{++}$, and $\\mathbbm{1}$ as the vector of $k$ ones. Now, set $\\mathbf{u} = \\mathbf{a} \/ \\sqrt{\\mathbf{b}}$ and $\\mathbf{w} = \\sqrt{\\mathbf{b}}$ (element-wise), and note that $\\langle \\mathbf{a}, \\mathbbm{1} \\rangle = \\langle \\mathbf{u}, \\mathbf{w} \\rangle$. In these terms, it is clear that $(\\sum_{i=1}^k a_i)^2 = \\langle \\mathbf{u}, \\mathbf{w} \\rangle^2$.\n\nGiven this, one need only cite Cauchy-Schwarz (namely, $\\langle \\mathbf{u}, \\mathbf{w} \\rangle^2 \\leq \\langle \\mathbf{u}, \\mathbf{u} \\rangle \\cdot \\langle \\mathbf{w}, \\mathbf{w} \\rangle$) and plug in the definitions of $\\mathbf{u}$ and $\\mathbf{w}$ to see the desired result.\n\\end{proof}\n\n\n\\begin{lemma}[Validity Lemma]\nEvery feasible schedule for an instance $I$ of $CC|r|\\sum w_jC_j$ has completion times that define a feasible solution to LP1($I$). \\label{lem:feasForLP1}\n\\end{lemma}\n\\begin{proof}\nAs constraints ($1B$) and ($1C$) are clear lower bounds on job completion time, it suffices to show the validity of constraint ($1A$). Thus, let $S$ be a non-empty subset of $N$, and fix an arbitrary but feasible schedule ``$F$'' for $I$. \n\nDefine $C^{F}_{ji}$ as the completion time of subjob $(j,i)$ under schedule $F$. Similarly, define $C^{F}_{ji\\ell}$ as the first time at which tasks of subjob $(j,i)$ scheduled on machine $\\ell$ of cluster $i$ are finished. 
Lastly, define $p^{\\ell}_{ji}$ as the total processing requirement of job $j$ scheduled on machine $\\ell$ of cluster $i$. Note that by construction, we have $C^{F}_{ji} = \\max_{\\ell \\in \\{1,\\ldots,m_i\\}}{C^{F}_{ji\\ell}}$ and $C^F_j = \\max_{i \\in M}{C^F_{ji}}$. \nSince $p_{ji} = \\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji}$, we can rather innocuously write\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} = \\textstyle\\sum_{j \\in S}\\left[ \\textstyle\\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji} \\right] C^{F}_{ji} . \n\\end{equation} \nBut using $C^{F}_{ji} \\geq C^{F}_{ji\\ell}$, we can lower-bound $\\sum_{j \\in S} p_{ji} C^{F}_{ji}$. Namely,\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} \\geq \\textstyle\\sum_{j \\in S}\\textstyle\\sum_{\\ell = 1}^{m_i} p^{\\ell}_{ji} C^{F}_{ji\\ell} = \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i}\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell} \\label{eq:specificMachines}\n\\end{equation}\nThe next inequality uses a bound on $\\textstyle\\sum_{j \\in S}\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell}$ proven by Queyranne \\cite{Queyranne1993} for any subset $S$ of $N$ jobs with processing times $\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]$ to be scheduled on a single machine.\\footnote{Here, our machine is machine $\\ell$ on cluster $i$.}\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]C^{F}_{ji\\ell} \\geq \\frac{1}{2} \\left[\\left(\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 + \\textstyle\\sum_{j \\in S} \\left(\n\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 \\right]\\label{eq:queyranne}\n\\end{equation}\nCombining inequalities \\eqref{eq:specificMachines} and \\eqref{eq:queyranne}, we have the following.\n\\begin{align}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{ji} &\\geq \\frac{1}{2} \\textstyle\\sum_{\\ell=1}^{m_i} v_{\\ell i} \\left[\\left(\\textstyle\\sum_{j \\in S} \\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 + \\textstyle\\sum_{j \\in S} \\left(\n\\left[p^{\\ell}_{ji}\/v_{\\ell i} \\right]\\right)^2 \\right] \\\\\n& \\geq \\frac{1}{2} \\left[\\textstyle\\sum_{\\ell = 1}^{m_i}\\left(\\sum_{j \\in S} p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} + \\sum_{j \\in S} \\sum_{\\ell = 1}^{m_i} \\left(p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} \\right] \\label{eq:differentBoundForLP1}\n\\end{align}\nNext, we apply Lemma \\ref{lem:sumOfSquaresDiffSpeeds} to the right hand side of inequality \\eqref{eq:differentBoundForLP1} a total of $|S|+1$ times.\n\\begin{align}\n&\\textstyle\\sum_{\\ell = 1}^{m_i} \\left(\\textstyle\\sum_{j \\in S} p^\\ell_{j i}\\right)^2 \/v_{\\ell i} \\geq \\left(\\textstyle\\sum_{\\ell = 1}^{m_i}\\textstyle\\sum_{j \\in S} p^\\ell_{j i}\\right)^2\/ \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} = \\left(\\textstyle\\sum_{j \\in S} p_{ji}\\right)^2 \/ \\mu_i \\\\\n&\\textstyle\\sum_{\\ell = 1}^{m_i} \\left(p^\\ell_{j i}\\right)^2 \/ v_{\\ell i} \\geq \\left(\\sum_{\\ell = 1}^{m_i}p^\\ell_{j i}\\right)^2 \/ \\textstyle\\sum_{\\ell = 1}^{q_{j i}} v_{\\ell i} = p_{j i}^2\/\\mu_{j i} ~~\\forall~ j \\in S\n\\end{align}\nCiting $C^{F}_{j} \\geq C^{F}_{ji}$, we arrive at the desired result.\n\\begin{equation}\n\\textstyle\\sum_{j \\in S} p_{ji} C^{F}_{j} \\geq \\frac{1}{2}\\left[\\left(\\textstyle\\sum_{j \\in S} p_{ji} \\right)^2\/\\mu_i + \\textstyle\\sum_{j \\in S}p_{ji}^2\/\\mu_{ji}\\right] \\qquad \\text{``constraint 
}(1A)\\text{''}\n\\end{equation}\n\n\\end{proof}\n\n\n\n\\subsection{Theoretical Complexity of LP1}\\label{subsec:thankGodPolyTime}\nAs the first of our two algorithms requires solving LP1 directly, we need to address the fact that LP1 has $m \\cdot (2^n - 1) + n$ constraints. \nLuckily, it is still possible to solve such linear programs in polynomial time with the Ellipsoid method; we introduce the following separation oracle for this purpose.\n\n\\begin{definition}[Oracle LP1]\nDefine the \\textit{violation}\n\\begin{equation}\nV(S,i) = \\frac{1}{2} \\left[ \n\t \\left(\\textstyle\\sum_{j \\in S} p_{j i}\\right)^2\/\\mu_i \n + \\textstyle\\sum_{j \\in S} p_{j i}^2\/\\mu_{j i} \n \\right] - \\textstyle\\sum_{j \\in S} p_{j i} C_{j}\n\\end{equation}\nLet $\\{C_j\\} \\in \\mathbb{R}^n$ be a \\textit{potentially} feasible solution to LP1. Let $\\sigma_i$ denote the ordering when jobs are sorted in increasing order of $C_j - p_{j i}\/(2\\mu_{ji})$. Find the most violated constraint in $(1A)$ for $i \\in M$ by searching over $V(S_i,i)$ for $S_i$ of the form $\\{\\sigma_i(1),\\ldots,\\sigma_i(j-1),\\sigma_i(j)\\},~ j \\in \\{1,\\ldots,n\\}$. If the maximal violation satisfies $V(S_i^*,i) > 0$ for some $i$, then return $(S_i^*,i)$ as a violated constraint for ($1A$). Otherwise, check the remaining $n$ constraints ($(1B)$ and $(1C)$) directly in linear time.\n\\end{definition} \n\nFor fixed $i$, Oracle-LP1 finds the subset of jobs that maximizes ``violation'' for cluster $i$. That is, Oracle-LP1 finds $S_i^*$ such that $V(S_i^*,i) = \\text{max}_{S \\subset N} V(S,i)$. We prove the correctness of Oracle-LP1 by establishing a necessary and sufficient condition for a job $j$ to be in $S_i^*$.\n\n\\begin{lemma}\nFor $\\mathbb{P}_i(A) \\doteq \\textstyle\\sum_{j \\in A} p_{ji}$, we have $x \\in S_i^* \\Leftrightarrow$ $ C_x - p_{xi}\/(2\\mu_{xi}) \\leq \\mathbb{P}_i(S_i^*)\/\\mu_i$. \n\\label{lem:separation}\n\\end{lemma}\n\\begin{proof}\nFor given $S$ (not necessarily equal to $S_i^*$), it is useful to express $V(S,i)$ in terms of $V(S\\cup x, i)$ or $V(S\\setminus x, i)$ (depending on whether $x \\in S$ or $x \\in N \\setminus S$). Without loss of generality, we restrict our search to $S : x \\in S \\Rightarrow p_{x,i} > 0$.\n\nSuppose $ x \\in S$. By writing $\\mathbb{P}_i(S) = \\mathbb{P}_i(S\\setminus x) + \\mathbb{P}_i(x)$, and similarly decomposing the sum $\\textstyle\\sum_{j \\in S} p_{j i}^2\/(2\\mu_{ji})$, one can show the following.\n\\begin{align}\nV(S, i) = & V(S\\setminus x, i) + p_{xi}\\left(\\frac{1}{2}\\left(\\frac{2\\mathbb{P}_i(S) - p_{xi}}{\\mu_i} + \\frac{p_{xi}}{\\mu_{xi}}\\right) - C_x \\right) \\label{eq:xInS}\n\\end{align}\nNow suppose $ x \\in N\\setminus S$. Using the same strategy as above (this time writing $ \\mathbb{P}_i(S) = \\mathbb{P}_i(S\\cup x) - \\mathbb{P}_i(x)$), one can show that\n\\begin{align}\nV(S, i) =& V(S\\cup x, i) + p_{xi}\\left(C_x - \\frac{1}{2}\\left(\\frac{2\\mathbb{P}_i(S) + p_{xi}}{\\mu_i} + \\frac{p_{xi}}{\\mu_{xi}}\\right) \\right). \\label{eq:xNotInS}\n\\end{align}\nNote that Equations \\eqref{eq:xInS} and \\eqref{eq:xNotInS} hold for all $S$, including $S = S_i^*$. Turning our attention to $S_i^*$, we see that $x \\in S_i^*$ implies that the second term in Equation \\eqref{eq:xInS} is non-negative, i.e. 
\n\\begin{equation}\nC_x - p_{xi}\/(2\\mu_{xi}) \\leq \\left(2\\mathbb{P}_i(S_i^*) - p_{xi}\\right)\/(2\\mu_i) < \\mathbb{P}_i(S_i^*)\/\\mu_i.\n\\end{equation}\nSimilarly, $x \\in N \\setminus S_i^*$ implies the second term in Equation \\eqref{eq:xNotInS} is non-negative.\n\\begin{equation}\nC_x - p_{x i}\/(2\\mu_{x i}) \\geq \\left(2\\mathbb{P}_i(S_i^*) + p_{x i}\\right)\/(2\\mu_i) \\geq \\mathbb{P}_i(S_i^*)\/\\mu_i\n\\end{equation}\nIt follows that $x \\in S_i^*$ iff $C_x - p_{x i}\/(2\\mu_{x i}) < \\mathbb{P}_i(S_i^*)\/\\mu_i$.\n\\end{proof}\n\nGiven Lemma \\ref{lem:separation}, It is easy to verify that sorting jobs in increasing order of $C_x - p_{xi}\/(2\\mu_{xi})$ to define a permutation $\\sigma_i$ guarantees that $S_i^*$ is of the form $\\{\\sigma_i(1),\\ldots,\\sigma_i(j-1),\\sigma_i(j)\\}$ for some $j \\in N$. This implies that for fixed $i$, Oracle-LP1 finds $S_i^*$ in $O(n \\log(n))$ time. This procedure is executed once for each cluster, leaving the remaining $n$ constraints in $(1B)$ and $(1C)$ to be verified in linear time. Thus Oracle-LP1 runs in $O(mn\\log(n))$ time.\n\nBy the equivalence of separation and optimization, we have proven the following theorem:\n\\begin{theorem}\nLP1($I$) is a valid relaxation of $I \\in \\Omega_{CC}$, and is solvable in polynomial time. \\label{thm:LP1feasAndSolve}\n\\end{theorem}\n\nAs was explained in the beginning of this section, linear programs such as those in \\cite{Chen2000, Garg2007, llp, Queyranne1993, Schulz1996, Schulz2012} are processed with an appropriate sorting of the optimal decision variables $\\{C^\\star_j\\}$. It is important then to have bounds on job completion times for a particular ordering of jobs. We address this next in Section \\ref{sec:listSched}, and reserve our first algorithm for Section \\ref{sec:lpAlg}.\n\\section{List Scheduling from Permutations}\\label{sec:listSched}\n\n\n\n\n\n\n\n\nThe complex work in both of our proposed algorithms is to generate a \\textit{permutation} of jobs. The procedure below takes such a permutation and uses it to determine start times, end times, and machine assignments for every task of every subjob.\n\\vspace{1em}\n\n\\noindent \\textbf{List-LPT} : Given a single cluster with $m_i$ machines and a permutation of jobs $\\sigma$, introduce $\\text{List}(a,i) \\doteq (p_{ai1}, p_{ai2},\\ldots,p_{ai|T_{ai}|})$ as an ordered set of tasks belonging to subjob $(a,i)$, ordered by longest processing time first. Now define $\\text{List}(\\sigma) \\doteq \\text{List}(\\sigma(1),i) \\oplus \\text{List}(\\sigma(2),i) \\oplus \\cdots \\oplus \\text{List}(\\sigma(n),i)$, where $\\oplus$ is the concatenation operator. \n\nPlace the tasks of $\\text{List}(\\sigma)$ in order- from the largest task of subjob $(\\sigma(1),i)$, to the smallest task of subjob $(\\sigma(n),i)$. When placing a particular task, assign it whichever machine and start time results in the task being completed as early as possible (without moving any tasks which have already been placed). Insert idle time (on all $m_i$ machines) as necessary if this procedure would otherwise start a job before its release time.\n\\vspace{1em}\n\nThe following Lemma is essential to bound the completion time of a set of jobs processed by List-LPT. The proof is adapted from Gonzalez et al. \\cite{Gonzalez1977}.\n\\begin{lemma}\nSuppose $n$ jobs are scheduled on cluster $i$ according to List-LPT($\\sigma$). 
Then for $ \\bar{v_i} \\doteq \\mu_i\/m_i$, the completion time of subjob $(\\sigma(j),i)$ $($denoted $C_{\\sigma(j)i}$ $)$ satisfies\n\\begin{align}\n&C_{\\sigma(j)i} \\leq \\max_{1\\leq k \\leq j}{r_{\\sigma(k)i}} + p_{\\sigma(j)i1}\/\\bar{v_i} + \\left(\\textstyle\\sum_{k=1}^{j} p_{\\sigma(k)i} - p_{\\sigma(j)i1}\\right)\/\\mu_i \\label{eq:generalGonzalezLemma}\n\\end{align} \\label{lem:CompTimesOnUniformMachines}\n\\end{lemma}\n\\begin{proof}\nFor now, assume all jobs are released at time zero. Let the task of subjob $(\\sigma(j),i)$ to finish last be denoted $t^*$. If $t^*$ is not the task in $T_{\\sigma(j)i}$ with least processing time, then construct a new set $T'_{\\sigma(j)i} = \\{ t : p_{\\sigma(j)it^*} \\leq p_{\\sigma(j)it} \\} \\subset T_{\\sigma(j)i}$. Because the tasks of subjob $(\\sigma(j),i)$ were scheduled by List-LPT (i.e. longest-processing-time-first), the sets of potential start times and machines for task $t^*$ (and hence the set of potential completion times for task $t^*$) are the same regardless of whether subjob $(\\sigma(j),i)$ consisted of tasks $T_{\\sigma(j)i}$ or the subset $T'_{\\sigma(j)i}$. Accordingly, reassign $T_{\\sigma(j)i} \\leftarrow T'_{\\sigma(j)i}$ without loss of generality.\n\nLet $D_{\\ell}^j$ denote the total demand for machine $\\ell$ (on cluster $i$) once all tasks of subjobs $(\\sigma(1),i)$ through $(\\sigma(j-1),i)$ and all tasks in the set $T_{\\sigma(j)i}\\setminus \\{t^*\\}$ are scheduled. Using the fact that $C_{\\sigma(j)i}v_{\\ell i} \\leq ({D}_{\\ell}^{j} + p_{\\sigma(j)i t^*}) \\forall \\ell \\in \\{1,\\ldots,m_i\\}$, sum the left and right and sides over $\\ell$. This implies $C_{\\sigma(j)i}\\left( \\textstyle\\sum_{\\ell = 1}^{m_i} v_{\\ell i} \\right) \\leq ~ m_i p_{\\sigma(j) i t^*} + \\textstyle\\sum_{\\ell = 1}^{m_i} {D}_{\\ell}^{j}$. Dividing by the sum of machine speeds and using the definition of $\\mu_i$ yields\n\\begin{equation}\nC_{\\sigma(j)i} \n\t~ \\leq ~ m_i p_{\\sigma(j)i t^*}\/\\mu_i + \\textstyle\\sum_{\\ell = 1}^{m_i} {D}_{\\ell}^j \/\\mu_i ~ \\leq ~ p_{\\sigma(j)i 1}\/\\bar{v_i} + \\left(\\textstyle\\sum_{k = 1}^{j} p_{\\sigma(k)i} - p_{\\sigma(j)i1}\\right)\/\\mu_i \\label{eq:mainGonzalezLemma}\n\\end{equation}\nwhere we estimated $p_{\\sigma(j)i t^*}$ upward by $p_{\\sigma(j)i 1}$. Inequality \\eqref{eq:mainGonzalezLemma} completes our proof in the case when $r_{ji} \\equiv 0$. \n\nNow suppose that some $r_{ji} > 0$. We take our policy to the extreme and suppose that all machines are left idle until every one of jobs $\\sigma(1)$ through $\\sigma(j)$ are released; note that this occurs precisely at time $\\max_{1 \\leq k \\leq j} r_{\\sigma(k)i}$. It is clear that beyond this point in time, we are effectively in the case where all jobs are released at time zero, hence we can bound the remaining time to completion by the right hand side of Inequality \\ref{eq:mainGonzalezLemma}. As Inequality \\ref{eq:generalGonzalezLemma} simply adds these two terms, the result follows.\n\\end{proof}\n\nLemma \\ref{lem:CompTimesOnUniformMachines} is cited directly in the proof of Theorem \\ref{thm:uniformLP1} and Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT}. 
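For concreteness, the List-LPT procedure analyzed in this lemma can be sketched in a few lines of Python. The sketch below is illustrative only: the dictionary-based data layout and the function name are assumptions made here, release times are handled simply by never starting a task before its job is released, and each machine's tasks are kept contiguous rather than back-filled into idle gaps.
\begin{verbatim}
def list_lpt(speeds, sigma, tasks, release=None):
    """List-LPT on one cluster: take jobs in the order sigma, place each
    subjob's tasks longest-first, and give every task to whichever machine
    finishes it earliest."""
    release = release or {}
    avail = [0.0] * len(speeds)      # next free time of each machine
    completion = {}                  # completion time of each subjob
    for j in sigma:
        r = release.get(j, 0.0)
        for p in sorted(tasks.get(j, []), reverse=True):   # LPT order
            finish = [max(avail[l], r) + p / speeds[l]
                      for l in range(len(speeds))]
            l = min(range(len(speeds)), key=lambda q: finish[q])
            avail[l] = finish[l]
            completion[j] = max(completion.get(j, 0.0), finish[l])
    return completion
\end{verbatim}
Both algorithms developed below hand this routine one permutation per cluster, read either from an LP solution or from a combinatorial subroutine.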
Lemma \\ref{lem:CompTimesOnUniformMachines} is used implicitly in the proofs of Theorems \\ref{thm:identLP}, \\ref{thm:identLP_2appxWithConstantTasks}, and \\ref{thm:tspt_unit_tasks}.\n\\section{An LP-based Algorithm}\\label{sec:lpAlg}\n\n\n\nIn this section we show how LP1 can be used to construct near optimal schedules for concurrent cluster scheduling both when $r_{ji} \\equiv 0$ and when some $r_{ji} > 0$. Although solving LP1 is somewhat involved, the algorithm itself is quite simple:\n\\vspace{0.5em}\n\n\\noindent \\textbf{Algorithm CC-LP} : Let $I = (T, r, w, v)$ denote an instance of $CC | r | \\sum w_j C_j$. Use the optimal solution $\\{C_j^\\star\\}$ of LP1($I$) to define $m$ permutations $\\{\\sigma_i : i \\in M\\}$ which sort jobs in increasing order of $C^\\star_j - p_{ji}\/(2\\mu_{ji})$. For each cluster $i$, execute List-LPT($\\sigma_i$).\n\\vspace{0.5em}\n\nEach theorem in this section can be characterized by how various assumptions help us cancel an additive term\\footnote{``$+p_{xit^*}$''; see associated proofs.} in an upper bound for the completion time of an arbitrary subjob $(x,i)$. Theorem \\ref{thm:uniformLP1} is the most general, while Theorem \\ref{thm:identLP_2appxWithConstantTasks} is perhaps the most surprising.\n\n\\subsection{CC-LP for Uniform Machines}\\label{sec:unifLP}\n\\begin{theorem}\nLet $\\hat{C}_j$ be the completion time of job $j$ using algorithm CC-LP, and let $R$ be as in Section \\ref{subsec:outlineAndResults}. If $r_{ji} \\equiv 0$, then $ \\textstyle\\sum_{j \\in N} w_j \\hat{C}_j \\leq \\left(2 + R\\right)OPT $. Otherwise, $ \\textstyle\\sum_{j \\in N} w_j \\hat{C}_j \\leq \\left(3 + R\\right)OPT$.\n\\label{thm:uniformLP1}\n\\end{theorem}\n\\begin{proof}\nFor $y \\in \\mathbb{R}$, define $y^+ = \\max\\{y,0\\}$. Now let $x \\in N$ be arbitrary, and let $i \\in M$ be such that $p_{xi} > 0$ (but otherwise arbitrary). Define $t^*$ as the last task of job $x$ to complete on cluster $i$, and let $j_i$ be such that $\\sigma_i(j_i) = x$. 
Lastly, denote the optimal LP solution $\\{C_j\\}$.\\footnote{We omit the customary $\\star$ to avoid clutter in notation.} Because $\\{C_j\\}$ is a feasible solution to LP1, constraint $(1A)$ implies the following (set $S_i = \\{\\sigma_i(1),\\ldots,\\sigma_i(j_i - 1),x\\}$)\n\\begin{align}\n\\frac{\\left( \\textstyle\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i} \\right)^2}{2\\mu_i}\n\t&\\leq \\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i}\\left(C_{\\sigma_i(k)} - \\frac{p_{\\sigma_i(k)i}}{2\\mu_{\\sigma_i(k)i}}\\right) \\leq \\left(C_{x} - \\frac{p_{xi}}{2\\mu_{xi}}\\right)\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i} \\label{eq:compTimeInUnifLP}\n\\end{align}\nwhich in turn implies $\\textstyle\\sum_{k = 1}^{j_i} p_{\\sigma_i(k)i}\/\\mu_i \\leq 2C_x - p_{xi}\/\\mu_{xi}$.\n\nIf all subjobs are released at time zero, then we can combine this with Lemma \\ref{lem:CompTimesOnUniformMachines} and the fact that $p_{xit^*} \\leq p_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit}$ to see the following (the transition from the first inequality the second inequality uses $C_x \\geq p_{xit^*}\/v_{1i}$ and $R_i = v_{1i}\/\\bar{v}_i$).\n\\begin{align}\n\\hat{C}_{xi} \n\t&\\leq 2C_x - \\frac{p_{xi}}{\\mu_{xi}} + \\frac{p_{xit^*}}{\\bar{v}_i} - \\frac{p_{xit^*}}{\\mu_i} \\leq \n\t\tC_x(2 + \\left[R_i(1 - 2\/m_i)\\right]^+) \\label{eq:generalCompTimeWithOUTReleaseUnif} \n\\end{align}\n\nWhen one or more subjobs are released after time zero, Lemma \\ref{lem:CompTimesOnUniformMachines} implies that it is sufficient to bound $\\displaystyle\\max_{1 \\leq k \\leq j_i}{\\left\\lbrace r_{\\sigma_i(k)i} \\right\\rbrace}$ by some constant multiple of $C_x$. Since $\\sigma_i$ is defined by increasing $L_{ji} \\doteq C_j - p_{ji}\/(2\\mu_{ji})$, $L_{\\sigma_i(a)i} \\leq L_{\\sigma_i(b)i}$ implies\n\\begin{align}\n&r_{\\sigma_i(a)i} + \\frac{p_{\\sigma_i(a)i}}{2\\mu_{\\sigma_i(a)i}} + \\frac{p_{\\sigma_i(b)i}}{2\\mu_{\\sigma_i(b)i}} \\leq C_{\\sigma_i(a)} - \\frac{p_{\\sigma_i(a)i}}{2\\mu_{\\sigma_i(a)i}} + \\frac{p_{\\sigma_i(b)i}}{2\\mu_{\\sigma_i(b)i}} \\leq C_{\\sigma_i(b)} ~\\forall~ a \\leq b\n\\end{align}\nand so $\\max_{1 \\leq k \\leq j_i}{\\left\\lbrace r_{\\sigma_i(k)i} \\right\\rbrace} + p_{xi}\/(2\\mu_{xi}) \\leq C_{x}$. As before, combine this with Lemma \\ref{lem:CompTimesOnUniformMachines} and the fact that $p_{xit^*} \\leq p_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit}$ to yield the following inequalities\n\\begin{align}\n\\hat{C}_{xi} \n\t&\\leq 3C_x - \\frac{3p_{xi}}{2\\mu_{xi}} + \\frac{p_{xit^*}}{\\bar{v}_i} - \\frac{p_{xit^*}}{\\mu_i} \\leq C_x(3 + \\left[R_i(1 - 5\/(2m_i))\\right]^+) \\label{eq:generalCompTimeWithReleaseUnif} \n\\end{align}\n-which complete our proof.\n\\end{proof}\n\\subsection{CC-LP for Identical Machines}\\label{sec:identLP}\n\\begin{theorem}\nIf machines are of unit speed, then CC-LP yields an objective that is...\n\\begin{center}\n\\begin{tabular}{l | c c}\n\\hline\n & $r_{ji} \\equiv 0$ & some $r_{ji} > 0$ \\\\ \n \\hline\nsingle-task subjobs & $\\leq$ 2 $OPT$ & $\\leq $ 3 $OPT$ \\\\\nmulti-task subjobs & $\\leq$ 3 $OPT$ & $\\leq$ 4 $OPT$ \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{thm:identLP}\n\\end{theorem}\n\\begin{proof}\nDefine $[\\cdot]^+$, $x$, $C_x$, $\\hat{C}_x$, $i$, $\\sigma_i$, and $t^*$ as in Theorem \\ref{thm:uniformLP1}. 
When $r_{ji} \\equiv 0$, one need only give a more careful treatment of the first inequality in \\eqref{eq:generalCompTimeWithOUTReleaseUnif} (using $\\mu_{ji} = q_{ji}$).\n\\begin{align}\n\\hat{C}_{x,i} \n\t&\\leq 2C_x + p_{xit^*} - p_{xit^*}\/m_i - p_{xi}\/q_{xi} \n\t\\leq C_x(2 + \\left[1 - 1\/m_i -1\/q_{xi} \\right]^+) \\label{eq:GeneralIdentCompTimeBound}\n\\end{align}\nSimilarly, when some $r_{ji} > 0$, the first inequality in \\eqref{eq:generalCompTimeWithReleaseUnif} implies the following.\n\\begin{align}\n\\hat{C}_{x,i}\n\t&\\leq 3C_x + p_{xit^*} - p_{xit^*}\/m_i - 3p_{xi}\/(2q_{xi})\n\t\\leq C_x(3 + \\left[1 - 1\/m_i - 3\/(2q_{xi})\\right]^+) \\label{eq:forCstTimeThmWithRelease}\n\\end{align}\n\\end{proof}\nThe key in the refined analysis of Theorem \\ref{thm:identLP} lay in how $-p_{xi}\/q_{xi}$ is used to annihilate $+p_{xit^*}$. While $q_{xi} = 1$ (i.e. single-task subjobs) is sufficient to accomplish this, it is not strictly \\textit{necessary}. The theorem below shows that we can annihilate the $+p_{xit^*}$ term whenever all tasks of a given subjob are of the same length. Note that the tasks need not be \\textit{unit}, as the lengths of tasks across different subjobs can differ.\n\\begin{theorem}\nSuppose $v_{\\ell i} \\equiv 1$. If $p_{jit}$ is constant over $t \\in T_{ji}$ for all $j \\in N$ and $i \\in M$, then algorithm CC-LP is a 2-approximation when $r_{ji} \\equiv 0$, and a 3-approximation otherwise. \\label{thm:identLP_2appxWithConstantTasks}\n\\end{theorem}\n\\begin{proof}\nThe definition of $p_{xi}$ gives $p_{xi}\/q_{xi} = \\textstyle\\sum_{t \\in T_{xi}} p_{xit} \/ q_{xi}$. Using the assumption that $p_{jit}$ is constant over $t \\in T_{ji}$, we see that $p_{xi}\/q_{xi} = (q_{xi} + |T_{xi}| - q_{xi})p_{xit^*} \/ q_{xi} $, where $|T_{xi} |\\geq q_{xi}$. Apply this to Inequality \\eqref{eq:GeneralIdentCompTimeBound} from the proof of Theorem \\ref{thm:identLP}; some algebra yields \n\\begin{align}\n\\hat{C}_{xi} \n\t\\leq& 2C_x - p_{xit^*}\/m_i - p_{xit^*}\\left(|T_{xi}| - q_{xi}\\right)\/q_{xi} \\leq 2C_x.\n\\end{align}\nThe case with some $r_{ji} > 0$ uses the same identity for $p_{xi}\/q_{xi}$.\n\\end{proof}\nSachdeva and Saket \\cite{nphard2} showed that it is NP-Hard to approximate $CC|m_i \\equiv 1|\\sum w_j C_j$ with a constant factor less than 2. \nTheorem \\ref{thm:identLP_2appxWithConstantTasks} is significant because it shows that CC-LP can attain the same guarantee for \\textit{arbitrary} $m_i$, provided $v_{\\ell i} \\equiv 1$ and $p_{jit}$ is constant over $t$.\n\n\\section{Combinatorial Algorithms}\\label{sec:TSPT}\n\n\n\n\n\n\nIn this section, we introduce an extremely fast combinatorial algorithm with performance guarantees similar to CC-LP for ``unstructured'' inputs (i.e. those for which some $v_{\\ell i} > 1$, or some $T_{ji}$ have $p_{jit}$ non-constant over $t$). \nWe call this algorithm \\textit{CC-TSPT}. \nCC-TSPT uses the MUSSQ algorithm for concurrent open shop (from \\cite{mqssu}) as a subroutine. 
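Before turning to SWAG, it may help to see how CC-TSPT is organized end to end; the transformation and the algorithm are stated formally below. The sketch here is a non-authoritative illustration: MUSSQ is treated as a black box supplied by the caller, and the helper name mussq, the dictionary layout, and the reuse of the list_lpt sketch from the previous section are assumptions made for this example.
\begin{verbatim}
def cc_tspt(tasks, speeds, weights, mussq):
    """Sketch of CC-TSPT: collapse each subjob to one scaled processing
    time, ask a concurrent open shop subroutine for a single permutation
    of jobs, then list schedule every cluster by that permutation."""
    mu = {i: sum(v) for i, v in speeds.items()}     # total speed of cluster i
    # scaled image: x[j][i] = (total work of subjob (j, i)) / mu_i
    x = {j: {i: sum(tasks[j].get(i, [])) / mu[i] for i in speeds}
         for j in tasks}
    sigma = mussq(x, weights)                       # black-box subroutine
    return {i: list_lpt(speeds[i], sigma,
                        {j: tasks[j].get(i, []) for j in tasks})
            for i in speeds}
\end{verbatim}
The scaling step touches each subjob only once, so the cost of producing the permutation is dominated by the subroutine itself.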
As SWAG (from \\cite{HGY}) motivated development of CC-TSPT, we first address SWAG's worst-case performance.\n\n\\subsection{A Degenerate Case for SWAG}\\label{subsec:swagDegenerate}\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\t\\vspace{-0.45cm}\n \\centering\n \\begin{minipage}{0.50\\textwidth}\n\t\\begin{algorithm}[H]\n\t\\begin{algorithmic}[1]\n\t\\Procedure{SWAG}{$N,M,p_{ji}$}\n\t\\State $J\\gets \\emptyset$\n\t\\State $q_i\\gets 0,\\forall i\\in M$\n\t\\While{$|J|\\not=|N|$}\n\t\\State mkspn$_j\\gets$ max$_{i\\in M}\\left(\\frac{q_i+p_{ji}}{m_i}\\right)$\n\t\\item[] \\qquad \\qquad $\\forall j\\in N\\setminus J$\n\t\\State nextJob $\\gets$ argmin$_{j \\in N \\setminus J}\\ $mkspn$_j$\n\t\\State $J.$append$($nextJob$)$\n\t\\State $q_i\\gets q_i+ p_{ji}$\n\t\\EndWhile\n\t\\State \\textbf{return} $J$\n\t\\EndProcedure\n\t\\end{algorithmic}\n\t\\end{algorithm}\n \\end{minipage}\n \\vspace{-0.45cm}\n\\end{wrapfigure}\nAs a prerequisite for addressing worst-case performance of an existing algorithm, we provide psuedocode and an accompanying verbal description for SWAG.\n\nSWAG computes queue positions for every subjob of every job, supposing that each job was scheduled next. \nA job's potential makespan (``mkspn'') is the largest of the potential finish times of all of its subjobs (considering current queue lengths $q_i$ and each subjob's processing time $p_{ji}$). \nOnce potential makespans have been determined, the job with smallest potential makespan is selected for scheduling. \nAt this point, all queues are updated. \nBecause queues are updated, potential makespans will need to be re-calculated at the next iteration. \nIterations continue until the very last job is scheduled. Note that SWAG runs in $O(n^2m)$ time.\n\n\\begin{theorem}\nFor an instance $I$ of $PD || \\sum C_j$, let $SWAG(I)$ denote the objective function value of SWAG applied to $I$, and let $OPT(I)$ denote the objective function value of an optimal solution to $I$. \nThen for all $L \\geq 1$, there exists an $I \\in \\Omega_{PD || \\sum C_j}$ such that $SWAG(I) \/ OPT(I) > L$.\n\\label{thm:swagBad}\n\\end{theorem}\n\\begin{proof}\nLet $L \\in \\mathbb{N}^+$ be a fixed but arbitrary constant. \nConstruct a problem instance $I_L^m$ as follows: \n\n$N = N_1 \\cup N_2$ where $N_1$ is a set of $m$ jobs, and $N_2$ is a set of $L$ jobs. \nJob $j \\in N_1$ has processing time $p$ on cluster $j$ and zero all other clusters. \nJob $j \\in N_2$ has processing time $p(1-\\epsilon)$ on all $m$ clusters. \n$\\epsilon$ is chosen so that $\\epsilon < 1\/L$\n(see Figure \\ref{fig:swag1}).\n\n\\begin{figure}[ht]\n\\includegraphics[width=\\linewidth]{swag.png}\n\\centering\n\\caption{At left, an input for SWAG example with $m=3$ and $L=2$. At right, SWAG's resulting schedule, and an alternative schedule.}\n\\label{fig:swag1}\n\\end{figure}\n\nIt is easy to verify that SWAG will generate a schedule where all jobs in $N_2$ precede all jobs in $N_1$ (due to the savings of $p \\epsilon$ for jobs in $N_2$). \nWe propose an \\textit{alternative} solution in which all jobs in $N_1$ preceed all jobs in $N_2$. 
\nDenote the objective value for this alternative solution $ALT(I_L^m)$, noting $ALT(I_L^m) \\geq OPT(I_L^m)$.\n\nBy symmetry, and the fact that all clusters have a single machine, we can see that $SWAG(I_L^m)$ and $ALT(I_L^m)$ are given by the following\n\\begin{align}\nSWAG(I_L^m) &= p(1-\\epsilon)L(L+1)\/2 + p(1-\\epsilon)L m + p m \\\\\nALT(I_L^m) &= p(1-\\epsilon)L(L + 1)\/2 + pL + p m\n\\end{align}\nSince $L$ is fixed, we can take the limit with respect to $m$.\n\\begin{align}\n\\lim_{m \\rightarrow \\infty}{\\frac{SWAG(I_L^m)}{ALT(I_L^m)}} \n\t&= \\lim_{m \\rightarrow \\infty}{\\frac{p(1-\\epsilon)L m + p m}{p m}} = L(1-\\epsilon) + 1 > L\n\\end{align}\nThe above implies the existence of a sufficiently large number of clusters $\\overline{m}$, such that $m \\geq \\overline{m}$ implies $ SWAG(I_L^{m})\/OPT(I_L^{m}) > L $. This completes our proof.\n\\end{proof}\nTheorem \\ref{thm:swagBad} demonstrates that that although SWAG performed well in simulations, it may not be reliable. \nThe rest of this section introduces an algorithm not only with superior runtime to SWAG (generating a permutation of jobs in $O(n^2 + nm)$ time, rather than $O(n^2m)$ time), but also a constant-factor performance guarantee.\n\n\\subsection{CC-TSPT : A Fast 2 + R Approximation}\\label{subsec:fastreduction}\nOur combinatorial algorithm for concurrent cluster scheduling exploits an elegant transformation to concurrent open shop. \nOnce we consider this simpler problem, it can be handled with MUSSQ \\cite{mqssu} and List-LPT. \nOur contributions are twofold: (1) we prove that this intuitive technique yields an approximation algorithm for a decidedly more general problem, and (2) we show that a \\textit{non-intuitive} modification can be made that maintains theoretical bounds while improving empirical performance. \nWe begin by defining our transformation.\n\n\\begin{definition}[The Total Scaled Processing Time (TSPT) Transformation]\nLet $\\Omega_{CC}$ be the set of all instances of $CC || \\sum w_j C_j$, \nand let $\\Omega_{PD}$ be the set of all instances of \n$PD || \\sum w_j C_j$. Note that $\\Omega_{PD} \\subset \\Omega_{CC}$.\nThen the Total Scaled Processing Time Transformation is a mapping\n\\begin{align*}\nTSPT: ~\\Omega_{CC} \\to \\Omega_{PD} \\quad \\text{ with } \\quad (T, v, w) &\\mapsto (X, w) ~:~ x_{ji} = \\textstyle\\sum_{t \\in T_{ji}} p_{jit} \/ \\mu_i\n\\end{align*}\ni.e., $x_{ji}$ is the total processing time required by subjob $(j,i)$, scaled by the sum of machine speeds at cluster $i$. \nThroughout this section, we will use $I = (T, v, w)$ to denote an arbitrary instance of $CC || \\sum w_j C_j$, and $I' = (X, w)$ as the image of $I$ under TSPT. \nFigure \\ref{fig:tspt} shows the result of TSPT applied to our baseline example. \n\\end{definition}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{tspt.png}\n\\caption{An instance $I$ of $CC||\\sum w_j C_j$, and its image $I' = TSPT(I)$. The schedules were constructed with List-LPT using the same permutation for $I$ and $I'$. }\n\\label{fig:tspt}\n\\end{figure}\n\nWe take the time to emphasize the simplicity of our reduction. Indeed, the TSPT transformation is perhaps the first thing one would think of given knowledge of the concurrent open shop problem. What is surprising is how one can attain constant-factor performance guarantees even after such a simple transformation.\n\n\\vspace{1em}\n\\noindent \\textbf{Algorithm CC-TSPT} : Execute MUSSQ on $I'= TSPT(I)$ to generate a permutation of jobs $\\sigma$. 
List schedule instance $I$ by \n$\\sigma$ on each cluster according to List-LPT.\n\\vspace{1em}\n\nTowards proving the approximation ratio for CC-TSPT, we will establish a critical inequality in Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT}. \nThe intuition behind Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} requires thinking of every job $j$ in $I$ as having a corresponding representation in $j'$ in $I'$. \nJob $j$ in $I$ will be scheduled in the $CC$ environment, while job $j'$ in $I'$ will be scheduled in the $PD$ environment. \nWe consider what results when the same permutation $\\sigma$ is used for scheduling in both environments. \n\nNow the definitions for the lemma: let $C^{CC}_{\\sigma(j)}$ be the completion time of job $\\sigma(j)$ resulting from List-LPT on an arbitrary permutation $\\sigma$. \nDefine $C^{CC\\star}_{\\sigma(j)}$ as the completion time of job $\\sigma(j)$ in the $CC$ environment in the optimal solution. \nLastly, define $C^{PD,I'}_{\\sigma(j')}$ as the completion time of job $\\sigma(j')$ in $I'$ when scheduling by List-LPT($\\sigma$) in the $PD$ environment.\n\n\\begin{lemma}\nFor $I' = TSPT(I)$, let $j'$ be the job in $I'$ corresponding to job $j$ in $I$. \nFor an arbitrary permutation of jobs $\\sigma$, we have $C^{CC}_{\\sigma(j)} \\leq C^{PD,I'}_{\\sigma(j')} + R\\cdot C^{CC\\star}_{\\sigma(j)}$. \\label{lem:TSPTboundInTermsOfPDandOPT}\n\\end{lemma}\n\\begin{proof}\nAfter list scheduling has been carried out in the $CC$ environment, we may determine $C^{CC}_{\\sigma(j)i}$ - the completion time of subjob $(\\sigma(j),i)$. \nWe can bound $C^{CC}_{\\sigma(j)i}$ using Lemma \\ref{lem:CompTimesOnUniformMachines} (which implies \\eqref{eqInLem:TSPTbound1}), and the serial-processing nature of the $PD$ environment (which implies \\eqref{eqInLem:TSPTbound2}).\n\\begin{align}\n& C^{CC}_{\\sigma(j)i} \\leq p_{\\sigma(j)i1}\\left(1\/\\bar{v} - 1\/\\mu_i\\right) + \\textstyle\\sum_{\\ell = 1}^{j} p_{\\sigma(\\ell)i}\/\\mu_i \\label{eqInLem:TSPTbound1} \\\\\n& \\textstyle\\sum_{\\ell = 1}^j p_{\\sigma(\\ell)i}\/\\mu_i \\leq C^{PD,I'}_{\\sigma(j')} \\quad \\forall ~i \\in M \\label{eqInLem:TSPTbound2}\n\\end{align}\nIf we relax the bound given in Inequality \\eqref{eqInLem:TSPTbound1} and combine it with Inequality \\eqref{eqInLem:TSPTbound2}, we see that $C^{CC}_{\\sigma(j)i} \\leq C^{PD,I'}_{\\sigma(j')} + p_{\\sigma(j)i1}\/\\bar{v}$. \nThe last step is to replace the final term with something more meaningful. Using $p_{\\sigma(j)1}\/\\bar{v} \\leq R \\cdot C^{CC\\star}_{\\sigma(j)}$ (which is immediate from the definition of $R$) the desired result follows.\n\\end{proof}\nWhile Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} is true for arbitrary $\\sigma$, now we consider $\\sigma = MUSSQ(X, w)$. \nThe proof of MUSSQ's correctness established the first inequality in the chain of inequalities below. \nThe second inequality can be seen by substituting $p_{ji} \/ \\mu_{i}$ for $x_{ji}$ in LP0($I'$) (this shows that the constraints in LP0($I'$) are weaker than those in LP1($I$)). 
\nThe third inequality follows from the Validity Lemma.\n\\begin{equation}\n\\textstyle\\sum_{j \\in N} w_{\\sigma(j)} C^{PD,I'}_{\\sigma(j)} \n\t\\leq 2 \\textstyle\\sum_{j \\in N} w_j C^{\\text{LP0}(I')}_j\n\t\\leq 2 \\textstyle\\sum_{j \\in N} w_j C^{\\text{LP1}(I)}_j\n\t\\leq 2 OPT(I) \\label{eq:tsptCore}\n\\end{equation}\nCombining Inequality \\eqref{eq:tsptCore} with Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} allows us to bound the objective in a way that does not make reference to $I'$.\n\\begin{equation}\n\\textstyle\\sum_{j \\in N} w_{\\sigma(j)}C^{CC}_{\\sigma(j)} \n\t\\leq \\textstyle\\sum_{j \\in N} w_{\\sigma(j)}\\left[C^{PD,I'}_{\\sigma(j)} + R\\cdot C^{CC\\star}_{\\sigma(j)}\\right] \\leq ~ 2 \\cdot OPT(I) + R \\cdot OPT(I) \\label{eq:dontReferenceIPrime}\n\\end{equation}\nInequality \\eqref{eq:dontReferenceIPrime} completes our proof of the following theorem.\n\\begin{theorem}\nAlgorithm CC-TSPT is a $2 + R$ approximation for $CC || \\sum w_j C_j$.\n\\label{thm:algCCTspt}\n\\end{theorem}\n\n\\subsection{CC-TSPT with Unit Tasks and Identical Machines}\\label{subsec:tsptOnUnitTasks}\nConsider concurrent cluster scheduling with $v_{\\ell i} = p_{jit} = 1$ (i.e., all processing times are unit, although the size of the collections $T_{ji}$ are unrestricted). In keeping with the work of Zhang, Wu, and Li \\cite{zwl} (who studied this problem in the single-cluster case), we call instances with these parameters ``fully parallelizable,'' and write $\\beta = fps$ for Graham's $\\alpha|\\beta|\\gamma$ taxonomy.\n\nZhang et al. showed that scheduling jobs greedily by ``Largest Ratio First'' (decreasing $w_j \/ p_{j}$) results in a 2-approximation, where 2 is a tight bound. \nThis comes as something of a surprise since the Largest Ratio First policy is \\textit{optimal} for $1||\\sum w_j C_j~$- which their problem very closely resembles. \nWe now formalize the extent to which $P|fps|\\sum w_j C_j$ resembles $1||\\sum w_j C_j~$: define the \\textit{time resolution} of an instance $I$ of $CC |fps| \\sum w_jC_j$ as $ \\rho_I = \\min_{j \\in N, i \\in M}{\\big\\lceil{p_{ji}\/m_i}\\big\\rceil}$. \nIndeed, one can show that as the time resolution increases, the performance guarantee for LRF on $P | fps | \\sum w_j C_j$ approaches that of LRF on $1||\\sum w_j C_j$. \nWe prove the analogous result for our problem. \n\\begin{theorem}\nCC-TSPT for $CC |fps| \\sum w_jC_j$ is a $(2 + 1\/\\rho_I)-$approximation.\n\\label{thm:tspt_unit_tasks}\n\\end{theorem}\n\\begin{proof}\nApplying techniques from the proof of Lemma \\ref{lem:TSPTboundInTermsOfPDandOPT} under the hypothesis of this theorem, we have $C^{CC}_{\\sigma(j), i} \\leq C^{PD,I'}_{\\sigma(j)} + 1$. \nNext, use the fact that for all $j \\in N$, $C^{CC,OPT}_{\\sigma(j)} \\geq \\rho_I$ by the definition of $\\rho_I$. These facts together imply\n$C^{CC}_{\\sigma(j), i} \\leq C^{PD,I'}_{\\sigma(j)} + C^{CC,OPT} \/ \\rho_I$. Thus\n\\begin{align}\n\\textstyle\\sum_{j \\in N} w_j C^{CC}_{\\sigma(j)} \n\t&\\leq \\textstyle\\sum_{j \\in N} w_j \\left[C^{PD,I'}_{\\sigma(j)} + C^{CC,OPT} \/ \\rho_I\\right] \\leq 2 \\cdot OPT + OPT \/ \\rho_I.\n\\end{align}\n\\end{proof}\n\n\\subsection{CC-ATSPT : Augmenting the LP Relaxation}\nThe proof of Theorem \\ref{thm:algCCTspt} appeals to a trivial lower bound on $C^{CC\\star}_{\\sigma(j)}$, namely $p_{\\sigma(j)1}\/\\bar{v} \\leq R \\cdot C^{CC\\star}_{\\sigma(j)}$. 
We attain constant-factor performance guarantees in spite of this, but it is natural to wonder how the \\textit{need} for such a bound might come hand-in-hand with empirical weaknesses. Indeed, TSPT can make subjobs consisting of many small tasks look the same as subjobs consisting of a single very long task.\nAdditionally, a cluster hosting a subjob with a single extremely long task might be identified as a bottleneck by MUSSQ, even if that cluster has more machines than it does tasks to process.\n\nWe would like to mitigate these issues by introducing the simple lower bounds on $C_j$ as seen in constraints $(1B)$ and $(1C)$. This is complicated by the fact that MUSSQ's proof of correctness only allows constraints of the form in $(1A)$. For $I \\in \\Omega_{PD}$ this is without loss of generality, since $|S| = 1$ in LP0 implies $C_j \\geq p_{ji}$, but since we apply LP0 to $I' = TSPT(I)$, $C_j \\geq x_{ji}$ is equivalent to $C_j \\geq p_{ji}\/\\mu_i$ (a much weaker bound than we desire). \n\nNevertheless, we can bypass this issue by introducing additional clusters and appropriately defined subjobs. We formalize this with the ``Augmented Total Scaled Processing Time'' (ATSPT) transformation. \nConceptually, ATSPT creates $n$ ``imaginary clusters'', where each imaginary cluster has nonzero processing time for exactly one job.\n\\begin{definition}[The Augmented TSPT Transformation]\nLet $\\Omega_{CC}$ and $\\Omega_{PD}$ be as in the definition for TSPT. Then the Augmented TSPT Transformation is likewise a mapping\n\\begin{align*}\nATSPT: ~\\Omega_{CC} \\to \\Omega_{PD} \\quad \\text{ with } \\quad (T, v, w) &\\mapsto (X, w) ~:~ X = \\big[\\begin{array}{c|c} X_{TSPT(I)} & D \\end{array}\\big].\n\\end{align*}\nWhere $D \\in \\mathbb{R}^{n \\times n}$ is a diagonal matrix with $d_{jj}$ as any valid lower bound on the completion time of job $j$ (such as the right hand sides of constraints ($1B$) and ($1C$) of LP1).\n\\end{definition}\nGiven that $d_{jj}$ is a valid lower bound on the completion time of job $j$, it is easy to verify that for $I' = ATSPT(I)$, LP1($I'$) is a valid relaxation of $I$. \nBecause MUSSQ returns a permutation of jobs for use in list scheduling by List-LPT, these ``imaginary clusters'' needn't be accounted for beyond the computations in MUSSQ.\n\n\\section{A Reduction for Minimizing Total Weighted Lateness on Identical Parallel Machines }\\label{sec:relationshipsBetweenProbs}\nThe problem of minimizing total weighted lateness on a bank of identical parallel machines is typically denoted $P || \\sum w_jL_j$, where the lateness of a job with deadline $d_j$ is $L_j \\doteq \\max{\\{C_j - d_j, 0\\}}$. The reduction we offer below shows that $P || \\sum w_j L_j$ can be stated in terms of $CC || \\sum w_jC_j$ \\textit{at optimality}. Thus while a $\\Delta$ approximation to $CC || \\sum w_jC_j$ does not imply a $\\Delta$ approximation to $P || \\sum w_j L_j$, the reduction below nevertheless provides new insights on the structure of $P || \\sum w_j L_j$.\n\n\\begin{definition}[Total Weighted Lateness Reduction]\nLet $I = (p, d, w, m)$ denote an instance of $P || \\sum w_j L_j$. \n$p$ is the set of processing times, $d$ is the set of deadlines, $w$ is the set of weights, \nand $m$ is the number of identical parallel machines. \nGiven these inputs, we transform $I \\in \\Omega_{P || \\sum w_j L_j}$ \nto $I' \\in \\Omega_{CC}$ in the following way.\n\nCreate a total of $n + 1$ clusters. Cluster 0 has $m$ machines. 
Job $j$ has processing time $p_j$ on this cluster, and $|T_{j0}| = 1$. Clusters 1 through $n$ each consist of a single machine. Job $j$ has processing time $d_j$ on cluster $j$, and zero on all clusters other than cluster 0 and cluster $j$. Denote this problem $I'$.\n\\end{definition}\nWe refer the reader to Figure \\ref{fig:probstm3and4} for an example output of this reduction.\n\\begin{theorem}\nLet $I$ be an instance of $P || \\textstyle\\sum w_j L_j$. Let $I'$ be an instance of $CC|| \\sum w_j C_j$ resulting from the transformation described above. Any list schedule $\\sigma$ that is optimal for $I'$ is also optimal for $I$.\n\\end{theorem}\n\\begin{proof}\nIf we restrict the solution space of $I'$ to single permutations (which we may do without loss of generality), then any schedule $\\sigma$ for $I$ or $I'$ produces the same value of $\\sum_{j \\in N} w_j(C_j - d_j)^+$ for $I$ and $I'$.\nThe additional clusters we added for $I'$ ensure that $C_j \\geq d_j$. Given this, the objective for $I$ can be written as $\\sum_{j \\in N} w_j d_j + w_j(C_j - d_j)^+$. Because $w_j d_j$ is a constant, any permutation to solve $I'$ optimally also solves $\\sum_{j \\in N} w_j (C_j - d_j)^+$ optimally. Since $\\sum_{j \\in N} w_j (C_j - d_j)^+ = \\sum_{j \\in N} w_j L_j$, we have the desired result.\n\\end{proof}\n\\section{Closing Remarks}\\label{sec:discAndConc}\nWe now take a moment to address a subtle issue in the concurrent cluster problem: what price do we pay for using the same permutation on all clusters (i.e. single-$\\sigma$ schedules)? For concurrent open shop, it has been shown (\\cite{Sris1993, mqssu}) that single-$\\sigma$ schedules may be assumed without loss of optimality. As is shown in Figure \\ref{fig:singleVsMultiPerm}, this does \\textit{not} hold for concurrent cluster scheduling in the general case. In fact, that is precisely why the strong performance guarantees for algorithm CC-LP rely on clusters having possibly unique permutations.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.7\\textwidth]{singleVsMultiPerm.png}\n\\caption{An instance of $CC||\\sum C_j$ (i.e. $w_j \\equiv 1$) for which there does not exist a single-$\\sigma$ schedule which attains the optimal objective value. In the single-$\\sigma$ case, one of the jobs necessarily becomes delayed by one time unit compared to the multi-$\\sigma$ case. As a result, we see a 20\\% optimality gap even when $v_{\\ell i } \\equiv 1$.}\\label{fig:singleVsMultiPerm}\n\\centering\n\\end{figure}\n\nOur more novel contributions came in our analysis for CC-TSPT and CC-ATSPT. First, we could not rely on the processing time of the last task for a job to be bounded above by the job's completion time variable $C_j$ in LP0($I'$), and so we appealed to a lower bound on $C_j$ that was not stated in the LP itself. The need to incorporate this second bound is critical in realizing the strength of algorithm CC-TSPT, and uncommon in LP rounding schemes. Second, CC-ATSPT is novel in that it introduces constraints that would be redundant for LP0($I$) when $I \\in \\Omega_{PD}$, but become relevant when viewing $LP0(I')$ as a relaxation for $I \\in \\Omega_{CC}$. This approach has potential for more broad applications since it represented effective use of a limited constraint set supported by a known primal-dual algorithm.\n\nWe now take a moment to state some open problems in this area. 
One topic of ongoing research is developing a factor 2 purely combinatorial algorithm for the special case of concurrent cluster scheduling considered in Theorem \\ref{thm:identLP_2appxWithConstantTasks}. In addition, it would be of broad interest to determine the worst-case loss to optimality incurred by assuming single-permutation schedules for $CC|v\\equiv 1|\\sum w_j C_j$. The simple example above shows that an optimal single-$\\sigma$ schedule can have objective 1.2 times the globally optimal objective. Meanwhile, Theorem \\ref{thm:algCCTspt} shows that there always exists a single-$\\sigma$ schedule with objective no more than 3 times the globally optimal objective. Thus, we know that the worst-case performance ratio is in the interval $[1.2,3]$, but we do not know its precise value. As a matter outside of scheduling theory, it would be valuable to survey primal-dual algorithms with roots in LP relaxations to determine which have constraint sets that are amenable to implicit modification, as in the fashion of CC-ATSPT.\n\n\\subparagraph*{Acknowledgments.}\n\nSpecial thanks to Andreas Schulz for sharing some of his recent work with us \\cite{Schulz2012}. His thorough analysis of a linear program for $P||\\sum w_j C_j$ drives the LP-based results in this paper. Thanks also to Chien-Chung Hung and Leana Golubchik for sharing \\cite{HGY} while it was under review, and to Ioana Bercea and Manish Purohit for their insights on SWAG's performance. Lastly, our sincere thanks to William Gasarch for organizing the REU which led to this work, and to the 2015 CAAR-REU cohort for making the experience an unforgettable one; in the words of Rick Sanchez \\textit{wubalubadubdub!}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Standard Model is flawed by the large number of free parameters, for which there is at present no explanation.\nThere is no prediction of the family replication pattern, nor of the number of families. All the families are really treated on the same footing.\nMost of the Standard Model free parameters reside in ``flavour space'' - with six quark masses, six lepton masses, four quark mixing angles and ditto for the leptonic sector, as well as the strong CP-violating parameter $\\bar{\\Theta}$. \nThe structure of flavour space is determined by the fermion mass matrices, i.e. by the form that the mass matrices take in the ``weak interaction basis'' where mixed fermion states interact weakly, in contrast to the ``mass bases'', where the mass matrices are diagonal.\n\nOne may wonder how one may ascribe such importance to the different bases in flavour space, considering that\nthe information content of a matrix is contained in its matrix invariants, which in the case of a $N\\times N$ matrix $M$ are the $N$ sums and products of the eigenvalues $\\lambda_j$, such as $trace M$, $detM$,\n\\begin{equation}\n \\def1.1{1.1}\n \\begin{array}{r@{\\;}l} \nI_1 = &\\sum_j\\lambda_j = \\lambda_1+\\lambda_2+\\lambda_3...\\\\\n\nI_2 = &\\sum_{jk}\\lambda_j\\lambda_k = \\lambda_1\\lambda_2+\\lambda_1\\lambda_3+\\lambda_1\\lambda_4+... 
\\\\\n\nI_3 = &\\sum_{jkl}\\lambda_j\\lambda_k\\lambda_l = \\lambda_1\\lambda_2\\lambda_3+\\lambda_1\\lambda_2\\lambda_4+...\\\\\n & \\vdots\\\\\nI_N = &\\lambda_1\\lambda_2 \\cdots \\lambda_N \n\\end{array}\n\\end{equation} \nThese expressions are invariant under permutations of the eigenvalues, which in the context of mass matrices means that they are flavour symmetric, and obviously independent of any choice of flavour space basis.\n\nEven if the information content of a matrix is contained in its invariants,\nthe form of a matrix may also carry information, albeit of another type. The idea - the hope - is that the form that the mass matrices have in the weak interaction basis can give some hint about the origin of the unruly masses. There is a certain circularity to this reasoning; to make a mass matrix ansatz is in fact to define what we take as the weak interaction basis in flavour space.\nWe denote the quark mass matrices of the up- and down-sectors in the weak interaction basis by $M$ and $M'$, respectively. \nWe go from the weak interaction basis to the mass bases by rotating the matrices by the unitary matrices $U$ and $U'$,\n\\begin{equation}\\label{mss}\nM \\rightarrow UMU^{\\dagger} = D = diag(m_u,m_c,m_t)\n\\end{equation}\n\\[\nM' \\rightarrow U'M'U'^{\\dagger} = D' = diag(m_d,m_s,m_b)\\\\\n\\]\n \\begin{figure}[htb]\n \\begin{center}\n \\includegraphics[scale=0.87]{b.png}\n\\end{center}\n \\end{figure}\n\nThe lodestar in the hunt for the right mass matrices is the family hierarchy, with two lighter particles in the first and second family, and a much heavier particle in the third family. This hierarchy is present in all the charged sectors, with fermions in different families exhibiting very different mass values, ranging from the electron mass to the about $10^{5}$ times larger top mass. It is still an open question whether the neutrino masses also follow this pattern \\cite{neutrino hier}.\n\n\n\n\n\\section{Democratic mass matrices}\nIn the democratic approach \\cite{demo}, \\cite{koide}, \\cite{Fritzsch} the family hierarchy is taken very seriously. It is assumed that in the weak basis the fermion mass matrices have a form close to the $S(3)_L \\times S(3)_R $ symmetric ``democratic'' matrix \n\\begin{equation}\\label{nambu}\n{\\bf{N}}= k\\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n\\end{pmatrix}\n\\end{equation}\nwith the eigenvalues $(0,0,3k)$, reflecting the family hierarchy.\n\nThe underlying philosophy is that in the Standard Model, where the fermions get their masses from the Yukawa couplings by the Higgs mechanism, there is no reason why there should be a different Yukawa coupling for each fermion. \nThe couplings to the gauge bosons of the strong, weak and\nelectromagnetic interactions are identical for all the fermions in a given charge sector, it thus seems like a natural assumption that they should also have identical Yukawa couplings.\nThe difference is that the weak interactions take place in a specific flavour space basis, while the other interactions are flavour independent. \n\nThe democratic assumption is thus that the fermion fields of the same charge initially have the same Yukawa couplings. 
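The spectrum quoted for the democratic matrix is easy to check numerically. The short NumPy sketch below is purely illustrative and not part of the argument: it confirms that the matrix with all entries equal to k has all eigenvalues vanishing except a single eigenvalue nk, which for three families is the (0, 0, 3k) spectrum quoted above.
\begin{verbatim}
import numpy as np

def democratic_spectrum(n, k=1.0):
    """Eigenvalues of the n x n democratic matrix k * ones((n, n))."""
    return np.linalg.eigvalsh(k * np.ones((n, n)))

print(democratic_spectrum(3, k=2.5))   # ~ [0, 0, 7.5]    i.e. (0, 0, 3k)
print(democratic_spectrum(4, k=2.5))   # ~ [0, 0, 0, 10.]  i.e. (0, ..., 0, nk)
\end{verbatim}
The same check applies separately to each charge sector, since only the overall coupling k differs between them.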
\nWith three families, the quark mass matrices in the weak interaction basis then have the (zeroth order) form\n\\begin{equation}\\label{nambu2}\nM^{(0)}= k_u\\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n \\end{pmatrix},\\hspace{2cm} \nM'^{(0)}= k_d\\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n \\end{pmatrix}\n\\end{equation}\nwhere $k_u$ and $k_d$ have dimension mass. \nThe corresponding mass spectra $(m_1,m_2,m_3) \\sim (0,0,3k_j)$ reflect\nthe family hierarchy with two light families and a third much heavier family, a mass hierarchy that can be interpreted as the representation ${\\bf{1}}\\oplus {\\bf{2}}$ of $S(3)$.\nIn order to obtain realistic mass spectra with non-zero masses, the $S(3)_L \\times S(3)_R $ symmetry must obviously be broken, \nand the different democratic matrix ans\\\"{a}tze correspond to different schemes for breaking the democratic symmetry.\n\n\\subsection{The lepton sector}\nWe can apply the democratic approach to the lepton sector as well, postulating democratic (zeroth order) mass matrices for the charged leptons and the neutrinos, whether they are Fermi-Dirac or Majorana states, \n\\begin{equation}\\label{leptons}\nM^{(0)}_l= k_l\\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n \\end{pmatrix},\\hspace{2cm} \nM^{(0)}_{\\nu}= k_{\\nu}\\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n \\end{pmatrix}\n\\end{equation}\nRelative to the quark ratio \n$k_u\/k_d \\sim m_t\/m_b \\sim 40 - 60$, the leptonic ratio $k_{\\nu}\/k_l < 10^{-8}$ is so extremely small that it seems unnatural. One way out is to simply assume that $k_{\\nu}$ vanishes, meaning that the neutrinos get no mass contribution in the democratic limit \\cite{Frit}. According to the democratic philosophy, then there would be no reason for a hierarchical pattern \\`{a} la the one observed in the charged sectors; the neutrino masses could even be of the same order of magnitude.\n\nData are indeed compatible with a much weaker hierarchical structure for the neutrino masses than the hierarchy displayed by the charged fermion masses. \n\n\nUnlike the situation for the quark mixing angles, in lepton flavour mixing there are two quite large mixing angles and a third much smaller mixing angle, these large mixing angles can be interpreted as indicating weak hierachy of the neutrino mass spectrum. The neutrino mass spectrum hierarchy could even be inverted; \nif the solar\nneutrino doublet $(\\nu_1,\\nu_2)$ has a mean mass larger than the remaining\natmospheric neutrino $\\nu_3$, the hierarchy is called\n\"inverted\", otherwise it is called \"normal\". 
\n\nSupposing that the neutrino masses do not emerge from a democratic scheme, a (relatively) flat neutrino mass spectrum could be taken as a support for the idea that the masses in the charged sectors emerge from a democratic scheme.\n\n\n\n\\section{The democratic basis}\nIn the case that both the up- and down-sector mass matrices have a purely democratic texture, the quark mixing matrix is $V = UU'^{\\dagger} = U_{dem}U_{dem}^{\\dagger} = {\\bf{1}}$, where \n\\begin{equation}\\label{dem}\nU_{dem} =\\frac{1}{\\sqrt{6}}\n \\begin{pmatrix}\n \\sqrt{3} & -\\sqrt{3} & \\hspace{2mm} 0 \\\\\n 1 & 1 & -2 \\\\\n \\sqrt{2} & \\sqrt{2} & \\sqrt{2}\n \\end{pmatrix}\n\\end{equation}\nis the unitary matrix that diagonalizes the democratic matrix (\\ref{nambu}).\n \nWe use this to define the democratic basis, meaning the flavour space basis where the mass matrices are diagonalized by (\\ref{dem}) and the mass Lagrangian is symmetric under permutations of the fermion fields $(\\varphi_1,\\varphi_2,\\varphi_3)$ of a given charge sector.\n\nIn the democratic basis the mass Lagrangian\n\\[\n{\\mathcal{L}}_m =\\bar{\\varphi} M_{(dem)} \\varphi =k\\sum_{jk=1}^3 \\bar{\\varphi}_j \\varphi_k\n\\]\nis symmetric under permutations of the fermion fields $(\\varphi_1,\\varphi_2,\\varphi_3)$, while \nin the mass basis\nwith \n\\[\nM_{(mass)} \n=\\begin{pmatrix}\n \\lambda_1\\\\\n& \\lambda_2\\\\\n&& \\lambda_3\\\\\n\\end{pmatrix}\n\\]\n the mass Lagrangian has the form\n \\begin{equation} \n{\\mathcal{L}}_m =\\lambda_1 \\bar{\\psi}_1 \\psi_1+\\lambda_2 \\bar{\\psi}_2 \\psi_2+\\lambda_3 \\bar{\\psi}_3 \\psi_3\n\\end{equation} \nwhich is clearly not invariant under permutations of ($\\psi_1$,$\\psi_3$,$\\psi_3$). \n\nWe can perform a shift of the democratic matrix, by just adding a unit matrix $diag(a,a,a)$, \n$M_0 \\rightarrow M_1$, \n\\begin{equation}\\label{pert}\nM_1 =\nk \\begin{pmatrix}\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n 1 & 1 & 1\\\\\n \\end{pmatrix}+\n\\begin{pmatrix}\n a\\\\\n&a\\\\\n&&a\\\\\n\\end{pmatrix}=\n\\begin{pmatrix}\n k+a & k & k\\\\\n k & k+a & k\\\\\n k & k & k+a\\\\\n \\end{pmatrix}\n\\end{equation}\ncorresponding to the mass spectrum $(a,a,3a+3k)$. The matrix\n$M_1$ has a democratic texture, both because it is diagonalized by $U_{dem}$, and because the mass Lagrangian is invariant under permutations of the quark fields,\n\\begin{equation}{\\mathcal{L}}_{M_1} = (k+a)\\sum \\bar{\\varphi}_j \\varphi_j+ k \\sum_{j\\neq k} \\bar{\\varphi}_j \\varphi_j\n\\end{equation}\n\nIf $M_1$ and $M'_1$ both have a texture like (\\ref{pert}), there is no CP-violation. This is independent of how many families there are, because of the degeneracy of the mass values. \nCP-violation only occurs once there are three or more non-degenerate families, because only then the phases can no longer be defined away.\n\nWe can repeat the democratic scheme with a number $n$ of families, where the fermion mass matrices again are proportional to the $S(n)_L \\times S(n)_R $ symmetric democratic matrix \nwhich is diagonalized by a unitary matrix analogous to $U_{dem}$ in (\\ref{dem}). 
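These statements can be checked numerically for any number of families. In the sketch below, which is illustrative only and not part of the original argument, the role of the diagonalizing matrix is played by an orthogonal matrix whose last row is the uniform unit vector; for three families this construction coincides with the democratic diagonalizing matrix written above.
\begin{verbatim}
import numpy as np

def uniform_last_row_basis(n):
    """Orthogonal matrix whose last row is (1, ..., 1)/sqrt(n); any such
    matrix diagonalizes the n x n democratic matrix."""
    U = np.zeros((n, n))
    for i in range(1, n):
        U[i - 1, :i] = 1.0
        U[i - 1, i] = -float(i)
        U[i - 1] /= np.sqrt(i * (i + 1))
    U[n - 1] = 1.0 / np.sqrt(n)
    return U

n, k = 3, 1.7
U = uniform_last_row_basis(n)
print(np.round(U @ (k * np.ones((n, n))) @ U.T, 10))  # diag(0, ..., 0, n*k)
\end{verbatim}
Because adding a multiple of the identity merely shifts every eigenvalue by the same amount, the same matrix also diagonalizes the shifted mass matrix considered above.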
\nTo the $n\\times n$-dimensional democratic matrix term, we can again add a $n\\times n$-dimensional diagonal matrix $diag(a,a,...,a)$, and get a $n\\times n$-dimensional mass spectrum \nwith $n$ massive states, and $n-1$ degenerate masses.\nThe mass matrix still has a democratic texture, and there is still no CP-violation.\n\n\n\\section{Breaking the democratic symmetry}\nIn order to obtain non-degenerate, non-vanishing masses for the physical flavours $(\\psi_1,\\psi_2,\\psi_3)$, the permutation symmetry of the fermion fields $(\\varphi_1,\\varphi_2,\\varphi_3)$ in the democratic basis must be broken. The proposal here is to derive mass matrices with a nearly democratic texture, not by explicitly perturbing the assumed initial democratic form (\\ref{nambu}), but instead by perturbing the matrix $U_{dem}$ which diagonalises the democratic mass matrix. This is done by deriving the unitary rotation matrices $U$, $U'$ for the up- and down- sectors, from a specific parameterisation of the weak mixing matrix $V = UU^{'\\dagger}$.\n\nThe idea is to embed the assumption of democratic symmetry into the Standard Model mixing matrix, by expressing the mixing matrix as a product\n\\begin{equation}\\label{UU} \nV=UU'^{\\dagger}= ({\\tilde{U}}U_{dem})(U_{dem}^{\\dagger}{\\tilde{U'}}^{\\dagger})\n\\end{equation} \nSince both the mixing matrix and its factors, according to the standard parameterisation \\cite{Stand}, are so close to the unit matrix, the rotation matrices $U$, $U'$ are effectively perturbations of the democratic diagonalising matrix (\\ref{dem}). In this way, the weak interaction basis remains close to the democratic basis.\n\n\n\n\\subsection{Factorizing the mixing matrix} \nThe Cabbibo-Kobayashi-Maskawa (CKM) mixing matrix \\cite{CKM} can of course be parametrized - and factorized - in many different ways, and different factorizations correspond to different\nrotation matrices $U$ and $U'$.\nThe most obvious and ``symmetric'' factorization of the CKM mixing matrix is, following the standard parametrization \\cite{Stand} with three Euler angles $\\alpha$, $\\beta$, $2\\theta$,\n\\begin{equation}\nV =\\begin{pmatrix}\n c_{\\beta} c_{2\\theta} &s_{\\beta} c_{2\\theta} &s_{2\\theta} e^{-i\\delta}\\\\\n-c_{\\beta} s_{\\alpha} s_{2\\theta} e^{i\\delta}-s_{\\beta}c_{\\alpha}&-s_{\\beta}s_{\\alpha}s_{2\\theta} e^{i\\delta}+c_{\\beta}c_{\\alpha}&s_{\\alpha}c_{2\\theta}\\\\\n-c_{\\beta} c_{\\alpha} s_{2\\theta} e^{i\\delta}+s_{\\beta}s_{\\alpha}&-s_{\\beta}c_{\\alpha}s_{2\\theta} e^{i\\delta}-c_{\\beta}s_{\\alpha}&c_{\\alpha}c_{2\\theta}\\\\\n\\end{pmatrix}= UU^{'\\dagger}\n\\end{equation} \nwith the diagonalizing rotation matrices for the up- and down-sectors \n\\begin{equation}\\label{diagu}\nU =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 &\\cos\\alpha &\\sin\\alpha \\\\\n 0 &-\\sin\\alpha &\\cos\\alpha \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n e^{-i\\gamma} \\\\\n & 1\\\\\n && e^{i\\gamma} \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n \\cos\\theta & 0 & \\sin\\theta \\\\\n 0 & 1 & 0\\\\\n -\\sin\\theta & 0 & \\cos\\theta \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & \\hspace{2mm} 0 \\\\\n \\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}} & -\\frac{2}{\\sqrt{6}} \\\\\n \\frac{1}{\\sqrt{3}} &\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}}\n \\end{pmatrix}\n\\end{equation}\nand\n\\[\nU' =\n \\begin{pmatrix}\n \\cos\\beta &-\\sin\\beta &0\\\\\n \\sin\\beta &\\cos\\beta &0\\\\\n 0 & 0 &1\n \\end{pmatrix}\n\\begin{pmatrix}\n e^{-i\\gamma} \\\\\n & 1\\\\\n && e^{i\\gamma} 
\\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n \\cos\\theta & 0 & -\\sin\\theta \\\\\n 0 & 1 & 0\\\\\n \\sin\\theta & 0 & \\cos\\theta \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & \\hspace{2mm} 0 \\\\\n \\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}} & -\\frac{2}{\\sqrt{6}} \\\\\n \\frac{1}{\\sqrt{3}} &\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}}\n \\end{pmatrix},\n\\]\nrespectively,\nwhere $\\alpha$, $\\beta$, $\\theta$ and $\\gamma$ correspond to the parameters in the standard parametrization in such a way that\n$\\gamma = \\delta\/2$, $\\delta = 1.2 \\pm 0.08$ rad, and $2\\theta = 0.201 \\pm 0.011 ^{\\circ}$, while \n$\\alpha = 2.38 \\pm 0.06 ^{\\circ}$ \nand $\\beta = 13.04\\pm 0.05 ^{\\circ}$.\n\nFrom the rotation matrices $U$ and $U'$ we then obtain the mass matrices \n$M={U^{\\dagger}}diag(m_u,m_c,m_t)U$ and \n$M'={U'^{\\dagger}}diag(m_d,m_s,m_b)U'$, such that \n\\begin{equation}\\label{1}\nM=\\frac{1}{6}\\begin{pmatrix}\nX+H &\\hat{M}_{12} &Z+W\\\\\n\\hat{M}_{12}^*\\hspace{0.5mm} &X-H &Z-W\\\\\n\\hspace{0.5mm}Z^*+W^* &\\hspace{2mm}Z^*-W^* &6T-2X\\\\\n \\end{pmatrix} \n\\end{equation}\nwhere $T$ is the trace $T= m_u+m_c+m_t$, and with $D= \\sqrt{3}s_{\\theta} -\\sqrt{2}c_{\\theta}$, \n$ C= \\sqrt{3}s_{\\theta} +\\sqrt{2}c_{\\theta}$, \n$ F= c_{\\alpha}s_{\\alpha}(m_t-m_c)$,\n\\begin{description}\n\\item $ X=\\frac{1}{2}(m_cs_{\\alpha}^2+m_tc_{\\alpha}^2-m_u)(D^2+C^2-2)+F(D-C)\\cos\\gamma+T+3m_u$\n\\item $ H=\\frac{1}{2}(m_cs_{\\alpha}^2+m_tc_{\\alpha}^2-m_u)(D^2-C^2)+F\\cos\\gamma(D+C)$\n\\end{description}\n\\begin{description}\n\\item $W = \\frac{1}{4}(m_cs_{\\alpha}^2+m_tc_{\\alpha}^2-m_u)\\hspace{1mm}(D^2-C^2)-F\\hspace{1mm}(D+C)\\hspace{1mm}e^{-i\\gamma}$\n\\item $Z = (m_cs_{\\alpha}^2+m_tc_{\\alpha}^2-m_u)\\left[2+\\frac{1}{4}(D-C)^2\\right]\\hspace{1mm}+\\frac{F}{2}\\hspace{1mm}(D-C)\\hspace{1mm}(e^{i\\gamma}-2\\hspace{1mm}e^{-i\\gamma})-2T+6\\hspace{1mm}m_u$\n\\item $\\hat{M}_{12}= -(m_cs_{\\alpha}^2+m_tc_{\\alpha}^2-m_u)\\hspace{1mm}(D\\hspace{1mm}C+1)-F\\hspace{1mm}(C\\hspace{1mm}e^{i\\gamma}-D\\hspace{1mm}e^{-i\\gamma})+T-3\\hspace{1mm}m_u$ \n\\end{description}\nSimilarly for the down-sector,\n\\begin{equation}\\label{11}\nM'=\\frac{1}{6}\\begin{pmatrix}\nX'+H' &\\hat{M}'_{12} &Z'+W'\\\\\n\\hat{M}_{12}^{'*}\\hspace{0.5mm} &X'-H' &Z'-W'\\\\\n\\hspace{0.5mm}Z^{'*}+W^{'*} &\\hspace{2mm}Z^{'*}-W^{'*} &6T'-2X'\\\\\n \\end{pmatrix} \n\\end{equation}\nwith the parameters $T'= m_d+m_s+m_b$, $ G= \\sqrt{2}s_{\\theta} -\\sqrt{3}c_{\\theta}$, $ J= \\sqrt{2}s_{\\theta} +\\sqrt{3}c_{\\theta} $,\n\\begin{description}\n\\item$ F'= c_{\\beta}s_{\\beta}(m_b-m_s)$, and \n\\item $X'= \\frac{1}{2}(m_ss_{\\beta}^2+m_bc_{\\beta}^2-m_d)(G^2+J^2-2)-F'(J+G)\\cos\\gamma+T'+3m_b$\n\\item $H'= \\frac{1}{2}(m_ss_{\\beta}^2+m_bc_{\\beta}^2-m_d)(G^2-J^2)+F'(J-G)\\cos\\gamma$\n\\item $W'= \\frac{1}{4}(m_ss_{\\beta}^2+m_bc_{\\beta}^2-m_d)(G^2-J^2)+F'(G-J)e^{i\\gamma}$\n\\item $Z'=(m_ss_{\\beta}^2+m_bc_{\\beta}^2-m_d)\\left[2+\\frac{1}{4}(J+G)^2\\right]+\\frac{F'}{2}(J+G)(2e^{i\\gamma}-e^{-i\\gamma})-2T'+6m_b$\n\\item $\\hat{M}'_{12}= (m_ss_{\\beta}^2+m_bc_{\\beta}^2-m_d)\\hspace{1mm}(G\\hspace{1mm}J-1)-F'\\hspace{1mm}(J\\hspace{1mm}e^{i\\gamma}-G\\hspace{1mm}e^{-i\\gamma})+T'-3\\hspace{1mm}m_b$ \n\\end{description}\n\nIn order to evaluate to what degree these rather opaque matrices are democratic, we calculate numerical matrix elements by inserting numerical mass values.\nFor the up-sector\nwe get the (nearly democratic) matrix 
texture\n\\begin{equation}\\label{CU}\nM=C_u\\left[\\begin{pmatrix}\n 1\\\\ \n & k\\hspace{1mm}e^{-i(\\mu+\\rho)} \\\\\n &&kp\\hspace{1mm} e^{-i\\mu} \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n1&1&1\\\\\n1&1&1\\\\\n1&1&1\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n 1\\\\\n & k\\hspace{1mm}e^{i(\\mu+\\rho)} \\\\\n &&kp\\hspace{1mm} e^{i\\mu} \\\\\n\\end{pmatrix}+\\Lambda \\right]\n\\end{equation}\nwhere the ``small'' matrix\n\\[\n\\Lambda=\n\\begin{pmatrix}\n0 & 0& 0\\\\\n0 &\\varepsilon &\\varepsilon'e^{-i\\rho}\\\\\n0 &\\varepsilon'e^{i\\rho}&\\eta\\\\\n\\end{pmatrix},\n\\]\nwith \n$\\varepsilon \\sim \\varepsilon' \\ll \\eta < k, p$, is what breaks the democratic symmetry, supplying the two lighter families with non-zero masses. With mass values calculated at $\\mu = M_Z$ (Jamin 2014) \\cite{Jamin}, \n\\begin{equation}\\label{jam1}\n(m_u(M_Z),m_c(M_Z),m_t(M_Z)) =(1.24, 624 , 171550 ) MeV,\n\\end{equation}\nwe get \n\\begin{description}\n\\item$\\mu \\sim 2.7895^o$,\\hspace{1mm} $\\rho \\sim 2.7852^o$,\\hspace{1mm}$C_u=54240.36$ MeV $\\approx m_t\/3 $,\nand\n\\item$k\\approx 1.00438$,\\hspace{1mm} $p\\approx 1.06646$,\\hspace{1mm}$\\varepsilon' \\approx 5.05\\hspace{1mm} 10^{-5}$,\n\\item$\\varepsilon \\approx 4.6\\hspace{1mm} 10^{-5} \\approx 2\\frac{m_u}{C_u}$,\\hspace{1mm}$\\eta = 1.815\\hspace{1mm} 10^{-2}\\approx \\frac{1}{2}\\frac{m_t}{C_u}\\frac{m_c}{C_u}$.\n\\end{description}\nFor the down-sector, with \n\\begin{equation}\\label{jam2}\n(m_d(M_Z), m_s(M_Z), m_b(M_Z))=(2.69, 53.8, 2850) MeV\n\\end{equation}\nwe get another democratic texture,\n\\begin{equation}\nM'=C_d\n\\begin{pmatrix}\nX+A & Ye^{-i\\tau} &\\hspace{2mm} e^{-i\\nu}\\\\ \nYe^{i\\tau} & X-A & (1+2A)e^{i\\kappa}\\\\ \ne^{i\\nu} & (1+2A)e^{-i\\kappa} &\\hspace{2mm} X+Y-A-1\\\\ \n\\end{pmatrix}\n\\end{equation}\nwhere \n\\begin{description}\n\\item$C_d=966.5 MeV$, $A=5.6\\hspace{1mm}10^{-3}$, $X=1.0362$, $Y=1.0305$, and\n$\\tau \\leq \\kappa \\sim 0.22^o$ $<$ $\\nu \\sim 0.226^o$.\n\\end{description}\nJust like in the up-sector mass matrix, the matrix elements in $M'$ display a nearly democratic texture. For both the up-sector and the down-sector the mass matrices are thus approximately democratic.\n\n\\section{Calculability}\nIn the mass matrix literature there is an emphasis on ``calculability''. The ideal is to obtain mass matrices that have a manageable form, but there is nothing that forces nature to serve us such user-friendly formalism. It is however tempting to speculate that there are relations between the elements that could make the democratic matrices more calculable, and in the search for matrices that are reasonably transparent and calculable, we look at a more radical factorization of the mixing matrix, viz. 
\n\\begin{equation}\\label{ddiagun}\nU =\n \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n 0 &\\cos\\alpha &\\sin\\alpha \\\\\n 0 &-\\sin\\alpha &\\cos\\alpha \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n \\cos\\omega & 0 & \\sin\\omega \\hspace{1mm}e^{-i\\delta} \\\\\n 0 & 1 & 0\\\\\n -\\sin\\omega \\hspace{1mm}e^{ i\\delta} & 0 & \\cos\\omega \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & \\hspace{2mm} 0 \\\\\n \\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}} & -\\frac{2}{\\sqrt{6}} \\\\\n \\frac{1}{\\sqrt{3}} &\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}}\n \\end{pmatrix}\n\\end{equation}\nand\n\\[\nU' =\n \\begin{pmatrix}\n \\cos\\beta &-\\sin\\beta &0\\\\\n \\sin\\beta &\\cos\\beta &0\\\\\n 0 & 0 &1\n \\end{pmatrix}\n \\begin{pmatrix}\n \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & \\hspace{2mm} 0 \\\\\n \\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}} & -\\frac{2}{\\sqrt{6}} \\\\\n \\frac{1}{\\sqrt{3}} &\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}}\n \\end{pmatrix}\n\\]\nwhere, as before, $\\delta = 1.2 \\pm 0.08$ rad, and $\\omega=2\\theta = 0.201 \\pm 0.011 ^{\\circ}$, while \n$\\alpha = 2.38 \\pm 0.06 ^{\\circ}$, \nand $\\beta = 13.04\\pm 0.05 ^{\\circ}$.\nThese rotation matrices are still ``perturbations'' of the democratic diagonalizing matrix (\\ref{dem}), and\nthe up-sector mass matrix has a texture similar to (\\ref{1}),\n\\begin{equation}\\label{4}\nM=\n\\frac{1}{6}\\begin{pmatrix}\nR+Q+S\\hspace{1mm}\\cos\\delta &R-Q-iS\\hspace{1mm}\\sin\\delta& A-Be^{-i\\delta}\\\\\nR-Q+iS\\hspace{1mm}\\sin\\delta&R+Q-S\\hspace{1mm}\\cos\\delta & A+Be^{-i\\delta}\\\\\nA-Be^{i\\delta} &A+Be^{i\\delta} &T-2(R+Q)\\\\\n \\end{pmatrix} \n\\end{equation}\nwhere $T$ is the trace, $T= m_u+m_c+m_t$, and\n\\begin{description}\n\\item $R= N \\hspace{1mm}(2\\hspace{1mm}c_{\\omega}^2-1)+T-2\\hspace{1mm}\\sqrt{2}\\hspace{1mm}c_{\\omega}\\hspace{1mm}F$, \\hspace{1mm} $Q= 3\\hspace{1mm}s_{\\omega}^2\\hspace{1mm}N+3\\hspace{1mm}m_u$, \n\\item$S= -2\\sqrt{6}\\hspace{1mm}c_{\\omega}\\hspace{1mm}s_{\\omega}\\hspace{1mm}N+2\\hspace{1mm}\\sqrt{3}s_{\\omega}\\hspace{1mm}F$ \n\\item $A= N\\hspace{1mm}(2\\hspace{1mm}c_{\\omega}^2+2)-2\\hspace{1mm}T+\\sqrt{2}\\hspace{1mm}c_{\\omega}\\hspace{1mm}F+6\\hspace{1mm}m_u$, \\hspace{1mm} $B= \\sqrt{6}\\hspace{1mm}c_{\\omega}\\hspace{1mm}s_{\\omega}\\hspace{1mm}N+2\\hspace{1mm}\\sqrt{3}\\hspace{1mm}F\\hspace{1mm}s_{\\omega}$ \n\\end{description}\nwith\n$N= m_c\\hspace{1mm}s_{\\alpha}^2+m_t\\hspace{1mm}c_{\\alpha}^2-m_u$,\\hspace{1mm} $F= c_{\\alpha}\\hspace{1mm}s_{\\alpha}\\hspace{1mm}(m_t-m_c)$.\nThis matrix can be reformulated in a form similar to (\\ref{CU}), \n\n\\[ \nM_u=\nC_u \\left[\\begin{pmatrix}\n 1\\\\\n & k\\hspace{1mm}e^{-i\\mu} \\\\\n &&kp\\hspace{1mm}e^{-i(\\mu-\\rho)} \\\\\n \\end{pmatrix}\n\\begin{pmatrix}\n1&1&1\\\\\n1&1&1\\\\\n1&1&1\\\\\n\\end{pmatrix}\n\\begin{pmatrix}\n 1\\\\\n & k\\hspace{1mm}e^{i\\mu} \\\\\n &&kp\\hspace{1mm}e^{i(\\mu-\\rho)} \\\\ \n\\end{pmatrix}+\\Lambda\\right]\n\\]\nwhere $C_u=R+Q+S\\cos\\delta$, $\\mu=\\arctan\\left[S\\sin\\delta\/(Q-R)\\right]$, $\\rho=\\arctan\\left(B\\sin\\delta\/(A+B\\cos\\delta)\\right)$, and\n\\[\n\\Lambda=\n\\begin{pmatrix}\n0 & 0& 0\\\\\n0 &\\varepsilon &\\varepsilon'e^{-i\\rho}\\\\\n0 &\\varepsilon'e^{i\\rho}&\\eta\\\\\n\\end{pmatrix}\n\\]\nwith \n\\begin{description}\n\\item $k=|M_{12}|\/M_{11}=\\frac{|R-Q-iS\\sin\\delta|}{R+Q+S\\cos\\delta}$,\\hspace{4mm}$p=|M_{13}|\/|M_{12}|=\\frac{|A-Be^{-i\\delta}|}{|R-Q-iS\\sin\\delta|}$,\n\\item 
$\\varepsilon=(|M_{22}||M_{11}|-|M_{12}|^2)\/|M_{11}|^2=\n\\frac{4RQ-S^2}{|R+Q+S\\cos\\delta|^2}$,\n\\item $\\varepsilon'=(|M_{23}||M_{11}|-|M_{13}||M_{12}|)\/|M_{11}|^2$,\n\\item$\\eta=(|M_{33}||M_{11}|-|M_{13}|^2)\/|M_{11}|^2$\n\\end{description}\nInserting the mass values (\\ref{jam1}) gives\n\\begin{description}\n\\item$C_u=53723.5 MeV$,\\hspace{1mm}$k=1.00318$,\\hspace{1mm} $p=1.0828$, and\n\\item $\\varepsilon\\approx4.65\\hspace{1mm}10^{-5} \\approx 2\\frac{m_u}{C_u}$,\\hspace{1mm} $\\varepsilon'\\approx4.44\\hspace{1mm}10^{-5}$,\\hspace{1mm} $\\eta\\approx1.85\\hspace{1mm} 10^{-2}\\approx \\frac{1}{2}\\frac{m_t}{C_u}\\frac{m_c}{C_u}$\n\\end{description}\nFor the down-sector, with \n\\[ \nU' =\n \\begin{pmatrix}\n \\cos\\beta &-\\sin\\beta &0\\\\\n \\sin\\beta &\\cos\\beta &0\\\\\n 0 & 0 &1\n \\end{pmatrix}\n \\begin{pmatrix}\n \\frac{1}{\\sqrt{2}} & -\\frac{1}{\\sqrt{2}} & \\hspace{1mm} 0 \\\\\n \\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}} & -\\frac{2}{\\sqrt{6}} \\\\\n \\frac{1}{\\sqrt{3}} &\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}}\n \\end{pmatrix},\n\\]\nthe mass matrix $U'^{\\dagger} diag(m_d,m_s,m_b)U'$ reads \n\\[\nM'=\nC_d\\begin{pmatrix}\nX+A & Y &\\hspace{1mm} 1\\\\ \nY & X-A &\\hspace{1mm} 1+2A\\\\ \n1 & 1+2A &\\hspace{1mm} X+Y-A-1\\\\ \n\\end{pmatrix}\n\\]\nwhere\n\\begin{description} \n\\item$C_d= 2(m_d c^2_{\\beta}+m_s s^2_{\\beta})-2\\sqrt{3}c_{\\beta}s_{\\beta}(m_s-m_d))+2(m_b-m_s-m_d)$\n\\item$X= (2m_b+m_s+m_d+2(m_d c^2_{\\beta}+m_s s^2_{\\beta})+2\\sqrt{3}c_{\\beta}s_{\\beta}(m_s-m_d))\/C_d$\n\\item$Y=(2m_b+m_s+m_d-4(m_d c^2_{\\beta}+m_s s^2_{\\beta}))\/C_d$,\n\\item$A=2\\sqrt{3}c_{\\beta}s_{\\beta}(m_s-m_d))\/C_d$. \n\\end{description}\nInserting the mass values (\\ref{jam2}) \nwe moreover get the numerical values \n\\begin{description}\n\\item $C_d= 926.448 MeV \\approx m_b\/3$,\\hspace{1mm}$X = 1.0375$,\\hspace{1mm}$A = 7\\hspace{1mm} 10^{-3}$,\\hspace{1mm}$Y = 1.0318$.\n\\end{description}\n\n\\section{Conclusion}\nBy including the democratic rotation matrix in the parametrization of the weak mixing matrix, we obtain mass matrices with specific democratic textures. In this way we make contact between the democratic hypothesis and the experimentally derived parameters of the CKM mixing matrix, avoiding the introduction of additional concepts. \n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaumo b/data_all_eng_slimpj/shuffled/split2/finalzzaumo new file mode 100644 index 0000000000000000000000000000000000000000..4f04b969fbc2e4f4edb51bc21141796aff60ee59 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaumo @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n \nOdor-guided navigation is common across the animal kingdom \\citep{Baker2018-ys}. Olfactory cues inform an animal of its location in a natural environment \\citep{Boie2018-eb}, and allow it to adjust its locomotion to navigate an odor landscape in a goal directed manner \\citep{bargmann1991chemosensory,berg1972chemotaxis,aceves1979learning}. Odor guided navigation is an ethologically relevant task that is important for the animal's survival, and it has been a useful framework with which to study genes and circuits underlying sensory-motor transformations \\citep{calhoun2017quantifying, clark2013mapping}.\nSmall model organisms navigate continuous gradients established by the spread of odorants from their sources due to diffusion and drift. 
\nHow animals interpret these gradients and use them to inform their actions remains an active and productive area of research, especially in genetic model systems like \textit{C. elegans} and \textit{Drosophila melanogaster} \citep{bargmann1991chemosensory,aceves1979learning,Levy2020-oh,mattingly2021escherichia, Gomez-Marin2011-ok,Gepner2015-wm}.\n\nA major challenge is to quantitatively relate the animal's behavior to the precise olfactory cue that the animal experiences moment-by-moment.\nTherefore it is critical to precisely control the odor environment and record the sensory cues experienced by these animals. The need to \emph{control} and \emph{measure} odorants still poses a formidable challenge. While many techniques exist to either present or measure odors in a lab environment, no technique currently exists for precise control and continuous monitoring of an odor landscape. \n\nAll approaches to generate odor landscapes in a lab environment must contend with the odor's diffusivity and interaction with other substrates. Early approaches to \textit{control} odor concentration relied on passive diffusion to construct a quasi-stationary spatial odor gradient, for example by adding a droplet of odorant in a petri dish in a ``droplet assay'' \citep{Louis2008-ju,Iino2009-al,Pierce-Shimomura1999-nt, monte1989characterization}. Diffusion places severe limits on the space of possible landscapes that can be created and on the timescales over which they are stable, and the created odor profile is sensitive to adsorption of odor to surfaces, absorption into the substrate, temperature gradients, and air currents, all parameters that are difficult to measure, model, or control. Microfluidics allows water-soluble odors to be continuously delivered to a chamber in order to provide spatiotemporal control \citep{chronis2007microfluidics, Albrecht2011-fj, lockery2008artificial}. Microfluidic devices, however, are limited in extent, require water-soluble odors, and must be tailor-designed to the animal's size and locomotion. While a post array has been shown to support \textit{C. elegans} locomotion, no microfluidic device has been demonstrated to support olfactory navigation of \textit{Drosophila} larvae, for example. \n\nWe previously reported a macroscopic gas-based active flow cell that uses parallel flow paths to construct temporally stable odor profiles \citep{Gershow2012-nt}. That approach allows for finer spatiotemporal control of the odor gradient, is compatible with \textit{Drosophila} larvae, and works with volatile, airborne odor cues. This device used an array of solenoid valves to generate programmable odor profiles but, perhaps because of its complexity, it has not been widely adopted. \n\nMost methods to create an odor landscape do not provide a means for knowing or specifying the spatiotemporal odor concentration. In other words, while an experimenter may know that some regions of an area have higher odor concentrations, they cannot quantify the animal's behavior given a precise concentration of the odor. This limits the ability to quantitatively characterize sensorimotor processing. To address this shortcoming, various methods have been proposed to \textit{measure} odor concentration across space. For example, gas samples at specific locations could be taken and measured offline \citep{Yamazoe-Umemoto2018-nx}. 
In one of the most comprehensive measurements to date, Louis and colleagues\n\\citep{Louis2008-ju, tadres2022depolarization} used infra-red spectroscopy to measure the spatial profile of a droplet based odor gradient. \n\nIn all of these cases, measurements were performed offline, not during animal behavior, and the odor concentration was assumed to be the same across repeats of the same experiment, and when animals are present. But even a nominally stable odor landscape is subject to subtle but significant disruptions over time from small changes in airflow, from temperature variation, and from the odor's interaction with the substrate, which can include absorption, adhesion, and reemission \\citep{Gorur-Shandilya2019-me, Yamazoe-Umemoto2018-nx,tanimoto2017calcium,Yamazoe-Umemoto2015-ru}.\nThis is challenging to account for and control within a single behavior experiment, and is even more difficult to account for across multiple instances of such experiments. Additional variability also arises across experiments as a result of the introduction of animals, changes to agar substrates, and alteration in humidity or other environmental conditions. To recover the odor concentration that an animal experienced, there is a need to measure odor concentration and animals' behaviors concurrently.\n\n\nOur previously reported flow cell used a photo-ionization detector (PID) sensor moved across the lid before behavioral experiments to measure the odor concentration across space at a single point in time \\citep{Gershow2012-nt}. During experiments, the total concentration of odor in the chamber was monitored concurrently with measurements of behavior. While this provided some assurances that the overall odor concentration was relatively stable, it did not provide any spatial information concurrently with behavior measurements.\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\nHere we present a new flow chamber and a new multi-sensor odor array that addresses these prior limitations and can be used for measurement of the odor gradient with high spatial and temporal resolution. The array of sensors can be used two ways: the full array can be used to measure the generated gradient throughout the extent of the chamber, or parts of the array can be used on the borders to monitor, \\textit{during behavioral experiments}, the odor profile in the chamber. By varying flow rates and the sites of odor introduction, we show a variety of odor profiles can be generated and stabilized.\n\n\nTo demonstrate the utility of the apparatus, we applied this instrument to quantitatively characterize the sensorimotor transformation underlying navigational strategies used by \\textit{C. elegans} and \\textit{D. melanogaster} larva to climb up a butanone odor gradient. Butanone is a water-soluble odorant found naturally in food sources \\citep{worthy2018identification} that is often used in odor-guided navigation studies \\citep{bargmann1993odorant, Levy2020-oh, Cho2016-is, Torayama2007-qi}. We show that the agar gel used during behavioral experiments greatly disrupts an applied butanone gradient, and we demonstrate a pre-equilibration protocol allowing generation of stable gradients taking into consideration the effects of agar. Moreover we monitor these gradients during ongoing behavior measurements via continuous measurements of the odor profile along the boundaries of the arena. 
%\n\nUsing these stable and continuously measured butanone gradients, we measure odor-guided navigation in animals by tracking their posture and locomotion as they navigate the odor landscape. We record chemotaxis behavior and identify navigation strategies in response to the changing odor concentration they experience. \nIn \\textit{C. elegans}, we observe the presence of navigational strategies that were reported in other sensory-guided navigation conditions, such as salt chemotaxis \\citep{Iino2009-al, Dahlberg2020-ip, Luo2014-pc}. These two strategies are: a biased random turn, known as a pirouette \\citep{Pierce-Shimomura1999-nt}, and a gradual veering, known as weathervaning \\citep{Iino2009-al, Izquierdo2015-la}. %\nIn \\textit{Drosophila melanogaster} larvae, we identify runs followed by directed turns \\citep{Gershow2012-nt, Louis2008-ju, Gomez-Marin2011-ok}. %\nBy using concurrent measurements of behavior and odor gradient we characterize olfactory navigation in these small animals on agar with known butanone odor concentrations, which for \\textit{C. elegans} has not been reported before.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Results}\n\n\nWe developed new methods both for generating and measuring odor gradients which we describe here. The systems are modular, scalable, and flexible. The components, which can be used independently of each other, can be fabricated directly from provided files using online machining services, or the provided plans can be modified for other geometries. \n\n\\subsection{Flow chamber for generating spatiotemporal patterns of airborne odors}\n\nWe first sought to develop a method of creating odor gradients that satisfied the following criteria:\n\\begin{enumerate}\n \\item The spatial odor profile should be \\textit{controllable}. Varying control parameters (e.g. flow rates, tubing connections) should result in predictable changes to the resulting odor landscape. \n \\item The odor profile should be \\textit{stable} and \\textit{verifiable}. The same spatial profile should be maintained over the course of an experiment lasting up to an hour, and this should be verifiable via concurrent measurements during behavior experiments.\n \\item The apparatus should be \\textit{straightforward} to construct and to use, and \\textit{flexible} to adapt to various experimental configurations, including using with either \\textit{C. elegans} or \\textit{Drosophila} larva, and with agar arenas of various sizes.\n\\end{enumerate}\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=0.85\\linewidth]{Odor_flow_Fig1_v2.png}\n\\caption{\\textbf{Odor flow chamber with controlled and measured odor concentration.} \\textbf{(a)} Schematic of airflow paths. Airflow paths for odor solution and water are controlled separately by mass flow controllers (MFCs) and spatially arranged into the odor chamber. The outflux from the chamber connects to a flow meter and photo-ionization detector (PID). \\textbf{(b)} Flow chamber design. \\textbf{(c)} Odor sensory array (OSA). Seven odor sensor bars are connected to a sensor hub. Each bar has 16 odor sensors (OS) and 8 temperature\/humidity sensors (THS). Measured odor concentrations from the OSA of a spatially patterned butanone odor concentration shown in \\textbf{(d)} for each sensor, \\textbf{(e)} interpolated across the arena with square dashed line indicating the area where agar and animals are placed. \\textbf{(f)} A two-parameter analytic flow model fit to measurement. 
\textbf{(g)} Cross-sections from (f) at 4 different x-axis positions (shown as colored arrows). Sensor readouts are overlaid as points on the smooth curves from a 1D diffusion model. \label{fig:fig1}\n}\n\figsupp[Long-duration calibration confirms stable control and measurements.]{Long-duration calibration confirms stable control and measurements. (a, top) Odor flow rate driven by the MFC, (a, middle) raw reading of an odor sensor (OS) located in the chamber, (a, bottom) odor concentration readout from the photo ionization detector (PID) located at the outlet of the chamber. Note that the measurement is stable across the 90 minute recording. (b) We correct for the time lag between sensors and plot the mapping between PID measurements and OS readings.}{\includegraphics[width=10cm]{Odor_flow_SI1_2.png}}\n\label{figsupp:figSI1-2}\n\end{fullwidth}\n\end{figure}\n\n\nWe constructed a flow chamber to control odor air flow across an arena (\FIG{fig1}a). Odor and humidified air are sourced from two bubblers, one containing pure water and the other an aqueous solution of odorant and water. Flow rates are controlled by separate mass flow controllers (MFCs) upstream of the bubblers. Downstream of the bubblers, the odor and air streams are divided into parallel sections of equal-length tubing. Each tube is connected to one input port of the flow chamber. The pattern of connections and the flow rates set by the two MFCs determine the shape of the produced odor profile. For instance, if odor is provided at a single central inlet, the resulting profile is a `cone' (\FIG{fig1}d-g) whose peak concentration and divergence are controlled with the MFCs (e.g. speeding up the odor flow while slowing down the air flow broadens the cone). Temporal gradients can be achieved by varying the odor flow in time, subject to constraints imposed by the odor's absorption into the agar gel.\n\nThe outflux from the flow chamber is connected to a flow meter and photo-ionization detector to monitor the overall flow rate and odor concentration, respectively. The geometry of this flow chamber is shown in \FIG{fig1}b, where parallel tubings are connected from the side and the chamber is vacuum sealed with a piece of acrylic on top during experiments. The chamber is designed for use with $\sim$100 mm square agar plates. The extra width (2.5 cm on either side) diminishes the influence of the chamber boundary on the odor profile over the arena. Interchangeable inserts allow for different agar substrates (e.g. circular plates) or for full calibration by odor sensor arrays (\FIG{fig1}c), discussed in the next section. Metal components are designed for low-cost fabrication by automated mechanisms (either laser-cuttable or 3-axis CNC machinable). The fabrication plans for the flow chamber, the design for the agar plate inserts, and the required components to construct the flow path are publicly available in the \nameref{ssec:num1} section.\n\n\subsection{Measuring the spatiotemporal odor distributions} A central difficulty in measuring animals' responses to olfactory cues is quantifying airborne odor concentrations that vary in space and time. This difficulty is exacerbated in turbulent environments, where odor plumes carry abrupt spatial and temporal jumps in concentration far from the source with fundamentally unpredictable dynamics. But even in laminar flows, boundary conditions, slight changes in temperature, and the presence of absorbing substrates like agar make this challenging. 
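For reference, in an ideal laminar flow with a single localized odor inlet, the convection-diffusion model described in the Methods admits a standard slender-plume solution. The exact two-parameter form used for the fit in \FIG{fig1}f is not spelled out here, so the parametrization below, with amplitude $A$ and effective diffusivity $D$ for a mean flow speed $v$ along $x$, should be read as an assumed but conventional choice rather than the fitted model itself:
\begin{equation}
C(x,y) \approx \frac{A}{\sqrt{x}}\exp\left(-\frac{v\,y^{2}}{4 D x}\right), \qquad \sigma^{2}(x)=\frac{2 D x}{v}.
\end{equation}
Each cross-section at fixed $x$ is then a Gaussian in $y$ whose variance grows linearly downstream, consistent with the 1D diffusion fits in \FIG{fig1}g. Deviations from this idealization, driven by boundaries, temperature, and absorbing substrates, are precisely what must be measured in practice.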
\n\nThere is therefore a need, even for quasi-stationary gradients, to characterize the odor profiles in situ and to monitor these profiles during experiments. Various optical techniques, like laser induced fluorescence or optical absorption \\citep{Louis2008-ju,tadres2022depolarization,demir2020walking}, %\nexist to monitor concentration across planar arenas, but in general, these are incompatible with behavior experiments, expensive to construct, require specially designed arenas, or some combination of these disadvantages. Electronic chemical sensors can reveal the time-varying concentration at a particular point in space. A tiled array of these sensors acts as a `camera' forming a 2D spatiotemporal reading of the concentration. The gold-standard for measurement of odor concentration is the photo-ionization detector (PID), but even the smallest versions of these sensors are both too large ($\\sim$ 2 cm in all dimensions) and too expensive ($\\sim$ \\$500 each) to make an array. Metal-oxide odor sensors, designed to be used in commercial air quality sensors, are available in inexpensive and compact integrated circuit packages. However, in general, commercial metal-oxide sensors are not designed for precision work - they tend to drift due to variations in heater temperature, humidity, adsorption of chemicals and ageing effects. Most such sensors are designed to detect the presence of gas above a particular concentration but not to precisely measure the absolute concentration. We became aware of a newer metal-oxide sensor, the Sensirion SGP30 that was designed for long-term stability and concentration measurement; we wondered if such a sensor could be calibrated for use in an odor sensor array.\n\nTo calibrate the sensor, we created a controllable concentration source by bubbling air through butanone. The odor reservoir contains butanone dissolved in water and is kept below the saturation concentration (11 mM or 110 mM odor sources).\nWe then mixed this odorized air flow into a carrier stream of pure air. We kept the carrier air flow rate constant ($\\sim 400$ mL\/min) and varied the flow rate through the odor source ($0-50$ mL\/min); the odor flow rate was slow enough that the vapor remained saturated, so the concentration of butanone in the mixed stream was proportional to the flow rate through the butanone bubbler, as directly measured with a PID (\\FIGSUPP[fig1]{figSI1-2}). We typically calibrated concentration with continuously ramped flow rate in triangle wave with 500 s period for 2-3 cycles.\nWe found a one-to-one correspondence between the odor sensor reading and the PID reading that persisted over time and showed no hysteresis. We reasoned that after applying this calibration procedure to an array of sensors, we could use the array to measure spatiotemporal odor concentration distributions with accuracy derived from the PID. Continuous calibration for 90 minutes showed that the odor sensors reliably reported concentration across durations (\\FIGSUPP[fig1]{figSI1-2}) much longer than the typical behavioral experiment.\n\n\nWe constructed the sensor array from `odor sensor bars' (OSBs), printed circuit boards each containing 16 sensors in two staggered rows of 8. Each OSB also contained 8 temperature and humidity sensors to allow compensation of the odor sensor readings. The OSBs are mounted orthogonal to the direction of air flow; 7 OSBs fit inside our flow chamber (112 sensors total) allowing a full measurement of the odor profile. 
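As an illustration of the per-sensor calibration described above, the sketch below fits the exponential mapping from raw sensor readout to PID-derived concentration (the functional form is the one given in the Methods; the use of Python/SciPy, the variable names, and the fixed lag are illustrative assumptions, not the acquisition or analysis code described later):
\begin{verbatim}
# Hedged sketch (Python/SciPy): per-sensor calibration of raw metal-oxide
# readings against the PID, using the exponential form given in Methods.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(raw, A, B):
    # PID ~ A * exp(B * raw): A sets the overall scale, B the sensitivity.
    return A * np.exp(B * raw)

def calibrate_sensor(raw_trace, pid_trace, lag_s, dt=1.0):
    """Shift the raw sensor trace by the flow lag (e.g. taken from the
    cross-correlation peak), then fit the raw -> concentration mapping."""
    shift = int(round(lag_s / dt))
    raw = raw_trace[:len(raw_trace) - shift]
    pid = pid_trace[shift:]
    popt, _ = curve_fit(exp_model, raw, pid, p0=(1.0, 1e-4), maxfev=20000)
    return popt  # (A, B) for this sensor

# Later, concentration is read out as exp_model(new_raw_reading, A, B),
# applied independently to each sensor on the array.
\end{verbatim}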
Taken together these 112 sensors formed an odor sensor array (OSA), capable of measuring odor concentrations with %\n$\\sim$ 1 cm spatial and 1 second temporal resolution. Prior to all experiments we calibrated the OSA in situ by varying the butanone concentration across the entire anticipated range of measurement while simultaneously recording the odor sensor and PID readings. \n\nTo verify the ability of the OSA to measure concentration gradients we created an artificially simple steady-state odor landscape by flowing odorized air ($\\sim 30$ mL\/min) through a central tube and clean air through the others ($\\sim 400$ mL\/min distributed into 14 surrounding tubings) in an environment without agar and without animals. This results in an air flow velocity $\\sim 5$ mm\/s in the flow chamber. As the flow rates and concentrations are all known and the flow is non-turbulent, and there is no agar or animals present, the concentration across the chamber should match a convection-diffusion model. After establishing the gradient, we recorded from the discrete odor sensors on the array (\\FIG{fig1}d) and estimated the values in between sensors using spline interpolation with length scale equal to the inter-sensor distances (\\FIG{fig1}e). We compared this stationary profile with a two-parameter convection-diffusion flow model (\\FIG{fig1}f,g) fit to the data and described in the Methods section. \nThe measured concentrations in this artificially simplistic odor gradient show good agreement with the fit convection-diffusion model, especially in the central region where experiments are to be conducted, leading us to conclude the OSA can accurately report the odor concentration. We proceed to consider more complex odor environments. %\n\n\n\n\n\n\n\n\n\n\n\nWe demonstrate several examples of flexible control over the odor profile. In the configuration shown in \\FIG{fig1} the center-most input provides odorized air and all surrounding inputs provide moisturized clean air to form a cone-shape stationary odor pattern. A narrower cone can be created by increasing the air flow (600 mL\/min) of the surrounding inputs relative to the middle odorized flow (\\FIG{fig2}a). %\nThe cone can be inverted by placing odorized air in the \ntwo most distal inputs, and clean air in all middle inputs (\\FIG{fig2}b). This inverse cone has lower concentration in the middle and higher on the sides. In later animal experiments, we restrict odorized air to one side to form a biased-cone odor landscapes, resulting in a cone with an offset from the middle line of the arena. Many more configurations are possible, demonstrating that the odor flow chamber enables the flexible control of airborne odor landscapes that are much more complex than a single odor point source. To show that the flow control and measurement methods are not restricted to any single odor molecule, we created and measured a cone profile using ethanol (\\FIG{fig2}c).\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=1.0\\linewidth]{Odor_flow_SI2.png}\n\\caption{\n\\textbf{Flexible control of a steady-state odor landscape.} \\textbf{(a)} Configuration for a narrow cone with butanone. Colorbar shows interpolated odor concentration in ppm as measured by the odor sensor array. \\textbf{(b)} An inverse cone landscape that has higher concentration of butanone on both sides and lower in the middle. 
\\textbf{(c)}Another stationary odor landscape of a different shape, this time with ethanol.\n\\label{fig:fig2}\n}\n\\end{fullwidth}\n\\end{figure}\n\n\n\\subsection{Odor-agar interactions dominate classical droplet assays}\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=1.0\\linewidth]{Odor_flow_droplet_2.png}\n\\caption{\\textbf{The presence of agar in the droplet assay alters the time-evolution of butanone odor landscape.}\n\\textbf{(a)} Concentration measured by odor sensor array is reported immediately after butanone droplet is introduced into the arena without agar. Red dot indicates the position of butanone droplet on the lid over the sensor array ($2\\mu L$ of $10\\%$ v\/v butanone in water). \\textbf{(b)} Same but three minutes later.\n\\textbf{(c)} Same measurement as in (a) but now droplet is added onto agar (gray). Two odor sensor bars have been removed to make space for the agar. \\textbf{(d)} Same as (c) but three minutes later. Side view of the configurations with OSB sensors, butanone droplets, and agar gel are shown on the right.}\n\\label{fig:fig3}\n\\figsupp[Concentration measurements with an odor droplet and experimental perturbation.]{Concentration measurements with an odor droplet and experimental perturbation. (a) The initial concentration readout from a droplet of butanone on the lid near the middle of the arena. (b) Same condition as (a), but 20 minutes after the recording (left) and after shortly opening and closing the lid back (right) to mimic perturbation during worm experiments. (c) When there is agar in the chamber, the odor concentration is better maintained in the chamber. Location of the odor droplet is shown with a red dot. (d) After 20 minutes of recording (left) and after shortly opening and closing the lid back (right) to mimic experimental perturbation.}{\\includegraphics[width=10cm]{Odor_flow_SI2-2.png}}\\label{figsupp:figSI3}\n\\videosupp{Time-evolution of odor landscape from a butanone droplet with (right) and without (left) agar. The experimental conditions are the same as \\FIG{fig3}. Blank sensor positions indicate sensors replaced by agar. The video updates every 2 seconds and measures butanone concentration from a droplet in the first 3 minutes. \nVideo available online at \\href{https:\/\/figshare.com\/articles\/dataset\/Continuous_odor_profile_monitoring_to_study_olfactory_navigation_in_small_animals\/21737303}{10.6084\/m9.figshare.21737303}\n}\\label{videosupp:sv1}\n\\end{fullwidth}\n\\end{figure}\n\nClassic chemotaxis experiments in small animals commonly construct odor environments with odor droplets in a petri dish, usually with a substrate like agar. Our odor delivery instrument is designed to be compatible with a similar environment. To first better understand classical chemotaxis experiments, we sought to characterize the spatiotemporal odor profile from an odor droplet point source using our odor sensor array. We first considered the case without agar. In that case the odor concentration should be governed entirely by gas-phase diffusion. %\nWe placed a $2 \\mu$L droplet of $10\\%$ butanone in water %\non the lid of our instrument centered in the arena above the full OSA and without any airflow (\\FIG{fig3}a,b). Butanone was observed to diffuse across the arena in the first three minutes (supplementary video (\\VIDEOSUPP[fig3]{sv1})\nand the equilibrium concentration is close to uniform across the odor sensors. 
We note that the final concentration of roughly 100 ppm and the equilibration timescale both match what we would expect from first principles for $\sim 10^{-6}$ mol of butanone in a $\sim 225 \text{ mL}$ arena, given a diffusion coefficient of $\sim 0.08 \text{ cm}^2\/$s for butanone in air. \nA uniform odor landscape is not helpful for studying odor-guided navigation, but most behavioral experiments are not conducted in a bare flow chamber; they include a biologically compatible substrate, such as an agar gel, as is typically used in droplet assays. We therefore sought to investigate the role that agar plays in sculpting the odor landscape.\n\n\nWe introduced agar into the droplet assay by removing two sensor bars and replacing them with agar. We placed a butanone droplet directly on the agar, as done classically, and measured the odor landscape over time (\FIG{fig3}c,d). The odor concentration measured with agar is dramatically different from that measured without agar. Instead of quickly equilibrating to a uniform concentration, in the presence of agar there was a local maximum of butanone surrounding the droplet that persisted even after 3 minutes. This difference in airborne odor concentration with and without agar persists after experimental perturbations such as removing and replacing the lid over the chamber (\FIGSUPP[fig3]{figSI3}). More broadly, the odor landscape we observed in the presence of agar would have been hard to predict ahead of time. An important consequence of this finding is that, to create a specific odor landscape (as in \FIG{fig1} or \FIG{fig2}) with agar, one will need to account for the effect of agar. We therefore sought to study odor-agar interactions more systematically and in the context of air flow. %\n\n\n\subsection{Measuring and compensating odor-agar interactions with flow}\n\nWe first sought to measure whether the presence of agar changed the odor profiles generated by flow in a bare chamber (\FIG{fig1}, \FIG{fig2}). As in \FIG{fig3}c,d, we replaced two odor sensor bars with a rectangular strip of agar gel or a metal plate as a control, and then measured airborne odor concentration upstream and downstream of the agar under odorized airflow that would normally produce a cone profile (\FIG{fig4}a-b).\nWhile the agar had little effect on the odor landscape upstream of the agar, it drastically altered the downstream odor landscape (\FIG{fig4}b), suggesting that the agar absorbs the airborne butanone molecules. This finding is consistent with the odor droplet experiments (\FIG{fig3}) and is to be expected since butanone is highly soluble in water (275 g\/L). Pulse-chase style experiments confirm that agar does indeed absorb and reemit butanone (\FIGSUPP[fig4]{figSI4-2}b). We also observed disruptions to the odor landscape when we used a full-sized 96 mm square agar plate intended for use with animals (\FIG{fig4}c). To accommodate the full-sized agar plate we measured only the one-dimensional odor profiles upstream and downstream of the agar (\FIG{fig4}d). Taken together, these experiments suggest that agar-butanone interaction presents a challenge for setting up and maintaining stable odor landscapes. \n\nWe next sought a method to generate desired odor landscapes even in the presence of agar. We generate the odor profile by constant flow, which continuously replenishes the airborne odor. 
In principle, the disruption caused by agar should be overcome by constant flow of a sufficiently long duration, after which the agar and airborne odor would be in quasi-equilibrium at all spatial locations, with the concentration of odor dissolved in the gel proportional to the airborne concentration above it. \n\nWe measured odor concentration downstream of the agar and found that the airborne concentration failed to approach equilibrium on the timescales of single experiments \\FIG{fig4}e,f. This suggests that it is not practical to simply wait for the agar and odor to reach equilibrium. Instead we developed a pre-equilibration protocol to more efficiently bring the agar and airborne odor into equilibrium before our experiments.\n \nTo more rapidly establish a desired airborne odor landscape, we briefly first exposed the agar to an airflow pattern corresponding to higher-than-desired odor concentration, created by replacing the odor reservoir with one containing a higher concentration of butanone.\nWe monitored the odor profile downstream of the agar until it reached the desired concentration and then switched to the original bubbler to maintain that concentration. Using this pre-equilibration protocol, we reached quasi-equilibrium quickly, typically after the order of ten minutes, \\FIGSUPP[fig4]{figSI4-2},c. Note the spatial parameters of the two airflow patterns were the same, only the concentration of the odor source changes. Pre-equilibration allows the generation of airborne odor gradients in the presence of agar that match those in the absence of agar (\\FIG{fig4}b,d,f right vs left column). We modeled the pre-equilibration protocol using a reaction-convection-diffusion model considering first order interactions between odor and agar. Under reasonable assumptions about the absorption rate, reemission rate, and capacity of the agar, simulations of this simple model provided qualitative agreement to our observations (\\FIG{PE_model}a-c).\n\n\\subsection*{Monitoring the boundary determines the odor landscape at quasi-equilibrium}\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=0.95\\linewidth]{Odor_flow_Fig2_v3.png}\n\\caption{\\textbf{Under flow, agar interacts with odor to disrupt the downstream spatial odor profile, but a pre-equilibration protocol can coax the system into quasi-equilibrium and restore the odor profile.} %\n\\textbf{(a)} Two odor sensor bars are replaced with agar to observe the effect of introducing agar on the downstream spatial odor profile. %\n\\textbf{(b)} Measured odor profile is shown upstream and downstream of the removed odor sensor bars in the absence (left) and presence of agar (middle).\nTransiently delivering a specific higher odor concentration ahead of time via a pre-equilibration (PE) protocol restores the downstream odor profile even in the presence of agar. \\textbf{(c)} Additional odor sensor bars are removed and replaced with a larger agar, as is typical for animal experiments. \\textbf{(d)} Measurements from the downstream sensor bar under the same three conditions in (b). The dots are sensor measurements and the smooth curve is a Gaussian fit. \\textbf{(e)} The same experimental setup in (a), here focusing on time traces of only three downstream odor-sensors (colored circles for selected OS). \\textbf{(f)} Concentration time series of three sensors color-coded in (e). 
Traces for three conditions are shown: time aligned to initial flow without agar (left), time aligned to initial flow with agar (middle), and traces after PE (right, with transparent line showing measurements another 20 min after the protocol). The dash-lines indicate the target steady-state concentration for each sensor.\n\\label{fig:fig4}\n}\n\\figsupp[Time series of concentration change that capture effects of agar gel and the PE protocol]{Time series of concentration change that capture effects of agar gel and the PE protocol. (a) Concentration readout from the downstream PID (top) in response to the impulse of air flow rate though the odor bottle controlled by MFC (bottom), with no agar in the flow chamber. The background clean air flow is constant $\\sim$ 400 mL\/min throughout the recording. (b) Same as (a) but with agar plate in the flow chamber. Note that the response time scales to the same impulse are significantly different. (c) The time trace recorded from the PE protocol with agar plates. Odor reservoir with high butanone concentration (110 mM) is applied in the beginning, swapped back to the target concentration (11 mM) at $\\sim$ 200 seconds, the odor concentration readout relaxes and stabilizes after $\\sim$ 900 seconds, which enters steady-state for a duration longer than animal experiments.\n}{\\includegraphics[width=10cm]{Odor_flow_SI_PE2.png}}\n\\label{figsupp:figSI4-2}\n\\end{fullwidth}\n\\end{figure}\n\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=0.9\\linewidth]{Odor_flow_SI_PE_model.png}\n\\caption{\\textbf{Simulations from a reaction-convection-diffusion model of odor-agar interaction show that at quasi-equilibrium the airborne odor concentration is the same with or without agar.} \\textbf{(a)} Simulation results of steady-state odor concentration in air without agar and with flow configured as in \\FIG{fig1}. \\textbf{(b)} The same simulation condition in (a) but now shortly after agar is introduced and before quasi-equilibrium is reached. A schematic of odor-agar interaction model is shown below. When agar is introduced, it absorbs the odor in air and decreases concentration measured downstream, producing a non-equilibrium (NE) concentration profile. \\textbf{(c)} Odor concentration profile in air, with agar present, but after the pre-equilibration (PE) protocol brings this system to quasi-equilibrum. The PE protocol is shown in the schematic below, followed with steady-state (SS) with the stable odor concentration profile shown above. \\textbf{(d)} The absolute difference of concentration profile without agar (a) and with agar after PE (c) is shown. \\textbf{(e)} Upstream and \\textbf{(f)} downstream odor concentrations along the agar boundary are shown for all three conditions.\n }\n \\label{fig:PE_model}\n \\end{fullwidth}\n\\end{figure}\n\n\nIt is critical to monitor the odor landscape during animal experiments because the landscape is sensitive to environmental and experimental conditions which may fluctuate within and between experiments. But it is inconvenient to measure airborne concentration directly over the agar (e.g. because sensors impede optical access and also require heat management). 
\nFortunately, measuring the odor profile upstream and downstream of the agar places strong constraints on the airborne odor concentration over the agar, such that in practice the spatial concentration can be confidently inferred.\n\nIf the airborne odor concentration upstream and downstream of the agar matches the profile in the absence of agar, one can infer that the airborne odor concentration landscape above the agar is also the same. The argument is straightforward: in the absence of sources or sinks, the fact that two concentration distributions obey the same differential equations and share the same conditions on all boundaries means that the distributions are identical throughout the interior. Given identical measurements of the with-agar and without-agar profiles at the inlet and outlets, and reflecting boundary conditions on both walls, the only way for the with-agar distribution to differ from the without-agar one is for sources and sinks of odor in the agar to be precisely arranged so that all excess odor emitted from one point is exactly reabsorbed somewhere else before reaching the boundary. Not only is such an arrangement unlikely, it is inherently temporally unstable. A mathematical version of this argument is presented in the \hyperref[ssec:appendix]{Appendix}. \n\n\nThis quasi-equilibrium argument is supported by the empirical concentration measurements shown in \FIG{fig4} and by numerical results with the reaction-convection-diffusion model demonstrated in \FIG{PE_model}. The simulation results show that there is negligible difference between conditions with and without agar at quasi-equilibrium (\FIG{PE_model}d) when the odor concentration along the boundary is the same (\FIG{PE_model}e,f). Together, our numerical estimates and empirical observations allow us to safely infer that, when measurements along the boundary indicate that the system is in quasi-equilibrium, the odor concentration experienced by animals on the agar is the same as the concentration measured in the absence of agar.\n\n\n\subsection{Butanone chemotaxis in \emph{C. elegans}} %\nWe sought to directly quantify \textit{C. elegans}' navigation strategies for airborne butanone using our odor delivery system.\n\textit{C. elegans} are known to climb gradients towards butanone \citep{bargmann1993odorant, Cho2016-is, Levy2020-oh}. Experiments in microfluidic environments suggest that they use a biased random walk strategy to navigate in a liquid butanone environment \citep{Levy2020-oh, Albrecht2011-fj}. Worms are also known to use weathervaning to navigate airborne odor gradients \citep{Iino2009-al, kunitomo2013concentration}, although to our knowledge this has not been specifically investigated for butanone. \n\n\nWorms were imaged crawling on agar in the flow chamber under an airborne butanone odor landscape, illuminated by infrared light. Here 6 recording assays were performed, with approximately $50-100$ animals per assay, and two different odor landscapes were used.\n\textit{C. elegans} navigated up the odor gradient towards higher concentrations of butanone, as expected (\FIG{fig5}a,b). Importantly, the odor concentration experienced by the animal at every point in time was inferred from concurrent measurements of the odor profile along the boundary of the agar (\FIG{fig5}c). 
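To make this bookkeeping concrete, the sketch below shows one way to sample a calibrated concentration map along a tracked trajectory and to compute the quantities used in the analyses that follow (the bearing to the local gradient and the drift velocity $V\cos\theta$ of \FIG{fig6}). It assumes Python/NumPy/SciPy and a concentration map on a regular grid; it is an illustration, not the analysis code used to produce the figures.
\begin{verbatim}
# Hedged sketch (Python/NumPy/SciPy): concentration experienced along a tracked
# trajectory, bearing to the local gradient, and drift velocity V*cos(theta).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def experienced_odor(C, x_grid, y_grid, traj_xy, dt):
    """C: concentration map [ppm] with shape (len(x_grid), len(y_grid));
    traj_xy: (T, 2) tracked positions [cm]; dt: tracking interval [s]."""
    dCdx, dCdy = np.gradient(C, x_grid, y_grid)           # local spatial gradient
    conc = RegularGridInterpolator((x_grid, y_grid), C, bounds_error=False)
    gx = RegularGridInterpolator((x_grid, y_grid), dCdx, bounds_error=False)
    gy = RegularGridInterpolator((x_grid, y_grid), dCdy, bounds_error=False)

    c_t = conc(traj_xy)                                    # odor experienced over time
    grad = np.stack([gx(traj_xy), gy(traj_xy)], axis=1)
    vel = np.gradient(traj_xy, dt, axis=0)                 # crawling velocity vector V

    speed = np.linalg.norm(vel, axis=1)
    gnorm = np.linalg.norm(grad, axis=1)
    cos_theta = np.einsum('ij,ij->i', vel, grad) / (speed * gnorm + 1e-12)
    bearing_deg = np.degrees(np.arccos(np.clip(cos_theta, -1, 1)))  # 0 = up-gradient
    drift_velocity = speed * cos_theta                     # V*cos(theta)
    return c_t, bearing_deg, drift_velocity
\end{verbatim}
Binning turn events by the bearing at turn onset then yields turn-rate curves of the kind shown in \FIG{fig5}e.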
On average, animals were more likely to travel in a direction up the local gradient than away from the local gradient, as expected for chemo-attraction \\FIG{fig5}d. We use the term ``bearing to local gradient'' to describe the animal's direction of travel with respect to the local odor gradient that it experiences.\n\nWe find quantitative evidence that the worm exhibits both biased random walk and weathervaning strategies. \nTo investigate biased random walks, we measured the animal's probability of turning (pirouette) depending on its bearing with respect to the local airborne butanone gradient \\FIG{fig5}e.\nWe find that the animal is least likely to turn when it navigates up the local gradient and most likely to turn when it navigates down the gradient, a key signature of the biased random walk strategy \\citep{berg2018random, mattingly2021escherichia}. \n\nTo test for weathervaning, we measured how the curvature of the animal's trajectory depended on its bearing with respect to the local airborne butanone gradient, \\FIG{fig5}f. \nWhen the animal navigated up the butanone gradient (\\FIG{fig5}f, blue) the distribution of the curvature of its trajectory was roughly symmetric and centered around 0 (straight line trajectory). By contrast, when the animal navigated perpendicular to the gradient (\\FIG{fig5}f, yellow and red) the distribution of the curvature of its trajectories was skewed. The skew was such that it enriched for cases where the animal curved its trajectories towards the local gradient, a key signature of weathervaning.\nBoth the biased random walk and weathervaning behavior was absent in control experiments with flow but not odor, and we observed no evidence of anemotaxis at the $\\sim 5$mm\/s air velocities encountered by the animals (\\FIGSUPP[fig5]{no_odor_control}).\nWe conclude that \\textit{C. elegans} utilize both biased random walk and weathervaning strategies to navigate butanone airborne odor landscapes. We note that the quantitative analysis needed to make this conclusion relied on knowledge of the local airborne odor gradient experienced by the animal, which was provided by our odor profile measurements.\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=1.0\\linewidth]{Odor_flow_Fig3_v3.png}\n\\caption{\\textbf{\\textit{C. elegans} use both biased random walk and weathervaning to navigate in a butanone odor landscape.} Animals on agar were exposed to butanone in the flow chamber. \\textbf{(a,b)} Measured animal trajectories are shown overlaid on airborne butanone concentration for different odor landscapes. Green dots are each animal's initial positions and red dots are the endpoints. %\n\\textbf{(c)} An animal's trajectory is shown colored by the butanone concentration it experiences at each position (top). Its turning behavior is quantified and plotted over time. Turning bouts are highlighted in gray. \\textbf{(d)} Distribution of the animal's bearing with respect to the local airborne odor gradient is shown. Peak around zero is consistent with chemotaxis. \\textbf{(e)} \nProbability of observing a sharp turn per time is shown as a function of the absolute value of the bearing relative to the local gradient. Modulation of turning is a signature of biased random walk. Error bars show error for counting statistics. Data analyzed from over 9,000 tracks produced from $\\sim$300 worms, resulting in 108 hours of observations. 
\\textbf{(f)} Probability density of the curvature of the animal's trajectory is shown conditioned on bearing with respect to the local gradient. Weathervaning strategy is evident by a skew in the distribution of trajectory curvature when the animal travels perpendicular to the gradient (yellow and red). Three distributions are significantly different from each other according to two-sample Kolmogorov-Smirnov test ($p < 0.001$). Means are shown as vertical dashed lines.\n\\label{fig:fig5}}\n\\figsupp[Control measurements with air flow and without odor gradient.]{Control measurements with air flow but no odor. (a) Behavioral trajectories overlaid on the odor landscape that would have been expected had odor been present (mock odor landscape). No odor is presented, only moisturized airflow. (b) Distribution of bearing to the mock gradients under clean air flow. (c) Turn probability at different bearing conditions to the mock gradient. (d) Curvature conditioned on different bearing measurements.\n}{\\includegraphics[width=10cm]{Odor_flow_SI_noodor.png}}\n\\label{figsupp:no_odor_control}\n\\end{fullwidth}\n\\end{figure}\n\n\n\n\n\nTo quantify the overall navigational response with respect to local gradients, we further compute the animal's drift velocity as a function of local gradients (\\FIG{fig6}). This captures the animal's overall gradient climbing performance as a result of all the navigational strategies it uses, including the biased random walk and weathervaning. This calculation is only possible with a knowledge of the odor concentration experienced by the animal. \n\n\n\n\n\n\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=0.7\\linewidth]{Odor_flow_Fig6_v3.png}\n\\caption{\n\\textbf{Tuning curve relating animal drift velocity to experienced odor concentration gradient.} \\textbf{(a)} Schematic of a worm tracked in the odor landscape. The crawling velocity vector $V$, local concentration gradient $\\nabla C$, bearing angle $\\theta$, and drift velocity $V\\cos(\\theta)$ are shown. \\textbf{(b)} Tuning curve shows the drift velocity $V\\cos(\\theta)$\nas a function of the odor concentration gradient. Gray dash line indicates an unbiased performance with zero drift velocity, gray dots are the discrete measurements, and the black line shows the average value within bins. Error bar shows lower and upper quartiles of the measurements. %\n}\n\\label{fig:fig6}\n\\end{fullwidth}\n\\end{figure}\n\n\n\\subsection{Butanone chemotaxis in \\emph{Drosophila} larvae}\nTo further evaluate the utility of the flow chamber and gradient calibration for the study of small animal navigation, we investigated how larval \\textit{Drosophila} navigate butanone. Although butanone is not as commonly used as a stimulus with \\textit{Drosophila} as with \\textit{C. elegans}, butanone is known to be attractive to larval flies \\citep{dubin1995scutoid, dubin1998involvement} and has been variously reported to be attractive \\citep{park2002inactivation} and aversive \\citep{israel2022olfactory, lerner2020differential} to adult flies. \nTo investigate the larva's navigational strategy in a butanone gradient, we created a \"cone\" shaped butanone gradient over the agar substrate using the pre-equilibration protocol, as before, and we confirmed the presence and stability of the gradient by continuously measuring the spatial distribution of butanone upstream and downstream of the agar arena. 
We monitored the orientation and movement of 59 larvae over 6 separate 10 minute experiments ($\\sim$ 10 larvae per experiment) with an average observation time of 7 min per larva (\\FIG{fig7}). \n\nLarvae moved towards higher concentration of butanone (\\FIG{fig7}a). To analyze the strategy by which they achieved this, we first constructed a coordinate system in which 0 degrees was in the direction of the odor gradient (towards higher concentration) and 180 degrees was directly down-gradient; angles increased counterclockwise when viewed from above.\nWe found that larvae initiated turns at a higher rate when headed down-gradient ($\\pm 180^\\circ$ bearing with respect to the local gradient) than up gradient ($0^\\circ$) (\\FIG{fig7}b). When larvae turned, their reorientations tended to orient up gradient (negative angle changes from $+90^\\circ$ bearing with respect to the local gradient, and positive angles changes from $-90^\\circ$) (\\FIG{fig7}c). Thus \\textit{Drosophila} larvae use similar navigational strategies to \\textit{C. elegans} to move towards butanone \n\n\n\\begin{figure}\n\\begin{fullwidth}\n\\includegraphics[width=0.9\\linewidth]{Odor_flow_Fig7.png}\n\\caption{\n\\textbf{\\emph{D. melanogaster} larvae chemotaxis in the odor flow chamber.} \\textbf{(a)} Trajectories overlaid on the measured butanone odor concentration landscape. Example tracks are highlighted and the initial points are indicated with white dots. \\textbf{(b)} Top: turn rate versus the bearing, which is the instantaneous heading relative to the gradient defined by quadrants shown on top. Error bar show counting statistics. Bottom: average heading change versus the bearing prior to all turns (re-orientation with at least one head cast). Error bars show standard error of the mean. Data analyzed from 6 experiments, 59 animals, with 620 turns over 6.8 hours of observation.\n\\label{fig:fig7}\n} %\n\\end{fullwidth}\n\\end{figure}\n\n\n\n\n\n\\section{Discussion}\n\n\n\n\n\n\n\n\n\n\nWe present a custom-designed flow chamber and odor sensor array that enables us to measure navigation strategies of worms and fly larvae within the context of a controlled and measured odor environment.\nThe key features of this odor delivery system are that (1) the odor concentration profile through space is controlled in the flow chamber, (2) the odor sensor array provides a spatial readout to calibrate and measure the profile, and (3) the odor concentration profile is monitored during animal experiments. This last feature, the ability to monitor the spatial profile of odor concentration on the boundary during experiments, sets this method apart from previous approaches. \n\nThe ability to monitor spatial profile during experiments, along with a quantitative understanding of odor-agar interactions, provides confident knowledge of the odor experienced by the animal over time. This in turn allows us to extract tuning curves that describe the animal's behavioral response to the odor it experiences. In the future, such tuning curves may form the basis of investigations into neural mechanisms driving the sensorimotor transformations underlying navigation. \n\nIn contrast to liquid delivery of odor gradients via microfluidic chips \\citep{Albrecht2011-fj,Larsch2015-xy}, our method allows worms to crawl freely on an agar surface. This allows our behavior measurements to be directly compared against classical chemotaxis assays \\citep{bargmann1993odorant, Louis2008-ju, Pierce-Shimomura1999-nt}. 
Additionally, the macroscopic odor airflow chamber makes it straightforward to flexibly adjust the spatial pattern between experiments without the need to redesign the chamber. \n\n\nOur setup uses low flow rates corresponding to low wind speeds (5 mm\/s) to avoid anemotaxis. Larger organisms, including adult flies, navigate towards odor sources by combining odor and wind flow measurements \citep{vergassola_infotaxis_2007, matheson_neural_2022}. Fly larvae exhibit negative anemotaxis at wind speeds 200 to 1000 times higher than those used here \citep{jovanic_neural_2019}, but previous work showed that they do not exhibit anemotaxis at 12 mm\/s \citep{Gershow2012-nt}, a wind speed still higher than the velocities they experience here. Therefore we do not expect \textit{Drosophila} larvae to exhibit anemotaxis under our flow conditions. \textit{C. elegans} are not thought to respond to airflow. In experiments in aqueous microfluidic chips under flow, \textit{C. elegans} move towards higher concentrations of attractant and do not respond to the flow of the liquid \citep{Albrecht2011-fj}. In agreement, we do not observe any evidence of \textit{C. elegans} anemotaxis in our chamber in control experiments without odor and with wind speeds of 5 mm\/s (\FIGSUPP[fig5]{no_odor_control}).\n\nWe focused on the odor butanone because it is important for a prominent associative learning assay \citep{Torayama2007-qi, kauffman_c._2011}. Butanone is soluble in water, and therefore it interacts strongly with agar. In this work we showed that this odor-agar interaction makes it challenging to \textit{a priori} infer the odor landscape experienced by the animal when agar is present, but that continuously monitoring the odor profile on the boundary overcomes this challenge. Other odors may instead interact with other substrates, such as glass, aluminum, or plastic, which would also necessitate the use of our continuous monitoring approach. We show that our system is also compatible with less water-soluble odors, such as ethanol. \n\n\nHere we have addressed the problem of creating airborne odor landscapes. The biophysical processes governing odor sensing in small animals such as \textit{C. elegans} are not fully understood. The worm carries a thin layer of moisture around its body as it moves on the agar substrate \citep{Bargmann2006-dy}, and it is unclear to what extent the worm pays attention to the concentration of an odorant in the agar below it versus the air above it.\nOur reaction-convection-diffusion model suggests that at the quasi-equilibrium conditions used in our experiments the odor concentration in the agar is related to the airborne odor concentration directly above it up to a scalar that we predict to be constant across the agar. Although we have not measured this empirically, this suggests that even in the extreme case that the animal only senses odor molecules in the agar, the odor concentration experienced by the animal in our experiments should differ by no more than a scaling factor compared to our estimates based on the airborne odor concentration. \n\n\nKnowing the concentration experienced by the animal is not only useful for measuring navigational strategies more precisely than in classical assays, like the droplet chemotaxis assays. 
It will also be crucial for studying \textit{changes} in navigational strategy, such as those in the context of associative learning \citep{Cho2016-is, Torayama2007-qi}, sensory adaptation \citep{Levy2020-oh, itskovits2018concerted}, and long time scale behavioral states \citep{Calhoun2014-aa, Gomez-Marin2011-ok, klein2017exploratory}. In all those cases, it will be critical to disambiguate slight changes to the odor landscape from gradual changes in the navigational strategies. Continuously monitoring the odor landscape during behavior will remove this ambiguity. \n\n\n\section{Methods and Materials}\n\n\subsection{Odor flow chamber}\n\n\subsubsection{Flow chamber setup}\n\nThe odor chamber (\FIG{fig1}b) was machined from aluminum (CAD file in the supplementary \nameref{ssec:num1} section). %\nThe chamber is vacuum sealed with an acrylic lid. The inner arena contains an aluminum insert that can hold the odor sensor array or a square petri dish lid (96x96 mm). The headspace in which air flows above the insert in the arena is 1 cm tall.\nThe whole setup is mounted on an optical breadboard and enclosed in a black box during imaging.\n\nThe airflow system is connected to a pressurized air source, passing through a particulate filter (Wilkerson F08) and a coalescing filter (Wilkerson M03), then regulated by mass flow controllers (MFCs, Aalborg GFC). \nMFCs are controlled via a Labjack D\/A board from a computer using custom Labview code. \nWe modulate the flow rate bubbling through liquid in enclosed bottles (Duran GL 45). The moisturized or odorized air is then passed into the flow chamber through inlet tubings. The outlets are connected to a copper manifold, then passed to a flow meter to assure that the inlet and outlet flow rates match. An optical flow sensor is fixed on the flow meter to time stamp the opening and closing of the lid of the flow chamber during animal experiments. A photo-ionization detector (PID, piD-TECH 10.6 eV lamp) is connected to the outlet of the airflow, providing calibration for the odor sensor array and detection of air leaks or odor residuals in the system.\nOutput readings from the PID, MFCs, odor sensors described in the next section, and imaging camera are all captured on the same computer sharing the same clock. Analog signals from the PID readout and MFC readback are digitized via a Labjack and recorded with the Labview program.\n\n\subsubsection{Odor flow control}\nTo construct different odor landscapes, tubes from the liquid-odor and water reservoirs are connected to the flow chamber in different configurations. \nFor a centered \"cone-shape\" odor landscape, the tubing carrying odorized airflow is connected to the middle inlet. For the \"biased-cone\" landscape, the tubing for odorized air is connected to the inlet 4 cm off-center. For uniform patterns, all inlets are connected to the same source through a manifold. \nFor all experiments the background airflow that carries moisturized clean air is set to $\sim$ 400 mL\/min, except for \FIG{fig2} where this value was varied. \nThe odor reservoir contains either an 11 mM or 110 mM butanone solution in water with $\sim$ 30 mL\/min airflow bubbling through the liquid.\n\nOverall flow rates across the chamber in experiments were always around or less than $\sim$ 400 mL\/min to avoid turbulence. We confirmed that this regime had no turbulence by visualizing flow in a prototype chamber using dry ice and dark field illumination. 
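The laminar-flow regime can also be checked with a few lines of arithmetic; the sketch below (Python) uses the chamber cross-section quoted in the text and a standard value for the kinematic viscosity of air, and simply reproduces the estimate that follows.
\begin{verbatim}
# Hedged sketch (Python): mean air speed and Reynolds number in the flow chamber.
width_cm, height_cm = 15.0, 1.0        # chamber cross-section (from the text)
nu_air = 0.15                          # kinematic viscosity of air [cm^2/s]

def chamber_flow(flow_L_per_min):
    q = flow_L_per_min * 1000.0 / 60.0           # volumetric flow [cm^3/s]
    v = q / (width_cm * height_cm)               # mean air speed [cm/s]
    re = v * height_cm / nu_air                  # Re based on the 1 cm headspace
    return v, re

# At the typical 400 mL/min total flow: v ~ 0.44 cm/s (~5 mm/s), Re ~ 3.
# Even at 1 L/min: v ~ 1.1 cm/s, Re ~ 7, far below the turbulence onset (~2000).
print(chamber_flow(0.4), chamber_flow(1.0))
\end{verbatim}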
Our empirical observations matched theory: Given that the chamber is 15 cm wide and 1 cm deep, a flow rate up to 1 L\/min corresponds to $\\sim$ 1.1 cm\/s. With kinematic viscosity of air $\\sim 0.15$ cm$^2$\/s, the Reynolds number is 7.3 times the flow rate in L\/min, which is below the turbulence onset (Re=2000). \n\n\\subsection{Odor sensor array}\nA spatial array of metal-oxide based gas sensors (Sensorion, SGP30) along with a relative humidity and temperature sensors (ams, ENS210) was used to measure the odor concentration field in the flow chamber. \nSensors are arranged together into groups of 16 odor sensors and 8 humidity sensors on a custom circuit board (MicroFab, Plano, TX) called an odor sensor bar (OSB). OSB's are in turn plugged into a second circuit board (OSH Park, Portland, OR) called the odor sensor hub (OSH). OSBs can be added or removed in different arrangements depending on the experiment, for example to make room for agar. Depending on the experiment, up to 112 odor sensors are arranged in a triangular grid such that no sensor directly blocks the flow from its downstream neighbor, accompanied by 56 humidity sensors in a rectangular grid. \n\nSensors are read out via the I2C protocol. Each SGP30 sensor has the same I2C address, as does each ENS210 sensor (different from the SGP30); to address multiple sensors of the same type we use an I2C bus multiplexer (NXP, \nPCA9547\n). Each OSB contains 2 multiplexers for its 16 sensors. The multiplexers are also addressed over I2C and can have one of 8 addresses (3 address bits). On each board, the two multiplexers share two bits (set by DIP switches); the remaining bit is hardwired to be opposite on the two multiplexers. Thus each OSB can have one of 4 addresses set by DIP switches, and 4 OSBs can be shared on one I2C bus. \n\nTo communicate with the sensors, we used a Teensy 4.0 microcontroller (PJRC, Sherwood, OR) running custom Arduino software. While the Teensy has two I2C busses, we found it more straightforward to use two micro-controllers instead. Both micro-controllers communicated via USB serial to a desktop computer running custom LabView software. Measurements from all sensors are saved to computer disk in real time. Readouts from the humidity sensors are also sent to their neighboring odor sensors in real time for an on-chip humidity compensation algorithm.\n\n\n\n\n\n\n\n\n\\subsubsection*{Heat management}\nTo avoid generating thermal gradients, the system has been designed to dissipate heat to the optics table. Each metal oxide odor sensor contains a micro hotplate which consumes 86 mW power during readings. To dissipate this heat the aluminum insert inside the flow chamber serves as a heat sink. Odor sensor bars are connected to the insert using heat conductive tape and thermal paste. The insert and chamber are in turn in direct thermal contact with the optics table. Temperature and humidity is constantly monitored at 8 locations per OSB via the on-board temperature and humidity sensors during experiments to confirm that there is no thermal or moisture gradient created in the environment.\n\n\\subsubsection{Measurements and calibration}\n\nWe measure from the odor sensors at 1 Hz for both calibration and behavior experiment modes. \nWe sample from the PID at up to 13 Hz. We synchronize and time align the measurements from the odor sensor array, MFC read-back, and PID recording with the same computer clock. 
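Because the PID (sampled at up to 13 Hz) and the odor sensors (sampled at 1 Hz) run at different rates, analysis first resamples the recorded traces onto a common time base before any comparison. The short Python sketch below illustrates this step with linear interpolation; the array names and file layout are illustrative assumptions rather than the actual LabView output format.
\\begin{verbatim}
import numpy as np

def align_streams(t_os, os_raw, t_pid, pid_raw):
    """Resample a ~13 Hz PID trace onto the ~1 Hz odor-sensor clock.
    All timestamps come from the shared computer clock (seconds)."""
    pid_on_os_clock = np.interp(t_os, t_pid, pid_raw)
    return t_os, os_raw, pid_on_os_clock
\\end{verbatim}
Cross-correlating the aligned traces then gives the transport delay $\\tau$ between a sensor and the downstream PID that is used in the calibration fit described next.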
%\n\nTo calibrate the odor sensors to the PID as in \\FIGSUPP[fig1]{figSI1-2}, a spatially uniform flow was delivered in a triangle wave or a step pattern.\nTime series from each odor sensor and the downstream PID were aligned by time shifting according to the peak location found via cross-correlation. The time shift was confirmed to be reasonable based on first-principle estimates from the flow rate. \n\nAfter measuring the odor sensors' baseline response under clean moisturized air for 5 minutes, odorized air was delivered. \nTo fit calibration curves, the raw sensor readout was fit to the PID measurements with an exponential of the form:\n\\begin{equation}\n \\text{PID}(t) = A \\exp(B\\,\\text{OS}(t-\\tau))\n\\end{equation}\nwhere the PID voltage $\\text{PID}(t)$ is on the left hand side, and the scale factor $A$ and sensitivity $B$ are fitted to match the raw sensor reading $\\text{OS}(t-\\tau)$, which is time shifted by a time window $\\tau$. This fitted curve maps from raw readings to odor concentration for each sensor. We validate the fitted curve across different recordings. The distributions of the coefficients $A$ and $B$ are relatively uniform across sensors in the middle of the arena. The sensor mappings are also reliable: using $\\pm$ one standard deviation of the fitted curve changes the overall concentration scale of the landscape by less than $10\\%$.\n\n\n\n\n\\subsection{Models for odor flow and odor-agar interaction}\n\nWe use two models in our work: (1) a convection-diffusion model that captures the quasi-steady state odor concentration profile measured without agar, used for the fits in \\FIG{fig1}f,g, and (2) a reaction-convection-diffusion model for odor-agar interaction shown in \\FIG{PE_model}. A version of this second model is also used to justify the pre-equilibration protocol, as discussed in the \\hyperref[ssec:appendix]{Appendix}.\n\n\n\n\n\\subsubsection{Convection-diffusion model for odor flow without agar}\n\nTo model odor flow without agar, for example for the fits in \\FIG{fig1}f,g, we use a two-dimensional convection-diffusion model:\n\n\n\\begin{equation}\n \\frac{\\partial C(x,y,t)}{\\partial t} = -v\\nabla C + D\\nabla^2 C \\label{eq:convection-diffusion}\n\\end{equation}\nwhere the concentration across space and time is $C(x,y,t)$, the flow velocity is $v$, and the diffusion coefficient of our odor is $D$.
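As a sanity check on this model, equation \\ref{eq:convection-diffusion} can also be stepped forward numerically. The Python sketch below uses an explicit upwind difference for convection and a five-point stencil for diffusion; the grid spacing, time step, number of steps, and source placement are illustrative assumptions, with $v=0.5$ cm\/s along $x$ and butanone's nominal $D=0.08$ cm$^2$\/s used as representative parameter values.
\\begin{verbatim}
import numpy as np

# Illustrative grid: 25 cm along the flow (x) by 15 cm across (y), 0.25 cm spacing
dx = 0.25
nx, ny = 100, 60
v, D = 0.5, 0.08                          # cm/s and cm^2/s
dt = 0.2 * min(dx / v, dx**2 / (4 * D))   # conservative explicit-stability margin

C = np.zeros((nx, ny))
C[0, ny // 2] = 1.0                       # odorized inlet held fixed on the x=0 edge

for _ in range(20000):
    Cp = np.pad(C, 1, mode="edge")        # no-flux walls / passive outflow edges
    lap = (Cp[2:, 1:-1] + Cp[:-2, 1:-1] + Cp[1:-1, 2:] + Cp[1:-1, :-2]
           - 4.0 * C) / dx**2             # five-point Laplacian
    conv = -v * (C[1:, :] - C[:-1, :]) / dx   # upwind derivative along +x
    C[1:, :] += dt * (conv + D * lap[1:, :])

# C now approximates the cone-shaped quasi-steady-state profile downstream
\\end{verbatim}
Cross-sections of the resulting field along $y$ can then be compared against the closed-form expressions used for the fits below.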
In our chamber, at steady state $(\\frac{\\partial C}{\\partial t} =0)$ we have:\n\\begin{equation}\n v\\frac{\\partial C}{\\partial x} = D\\frac{\\partial^2 C}{\\partial y^2}\n \\label{eq:steady-state}\n\\end{equation}\nbecause with our configuration flow along the $x$ axis is dominated by convection while flow along the $y$ axis is dominated by diffusion, and therefore $\\frac{\\partial^2 C}{\\partial x^2} \\ll \\frac{\\partial^2 C}{\\partial y^2}$.\n\n\n\n\nThe fit in \\FIG{fig1}f is the solution to equation \\ref{eq:steady-state}:\n\\begin{equation}\n\\label{eq:2dmodelfit}\n C(x,y) = \\frac{C_o}{2}(1-\\erf(\\frac{x}{2\\sqrt{D\\frac{x}{v}}})) \\exp(-\\frac{y^2}{4D\\frac{x}{v}}),\n\\end{equation}\nwhere $\\erf$ is the error function and $C_o$ is the odor source concentration measured in air.\n\nIn \\FIG{fig1}g we show a fit for a one dimensional slice along $y$ at various positions along $x_c$, for the situation in which there is an\nodor-source at $(y=0,x=0)$:\n\n\\begin{equation}\n C(y) = \\frac{C(x_c,y=0)}{\\sqrt{4\\pi D \\frac{x_c}{v}}} \\exp(-\\frac{y^2}{4D\\frac{x_c}{v}})\n\\end{equation}\nwhere $\\frac{x_c}{v}$ is an analogy of time in non-stationary diffusion process at the cross-section at $x_c$. %\n\n\nFor the fits in \\FIG{fig1} the air flow velocity is set to be $v\\sim0.5$ cm\/s based on the flow rate and geometry of the chamber (15 parallel tubes provide around 450 mL\/min of flow into a $\\sim$255 mL chamber with $\\sim$15 cm$^2$ cross section). The diffusion coefficient $D$ is left as a free parameter and the value that minimizes the mean-squared error between the model and the empirical measurement is used. We chose to leave the diffusion coefficient as a free parameter instead of using butanone's nominal diffusion constant of $D\\sim0.08$ cm$^2$\/s, because we expect butanone's effective diffusion coefficient to be different in a confined chamber with background flow. \nWe note that the fitted profile shown in \\FIG{fig1}g,f and the fitted value agrees with what is expected in a stable convection-diffusion process (Peclet number $\\sim 80$).\n\n\n\n\n\n\n\\subsubsection{Reaction-convection-diffusion model for odor-agar interaction}\n\n\nTo justify the pre-equilibration protocol of \\FIG{fig4} and to show that measurements of odor concentration along the agar's boundary allows us to infer the concentration on the agar, we propose a reaction-convection-diffusion model. This phenomenological model forms the basis of \\FIG{PE_model}. Compared to the convection-diffusion model, we include the \"reaction\" term to account for odor-agar interactions. \n\nThe model used is a 2D generalization of this non-spatial model:\n\\begin{equation} \\label{eqn_flow+agar}\n \\frac{dC}{dt} = -\\frac{1}{\\tau}(C - C_o) - w \\frac{dA}{dt}\n\\end{equation}\n\n\\begin{equation} \\label{eqn_agar}\n \\frac{dA}{dt} = k_a C (1-\\frac{A}{M}) - k_d A \n\\end{equation}\nwhere $C$ is a downstream concentration readout after the airflow has surface interaction with the agar gel. The influx odor concentration is $C_o$ and the odor concentration in agar is $A$. Without agar interaction, the flow chamber has its own timescale $\\tau$ and the molecular flux into the agar is weighted by a scalar $w$ (so $w=0$ when there's no agar in the chamber). The association and dissociation constants are $k_a$ and $k_d$ and the maximum capacity of odor concentration that can be absorbed is $M$. 
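To build intuition for the odor-agar interaction before introducing the spatial version, the Python sketch below integrates equations \\ref{eqn_flow+agar} and \\ref{eqn_agar} directly. The parameter values are arbitrary illustrative choices (not fitted to data), chosen only to satisfy $k_a \\gg k_d$ and $C_o < M$.
\\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (arbitrary units); not fitted values
tau, w = 30.0, 5.0            # chamber timescale and agar-flux weight
ka, kd, M = 0.05, 1e-3, 10.0  # association/dissociation rates and agar capacity
Co = 1.0                      # influx (target) odor concentration, Co < M

def rhs(t, y):
    C, A = y
    dA = ka * C * (1.0 - A / M) - kd * A   # odor accumulating in the agar
    dC = -(C - Co) / tau - w * dA          # downstream concentration readout
    return [dC, dA]

sol = solve_ivp(rhs, (0.0, 3600.0), [0.0, 0.0], max_step=1.0)
C_t, A_t = sol.y              # downstream odor and agar load versus time
\\end{verbatim}
With agar present ($w>0$), the downstream concentration approaches $C_o$ only after the agar term has nearly equilibrated; this slow transient is what the pre-equilibration protocol of \\FIG{fig4} is designed to shortcut.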
This model is similar to the description of odorant pulse kinetics shown in \\citet{Gorur-Shandilya2019-me}.\n\n\n\nIn \\FIG{PE_model} we use the 2D generalization:\n\\begin{equation}\n \\frac{\\partial}{\\partial t} C(x,y) = \\mathcal{L} C(x,y) - w \\frac{\\partial}{\\partial t} A(x,y)\n\\end{equation}\n\\begin{equation}\n \\frac{\\partial}{\\partial t} A(x,y) = k_a C(x,y)(1-\\frac{A(x,y)}{M(x,y)}) - k_d A(x,y)\n\\end{equation}\nwhere $\\mathcal{L} = -v\\nabla + D\\nabla^2 $ (\\autoref{eq:convection-diffusion}) is a linear operator for the convection-diffusion process and the odor influx is at the boundary $C(x=0,y=0)=C_o$. We perform numerical analysis on the set of 2D equations and permit $A$ to be non-zero only in the region where agar is present. We use a target concentration $C_o$ that is lower than $M$ and $k_a \\gg k_d$ to capture odor absorption into agar. In the simulated pre-equilibration protocol we temporarily increase $C_o$ above $M$ then switch back to the target concentration to efficiently reach a steady state. \n\nA slightly simplified version of this model forms the basis of the arguments in the \\hyperref[ssec:appendix]{Appendix}.\n\n\n\\subsection{Animal handling}\n\n\\subsubsection{\\emph{C. elegans}}\n\nWild type \\emph{C. elegans} (N2) worms were maintained at 20$^\\circ$C on NGM agar plates with OP50 food patches. Before each chemotaxis experiment, we synchronized batches of worms and conducted measurements on young adults. Worms were rinsed with M9 solution and kept in S. Basal solution for around 30 min, while applying the pre-equilibration protocol to the flow chamber. Experiments were performed on $1.6\\%$ agar pads with chemotaxis solution (5 mM phosphate buffer with pH 6.0, 1 mM CaCl$_2$, 1 mM MgSO$_4$) \\citep{Bargmann1993-is, Bargmann2006-dy} formed in the lid of a 96x96 mm square dish. 50-100 worms were deposited onto the plate by pipetting down droplets of worms and removing excess solution with Kimwipes. The plate was then placed in the odor flow chamber to begin recordings.\n\n\\subsubsection{\\emph{D. melanogaster}}\n\nWild type \\emph{D. melanogaster} (NM91) were maintained in a 25$^\\circ$C incubator with a 12 hr light cycle. Around 20 pairs of male and female flies were introduced into a 60 mm embryo-collection cage. A petri dish with apple juice and yeast paste was fixed at the bottom of the cage and replaced every 3 hrs for two rounds during the day time. The collected eggs were kept in the petri dish in the same 25$^\\circ$C environment for another 48-60 hours to grow to second instars. We washed down and sorted out the second instar larvae from the plate via $30\\%$ sucrose in water around 10 min before each behavioral experiment. We used a 96x96 mm lid with $2.5\\%$ agar containing $0.75\\%$ activated charcoal for larval experiments \\citep{Gepner2015-wm, Gershow2012-nt}. Around 10-20 larvae were rinsed with water in a mesh and placed onto the agar plate with a paint brush. The same imaging setup and flow chamber configuration as the worm experiments were used for \\textit{Drosophila} larvae.\n\n\n\n\n\n\n\n\\subsection{Imaging and behavioral analysis}\n\n\\subsubsection{Image acquisition}\n\nAnimals are imaged via a CMOS camera (Basler, acA4112-30, with Kowa LM16FC lens) suspended above the flow chamber and illuminated by a rectangular arrangement of 850 nm LED lights. The camera acquires $2,500 \\times 3,000$ pixel images at 14 fps. A single pixel corresponds to 32 $\\mu$m on the agar plate.
Labview scripts acquired images during experiments.\n\n\n\\subsubsection{\\emph{C. elegans} behavioral analysis}\n\nTo increase contrast for worm imaging, a blackout fabric sheet is placed underneath the agar plate. Custom Matlab scripts based on \\citep{Liu2018-mv} were used to process acquired images after experiments, as linked in the \\nameref{ssec:num2} section. Briefly, the centroid position of worms were found in acquired images via thresholding and binarization. The animal's centerline was found, and its body pose was estimated follwing \\citep{Liu2018-mv}, but in this work only the position and velocity was used. The tracking parameters are adjusted for this imaging setup and we extract the centroid position and velocity of worm.\n\nThe analysis pipeline focuses on the trajectory of animal navigation in the arena. The trajectories are smoothed in space with a third order polynomial in a 0.5 s time window to remove tracking noise. We only consider tracks that appear in the recording for more than 1 minutes and produce displacement larger than 3 mm across the recordings. Trajectories starting at a location with odor concentration higher than $70 \\%$ of the maximum odor concentration in space is removed, since these are likely tracks from animals that have performed chemotaxis already. We calculate the displacement of the center of the worm body in the camera space. The location in pixel space is aligned with the odor landscape constructed with the odor sensor array to compute concentration gradient given a position. To avoid double counting turns when the animal turns slowly, and to mitigate effects of small displacements from tracking noise, we measure the angle change between displacement vectors over 1 s time window and define turns as angle changes larger than 60 degrees. To quantify the curvature of navigation trajectories, we measure the angle between displacement vectors over 1 mm displacement in space. \n\n\\subsubsection{\\emph{D. melanogaster} behavioral analysis}\nAnalysis of fly larvae is performed as previously \\citep{Gepner2015-wm, Gershow2012-nt}.\n\n\n\n\n\n\n\n\n\\subsection{Data sharing}\n\\label{ssec:num1}\nRecordings for odor flow control, concentration measurements, and behavioral tracking data are publicly available: \\href{https:\/\/figshare.com\/articles\/dataset\/Continuous_odor_profile_monitoring_to_study_olfactory_navigation_in_small_animals\/21737303}{10.6084\/m9.figshare.21737303}\n\n\\subsection{Software sharing}\n\\label{ssec:num2}\n\\begin{itemize}\n \\item Odor sensor array: \\href{https:\/\/github.com\/GershowLab\/OdorSensorArray}{https:\/\/github.com\/GershowLab\/OdorSensorArray}\n \\item Worm imaging and analysis: \\href{https:\/\/github.com\/Kevin-Sean-Chen\/leifer-Behavior-Triggered-Averaging-Tracker-new}{https:\/\/github.com\/Kevin-Sean-Chen\/leifer-Behavior-Triggered-Averaging-Tracker-new}\n \\item Larvae imaging: \\href{https:\/\/github.com\/GershowLab\/Image-Capture-Software}{https:\/\/github.com\/GershowLab\/Image-Capture-Software}\n\\end{itemize}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Acknowledgments}\nResearch reported in this work was supported by the National Institutes of Health National Institute of Neurological Disorders and Stroke under New Innovator award number DP2-NS116768 to AML and DP2-EB022359 to MHG; the Simons Foundation under award SCGB \\#543003 to A.M.L.; by the National Science Foundation, through NSF 1455015 to MHG, an NSF CAREER Award to AML (IOS-1845137), under Grant No. 
NSF PHY-1748958 and through the Center for the Physics of Biological Function (PHY-1734030). This work was also supported in part by the Gordon and Betty Moore Foundation Grant No. 2919.02. We thank the Kavli Institute for Theoretical Physics at University of California Santa Barbara for hosting us during the completion of this work.\nStrains from this work are being distributed by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). We thank the Murthy Lab and Gregor Labs for flies.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIn this work, we introduce and study the notion of a minimal Cohen-Macaulay complex. Fix a field $k$. Let $\\Delta$ be a simplicial complex. We say $\\Delta$ is minimal Cohen-Macaulay (over $k$) if it is Cohen-Macaulay and removing any facet from the facet list of $\\Delta$ results in a complex which is not Cohen-Macaulay. See Section \\ref{sec2} for precise definitions.\n\nFor the rest of the paper we shall write CM for Cohen-Macaulay. We first observe a crucial fact. \n\n\\begin{theorem}\\label{t1}\nAny CM complex is shelled over a minimal CM complex. $($Theorem \\ref{thm1}$)$ \n\\end{theorem}\n\nThus, in a strong sense, understanding CM complexes amounts to understanding the minimal ones. We support this claim by demonstrating that many interesting examples of CM complexes in combinatorics are minimal. Theorem \\ref{t1} also puts shellable complexes in a broader context: they are precisely complexes shelled over the empty one. Its proof relies on a simple but somewhat surprising statement (Lemma \\ref{isshelled}), which might be of independent interest.\n\nBelow is a collection of our main technical results which establish various necessary and sufficient conditions for a complex to be minimal CM.\n\n\\begin{theorem} The following statements hold.\n\\begin{enumerate}\n\\item[$(1)$] A minimal CM complex is acyclic. $($Corollary \\ref{cor2}$)$\n\\item[$(2)$] Let $\\Delta$ be CM and $i$-fold acyclic. If no facet of $\\Delta$ contains more than $i-1$ boundary ridges, then $\\Delta$ is minimal CM. $($Theorem \\ref{thm3}$)$\n\\item[$(3)$] If $\\Delta$ is a ball, then $\\Delta$ is minimal CM if and only if it is strongly non-shellable in the sense of \\cite{Zi98}. $($Proposition \\ref{strongnonshell}$)$\n\\item[$(4)$] If $\\Delta$ is minimal CM and $\\Gamma$ is CM, then $\\Delta \\star \\Gamma$ is minimal CM. $($Theorem \\ref{thm4}$)$\n\n\\end{enumerate}\n\\end{theorem}\n\n\nIn Section \\ref{sec2}, we give the formal definitions, provide background, and set notation. Section \\ref{main} contains the proofs of Theorem \\ref{thm1}, Theorem \\ref{thm3}, Corollary \\ref{cor2} and Proposition \\ref{strongnonshell}. In Section \\ref{newmCM}, we provide many ways to build new minimal Cohen Macaulay from old ones, such as gluing (Corollary \\ref{glue1}, Proposition \\ref{glue2}) and taking joins (Theorem \\ref{thm4}). In the last section, we use our results to examine many classical and recent examples of Cohen-Macaulay complexes from the literature and show that they are minimal. \n\n\n\n\n\n\n\n\n\n\\section{Background and Notation}\\label{sec2}\n\n\nOnce and for all, fix the base field $k$. We let $\\tilde{H}_i$ denote $i$th simplicial or singular homology, as appropriate, always with coefficients in $k$. We use $\\tilde{\\chi}$ for reduced Euler characteristic. 
Throughout this paper, we let $\\Delta$ be a simplicial complex of dimension $d-1$ with facet list $\\{F_1,\\dots,F_e\\}$, and we denote by $\\Delta_{F_i}$ the subcomplex of $\\Delta$ with facet list $\\{F_1,\\dots,F_{i-1},F_{i+1},\\dots,F_e\\}$. We write $f_i(\\Delta)$ for the number of $i$-dimensional faces of $\\Delta$, and $h_i(\\Delta)$ for the $i$th entry of the $h$-vector of $\\Delta$; so $h_i(\\Delta)=\\sum^i_{k=0} \\binom{d-k}{i-k}(-1)^{i-k}f_{k-1}(\\Delta)$. In particular, we note that $h_d(\\Delta)=\\sum^d_{k=0}(-1)^{d-k}f_{k-1}(\\Delta)=(-1)^{d-1}\\tilde{\\chi}(\\Delta)$. The $\\operatorname{depth}$ of $\\Delta$ is, by definition, the depth of the Stanley-Reisner ring $k[\\Delta]$ of $\\Delta$. We say $\\Delta$ is CM if $\\operatorname{depth} \\Delta=d$. The following consequence of Hochster's formula (\\cite[Theorem 5.3.8]{BH98}) is an extension of Reisner's famous criterion for Cohen-Macaulayness (\\cite[Theorem]{Re76}) and gives a combinatorial characterization of $\\operatorname{depth}$. \n\n\\begin{prop}\\label{depth}\n\n$\\operatorname{depth} \\Delta \\ge \\ell$ if and only if $\\tilde{H}_{i-1}(\\operatorname{lk}_{\\Delta}(T))=0$ whenever $i+|T|<\\ell$.\n\n\n\\end{prop}\n\nWe use $\\Delta^{(i)}:=\\{ \\sigma \\in \\Delta \\colon |\\sigma| \\le i+1 \\}$ to denote the $i$-skeleton of $\\Delta$, and we note that $\\operatorname{depth} \\Delta=\\max\\{i \\mid \\Delta^{(i-1)} \\mbox{ is CM}\\}$.\n\n\n\n\n\nThe following definition gives the main focus of this paper.\n\n\\begin{defn}\nWe say $\\Delta$ is \\textbf{minimal CM} if $\\Delta$ is CM but $\\Delta_{F_i}$ is not CM for any $i$.\n\\end{defn}\n\n\n\nThe following related concept provides an extension of the notion of shellability.\n\n\n\\begin{defn}\n\nWe say that passing from $\\Delta_F$ to $\\Delta$ is a \\textbf{shelling move} if $\\langle F \\rangle \\cap \\Delta_F$ is pure of dimension $|F|-2$. If $\\Gamma$ is a subcomplex of $\\Delta$ generated by facets of $\\Delta$, we say $\\Delta$ is \\textbf{shelled over $\\Gamma$} if there exists a sequence of shelling moves taking $\\Gamma$ to $\\Delta$.\n\n\n\\end{defn}\n\n\nWe note that shellable complexes are exactly those which are shelled over $\\varnothing$.\n\n\nWe will use the following definitions in the later sections. \n\n\n\\begin{defn}\nWe say that $\\Delta$ is $l$-fold acyclic if $\\operatorname{lk}_{\\Delta}(\\sigma)$ is acyclic whenever $|\\sigma| < l$.\n\\end{defn}\n\n\\[\\text{[long exact sequence of Tor modules]}\\]\n\n\nAs $S\/I \\otimes_S k \\cong S\/(I,f) \\otimes_S k \\cong k$, the map $S\/I \\otimes_S k \\to S\/(I,f) \\otimes_S k$ is an isomorphism. So $\\operatorname{Tor}^S_1(S\/(I,f),k) \\cong k^{\\mu(I)+1}$ and $\\operatorname{Tor}^S_1(S\/I,k) \\cong k^{\\mu(I)}$. By additivity of dimensions, it follows that the map $\\operatorname{Tor}^S_1(S\/I,k) \\to \\operatorname{Tor}^S_1(S\/(I,f),k)$ is injective. Hence the map $\\operatorname{Tor}^S_2(S\/(I,f),k) \\to \\operatorname{Tor}_1^S(S\/(I:f)(-c),k)$ is surjective.
But since $\\Delta$ satisfies $(S_2)$, $S\/(I,f)$ has linear first syzygy (see \\cite[Corollary 3.7]{Ya00}), so $\\operatorname{Tor}_2^S(S\/(I,f),k) \\cong k^{\\beta^S_1(I,f)}(-c-1)$. Hence it must be that $\\operatorname{Tor}_1^S(S\/(I:f)(-c),k)$ is generated in degree $-c-1$. Thus $(I:f)$ is generated in degree $1$, and the claim follows from Theorem \\ref{dualstuff} (3). \n\n\n\n\\end{proof}\n\n\n\\begin{theorem}\\label{thm1}\nIf \\(\\Delta\\) is a CM complex, then there is a minimal CM complex \\(\\Gamma\\) so that \\(\\Delta\\) is shelled over \\(\\Gamma\\).\n\\end{theorem}\n\n\\begin{proof}\nSince \\(\\Delta\\) is CM, it also satisfies \\((S_2)\\). We can then apply Lemma \\ref{isshelled} to conclude that, for every facet \\(F\\) of \\(\\Delta\\), \\(\\Delta\\) is shelled over \\(\\Delta_{F}\\). If none of these is CM, then $\\Delta$ is minimal CM by definition. If not, we may continue this process to eventually reach a minimal one. \n\\end{proof}\n\n\\begin{remark}\nIt is not hard to see that a given CM complex can be shelled over two different minimal ones. For instance, let $\\Delta= K_{6,2}$ be the complete two-skeleton of the simplex on $6$ vertices, and let $\\Gamma$ be a triangulation of the projective plane on $6$ vertices. Then $\\Delta$ is shellable and is also shelled over $\\Gamma$. That $\\Gamma$ is minimal CM follows from Corollary \\ref{twofacet}.\n\\end{remark}\n\nNext, we aim to prove that minimal CM complexes are acyclic. This is accomplished by showing a more general result. \n\n\\begin{theorem}\\label{facetdeath}\nSuppose $\\tilde{H}_{d-1}(\\Delta) \\ne 0$. Then there is a maximal facet $F_i$ of $\\Delta$ so that the following hold: \n\n\\begin{align}\n\\tag{1}\n\\dim \\tilde{H}_{i-1}(\\Delta_{F_i})&=\\begin{cases} \\dim \\tilde{H}_{i-1}(\\Delta) & \\mbox{if } 0 \\le i0, \\beta\\in R\\right\\}\\dot\\cup R^+$ (cf. \\cite{kac}).\nFollowing the notation in \\cite{kac}, we shall work with the set of generators for\n${\\widehat W}$ given by $\\{s_0, s_1,\\cdots, s_{n-1}\\}$, where\n$s_i, 0\\le i\\le n-1$ are the reflections with respect to\n$\\alpha_i, 0\\le i\\le n-1$. Note that $\\{\\alpha_i, 1\\le i\\le n-1\\}$\nis simply the set of simple roots of $SL_n$ (with respect to the\nBorel subgroup $B$). \nIn particular, the Weyl group $W$ of $SL_n(\\mathbb C)$ is simply the subgroup of $\\widehat W$ generated by $\\{s_1,\\cdots,s_{n-1}\\}$. \n\n\\def\\ensuremath Q{\\ensuremath Q}\n\\subsection{The Affine Presentation:}\n\\label{affine}\nThe generators $s_i,\\, 1\\leq i \\leq n-1$ have the following canonical lifts to $N(K[t,t^{-1}])$:\n$s_i$ is the permutation\nmatrix $(a_{rs})$, with $a_{jj}=1,j\\ne i,i+1,\\ a_{i\\,i+1}=1,\na_{i+1\\,i}=-1$, and all other entries are $0$. \nA canonical lift for $s_0$ is given by\n$$\\begin{pmatrix}\n0&0&\\cdots & t^{-1}\\\\\n0&1&\\cdots &0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n0&\\cdots &1 &0\\\\\n-t &0&0&0\n\\end{pmatrix}$$\nLet $s_\\theta\\in W$ be the reflection with respect to the longest root $\\theta$ in $\\mathbf A_{n-1}$ given by $\\theta=\\alpha_1+\\cdots+\\alpha_{n-1}$.\nLet $L$ (resp. \\ensuremath Q) be the root (resp. coroot) lattice of $\\mathfrak{sl}_n(=\\operatorname{Lie}(SL_n))$, and let $\\langle\\,,\\,\\rangle$ be the canonical pairing on $L\\times\\ensuremath Q$. \nConsider $\\theta^\\vee\\in\\ensuremath Q$ given by $\\theta^\\vee=\\alpha_1^\\vee+\\cdots+\\alpha_{n-1}^\\vee$.\nThere exists (cf. 
\\cite{kumar}, \\S 13.1.6) a group isomorphism $\\widehat{W}\\rightarrow W\\ltimes\\ensuremath Q$ given by \\begin{align*}\n s_i&\\mapsto s_i &\\text{ for }1\\leq i\\leq n-1\\\\\n s_0&\\mapsto s_\\theta\\lambda_{-\\theta^{\\vee}} &\n\\end{align*}\nwhere we write $\\lambda_q$ for $(\\operatorname{id},q)\\in W\\ltimes\\ensuremath Q$.\nIn particular, we get $s_0s_\\theta\\mapsto\\lambda_{\\theta^\\vee}$, which we use to compute a lift of $\\lambda_{\\theta^\\vee}$ to $N(K[t,t^{-1}])$: \\begin{align*} \n\\begin{pmatrix}\n0&0&\\cdots & t^{-1}\\\\\n0&1&\\cdots &0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n0&\\cdots &1 &0\\\\\n-t &0&0&0\n\\end{pmatrix}\n\\begin{pmatrix}\n0&0&\\cdots &-1\\\\\n0&1&\\cdots &0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n0&\\cdots &1 &0\\\\\n1 &0&0&0\n\\end{pmatrix\n=\\begin{pmatrix}\nt^{-1}&0&\\cdots &0\\\\\n0&1&\\cdots &0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n0&\\cdots &1 &0\\\\\n0 &0&0&t\n\\end{pmatrix}\n\\end{align*}\nConsider the element $w\\in W$ corresponding to $(1,i)(i+1,n)\\in S_n$, and \nobserve that $w(\\theta^\\vee)=\\alpha_i^\\vee$, the $i^{th}$ simple coroot. \nIt follows that a lift of $\\lambda_{\\alpha_i^\\vee}=w\\lambda_{\\theta^\\vee}w^{-1}$ is given by\\begin{align*}\nw\\begin{pmatrix}\nt^{-1}&0&\\cdots &0\\\\\n0&1&\\cdots &0\\\\\n\\vdots & \\vdots & \\vdots & \\vdots\\\\\n0&\\cdots &1 &0\\\\\n0 &0&0&t\n\\end{pmatrix}w^{-1\n=\\begin{pmatrix}\n \\ddots& &&\\\\\n & t^{-1} & &\\\\\n & & t&\\\\\n & & &\\ddots\\\\\n\\end{pmatrix}\n\\end{align*}\nwhere in the matrix on the right hand side, the dots are $1$, and the off-diagonal entries are $0$, i.e., the matrix on the right hand side is the diagonal matrix with $i,\\,(i+1)$-th entries being $t^{-1},t$ respectively, and all other diagonal entries being $1$.\n\\\\\n\\\\\nThe (Coxeter) length of $\\lambda_q$ is given by the following formula (cf. \\cite{kumar}, \\S13.1.E(3)):$$\n l(\\lambda_q)=\\sum\\limits_{\\alpha\\in R^+}\\lvert\\alpha(q)\\rvert,\\qquad q\\in\\ensuremath Q$$ \nwhere $\\alpha(q):=\\langle\\alpha,q\\rangle$. \nThe action of $\\lambda_q$ on the root system of $\\mathcal G$ is determined by the following formulae (cf. 
\\cite{kumar}, \\S13.1.6):\\begin{align*}\n \\lambda_q(\\alpha)&=\\alpha-\\alpha(q)\\delta,\\qquad\\text{ for }\\alpha\\in R,q\\in\\ensuremath Q\\\\\n \\lambda_q(\\delta)&=\\delta\n\\end{align*} \nIn particular, for $\\alpha\\in R^+$, $\\lambda_q(\\alpha)>0$ if and only if $\\alpha(q)\\leq 0$.\n\\begin{cor}\n\\label{count}\nFor $\\alpha\\in R^+,\\,q\\in\\ensuremath Q$, $l(\\lambda_qs_\\alpha)>l(\\lambda_q)$ if and only if $\\alpha(q)\\leq 0$.\n\\end{cor}\n\\begin{proof}\nFollows from the equivalence $ws_\\alpha>w$ if and only if $w(\\alpha)>0$, applied to $w=\\lambda_q$.\n\\end{proof}\n\n\n\\section{The element $\\kappa_0$}\\label{elt} \nOur goal is to give a compactification of the cotangent bundle $T^*G\/B$ as a (left) $SL_n$ stable subvariety of the affine Schubert variety $X(\\kappa_0)$, where $\\kappa_0$ is as defined below:\\begin{align*}\n \\tau&:=s_{n-1}\\cdots s_2s_1s_0 \\\\\n \\kappa&:=\\tau^{n-1}\\\\\n \\kappa_0&:=w'\\tau^{n-1}\n\\end{align*}\nwhere $w'$ is the longest element in the Weyl group generated by $s_1,\\cdots s_{n-2}$.\nWe first prove some properties of $\\kappa$ and $\\tau$ which are consequences of the braid relations\n$$\\begin{gathered}s_is_{i+1}s_{i}=s_{i+1}s_{i}s_{i+1},\n0\\le i\\le n-2,\\\\\ns_0s_{n-1}s_{0}=s_{n-1}s_{0}s_{n-1}\\end{gathered}$$ \nand the commutation relations:\n$$s_is_j=s_js_i, 1\\le i,j\\le n-1, |i-j|>1,\\ \\ s_0s_i=s_is_0,\n2\\le i\\le n-2$$\n\n\\subsection{Some Facts:}\\label{facts}\n\n\\noindent\\textbf{Fact 1:} $\\tau(\\delta)=\\delta$\n\n\\noindent\\textbf{Fact 2:}\n$\\tau(\\alpha_1+\\cdots+\\alpha_{n-1})=2\\delta+\\alpha_{n-1}$\n\n\\noindent\\textbf{Fact 3:} $\\tau(r\\delta+\\alpha_i+\\cdots+\\alpha_{n-1})=\n(r+1)\\delta+\\alpha_{i-1}+\\alpha_i+\\cdots+\\alpha_{n-1},2\\le i\\le\nn-1,r\\in\\mathbb{Z}_+$\n\n\\noindent\\textbf{Fact 4:} $s_{n-1}\\cdots\ns_{j+1}(\\alpha_j)=\\alpha_{j}+\\alpha_{j+1}+\\cdots+\\alpha_{n-1},j\\ne\n0, n-1$\n\n\\noindent\\textbf{Fact 5:} $s_{n-1}\\cdots\ns_{1}(\\alpha_0)=\\delta+\\alpha_{n-1}$\n\n\\noindent\\textbf{Fact 6:}\n$\\tau(\\alpha_{n-1})=\\delta+\\alpha_{n-2}+\\alpha_{n-1}$ (a special\ncase of Fact 3 with $r=0,i=n-1$)\n\n\\noindent\\textbf{Fact 7:} $\\tau(\\alpha_1)=\\alpha_0+\\alpha_{n-1}$\n\n\\noindent\\textbf{Fact 8:} $\\tau(\\alpha_i)=\\alpha_{i-1},i\\ne 1,n-1$\n\n\\noindent\\textbf{Fact 9:} $\\tau(\\alpha_0+\\alpha_{n-1})=\\alpha_{n-2}$\n\n\\begin{remark}\\label{cyclic}\nFacts 7, 8, 9 imply that $(\\alpha_{n-1}+\\alpha_0,\\alpha_{n-2},\\alpha_{n-3},\\ldots,\\alpha_1)$ is a cycle of order $n-1$ for $\\tau$.\nIn particular, each of these roots is fixed by $\\kappa$.\n\\end{remark}\n\n\\subsection{A reduced expression for $\\kappa$}\\label{kap} Let $\\kappa$ be the element in\n${\\widehat{W}}$ defined as above. We may write $\\kappa=\\tau_1\\cdots\\tau_{n-1}$, where\n$\\tau_i$'s are equal, and equal to $\\tau(=s_{n-1}\\cdots\ns_2s_1s_0)$ (we have a specific purpose behind writing $\\kappa$ as\nabove).\n\\begin{lem}\\label{reduced}\n The expression $\\tau_1\\cdots\\tau_{n-1}$\nfor $\\kappa$ is reduced.\n\\end{lem}\n\\begin{proof}\n\n\\noindent\\textbf{Claim:} $\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots\ns_{j+1}(\\alpha_j),1\\le i\\le n-2, 0\\le j\\le n-2,\n\\tau_1\\cdots\\tau_{i}(\\alpha_{n-1})$, \n\n\\noindent $1\\le i\\le n-2$ are positive\nreal roots.\n\nNote that the Claim implies the required result. 
We divide the\nproof of the Claim into the following three cases.\n\n\\noindent\\textbf{Case 1:} \\emph{To show:}\n$\\tau_1\\cdots\\tau_{i}(\\alpha_{n-1}),1\\le i\\le n-2$ is a positive\nreal root.\n\n\\noindent We have\n\n\\noindent $\\tau_1\\cdots\\tau_{i}(\\alpha_{n-1})$\n\n\\noindent $=\\tau_1\\cdots\\tau_{i-1}(\\delta+\\alpha_{n-2}+\\alpha_{n-1})$\n(cf. \\S \\ref{facts}, Fact 6)\n\n\\noindent $=\\tau_1\\cdots\\tau_{i-2}(2\\delta+\\alpha_{n-3}+\\alpha_{n-2}+\n\\alpha_{n-1})$ (cf. \\S \\ref{facts}, Fact 3)\n\n\\noindent $=\\tau_1\\cdots\\tau_{i-k}(k\\delta+\\alpha_{n-k-1}+\\cdots+\n\\alpha_{n-1}), 0\\le k\\le i-1$ (cf. \\S \\ref{facts}, Fact 3)\n\n\\vskip.2cm\\noindent Note that $k\\le i-1$ implies that $n-k-1\\ge n-i\\ge 2$,\nand hence we can apply \\S \\ref{facts}, Fact 3. Corresponding to\n$k=i-1$, we obtain\n$\\tau_1\\cdots\\tau_{i}(\\alpha_{n-1})=\\tau_1((i-1)\\delta+\n\\alpha_{n-i}+\\cdots+\\alpha_{n-1}$ ). Hence once again using \\S\n\\ref{facts}, Fact 3, we obtain\n\n $$\\tau_1\\cdots\\tau_{i}(\\alpha_{n-1})=\ni\\delta+\\alpha_{n-i-1}+\\cdots+\\alpha_{n-1},\\ 1\\le i\\le n-2$$ (note\nthat for $1\\le i\\le n-2$, $n-i-1\\ge 1$).\n\n\\noindent\\textbf{Case 2:} \\emph{To show:}\n$\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots s_{1}(\\alpha_{0}), 1\\le i\\le\nn-2$ is a positive real root.\n\n\\noindent We have\n\n\\noindent $\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots s_{1}(\\alpha_{0})$\n\n\\noindent $=\\tau_1\\cdots\\tau_{i}(\\delta+\\alpha_{n-1})$ (cf. \\S\n\\ref{facts}, Fact 5)\n\n\\noindent $=\\tau_1\\cdots\\tau_{i-1}(2\\delta+\\alpha_{n-2}+\\alpha_{n-1})$\n(cf. \\S \\ref{facts}, Fact 6)\n\n\\noindent $=\\tau_1\\cdots\\tau_{i-k}((k+1)\\delta+\\alpha_{n-k-1}+\n\\cdots+\\alpha_{n-1}),0\\le k\\le i-1$ (cf. \\S \\ref{facts}, Fact 3)\n\n\\vskip.2cm\\noindent Note that as in Case 1, for $k\\le i-1$, we have,\n$n-k-1\\ge 2$, and therefore \\S \\ref{facts}, Fact 3 holds.\nCorresponding to $k=i-1$, we have,\n\n\\noindent $\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots\ns_{1}(\\alpha_{0})=\\tau_1(i\\delta+\\alpha_{n-i}+\\cdots+\\alpha_{n-1})$.\n Hence once again using \\S\n\\ref{facts} Fact 3, we obtain $$\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots\ns_{1}(\\alpha_{0})=(i+1)\\delta+\\alpha_{n-i-1}+\\cdots+\\alpha_{n-1},\n1\\le i\\le n-2$$ (note that for $1\\le i\\le n-2$, $n-i-1\\ge 1$).\n\n\\noindent\\textbf{Case 3:} \\emph{To show:}\n$\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots s_{j+1}(\\alpha_j), 1\\le i\\le\nn-2, j\\ne 0,n-1$ is a positive real root.\n\n\\noindent We have $\\tau_1\\cdots\\tau_{i}s_{n-1}\\cdots s_{j+1}(\\alpha_j)=\\tau^i(\\alpha_{j}+\\alpha_{j+1}+\\cdots+\\alpha_{n-1})$\n(cf. \\S \\ref{facts}, Fact 4)\n\\noindent $=\\tau^i(\\alpha_j)+\\ldots+\\tau^i(\\alpha_{n-2})+\\tau^i(\\alpha_{n-1})$\nwhich is positive because each term is positive (cf. Case 1 and Remark \\ref{cyclic}).\n\\end{proof}\n\\begin{cor}\\label{length}\n$\\ell(\\kappa)=n(n-1)$.\n\\end{cor}\n\\subsection{Minimal representative-property for $\\kappa$}\\label{rep}\n\n\\begin{lem}\\label{minimal} $\\kappa(\\alpha_i)$ is a real positive root\nfor all $i\\ne 0$.\n\\end{lem}\n\\begin{proof} \nFor $1\\leq i\\leq n-2$, $\\kappa(\\alpha_i)=\\alpha_i$ is positive from Remark \\ref{cyclic}.\nFurther, $\\tau_1\\cdots\\tau_{n-1}(\\alpha_{n-1})$\n\n\\noindent $=\\tau_1\\cdots\\tau_{n-2}(\\delta+\\alpha_{n-2}+\\alpha_{n-1})$\n(cf. \\S \\ref{facts}, Fact 6))\n\n\\noindent $=\\tau_1\\cdots\\tau_{n-k}((k-1)\\delta+\\alpha_{n-k}+\\cdots+\n\\alpha_{n-1}), 1\\le k\\le n-1$ (cf. 
\\S \\ref{facts}, Fact 3))\n\n\\noindent Note that for $1\\le k\\le n-2, n-k\\ge 2$ and hence \\S\n\\ref{facts}, Fact 3 holds. Corresponding to $k=n-1$, we get,\n\n\\noindent $\\tau_1\\cdots\\tau_{n-1}(\\alpha_{n-1})$\n\n\\noindent $=\\tau_1((n-2)\\delta+\\alpha_{1}+\\cdots+ \\alpha_{n-1})$\n\n\\noindent $=n\\delta+\\alpha_{n-1}$ (cf.\\S \\ref{facts}, Facts 1,2 )\n\n\\end{proof}\n\n\\begin{cor}\\label{minimal'}\n$\\kappa$ is a minimal representative in ${\\widehat W}\/\\widehat W_{G_0}$.\n\\end{cor}\n\\noindent For $w\\in {\\widehat{W}}$, we shall denote the Schubert variety in\n$\\mathcal{G}\/G_0$ by $X_{G_0}(w)$.\n\\begin{lem}\\label{stable}\n$X_{G_0}(\\kappa)$ is stable for multiplication on the left by\n$G_0$.\n\\end{lem}\n\\begin{proof}\nIt suffices to show that\n$$s_i\\kappa\\le\\kappa(\\,mod\\,\\widehat{W}_{G_0}), 1\\le i\\le n-1\\leqno{(*)}$$ \nThe assertion (*) is clear if $i=n-1$. \nObserve that $ws_\\alpha=s_{w(\\alpha)}w$.\nIn particular, since $\\kappa$ fixes $\\alpha_i$, $1\\leq i\\leq n-2$, it follows $s_i\\kappa=\\kappa s_i=\\kappa(\\,mod\\,\\widehat W_{G_0})$, for $1\\leq i\\leq n-2$.\n\\end{proof}\n\n\\begin{lemma}\n\\label{sn2}\nLet \\ensuremath{\\mathcal P}\\ be the parabolic subgroup of $\\mathcal G$ corresponding to the choice of simple roots $\\left\\{\\alpha_1,\\cdots\\alpha_{n-2}\\right\\}$.\nThe element $\\kappa$ is a minimal representative in $\\widehat W_\\ensuremath{\\mathcal P}\\backslash\\widehat W$.\n\\end{lemma}\n\\begin{proof}\nIt is enough to show that $s_i\\kappa>\\kappa$, or equivalently, $\\kappa^{-1}(\\alpha_i)>0$ for $1\\leq i\\leq n-2$.\nThis follows from Remark \\ref{cyclic}.\n\\end{proof}\n\n\\begin{rem}\nFor the discussion in \\S \\ref{kap}, \\S \\ref{rep}, concerning reduced expressions, minimal-representative property and $G_0$-stability, we have used the expression for elements of $\\widehat{W}$, $\\widehat{W}$ being considered as a Coxeter group. One may as well carry out the discussion using the permutation presentations for elements of $\\widehat{W}$.\n\\end{rem}\n\n\\begin{theorem}\n[A reduced expression for $\\kappa_0$]\nThe element $\\kappa_0(=w'\\tau^{n-1})$ is the maximal representative of $\\kappa$ in $\\widehat W_{G_0}\\backslash\\widehat W$, i.e. the unique element in $\\widehat W$ such that \n$$X(\\kappa_0)={\\overline{G_0\\kappa\\mathcal{B}}}(mod\\,\\mathcal{B})$$\nIn particular, $X(\\kappa_0)$ is (left) $G_0$-stable.\nLet $\\underline w'$ be a reduced expression for the longest element $w'$ in $\\widehat W_\\ensuremath{\\mathcal P}$ and $\\underline\\tau$ the reduced expression $s_{n-1}\\cdots s_1s_0$.\nThen $\\underline w'\\underline\\tau^{n-1}$ is a reduced expression for $\\kappa_0$.\n\\end{theorem}\n\\begin{proof}\nObserve that $\\underline w=\\underline w's_{n-1}\\cdots s_1$ is a reduced expression for the longest element $w$ in $\\widehat W_{G_0}$, and so $w'\\kappa=ws_0\\tau^{n-2}$.\nLemma \\ref{sn2} implies that $\\underline w'\\underline\\tau^{n-1}$ is a reduced expression. \nIn particular, $$l(\\kappa_0)=l(w'\\kappa)=l(w's_{n-1}\\cdots s_1)+l(s_0\\tau^{n-2})=l(w)+l(s_0\\tau^{n-2})$$\nIt remains to show that $w'\\kappa$ is a maximal representative in $\\widehat W_{G_0}\\backslash\\widehat W$, i.e $s_iw'\\kappaj$ should have order $>0$, and $h_{ij},in$ being arbitrary.\n\nWe prove the Claim by induction on $n$. We shall first show that\n$A_{n-1}$ can be identified in a natural way as a submatrix of\n$A_n$. 
We want to think of the rows of $A_n$ forming $(n-1)$\nblocks (referred to as \\emph{row-blocks} in the sequel) of size\n$n-1,n-2,\\cdots,n-j,\\cdots,1$, namely, the $j$-th block consists\nof $n-j$ rows given by the coefficients occurring on the left hand\nside of (**) for $j\\ge 2$, and for $j=1$, the first block consists\nof $n-1$ rows given by the coefficients occurring on the left hand\nside of the following $n-1$ equations:\n$$-a_{12}g_{2n}^{(1)}=-1,\\ -g_{2n}^{(i)}-\\sum_{3\\le k\\le\nn}\\,a_{2k}g_{kn}^{(i)}=0,2\\le i\\le n-1$$ Similarly, we want to\nthink of the columns of $A_n$ forming $(n-1)$ blocks (referred to\nas \\emph{column-blocks} in the sequel) of size\n$n-1,n-2,\\cdots,n-j,\\cdots,1$, namely, the $j$-th block consisting\nof $n-j$ columns indexed by $g_{jn}^{(i)}, j-1\\le i\\le n$. Then\nindexing the $n-j$ rows in the $j$-th row-block as $j, j+1,\\cdots,\nn-1$, the entries in the rows of the $j$-th row-block have the\nfollowing description:\n\nThe non-zero entries in the $i$-th row in the $j$th row-block\n($j\\ge 2$) are\n\n\\noindent $1, -a_{23},-a_{24}, \\cdots, -a_{2\\,i+1} $ respectively,\noccurring at the columns indexed by\n\n\\noindent $g_{2n}^{(i-1)},g_{3n}^{(i)},\\cdots,g_{i+1\\,n}^{(i)}$.\n\nThe non-zero entries in the $i$-th row in the first row-block\n($j\\ge 2$) are\n\n\\noindent $-a_{12}, -a_{13}, \\cdots, -a_{2\\,i+1} $ respectively,\noccurring at the columns indexed by\n\n\\noindent $g_{2n}^{(i)},g_{3n}^{(i)},\\cdots,g_{i+1\\,n}^{(i)}$.\n\nFrom this it follows that $A_{n-1}$ is obtained from $A_n$ by\ndeleting the first row in each row-block and the first column in\neach column-block. For instance, we describe below $A_5$ and\n$A_4$; for convenience of notation, we denote $b_{ij}=-a_{ij}$. We\nhave,\n$$A_5=\\left(\\begin{array}{>{\\columncolor{lightgray}}cccc>{\\columncolor{lightgray}}ccc>{\\columncolor{lightgray}}cc>{\\columncolor{lightgray}}c}\n\\rowcolor{lightgray}\n{b}_{12}&0&0&0 &0&0&0&0&0&0\\\\\n0&b_{12}&0&0&b_{13}&0&0&0&0&0\\\\\n0&0&b_{12}&0&0 &b_{13}&0&b_{14}&0&0\\\\\n0&0&0&b_{12} &0&0&b_{13}&0&b_{14}&b_{15}\\\\\n\\rowcolor{lightgray}\n1&0&0&0 &b_{23}&0&0&0&0&0\\\\\n0&1&0&0&0&b_{23}&0&b_{24}&0&0\\\\\n0&0&1&0 &0&0&b_{23}&0&b_{24}&b_{25}\\\\\n\\rowcolor{lightgray}\n0&0&0& &1&0&0&b_{34}&0&0\\\\\n0&0&0&0 &0&1&0&0&b_{34}&b_{35}\\\\\n\\rowcolor{lightgray} 0&0&0&0 &0&0&0&1&0&b_{45}\n\\end{array}\\right)$$\n\n$$A_4=\\begin{pmatrix}\n\nb_{12}&0&0&0&0&0\\\\\n0&b_{12}&0 &b_{13}&0&0\\\\\n0&0&b_{12} &0&b_{13}&b_{14}\\\\\n1&0&0&b_{23}&0&0\\\\\n0&1&0 &0&b_{23}&b_{24}\\\\\n0&0&0 &1&0&b_{34}\n\\end{pmatrix}$$\nAs rows (respectively columns) of $A_5$, the positions of the\nfirst row (respectively, the first column) in each of the four\nrow-blocks (respectively columns-blocks) in $A_5$ are given by\n$1,5,8,10$; deleting these rows and columns in $A_5$, we get\n$A_4$. These rows and columns are highlighted in $A_5$.\n\nAs above, let $b_{ij}=-a_{ij}$. Now expanding $A_n$ along the\nfirst row, we have that $|A_n|$ equals $b_{12}|M_{1}|, M_{1}$\nbeing the submatrix of $A_n$ obtained by deleting the first row\nand first column in $A_n$ (i.e., deleting the first row\n(respectively, the first column) in the first row-block\n(respectively, the first column-block)). Now in $M_{1}$, in the\nfirst row in the second row-block the only non-zero entry is\n$b_{23}$, and it is a diagonal entry in $M_{1}$. 
Hence expanding\n$M_{1}$ through this row, we get that $|A_n|$ equals\n$b_{12}b_{23}|M_{2}|, M_{2}$ being the submatrix of $A_n$ obtained\nby deleting the first rows (respectively, the first columns)\nin the first two row-blocks (respectively, the first two\ncolumn-blocks) in $A_n$. Now in $M_{2}$, in the first row in the\nthird row-block, the only non-zero entry is $b_{34}$, and it is a\ndiagonal entry in $M_{2}$. Hence expanding $M_{2}$ along this row,\nwe get that $|A_n|$ equals $b_{12}b_{23}b_{34}|M_{3}|, M_{3}$\nbeing the submatrix of $A_n$ obtained by deleting the first\nrows (respectively, the first columns) in the first three\nrow-blocks (respectively, the first three column-blocks) in $A_n$.\nThus proceeding, at the $(n-1)$-th step, we get that $|A_n|$\nequals $b_{12}b_{23}\\cdots b_{n-1\\,n}|A_{n-1}|$. By induction, we\nhave $|A_{n-1}|=(-1)^{n-1\\choose 2} \\prod_{1\\le i\\le\nn-2}a_{i\\,i+1}^{n-1-i}$. Substituting back for $b_{ij}$'s, we\nobtain $|A_n|=(-1)^{n\\choose 2}\\prod_{1\\le i\\le\nn-1}a_{i\\,i+1}^{n-i}$. It remains to verify the statement of the\nclaim when $n=2$ (starting point of induction). In this case, we\nhave $$\\begin{gathered}g=\\begin{pmatrix}0&1\\\\\n-1&g_{22}\\end{pmatrix}, \\kappa=\\begin{pmatrix}t&0\\\\\n0&t^{-1}\\end{pmatrix},\\\\\n{\\underline{Y}^{-1}}=\n\\begin{pmatrix}1&-t^{-1}a_{12}\\\\\n0&1\\end{pmatrix},h=\\begin{pmatrix}a_{12}&t^{-1}-t^{-2}a_{12}g_{22}\\\\\n-t&-t^{-1}g_{22}\\end{pmatrix} \\end{gathered}$$ Hence the linear\nsystem consists of the single equation $$-a_{12}g_{22}^{(1)}=-1$$\nHence $A_2$ is the $1\\times 1$ matrix $(-a_{12})$, and\n$|A_2|=-a_{12}$, as required.\n\\end{proof}\n\n\\section{Lusztig's map}\\label{lumap} Consider $\\mathcal{N}$,\nthe variety of nilpotent elements in $\\frak{g}$ (the Lie algebra\nof $G$). In this section, we spell out (Lusztig's) isomorphism\nwhich identifies $X_{G_0}(\\kappa)$ as a compactification of $\\mathcal{N}$.\n\\subsection{The map $\\psi$:}\\label{luss} Consider the map $$\\psi:\\mathcal{N}\\rightarrow\n \\mathcal{G}\/G_0,\n\\psi(N)=(Id+t^{-1}N+t^{-2}N^2+\\cdots)(mod\\,G_0),N\\in\\mathcal{N}$$\nNote that the sum on the right hand side is finite, since $N$ is\nnilpotent. We now list some properties of $\\psi$.\n\n\\noindent\\textbf{(i) $\\psi$ is injective:} Let $\\psi(N_1)=\\psi(N_2)$.\nDenoting $\\lambda_i:=\\psi(N_i), i=1,2$, we get that\n$\\lambda_2^{-1}\\lambda_1$ belongs to $G_0$. On the other hand, \n$$\\lambda_2^{-1}\\lambda_1=(Id-t^{-1}N_2)(Id+t^{-1}N+t^{-2}N^2+\\cdots)$$\nNow $\\lambda_2^{-1}\\lambda_1$ is integral.\nIt follows that both sides of the above equation equal $Id$. This\nimplies $\\lambda_1=\\lambda_2$ which in turn implies that\n$N_1=N_2$. Hence we obtain the injectivity of $\\psi$.\n\n\\noindent\\textbf{(ii) $\\psi$ is $G$-equivariant:} We have\n\n\\noindent $\\psi(g\\cdot N)=\\psi(gNg^{-1})$\n\n\\noindent $=(Id+t^{-1}gNg^{-1}+t^{-2}gN^2g^{-1}+\\cdots)(mod\\,G_0)$\n\n\\noindent $=g(Id+t^{-1}N+t^{-2}N^2+\\cdots)g^{-1}(mod\\,G_0)$\n\n\\noindent $=g(Id+t^{-1}N+t^{-2}N^2+\\cdots)(mod\\,G_0)$ (since $g^{-1}\\in\nG_0$)\n\n\\noindent $=g\\psi(N)$ \n\n\n\\begin{prop}\\label{lu}\nFor $N\\in \\mathcal{N}, \\psi(N)$ belongs to $X_{G_0}(\\kappa)$.\n\\end{prop}\n\n\\begin{proof}\nWe divide the proof into two cases.\n\n\\noindent \\textbf{Case 1:} Let $N$ be upper triangular, say,\n$$N=\\left(n_{ij}\\right)_{1\\le i,j\\le n}$$ where $n_{ij}=0$,\nfor $i\\ge j$; note that $N\\in {\\underline{b}_u},\n{\\underline{b}_u}$ being the Lie algebra of $B_u$, the unipotent\nradical of $B$. 
We may work in the open subset $x_{ii+1}\\ne 0,1\\le i\\le\nn-1$ in ${\\underline{b}_u}$, $\\sum_{1\\le i6$. \n\\end{prop}\n\\begin{proof}\nFrom \\S\\ref{maps}, we may assume $p(Y)=1+\\sum\\limits_{i\\geq1}p_i(t)Y^i$. \nWe first claim that $p_1(t)\\notin A$.\nAssume the contrary.\nFor $$Z=\\begin{pmatrix}0&1&0\\\\0&0&0\\\\0&0&0\\end{pmatrix}$$\nwe see that $Z^2=0$, and so $p(Z)=1+p_1(t)Z\\in\\mathcal B$. \nIn particular, $\\psi_p(Z)=\\psi_p(0)$, contradicting the injectivity of $\\psi_p$.\n\\\\\n\\\\\nWe now write\n $ p(Y)=1-t^{-a}qY-t^{-b}rY^2$\nwhere \\begin{itemize}\n \\item $q,r\\in A$. \n \\item $q(0)\\neq 0$. \n \\item $a\\geq 1$.\n \\item Either $r=0$ or $r(0)\\neq 0$.\n\\end{itemize}\nWe now fix $Y= \\begin{pmatrix}\n 0 & 1 & 0 \\\\\n 0 & 0 & 1 \\\\\n 0 & 0 & 0 \n \\end{pmatrix}$ and \n$g= \\begin{pmatrix}\n 0 & 0 &-1 \\\\\n 0 &-1 & 0 \\\\\n -1 & 0 & 0 \n \\end{pmatrix}$, so that \n$$gp(Y)=\\begin{pmatrix}\n 0 & 0 & -1\\\\\n 0 &-1 &t^{-a}q\\\\\n -1 &t^{-a}q&t^{-b}r\n \\end{pmatrix}$$\nOur strategy is to find elements $C,D\\in\\mathcal B$ such that $Cgp(Y)D\\in N(K[t,t^{-1}])$.\nWe can then identify the Bruhat cell containing $gp(Y)$, and so identify the minimal Schubert variety containing $\\psi_p(g,Y)$.\nThe choice of $C,D$ depends on the values of certain inequalities, which we divide into $4$ cases.\nWe draw here a decision tree showing the relationship between the inequalities and the choice $C,D$.\n\n\\tikzstyle{level 1}=[level distance=2.5cm, sibling distance=2cm]\n\\tikzstyle{level 2}=[level distance=3.5cm, sibling distance=1.3cm]\n\n\\tikzstyle{bag} = [text width=4em, text centered]\n\\tikzstyle{end} = [circle, minimum width=6pt,fill, inner sep=0pt]\n\n\\begin{tikzpicture}[grow=right, sloped]\n\\node[end] {}\n child {\n node[bag] {Case $1$} \n edge from parent \n node[above] {$r=0$}\n }\n child {\n node[end] {} \n child {\n node[bag] {Case $1$}\n edge from parent\n node[below] {$\\qquad b\\leq a$}\n }\n child {\n node[bag] {Case $2$} \n edge from parent\n node[above] {$\\qquad al(\\lambda_q)=6a\\geq 6$.\n\\item\nSuppose $a6$.\n\\item\nSuppose either $b>2a$, or $b=2a$ and $r+q^2\\neq0$. \nIn particular, $r+q^2t^{b-2a}\\neq0$ and $b\\geq2$.\nLet \\begin{align*}\n C&= \\begin{pmatrix}\n -r-q^2t^{b-2a} &-qt^{b-a} &-t^b \\\\\n 0 &-\\dfrac{r}{r+q^2t^{b-2a}} & \\dfrac{qt^{b-a}}{r+q^2t^{b-2a}} \\\\\n 0 & 0 & \\dfrac{1}{r} \n \\end{pmatrix}\\\\\n D&= \\begin{pmatrix}\n 1 & 0 & 0 \\\\\n \\dfrac{qt^{b-a}}{q^2t^{b-2a}+r} & 1 & 0 \\\\\n \\dfrac{t^b}{q^2t^{b-2a}+r} & -\\dfrac{qt^{b-a}}{r} & 1 \n \\end{pmatrix} \n\\end{align*} \nWe compute\\begin{align*}\n Cgp(Y)D= \\begin{pmatrix}\n t^b & 0 & 0 \\\\\n 0 & 1 & 0 \\\\\n 0 & 0 & t^{-b} \n \\end{pmatrix}\n\\end{align*}\nIt follows $gp(Y)\\in\\mathcal B\\lambda_q\\mathcal B$, where $q=-b\\alpha_1^\\vee-b\\alpha_2^\\vee$.\nWe calculate\\begin{align*}\n l(\\lambda_q)&=\\lvert\\alpha_1(q)\\rvert+\\lvert\\alpha_2(q)\\rvert+\\lvert\\alpha_1(q)+\\alpha_2(q)\\rvert\\\\\n &=b+b+2b\\\\\n &=4b\\geq 8\n\\end{align*}\n\\end{enumerate}\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:intro}\n\n\\subsection{Motivation} Convolutional networks (CNNs) have shown near human performance in image classification~\\cite{Kolesnikov2020BigT} over non-structured dense networks.\nHowever, CNNs are vulnerable to specifically designed adversarial attacks~\\cite{adversarialexamples2015}. 
Several papers in adversarial machine learning literature reveal the brittleness of convolutional networks to adversarial examples. For example, gradient based methods \\cite{Goodfellow2018existence,Kurakin2017} design a perturbation by taking steps proportional to the gradient of the loss of the input image $x$ in a given $\\ell_p$ neighborhood.\nThis has led to refined robust training approaches, or defenses, which train the network to see adversarial examples during the training stage and produce the unaltered label corresponding to it \\cite{madry2018towards,trades}.\n\nVision transformers (ViT) were recently introduced \\cite{Dosovitskiy2021AnII}, as a new network architecture inspired by transformers \\cite{Vaswani2017AttentionIA} which have been successfully used for modeling language data. ViTs rely on self attention \\cite{Vaswani2017AttentionIA}, a mechanism that allows the network to find correlations between spatially separated parts of the input data. In the context of vision, these are small non-overlapping \\textit{patches} which serve as \\textit{tokens} to the transformer. ViTs and more recently distillation based Data Efficient Image Transformers (DeIT) \\cite{Touvron2021TrainingDI} have shown to have competitive performance on classification tasks and rely on pre-training on very large datasets. It is of imminent interest to therefore study the robustness of self-attention based networks.\n\nThere has been some preliminary work on adversarial robustness of vision transformers. \\cite{Bhojanapalli2021UnderstandingRO} show that under certain regimes, vision transformers are at least as robust to $\\ell_2$ and $\\ell_\\infty$ PGD attacks as ResNets. While $\\ell_2$ and $\\ell_\\infty$ threat models are useful in understanding fundamental properties of deep networks, they are not realizable in the real world and do not capture actual threats. Transformer based networks also introduce the need for tokenizing the image, leading to an encoded bias in the input. It is therefore important to understand the sensitivity of the architecture to token level changes rather than to the full image.\n\nSpecifically, we attempt to answer:\n\\emph{Are transformers robust to perturbations to a subset of the input tokens?} We present a systemic approach to answer this query by constructing token level attacks by leveraging block sparsity constraints.\n\n\\subsection{Our contributions} \n\nIn this paper, we propose a patch based block sparse attack where the attack budget is defined by the number of tokens the attacker is allowed to perturb. We identify top salient pixels using the magnitude of their loss gradients and perturb them to create attacks. We extend a similar idea to block sparsity by constraining salient pixels to lie in non-overlapping patches. We probe three families of neural architectures using our token attack; self-attention (ViT~\\cite{Dosovitskiy2021AnII}, DeIT~\\cite{Touvron2021TrainingDI}), convolutional (Resnets~\\cite{He2016DeepRL} and WideResNet~\\cite{Zagoruyko2016WideRN}) and MLP based (MLP Mixer~\\cite{Tolstikhin2021MLPMixerAA}).\n\nWe make the following contributions and observations:\n\\begin{enumerate}[nolistsep, left=0pt]\n\\item We propose a new attack which imposes block sparsity constraints, allowing for \\textit{token attacks} for Transformers. 
\n\n\\item We show classification performance of all architectures on token attacks of varying patch sizes and number of patches.\n\n\\item We demonstrate that for token attacks matching the architecture token size, vision transformers are less resilient to token attacks as compared to MLP Mixers and ResNets.\n\n\\item For token attacks smaller than architecture token size, vision transformers are comparably robust to ResNets.\n\n\\item We also specifically note the shortcomings of previous studies on robustness of transformers~\\cite{Bhojanapalli2021UnderstandingRO}, where ViTs are shown to be more robust than ResNets. \n\n\\item With our token attacks we can break Vision transformers using only $1\\%$ of pixels as opposed to $\\ell_2$ or $\\ell_\\infty$ attacks which rely on perturbing all image pixels.\n\n\\end{enumerate}\nWe therefore motivate designing attacks adaptively modeled after neural architectures.\n\n\n\\subsection{Related work} \n\n\n\\noindent\\textbf{\\textit{Threat models:}} Deep networks are vulnerable to imperceptible changes to input images as defined by the $\\ell_\\infty$ distance \\cite{Szegedy2014intriguing}. There exist several test-time attack algorithms with various threat models: $\\ell_p$ constrained~\\cite{adversarialexamples2015, Kurakin2017, Carlini2017cwl2}, black-box~\\cite{Ilyas2018blackbox, Ilyas2018alimitedqueries}, geometric attacks~\\cite{engstrom2019a, Xiao2018SpatiallyTA}, semantic and meaningful attacks~\\cite{joshi2019semantic, zhang2019camou, song2018constructing} and data poisoning based~\\cite{Shafahi2018poisonfrogs}.\n\n \n\\noindent\\textbf{\\textit{Defenses:}} Due to the vast variety of attacks, adversarial defense is a non-trivial problem. Empirical defenses as proposed by \\cite{madry2018towards}, \\cite{trades}, and \\cite{jagatap2020adversarially} rely on adversarial data augmentation and modified loss functions to improve robustness. Several works~\\cite{samangouei2018defensegan, yin2020defense} propose preprocessing operations as defenses. However, such defenses often fail to counter adaptive attacks~\\cite{Athalye2018obfuscated}. \\cite{wong2018provable}, \\cite{cohen2019certified} and \\cite{salman2019provably} provide methods that guarantee robustness in terms of a volume around an input. Such methods often fail or provide trivial certificates for larger networks, and large high resolution images. Apart from algorithmic approaches, newer papers discuss optimal hyper-parameter tuning as well as combination of regularizers from aformentioned techniques, choice of activation functions, choice of architecture and data augmentation to extract best possible robust accuracies using pre-existing algorithms \\cite{Gowal2020UncoveringTL, Pang2021BagOT}.\n\n\\noindent\\textbf{\\textit{Patch attacks:}} Patch attacks~\\cite{Brown2017advpatch} are practically realizable threat model. \\cite{zolfi2021translucent, thys2019fooling, wu2020making} have successfully attacked detectors and classifiers with physically printed patches. In addition, \\cite{croce2019sparse, croce2019sparse} also show that spatially limited sparse perturbations suffice to consistently reduce the accuracy of classification model. 
This motivates our analysis of the robustness of recently invented architectures towards sparse and patch attacks.\n\n\n\n\\noindent\\textbf{\\textit{Vision transformers}}\nWhile convolutional networks have successfully achieved near human accuracy on massive datasets~\\cite{Kolesnikov2020BigT, Xie2020SelfTrainingWN}, there has been a surge of interest in leveraging self-attention as an alternative approach. Transformers~\\cite{Vaswani2017AttentionIA} have been shown to be extremely successful at language tasks~\\cite{Devlin2019BERTPO, Sanh2019DistilBERTAD, Brown2020LanguageMA}. \\cite{parmar2018image} extend this for image data, where in they use pixels as tokens. While they some success in generative tasks, the models had a large number of parameters and did not scale well. \\cite{Dosovitskiy2021AnII} improve upon this by instead using non-overlapping patches as tokens and show state of the art classification performance on the ImageNet dataset. \\cite{Touvron2021TrainingDI} further leverage knowledge distillation to improve efficiency and performance. Further improvements have been suggested by \\cite{Dai2021CoAtNetMC}, \\cite{Wu2021CvTIC} and \\cite{Touvron2021GoingDW} to improve performance using architectural modifications, deeper networks and better training methods. In parallel, \\cite{Tolstikhin2021MLPMixerAA} instead propose a pure MLP based architecture that achieves nearly equivalent results with faster training time. However, studies on generalization and robust performance of such networks is still limited. We discuss a few recent works below. \n\n\n\\noindent\\textbf{\\textit{Attacks on vision transformers:}\n\\cite{Bhojanapalli2021UnderstandingRO,hendrycks2020pretrained} analyse the performance of vision transformers in comparison to massive ResNets under various threat models and concur that vision transformers (ViT) are at least as robust as Resnets when pretrained with massive training datasets. \\cite{mahmood2021robustness} show that adversarial examples do not transfer well between CNNs and transformers, and build an ensemble based approach towards adversarial defense. \\cite{paul2021vision} claims that Transformers are robust to a large variety of corruptions due to attention mechanism.\n\n\n\\begin{comment}\n\\section{Problem formulation}\n\nSpecifically, if forward map between the inputs $x \\in \\mathbb{R}^d$ and output logits $y\\in \\mathbb{R}^m$ corresponding to classes $\\{1,\\dots m\\}$, is modelled via a neural network as $y = f(w;x)$; $w$ being the set of trainable weights, then neural prediction $\\hat{y}(x) = f(\\hat{w};x)$, can be very sensitive to changes in $x$. For a bounded perturbation to a test image input, $$\\hat{y}_i = f(\\hat{w};x_i+\\delta_i)$$ where $\\delta_i$ represents the perturbation, the predicted label $\\hat{y_i}$ can be made \\emph{arbitrarily} different from the true label $\\max_j y_i$, $j\\in\\{1,\\dots m\\}$.\n\n\n\n\n\n\n\\subsection{Transformer-based models}\n\n\\textcolor{red}{directly present vision transformer formulation, we dont need to explain how vision transformer itself was derived}\n\nThe Transformer block was introduced by \\cite{Vaswani2017AttentionIA}, for text input. The basic idea of the Transformer model is to leverage an efficient form of ``self-attention''. 
A standard attention block is formally defined as,\n\\begin{equation}\n {\\mathbf{x}}_{out} = \\text{Softmax}\\left(\\frac{{\\mathbf{x}}{\\mathbf{W}}_Q {\\mathbf{W}}_k {\\mathbf{x}}^T}{\\sqrt{d}}\\right) {\\mathbf{x}} {\\mathbf{W}}_V,\n \\label{eq:selfatt}\n\\end{equation}\nwhere ${\\mathbf{x}} \\in {\\mathbb{R}}^{d\\times n}$ is an input string, ${\\mathbf{x}}_{out} \\in {\\mathbb{R}}^{d\\times n}$ is the output of the self-attention block, ${\\mathbf{W}}_Q,~{\\mathbf{W}}_K~\\text{and}{\\mathbf{W}}_V$ are the learnable \\emph{query}, \\emph{key} and the \\emph{value} matrices. Note that ${\\mathbf{x}}$ is actually a concatenation of $n$ ``tokens'' of size $d$, which each represent some part of the input. \n\\emph{Multi-headed self attention} stacks multiple such blocks in a single layer. The Transformer model has multiple such layers followed by a final output attention layer with a \\emph{classification token}. \\cite{Dosovitskiy2021AnII} proposed using using non-overlapping patches of $16\\times16$ as tokens to ensure near state of the art accuracies. \\cite{Touvron2021TrainingDI} propose a data-efficient distillation based method to train Transformers and improves upon both the sample complexity and the performance over Vision Transformers. \n\\end{comment}\n\n\n\n\n\n\n\n\\section{Token Attacks on Vision transformers}\n\\label{sec:blocksparse}\n\n\n\\noindent\\textbf{Threat Model:} We define the specific threat model that we consider in our analysis. Let ${\\mathbf{x}} \\in {\\mathbb{R}}^d$ be a $d$-dimensional image, and $f:{\\mathbb{R}}^d \\to [m]$ be a classifier that takes ${\\mathbf{x}}$ as input and outputs one of $m$ class labels. For our attacks, we focus on sparsity as the constraining factor. Specifically, we restrict the number of pixels or blocks of pixels that an attacker is allowed to change. We consider ${\\mathbf{x}}$ as a concatenation of $B$ blocks $[{\\bm{x}}_1, \\dots {\\bm{x}}_b, \\dots, {\\bm{x}}_B]$, where each block is of size $p$. In order to construct an attack, the attacker is allowed to perturb up to $K\\leq B$ such blocks for a $K$-token attack. We also assume a white-box threat model, that is, the attacker has access to all knowledge about the model including gradients and preprocessing. We consider two varying attack budgets. In both cases we consider a block sparse token budget, where we restrict the attacker to modifying $K$ patches or ``tokens\" (1) with an unconstrained perturbation allowed per patch (2) a ``mixed norm'' block sparse budget, where the pixelwise perturbation for each token is restricted to an $\\ell_\\infty$ ball with radius $\\epsilon$ defined as $K, \\epsilon$-attack.\n\n\n\n\\noindent\\textbf{Sparse attack:}\nTo begin, consider the simpler case of a sparse ($\\ell_0$) attack. This is a special case of the block sparse attack with block size is \\emph{one}. Numerous such attacks have been proposed in the past (refer to appendix). 
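To make the $(K,\epsilon)$ budget of the threat model above concrete, the short routine below checks whether a candidate perturbation respects it. This is our own illustrative sketch in PyTorch; the function and variable names are ours and are not taken from any released code.
\begin{verbatim}
import torch

def satisfies_token_budget(delta, patch_size, K, eps=None):
    """Check a (K, eps) token budget for a perturbation delta of shape (C, H, W).
    H and W are assumed to be divisible by patch_size."""
    C, H, W = delta.shape
    p = patch_size
    # split into non-overlapping p x p blocks -> (num_tokens, C*p*p)
    patches = delta.unfold(1, p, p).unfold(2, p, p)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, C * p * p)
    touched = patches.abs().max(dim=1).values > 0   # tokens actually modified
    if int(touched.sum()) > K:
        return False                                # more than K tokens used
    if eps is not None and float(delta.abs().max()) > eps:
        return False                                # violates the l_inf part
    return True
\end{verbatim}
The unconstrained token attack corresponds to \texttt{eps=None}; the mixed-norm $(K,\epsilon)$-attack additionally enforces the per-pixel bound.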
The general idea behind most such attacks is to analyse which pixels in the input image tend to affect the output the most \n $S(x_{i}) := \\left |\\frac{\\partial L(f({\\mathbf{x}}, {\\mathbf{y}}))}{\\partial x_{i}}\\right|$,\n \nwhere $L(\\cdot)$ is the adversarial loss, and $c$ is the true class predicted by the network.\nThe next step is to perturb the top $s$ most salient pixels for a $s$-sparse attack by using gradient descent to create the least amount of change in the $s$ pixels to adversarially flip the label\n\n\\noindent\\textbf{Patchwise token attacks:}Instead of inspecting saliency of single pixel we check the norm of gradients of pixels belonging to non-overlapping patches using patch saliency $S(\\mathbf{x}_b) := \\sqrt{\\sum_{x_i\\in {\\bm{x}}_b} \\left |\\frac{\\partial L(f({\\mathbf{x}}, {\\mathbf{y}}))}{\\partial x_{i}}\\right|^2 }$, for all $b\\in \\{1,\\dots B\\}$. We pick top $K$ blocks according to patch saliency. The effective sparsity is thus $s=K\\cdot p$.\nThese sequence of operations are summarized in \\Algref{alg:attack}. \n\n\\begin{algorithm}[tp]\n\\small\n\\caption{Adversarial Token Attack}\n\\label{alg:attack}\n\\begin{algorithmic}[1]\n\\Require ${\\mathbf{x}}_0$:Input image, $f(.)$: Classifier, ${\\mathbf{y}}:$ Original label, $K$: Number of patches to be perturbed, $p$: Patch size.\n $i \\gets 0$\n\\State $[b_1\\dots b_K]$= Top-K of $S(\\mathbf{x}_b) = \\sqrt{\\sum_{x_i\\in {\\bm{x}}_b} \\left |\\frac{\\partial L(f({\\mathbf{x}}, {\\mathbf{y}}))}{\\partial x_{i}}\\right|^2 }$,~$\\forall b$.\n\\While $f({\\mathbf{x}}) \\neq y \\text{ OR MaxIter}$\n \\State ${\\mathbf{x}}_{b_k} = {\\mathbf{x}}_{b_k} + \\nabla_{{\\mathbf{x}}_{b_k}} L;~~\\forall~~b_k~\\in~\\{b_1,\\dots, b_K\\}$\n \\State $ {\\mathbf{x}}_{b_k} = Project_{\\epsilon_\\infty} ({\\mathbf{x}}_{b_k})$ (optional)\n\\EndWhile\n\\end{algorithmic}\n\\end{algorithm}\n\nWe use non-overlapping patches to understand the effect of manipulating salient tokens instead of arbitrarily choosing patches. In order to further test the robustness of transformers, we also propose to look at the minimum number of patches that would required to be perturbed by an attacker. For this setup, we modify \\Algref{alg:attack} by linearly searching over the range of $1$ to $K$ patches.\n\n\\begin{figure*}\n\\centering\n\\begin{tabular}{c}\n \\hspace{15pt} Original \\hspace{38pt} Adversarial (patch) \\hspace{8pt} Pertubation (patch) \\hspace{10pt} Adversarial (sparse) \\hspace{8pt} Pertubation (sparse)\\\\\n \\includegraphics[height=0.15\\linewidth]{images\/155_1.png} \\includegraphics[height=0.15\\linewidth]{images\/s155_1_deit224_distill_crop.png}\\\\\n\\end{tabular}\n\n\\caption{\\sl\\textbf{Patch and sparse attacks on transformers}: The attack images are generated with a fixed budget of $20$ patches of size $16\\times 16$, or $5120$ pixels for sparse attack on vision transformer (ViT). Note that the perturbations are imperceptible. The third and fifth columns shows the perturbations brightened $10$ times.}\n\\label{fig:vis_blocksparse}\n\\end{figure*}\n\n\\noindent\\textbf{{Mixed-norm attacks:}}\nMost approaches~\\cite{croce2019sparse, croce2020sparse} additionally rely on a mixed $\\ell_2$-norm based sparse attack in order to generate imperceptible perturbations. Motivated by this setting, we propose a mixed-norm version of our modified attack as well. In order to ensure that our block sparse attacks are imperceptible, we enforce an additional $\\ell_\\infty$ projection step post the gradient ascent step. 
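For concreteness, a minimal PyTorch re-implementation of the attack loop of \Algref{alg:attack} is sketched below. It is only an illustration of the procedure described above and not the authors' code: the cross-entropy loss, the signed gradient step, the step size and the iteration count are our own choices, and clamping to the valid pixel range is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn.functional as F

def token_attack(model, x, y, patch_size=16, K=1, eps=None,
                 step=0.05, max_iter=100):
    """Sketch of the K-token attack: select the K most salient
    non-overlapping patches by gradient norm, then run (optionally
    projected) gradient ascent on those patches only."""
    model.eval()
    p = patch_size
    x_adv = x.clone().detach()                    # (1, C, H, W)
    _, C, H, W = x_adv.shape

    # patch saliency: l2 norm of the loss gradient inside each patch
    x_adv.requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    g = grad[0].unfold(1, p, p).unfold(2, p, p)   # (C, H/p, W/p, p, p)
    saliency = g.pow(2).sum(dim=(0, 3, 4)).sqrt() # (H/p, W/p)
    topk = torch.topk(saliency.flatten(), K).indices

    # mask equal to 1 inside the K selected tokens, 0 elsewhere
    mask = torch.zeros(H // p, W // p, device=x.device)
    mask.view(-1)[topk] = 1.0
    mask = mask.repeat_interleave(p, 0).repeat_interleave(p, 1)
    mask = mask.expand(1, C, H, W)

    x_adv = x_adv.detach()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign() * mask   # ascend on tokens only
            if eps is not None:                         # optional mixed-norm step
                x_adv = x + (x_adv - x).clamp(-eps, eps)
            if (model(x_adv).argmax(1) != y).item():    # label flipped: stop
                break
        x_adv = x_adv.detach()
    return x_adv
\end{verbatim}
Calling this with \texttt{eps=None} gives the unconstrained token attack, while passing a finite \texttt{eps} yields the mixed-norm variant: the clamping line above plays the role of the $\ell_\infty$ projection just mentioned.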
This is enforced via Step 4 in Alg. \\ref{alg:attack}\n\n\n\n\n\n\n\n\n\\section{Experiments and Results}\n\\label{sec:exps}\n\n\n\n\\noindent\\textbf{Setup:} To ensure a fair comparison, we choose the best models for the Imagenet dataset~\\cite{ILSVRC15} reported in \\cite{Dosovitskiy2021AnII}, \\cite{Touvron2021TrainingDI} and \\cite{Zagoruyko2016WideRN}. The models achieve near state-of-the-art results in terms of classification accuracy. They also are all trained using the best possible hyperparameters for each case. We use these weights and the shared models from the \\texttt{Pytorch Image models}~\\cite{rw2019timm} repository.\nWe restrict our analysis to a fixed subset of $300$ randomly chosen images from the Imagenet validation dataset.\n\n\\begin{comment}\n\\begin{table}[htp]\n \\centering\n \\caption{\\sl \\textbf{Clean Accuracies for the sub-sampled ImageNet dataset.}}\n \\begin{tabular}{c c}\n \\toprule \\\\\n \\textbf{Model} & \\textbf{Clean Accuracy} \\\\\n \\midrule\n ViT 224 & 88.70\\\\\n ViT 384 & 90.03\\\\\n DeIT & 85.71\\\\\n DeIT (Distilled) & 87.70 \\\\\n Wide Resnet & 87.04 \\\\\n \\bottomrule \n \\end{tabular}\n \n \\label{t:clean}\n\\end{table}\n\\end{comment}\n\n\\noindent\\textbf{Models:} In order to compare the robustness of transformer models to standard CNNs, we consider three different families of architectures:(1) Vision Transformer (ViT)~\\cite{Dosovitskiy2021AnII}, Distilled Vision Transformers (DeIT)~\\cite{Touvron2021TrainingDI}, (2) Resnets~\\cite{He2016DeepRL, Zagoruyko2016WideRN} and (3) MLP Mixer \\cite{Tolstikhin2021MLPMixerAA}.\nFor transformers, \\cite{Dosovitskiy2021AnII} show that best performing Imagenet models have a fixed input token size of $16\\times16$.\nIn order to ensure that the attacks are equivalent, we ensure that any norm or patch budgets are appropriately scaled as per the pre-processing used \\footnote{In case of varying image sizes due to pre-processing, we calculate the scaling factor in terms of the number of pixels and appropriately increase or decrease the maximum number of patches.}. We also scale the $\\epsilon$-norm budget for mixed norm attacks to eight gray levels of the input image post normalization. Additionally, we do a hyper parameter search to find the best attacks for each model analysed. Specific details can be found in the Appendix\\footnote{\\url{https:\/\/github.com\/NYU-DICE-Lab\/TokenAttacks_Supplementary.git}}.\n\n\\begin{figure*}[htp]\n \\centering\n \\begin{tabular}{c c }\n \\includegraphics[width=0.39\\linewidth]{plots\/varypatches_16.tex} &\n \\includegraphics[width=0.5\\linewidth]{plots\/varyingtokens.tex} \n \\end{tabular}\n \\caption{(a) \\small \\sl \\textbf{Robustness to Token Attacks with varying budgets ($p=16$).} Vision transformers are less robust than MLP Mixer and ResNets against patch attacks with patch size matching token size of transformer architecture, (b) \\textbf{Token attacks with varying patch sizes.$K=5$} When the attack patch size is smaller than token size of architecture, vision transformers are comparably robust against patch attacks, to MLP and ResNets. 
Detailed results can be found in the Appendix }\n \\label{fig:results}\n\\end{figure*}\n\n\n\n\n\\noindent\\textbf{Patch attacks:} We allow the attacker a fixed budget of tokens as per Algorithm \\ref{alg:attack}.\nWe use the robust accuracy as the metric of robustness, where a higher value is better.\nWe start with an attack budget of $1$ token for an image size of $224\\times224$ for the attacker where each token is a patch of the size $16\\times 16$. In order to compensate for the differences in the size of the input, we scale the attack budget for ViT-384 by allowing for more patches ($3$ to be precise) to be perturbed. However, we do not enforce any imperceptibility constraints. We run the attack on the fixed subset of ImageNet for the network architectures defined above. \\Figref{fig:results}(a) shows the result of our analysis. Notice that Transformer architectures are more vulnerable to token attacks as compared to ResNets and MLP-Mixer. Further, ViT-384 proves to be the most vulnerable, and ResNet-101 is the most robust model. DeiT which uses a teacher-student network is more robust than ViTs. We therefore conclude that distillation improves robustness to single token attacks. \n\n\\noindent\\textbf{\\textit{Varying the Token budget:}} For this experiment, we start with a block-budget of $1$ patch, and iterate upto $40$ patches to find the minimum number of tokens required to break an image. We then measure the robust accuracy for each constraint and for each model. For this case, we only study attacks for a fixed patch (token) size of $16\\times16$ and represent our findings in \\Figref{fig:results}(a). We clearly observe a difference in the behavior of ViT versus ResNets here. In general, for a given token budget, ResNets outperform all other token based models. In addition, the robust accuracies for Transformers fall to zero for as few as \\emph{two} patches. The advantage offered by distillation for single token attacks is also lost once the token budget increases. \n\n\\noindent\\textbf{\\textit{Varying patch sizes:}}In order to further analyse if these results hold across stronger and weaker block sparse constraints, we further run attacks for varying patch sizes. Smaller patch sizes are equivalent to partial token manipulation. We fix the token budget to be $5$ or $15$ tokens as dictated by the input size. Here, this corresponds to allowing the attacker to perturb $5$ $p\\times p$ patches. As one would expect, a smaller partial token attack is weaker than a full token attack. Surprisingly, the Transformer networks are comparable or better than ResNets for attacks smaller than a single token. This leads us to conclude that Transformers can compensate for adversarial perturbations within a tokens. However, as the patch size approaches the token size, Resnets achieve better robustness. We also see that MLP-Mixers, while also using the token based input scheme, perform better than Transformers as the patch attack size increases.\n\nHowever, this approach allows for unrestricted changes to the tokens. Another approach would be to study the effect of ``mixed norm'' attacks which further constrain the patches to be \\emph{imperceptibly} perturbed.\n\n\n\n\n\n\n\\noindent\\textit{\\textbf{Mixed Norm Attacks:}} For the mixed norm attacks, we analyse the robustness of all networks for a fixed $\\epsilon$ $\\ell_\\infty$ budget of one gray level. We vary the token budgets from $1$ to $5$. 
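Since the attacks operate on standardized inputs, a budget quoted in gray levels has to be divided by the per-channel normalization scale before it can be used as $\epsilon$. A minimal sketch of this conversion is given below; the ImageNet statistics are only an assumption on our part, and some checkpoints (e.g. the original ViT weights) normalize with a standard deviation of $0.5$ per channel instead.
\begin{verbatim}
# Convert a budget of k gray levels (out of 255) into the l_inf radius
# seen by a model whose inputs are standardized channel-wise.
IMAGENET_STD = (0.229, 0.224, 0.225)   # assumed preprocessing; some ViT
                                       # checkpoints use (0.5, 0.5, 0.5)

def eps_after_normalization(k_gray_levels, std=IMAGENET_STD):
    # a step of k/255 in raw pixel space becomes k/(255*std_c) in channel c
    return [k_gray_levels / (255.0 * s) for s in std]

print(eps_after_normalization(1))   # roughly [0.0171, 0.0175, 0.0174]
\end{verbatim}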
Here, almost all the networks show similar robustness for a small token budget ($K$=1,2); refer \\Tableref{t:mixednorm}. However, as the token budget increases, Transformer and MLP Mixer networks are far more vulnerable. Note that this behavior contradicts~\\cite{Bhojanapalli2021UnderstandingRO}, where ViTs outperform ResNets. Since our threat model leverages the token based architecture of the Transformers, our attacks are far more successful at breaking ViTs over Resnets.\n\\begin{small}\n\\begin{table}[h]\n\\centering\n\\begin{minipage}{0.48\\linewidth}\n\\caption{\\small \\textbf{\\sl Robust Accuracy for Mixed Norm Attacks:} The models are attacked with a $K,(1\/255)$ Patch Attack. Note that for smaller token budgets, the models perform nearly the same. However, as the token budget increases, Resnets are more robust than Transformers.}\n\\label{t:mixednorm}\n\\begin{tabular}{c c c c c} \n \\toprule\n \\textbf{Model} & \\textbf{Clean} & \\multicolumn{3}{c}{\\textbf{Token Budget}} \\\\\n \\midrule \n {} & & 1 & 2 & 5 \\\\\n \\midrule\n ViT-224 & 88.70 & 68.77 & 50.83 & 15.28 \\\\\n ViT-384 & \\textbf{90.03} & \\textit{53.48} & \\textit{28.57} & \\textit{4.98} \\\\ \n \\midrule\n DeIT & 85.71 & \\textbf{72.42} & 46.84 & 6.31 \\\\\n DeIT-Distilled & 87.70 & 68.77 & 54.15 & 16.61 \\\\\n \\midrule\n Resnet-101 & 85.71 & 69.10 & 55.14 & \\textbf{32.89} \\\\\n Resnet-50 & 85.38 & 67.44 & \\textbf{55.81} & 31.22 \\\\\n Wide Resnet & 87.04 & 54.81 & 32.89 & 11.62 \\\\\n \\midrule\n MLP-Mixer & \\textit{83.78} & 63.78 & 37.87 & 5.98 \\\\\n \\bottomrule \n\\end{tabular}\n\\end{minipage}\n\\begin{minipage}{0.48\\linewidth}\n \\centering\n \\small\n \\caption{\\sl \\textbf{Robust accuracies, $s=256$ sparse and $K=1$, $16\\times16$ patch attack .} }\n \\begin{tabular}{c c c c}\n \\toprule\n \\textbf{Model} & & \\multicolumn{2}{c}{\\textbf{Norm constraint}} \\\\\n \\midrule \n {} & Clean & Sparse & Patch\\\\\n \\midrule \n ViT 224 & 88.70 & 5.98 & 13.62 \\\\\n ViT 384 & \\textbf{90.03} & 3.32 & \\textit{1.33}\\\\\n \\midrule\n DeIT & 85.71 & 4.65 & 17.27 \\\\\n DeIT (Distilled) & 87.70& 14.95 & 17.94\\\\\n \\midrule\n MLP Mixer & \\textit{83.72} & 5.98 & 26.91 \\\\\n \\midrule\n ResNet 50 & 85.38 & 13.95 & 19.90 \\\\\n ResNet 101 & 85.71 & \\textbf{23.59} & \\textbf{49.50} \\\\\n Wide Resnet & 87.04 & \\textit{1.33} & 26.57 \\\\\n \\bottomrule \\\\\n \\end{tabular} \n \\label{tab:sparse}\n\\end{minipage}\n\\end{table}\n\\end{small}\n\n\\noindent\\textbf{Sparse Attacks:} The sparse variant of our algorithm restricts the patch size to $1\\times 1$. We allow for a sparsity budget of $0.5\\%$ of original number of pixels. In case of the standard $224\\times 224$ ImageNet image, the attacker is allowed to perturb $256$ pixels. We compare the attack success rate of both sparse attack and patch-based token attack at same sparsity budget; to compare we chose $1, 16\\times 16$ patch attack (refer \\Tableref{tab:sparse}).\nWe see that as is the case with token attacks, even for sparse attacks, vision transformers are less robust as compared to ResNets. With the same sparsity budget, sparse attacks are stronger than token attacks; however we stress that sparse threat model is less practical to implement as the sparse coefficients may be scattered anywhere in the image. \n\n\n\\section{Discussion and Conclusion}\nAnalysing the above results, we infer certain interesting properties of transformers. 
\n\\begin{enumerate}[nolistsep, parsep=0pt]\n \\item We find that Transformers are generally susceptible to token attacks, even for very low token budgets.\n \\item However, Transformers appear to compensate for perturbations to patch attacks smaller than the token size. \n \\item Further, ResNets and MLP-Mixer outperform Transformers for token attacks consistently.\n\\end{enumerate}\n\nAn interesting direction of follow-up work is to develop strong certifiable defenses for token attacks. Further directions of research also include analysis of the effect of distillation and semi-supervised pre-training.\n \n\\section*{Acknowledgements}\nThe authors were supported in part by the National Science Foundation under grants CCF-2005804 and CCF-1815101, USDA\/NIFA under grant USDA-NIFA:2021-67021-35329, and ARPA-E under grant DE:AR0001215.\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEbib}\n\\begin{small}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this paper, we investigate the Liouville property of nonnegative solutions to the following Dirichlet problems for elliptic equations in exterior domains\n\\begin{equation}\\label{GPDE}\\\\\\begin{cases}\n(-\\Delta)^{\\frac{\\alpha}{2}}u(x)=f(x,u(x)), \\,\\,\\,\\,\\,\\,\\, u(x)\\geq0, \\,\\,\\,\\,\\,\\,\\,\\, x\\in\\Omega_{r}, \\\\\nu(x)\\equiv0,\\,\\,\\,\\,\\,\\,\\,\\, x\\in\\mathbb{R}^{n}\\setminus\\Omega_{r},\n\\end{cases}\\end{equation}\nwhere the exterior domains $\\Omega_{r}:=\\{x\\in\\mathbb{R}^{n}\\,|\\,|x|>r\\}$ with arbitrary $r>0$, $n\\geq2$, $0<\\alpha\\leq2$ and the nonlinear terms $f:\\, \\Omega_{r}\\times\\overline{\\mathbb{R}_{+}}\\rightarrow \\overline{\\mathbb{R}_{+}}$. When $0<\\alpha<2$, the nonlocal fractional Laplacians $(-\\Delta)^{\\frac{\\alpha}{2}}$ is defined by\n\\begin{equation}\\label{nonlocal defn}\n (-\\Delta)^{\\frac{\\alpha}{2}}u(x)=C_{\\alpha,n} \\, P.V.\\int_{\\mathbb{R}^n}\\frac{u(x)-u(y)}{|x-y|^{n+\\alpha}}dy:=C_{\\alpha,n}\\lim_{\\epsilon\\rightarrow0}\\int_{|y-x|\\geq\\epsilon}\\frac{u(x)-u(y)}{|x-y|^{n+\\alpha}}dy\n\\end{equation}\nfor functions $u\\in C^{1,1}_{loc}\\cap\\mathcal{L}_{\\alpha}(\\mathbb{R}^{n})$, where the constant $C_{\\alpha,n}=\\big(\\int_{\\mathbb{R}^{n}}\\frac{1-\\cos(2\\pi\\zeta_{1})}{|\\zeta|^{n+\\alpha}}d\\zeta\\big)^{-1}$ and the function spaces\n\\begin{equation}\\label{0-1}\n \\mathcal{L}_{\\alpha}(\\mathbb{R}^{n}):=\\Big\\{u: \\mathbb{R}^{n}\\rightarrow\\mathbb{R}\\,\\Big|\\,\\int_{\\mathbb{R}^{n}}\\frac{|u(x)|}{1+|x|^{n+\\alpha}}dx<\\infty\\Big\\}.\n\\end{equation}\nFor $0<\\alpha<2$, we assume the solution $u\\in C_{loc}^{1,1}(\\Omega_{r})\\cap C(\\overline{\\Omega_{r}})\\cap\\mathcal{L}_{\\alpha}(\\mathbb{R}^{n})$. For $\\alpha=2$, we assume the solution $u\\in C^{2}(\\Omega_{r})\\cap C(\\overline{\\Omega_{r}})$.\n\nWe say equations \\eqref{GPDE} have critical order if $\\alpha=n$ and non-critical order if $0<\\alpha0$, $\\sigma>1$, $-\\alpha<\\tau<+\\infty$ and $00$, $a=0$, $a<0$, respectively. These equations have numerous important applications in conformal geometry and Sobolev inequalities. 
In particular, in the case $a=0$, \\eqref{GPDE} becomes the well-known Lane-Emden equation, which models many phenomena in mathematical physics and in astrophysics.\n\nThe nonlinear terms in \\eqref{PDE} is called critical if $p=p_{s}(a):=\\frac{n+\\alpha+2a}{n-\\alpha}$ ($:=+\\infty$ if $n=\\alpha$), subcritical if $0-2$ and $11$ such that $f(x,u)$ satisfies the lower bound \\eqref{e3} in $(\\mathcal{C}\\cap\\Omega_{2\\sigma r})\\times\\overline{\\mathbb{R}_{+}}$. This allows us to have much more admissible choices of the nonlinearities $f(x,u)$ (see Remark \\ref{rem0} and \\ref{rem1}). Second, Theorem \\ref{Thm1} can also be applied to general fractional order cases $0<\\alpha<2$ with $n\\geq2$ and the critical order cases $\\alpha=n=2$. Third, in assumption $(\\mathbf{f_{2}})$, we only assume $(|x|-r)^{\\theta}f(x,u)$ (not $f(x,u)$ itself) is locally Lipschitz on $u$, this allows $f(x,u)$ to have some singularities near the sphere $S_{r}:=\\{x\\in\\mathbb{R}^{n}\\,|\\,|x|=r\\}$.\n\\end{rem}\n\nIn particular, we consider the following Dirichlet problems for the H\\'{e}non-Hardy type equations in exterior domains\n\\begin{equation}\\label{HPDE}\\\\\\begin{cases}\n(-\\Delta)^{\\frac{\\alpha}{2}}u(x)=|x|^{a}(|x|-r)^{b}u^{p}, \\,\\,\\,\\,\\,\\,\\, u(x)\\geq0, \\,\\,\\,\\,\\,\\,\\,\\, x\\in\\Omega_{r}, \\\\\nu(x)\\equiv0,\\,\\,\\,\\,\\,\\,\\,\\, x\\in\\mathbb{R}^{n}\\setminus\\Omega_{r},\n\\end{cases}\\end{equation}\nwhere $n\\geq2$, $0<\\alpha\\leq2$.\n\nAs a consequence of Theorem \\ref{Thm1} and Remark \\ref{rem0}, we deduce the following corollary.\n\\begin{cor}\\label{Cor1}\nAssume $0\\leq b<+\\infty$, $-b-\\alpha0$ in $\\Omega_{1}$. Next, we will carry out our proof by discussing the non-critical order cases and the critical order case separately.\n\n\\subsection{The non-critical order cases $0<\\alpha\\alpha$, $0<\\alpha\\leq2$, $f(x,u)$ is subcritical and satisfies assumptions $(\\mathbf{f_{1}})$, $(\\mathbf{f_{2}})$ and $(\\mathbf{f_{3}})$. Suppose $u$ is a positive solution to integral equations \\eqref{IEe}, then it satisfies the following lower bound estimates: for all $|x|\\geq2\\sigma$,\n\\begin{equation}\\label{lb1-e}\n u(x)\\geq C_{\\kappa}|x|^{\\kappa} \\quad\\quad \\forall \\, \\kappa<\\frac{\\alpha+\\tau}{1-p}, \\quad\\quad \\text{if} \\,\\,\\,\\, 01$, we define the Kelvin transform of $u$ centered at $0$ by\n\\begin{equation}\\label{Kelvin-e}\n u_{\\lambda}(x):=\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}u\\left(\\frac{\\lambda^{2}x}{|x|^{2}}\\right)\n\\end{equation}\nfor arbitrary $x\\in\\{x\\in\\overline{\\Omega_{1}}\\,|\\,1\\leq|x|\\leq\\lambda^{2}\\}$, and define the reflection of $x$ about the sphere $S_{\\lambda}:=\\{x\\in\\mathbb{R}^{n}\\,|\\,|x|=\\lambda\\}$ by $x^{\\lambda}:=\\frac{\\lambda^{2}x}{|x|^{2}}$.\n\nNow, we will carry out the process of scaling spheres in $\\Omega_{1}$ with respect to the origin $0\\in\\mathbb{R}^{n}$.\n\nLet $\\lambda>1$ be an arbitrary real number and let $\\omega^{\\lambda}(x):=u_{\\lambda}(x)-u(x)$ for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$. 
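As an elementary side remark (ours, for orientation only): the Kelvin transform agrees with $u$ on the inversion sphere itself, since $|x|=\lambda$ gives $x^{\lambda}=x$ and hence
\begin{equation*}
u_{\lambda}(x)=\Big(\frac{\lambda}{|x|}\Big)^{n-\alpha}u\Big(\frac{\lambda^{2}x}{|x|^{2}}\Big)=u(x) \quad\quad \text{for} \,\, |x|=\lambda,
\end{equation*}
so the comparison below is only nontrivial in the open annulus $B_{\lambda^{2}}(0)\setminus\overline{B_{\lambda}(0)}$.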
We will first show that, for $\\lambda>1$ sufficiently close to $1$,\n\\begin{equation}\\label{2-7-e}\n \\omega^{\\lambda}(x)\\leq0, \\,\\,\\,\\,\\,\\, \\forall \\,\\, x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}.\n\\end{equation}\nThen, we start dilating the sphere $S_{\\lambda}$ from near the unit sphere $S_{1}$ outward as long as \\eqref{2-7-e} holds, until its limiting position $\\lambda=+\\infty$ and derive lower bound estimates on asymptotic behaviour of $u$ as $|x|\\rightarrow+\\infty$. Therefore, the scaling sphere process can be divided into two steps.\n\n\\emph{Step 1. Start dilating the sphere $S_{\\lambda}$ from near $\\lambda=1$.} Define\n\\begin{equation}\\label{2-8-e}\n (B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}:=\\{x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)} \\, | \\, \\omega^{\\lambda}(x)>0\\}.\n\\end{equation}\nWe will show that, for $\\lambda>1$ sufficiently close to $1$,\n\\begin{equation}\\label{2-9-e}\n (B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset.\n\\end{equation}\n\nSince $u$ is a positive solution to the integral equations \\eqref{IEe}, through direct calculations, we get, for any $\\lambda>1$,\n\\begin{equation}\\label{2-34-e}\n u(x)=\\int_{|y|>\\lambda}G_{\\alpha}(x,y)f(y,u(y))dy+\\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}G_{\\alpha}(x,y^{\\lambda})\n \\left(\\frac{\\lambda}{|y|}\\right)^{2n}f(y^{\\lambda},u(y^{\\lambda}))dy\n\\end{equation}\nfor any $x\\in\\overline{\\Omega_{1}}$. By direct calculations, one can also verify that $u_{\\lambda}$ satisfies the following integral equation\n\\begin{equation}\\label{2-35-e}\n u_{\\lambda}(x)=\\int_{|y|>1}G_{\\alpha}(x^{\\lambda},y)\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}f(y,u(y))dy\n\\end{equation}\nfor any $x\\in\\{x\\in\\overline{\\Omega_{1}}\\,|\\,1\\leq|x|\\leq\\lambda^{2}\\}$, and hence, it follows immediately that\n\\begin{eqnarray}\\label{2-36-e}\n u_{\\lambda}(x)&=&\\int_{|y|>\\lambda}G_{\\alpha}(x^{\\lambda},y)\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}f(y,u(y))dy \\\\\n \\nonumber \\quad\\quad &&+\\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}G_{\\alpha}(x^{\\lambda},y^{\\lambda})\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}\n \\left(\\frac{\\lambda}{|y|}\\right)^{2n}f(y^{\\lambda},u(y^{\\lambda}))dy.\n\\end{eqnarray}\nTherefore, we have, for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{1}(0)}$,\n\\begin{eqnarray}\\label{omega-e}\n && \\omega_{\\lambda}(x)=u_{\\lambda}(x)-u(x) \\\\\n \\nonumber &=& \\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}\\Bigg\\{\\left[\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y^{\\lambda})\n -G_{\\alpha}(x,y^{\\lambda})\\right]\\left(\\frac{\\lambda}{|y|}\\right)^{2n}f(y^{\\lambda},u(y^{\\lambda})) \\\\\n \\nonumber && -\\left[G_{\\alpha}(x,y)-\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y)\\right]f(y,u(y))\\Bigg\\}dy \\\\\n \\nonumber && +\\int_{|y|>\\lambda^{2}}\\left[\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y)-G_{\\alpha}(x,y)\\right]f(y,u(y))dy.\n\\end{eqnarray}\n\nNow we need the following Lemma on properties of the Green's function $G_{\\alpha}(x,y)$.\n\\begin{lem}\\label{G-e}\nThe Green's function $G_{\\alpha}(x,y)$ satisfies the following point-wise estimates:\n\\begin{flalign}\n\\nonumber &\\text{$(i)\\,\\, 0\\leq G_{\\alpha}(x,y)\\leq\\frac{C'}{|x-y|^{n-\\alpha}}, \\quad\\quad \\forall \\,\\, x,y\\in\\mathbb{R}^{n};$}& 
\\\\\n\\nonumber &\\text{$(ii) \\,\\, G_{\\alpha}(x,y)\\geq\\frac{C''}{|x-y|^{n-\\alpha}}, \\quad\\quad \\forall \\,\\, |x|,|y|\\geq2;$}& \\\\\n\\nonumber &\\text{$(iii) \\,\\, \\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y)-G_{\\alpha}(x,y)\\leq0, \\quad\\quad \\forall \\,\\, \\lambda<|x|<\\lambda^{2}, \\,\\, \\lambda<|y|<+\\infty;$}& \\\\\n\\nonumber &\\text{$(iv) \\,\\, \\left(\\frac{\\lambda^{2}}{|x|\\cdot|y|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y^{\\lambda})-\\left(\\frac{\\lambda}{|y|}\\right)^{n-\\alpha}\n G_{\\alpha}(x,y^{\\lambda})\\leq G_{\\alpha}(x,y)-\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y),$}& \\\\\n\\nonumber &\\text{$\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\quad \\forall \\,\\, \\lambda<|x|,|y|<\\lambda^{2}.$}&\n\\end{flalign}\n\\end{lem}\n\nLemma \\ref{G-e} can be proved by direct calculations, so we omit the details here.\n\nFrom Lemma \\ref{G-e} and the integral equations \\eqref{omega-e}, one can derive that, for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$,\n\\begin{eqnarray}\\label{2-37-e}\n &&\\omega^{\\lambda}(x)=u_{\\lambda}(x)-u(x) \\\\\n \\nonumber &\\leq&\\int_{\\lambda<|y|<\\lambda^{2}}\\Bigg[G_{\\alpha}(x,y)-\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y)\\Bigg] \\left[\\left(\\frac{\\lambda}{|y|}\\right)^{n+\\alpha}f(y^{\\lambda},u(y^{\\lambda}))-f(y,u(y))\\right]dy\\\\\n\\nonumber &<&\\int_{\\lambda<|y|<\\lambda^{2}}\\Bigg(G_{\\alpha}(x,y)-\\left(\\frac{\\lambda}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda},y)\\Bigg) \\left[f(y,u_{\\lambda}(y))-f(y,u(y))\\right]dy\\\\\n\\nonumber &\\leq&C\\int_{\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}}\\frac{1}{|x-y|^{n-\\alpha}}\\left[f(y,u_{\\lambda}(y))-f(y,u(y))\\right]dy\\\\\n\\nonumber &=&C\\int_{\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}}\\frac{1}{|x-y|^{n-\\alpha}}\\cdot\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\n\\omega^{\\lambda}(y)dy,\n\\end{eqnarray}\nwhere we have used the subcritical condition on $f(x,u)$ for $\\mu=\\left(\\frac{\\lambda}{|y|}\\right)^{n-\\alpha}<1$ to derive the second inequality and the assumption $(\\mathbf{f_{1}})$ on $f(x,u)$ to derive the third inequality.\n\nBy Hardy-Littlewood-Sobolev inequality and \\eqref{2-37-e}, we have, for any $\\frac{n}{n-\\alpha}0$ small enough, such that\n\\begin{equation}\\label{3-15-e}\n C\\left\\|\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\\right\\|_{L^{\\frac{n}{\\alpha}}((B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+})}\\leq\\frac{1}{2}\n\\end{equation}\nfor all $1<\\lambda\\leq1+\\epsilon_{0}$, and hence \\eqref{3-14-e} implies\n\\begin{equation}\\label{3-16-e}\n \\|\\omega^{\\lambda}\\|_{L^{q}((B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+})}=0,\n\\end{equation}\nwhich means $(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset$. Therefore, we have proved for all $1<\\lambda\\leq1+\\epsilon_{0}$, $(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset$, that is,\n\\begin{equation}\\label{3-17-e}\n \\omega^{\\lambda}(x)\\leq0, \\,\\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}.\n\\end{equation}\nThis completes Step 1.\n\n\\emph{Step 2. 
Dilate the sphere $S_{\\lambda}$ outward until $\\lambda=+\\infty$ to derive lower bound estimates on asymptotic behaviour of $u$ as $|x|\\rightarrow+\\infty$.} Step 1 provides us a start point to dilate the sphere $S_{\\lambda}$ from near $\\lambda=1$. Now we dilate the sphere $S_{\\lambda}$ outward as long as \\eqref{2-7-e} holds. Let\n\\begin{equation}\\label{2-29-e}\n \\lambda_{0}:=\\sup\\{1<\\lambda<+\\infty\\,|\\, \\omega^{\\mu}\\leq0 \\,\\, in \\,\\, B_{\\mu^{2}}(0)\\setminus\\overline{B_{\\mu}(0)}, \\,\\, \\forall \\, 1<\\mu\\leq\\lambda\\}\\in(1,+\\infty],\n\\end{equation}\nand hence, one has\n\\begin{equation}\\label{2-30-e}\n \\omega^{\\lambda_{0}}(x)\\leq0, \\quad\\quad \\forall \\,\\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}.\n\\end{equation}\nIn what follows, we will prove $\\lambda_{0}=+\\infty$ by contradiction arguments.\n\nSuppose on contrary that $1<\\lambda_{0}<+\\infty$. In order to get a contradiction, we will first prove\n\\begin{equation}\\label{2-31-e}\n \\omega^{\\lambda_{0}}(x)\\equiv0, \\,\\,\\,\\,\\,\\,\\forall \\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}\n\\end{equation}\nby using contradiction arguments.\n\nSuppose on contrary that \\eqref{2-31-e} does not hold, that is, $\\omega^{\\lambda_{0}}\\leq0$ but $\\omega^{\\lambda_{0}}$ is not identically zero in $B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$, then there exists a $x^{0}\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$ such that $\\omega^{\\lambda_{0}}(x^{0})<0$. We will obtain a contradiction with \\eqref{2-29-e} via showing that the sphere $S_{\\lambda}$ can be dilated outward a little bit further, more precisely, there exists a $\\varepsilon>0$ small enough such that $\\omega^{\\lambda}\\leq0$ in $B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$ for all $\\lambda\\in[\\lambda_{0},\\lambda_{0}+\\varepsilon]$.\n\nFor that purpose, we will first show that\n\\begin{equation}\\label{2-32-e}\n \\omega^{\\lambda_{0}}(x)<0, \\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}.\n\\end{equation}\nIndeed, since we have assumed there exists a point $x^{0}\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$ such that $\\omega^{\\lambda_{0}}(x^{0})<0$, by continuity, there exists a small $\\delta>0$ and a constant $c_{0}>0$ such that\n\\begin{equation}\\label{2-33-e}\nB_{\\delta}(x^{0})\\subset B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)} \\,\\,\\,\\,\\,\\, \\text{and} \\,\\,\\,\\,\\,\\,\n\\omega^{\\lambda_{0}}(x)\\leq -c_{0}<0, \\,\\,\\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\delta}(x^{0}).\n\\end{equation}\nSince $f(x,u)$ is subcritical and satisfies the assumption $(\\mathbf{f_{1}})$, one can derive from \\eqref{2-33-e}, Lemma \\ref{G-e} and \\eqref{2-37-e} that, for any $x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$,\n\\begin{eqnarray}\\label{9-37-e}\n &&\\omega^{\\lambda_{0}}(x)=u_{\\lambda_{0}}(x)-u(x) \\\\\n \\nonumber &\\leq&\\int_{\\lambda_{0}<|y|<\\lambda_{0}^{2}}\\Bigg[G_{\\alpha}(x,y)-\\left(\\frac{\\lambda_{0}}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda_{0}},y)\\Bigg] \\left[\\left(\\frac{\\lambda_{0}}{|y|}\\right)^{n+\\alpha}f(y^{\\lambda_{0}},u(y^{\\lambda_{0}}))-f(y,u(y))\\right]dy\\\\\n\\nonumber &<&\\int_{B_{\\delta}(x_{0})}\\Bigg(G_{\\alpha}(x,y)-\\left(\\frac{\\lambda_{0}}{|x|}\\right)^{n-\\alpha}G_{\\alpha}(x^{\\lambda_{0}},y)\\Bigg) 
\\left[f(y,u_{\\lambda_{0}}(y))-f(y,u(y))\\right]dy\\leq0,\n\\end{eqnarray}\nthus we arrive at \\eqref{2-32-e}.\n\nNow, we choose a $0r-l\\right\\}\n\\end{equation}\nfor $r>0$ and $00.\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nGiven any $\\lambda>1$, we define the Kelvin transform of $u$ centered at $0$ by\n\\begin{equation}\\label{Kelvin-ec}\n u_{\\lambda}(x):=u\\left(\\frac{\\lambda^{2}x}{|x|^{2}}\\right)\n\\end{equation}\nfor arbitrary $x\\in\\{x\\in\\overline{\\Omega_{1}}\\,|\\,1\\leq|x|\\leq\\lambda^{2}\\}$.\n\nNow, we will carry out the process of scaling spheres in $\\Omega_{1}$ with respect to the origin $0\\in\\mathbb{R}^{n}$.\n\nLet $\\lambda>1$ be an arbitrary real number and let $\\omega^{\\lambda}(x):=u_{\\lambda}(x)-u(x)$ for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$. We will first show that, for $\\lambda>1$ sufficiently close to $1$,\n\\begin{equation}\\label{2-7-ec}\n \\omega^{\\lambda}(x)\\leq0, \\,\\,\\,\\,\\,\\, \\forall \\,\\, x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}.\n\\end{equation}\nThen, we start dilating the circle $S_{\\lambda}:=\\{x\\in\\mathbb{R}^{2}\\,|\\,|x|=\\lambda\\}$ from near the unit circle $S_{1}$ outward as long as \\eqref{2-7-ec} holds, until its limiting position $\\lambda=+\\infty$ and derive lower bound estimates of $u$ for $|x|$ large. Therefore, the scaling sphere process can be divided into two steps.\n\n\\emph{Step 1. Start dilating the circle $S_{\\lambda}$ from near $\\lambda=1$.} Define\n\\begin{equation}\\label{2-8-ec}\n (B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}:=\\{x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)} \\, | \\, \\omega^{\\lambda}(x)>0\\}.\n\\end{equation}\nWe will show that, for $\\lambda>1$ sufficiently close to $1$,\n\\begin{equation}\\label{2-9-ec}\n (B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset.\n\\end{equation}\n\nSince $u$ is a positive solution to integral equations \\eqref{IEe}, through direct calculations, we get, for any $\\lambda>1$,\n\\begin{equation}\\label{2-34-ec}\n u(x)=\\int_{|y|>\\lambda}G_{2}(x,y)f(y,u(y))dy+\\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}G_{2}(x,y^{\\lambda})\n \\left(\\frac{\\lambda}{|y|}\\right)^{4}f(y^{\\lambda},u_{\\lambda}(y))dy\n\\end{equation}\nfor any $x\\in\\overline{\\Omega_{1}}$. 
By direct calculations, one can also verify that $u_{\\lambda}$ satisfies the following integral equation\n\\begin{equation}\\label{2-35-ec}\n u_{\\lambda}(x)=\\int_{|y|>1}G_{2}(x^{\\lambda},y)f(y,u(y))dy\n\\end{equation}\nfor any $x\\in\\{x\\in\\overline{\\Omega_{1}}\\,|\\,1\\leq|x|\\leq\\lambda^{2}\\}$, and hence, it follows immediately that\n\\begin{eqnarray}\\label{2-36-ec}\n u_{\\lambda}(x)&=&\\int_{|y|>\\lambda}G_{2}(x^{\\lambda},y)f(y,u(y))dy \\\\\n \\nonumber \\quad\\quad &&+\\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}G_{2}(x^{\\lambda},y^{\\lambda})\n \\left(\\frac{\\lambda}{|y|}\\right)^{4}f(y^{\\lambda},u_{\\lambda}(y))dy.\n\\end{eqnarray}\nTherefore, we have, for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{1}(0)}$,\n\\begin{eqnarray}\\label{omega-ec}\n && \\omega^{\\lambda}(x)=u_{\\lambda}(x)-u(x)\\\\\n \\nonumber &=&\\int_{B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}}\\bigg\\{\\left[G_{2}(x^{\\lambda},y^{\\lambda})\n -G_{2}(x,y^{\\lambda})\\right]\\left(\\frac{\\lambda}{|y|}\\right)^{4}f(y^{\\lambda},u_{\\lambda}(y)) \\\\\n \\nonumber && -\\left[G_{2}(x,y)-G_{2}(x^{\\lambda},y)\\right]f(y,u(y))\\bigg\\}dy+\\int_{|y|>\\lambda^{2}}\\left[G_{2}(x^{\\lambda},y)-G_{2}(x,y)\\right]f(y,u(y))dy.\n\\end{eqnarray}\n\nNow we need some basic properties about the Green's function $G_{2}(x,y)$. From \\eqref{GREENe-0c}, one can obtain that for any $x,\\,y \\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{1}(0)}$, $x\\neq y$,\n\\begin{equation}\\label{GP-ec}\nG_{2}(x,y)=C\\ln\\left[1+\\frac{(|x|^{2}-1)(|y|^{2}-1)}{|x-y|^{2}}\\right]\\leq C \\ln{\\left(1+\\frac{\\lambda^4}{|x-y|^2}\\right)}.\n\\end{equation}\nIt is well known that\n\\begin{equation}\\label{wk-ec}\n\\ln{(1+t)}=o(t^\\varepsilon), \\,\\,\\quad \\text{as}\\,\\, t\\rightarrow +\\infty,\n\\end{equation}\nwhere $\\varepsilon$ is an arbitrary positive real number. 
This implies, for any given $\\varepsilon>0$, there exists a $\\delta(\\varepsilon)>0$ such that\n\\begin{equation}\\label{ln-ec}\n\\ln{(1+t)}\\leq t^\\varepsilon, \\,\\,\\qquad \\forall \\, t>\\frac{1}{{\\delta(\\varepsilon)}^2}.\n\\end{equation}\n\nTherefore, by \\eqref{GP-ec}, \\eqref{ln-ec} and straightforward calculations, we have the following Lemma on properties of the Green's function $G_{2}(x,y)$.\n\\begin{lem}\\label{G-ec}\nThe Green's function $G_{2}(x,y)$ satisfies the following point-wise estimates:\n\\begin{flalign}\n\\nonumber &\\text{$(i)\\,\\, G_{2}(x,y)\\leq C\\lambda^{4\\varepsilon} \\frac{1}{|x-y|^{2\\varepsilon}}, \\,\\,\\qquad \\forall\\,\\,\\,1<|x|,\\,|y|<\\lambda^{2}, \\, |x-y|<\\lambda^{2}\\delta(\\varepsilon);$}& \\\\\n\\nonumber &\\text{$(ii) \\,\\, G_{2}(x,y)\\leq C \\ln\\left(1+\\frac{1}{{\\delta(\\varepsilon)}^2}\\right), \\,\\,\\qquad \\forall\\,\\,\\,1<|x|,\\,|y|<\\lambda^{2}, \\,\n|x-y|\\geq\\lambda^{2}\\delta(\\varepsilon);$}& \\\\\n\\nonumber &\\text{$(iii) \\,\\, G_{2}(x,y)\\geq C>0, \\,\\,\\qquad \\forall\\,\\,\\,|x|,\\,|y|\\geq2;$}& \\\\\n\\nonumber &\\text{$(iv) \\,\\, G_{2}(x^{\\lambda},y)-G_{2}(x,y)\\leq0, \\quad\\quad \\forall \\,\\, \\lambda<|x|<\\lambda^{2}, \\,\\, \\lambda<|y|<+\\infty;$}& \\\\\n\\nonumber &\\text{$(v) \\,\\, G_{2}(x^{\\lambda},y^{\\lambda})-G_{2}(x,y^{\\lambda})\\leq G_{2}(x,y)-G_{2}(x^{\\lambda},y), \\qquad \\forall \\,\\, \\lambda<|x|,|y|<\\lambda^{2}.$}&\n\\end{flalign}\n\\end{lem}\n\nLemma \\ref{G-ec} can be proved by direct calculations, so we omit the details here.\n\nFrom Lemma \\ref{G-ec} and \\eqref{omega-ec}, one can derive that, for any $x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$,\n\\begin{eqnarray}\\label{2-37-ec}\n &&\\omega^{\\lambda}(x)=u_{\\lambda}(x)-u(x) \\\\\n \\nonumber &\\leq&\\int_{\\lambda<|y|<\\lambda^{2}}\\left[G_{2}(x,y)-G_{2}(x^{\\lambda},y)\\right]\n \\left[\\left(\\frac{\\lambda}{|y|}\\right)^{4}f(y^{\\lambda},u_{\\lambda}(y))-f(y,u(y))\\right]dy\\\\\n\\nonumber &<&\\int_{\\lambda<|y|<\\lambda^{2}}\\left(G_{2}(x,y)-G_{2}(x^{\\lambda},y)\\right)\\left(f(y,u_{\\lambda}(y))-f(y,u(y))\\right)dy\\\\\n\\nonumber &\\leq&C\\int_{\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}}G_{2}(x,y)\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\n\\omega^{\\lambda}(y)dy\\\\\n\\nonumber &\\leq&C\\lambda^{4\\varepsilon}\\int_{\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}\\cap B_{\\lambda^{2}\\delta(\\varepsilon)}(x)}\\frac{1}{|x-y|^{2\\varepsilon}}\\cdot\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\n\\omega^{\\lambda}(y)dy\\\\\n\\nonumber &&+C(\\delta(\\varepsilon))\\int_{\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}\\setminus B_{\\lambda^{2}\\delta(\\varepsilon)}(x)}\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\n\\omega^{\\lambda}(y)dy,\n\\end{eqnarray}\nwhere we have used the subcritical condition on $f(x,u)$ for $\\mu=\\left(\\frac{\\lambda}{|y|}\\right)^{2}<1$ to derive the second inequality and the assumption $(\\mathbf{f_{1}})$ on $f(x,u)$ to derive the third inequality.\n\nBy Hardy-Littlewood-Sobolev inequality, H\\\"{o}lder inequality and \\eqref{2-37-ec}, we have, for any $\\frac{1}{\\varepsilon}0$ sufficiently small such that $-\\frac{n\\theta}{n-2\\varepsilon}>-1$, then choose $q>\\frac{1}{\\varepsilon}$ sufficiently large such that $-\\frac{q\\theta}{q-1}>-1$. 
Then, since $u\\in C(\\overline{\\Omega_{1}})$ and $f(x,u)$ satisfies the assumption $(\\mathbf{f_{2}})$, there exists a $\\delta_{0}>0$ small enough, such that\n\\begin{eqnarray}\\label{3-15-ec}\n && C\\lambda^{4\\varepsilon}\n \\left\\|\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\\right\\|_{L^{\\frac{n}{n-2\\varepsilon}}((B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+})}\\\\\n \\nonumber &&\\qquad+C(\\delta(\\varepsilon))\\,\\left|\\left(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}}\\right)^{+}\\right|^{\\frac{1}{q}}\n \\left\\|\\frac{f(y,u_{\\lambda}(y))-f(y,u(y))}{u_{\\lambda}(y)-u(y)}\\right\\|_{L^{\\frac{q}{q-1}}((B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+})}\\leq\\frac{1}{2}\n\\end{eqnarray}\nfor all $1<\\lambda\\leq1+\\delta_{0}$, and hence \\eqref{3-14-ec} implies\n\\begin{equation}\\label{3-16-ec}\n \\|\\omega^{\\lambda}\\|_{L^{q}((B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+})}=0,\n\\end{equation}\nwhich means $(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset$. Therefore, we have proved for all $1<\\lambda\\leq1+\\delta_{0}$, $(B_{\\lambda^{2}}\\setminus\\overline{B_{\\lambda}})^{+}=\\emptyset$, that is,\n\\begin{equation}\\label{3-17-ec}\n \\omega^{\\lambda}(x)\\leq0, \\,\\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}.\n\\end{equation}\nThis completes Step 1.\n\n\\emph{Step 2. Dilate the circle $S_{\\lambda}$ outward until $\\lambda=+\\infty$ to derive lower bound estimates of $u$ for $|x|$ large.} Step 1 provides us a start point to dilate the circle $S_{\\lambda}$ from near $\\lambda=1$. Now we dilate the circle $S_{\\lambda}$ outward as long as \\eqref{2-7-ec} holds. Let\n\\begin{equation}\\label{2-29-ec}\n \\lambda_{0}:=\\sup\\{1<\\lambda<+\\infty\\,|\\, \\omega^{\\mu}\\leq0 \\,\\, in \\,\\, B_{\\mu^{2}}(0)\\setminus\\overline{B_{\\mu}(0)}, \\,\\, \\forall \\, 1<\\mu\\leq\\lambda\\}\\in(1,+\\infty],\n\\end{equation}\nand hence, one has\n\\begin{equation}\\label{2-30-ec}\n \\omega^{\\lambda_{0}}(x)\\leq0, \\quad\\quad \\forall \\,\\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}.\n\\end{equation}\nIn what follows, we will prove $\\lambda_{0}=+\\infty$ by contradiction arguments.\n\nSuppose on contrary that $1<\\lambda_{0}<+\\infty$. In order to get a contradiction, we will first prove\n\\begin{equation}\\label{2-31-ec}\n \\omega^{\\lambda_{0}}(x)\\equiv0, \\,\\,\\,\\,\\,\\,\\forall \\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}\n\\end{equation}\nby using contradiction arguments.\n\nSuppose on contrary that \\eqref{2-31-ec} does not hold, that is, $\\omega^{\\lambda_{0}}\\leq0$ but $\\omega^{\\lambda_{0}}$ is not identically zero in $B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$, then there exists a $x^{0}\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$ such that $\\omega^{\\lambda_{0}}(x^{0})<0$. 
We will obtain a contradiction with \\eqref{2-29-ec} via showing that the circle $S_{\\lambda}$ can be dilated outward a little bit further, more precisely, there exists a $\\epsilon>0$ small enough such that $\\omega^{\\lambda}\\leq0$ in $B_{\\lambda^{2}}(0)\\setminus\\overline{B_{\\lambda}(0)}$ for all $\\lambda\\in[\\lambda_{0},\\lambda_{0}+\\epsilon]$.\n\nFor that purpose, we will first show that\n\\begin{equation}\\label{2-32-ec}\n \\omega^{\\lambda_{0}}(x)<0, \\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}.\n\\end{equation}\nIndeed, since we have assumed there exists a point $x^{0}\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$ such that $\\omega^{\\lambda_{0}}(x^{0})<0$, by continuity, there exists a small $\\delta>0$ and a constant $c_{0}>0$ such that\n\\begin{equation}\\label{2-33-ec}\nB_{\\delta}(x^{0})\\subset B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)} \\,\\,\\,\\,\\,\\, \\text{and} \\,\\,\\,\\,\\,\\,\n\\omega^{\\lambda_{0}}(x)\\leq -c_{0}<0, \\,\\,\\,\\,\\,\\,\\,\\, \\forall \\, x\\in B_{\\delta}(x^{0}).\n\\end{equation}\nSince $f(x,u)$ is subcritical and satisfies the assumption $(\\mathbf{f_{1}})$, one can derive from \\eqref{2-33-ec}, Lemma \\ref{G-ec} and \\eqref{2-37-ec} that, for any $x\\in B_{\\lambda_{0}^{2}}(0)\\setminus\\overline{B_{\\lambda_{0}}(0)}$,\n\\begin{eqnarray}\\label{9-37-ec}\n &&\\omega^{\\lambda_{0}}(x)=u_{\\lambda_{0}}(x)-u(x) \\\\\n \\nonumber &\\leq&\\int_{\\lambda_{0}<|y|<\\lambda_{0}^{2}}\\left[G_{2}(x,y)-G_{2}(x^{\\lambda_{0}},y)\\right] \\left[\\left(\\frac{\\lambda_{0}}{|y|}\\right)^{4}f(y^{\\lambda_{0}},u_{\\lambda_{0}}(y))-f(y,u(y))\\right]dy\\\\\n\\nonumber &<&\\int_{B_{\\delta}(x_{0})}\\left(G_{2}(x,y)-G_{2}(x^{\\lambda_{0}},y)\\right)\\left(f(y,u_{\\lambda_{0}}(y))-f(y,u(y))\\right)dy\\leq0,\n\\end{eqnarray}\nthus we arrive at \\eqref{2-32-ec}.\n\nNow, we choose a $00, \\quad\\quad \\forall \\,\\, \\sigma\\leq|x|<\\infty.\n\\end{equation}\n\nThis finishes our proof of Theorem \\ref{lower-ec}.\n\\end{proof}\n\nSince $-2\\leq\\tau<+\\infty$ in assumption $(\\mathbf{f_{3}})$, we can deduce from the assumption $(\\mathbf{f_{3}})$ on $f(x,u)$, the integral equations \\eqref{IEe}, Lemma \\ref{G-ec} and Theorem \\ref{lower-ec} that, for any $2\\sigma\\leq|x|<+\\infty$,\n\\begin{eqnarray}\\label{2-51-ec'}\n u(x)&\\geq&\\overline{C}\\int_{\\mathcal{C}\\cap\\{|y|\\geq2|x|\\}}G_{2}(x,y)|y|^{\\tau}C^{p}_{0}dy \\\\\n \\nonumber &\\geq&C\\int_{\\mathcal{C}\\cap\\{|y|\\geq2|x|\\}}|y|^{\\tau}dy=+\\infty,\n\\end{eqnarray}\nwhich is a contradiction! Therefore, we must have $u\\equiv0$ in $\\overline{\\Omega_{1}}$, that is, the unique nonnegative solution to IEs \\eqref{IEe} with $\\alpha=n=2$ is $u\\equiv0$ in $\\overline{\\Omega_{1}}$.\n\nThis concludes our proof of Theorem \\ref{Thm0}.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbbqv b/data_all_eng_slimpj/shuffled/split2/finalzzbbqv new file mode 100644 index 0000000000000000000000000000000000000000..43c434de43c5366885cd4ce3fb74a71981b6e76e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbbqv @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\subsection{NGC 2419}\nNGC 2419 is a stellar aggregate with a number of puzzling characteristics and its nature and origin are yet unclear. 
\nWith a half-light radius $r_h$ of 21.4 pc it is the fifth-most extended object listed in the 2010-version of the Harris (1996) catalogue, while \nit is also one of the most luminous Globular Clusters (GCs) in the Milky Way (MW; Fig.~1). At a Galactocentric distance of 90 kpc it \nresides in the outermost halo. All these traits have fueled discussions of whether it contains any dark matter or could be affected by non-Newtonian\ndynamics (Baumgardt et al. 2005; Conroy et al. 2011; Ibata et al. 2012). For instance, Ibata et al. (2012) argue that its kinematics is incompatible with a dark matter content in excess of some 6\\% of its total mass. \nOverall, these morphological and dynamical considerations beg the question to what extent NGC~2419 has evolved in isolation and \nwhether it could be associated with a once-accreted, larger system like a dwarf (spheroidal) galaxy.\n\\begin{figure}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[clip=true]{Koch_f1.eps}\n}\n\\caption{\\footnotesize\nMagnitude-half light radius plot for GCs (black dots), luminous dSphs (blue squares) and ultrafaint MW satellites (red circles). NGC~2419 is labeled -- what is this object?}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\nAlso chemically, NGC 2419 has much to offer: Cohen \\& Kirby (2012) and Mucciarelli et al. (2012) identified a population of stars (ca. 30\\% by number) with remarkably low Mg- and high K-abundances, \nwhich could be the result of ``extreme nucleosynthesis'' (Ventura et al. 2012). The question of an abundance-spread has been addressed by several authors using high-resolution spectroscopy and \nlow-resolution measurements of the calcium triplet. However, the large abundance variation of \nthe electron-donor Mg will upset the commonly used stellar model atmospheres so that any claimed spread in iron-, Ca-, and thus overall metallicity needs to be considered with caution. \nHowever, settling exactly this aspect is of prime importance, since any significant spread in heavy elements is a trademark signature of an object with a likely extragalactic origin \n(e.g., Fig.~1 in Koch et al. 2012). \n\nThe color-magnitude diagrams (CMDs) of Di Criscienzo et al. (2011) show a hint of a color-spread towards the subgiant branch and the presence of a \nhot, faint Horizontal Branch (HB), \nconsistent with a second generation of stars with a strongly increased He-content. Thus, also NGC~2419 does appear to show signs of multiple stellar populations,\nin line with the majority of the MW GC system. \n\\subsection{Str\\\"omgren photometry}\nWhile broad-band filter combinations have succeeded in unveiling multiple stellar populations in sufficiently deep data sets and more massive systems (e.g., Piotto et al. 2007), additional \nobservations in intermediate-band \nStr\\\"omgren filters\nare desirable for a number of reasons:\n\\begin{enumerate}\n\\item[\\em i)] The $c_1 = (u-v)- (v-b)$ index in combination with a color such as $v-y$ is a powerful {\\em dwarf\/giant separator} and can efficiently remove any foreground contamination (e.g., Faria et al. 2007). \nAt $b=25\\degr$ this can be expected to be less of a problem in NGC~2419, but see, e.g., Ad\\'en et al. (2009) for an impressive demonstration of such a CMD cleaning. \nOur first assessment of the $c_1$-$(b-y)$ plane indicates that the foreground contamination is indeed minimal on the upper RGB (see also Fig.~2). 
\n\\item[\\em ii)] The index $m_1 = (v-b)- (b-y)$ is a good proxy for stellar {\\em metallicity} and calibrations have been devised by several authors (e.g., Hilker 2000; Calamida et al. 2007; \nAd\\'en et al. 2009). \n\\item[\\em iii)] {\\em Multiple populations} in terms of split red giant branches (RGBs), multiple subgiant branches, and main sequence turnoffs are well separated in CMDs that use combinations \nof Str\\\"omgren filters, e.g., $\\delta_4 = c_1 + m_1$ (Carretta et al. 2011), where optical CMDs based on broad-band filters still show unimodal, ``simple stellar populations''. \n\\item[\\em iv)] This is immediately interlinked with the {\\em chemical abundance variations} in the light chemical elements (e.g., Anthony-Twarog et al. 1995) that accompany the multiple populations, \nmost prominently driven by N-variations. Accordingly, Yong et al. (2008) confirmed linear correlations of $c_y = c_1 - (b-y)$ with the [N\/Fe] ratio.\n\\end{enumerate}\n\\section{Data and analysis}\nWe obtained imaging in all relevant Str\\\"omgren filters ($u$,$b$,$v$,$y$) using the Wide Field Camera (WFC) at the 2.5-m Isaac Newton Telescope (INT) at La Palma, Spain. \nIts large field of view ($33\\arcmin\\times33\\arcmin$) allows us to trace the large extent of NGC~2419 out to several times its tidal radius ($r_t\\sim7.5\\arcmin$). \n\nInstrumental magnitudes were obtained via PSF-fitting using the \\textsc{Daophot\/Allframe} software packages \n(Stetson 1987).\nThe instrumental magnitudes were transformed to the standard Str\\\"omgren system using ample observations of standard stars \n(Schuster \\& Nissen 1988).\nWe set up transformation equations similar to those given by Grundahl, Stetson \\& Andersen (2002).\n\\section{Preliminary results: CMDs and [M\/H]}\nFig.~2 shows two CMDs of NGC~2419, where we restrict our analysis to the bona-fide region between 1 and 3 half-light radii to avoid potentially crowded, inner regions, yet \nminimizing the field star contamination of the outer parts. \nFor the present analysis, we adopted a constant reddening of E($B-V$)$= 0.061$\\,mag\\footnote{Obtained from \\url{http:\/\/irsa.ipac.caltech.edu\/applications\/DUST}}, \nand its respective transformations to the Str\\\"omgren system (Calamida et al. 2009).\n\\begin{figure*}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[width=0.02\\hsize]{Koch_f2.eps}\n}\n\\caption{\\footnotesize\nCMDs of NGC 2419 for two possible Str\\\"omgren-filter combinations. Shown are stars between $r_h < r < 3\\,r_h$; no other selection criteria have been applied. \nStars falling within the lines shown right were used to construct the metallicity distribution in Fig.~3.}\n\\end{figure*}\n\nWhile we do not resolve the main sequence turnoff, our photometry reaches about 1 mag below the HB at $y_{\\rm HB}$$\\equiv$$V_{\\rm HB}$$\\sim$$20.5$ mag. \nAll regions of the CMD are well reproduced, showing \na clear RGB, hints of an AGB and bright AGB (which stand out more clearly in other color indices; Frank et al. in prep.), and a prominent HB. Hess diagrams also highlight the presence of a clear \nRGB bump at $y_0\\sim20.3$ mag. \nMoreover, the extreme, hotter HB stands out in the bluer u-band (left panel), confirming the presence of this He-rich, secondary population (di Crisicienzo et al. 2011). \n\nTo obtain a first impression of the metallicity distribution function (MDF) of NGC~2419 (Fig.~3, right panel), we convert our Str\\\"omgren photometry to metallicities, [M\/H], through the \ncalibration by Ad\\'en et al. (2009). 
\nThis was carried out for stars on the RGB (see ridge lines in Fig.~2, right panel, and Fig.~3, left). As a result, we find \na mean [M\/H] of $-2$ dex. This is in good agreement with the values listed in the Harris catalogue and the high-resolution data of Mucciarelli et al. (2012) and Cohen \\& Kirby (2012) of [Fe\/H] = $-2.15$ dex. \n\\begin{figure*}[t!]\n\\resizebox{\\hsize}{!}{\n\\includegraphics[clip=true,width=0.55\\hsize]{Koch_f3a.eps}\n\\includegraphics[clip=true,width=0.53\\hsize]{Koch_f3b.eps}\n}\n\\caption{\\footnotesize\nPreliminary metallicity calibration in the $m_1$-vs-$(b-y)$ plane (left panel). Dashed lines indicate iso-metallicity curves based on the calibration of Ad\\'en et al. (2009) for [M\/H] = $-2.5$ up to +0.5 dex in steps of 0.5 (bottom to top). \nBlack dots are those within the RGB-ridge lines of Fig.~2, used to infer the MDF (light gray) in the right panel. The dark gray MDF in this plot uses a stricter RGB criterion of $(v-y)_0 > 1.4$.}\n\\label{eta}\n\\end{figure*}\n\n\nThe MDF also indicates the presence of a broad metallicity spread, where we find a nominal 1$\\sigma$-spread of 0.5 dex, but this is probably still dominated by \nremaining foreground contaminants and photometric errors. While we cannot exclude the presence of an abundance spread in NGC 2419 from the \npresent data, it is very likely much smaller than the one suggested by Fig.~3.\n\\section{Discussion} \nAlthough our CMD does not allow us to clearly isolate any multiple stellar populations at this stage of our analysis, we nevertheless \nfind strong reason to believe in their presence in NGC~2419, bolstered by recent optical images (di Criscienzo et al. 2011). \nThese authors detected a color spread at the base of the RGB and an extreme, hot HB, indicative of an increased He-abundance of a populous second stellar generation. \nThis HB population is also visible in our intermediate-band CMDs. \n\nAlthough our first analysis suggests a broad metallicity spread in NGC~2419, this is probably not significant and further CMD filtering is necessary. \nHowever, our derived mean metallicity is in line with the results from high-resolution spectroscopy, which indicates that our Str\\\"omgren photometry \nis well calibrated. \n\\begin{acknowledgements}\nAK, MF, and NK gratefully acknowledge the Deutsche Forschungsgemeinschaft for funding from Emmy-Noether grant Ko 4161\/1. 
\nThis research has made use of the NASA\/ IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of \nTechnology, under contract with the National Aeronautics and Space Administration.\n\\end{acknowledgements}\n\\bibliographystyle{aa}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{%\n \\@startsection\n {section}%\n {1}%\n {\\z@}%\n {0.8cm \\@plus1ex \\@minus .2ex}%\n {0.5cm}%\n {%\n \\normalfont\\small\\bfseries\n \\centering\n }%\n}%\n\\def\\@hangfrom@section#1#2#3{\\@hangfrom{#1#2}\\MakeTextUppercase{#3}}%\n\\def\\subsection{%\n \\@startsection\n {subsection}%\n {2}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\bfseries\n \\centering\n }%\n}%\n\\def\\subsubsection{%\n \\@startsection\n {subsubsection}%\n {3}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\itshape\n \\centering\n }%\n}%\n\\def\\paragraph{%\n \\@startsection\n {paragraph}%\n {4}%\n {\\parindent}%\n {\\z@}%\n {-1em}%\n {\\normalfont\\normalsize\\itshape}%\n}%\n\\def\\subparagraph{%\n \\@startsection\n {subparagraph}%\n {5}%\n {\\parindent}%\n {3.25ex \\@plus1ex \\@minus .2ex}%\n {-1em}%\n {\\normalfont\\normalsize\\bfseries}%\n}%\n\\def\\section@preprintsty{%\n \\@startsection\n {section}%\n {1}%\n {\\z@}%\n {0.8cm \\@plus1ex \\@minus .2ex}%\n {0.5cm}%\n {%\n \\normalfont\\small\\bfseries\n }%\n}%\n\\def\\subsection@preprintsty{%\n \\@startsection\n {subsection}%\n {2}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\bfseries\n }%\n}%\n\\def\\subsubsection@preprintsty{%\n \\@startsection\n {subsubsection}%\n {3}%\n {\\z@}%\n {.8cm \\@plus1ex \\@minus .2ex}%\n {.5cm}%\n {%\n \\normalfont\\small\\itshape\n }%\n}%\n \\@ifxundefined\\frontmatter@footnote@produce{%\n \\let\\frontmatter@footnote@produce\\frontmatter@footnote@produce@endnote\n }{}%\n\\def\\@pnumwidth{1.55em}\n\\def\\@tocrmarg {2.55em}\n\\def\\@dotsep{4.5pt}\n\\setcounter{tocdepth}{3}\n\\def\\tableofcontents{%\n \\addtocontents{toc}{\\string\\tocdepth@munge}%\n \\print@toc{toc}%\n \\addtocontents{toc}{\\string\\tocdepth@restore}%\n}%\n\\def\\tocdepth@munge{%\n \\let\\l@section@saved\\l@section\n \\let\\l@section\\@gobble@tw@\n}%\n\\def\\@gobble@tw@#1#2{}%\n\\def\\tocdepth@restore{%\n \\let\\l@section\\l@section@saved\n}%\n\\def\\l@part#1#2{\\addpenalty{\\@secpenalty}%\n \\begingroup\n \\set@tocdim@pagenum{#2}%\n \\parindent \\z@\n \\rightskip\\tocleft@pagenum plus 1fil\\relax\n \\skip@\\parfillskip\\parfillskip\\z@\n \\addvspace{2.25em plus\\p@}%\n \\large \\bf %\n \\leavevmode\\ignorespaces#1\\unskip\\nobreak\\hskip\\skip@\n \\hb@xt@\\rightskip{\\hfil\\unhbox\\z@}\\hskip-\\rightskip\\hskip\\z@skip\n \\par\n \\nobreak %\n \\endgroup\n}%\n\\def\\tocleft@{\\z@}%\n\\def\\tocdim@min{5\\p@}%\n\\def\\l@section{%\n \\l@@sections{}{section\n}%\n\\def\\l@f@section{%\n \\addpenalty{\\@secpenalty}%\n \\addvspace{1.0em plus\\p@}%\n \\bf\n}%\n\\def\\l@subsection{%\n \\l@@sections{section}{subsection\n}%\n\\def\\l@subsubsection{%\n \\l@@sections{subsection}{subsubsection\n}%\n\\def\\l@paragraph#1#2{}%\n\\def\\l@subparagraph#1#2{}%\n\\let\\toc@pre\\toc@pre@auto\n\\let\\toc@post\\toc@post@auto\n\\def\\listoffigures{\\print@toc{lof}}%\n\\def\\l@figure{\\@dottedtocline{1}{1.5em}{2.3em}}\n\\def\\listoftables{\\print@toc{lot}}%\n\\let\\l@table\\l@figure\n\\appdef\\class@documenthook{%\n \\@ifxundefined\\raggedcolumn@sw{\\@booleantrue\\raggedcolumn@sw}{}%\n 
\\raggedcolumn@sw{\\raggedbottom}{\\flushbottom}%\n}%\n\\def\\tableft@skip@float{\\z@ plus\\hsize}%\n\\def\\tabmid@skip@float{\\@flushglue}%\n\\def\\tabright@skip@float{\\z@ plus\\hsize}%\n\\def\\array@row@pre@float{\\hline\\hline\\noalign{\\vskip\\doublerulesep}}%\n\\def\\array@row@pst@float{\\noalign{\\vskip\\doublerulesep}\\hline\\hline}%\n\\def\\@makefntext#1{%\n \\def\\baselinestretch{1}%\n \\reset@font\n \\footnotesize\n \\leftskip1em\n \\parindent1em\n \\noindent\\nobreak\\hskip-\\leftskip\n \\hb@xt@\\leftskip{%\n \\Hy@raisedlink{\\hyper@anchorstart{footnote@\\the\\c@footnote}\\hyper@anchorend}%\n \\hss\\@makefnmark\\\n }%\n #1%\n \\par\n}%\n\\prepdef\n\\section{Introduction}\n\\label{Sect.I}\n\nHeavy-ion collisions are excellent factory for producing \nboth elementary and composed particles as well as for studying their \nproperties and production mechanism. Since many years \nefforts of theorists and experimentalists were focused on the investigation \nof time-space evolution of the quark-gluon plasma (QGP) and production of\ndifferent species of particles, primarily hadrons \n(pions, kaons, nucleons, hyperons, etc.) emitted in the collision. \nAt high energies, the velocities of such beam nuclei are close to light \nvelocity thus they are often called ultra-relativistic \nvelocities (URV). \n\nCentral collisions are the most interesting in the context of\nthe QGP studies. Plasma is, of course, also produced in more peripheral \ncollisions.\nIn peripheral collisions, the so-called spectators are relatively large \nand have large moving charge. It was realized relatively late that \nthis charge generates strong quickly changing electromagnetic fields \nthat can influence the trajectories and some observables \nfor charged particles.\n\nSuch effects were investigated in previous studies of one of the present\nauthors \\cite{Rybicki:2006qm,Rybicki:2013qla}.\nOn one hand side the EM effects strongly modify the Feynman $x_F$ spectra\nof low-$p_T$ pions, creating a dip for $\\pi^+$ and\nan enhancement for $\\pi^-$ at $x_F \\approx \\frac{m_{\\pi}}{m_N}$.\nIn \\cite{Rybicki:2006qm} a formalism of charged meson evolution\nin the EM field (electric and magnetic) of fast moving nuclei was\ndeveloped. Later on the spectacular effects were confronted with \nthe SPS data \\cite{Rybicki:2009zz} confirming the theoretical predictions.\nThe investigation was done for $^{208}$Pb+$^{208}$Pb at 158~GeV\/nucleon \nenergy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV) at \nCERN Super Proton Synchrotron (SPS) \\cite{Schlagheck:1999aq}.\nIn \\cite{Rybicki:2013qla} the influence of the EM fields\non azimuthal flow parameters ($v_n$) was studied and confronted in\n\\cite{Rybicki:2014rna} with the RHIC data. It was found that the EM field\nleads to a split of the directed flow for opposite e.g.\ncharges. In the initial calculation, a simple model of single initial\ncreation point of pions was assumed for simplicity.\nMore recently, such a calculation was further developed by taking \ninto account also the time-space evolution of the fireball, treated \nas a set of firestreaks \\cite{Ozvenchuk:2019cve}. \nThe distortions of the $\\pi^+$ and $\\pi^-$ distributions allow to\ndiscuss the electromagnetic effects of the spectator and charged pions \nin URV collisions of heavy ions and nicely explain the experimental data. 
\nThis study was done for non-central collisions, where the object remaining \nafter the collision, called the 'spectator', loses only a small fraction \nof the nucleons of the original beam\/target nucleus.\n\nUltra-peripheral heavy-ion collisions (e.g.~$^{208}$Pb+$^{208}$Pb) at \nultra-relativistic energies ($\\sqrt{s_{NN}}\\ge 5~$GeV) \\cite{Klusek-Gawenda:2016suk}\nallow particles to be produced in a broad region of impact parameter\nspace, even far from the ``colliding'' nuclei.\nThe nuclei passing near each other at ultrarelativistic energies \nare a source of virtual photons that can collide, producing\ne.g. a pair of leptons.\nIn current experiments (RHIC, LHC), the luminosity is large enough to observe\ne.g. the $AA \\rightarrow AA\\rho_0$, $AA \\rightarrow AAe^+e^-$ and \n$AA \\rightarrow AA\\mu^+\\mu^-$ processes. \nOne of the most interesting phenomena is multiple interaction \n\\cite{Klusek-Gawenda:2016suk,vanHameren:2017krz},\nwhich may lead to the production of more than one lepton pair.\n\nStudies of positron-electron pair creation started in the early \n1930s with Dirac's work on the positron \\cite{Dirac:1934} and the work\nof G. Breit and J.A. Wheeler \\cite{Breit:1934zz}, who calculated \nthe cross section for the production of such pairs in the electric field. \nIt was E.J. Williams who realized \\cite{Williams:1935} that the \nproduction of $e^+ e^-$ pairs is enhanced in the vicinity of the atomic\nnucleus. An overview of the theoretical investigation of \n$e^+ e^-$ pair creation in its historical context was presented \nby e.g. J.H. Hubbell \\cite{Hubbel:2006}, \nand a detailed discussion of this process in physics and\nastrophysics was given by R. Ruffini et al. \\cite{Ruffini:2009hg}.\n\nThe early analyses were done in momentum space and therefore\ndid not include all details in the impact parameter space.\nAn example of a calculation where such details are taken into\naccount can be found e.g. in \n\\cite{Klusek-Gawenda:2010vqb,Klusek-Gawenda:2016suk}.\nIn an ultra-peripheral collision (UPC), the nuclei do not collide\nand, in principle, do not lose any nucleon.\nHowever, the electromagnetic interaction induced by the fast moving\nnuclei may cause excitation of the nuclei and subsequent emission \nof different particles, in particular neutrons \\cite{Klusek-Gawenda:2013ema},\nwhich can be measured both at RHIC and at the LHC.\nMoreover, UPCs are responsible not only \nfor Coulomb excitation of the spectators but also for \nmultiple scattering and the production of more than one dielectron\npair \\cite{vanHameren:2017krz}. \nWith the large transverse momentum cuts typical at RHIC and the LHC the effect\nis not dramatic.\n\nCan the strong EM fields generated at high energies modify \nthe electron\/positron distributions?\nNo visible effect was observed for electrons with $p_T >$ 1~GeV,\nas discussed in \\cite{vanHameren:2017krz}, where the ALICE distributions\nwere confronted with the $b$-space equivalent photon approximation (EPA)\nmodel.\nHowever, the electromagnetic effect is expected rather at \nlow transverse momenta.\nTo our knowledge, this topic has not been discussed \nin the literature.\nSince the spectators, which in ultra-peripheral collisions are almost \nidentical to the colliding nuclei, are charged,\nthey can interact electromagnetically with electrons and positrons, as \nwas the case for pions. \nEffects similar to those observed for pions may therefore be expected also for charged \nleptons. 
The motion of particles in an EM field depends not only on their \ncharge but also on their mass. Thus the distortions of the $e^+\/e^-$\ndistributions should be different from those of the $\\pi^+\/\\pi^-$ distributions. \nThe production mechanism is also completely different. \nIn contrast to pion production, where the emission site is well localized,\nthe electron-positron pairs produced by photon-photon fusion can \nbe created in a broad configuration space around the ``collision''\npoint, i.e. the point of closest approach of the nuclei. \nA pedagogical illustration of the impact parameter dependence can \nbe found e.g. in \\cite{Klusek-Gawenda:2010vqb}.\n\nOn the one hand, the previous works \\cite{Ozvenchuk:2019cve}, and references \ntherein, dealt with the electromagnetic effects caused by \nthe emission of pions from the fireball region. \nOn the other hand, the model considered in \\cite{Klusek-Gawenda:2010vqb,Klusek-Gawenda:2018zfz,Klusek-Gawenda:2020eja} can correctly estimate \nthe localization in the impact parameter space.\nThe present study focuses on the electromagnetic \ninteraction between electrons\/positrons and the highly positively charged nuclei.\n\nOur approach consists of two steps. First, the $e^+ e^-$ distributions \nare calculated within the EPA in terms of initial distributions at a given point in \nspace and at a given initial rapidity and transverse momentum.\nSecondly, the space-time evolution of the leptons in the electromagnetic fields\nof the fast moving nuclei with URV is performed by solving the relativistic \nequation of motion \\cite{Rybicki:2011zz}.\n\nIn Section \\ref{Sect.II} the details of the calculation of \nthe differential cross section for $e^+ e^-$ production are\npresented.\nWe do not discuss the equation of motion in detail; it was presented\ne.g. in \\cite{Rybicki:2011zz}. \nThe results of the evolution of the electrons\/positrons in\nthe EM field of the ``colliding'' nuclei are presented in Section \\ref{Sect.III}. \n\\section{Lepton pair production, equivalent photon approximation}\n\\label{Sect.II}\n\nThe particles originating from photon-photon collisions can be created\nanywhere in the space around the colliding nuclei, so first of all the geometry of \nthe reaction has to be defined. \n\nIn the present study, the ultra-peripheral collisions (UPC) are\ninvestigated in the reaction plane ($b_x,b_y$), which is perpendicular \nto the beam axis, taken as the $z$-direction.\n\nThe collision point ($b_x=0,b_y=0$) is the time-independent center of mass (CM)\nof the reaction, as shown in Fig.~\\ref{fig01}. The impact parameter\nis fixed to approximately twice the radius of the (identical) Pb nuclei, b=(13.95~fm, 14.05~fm). \nFor comparison we will also show results for b=(49.95~fm, 50.05~fm).\n\nFour characteristic points, ($\\pm$15~fm, 0) and (0, $\\pm$15~fm), which are discussed later, are also \nmarked in the figure. 
In the present paper \nwe shall show results for these initial emission points for \nillustrating the effect of evolution of electrons\/positrons in \nthe EM field of nuclei.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.30]{geometra01.pdf}\n \\caption{ The impact parameter space and the\n five selected points ($b_x,b_y$):\n (0, 0), ($\\pm$15~fm,0), (0,$\\pm$15~fm), for which \n the distribution in rapidity and transverse momentum will be\n compared latter on in the text, shown in the CM rest frame.\n \n }\n \\label{fig01}\n \\end{center}\n\n\\end{figure}\n\nUsually the exclusive dilepton production was estimated by using \nthe monopole charge form factor which allows to reproduce correctly \nthe total cross section. \nThe differential cross sections are more sensitive to\ndetails, thus realistic charge form factor (Fourier transform of the\ncharge distribution) has to be employed.\n\\footnote{Double scattering production of positron-electron pairs using \nthe realistic charge form factor has been discussed \nin \\cite{Klusek-Gawenda:2016suk} and \\cite{vanHameren:2017krz}.} \nThe total cross section for the considered process\n($A A \\to A A e^+ e^-$) can be written as:\n\n\\begin{eqnarray}\n&& \\sigma_{A_1A_2\\rightarrow A_1A_2e^+e^-} (\\sqrt{s_{A_1A_2}})=\t\\nonumber \\\\\n &=&\\int \\frac{d\\sigma_{\\gamma\\gamma \\rightarrow e^+e^-}(W_{\\gamma\\gamma}) }{d\\cos{\\theta}} \nN(\\omega_1,b_1) N(\\omega_2,b_2) S^2_{abs}(b) \\nonumber\\\\\n&\\times& 2\\pi bdbd\\overline{b_x} d \\overline{b_y}\n\\frac{W_{\\gamma\\gamma}}{2} d W_{\\gamma\\gamma}d Y_{e^+e^-} \\ d\\cos{\\theta},\n\\label{EPA}\n\\end{eqnarray}\nwhere $N(\\omega_i,b_i)$ are photon fluxes, $W_{\\gamma\\gamma}=M_{e^+e^-}$\n is invariant mass and $Y_{e^+e^-}= (y_{e^+} + y_{e^-})\/2$ is rapidity of \nthe outgoing system and $\\theta$ is the scattering angle in the $\\gamma\\gamma\\rightarrow e^+e^-$ \ncenter-of mass system. 
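\nFor orientation, the mapping between the pair variables $(W_{\\gamma\\gamma}, Y_{e^+e^-}, \\theta)$ appearing in Eq.~(\\ref{EPA}) and the single-lepton variables $(y,p_T)$ used below can be sketched numerically as follows; this is only a minimal illustration of the kinematics (it assumes that $\\theta$ is measured with respect to the beam axis in the $\\gamma\\gamma$ center-of-mass frame) and is not a piece of the actual cross-section code.\n\\begin{verbatim}\nimport math\n\nM_E = 0.000511  # electron mass in GeV\n\ndef pair_to_lepton_kinematics(W, Y, cos_theta):\n    # photon energies, omega_{1,2} = W/2 * exp(+-Y)\n    omega1 = 0.5 * W * math.exp(+Y)\n    omega2 = 0.5 * W * math.exp(-Y)\n    # lepton velocity and rapidity in the gamma-gamma CM frame\n    beta_star = math.sqrt(max(0.0, 1.0 - 4.0 * M_E**2 / W**2))\n    y_star = math.atanh(beta_star * cos_theta)\n    # boost by the pair rapidity Y; p_T is invariant\n    y_plus, y_minus = Y + y_star, Y - y_star\n    pT = 0.5 * W * beta_star * math.sqrt(1.0 - cos_theta**2)\n    return omega1, omega2, y_plus, y_minus, pT\n\n# example: W = 0.1 GeV, Y = 0, theta = 60 degrees\nprint(pair_to_lepton_kinematics(0.1, 0.0, 0.5))\n\\end{verbatim}\n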
The gap survival factor $S^2_{abs}$ assures that \nonly ultra-peripheral reactions are considered.\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.5]{paw_ypt_b14.png}\n \\caption{The map of differential cross section in rapidity of\n electron or positron and lepton transverse momentum \n for ($b_x,b_y$) = (0, 0) which will be called CM point for brevity.}\n\n \\label{fig02a}\n \\end{center}\n\\end{figure}\n\nThe\n$\\overline{b_x}=(b_{1x}+b_{2x})\/2$ and\n$\\overline{b_y}=(b_{1y}+b_{2y})\/2$\nquantities are particularly useful for our purposes.\nWe define $\\vec{{\\bar b}} = (\\vec{b}_1 + \\vec{b}_2)\/2$ which is\n(initial) position\nof electron\/positron in the impact parameter space.\nThis will be useful when considering motion of electron\/positron\nin the EM field of nuclei.\nThe energies of photons are included by the relation: \n$\\omega_{1,2}=W_{\\gamma\\gamma}\/2 \\exp(\\pm Y_{e^+e^-})$.\nIn the following for brevity we shall use $b_x, b_y$ instead of \n$\\overline{b_x}, \\overline{b_y}$.\nThen $(b_x, b_y)$ is the position in the impact parameter plane,\nwhere the electron and positron are created.{\\footnote{Expression (\\ref{EPA}) allows to estimate not only \nthe lepton pair production but also a production of any other \nparticle pair \\cite{Klusek-Gawenda:2010vqb}.}}\nThe differential (in rapidity and transverse momentum) cross section \ncould be obtained in each emission point \nin the impact parameter space ($b_x,b_y$).\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{rap_e_xy_weight_17GeV.pdf}\n \\includegraphics[scale=0.60]{pt_e_xy_weight_17GeV.pdf}\n \\caption{(Color on-line) The differential cross section for various emission points\n of electrons\/positrons produced in the $^{208}$Pb+$^{208}$Pb\n reaction at 158~GeV\/nucleon energy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV)\n at impact parameter 14$\\pm$0.05~fm. The cross section \n for selected points ($b_x,b_y$): (0, 0), ($\\pm$15~fm, 0), (0, $\\pm$15~fm) and (40~fm, 0)\n are integrated over $p_T$ (a) and rapidity (b), respectively.}\n \\label{fig02b}\n \\end{center}\n\\end{figure}\n\n\nThe calculations will be done assuming the collision of\n$^{208}$Pb+$^{208}$Pb \nat 158~GeV\/nucleon energy ($\\sqrt{{s}_{NN}} =$ 17.3~GeV) corresponding to\nthe CERN SPS and $\\sqrt{{s}_{NN}} =$ 200~GeV of \nthe STAR RHIC at impact parameter 14$\\pm$0.05~fm which is \napproximately twice the radius of the lead nucleus. \nThis is minimal configuration assuring ultra-peripheral collisions.\n\nFigure \\ref{fig02a} illustrates the differential cross section on \nthe plane of rapidity ($y$) vs. transverse momentum ($p_T$). \nRather broad range of rapidity (-5, 5) is chosen, but the distribution in $p_T$ will be limited\nto (0, 0.1~GeV) as the cross section drops at $p_T$ = 0.1~GeV \nalready a few orders of magnitude. The electromagnetic effects may \nbe substantial only in the region of the small transverse momenta.\n\nThus for our exploratory study here we have limited the range for rapidity \nto (-5, 5) and for transverse momentum to $p_T$=(0, 0.1~GeV).\nThe integrated distribution can be seen in Fig.~\\ref{fig02b}(a) and\n(b). 
There we compare the distributions obtained for \ndifferent emission points ($b_x,b_y$): (0, 0), ($\\pm$15~fm, 0), \n(0, $\\pm$15~fm) as shown in Fig.~\\ref{fig01}.\nThe behavior of the differential cross section is very similar in each \n($b_x,b_y$) point but it differs in normalization as it is shown \nin Fig.~\\ref{fig02b}.\n\n\\begin{figure}[!hbt]\n\\includegraphics[scale=0.4]{dsig_dbm.eps}\n\\caption{\nDistribution of the cross section in the impact parameter $b$\nfor different energies:\n$\\sqrt{s_{NN}}$ = 17.3, 50 and 200~GeV (from bottom to top).\n}\n\\label{fig:dsig_dbm}\n\\end{figure}\n\nIn Fig.~\\ref{fig:dsig_dbm} we show a distribution of the cross section\nin impact parameter $b$ for different collision energies\n$\\sqrt{s_{NN}}$ = 17.3, 50, 200~GeV.\nIn this calculation we have taken $p_T >$ 0~GeV (the cross section\nstrongly depends on the lowest value of lepton transverse momentum $p_T$).\nIn general, the larger collision energy the broader the range of impact\nparameter. However, the cross section for $b \\approx R_{A_1} + R_{A_2}$\nis almost the same. Only taking into account limitation, e.g. on the momentum transfer, makes the difference in the cross section significant even at $b=14$~fm.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.450]{paw_baxbay_b14_check.eps}\\\\\n \\vspace{-8cm}\\hspace{-5.0cm}{\\bf(a)}\\\\\n \\vspace{7cm}\n \\includegraphics[scale=0.450]{paw_baxbay_b50_check.eps}\\\\\n \\vspace{-8cm}\\hspace{-5.0cm}{\\bf(b)}\\\\\n \\vspace{7cm}\n \\caption{ Two-dimensional cross section\n as a function of $b_x$ and $b_y$ for\n two values of impact parameter: \n (a) b=14$\\pm$0.05~fm and (b) b=50$\\pm$0.05~fm.}\n \\label{fig02d}\n \\end{center}\n\\end{figure}\n\n\nThe emission point of the electrons\/positrons does not change the\nbehavior (shape) of the cross section on the ($y,p_T$) plane but \nchanges the absolute value of the cross section.\nAs it is visible in Fig.~\\ref{fig02b} (a) and (b) the biggest\ncross section is obtained for the CM emission point. The production of \n$e^+,e^-$ at ($b_x,b_y$) = (40~fm, 0) i.e. far from the CM point, \nis hindered by three orders of magnitude. \nMoreover, the production at $b_x$=$\\pm$15~fm and $b_y$=0 is more\npreferable than the production at $b_x$=0 and $b_y$=$\\pm$15~fm \nwhat is fully understandable taking into account the geometry of \nthe system (see Fig.~\\ref{fig01}). \nAs the system taken here into consideration is fully symmetric\n($A_1 = A_2, Z_1 = Z_2$), \nthus corresponding results are symmetric under the following\nreplacements: $b_x \\to -b_x$ or $b_y \\to -b_y$.\n\n\nFigure~\\ref{fig02d} compares the integrated cross section on reaction \nplane ($b_x,b_y$) for two impact parameters: (a) b=14$\\pm$0.05~fm \n(when nuclei are close to each other) and (b) b=50$\\pm$0.05~fm (when nuclei are well separated). \nThe landscape reflects the position of the nuclei in the moment of\nthe closest approach.\nSimilar plots have also been done for higher $\\sqrt{s_{NN}}$ but \nthe shape is almost unchanged, only the cross section value is\ndifferent. \nThis figure illustrates the influence of the geometry of the reaction. \nRegardless of the impact parameter (b), distance between colliding nuclei, the cross-section has\na maximum at $b_x=0$. 
The change of $b$ is correlated with a shift \nof the peak in $b_x$.\n\n\\begin{figure}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{rap_e_xy_weight.pdf}\n \\includegraphics[scale=0.60]{pt_e_xy_weight.pdf}\n \\caption{(Color on-line) The differential cross section for various emission points\n of electrons in the $^{208}$Pb+$^{208}$Pb reaction \n at $\\sqrt{{s}_{NN}} =$ 17.3, 50, 200~GeV at impact parameter 14$\\pm$0.05~fm.}\n \\label{fig02c}\n \\end{center}\n\\end{figure}\n\n\nThe calculations confirm that the shape of the electron\/positron \ndistribution, shown in Fig.~\\ref{fig02c}, does not depend on \nthe energy of the colliding nuclei. Small differences \nin the magnitude of the cross sections are visible \nfor $\\sqrt{{s}_{NN}} =$ 17.3 and 200~GeV,\nat least in the selected limited $p_T$=(0, 0.1)~GeV range. \nThe dependence on rapidity is even weaker, as the differences are visible\nonly for $|y|>$3.\n\nThese cross sections are used as weights in the calculation of the electromagnetic\neffects between electrons\/positrons and the fast moving nuclei. The\ncorresponding matrix has the following dimensions: $b_{x,y}$=(-50~fm, 50~fm) with \n99$\\times$99 points in the reaction plane and 100$\\times$15 points in the ($y,p_T$) space.\n\n\\begin{figure}[!hbt]\n\t\\begin{center}\n\t\t\\includegraphics[scale=0.4]{dsig_dxipt.eps}\n\t\t\\caption{\n\t\t\tDistribution of the cross section in $\\log_{10}p_T$\n\t\t\tfor different energies:\n\t\t\t$\\sqrt{s_{NN}}$ = 17.3, 50 and 200~GeV (from bottom to top).\n\t\t}\n\t\t\\label{fig:dsig_dxipt}\n\t\\end{center}\n\\end{figure}\n\nRather small transverse momenta enter such a calculation.\nTo illustrate this, in Fig.~\\ref{fig:dsig_dxipt} we show the distribution\nin $\\log_{10}(p_T)$. As seen from the figure, the cross section is\nintegrable and poses no problem for our Monte Carlo\nroutine \\cite{Lepage:1977sw}.\n\n\n\\section{Electromagnetic interaction effects} \\label{Sect.III}\n\nThe spectator systems are modeled as two uniform spheres in \ntheir respective rest frames, which turn into disks in the \noverall center-of-mass collision frame.\nThe total charge of each nucleus is 82, consistent with UPC. \nThe lepton emission region is reduced to a single point and the time of emission is a free parameter. 
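\nThe flattening of the spectator spheres into disks is governed by the Lorentz factor of each beam in the overall center-of-mass frame; a minimal numerical illustration (assuming $m_N \\simeq 0.938$~GeV and, purely for orientation, a Pb radius of about 7~fm) is:\n\\begin{verbatim}\nimport math\n\nM_N  = 0.938  # nucleon mass in GeV (assumed)\nR_PB = 7.0    # illustrative Pb radius in fm (assumed)\n\nfor sqrt_snn in (17.3, 50.0, 200.0):\n    gamma = sqrt_snn / (2.0 * M_N)          # Lorentz factor of each beam\n    beta  = math.sqrt(1.0 - 1.0 / gamma**2) # beam velocity in units of c\n    thickness = 2.0 * R_PB / gamma          # contracted longitudinal size (fm)\n    print(sqrt_snn, round(gamma, 1), round(beta, 5), round(thickness, 2))\n\\end{verbatim}\nFor $\\sqrt{s_{NN}}=17.3$, 50 and 200~GeV this gives $\\gamma \\approx 9$, 27 and 107, i.e. with the assumed radius the spectator ``disks'' are already thinner than 2~fm at the SPS energy.\n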
\n\\begin{figure*}[!hbt]\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_000_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_0150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (a) ($b_x$=0, $b_y$=0) \\hspace{2.cm} (b) ($b_x$=0, $b_y$=15~fm) \\hspace{2.5cm} (c) ($b_x$=15~fm, $b_y$=0)} \\\\\n\\vspace{4.55cm}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_full_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_0m150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{ym_pt_m1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (d) full ($b_x$, $b_y$) space \\hspace{2.cm} (e) ($b_x$=0, $b_y$=-15~fm) \\hspace{2.cm} (f) ($b_x$=-15~fm, $b_y$=0)} \\\\\n\\vspace{5.cm}\n\\caption{(Color on-line) Rapidity vs $p_T$ distributions for final (subjected to EM\n effects) electrons for different emission points: \n(a) (0, 0) and (d) full xy plane; (b)\n(0, 15~fm); (c) (15~fm, 0); (e) (0, -15~fm) and (f) (-15~fm, 0).\nThese results are for $\\sqrt{s_{NN}}$ = 17.3~GeV.}\\label{fig03}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_000_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_0150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.cm} {\\bf (a) ($b_x$=0, $b_y$=0) \\hspace{3.cm} (b) ($b_x$=0,$b_y$=15~fm) \\hspace{2.5cm} (c) ($b_x$=15~fm, $b_y$=0)} \\\\\n\\vspace{4.55cm}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_full_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_0m150_17GeV_b14fm.png}\n\\includegraphics[width=5.8cm,height=5.4cm]{yp_pt_m1500_17GeV_b14fm.png}\\\\\n\\vspace{-5.3cm}\\hspace{0.6cm} {\\bf (d) full ($b_x$, $b_y$) space \\hspace{3.0cm} (e) ($b_x$=0, $b_y$=-15~fm) \\hspace{2.6cm} (f) ($b_x$=-15~fm, $b_y$=0)} \\hspace{0cm}\\\\\n\\vspace{5.cm}\n\\caption{(Color on-line) Rapidity vs $p_T$ distributions for final (subjected to EM \neffects) positrons for different emission points: \n(a) (0, 0) and (d) full xy plane; (b) (0, 15~fm); (c) (15~fm, 0);\n(e) (0, -15~fm) and (f) (-15~fm, 0).\nThese results are for $\\sqrt{s_{NN}}$ = 17.3~GeV.}\\label{fig04}\n\\end{figure*}\n\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_17GeV.png}\n\\hspace{0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_17GeV_b50fm.png}\\\\\n\\vspace{-1.5cm}\\hspace{0.6cm} {\\bf (a) $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{5.2cm} (b) $\\sqrt{s_{NN}}$= 17.3~GeV\\\\\n\\vspace{-1.5cm}\\hspace{0.cm} b=14~fm ($b_x$=0,$b_y$=0) \\hspace{6.3cm} b=50~fm}\\\\\n\\vspace{1.7cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_50GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_0_0_0_200GeV.png}\\\\\n\\vspace{-1.5cm}\\hspace{0.2cm}{\\bf(c)\\hspace{0.5cm} $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.2cm} (d)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{-1.5cm}\\hspace{0.2cm} b=14~fm \\hspace{7.3cm} b=14~fm}\\\\\n\\vspace{2.3cm}\n\\caption{(Color on-line) Reduced rapidity distributions for final electrons (blue) and\n positrons (red) for fixed $b$ and ($b_x$=0,$b_y$=0) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.\n}\n \\label{fig05a}\n\\end{figure*}\n\n\nIn this work we assume there is no delay time between collisions of\nnuclei and the start of the EM interactions. 
\nThe $z$-dependence of the first occurrence of the $e^+ e^-$ pair is\nbeyond the EPA and is currently not known.\nIn our opinion production of $e^+ e^-$ happens when the moving\ncones, fronts of the EM fields, cross each other. This happens\nfor $z \\approx$ 0. \nIn the following we assume $z$ = 0 for simplicity.\n\\footnote{Any other distribution could be taken.}\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_17GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_17GeV_b50fm.png}\\\\\n\\vspace{-5.5cm}\\hspace{0cm}{\\bf (a)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{4.cm} (b)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 17.3~GeV\\\\\n\\vspace{2.5cm}\\hspace{-8cm} full ($b_x$,$b_y$) space \\hspace{12.5cm} \\\\\n\\vspace{0.5cm}\\hspace{0cm} b=14~fm \\hspace{7.5cm} b=50~fm}\\\\\n\\vspace{0.9cm}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_50GeV.png}\n\\includegraphics[width=8.6cm,height=6.cm]{yf_m_p_full_200GeV.png}\\\\\n\\vspace{-5.5cm}{\\bf (c)\\hspace{0.8cm} $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.0cm} (d)\\hspace{1.cm} $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{3.5cm}\\hspace{0cm} b=14~fm \\hspace{7.5cm} b=14~fm}\\\\\n\\vspace{1.3cm}\n\\caption{(Color on-line) Reduced rapidity distributions for final electrons (blue) \nand positrons (red) for fixed $b$ and full ($b_x$,$b_y$) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.}\n \\label{fig05b}\n\\end{figure*}\n\n\nThe trajectories of $e^{\\pm}$ in the field of moving nuclei are obtained\nby solving the equation of motion numerically for electrons\/positrons:\n\\begin{equation}\n\\frac{d \\vec{p}_{e^{\\pm}}}{d t} =\n\\vec{F}_{1,e^{\\pm}}(\\vec{r}_1,t) + \\vec{F}_{2,e^{\\pm}}(\\vec{r}_2,t) \\; .\n\\label{equation_of_motion}\n\\end{equation}\n\n\nThe total interaction is a superposition of interactions with both\nnuclei which positions depend on time.\nWe solve the motion of electron\/positron in the overall center \nof mass system, i.e. both position and time are given in this frame.\nIn this frame we have to deal with both electric and magnetic force\n\\cite{Rybicki:2006qm}.\nBecause nuclei are very heavy compared to electrons\/positron\ntheir motion is completely independent and is practically not distorted\nby the EM interaction.\nWe take:\n\\begin{eqnarray}\n\\vec{r}_1(t) = + {\\hat z} c t + \\vec{b}\/2 \\; , \\nonumber \\\\\n\\vec{r}_2(t) = - {\\hat z} c t - \\vec{b}\/2 \\; ,\n\\end{eqnarray} \ni.e. assume that the nuclei move along straight trajectories\nindependent of the motion of electron\/positron.\nThe step of integration depends on energy and must be carefully adjusted.\n\nThe rapidity vs $p_T$ distributions of initial leptons are \nobtained by randomly choosing the position on the two-dimensional space. \nThe path of particle in electromagnetic field generated by nuclei\nare traced up to 10 000~fm away from the original interaction point. \nThe Monte Carlo method is used to randomize the \ninitial rapidity and $p_T$ from uniform distribution. \nThe initial rapidity and $p_T$ of electrons\/positrons are randomly\nchosen in the range: $y$=(-5,5) and $p_T$=(0, 0.1)~GeV as fixed\nin the previous section. The ($y,p_T$) distributions for final\n(subjected to the EM evolution) leptons\nare presented in Fig.~\\ref{fig03} for electrons and in Fig.~\\ref{fig04} \nfor positrons. 
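\nTo make the propagation step concrete, a minimal sketch is given below. It treats each spectator as a point charge $Ze$ on a straight-line trajectory, uses the boosted Coulomb field with $\\vec{B}=\\vec{\\beta}\\times\\vec{E}$ ($c=1$), and a fixed integration step; the point-charge field, the step size and all function names are illustrative assumptions only (the actual calculation uses uniformly charged spheres and a carefully adjusted step).\n\\begin{verbatim}\nimport numpy as np\n\nHBARC = 0.1973      # GeV fm\nALPHA = 1.0 / 137.036\nM_E   = 0.000511    # electron mass in GeV\nZ     = 82          # spectator charge (Pb)\n\ndef field(r, r_nuc, beta_z):\n    # E, B (GeV/fm per unit lepton charge) of a point charge Z e\n    # moving with constant velocity beta_z along z; r, r_nuc in fm\n    gamma = 1.0 / np.sqrt(1.0 - beta_z**2)\n    R = r - r_nuc\n    denom = (R[0]**2 + R[1]**2 + (gamma * R[2])**2) ** 1.5 + 1e-12\n    E = Z * ALPHA * HBARC * gamma * R / denom\n    B = np.cross([0.0, 0.0, beta_z], E)     # B = beta x E  (c = 1)\n    return E, B\n\ndef propagate(charge, y0, pT0, phi0, r0, sqrt_snn, b, dt=0.05, r_max=1.0e4):\n    # charge = +1 (e+) or -1 (e-); (y0, pT0, phi0) fix the initial momentum;\n    # r0 = (b_x, b_y, 0) in fm is the emission point; dt in fm/c\n    mT = np.sqrt(pT0**2 + M_E**2)\n    p = np.array([pT0*np.cos(phi0), pT0*np.sin(phi0), mT*np.sinh(y0)])\n    r = np.array(r0, dtype=float)\n    gamma_cm = sqrt_snn / (2.0 * 0.938)\n    beta = np.sqrt(1.0 - 1.0 / gamma_cm**2)\n    t = 0.0\n    e_lep = np.sqrt(p @ p + M_E**2)\n    while np.linalg.norm(r) < r_max:\n        r1 = np.array([ b/2.0, 0.0,  beta*t])   # spectator 1\n        r2 = np.array([-b/2.0, 0.0, -beta*t])   # spectator 2\n        E1, B1 = field(r, r1,  beta)\n        E2, B2 = field(r, r2, -beta)\n        v = p / e_lep\n        F = charge * (E1 + E2 + np.cross(v, B1) + np.cross(v, B2))\n        p = p + F * dt                          # dp/dt = F_1 + F_2\n        e_lep = np.sqrt(p @ p + M_E**2)\n        r = r + (p / e_lep) * dt                # dr/dt = p/E\n        t += dt\n    y = 0.5 * np.log((e_lep + p[2]) / (e_lep - p[2]))\n    return y, np.hypot(p[0], p[1])              # final rapidity and p_T\n\n# one positron emitted at the CM point with y = 0.5, p_T = 0.01 GeV:\n# print(propagate(+1, 0.5, 0.01, 0.0, (0.0, 0.0, 0.0), 17.3, 14.0))\n\\end{verbatim}\n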
These distributions were obtained by analyzing \nthe EM evolution event-by-event.\nThe number of events taken here is $n_{event}$ = 10$^7$ \nfor each two-dimensional plot.\n\nFor electron production an enhancement and for positron a loss with\nrespect to the neighborhood or\/and flat initial distribution\nis observed for $y \\approx \\pm$ 3. This corresponds to the beam rapidity\nat $\\sqrt{s_{NN}}$= 17.3~GeV energy.\n\nThese two sets of two-dimensional plots illustrate \neffect of the EM interaction between $e^+$ or $e^-$ and the moving nuclei. \nThe motion of particles in the EM field of nuclei changes \nthe initial conditions and the final ($y,p_T$) are slightly different.\nFig.~\\ref{fig03} shows the behavior of electrons and \nFig.~\\ref{fig04} of positrons at CM (a) and \ndifferent impact parameter points: (panels c,f) ($\\pm$15~fm,0) and \n(panels b,e) (0,$\\pm$15~fm) marked in Fig.~\\ref{fig01}. \nWe observe that the maximal number of electrons is located \nwhere the cross section for positrons has minimum. The Coulomb effects are well visible as \na missing areas for positrons for $p_T<0.02$~GeV for particles\nemitted from the CM point. The emission of leptons from $b_y$=$\\pm$15~fm gives\nlower effect. The asymmetry in emission is well visible for\n$b_x$=$\\pm$15~fm, \nwhere a larger empty space is for positive rapidity when $b_x$=15~fm and \nfor negative rapidity when $b_x$=-15~fm. \n\nAlthough the EM effects are noticeable for \nelectrons\/positrons in different impact parameter points, \nthe integration over full reaction plane washes out almost totally the effect.\n\nFor comparison the results of integration over full space \n$b_x$=(-50~fm, 50~fm) and $b_y$=(-50~fm, 50~fm) are shown \nin panels (d) of Figs.~\\ref{fig03} and \\ref{fig04}. \nThese results are independent of the source of leptons, thus it could \nbe treated as a general trend and an indication for which \nrapidity-transverse momentum ranges one could observe effects of the EM\ninteraction between leptons and nuclei.\n\\begin{figure*}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_17GeVa.png}\\hspace{-0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_17GeV_b50fma.png}\\\\\n\\vspace{-5.0cm}\\hspace{-2cm}{\\bf (a) $\\sqrt{s_{NN}}$= 17.3~GeV \\hspace{4.9cm} (b) $\\sqrt{s_{NN}}$= 17.3GeV\\\\\n\\vspace{1.5cm}\\hspace{-1.0cm} b=14~fm \\hspace{6.5cm} b=50~fm}\\\\\n\\vspace{2.4cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_50GeVa.png}\\hspace{-0.cm}\n\\includegraphics[width=8.6cm,height=6.cm]{pTf_m_p_full_200GeVa.png}\\\\\n\\vspace{-5.4cm}\\hspace{-2cm}{\\bf(c) $\\sqrt{s_{NN}}$= 50~GeV \\hspace{4.9cm} (d) $\\sqrt{s_{NN}}$= 200~GeV\\\\\n\\vspace{1.5cm}\\hspace{-1.0cm} b=14~fm \\hspace{6.5cm} b=14~fm}\\\\\n\\vspace{3.5cm}\n\\caption{(Color on-line) Transverse momentum distributions for final electrons (blue) \nand positrons (red) for fixed $b$ and full ($b_x$, $b_y$) plane of\nemission points at three collision energies: $\\sqrt{s_{NN}}$=17.3, 50 and 200~GeV.}\n \\label{fig05c}\n\\end{figure*}\n\nThe distribution in reduced rapidity of final leptons are shown \nin Fig.~\\ref{fig05a} and \\ref{fig05b}. 
The reduced rapidity\n(dimensionless quantity) is the rapidity $y$ normalized to the beam rapidity $y_{beam}$\n(different for various collision energies)\n\\begin{equation}\ny_{red} = y \/ y_{beam} \\; ,\n\\end{equation}\nwhere\n\\begin{equation}\ny_{beam} = \\pm ln \\left( \\frac{\\sqrt{s_{NN}}}{m_p} \\right)\n\\end{equation}\nand $m_p$ is the proton mass.\nThe results shown above were obtained, somewhat arbitrarily, with\nuniform distribution in $(y,p_T)$.\nThis leads to the observation of peaks or dips at beam rapidities.\nNo such peaks appear for $\\sqrt{s}$ = 200~GeV as here the chosen range\nof rapidity (-5,5) is not sufficient.\nWhether such effects survive when weighting with the b-space EPA\ncross section will be discussed below.\n\nFig.~\\ref{fig05a} is focused on emission from the center of mass point \nand Fig.~\\ref{fig05b} is obtained when integrating over full ($b_x,b_y$) plane. \nThe main differences between electron (blue lines) and \npositron (red lines) distributions are not only at midrapidities \nbut also around the beam rapidity. The effect is more visible \nfor the CM emission point but it is slightly smoothed out when \nthe full $(b_x, b_y)$ plane is taken into consideration. \nMoreover increasing the impact parameter (panels (a) and (b)) \ndiminishes the difference between rapidity distributions of final \nelectrons and positrons. The beam energy is another crucial parameter. \nThe collision with $\\sqrt{s_{NN}} > 100$~GeV (panel (d)) does not \nallow for sizeable effects of electromagnetic interaction between \nleptons and nuclei, at least at midrapidities.\n\nThe discussion of the EM interaction between $e^+e^-$ and nuclei has \nto be completed by combining with the cross section of lepton production\nas obtained within EPA.\nTaking into account the leptons coming from photon-photon fusion \nthe distributions from Fig.~\\ref{fig03} and \\ref{fig04} are multiplied \nby differential cross section obtained with Eq.(~\\ref{EPA}).\n \nThe details of the method are presented in Ref.~\\cite{Rybicki:2006qm} \nand adapted here from pion emission to electron\/positron emission.\n\\begin{figure*}[!hbt]\n \\begin{center}\n \\includegraphics[scale=0.60]{y_p_m_000_EM_pure_weighta.pdf}\n \\includegraphics[scale=0.60]{y_p_m_full_EM_pure_weighta.pdf}\n \\caption{(Color on-line)\n The electron and positron emission cross section normalized to 100\\% \n in the $^{208}$Pb+$^{208}$Pb reaction at 158~GeV\/nucleon energy \n($\\sqrt{{s}_{NN}} =$ 17.3~GeV) at impact parameter 14$\\pm$0.05~fm\nassuming the $p_T$=(0, 0.1)~GeV produced in the center of mass (0, 0)\npoint (a) and when integrating over full reaction space (b).\nShown are original EPA distributions (dotted line) and results when\nincluding evolution in the EM field of nuclei for positrons (solid line)\nand for electrons (dashed line).\n}\n \\label{fig07}\n \\end{center}\n\\end{figure*}\nIn Fig.~\\ref{fig05c} we show the influence of EM interaction on $p_T$ distributions.\nHere we integrate over rapidity and ($b_x,b_y$). One can observe that the EM effects lead to a diffusion of transverse momenta \n(see the diffused edge at $p_T=0.1$~GeV, marked by green vertical line). No spectacular effect is observed when changing the \nimpact parameter or beam energy.\n\nFig.~\\ref{fig07} shows a comparison of rapidity distribution of \nfinal electrons and positrons, assuming the particles are emitted \nfrom (a) the center of mass (0, 0) point and (b) when integrating over \n$b_x,b_y$. 
\nThe comparison is done between EPA distribution relevant for initial stage (red, dotted line) with the final stage, resulting from the EM interaction of charged leptons with positively charged nuclei. \nIf leptons are produced in the CM point, the electron\ndistributions are almost unchanged but positron distributions \nare squeezed to $|y| < 2$. \nIf the cross section is integrated over full ($b_x,b_y$) parameter space, \nthe positron distribution is still steeper than that for electrons \nbut mainly for $|y| <$ 2. \n\n\n\nEven when the leptons produced in the full ($b_x,b_y$) plane are considered, the\n$e^+$ and $e^-$ distributions are different from the initial ones. The electrons under the EM interactions are focused at midrapidities.\n\n\n\\begin{figure*}[!hbt]\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_17GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_17GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_17GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_17GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 17.3\n ~GeV at b=14~fm\n integrated over ($b_x,b_y$)=(-50~fm, 50~fm),\n $p_T^{ini}$=(0, 0.1~GeV)}\n \\label{fig08}\n\\end{figure*}\n\\begin{figure*}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_17GeV_50fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_17GeV_50fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_17GeV_50fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_17GeV_50fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 17.3\n ~GeV at b=50~fm \n integrated over ($b_x,b_y$)=(-100~fm, 100~fm), \n $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig09}\n\\end{figure*}\n\\begin{figure*}[!hbt]\n\\includegraphics[width=7.8cm,height=6.cm]{bx_ym_200GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{bx_yp_200GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (a) \\hspace{7.0cm} (b)}\\\\\n\\vspace{5.5cm}\n\\includegraphics[width=7.8cm,height=6.cm]{by_ym_200GeV_14fm.png}\n\\includegraphics[width=7.8cm,height=6.cm]{by_yp_200GeV_14fm.png}\\\\\n\\vspace{-6.0cm}{\\bf\\hspace{-5cm} (c) \\hspace{7.0cm} (d)}\\\\\n\\vspace{5.5cm}\n\\caption{ Distribution of electrons ((a), (c)) and positrons ((b), (d)) for $\\sqrt{s_{NN}}$= 200\n ~GeV at b=14~fm \nintegrated over ($b_x,b_y$)=(-50~fm, 50~fm) for $p_T^{ini}$=(0, 0.1~GeV)}\n \\label{fig11}\n\\end{figure*}\n\nThe dependence on the position of emission and final rapidity allows to \nunderstand how the geometry influences the electromagnetic interaction between \nleptons and nuclei. Figures \\ref{fig08},\\ref{fig09} \nand \\ref{fig11} present the cross section distribution in \n$b_x$ (top rows) and rapidity for electrons (a) and positrons (b) and (bottom rows)\n$b_y$ and rapidity for electrons (c) and positrons (d).\nFigs.~\\ref{fig08} and \\ref{fig09} are for $\\sqrt{s_{NN}}$= 17.3~GeV but with \nthe impact parameter 14$\\pm$0.05 and 50$\\pm$0.05~fm.\nFig.~\\ref{fig11} is for $\\sqrt{s_{NN}}$= 200~GeV and\nb=14~fm. \nThese plots allow to investigate the anisotropy caused by the\ninteraction between leptons and nuclei. 
It is more visible for larger \nimpact parameter when the spectators are well separated (Fig.~\\ref{fig09}). \n\\begin{figure*}\n\\includegraphics[width=7.8cm,height=6.cm]{y_m_full_x_weight_17_50_200.pdf}\n\\includegraphics[width=7.8cm,height=6.cm]{y_p_full_x_weight_17_50_200.pdf}\n\\caption{Rapidity distribution of electrons for $\\sqrt{s_{NN}}$= 17.3~GeV (b=14~fm,\n 50~fm) and 50~GeV and 200~GeV with b=14$\\pm$0.05~fm (only) integrated over \n($b_x,b_y$)=(-50~fm, 50~fm), $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig13}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[width=7.8cm,height=6.cm]{y_pm_full_x_weight_17_50_200.pdf}\n\\caption{The ratio of rapidity distributions of positrons and electrons for \n $\\sqrt{s_{NN}}$= 17.3~GeV and fixed b=14~fm and 50~fm \n and $\\sqrt{s_{NN}}$ = 50~GeV and 200~GeV with fixed b=14~fm \n integrated over ($b_x,b_y$)=(-50~fm, 50~fm) and transverse momenta in the interval \n $p_T^{ini}$=(0, 0.1~GeV).}\n \\label{fig14}\n\\end{figure}\n\nDistributions for fixed impact parameter and different beam energies \nreflect the behavior seen in Fig.~\\ref{fig05b}. For collision with \n$\\sqrt{s_{NN}}$= 200~GeV (Fig.~\\ref{fig11})\nthe electrons and positrons almost do not feel the presence of the EM\nfields of the nuclei. \n\nIntegrating over full ($b_x,b_y$) plane one obtains \nthe rapidity distribution of final leptons\nshown in Fig.~\\ref{fig13} (a) separately for electrons and (b) positrons, \nfor two impact parameters: 14~fm (full lines) and 50~fm (dashed lines). \nElectrons have a somewhat wider distribution than positrons and \nthis is independent of the impact parameter.\nWhile positron rapidity distributions only weakly depend on collision energy it is not the case for electrons, where sizeable differences can be observed.\n\nThe ratio of distributions for positrons and electrons (Fig.~\\ref{fig14}) reflects the behavior seen in Figs.~\\ref{fig08} and \\ref{fig09}. \nThis plot shows a combined effect of EPA cross section and \nthe EM interactions of leptons with nuclei. Thus despite the production \ncross section is lower for larger impact parameter, the discussed\nphenomenon should be visible for larger rapidities. \nThe ratio quickly changes with energy and tends to 1 for larger energies (see dotted line for $\\sqrt{{s}_{NN}}$ = 200~GeV).\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn the last 15 years the electromagnetic effects due to a large (moving)\ncharge of the spectators on charged pion momentum distributions were \nobserved both in theoretical calculations and experimentally \nin peripheral heavy ion collisions at SPS and RHIC energies.\nInteresting and sometimes spectacular effects were identified.\n\nIn the present paper we have discussed whether such effects could also\nbe observed for the distributions of electrons\/positrons produced\nvia photon-photon fusion in heavy ion UPC.\nThe corresponding cross section can be rather reliably calculated\nand turned out to be large, especially for low transverse\nmomentum electrons\/positrons. 
The impact parameter equivalent photon\napproximation is well suited to investigating the electromagnetic\neffects.\nOn the experimental side, only rather large transverse momentum\nelectrons\/positrons could be measured so far at RHIC and the LHC, \ntypically larger than 0.5~GeV.\n\nWe have organized the calculations so that they include the EM effects, using EPA distributions as input.\nFirst, multidifferential (in momenta and impact parameter) distributions \nfor the diphoton production of $e^+ e^-$ pairs are prepared in \nthe impact parameter equivalent photon approximation. \nThese distributions are then used to calculate the propagation of \nthe electrons\/positrons in the strong EM fields generated by \nthe quickly moving nuclei.\nThe propagation has been done by numerically solving the relativistic \nequation of motion. Strong EM effects have been observed only at \nvery small transverse momenta of the electron\/positron. Therefore, \nto accelerate the calculations, we have limited ourselves to small initial \ntransverse momenta $p_T <$ 0.1~GeV.\n\n\nThe shape of the differential cross section in rapidity and transverse\nmomentum does not depend on the energy of the process but rather \non the emission point in the impact parameter plane ($b_x,b_y$). \nWe have investigated the effects for different initial conditions, \ni.e. different emission positions in the impact parameter space.\n\nThe leptons interact electromagnetically with the charged nuclei, which \nchanges their trajectories. The biggest effect has been identified for \nthe CM emission point. \nHowever, the integration over the full ($b_x,b_y$) plane washes out this effect to a large extent. \nThe range $p_T$=(0, 0.02~GeV) has turned out to be the most suitable \nfor investigating the influence of the EM effects on leptons originating from various \npoints of the ($b_x,b_y$) plane.\n\nMoreover, the impact parameter influences not only the value of \nthe cross section but also the shapes of the distributions of the final leptons. \n\nThe $AA \\rightarrow AAe^+e^-$ process creates leptons in a broad\nrange of rapidities. We have found that sizeable EM effects can be observed only at small\ntransverse momenta of the electrons\/positrons.\n \nThe performed calculations allow us to conclude that the maximal \nbeam energy for Pb+Pb collisions at which the EM effects between leptons\nand nuclei are evident at midrapidities is probably \n$\\sqrt{{s}_{NN}}$=100~GeV.\nObservation of the effect at higher energies may therefore be \nrather difficult, if not impossible.\nThe effect survives even at high energies, but only close to beam rapidity. \nHowever, this region of phase space is usually not instrumented\nand does not allow electron\/positron measurements.\n\nSo far the effect of the EM interaction was studied for fixed values of \nthe impact parameter (mainly for $b$ = 14~fm). However, the impact parameter\ncannot be measured.\nThe integration over the impact parameter is rather difficult and\ngoes beyond the scope of the present paper.\nSuch an integration will be studied elsewhere.\n\nOn the experimental side, a good measurement of electrons and positrons\nat low transverse momenta ($p_T <$ 0.1~GeV) is necessary to see \nthe effect.\nIn principle, the measurement of the $e^+ \/ e^-$ ratio as a function\nof lepton rapidity and transverse momentum would be useful in \nthis context.\nTo our knowledge, there are definite plans only for\nhigh energies (the ALICE-3 project) where, however, the EM effect should\nbe very small (high collision energy). 
At CERN SPS energies the effect\nis rather large but at very small transverse momenta.\nRHIC would probably be a good place to observe\nthe effect of the discussed here EM interactions but this would\nrequire a modification of the present apparatus.\n\n{\\bf Acknowledgement}\n\nA.S. is indebted to Andrzej Rybicki for past collaboration on\nelectromagnetic effects in heavy ion collisions.\nThis work is partially supported by\nthe Polish National Science Centre under Grant\nNo. 2018\/31\/B\/ST2\/03537\nand by the Center for Innovation and Transfer of Natural Sciences \nand Engineering Knowledge in Rzesz\\'ow (Poland).\n\n\n\n\\bibliographystyle{spphys} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\n\\label{sec:appendix}\n\\begin{proof}[Proof of Lemma \\ref{lem:wtforg}]\nTo construct the weight function $w$, we associate a sequence $P(u,v)$ of edges of $G'$ with each edge $(u,v)$ of $G$. Assume that $T'$ is a rooted tree, and root is the highest node in the tree. The heights of all the other nodes are one less than that of their parent. As we mentioned that $T'$ is a tree decomposition of $G$. For an edge $(u,v)$, we know that there are unique highest bags $B_1$ and $B_2$ that contain vertices $u$ and $v$, respectively.\n\\begin{itemize}\n\\item If $B_1 = B_2$ then $P(u,v) = (u_{B_1} , v_{B_2})$.\n\n\\item If $B_1$ is an ancestor of $B_2$ then \\\\$P(u,v) = (u_{B_1}, u_{parent(\\cdots (parent(B_2)))}), \\ldots ,(u_{parent(B_2)}, u_{B_2}),(u_{B_2},v_{B_2})$.\n\n\\item If $B_1$ is a descendant of $B_2$ then \\\\ $P(u,v) = (u_{B_1}, v_{B_1}), (v_{B_1}, v_{parent(B_1)}), \\ldots , (v_{parent(\\cdots (parent(B_1)))},v_{B_2})$.\n\\end{itemize}\n\nThe weight function $w$ for the graph $G$ is defined as follows:\n\\begin{eqnarray*}\nw(u,v) = \\sum_{e\\in P(u,v)} w'(e)\n\\end{eqnarray*}\n\n\nFor a simple cycle $C = e_1,e_2, \\ldots ,e_j$ in $G$, we define $P(C) = P(e_1),P(e_2), \\ldots ,P(e_j)$. Note that $P(C)$ is a closed walk in $G'$. Let $E'_d(C)$ be the subset of edges of $G'$ such for all edges $e \\in E'_d(C)$ both $e$ and $e^{r}$ appear in $P(C)$, where $e^r$ denotes the edge obtained by reversing the direction of $e$. We prove that if we remove the edges of $E'_d(C)$ from $P(C)$ then the remaining edges $P(C) - E'_d(C)$ form a simple cycle in $G'$. \n\n\\begin{claim}\n\\label{clm:simp}\nEdges in the set $P(C)- E'_d(C)$ form a simple cycle in $G'$. \n\\end{claim}\n\n\\begin{proof}\nNote that the lemma follows trivially if $P(C)$ is a simple cycle. Therefore, assume that $P(C)$ is not a simple cycle. We start traversing the walk $P(C)$ starting from the edges of the sequence $P(e_1)$. Let $P(e_k)$ be the first place where a vertex in the walk $ P(e_1) P(e_2).....P(e_k)$ repeats, i.e., edges in the sequence $P(e_1) P(e_2).....P(e_{k-1})$ form a simple path, but after adding the edges of $P(e_k)$ some vertices are visited twice in the walk $P(e_1) P(e_2).....P(e_{k-1})P(e_k)$ for some $k \\leq j$. This implies that some vertices are visited twice in the sequence $P(e_{k-1})P(e_k)$. Let $e_{k-1}=(u,v)$ and $e_k=(v,x)$. This implies that some copies of the vertex $v$ appear twice in the sequence $P(u,v)P(v,x)$. Let $B_1$ and $B_2$ be the highest bags such that $B_1$ contains the copies of vertices $u$ and $v$, and $B_2$ contains the copies of $v$ and $x$. Let bag $B$ be the lowest common ancestor of $B_1$ and $B_2$. We know that $B$ must contain a copy of the vertex $v$, i.e., $v_B$. Let $B'$ be the highest bag containing a copy of vertex $v$, i.e., $v_{B'}$. 
First, consider the case when neither $B_1$ is an ancestor of $B_2$ nor vice-versa; the other cases can be handled similarly. In that case the sequence $P(u,v) = u_{B_1}v_{B_1} v_{parent(B_1)} \\ldots v_{B} \\ldots v_{B'}$ and $P(v,x) = v_{B'} \\ldots v_{B} \\ldots v_{parent(B_2)} v_{B_2}x_{B_2}$. Note that in $P(u,v)$ a path goes from $v_B$ to $v_{B'}$ and the same path appears in reverse order from $v_{B'}$ to $v_{B}$ in the sequence $P(v,x)$. Therefore, if we remove these two paths from $P(u,v)$ and $P(v,x)$, the remaining subsequence of the sequence $P(u,v)P(v,x)$ will be a simple path, i.e., no vertex will appear twice, since $B$ is the lowest common ancestor of $B_1$ and $B_2$. Now repeat this procedure for $P(e_{k+1}), P(e_{k+2}), \\ldots$ and so on till $P(e_j)$. In the end, we will obtain a simple cycle.\n\\end{proof}\n\n\nSince we assumed that the weight function $w'$ is skew-symmetric, we know that $w'(e) = -w'(e^{r})$, for all $e \\in G'$. This implies that $w'(E'_d(C)) = 0 $. Therefore $w(C) = w'(P(C)) = w'(P(C) - E'_d(C))$. From Claim \\ref{clm:simp} we know that the edges in the set $P(C) - E'_d(C)$ form a simple cycle, and we assumed that $w'$ gives nonzero circulation to every simple cycle; therefore, $w'(P(C) - E'_d(C)) \\neq 0$. This implies that $w(C) \\neq 0$. This finishes the proof of Lemma \\ref{lem:wtforg}.\n\\end{proof}\n\n\\begin{proof}[Proof of Claim \\ref{clm:connect}]\nNote that if we treat each vertex in the bags of $T'$ distinctly, then there is a one-to-one correspondence between the vertices of $G'$ and the vertices in the bags of $T'$. Therefore, in $T'$ a vertex of $G'$ is identified with its corresponding vertex. Note that all the bags which contain vertices of the cycle $C$ form a connected component in $T'$. We will now prove that if a bag $B$ contains some vertices of $C$, then either $B$ has some edges of $C$ associated with it or no bag in the subtree rooted at $B$ has any edge of $C$ associated with it. From this, we can conclude that the bags which have some edges of $C$ associated with them form a connected component in $T'$.\n\nAssume that $B$ is a bag which contains a vertex of $C$ but no edge of $C$ is associated with it. This implies that $C$ never enters any of the children of $B$. To see this, assume that it enters some child $B'$ of $B$ through some vertex $v_{B'}$ of $B'$. In that case, there will be an edge $(v_{B},v_{B'})$ of $C$ associated with the bag $B$, which is a contradiction. Therefore the subtree rooted at $B$ will not have any edge of the cycle $C$ associated with it. This finishes the proof.\n\\end{proof}\n\\section{Weight function}\n\\label{sec:wtfn}\nIn order to construct the desired weight function for a given graph $G_0 \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$, we modify the component tree $T_0$ of $G_0$ such that it has the following properties.\n\\begin{itemize}\n\\item No two separating sets share a common vertex.\n\\item A separating set is shared by at most two components.\n\\item Any virtual triangle, i.e., a triangle consisting of virtual edges, in a planar component is always a face.\n\\end{itemize}\n\nLet $T$ be this modified component tree, and let $G$ be the graph represented by $T$. We show that if we have a weight function that gives nonzero circulation to every cycle in $G$, then we can obtain a weight function that gives nonzero circulation to all the cycles in $G_0$.\nArora et al. \\cite{AGGT16} showed how a component tree satisfying these properties can be obtained for $K_{3,3}$-free and $K_5$-free graphs. 
We give a similar construction below and show that we can modify the components of $T_0$ such that $T$ satisfies the above properties (see Section \\ref{sec:modt}). Note that if the graphs inside two nodes of $T_0$ share a separating set $\\tau$ and they both are constant tree-width graphs, then we can take the clique-sum of these two graphs on the vertices of $\\tau$, and the resulting graph will also be a constant tree-width graph. Therefore, we can assume that if two components share a separating set, then either both of them are planar, or one of them is planar and the other is of constant tree-width.\n\n\\subsection{Modifying the Component Tree}\n\\label{sec:modt}\nIn this section, we show that how we obtain the component tree $T$ from $T_0$ so that it satisfies the above three properties.\n\\subparagraph{(i) No two separating sets share a common vertex:}\nFor a node $D$ in $T_0$, let $G_D$ be the graph inside node $D$. Assume that $G_D$ contains a vertex $v$ which is shared by separating sets $\\tau_1,\\tau_2, \\ldots ,\\tau_k$, where $k>1$, present in $G_D$. We replace the vertex $v$ with a gadget $\\gamma$ defined as follows: $\\gamma $ is a star graph such that $v$ is the center node and $v_1,,v_2, \\ldots ,v_k$ are the leaf nodes of $\\gamma$. The edges which were incident on $v$ and had their other endpoints in $\\tau_i$, will now incident on $v_i$ for all $i \\in[k]$. All the other edges which were incident on $v$ will continue to be incident on $v$. We do this for each vertex which is shared by more than one separating set in $G_{D}$. Let $G_{D'}$ be the graph obtained after replacing each such vertex with gadget $\\gamma$. It is easy to see that if $G_D$ was a planar component, then $G_{D'}$ will also be a planar component. We show that the same holds for constant tree-width components as well.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1]{figures\/singlecmg}\n\\caption{(Left)A separating set $\\{x_1,x_2\\}$ is shared by components $D_1,D_2$ and $D_3$. (Right) Replace them by adding the gadget $\\beta$ and connect $D_1,D_2$ and $D_3$ to $\\beta$.}\n\\label{fig:sepset}\n\\end{center}\n\\end{figure}\n\n\\begin{claim}\nIf $G_D$ is a constant treewidth graph, then $G_{D'}$ will also be of constant treewidth.\n\\end{claim}\n\\begin{proof}\nLet $T_D$ be a tree decomposition of $G_D$ such that each bag of $T_D$ is of constant size, i.e., contains some constant number of vertices. Let $v$ be a vertex shared by $k$ separating sets $\\{x_i,y_i,v\\}$, for all $i \\in [k]$ in $G_D$. Let $B_1,B_2, \\ldots B_k$ be the bags in $T_D$ that contain separating sets $\\{x_1,y_1,v\\}, \\{x_2,y_2,v\\} , \\ldots ,\\{x_k,y_k,v\\}$ respectively (note that one bag may contain many separating sets). Now we obtain a tree decomposition $T_{D'}$ of the graph $G_{D'}$ using $T_D$ as follows: add the vertices $v_i$ in the bag $B_i$, for all $i \\in [k]$. Repeat this for each vertex $v$ in $G_D$, which is shared by more than one separating set to obtain $T_{D'}$. Note that in each bag of $T_D$ we add at most one new vertex with respect to each separating set contained in the bag in order to obtain $T_{D'}$. Since each bag in $T_D$ can contain vertices of only constant many separating sets, size of each bag remain constant in $T_{D'}$. 
Also, $T_{D'}$ is a tree decomposition of $G_{D'}$.\n\\end{proof}\n\n\n\\subparagraph{(ii) A separating set is shared by at most two components:} Assume that a separating set of size $t$, $\\tau =\\{x_i\\}_{i \\leq t}$ is shared by $k$ components $D_1,D_2, \\ldots D_k$, for $k>2$, in $T_0$. Let $\\beta$ be a gadget defined as follows: the gadget consists of $t$ star graphs $\\{\\gamma_i\\}_{i \\leq t}$ such that $x_i$ is the center node of $\\gamma_i$ and each $\\gamma_i$ has $k$ leaf nodes $\\{x_i^1,x_i^2, \\ldots x_i^k\\}$. There are virtual cliques present among the vertices $\\{x_i^j\\}_{i \\leq t}$ for all $j \\in [k]$ and among $\\{x_i\\}_{i \\leq t}$ (see Figure \\ref{fig:sepset}). If there is an edge present between any pair of vertices in the set $\\{x_i\\}_{i \\leq t}$ in the original graph, then we add a real edge between respective vertices in $\\beta$. $\\beta$ shares the separating set $\\{x_i^j\\}_{i \\leq t}$ with the component $D_j$ for all $j \\in [k]$.\n\nNote that in this construction, we create new components ($\\beta$) while all the other components in the component tree remain unchanged. Notice that the tree-width of $\\beta$ is constant (at most $5$ to be precise). We can define a tree decomposition of $\\beta$ of tree-width $5$ as follows: $B_0,B_1',B_2', \\ldots ,B_k'$ be the bags in the tree decomposition such that $B_0 = \\{x_1,x_2,x_3\\}$, $B_i'= \\{x_1,x_2,x_3,x^i_1,x^i_2,x^i_3\\}$ and there is an edge from $B_0$ to $B_i'$ for all $i \\in [k]$.\n\n\\subparagraph{(iii) Any virtual triangle, i.e., the triangle consists of virtual edges, in a planar component is always a face:} $3$-cliques in a $3$-clique sum of a planar and a bounded tree-width component is always a face in the planar component. This is because suppose there is a planar component $G_i$ in which the $3$-clique on $u,v,w$ occurs but does not form a face. Then the triangle $u,v,w$ is a separating set in $G_i$, which separates the vertices in its interior $V_1$ from the vertices in its exterior $V_2$. Notice that neither of $V_1,V_2$ is empty by assumption since $u,v,w$ is not a face. However, then we can decompose $G_i$ further.\n\n\\subsection{Preserving nonzero circulation:}\n\\label{subsec:presnzc}\nWe can show that if we replace a vertex with the gadget $\\gamma$, then the nonzero-circulation in the graph remains preserved: let $G_1(V_1,E_1)$ be a graph such that a vertex $v$ in $G_1$ is replaced with the gadget $\\gamma$ (star graph). Let this new graph be $G_2(V_2,E_2)$. We show that if we have a skew-symmetric weight function $w_2$ that gives nonzero circulation to every cycle in $G_2(V_2,\\dvec{E}_2)$, then we can obtain a skew-symmetric weight function $w_1$ that gives nonzero circulation to every cycle in $G_1(V_1,\\dvec{E}_2)$ as follow. Let $u_1,u_2, \\ldots, u_k$ be the neighbors of $v$ in $G_1$. For the sake of simplicity, assume that $v$ is replaced with $\\gamma$ such that $\\gamma$ has only two leaves $v_1$ and $v_2$ and $v$ is the center of $\\gamma$. Now assume that $u_1,u_2, \\ldots ,u_j$ become neighbors of $v_1$ and, $u_{j+1},u_{j+2}, \\ldots , u_k$ become neighbors of $v_2$ in $G_2$, for some $j max(2^{m+2},7)$, where $m$ is the maximum number of edges associated with any constant size component.\n\n\\subparagraph{ If $G'_{B_i}$ is a planar component:}\n$w_1$ for such components is same as the weight function defined in \\cite{BTV09} for planar graphs. 
We know that given a planar graph $G$, its planar embedding can be computed in logspace \\cite{AllenderMahajan04}.\n\n\\begin{theorem}[\\cite{BTV09}]\nGiven a planar embedding of a graph $H$, there exists a logspace computable function $w$ such that for every cycle $C$ of $H$, circulation of the cycle $w(C) \\neq 0$.\n\\end{theorem}\nThe above weight function gives nonzero circulation to every cycle that is completely contained in a planar component.\n\nThe weight function $w_2$ for planar components is defined as follows. $w_2$ assigns weights to only those faces of the component, which are adjacent to some separating set. For a subtree of $T_s$ of $A(T')$, let $l(T_s)$ and $r(T_s)$ denote the number of leaf nodes in $T_s$ and root node of $T_s$, respectively. For a bag $B_i$, $h(B_i)$ denotes the height of the bag in $A(T')$. If $B_i$ is the only bag in the subtree rooted at $B_i$, then each face in $G'_{B_i}$ is assigned weight zero. Otherwise, let $\\tau$ be a separating set where some subtree $T_i$ is attached to $B_i$. The faces adjacent to $\\tau$ in $G'_{B_i}$ are assigned weight $2\\times K^{h(r(T_i))} \\times l(T_i)$. If a face is adjacent to more than one separating set, then the weight assigned to the face is the sum of the weights due to each separating set. The weight of a face is defined as the sum of the weights of the edges of the face in clockwise order. If we have a skew-symmetric weight function, then the weight of the clockwise cycle will be the sum of the weights of the faces inside the cycle \\cite{BTV09}. Therefore assigning positive weights to every face inside a cycle will ensure that the circulation of the cycle is nonzero. Given weights on the faces of a graph, we can obtain weights for the edges so that the sum of the weights of the edges of a face remains the same as the weight of the face assigned earlier \\cite{Kor09}.\n\n\n\\subparagraph{If $G'_{B_i}$ is a constant size component:} For this type of component, we need only one weight function. Thus we set $w_2$ to be zero for all the edges in $G'_{B_i}$ and $w_1$ is defined as follows. Let $e_1,e_2, \\ldots ,e_k$ be the edges in the component $Q_i$, for some $k \\leq m$. Edge $e_j$ is assigned weight $2^i \\times K^{h(r(T_i))-1} \\times l(T_{i})$ (for some arbitrarily fixed orientation), Where $T_{i}$ is the subtree of $A(T')$ rooted at $B_i$. Note that for any subset of edges of $G'_{B_i}$, the sum of the weight of the edges in that subset is nonzero with respect to $w_1$.\n\n\nThe final weight function is $w' = \\langle w_1 + w_2 \\rangle.$ Since the maximum height of a bag in $A(T')$ is $O(\\log n)$, the weight of an edge is at most $O( n^c)$, for some constant $c>0$.\n\n\\begin{lemma}\n\\label{lem:maxval}\nFor a cycle $C$ in $G'$ sum of the weights of the edges of $C$ associated with the bags in a subtree $T_i$ of $A(T')$ is $< K^{h(r(T_i))} \\times l(T_i) $.\n\\end{lemma}\n\n\\begin{proof}\nLet $w(C_{T_i})$ denotes the sum of the weight of the edges of a cycle $C$ associated with the bags in $T_i$. We prove the Lemma by induction on the height of the root of the subtrees of $A(T')$. 
Note that the Lemma holds trivially for the base case when the height of the root of a subtree is $1$.\n\n\\textit{Induction hypothesis}: Assume that it holds for all the subtrees such that the height of their root is $ 2^{m+2}] \\\\\nw(C_{T_i}) &<& K^{h(r(T_i))} \\times l(T_i)\n\\end{eqnarray*}\n\n\\item When $G'_{r(T_i)}$ is a planar graph: let $\\tau_1,\\tau_2, \\ldots ,\\tau_k$ be the separating sets present in $G'_{r(T_i)}$ such that the subtree $T_i^j$ is attached to $r(T_i)$ at $\\tau_j$, for all $j \\in [k]$. A separating set can be present in at most 3 faces. Thus it can contribute $2 \\times 3 \\times K^{h(r(T_i^j))} \\times l(T_i^j)$ to the circulation of the cycle $C$. Therefore,\n\\begin{eqnarray*}\nw(C_{T_i}) &\\leq & \\sum_{j=1}^k 6 \\times K^{h(r(T_i^j))} \\times l(T_i^j) + \\sum_{j=1}^k K^{h(r(T_i^j))} \\times l(T_i^j) \\\\\nw(C_{T_i}) &\\leq& 7 \\times K^{h(r(T_i))-1} \\times l(T_i)\\hspace{5cm} [K > 7] \\\\\nw(C_{T_i}) &<& K^{h(r(T_i))} \\times l(T_i)\n\\end{eqnarray*}\n\\end{itemize}\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lem:domwt}\nFor a cycle, $C$ in $G$ let $B_i$ be the unique highest bag in $A(T')$ that have some edges of $C$ associated with it. Then the sum of the weights of the edges of $C$ associated with $B_i$ will be more than that of the rest of the edges of $C$ associated with the other bags.\n\\end{lemma}\n\n\\begin{proof}\nLet $T_i$ be the subtree of $A(T')$ rooted at $B_i$. We know that sum of the weights of the edges of $C$ associated $B_i$ is $\\geq 2 \\times K^{h(r(T_i))-1} \\times l(T_i)$. Let $T_i^1, T_i^2 , \\ldots ,T_i^k$ be the subtree of $T_i$ rooted at children of $B_i$. By Lemma \\ref{lem:maxval}, we know that the sum of the weight of the edges of $C$ associated with the bags in these subtrees is $< \\sum_{j=1}^k K^{h(r(T_i^j))} \\times l(T_i^j) = K^{h(r(T_i))-1} \\times l(T_i)$. Therefore, the lemma follows.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:nonzerowt}\nCirculation of a simple cycle $C$ in the graph $G'$ is nonzero with respect to $w'$.\n\\end{lemma}\n\n\\begin{proof}\nIf $C$ is contained within a component, i.e., its edges are associated with a single bag $B_i$, then we know that $w_1$ assigns nonzero circulation to $C$. Suppose the edges of $C$ are associated with more than one bag in $T'$. By Claim \\ref{clm:connect}, we know that these bags form a connected component. By the (ii) property of $A(T')$, we know that there is a unique highest bag $B_i$ in $A(T')$ which have edges of $C$ associated with it. Therefore from Lemma \\ref{lem:domwt} we know that the circulation of $C$ will be nonzero.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main}]\nProof of Theorem \\ref{thm:main} follows from Lemma \\ref{lem:wtforg} and \\ref{lem:nonzerowt}.\n\\end{proof}\n\n\\section{Conclusion}\n\\label{sec:concl}\nWe have given a construction of a nonzero circulation weight function for the class of graphs that can be expressed as 3-clique-sums of planar and constant treewidth graphs. However, it seems that our technique can be extended to the class of graphs that can be expressed as 3-clique-sums of constant genus and constant treewidth graphs. Further extending our results to larger graph classes would require fundamentally new techniques. This is so because the most significant bottleneck in parallelizing matching algorithms for larger graph classes such as apex minor free graphs or $H$-minor free graphs for a finite $H$ is the absence of a parallel algorithm for the structural decomposition of such families. 
Thus we would need to\nrevisit the Robertson-Seymour graph minor theory to parallelize it. This paper thus serves the dual purpose of delineating the boundaries of the known regions of parallel (bipartite) matching and reachability and as an invitation to the vast unknown of parallelizing the Robertson-Seymour structure theorems.\n\\section{Introduction}\nDirected graph reachability and perfect matching are two fundamental problems in computer science. The history of the two problems has been inextricably linked together from the inception of computer science (and before!) \\cite{FordF56}. The problems and their variants, such as shortest path \\cite{Dijkstra59} and maximum matching \\cite{Edmonds65} have classically been studied in the sequential model of computation. Since the 1980s, considerable efforts have been spent trying to find parallel algorithms for matching problems spurred on by the connection to reachability which is, of course, parallelizable.\nThe effort succeeded only in part with the discovery of randomized parallel algorithms \\cite{KUW85,MulmuleyVV87}. While we know that the reachability problem is complete for the complexity class $\\NL$, precise characterization has proved to be elusive for matching problems. The 1990s saw attempts in this\ndirection when surprisingly ``small'' upper bounds were proved \\cite{ARZ99}\nfor the perfect matching problem, although in the non-uniform setting.\nAt roughly the same time, parallel algorithms for various\nversions of the matching problem for restricted graph classes like planar\n\\cite{MN95} and bounded genus \\cite{MV00} graphs were\ndiscovered. The last two decades have seen efforts towards pinning down the\nexact\nparallel complexity of reachability and matching related problems in restricted\ngraph classes \\cite{BTV09,KV10,DKR10,DKTV11,AGGT16,KT16,GST19,GST20}.\nMost of these papers are based on the method of constructing \\emph{nonzero circulations}.\n\nThe circulation of a simple cycle is the sum of its edge-weights in a\nfixed orientation (see Section~\\ref{sec:prelims} for the definition)\nand\nwe wish to assign polynomially bounded weights to the edges of a graph, such\nthat every simple cycle has a nonzero circulation.\nAssigning such weights \\emph{isolates} a reachability witness or a matching witness in the graph \\cite{TV12}. Constructing\npolynomially bounded isolating weight function in parallel for general graphs has been\nelusive so far.\nThe last five years\nhave seen rapid progress in the realm of matching problems, starting with\n\\cite{FGT} which showed that the method of nonzero circulations could\nbe extended from topologically restricted (bipartite) graphs to general\n(bipartite) graphs. A subsequent result extended this to all graphs\n\\cite{ST17}. More recently, the endeavour to parallelize planar\nperfect matching has borne fruit \\cite{Sankowski18,AV20} and has been followed up by further exciting work \\cite{AV2}. \n\nWe know that polynomially bounded weight functions that give nonzero circulation to every cycle can be constructed in logspace for planar graphs, bounded genus graphs and bounded treewidth graphs \\cite{BTV09,DKTV11,DKMTVZ20} . Planar graphs are both $K_{3,3}$-free and $K_5$-free graphs. Such a weight function is also known to be constructable in logspace for $K_{3,3}$-free graphs and $K_5$-free graphs, individually \\cite{AGGT16}. A natural question arises if we can construct such a weight function for $H$-minor-free graphs for any arbitrary graph $H$. 
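For intuition, the following toy Python sketch (an illustration only, not a construction from this paper) computes the circulation of a cycle under a skew-symmetric weight function and brute-force checks the nonzero-circulation property on a small graph; the exponential weights $2^i$ in the example are the trivial, non-polynomially-bounded solution that the isolation literature seeks to replace.
\begin{verbatim}
from itertools import combinations, permutations

def circulation(weights, cycle):
    """Sum of skew-symmetric edge weights along `cycle`
    (a list of vertices, traversed in the given orientation)."""
    total = 0
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        total += weights[(u, v)]       # weights[(v, u)] == -weights[(u, v)]
    return total

def has_nonzero_circulation(edges, weights):
    """Brute-force check (exponential; only for tiny examples) that every
    simple cycle of the undirected graph has a nonzero circulation."""
    vertices = {x for e in edges for x in e}
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for k in range(3, len(vertices) + 1):
        for subset in combinations(sorted(vertices), k):
            first, rest = subset[0], subset[1:]
            for order in permutations(rest):
                cyc = (first,) + order
                if all(p in adj for p in zip(cyc, cyc[1:] + cyc[:1])):
                    if circulation(weights, list(cyc)) == 0:
                        return False
    return True

# toy example: a 4-cycle with a chord
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
w = {}
for i, (u, v) in enumerate(edges):
    w[(u, v)] = 2 ** i      # exponential weights always work; the goal
    w[(v, u)] = -2 ** i     # is to achieve this with polynomial weights
print(has_nonzero_circulation(edges, w))   # True
\end{verbatim}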
A major hurdle in this direction is the absence of a space-efficient (Logspace) or parallel algorithm (\\NC) for finding a structural decomposition of $H$-minor free graphs. However, such a decomposition is known when $H$ is a single crossing graph. This induces us to solve the problem for single crossing minor-free (SCM-free) graphs. An SCM-free graph can be decomposed into planar and bounded treewidth graphs. Moreover, $K_{3,3}$ and $K_5$ are single crossing graphs. Hence our result can also be seen as a generalization of the previous results on these classes. There have also been important follow-up works on parallel algorithms for SCM-free graphs \\cite{EV21}. SCM-free graphs have been studied in several algorithmic works (for example \\cite{STW16,CE13,DHNRT04}).\n\n\n\\subsection{Our Result}\nIn this paper, we show that results for previously studied graph classes\n(planar, constant tree-width and $H$-minor free for $H \\in \\{K_{3,3},K_5\\}$)\ncan be extended and unified to yield similar results for SCM-free graphs.\n\\begin{theorem}\n\\label{thm:main}\nThere is a logspace algorithm for computing polynomially-bounded, skew-symmetric nonzero circulation weight function in SCM-free graphs.\n\\end{theorem}\n\nAn efficient solution to the circulation problem for a class of graphs yields better complexity bounds for determining reachability in the directed version of that class and constructing minimum weight maximum-matching in the bipartite version of that class. Theorem~\\ref{thm:main}\nwith the results of \\cite{DKKM18,RA00}, yields the following:\n\\begin{corollary}\\label{cor:stat}\nFor SCM-free graphs, reachability is in $\\UL\\cap\\coUL$ and minimum weight bipartite maximum matching is in $\\SPL$.\n\\end{corollary}\n\nAlso using the result of \\cite{TW10}, we obtain that the \\textit{Shortest path} problem in SCM-free graphs can be solved in $\\UL\\cap \\coUL$. \n\\subparagraph*{Overview of Our Techniques and Comparison With Previous Results:} We know that for planar graphs and constant treewidth graphs nonzero circulation weights can be constructed in logspace \\cite{BTV09,DKMTVZ20}. We combine these weight functions using the techniques from Arora et al. \\cite{AGGT16}, Datta et al. \\cite{DKKM18} and, Datta et al. \\cite{DKMTVZ20} together with some modifications to obtain the desired weight function. In \\cite{AGGT16}, the authors decompose the given input graph $G$ ($K_{3,3}$-free or $K_5$-free) and obtain a component tree that contains planar and constant size components. They modify the components of the component tree so that they satisfy few properties which they use for constructing nonzero circulation weights (these properties are mentioned at the beginning of Section \\ref{sec:wtfn}). The new graph represented by these modified components preserves the perfect matchings of $G$. Then, they construct a \\emph{working-tree} of height $O(\\log n)$ corresponding to this component tree and use it to assign nonzero circulation weights to the edges of this new graph. The value of the weights assigned to the edges of the new graph is exponential in the height of the working tree.\n\nWhile $K_{3,3}$-free and $K_5$-free graphs can be decomposed into planar and constant size components, an SCM-free graph can be decomposed into planar and constant treewidth components. Thus the component tree of the SCM-free graph would have several non-planar constant treewidth components. 
While we can construct a working tree of height $O(\\log n)$, this tree would contain constant-treewidth components and hence make it difficult to find nonzero circulation weights. A na\\\"ive idea would be to replace each constant treewidth component with its tree decomposition in the working tree. However, the resultant tree would have height $O(\\log^2 n)$, and thus the weight function obtained in this way requires $O(\\log^2 n)$-bit weights. We circumvent this problem as follows: we obtain a component tree $T$ of the given SCM-free graph $G$ and modify its components to satisfy the same properties as in \\cite{AGGT16} (however, we use different gadgets for modification). Now we replace each bounded treewidth component with its tree decomposition in $T$. Using this new component tree, say $T'$, we define another graph $G'$. We use the technique from \\cite{DKKM18} to show that if we can construct the nonzero circulation for $G'$, then we can \\textit{pull back} a nonzero circulation for $G$. A few points to note here: (i) the pull-back technique works because of the new gadget that we use to modify the components in $T$; (ii) since we ultimately obtain a nonzero circulation for $G$ itself, we can compute a maximum matching in $G$ in $\\SPL$, which is not the case in \\cite{AGGT16}.\n\n\\subsection{Organization of the Paper}\nAfter introducing the definitions and preliminaries in Section~\\ref{sec:prelims}, in Section~\\ref{sec:wtfn} we discuss the weight function that achieves non-zero circulation in single-crossing minor free graphs, and in Section~\\ref{sec:staticmatch} its application to maximum matching. Finally, we conclude with Section~\\ref{sec:concl}.\n\n\n\n\\section{Dynamic isolation from static non-zero circulation}\\label{sec:iso}\nIn previous sections, we have shown that we can compute a non-zero circulation (for matchings) in a bipartite single crossing minor free graph in $\\L$. This allows us to compute whether the graph has a perfect matching in $\\SPL \\subseteq \\NC$.\n\nIn the dynamic setting, we allow the graph to evolve by insertion and deletion of a small number of edges at every time step under the promise that the graph stays $H$-minor free for a fixed single crossing graph $H$. We would like to maintain whether the graph has a perfect matching in $\\DynFO$. One plausible approach would be to maintain the non-zero circulation as described in Section~\\ref{sec:wtfn}. However, this seems non-trivial as we would need to maintain the decomposition of the given graph into planar and constant tree-width parts in $\\DynFO$. Even if we could somehow do that, the weight of many edges may change due to even a single edge change. Thus far more than polylogarithmically many entries of the adjacency and Tutte matrices may change. This would preclude the use of ``small change'' techniques like the Sherman-Morrison-Woodbury formula. This induces us to side-step the update of the non-zero circulation by the method of Section~\\ref{sec:wtfn}.\n\nInstead we use a result from \\cite{FGT} to convert the given static circulation to dynamic isolating weights. Notice that \\cite{FGT} yields a black box recipe to produce isolating weights of quasipolynomial magnitude in the following way. Given a bipartite graph $G$, they first consider a non-zero circulation of exponential magnitude, viz. $w_0 : e \\mapsto 2^e$. Next, they consider a list of $\\ell = O(\\log{n})$ primes $\\vec{p} = \\left< p_1, \\ldots, p_\\ell \\right>$ which yield a weight function $w_{\\vec{p}}(e)$. 
This is defined by considering the $\\ell$ weight functions $w_0 \\bmod{p_i}$ for $i \\in \\{1,\\ldots,\\ell\\}$ and concatenating them after introducing a \\emph{shift} in the bit positions, i.e. $w_{\\vec{p}}(e) = \\left< w_0(e) \\bmod{p_1}, \\ldots, w_0(e) \\bmod{p_\\ell} \\right>$. The shift ensures that there is no overflow from the $i$-th field to the $(i-1)$-st for any $i \\in \\{2,\\ldots,\\ell\\}$.\n\nSuppose we start with a graph with static weights ensuring non-zero circulation. In a step, some edges are inserted or deleted. The graph after deletion is a subgraph of the original graph; hence the non-zero circulation remains non-zero after a deletion\\footnote{If we merely had isolating weights this would not necessarily preserve isolation.}, but we have to do more in the case of insertions. We aim to give the newly inserted edges FGT-weights (from \\cite{FGT}) in the higher order bits while giving weight $0$ to all the original edges in $G$, again in the higher order bits. Thus the weight of all perfect matchings that survive the deletions in a step remains unchanged. Moreover, if no such matchings survive but new perfect matchings are introduced (due to insertion of edges), the lightest of them is determined solely by the weights of the newly introduced edges. In this case, our modification of the existential proof from \\cite{FGT} ensures that the minimum weight perfect matching is unique.\n\nNotice that the FGT-recipe applied to a graph with polylog ($N = \\log^{O(1)}{n}$) edges yields quasipolylogarithmically ($N^{\\log^{O(1)}{N}}$) large weights, which are therefore still subpolynomial ($2^{(\\log{\\log{n}})^{O(1)}} = 2^{o(\\log{n})} = n^{o(1)}$). Thus the weights remain polynomial when shifted to accommodate the old weights. Further, the number of primes is polyloglog ($\\log^{O(1)}{N} = (\\log{\\log{n}})^{O(1)}$) and so sublogarithmic ($= \\log^{o(1)}{n}$); thus the number of possible different weight functions is subpolynomial, and hence our algorithm can be derandomized.\n\nBefore getting into technical details, we point out that in \\cite{DKMTVZ20} a similar scheme is used for reachability and bears the same relation to \\cite{KT16} as this section does to \\cite{FGT}. We have the following lemma, which we prove in Appendix~\\ref{subsec:isoApp}:\n\\begin{lemma} \\label{lem:combFGT}\nLet $G$ be a bipartite graph with a non-zero circulation $w$. Suppose $N = \\log^{O(1)}{n}$ edges are inserted into $G$ to yield $G^{new}$. Then we can compute polynomially many weight functions in $\\FOar$ that have $O(\\log{n})$ bit weights, and at least one of them, $w^{new}$, is isolating. Further, the weights of the original edges remain unchanged under $w^{new}$.\n\\end{lemma}\n\n\n\\section{The details from Section~\\ref{sec:iso}}\\label{subsec:isoApp}\nWe use the same general strategy as in \\cite{DKMTVZ20} and divide the edges into \\emph{real} and \\emph{fictitious}, where the former represent the newly inserted edges and the latter the original undeleted edges\\footnote{We use the terms old $\\leftrightarrow$ fictitious and new $\\leftrightarrow$ real interchangeably in this section.}.\n\nLet $\\mathcal{C}$ be a set of cycles containing both real and fictitious edges that occur in any PM. Let $w$ be a weight function on the edges that gives non-zero weight only to the real edges. Define $c_w(\\mathcal{C})$ to be the set of all circulations in the cycles of $\\mathcal{C}$. 
Here the circulation is the absolute value of the alternating sum of weights, $c_w(C) = |w(f_1) - w(f_2) + w(f_3) - \\ldots|$, where $C = f_1,f_2,f_3,\\ldots$ is the sequence of edges in the cycle.\n\nWe say that a weight function that gives non-zero weights to the real edges \\emph{real isolates} $\\mathcal{M}$ for a set system $\\mathcal{M}$ if the minimum weight set in $\\mathcal{M}$ is unique. In our context $\\mathcal{M}$ will refer to the set of perfect\/maximum matchings.\n\nNext we follow the proof idea of \\cite{FGT} but focus on assigning weights to the real edges, which are, say, $N$ in number. We do this in $\\log{N}$ stages, starting with a graph $G_0 = G$ and ending with the acyclic graph $G_\\ell$ where $\\ell = \\log{N}$. The inductive assumption is that:\n\\begin{invariant}\\label{inv:analogFGT}\n For $i \\geq 1$, $G_i$ contains no cycles with at most $2^{i+1}$ real edges. \n\\end{invariant}\n\nNotice that the induction starts at $i > 0$.\n\nWe first show how to construct $G_{i+1}$ from $G_i$ such that if $G_i$ satisfies the inductive invariant~\\ref{inv:analogFGT} then so does $G_{i+1}$. Let $i > 1$; then in the $i$-th stage, let $\\mathcal{C}_{i+1}$ be the set of cycles that contain at most $2^{i+2}$ real edges. For each such cycle $C = f_0,f_1,\\ldots$ containing $k \\leq 2^{i+2}$ real edges (with $f_0$ being the least numbered real edge in the cycle), edge-partition it into $4$ consecutive paths $P_j(C)$ for $j \\in \\{0,1,2,3\\}$ such that the first $3$ paths contain exactly $\\lfloor\\frac{k}{4}\\rfloor$ real edges and the last path contains the rest. In addition, ensure that the first edge in each path is a real edge. Let the first edges of the $4$ paths be, respectively, $f_0 = f'_0, f'_1, f'_2, f'_3$. We have the following claim, which shows that the associated $4$-tuples $\\left< f'_0, f'_1, f'_2, f'_3 \\right>$ uniquely characterise cycles in $\\mathcal{C}_{i+1}$.\n\\begin{claim}\nThere is at most one cycle in $\\mathcal{C}_{i+1}$ that has a given $4$-tuple $\\left< f'_0, f'_1, f'_2, f'_3 \\right>$ associated with it.\n\\end{claim}\n\\begin{proof}\nSuppose two distinct cycles $C,C' \\in \\mathcal{C}_{i+1}$ have the same tuple $\\left< f'_0, f'_1, f'_2, f'_3 \\right>$ associated with them. Then for at least one $j \\in \\{0,1,2,3\\}$, $P_j(C) \\neq P_j(C')$. Then $P_j(C) \\cup P_j(C')$ is a closed walk in $G_i$ containing at most $2\\times \\lceil\\frac{2^{i+2}}{4}\\rceil = 2^{i+1}$ many real edges, contradicting the assumption on $G_i$.\n\\end{proof}\n\nThis claim shows that there are at most $N^4$ elements in $\\mathcal{C}_i$. Next consider the following lemma from \\cite{FKS}:\n\\begin{lemma}[\\cite{FKS}]\nFor every constant $c>0$ there is a constant $c_0>0$ such that for every set $S$ of $m$ bit integers with $|S| \\leq m^c$, the following holds: There is a $c_0 \\log{m}$ bit prime number $p$ such that for any $x,y \\in S$ it holds that if $x \\neq y$ then $x \\not\\equiv y \\bmod{p}$.\n\\end{lemma}\nWe apply it to the set $c_{w_0}(\\mathcal{C}_i) = \\{c_{w_0}(C) : C \\in \\mathcal{C}_i\\}$. Here, the weight function $w_0$ assigns weights $w_0(e_j) = 2^j$ to the real edges, which are $e_1,e_2,\\ldots,e_N$ in an arbitrary but fixed order. Notice that from the above claim, the size of this set is $|c_{w_0}(\\mathcal{C}_i)| \\leq N^4$. And $w_0(e_j)$ is $j$-bits long, hence $c_{w_0}(C)$ for any cycle $C \\in \\mathcal{C}_i$ that has less than $2^{i+2}$ real edges is at most $i+j+2 < 4N$-bits long. Thus, we obtain a prime $p_{i+1}$ of length at most $c_0\\log{4N}$ by picking $c = 4$. 
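For intuition only, the following Python toy mimics this prime-selection step: it searches for a small prime modulo which a given set of (distinct) circulation values stays pairwise distinct. The FKS lemma guarantees that an $O(\log m)$-bit prime exists; the naive search below is purely illustrative and is not the logspace procedure used in the paper.
\begin{verbatim}
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def separating_prime(values):
    """Smallest prime p under which all (distinct) values remain
    pairwise distinct modulo p (naive search, for illustration)."""
    p = 2
    while True:
        if is_prime(p) and len({v % p for v in values}) == len(set(values)):
            return p
        p += 1

# toy usage: circulations of a few cycles under w_0(e_j) = 2**j
circulations = [3, 12, 17, 96, 255]
p = separating_prime(circulations)
print(p, [c % p for c in circulations])   # residues are pairwise distinct
\end{verbatim}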
We define \n$w_{i+1}(e_j) = w_0(e_j) \\bmod{p_{i+1}}$.\n\nNow consider the following crucial lemma from \\cite{FGT}:\n\\begin{lemma}[\\cite{FGT}]\\label{lem:crucialFGT}\nLet $G = (V,E)$ be a bipartite graph with weight function $w$. \nLet $C$ be a cycle in $G$ such that $c_w(C) \\neq 0$. \nLet $E_1$ be the union of all minimum weight perfect matchings in $G$.\nThen the graph $G_1(V,E_1)$ does not contain the cycle $C$.\nMoreover all the perfect matchings in $G_1$ have the same weight.\n\\end{lemma}\nLet $B$ be a large enough constant (though bounded by a polynomial in $N$) \nto be specified later.\nWe shift the original accumulated weight function\n$W_i$ and add the new weight function $w_{i+1}$ to obtain:\n$W_{i+1}(e) = W_{i}(e)B + w_{i+1}(e)$. \nApply $W_{i+1}$ on the graph $G_i$ to obtain the graph $G_{i+1}$.\n Inductively suppose we have\n the invariant~\\ref{inv:analogFGT}\n that the graph $G_i$ did not have any cycles containing at least \n$2^{i+1}$ real edges. This property is preserved when we take all the\nperfect matchings in $G_i$ and apply $W_{i+1}$ yielding $G_{i+1}$. Moreover \nfrom Lemma~\\ref{lem:crucialFGT} and the construction of $w_{i+1}$ the cycles of\n$\\mathcal{C}_i$ disappear from $G_{i+1}$ restoring the invariant. \n\nNotice that it suffices to take $B$ greater than the number of real edges\ntimes the maximum of $w_i(e)$ over $i,e$.\nShowing that $G_1$ contains no cycle of length at most $4$ mimics the above\nmore general proof and we skip it here.\nWe can now complete the proof of Lemma~\\ref{lem:combFGT}:\n\\begin{lemma*} (Lemma~\\ref{lem:combFGT} restated)\nLet $G$ be a bipartite graph with a non-zero circulation\n$w^{old}$. Suppose $N = \\log^{O(1)}{n}$ edges are inserted into $G$ to yield $G^{new}$\nthen we can compute polynomially many weight functions in $\\FOar$ that\nhave $O(\\log{n})$ bit weights, and at least one of them,\n$w^{new}$ is isolating. Further the weights of the original edges remains\nunchanged under $w^{new}$.\n\\end{lemma*}\n\n\\begin{proof}(of Lemma~\\ref{lem:combFGT})\nFrom the invariant above $G_\\ell$ does not contain\nany cycles. From the construction of $G_\\ell$ if $G$ has a perfect matching then\nso does $G_\\ell$ and hence it is a perfect matching. Notice that $W_\\ell$\nis obtained from $p_1,\\ldots,p_\\ell$ that include $O((\\log{\\log{n}})^2) = o(\\log{n})$ \nmany bits. Thus there are (sub)polynomially many such weighting functions $W_\\ell$, depending on the primes $\\vec{p}$.\nLet $w = B\\cdot W_\\ell + w^{old}$ where we recall that $W_\\ell(e)$ is non-zero only\nfor the new (real) edges and $w^{old}$ is non-zero only for the old (fictitious)\nedges. Thus, any perfect matching that consists of only old edges is lighter\nthan any perfect matching containing at least one new edge. Moreover if \nthe real edges in two matchings differ then from the construction of \n$W_\\ell$ (for some choice of $\\vec{p}$) both matchings cannot be lightest \nas $W_\\ell$ real isolates a matching. Thus the only remaining case is\nthat we have two distinct lightest perfect matchings which differ only in the\nold edges. But the symmetric difference of any two such perfect matchings\nis a collection of cycles consisting of old edges. But each cycle has a\nnon-zero circulation in the old graph and so we can obtain a matching\nof even lesser weight by replacing the edges of one of the matchings in one\ncycle by the edges of the other one. This contradicts that both matchings were\nof least weight. 
This completes the proof.\n\\end{proof}\n\n\n\\subsection{Language Definitions}\\label{subsec:langDefs}\n\\begin{definition}\n\\begin{itemize}\n\\item $\\PM$ is a set of independent edges covering all the vertices of a graph. \n\\item $\\BPM$ is a $\\PM$ in a bipartite graph.\n\\item $\\PMD$ Given an undirected graph $G(V,E)$ determine if there exists a \nperfect matching in $G$.\n\\item $\\PMS$ Construct a perfect matching for a given graph $G$ if one exists.\n\\item $\\BMWPMS$ In an edge weighted bipartite graph construct a $\\PM$ of least weight.\n\\item $\\MCM$ is a set of largest size consisting of independent edges.\n\\item $\\BMCM$ is an $\\MCM$ in a bipartite graph.\n\\item $\\MWMCM$ is an $\\MCM$ of least weight in an edge weighted graph.\n\\item $\\BMWMCM$ is an $\\MWMCM$ in a weighted bipartite graph.\n\\item $\\MCMSz$ determine the size of an $\\MCM$.\n\\item $\\BMCMSz$ the $\\MCMSz$ problem in bipartite graphs.\n\\item $\\BMWMCMS$ Construct a $\\BMWMCM$. \n\\item $\\Reach$ Given a directed graph $G(V,E)$ and $s,t \\in V$ does there exist a directed path from $s$ to $t$ in $G$.\n\\item $\\Dist$ Given a directed graph $G(V,E)$, polynomially bounded edge weights and $s,t \\in V$ find the weight of a least weight path from $s$ to $t$.\n\\item $\\Rank$ Given an $m \\times n$ matrix $A$ with integer entries find \nthe rank of $A$ over $\\mathbb{Q}$.\n\\end{itemize}\n\\end{definition}\n\n\n\\section{Preliminaries and Notations}\n\\label{sec:prelims}\n\\subparagraph*{Tree decomposition:} Tree decomposition is a well-studied concept in graph theory. Tree decomposition of a graph, in some sense, reveals the information of how much tree-like the graph is. We use the following definition of tree decomposition.\n\n\\begin{definition}\n\\label{def:treedec}\nLet $G(V,E)$ be a graph and $\\tilde{T}$ be a tree, where nodes of the $\\tilde{T}$ are $\\{B_1, \\ldots ,B_k \\mid B_i \\subseteq V\\}$ (called bags). $T$ is called a tree decomposition of $G$ if the following three properties are satisfied:\n\\begin{itemize}\n\n\\item $B_1 \\cup \\ldots \\cup B_k = V$,\n\\item for every edge $(u,v) \\in E$, there exists a bag $B_i$ which contains both the vertices $u$ and $v$,\n\\item for a vertex $v \\in V$, the bags which contain the vertex $v$ form a connected component in $\\tilde{T}$.\n\\end{itemize}\n\n\\end{definition}\n\nThe width of a tree decomposition is defined as one less than the size of the largest bag. The treewidth of a graph $G$ is the minimum width among all possible tree decompositions of $G$. Given a constant treewidth graph $G$, we can find its tree decomposition $\\tilde{T}$ in logspace such that $\\tilde{T}$ has a constant width \\cite{EJT10}.\n\n\\begin{lemma}[\\cite{EJT10}]\n\\label{lem:boundtw}\nFor every constant $c$, there is a logspace algorithm that takes a graph as input and outputs its tree decomposition of treewidth at most $c$, if such a decomposition exists.\n\\end{lemma}\n\n\\begin{definition}\n\\label{def:cls}\nLet $G_1$ and $G_2$ be two graphs containing cliques of equal size. Then the \\emph{clique-sum} of $G_1$ and $G_2$ is formed from their disjoint union by identifying pairs of vertices in these two cliques to form a single shared clique, and then possibly deleting some of the clique edges.\n\\end{definition}\nFor a constant $k$, a $k$-clique-sum is a clique-sum in which both cliques have at most $k$ vertices. One may also form clique-sums of more than two graphs by repeated application of the two-graph clique-sum operation. 
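As a concrete illustration of the operation just defined (it is not used anywhere in our construction), here is a minimal Python sketch of the two-graph clique-sum on graphs stored as adjacency dictionaries; the vertex names and the optional `drop' parameter are ad hoc choices for the example.
\begin{verbatim}
def clique_sum(adj1, clique1, adj2, clique2, drop=()):
    """k-clique-sum of two graphs given as symmetric {vertex: set(neighbours)}
    dicts.  clique1/clique2 list the vertices of the two equal-size cliques;
    clique2[i] is identified with clique1[i].  `drop` lists pairs of clique
    vertices (named as in graph 1) whose edge is deleted after the gluing."""
    assert len(clique1) == len(clique2)
    rename = dict(zip(clique2, clique1))          # identify the two cliques
    for v in adj2:                                # keep the rest disjoint
        if v not in rename:
            rename[v] = ("g2", v)
    result = {v: set(ns) for v, ns in adj1.items()}
    for v, ns in adj2.items():
        result.setdefault(rename[v], set()).update(rename[u] for u in ns)
    for u, v in drop:                             # optionally delete clique edges
        result[u].discard(v)
        result[v].discard(u)
    return result

# glue two triangles along a shared edge (a 2-clique-sum);
# the result is K_4 minus one edge
g1 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
g2 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(clique_sum(g1, (1, 2), g2, ("a", "b")))
\end{verbatim}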
For a constant $W$, we use the notation $ \\langle \\mathcal{G}_{P, W}\\rangle_k$ to denote the class of graphs that can be obtained by taking repetitive $k$-clique-sum of planar graphs and graphs of treewidth at most $W$. In this paper, we construct a polynomially bounded skew-symmetric weight function that gives nonzero circulation to all the cycles in a graph $G \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$. Note that if a weight function gives nonzero circulations to all the cycles in the biconnected components of $G$, it will give nonzero circulation to all the cycles in $G$ because no simple cycle can be a part of two different biconnected components of $G$. We can find all the biconnected components of $G$ in logspace by finding all the articulation points. Therefore, without loss of generality, assume that $G$ is biconnected.\n\n\nThe \\emph{crossing number} of a graph $G$ is the lowest number of edge crossings of a plane drawing of $G$. A \\emph{single-crossing} graph is a graph whose crossing number is at most 1. SCM-free graphs are graphs that do not contain $H$ as a minor, where $H$ is a fixed single crossing graph. Robertson and Seymour have given the following characterization of SCM-free graphs.\n\n\\begin{theorem}[\\cite{RS91}]\n\\label{thm:rs}\nFor any single-crossing graph $H$, there is an integer\n$c_H \\geq 4$ (depending only on $H$) such that every graph with no minor isomorphic to $H$ can\nbe obtained as $3$-clique-sum of planar graphs and graphs of treewidth at most $c_H$.\n\\end{theorem}\n\n\\subparagraph*{Component Tree:} In order to construct the desired weight function for a graph $G \\in \\langle \\mathcal{G}_{P, W}\\rangle_3$, we decompose $G$ into smaller graphs and obtain a component tree of $G$ defined as follows: we first find $3$-connected and $4$-connected components of $G$ such that each of these components is either planar or of constant treewidth. We know that these components can be obtained in logspace \\cite{TW09}. Since $G$ can be formed by taking repetitive $3$-clique-sum of these components, the set of vertices involved in a clique-sum is called a separating set. Using these components and separating sets, we define a component tree of $G$. A component tree $T$ of $G$ is a tree such that each node of $T$ contains a $3$-connected or $4$-connected component of $G$, i.e., each node contains either a planar or constant treewidth subgraph of $G$. There is an edge between two nodes of $T$ if the corresponding components are involved in a clique-sum operation. If two nodes are involved in a clique-sum operation, then copies of all the vertices of the clique are present in both components. It is easy to see that $T$ will always be a tree. Within a component, there are two types of edges present, \\textit{real} and \\textit{virtual edges}. Real edges are those edges that are present in $G$. Let $\\{a,b,c\\}$(or $\\{a,b\\}$) be a separating triplet(or pair) shared by two nodes of $T$, then there is a clique $\\{a,b,c\\}$ (or $\\{a,b\\}$) of virtual edges present in both the components. Suppose there is an edge present in $G$ between any pair of vertices of a separating set. In that case, there is a real edge present between that pair of vertices parallel to the virtual edge, in exactly one of the components which share that separating set.\n\n\\subparagraph*{Weight function and circulation:} Let $G(V, E)$ be an undirected graph with vertex set $V$ and edge set $E$. By $\\dvec{E}$, we denote the set of bidirected edges corresponding to $E$. 
Similarly, by $G(V, \\dvec{E})$, we denote the graph corresponding to $G(V, E)$ where each of its edges is replaced by a corresponding bidirected edge. A weight function $w : \\dvec{E} \\rightarrow \\mathbb{Z}$ is called skew-symmetric if for all $e\\in \\dvec{E}$, $w(e) = -w(e^r)$ (where $e^r$ represents the edge with its direction reversed). We know that if $w$ gives nonzero circulation to every cycle that consists of edges of $\\dvec{E}$, then it isolates a directed path between each pair of vertices in $G(V,\\dvec{E})$. Also, if $G$ is a bipartite graph, then the weight function $w$ can be used to construct a weight function $w^{\\textrm{{\\tiny{und}}}} : E \\rightarrow \\mathbb{Z}$ that isolates a perfect matching in $G$ \\cite{TV12}.\n\nA convention is to represent by $\\left< w_1, \\ldots, w_k \\right>$ the weight function that on edge $e$ takes the weight $\\sum_{i=1}^k{w_i(e)B^{k-i}}$, where $w_1,\\ldots,w_k$ are weight functions such that $\\max_{i=1}^k{(nw_i(e))} \\leq B$.\n\n\\subparagraph*{Complexity Classes:} The complexity classes \\Log\\ and \\NL\\ are the classes of languages accepted by deterministic and non-deterministic logspace Turing machines, respectively. $\\UL$ is the class of languages that can be accepted by an $\\NL$ machine that has at most one accepting path on each input, and hence $\\UL \\subseteq \\NL$. $\\SPL$ is the class of languages whose characteristic function can be written as a logspace computable integer determinant.\n\n\\section{Maximum Matching}\\label{sec:staticmatch}\nIn this section, we consider the complexity of the maximum matching problem in single crossing minor free graphs. Recently, Datta et al.~\\cite{DKKM18} have shown that bipartite maximum matching can be solved in \\SPL\\ in planar, bounded genus, $K_{3,3}$-free and $K_{5}$-free graphs.\n\nTheir techniques can be extended to any graph class where nonzero circulation weights can be assigned in logspace. For constructing a maximum matching in $K_{3,3}$-free and $K_{5}$-free bipartite graphs, they use the logspace algorithm of~\\cite{AGGT16} as a black box. Since, by Theorem~\\ref{thm:main}, nonzero circulation weights can be computed for the more general class of single crossing minor free graphs, we get the bipartite maximum matching result of Corollary~\\ref{cor:stat}.\n\nIn recent related work, Eppstein and Vazirani~\\cite{EV21} have shown an $\\NC$ algorithm for the case when the graph is not necessarily bipartite. However, the result holds only for constructing perfect matchings. In non-bipartite graphs, there is no known parallel (e.g., \\NC) or space-efficient algorithm for deterministically constructing a maximum matching, even in the case of planar graphs~\\cite{Sankowski18, DKKM18}. Datta et al.~\\cite{DKKM18} give an approach to design a \\emph{pseudo-deterministic} \\NC\\ algorithm for this problem. Pseudo-deterministic algorithms are probabilistic algorithms for search problems that produce a unique output for each given input with high probability. That is, they return the same output for all but a few of the possible random choices. We call an algorithm pseudo-deterministic $\\NC$ if it runs in $\\RNC$ and is pseudo-deterministic.\n\nUsing the Gallai-Edmonds decomposition theorem, \\cite{DKKM18} shows that the search version of the maximum matching problem reduces to determining the size of the maximum matching in the presence of algorithms to (a) find a perfect matching and (b) solve the bipartite version of the maximum matching, all in the same class of graphs. 
This reduction implies a pseudo-deterministic $\\NC$ algorithm as we only need to use randomization for determining the size of the matching, which always returns the same result. For single crossing minor free graphs, using the $\\NC$ algorithm of \\cite{EV21} for finding a perfect matching and our $\\SPL$ algorithm for finding a maximum matching in bipartite graphs, we have the following result:\n\\begin{theorem}\nMaximum matching in single-crossing minor free graphs (not necessarily bipartite) is in pseudo-deterministic \\NC.\n\\end{theorem}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nSolar coronal jets are collimated plasma ejections that occur in the solar corona and they offer ways for plasma and particles to enter interplanetary space. They are transient (tens of minutes) but ubiquitous, with a typical height of $\\sim5\\times10^4$ km and a typical width of $\\sim8\\times10^3$ km \\citep[e.g.][]{2007PASJ...59S.771S}. X-ray emissions from coronal jets were first observed by the Soft X-ray Telescope (SXT) onboard \\textit{Yohkoh} in the early 1990s \\citep{1992PASJ...44L.173S}. Since then, coronal jets have been studied in various aspects including morphology, dynamics, driving mechanisms, and more \\citep[see, e.g.,][]{2016SSRv..201....1R, 2021RSPSA.47700217S}. Jets or jet-like events have also been observed in other wavelengths such as extreme ultraviolet (EUV), ultraviolet (UV), and H alpha (those studied in H alpha are historically known as ``surges'') \\citep[e.g.][]{1997Natur.386..811I, 1999ApJ...513L..75C,2009SoPh..259...87N}. These wavebands cover a wide range of plasma temperatures (from chromospheric to coronal), and one important feature of jets is the presence of both hot and cool components in many events \\citep{2010ApJ...720..757M, 2013ApJ...769..134M}.\n\nCurrent models generally suggest that jets are formed by magnetic reconnection between open and closed magnetic field lines; however, the detailed triggering process for such magnetic reconnection is still not fully clear. In the emerging flux model, jets are generated through interchange reconnection when field lines of the newly emerging magnetic flux reach those of the pre-existing open field \\citep{1992PASJ...44L.173S}. As shown in the 2D simulation by \\citet{1995Natur.375...42Y, 1996PASJ...48..353Y}, this model could successfully produce a hot jet and an adjacent cool jet (or surge) simultaneously. The embedded-bipole model developed by \\citet[][etc.]{2015A&A...573A.130P, 2016A&A...596A..36P} considers a 3D fan-spine topology where magnetic reconnection occurs around the 3D null point. In their simulation, straight jets are generated through slow reconnection at the current sheet and driven by magnetic tension, while helical jets are generated through explosive magnetic reconnection triggered by a kink-like instability and driven by a rapid untwisting process of magnetic field lines. Recently, a few studies have reported small-scale filament structures (known as ``minifilaments'') at the base of some coronal jets, leading to the minifilament eruption model \\citep{2015Natur.523..437S, 2016ApJ...821..100S, 2016ApJ...832L...7P, 2017ApJ...844..131P, 2018ApJ...853..189P, 2017Natur.544..452W, 2018ApJ...852...98W, 2018ApJ...859....3M, 2019ApJ...882...16M}. This model suggests that jets are generated through miniature filament eruptions similar to those that drive larger eruptive events such as coronal mass ejections (CMEs). 
In addition to the external\/interchange magnetic reconnection, this process also involves internal magnetic reconnection inside the filament-carrying field, and the jet bright point (JBP, which corresponds to the solar flare arcade in the larger-scale case) appears underneath the erupting minifilament. Many recent observations have shown that the triggers for these minifilaments eruptions are usually magnetic flux cancellation \\citep{2011ApJ...738L..20H, 2012A&A...548A..62H, 2014ApJ...783...11A, 2016ApJ...832L...7P, 2017ApJ...844..131P, 2018ApJ...853..189P, 2019ApJ...882...16M, 2021ApJ...909..133M}.\n\nHXR observations can also provide helpful insights into jet formation mechanisms by constraining energetic electron populations within coronal jets. \\citet{2011ApJ...742...82K} investigated HXR emissions for 16 flare-related energetic electron events and found that 7 of them showed three distinct HXR footpoints, which was consistent with the interchange reconnection geometry. (In the remaining events, the fact that they showed less then three sources was likely due to instrument limitations.) Also in that study, EUV jets were found in all 6 events that had EUV data coverage. HXR bremsstrahlung emissions could also directly come from coronal jets if there are energetic electrons, but those extended sources are usually much fainter than the footpoint sources and only a few studies \\citep{2009A&A...508.1443B, 2012ApJ...754....9G} have reported such observations. More recently, \\citet{2018ApJ...867...84G} combined HXR observations with microwave emission, EUV emission, and magnetogram data, performing 3D modeling of electron distributions for a flare-related jet. They obtained direct constraints on energetic electron populations within that event. \\citet{2020ApJ...889..183M} carried out a statistical study of 33 flare-related coronal jets using HXR and EUV data, and they observed non-thermal emissions from energetic electrons in 8 of these events. They also studied the relation between jets and the associated flares but found no clear correlations between jet and flare properties. \n\nIn most of the previous studies of coronal jets, hot plasma and HXR emissions were found near the base of the jet (the location of the primary reconnection site) \\citep[e.g.][]{2011ApJ...742...82K, 2016A&A...589A..79M, 2020ApJ...889..183M}. However, for two coronal jets on November 13, 2014, HXR thermal emissions were observed near the far end of the jet spire (hereafter the ``top''). In fact, in the second event which had full HXR coverage, HXR emissions were observed at three different locations: the base of the jet, the top of the jet, and a location to the north of the jet. Here we present a multi-wavelength analysis of these two jets using data from the Atmospheric Imaging Assembly (AIA) onboard the \\textit{Solar Dynamic Observatory} (\\textit{SDO}), the \\textit{Reuven Ramaty High Energy Solar Spectroscopic Imager} (\\textit{RHESSI}), the X-ray Telescope (XRT) onboard \\textit{Hinode}, and the \\textit{Interface Region Imaging Spectrograph} (\\textit{IRIS}). We found that all these different HXR sources showed evidence of mildly accelerated electrons, and particle acceleration also happened near the jet top in addition to the site at the jet base. To our knowledge, this is the most thorough HXR study to date of particle acceleration in coronal jets.\n\nThe paper is structured as follows: In section \\ref{sec:data}, we describe the observations from each instrument. 
In Section \\ref{sec:analysis}, we show results from differential emission measure (DEM) analysis, imaging spectroscopy, and velocity estimation. In Section \\ref{sec:discussion}, we calculate the energy budget for one of the jets, discuss the interpretations of observational results, and compare them with jet models. Finally, in Section \\ref{sec:summ}, we summarize the key findings of this work. \n\n\n\n\\section{Observations} \\label{sec:data}\nOn November 13, 2014, more than ten recurrent jets were ejected from NOAA Active Region 12209 near the eastern solar limb at different times throughout the day. While most (if not all) of the jets can be identified in one or more AIA channels, only two events, SOL2014-11-13T17:20 and SOL2014-11-13T20:47, were simultaneously observed by AIA and {\\textit{RHESSI}}. We select these two flare-related jets for this study, and we add supporting observations from XRT and {\\textit{IRIS}}. The associated flares are GOES class C1.5-1.7 without background subtraction (see top row of Figure \\ref{fig:t_profile}), or B2.4-3.7 with background subtraction.\n\n\n\\begin{figure}\n\t\\includegraphics[width=0.5\\textwidth]{fig_t_profile1.pdf}\n\t\\includegraphics[width=0.5\\textwidth]{fig_t_profile2.pdf}\n\t\\caption{Time profiles of the $\\sim$17:20 jet (left) and the $\\sim$20:50 jet (right) on November 13, 2014. Top row: GOES light curves in the 1-8 {\\AA} channel. Second row: GOES light curves in the 0.5-4 {\\AA} channel. Third row: {\\textit{RHESSI}} emission in 3-6 keV (black) and 6-12 keV (red), using detectors 3, 6, 8, and 9. The first event only has partial coverage from {\\textit{RHESSI}} due to spacecraft night. Fourth row: Examples of AIA EUV emissions from the jet base\/top. Blue lines show light curves of a 3\"$\\times$3\" box at the base of each jet in the 304 {\\AA} channel, and red lines show light curves of a 3\"$\\times$3\" box at the top of each jet in the 131 {\\AA} channel (boxes are not shown). Bottom row: XRT measurements of selected regions (3\"$\\times$3\", not shown) at the jet base (blue) and the jet top (red) in the thin-Be filter. Both events only have partial coverage from XRT.}\n\t\\label{fig:t_profile}\t\n\\end{figure}\n\n\n\\subsection{AIA data}\n\nThe AIA instrument provides full-disk solar images in ten EUV\/UV\/visible-light channels with a spatial resolution of 1.5 arcsec \\citep{2012SoPh..275...17L}. In this work we use data from the seven EUV channels of AIA: 94 {\\AA}, 131 {\\AA}, 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, 304 {\\AA}, and 335 {\\AA}, which have a cadence of 12 seconds and cover plasma temperatures from $\\sim$0.05 MK up to $\\sim$20 MK.\n\nFigure \\ref{fig:aia_im} shows AIA images of the two jets in the 131 {\\AA} and 304 {\\AA} channels at selected times. At the beginning of each event, a minifilament (pointed by yellow arrows) was identified in multiple AIA channels at the base of the jet. After the minifilamnet eruption, a JBP (pointed by white arrows) appeared underneath the prior minifilament location.\n\nInterestingly, both events showed slightly different jet evolution in cool and hot AIA channels. In cooler channels including 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, 304 {\\AA}, and 335 {\\AA}, the first jet started at $\\sim$17:15 UT, reached its maximum extent at $\\sim$17:23 UT, and lasted about 20 minutes. 
However, in hot channels that are sensitive to $\\gtrsim$10 MK plasma (94 {\\AA} and 131 {\\AA}, particularly), the jet reached its maximum height within five minutes after the same starting time; then it slightly expanded transversely and gradually faded away in a much longer time. (The 193 {\\AA} channel in principle could also measure hot plasma \\citep{2012SoPh..275...17L}, but its response was dominated by temperatures below 10 MK and thus looked like a cool channel.) Similar behavior was observed in the later jet, which started at $\\sim$20:40 UT and reached its maximum extent at $\\sim$20:47 UT in the hotter 94 {\\AA} and 131 {\\AA} channels or at $\\sim$20:50 UT in the rest of the channels. The jet had already disappeared in those cooler channels before 20:58 UT, but it was visible in 94 {\\AA} and 131 {\\AA} for more than an hour. \n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.98\\textwidth]{fig_aia.pdf}\n\t\\caption{AIA 131 {\\AA} and 304 {\\AA} images of the two jets at selected times. The top two rows show the evolution of the earlier jet and the bottom two rows show the evolution of the later jet. The yellow arrows point to the minifilament while the blue arrows point to the JBP in each event. Both jets reached their maximum extents at earlier times and lasted longer in the hotter 131 {\\AA} channel (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK temperatures) compared to the cool 304 {\\AA} channel (sensitive to chromospheric temperatures around 0.05 MK).}\n\t\\label{fig:aia_im}\t\n\\end{figure*}\n\n\n\n\\subsection{RHESSI data} \\label{sec:rhessi}\n\n{\\textit{RHESSI}} was a solar-dedicated HXR observatory launched in 2002 and decommissioned in 2018. It consisted of nine rotating modulation collimators, each placed in front of a cooled germanium detector, and used indirect Fourier imaging techniques. {\\textit{RHESSI}} measured both images and spectra over the full sun in the energy range of 3 keV - 17 MeV and had good spatial and energy resolutions especially for lower energies (2.3 arcsec and $\\sim$1 keV, respectively) \\citep{2002SoPh..210....3L}. \n\n{\\textit{RHESSI}} was in eclipse during 16:46-17:23 UT, so it didn't capture the entire first jet; but {\\textit{RHESSI}} did have full coverage for the later jet. Figure \\ref{fig:rhessi_im} shows {\\textit{RHESSI}} images in 3-12 keV using detectors 3, 6, 8, and 9. All images were produced using the CLEAN algorithm in the HESSI IDL software package. In both events, HXR emissions were observed near the top of the jet. Furthermore, time slices of the later jet show that there were actually three HXR sources in that event. The first HXR source appeared at the base of the jet a few minutes after the jet's starting time and peaked at around 20:46 UT. The location of this source is consistent with the erupting minifilament site where magnetic reconnection took place. Meanwhile, starting from $\\sim$20:46 UT, the second HXR source appeared near the top of the jet and it became dominant during 20:48-20:51 UT. Finally, after the source at the jet top had faded away, another HXR source was observed to the north of the jet which reached its maximum intensity at $\\sim$20:53 UT.\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.99\\textwidth]{fig_rhessi.pdf}\n\t\\caption{{\\textit{RHESSI}} contours in 3-12 keV overlaid on the AIA 94 {\\AA} images. Panel (a) shows an image of the earlier event and panels (b)-(j) show time slices of the later event. 
HXR emissions were observed near the top of the jet in both events. The later event showed three different HXR sources: one at the base of the jet at $\\sim$20:46 UT (panel c), one near the top of the jet at $\\sim$20:50 UT (panel g), and one to the north of the jet at $\\sim$20:53 UT (panel j).}\n\t\\label{fig:rhessi_im}\t\n\\end{figure*}\n\n\n\\subsection{XRT data} \\label{sec:xrt}\nXRT provides additional coverage of high-temperature plasma beyond AIA hot channels and {\\textit{RHESSI}}, though only data in the thin-Be filter for part of each jet are available. This filter is sensitive to plasma temperatures around 10 MK, and shows very similar jet behavior as the AIA 94 {\\AA} and 131 {\\AA} filters. Here we include these data as supplementary observations.\n\nThin-Be filter data were available for the first 15 minutes of the earlier jet (before 17:30 UT) with a cadence of a half minute. The jet started with a very fast flow at $\\sim$17:15 UT, and reached its maximum extent in just a few minutes. Then after $\\sim$17:20 UT, it grew slightly wider and remained visible towards the end of the observation time (Figure \\ref{fig:xrt_iris}). As for the later jet, XRT missed most of its erupting process since no data were available between 20:45 UT and 21:09 UT. But after 21:09 UT, the jet was still visible in the thin-Be filter until it finally faded away at $\\sim$22:00 UT (not shown).\n\n\n\\subsection{IRIS data} \n{\\textit{IRIS}} has full temporal coverage and partial spatial coverage of the earlier jet in its 1330 {\\AA} slit-jaw images. This channel is sensitive to temperatures around 0.02 MK with a spatial resolution of 0.33 arcsec and a cadence of 10 seconds, thus it helps to investigate the dynamics of plasma at chromospheric temperatures. For the earlier event, the jet was at the corner of the field of view and most part of the jet body (but no jet base or top) was captured in these images (Figure \\ref{fig:xrt_iris}). Jet evolution in this channel was similar to that in the AIA 304 {\\AA} channel, and we used these data (in addition to the AIA data) to estimate jet velocities (see Section \\ref{velocities}). However, this channel had much less coverage of the later jet and it was not considered for that event. \n\n\n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{fig_xrt_iris.pdf}\n\t\\caption{XRT and {\\textit{IRIS}} images of the two jets at selected times. Panels (a)-(d): XRT Be-thin images of the earlier event. Jet evolution is similar to that in AIA hot filters (94 {\\AA} and 131 {\\AA}). Panels (e)-(g): XRT Be-thin images of the later event. No data was available between 20:44 and 21:09 UT. Panel (h): An {\\textit{IRIS}} SJI 1330{\\AA} image of the earlier event. The jet was located at the corner of the {\\textit{IRIS}} field of view. }\n\t\\label{fig:xrt_iris}\t\n\\end{figure*}\n\n\n\\section{Data analysis} \\label{sec:analysis}\n\n\\subsection{Differential emission measure (DEM) analysis} \\label{sec:DEM}\nWe carried out a differential emission measure (DEM) analysis to investigate the temperature profile for these two events. A DEM describes a plasma distribution with respect to temperature along the line of sight, and it is directly related to the observed flux $F$ for a particular instrument via\n\\begin{equation}\n\tF=\\int{R(T) \\cdot DEM(T) \\, dT},\n\\end{equation} \nwhere $R$ is the temperature response of that instrument. 
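To make Eq. (1) concrete, the short Python sketch below (with entirely made-up numbers; the Gaussian response is only a stand-in for the actual AIA, XRT, and RHESSI temperature responses discussed next) shows how a discretised DEM and a temperature response combine into a predicted flux.
\begin{verbatim}
import numpy as np

# illustrative temperature grid and a two-component (cool + hot) DEM
logT = np.linspace(5.5, 7.5, 200)                 # log10 T [K]
T = 10.0 ** logT
dem = 1e22 * np.exp(-0.5 * ((logT - 6.3) / 0.15) ** 2) \
    + 1e21 * np.exp(-0.5 * ((logT - 7.0) / 0.10) ** 2)   # cm^-5 K^-1 (made up)

# a toy instrument response peaking near 10 MK (placeholder for the real
# responses returned by the instrument software quoted in the text)
response = 1e-24 * np.exp(-0.5 * ((logT - 7.0) / 0.12) ** 2)

# Eq. (1) discretised: F = integral of R(T) * DEM(T) dT
flux = np.trapz(response * dem, T)
print(f"predicted flux: {flux:.3e} (arbitrary units)")
\end{verbatim}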
In this analysis, we used the regularization method developed by \\citet{2012A&A...539A.146H, 2013A&A...553A..10H} for DEM inversion. We considered two different data selections: (a) AIA bandpass filter data only, where we used data from six AIA bandpass filters that are sensitive to coronal temperatures: 94 {\\AA}, 131 {\\AA}, 171 {\\AA}, 193 {\\AA}, 211 {\\AA}, and 335 {\\AA}; and (b) a combination of multi-instrument data, where we used the same set of AIA data, together with HXR measurements in 4-5, 5-6, and 6-7 keV bands from {\\textit{RHESSI}}, and thin-Be filter data from XRT if available. The {\\textit{RHESSI}} 4-5 keV and 5-6 keV energy bands were selected because they measure plasma temperature via the bremsstrahlung continuum, and the 6-7 keV energy band was particularly important as it includes the 6.7 keV Fe line complex. The uncertainties for AIA data were estimated via the SSWIDL procedure ``\\texttt{aia\\textunderscore bp\\textunderscore estimate\\textunderscore error.pro}'', added in quadrature with a systematic error of 10\\%. The uncertainties for {\\textit{RHESSI}} and XRT data were both estimated as 20\\%.\n\nThe temperature responses for AIA and XRT filters were generated through SSWIDL routines ``\\texttt{aia\\textunderscore get\\textunderscore response.pro}'' and ``\\texttt{make\\textunderscore xrt\\textunderscore temp\\textunderscore resp.pro}'', respectively. To obtain the temperature responses for {\\textit{RHESSI}} in different energy bands, we first calculated the isothermal HXR spectra as a function of energy for multiple temperatures ranging from 3 MK to 30 MK, using the SSWIDL routine ``\\texttt{f\\textunderscore vth.pro}''. Thus for each energy band, we obtained a series of photon fluxes at different temperatures, which would correspond to the temperature response (in photon space) for that energy band after applying proper normalization. (The {\\textit{RHESSI}} instrument response was already taken into account when producing HXR images, thus it did not need to be included in the temperature response.) In above calculation, coronal abundances were adopted.\n\nWe calculated the DEMs for four regions where HXR emissions were observed, around the times when each HXR source reached its maximum intensity: the top of the earlier jet at 17:24 UT, the base of the later jet at 20:46 UT, the top of the later jet at 20:50 UT, and the loop to the north of the later jet at 20:53 UT. Each region was selected based on contours in AIA 131 {\\AA} images and the observed intensities were averaged over the whole region. We first obtained the DEM results using AIA data only (black lines in Figure \\ref{fig:dem_ave}), as well as the corresponding residuals in data space (asterisks in Figure \\ref{fig:dem_ave}). All these AIA-alone DEMs indicate the existence of multi-thermal plasma, each with a high-temperature component peaking around 10 MK. However, although these AIA-alone DEMs had good enough predictions in the AIA channels that were used for the DEM inversion, they failed to predict the HXR measurements well. 
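As a schematic of the band-response construction described earlier in this subsection, the following Python sketch assembles an energy-band temperature response by integrating an isothermal spectrum over the band at each temperature; the function toy_vth here is a crude continuum stand-in (no Fe line complex, arbitrary normalisation) and not the f_vth.pro calculation itself.
\begin{verbatim}
import numpy as np

def band_response(isothermal_spectrum, temperatures, e_lo, e_hi, n_e=50):
    """Temperature response (photon space) of one energy band: integrate an
    isothermal photon spectrum over [e_lo, e_hi] keV for each temperature."""
    energies = np.linspace(e_lo, e_hi, n_e)
    return np.array([np.trapz(isothermal_spectrum(energies, T), energies)
                     for T in temperatures])

def toy_vth(E_keV, T_K):
    """Crude bremsstrahlung-like continuum (arbitrary normalisation)."""
    kT_keV = 8.617e-8 * T_K            # Boltzmann constant in keV/K
    return np.exp(-E_keV / kT_keV) / (E_keV * np.sqrt(kT_keV))

temps = np.linspace(3e6, 3e7, 28)      # 3-30 MK, as in the text
resp_4_5 = band_response(toy_vth, temps, 4.0, 5.0)
resp_6_7 = band_response(toy_vth, temps, 6.0, 7.0)
print(resp_6_7 / resp_4_5)             # ratio grows with temperature
\end{verbatim}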
As shown by the blue asterisks in the residual plots, the photon fluxes predicted by the AIA-alone DEMs in the {\\textit{RHESSI}} 4-5 keV and 5-6 keV energy bins were always lower than the actual measurements, and the line emissions in the 6-7 keV energy bin were very prominent compared to the bremsstrahlung continuum.\n\nTo have a better understanding of the level of agreement between AIA and {\\textit{RHESSI}}, we carried out a more complete quantitative comparison of HXR fluxes between these two instruments. In this exercise, we predicted the HXR spectrum in the 3-15 keV energy range for the top region of the earlier jet\u00a0using the AIA-alone DEM and compared\u00a0it with the spectrum directly measured by {\\textit{RHESSI}}. The predicted HXR spectrum was calculated according to Eq.(1), with R being the {\\textit{RHESSI}} temperature response. The results are shown in the left panel of Figure \\ref{fig:comp}, where the AIA-predicted HXR fluxes were consistently lower than the {\\textit{RHESSI}} fluxes, indicating a possible cross-calibration factor between AIA and {\\textit{RHESSI}}. And again, the AIA-alone DEM predicted much stronger line emissions in the 6-7 keV energy bin over the continuum as compared to the actual {\\textit{RHESSI}} observation. Reasons for these disagreements could be some instrumental effects that are not well understood, such as the change in {\\textit{RHESSI}} blanketing with respect to time, or could be the possible ``non-standard'' elemental abundances in these events that we are unable to characterize.\n\nBecause of the discrepancies mentioned above, incorporating {\\textit{RHESSI}} data into this DEM analysis is challenging. To obtain a DEM solution that could successfully predict both the HXR continuum and the line feature at the same time, we found that a cross-calibration factor between AIA and {\\textit{RHESSI}} was required and the initial DEM guess must be very carefully chosen. We had the best chance of success when using a ``modified'' AIA-alone DEM as the initial guess, where we substituted the high-temperature component of each AIA-alone DEM with a Gaussian distribution that peaked around the temperature given by {\\textit{RHESSI}} spectroscopy (more details of spectroscopy will be discussed in Section \\ref{sec:imag_spec}). The height, width, and exact peak location of that Gaussian distribution were tested with a series of values, and we selected the most robust ones. In this piece of analysis, we scaled down the photon fluxes from {\\textit{RHESSI}} by a factor of 3.5, but this cross-calibration factor could be in the range of 3-5 and was not well constrained. Incidentally, this factor that we found here is similar to the AIA-{\\textit{RHESSI}} discrepancy found by \\citet{2013ApJ...779..107B}. In addition, in some literature a factor of 2-3 was suggested for cross-calibration between AIA and XRT \\citep[e.g.][]{2015ApJ...806..232S, 2017ApJ...844..132W}. We found that multiplying a factor of 2 to the XRT response would result in a better agreement between the predicted and measured Be-thin filter data; thus that factor was also included here.\n\nThe joint DEMs (red lines) and the data-space residuals (red triangles) are also plotted in Figure \\ref{fig:dem_ave}, along with the AIA-alone DEM results. These joint DEMs are the only set of solutions we found that fit both the line emission and the bremsstrahlung continuum well. 
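The ``modified'' initial guess described above can be sketched as follows; the temperature grid, the boundary between the cool and hot parts, and the Gaussian parameters below are purely illustrative and not the values adopted for these events.

\begin{verbatim}
import numpy as np

def modified_initial_guess(logT, dem_aia, logT_cut=6.8,
                           logT_peak=7.05, width=0.10):
    """Keep the AIA-alone DEM below logT_cut and replace the hot component
    with a Gaussian (in log T) peaked near the RHESSI-derived temperature."""
    guess = dem_aia.copy()
    hot = logT >= logT_cut
    height = dem_aia[hot].max()            # placeholder normalization
    guess[hot] = height * np.exp(-((logT[hot] - logT_peak) / width) ** 2)
    return guess

# Toy AIA-alone DEM with a cool and a hot bump (placeholders).
logT = np.linspace(5.5, 7.5, 41)
dem_aia = 1e21 * np.exp(-((logT - 6.3) / 0.2) ** 2) \
        + 1e19 * np.exp(-((logT - 7.1) / 0.2) ** 2)
dem_start = modified_initial_guess(logT, dem_aia)
\end{verbatim}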
For all the selected regions, the joint DEMs have very similar cool components as the AIA-alone DEMs, but the hot components of the joint DEMs tend to be more isothermal and slightly cooler. Particularly, HXR constraints significantly reduced the amount of plasma above $\\sim$15 MK (otherwise the predicted line emission was always too prominent). However, previous studies have seen larger discrepancies between bandpass filter DEMs and the ones that included HXR constraints. For example, in the DEM analysis for a quiescent active region presented by \\citet{2009ApJ...704..863S}, a high-temperature component that peaked around $10^{7.4}$K was found when only using data from XRT filters, but the DEM for that component was reduced by more than one order of magnitude when combining observations from both XRT and {\\textit{RHESSI}}. Compared to that study, the AIA-alone DEMs here are not too far from the joint DEMs that incorporated HXR data.\n\nWe further compared the HXR fluxes predicted by the joint DEM with {\\textit{RHESSI}} measurements (with a cross-calibration factor applied) for the top region of the earlier jet, as shown in the right panel of Figure \\ref{fig:comp}. As expected, the two HXR spectra had a much better agreement at lower energies than the spectra predicted from AIA-only DEMs did, including both the overall continuum and the line feature. Besides, for higher energies around 10 keV, {\\textit{RHESSI}} measurements had systematically higher emissions, suggesting a possible non-thermal component for this source. This is consistent with our later findings through spectral analysis (Section \\ref{sec:imag_spec}). As a side note, later spectral analysis suggests that for some of the HXR sources non-thermal emissions might dominate in the 6-7 keV (and maybe 5-6 keV) energy bin(s). In this scenario, the fluxes from those thermal sources would be even lower and the joint-DEMs shown here provide upper limits for the possible amount of hot plasma. \n\nAs the last part of the DEM analysis, we examined the temporal evolution of the DEM maps in 11-14 MK (i.e. the hot component) for each event (Figures \\ref{fig:em1} and \\ref{fig:em2}). Because of the missing {\\textit{RHESSI}} and XRT data, and the fact that the AIA-alone DEMs in this temperature range qualitatively agree with the joint DEMs, these DEM maps were all generated using AIA data only. In both events, hot plasma first appeared at the base of the jet (which was the same location from where the minifilament erupted and magnetic reconnection occurred); however, as the hot plasma at the jet base gradually cooled down, more and more hot plasma was observed near the top of the jet and that location was mostly stationary. These DEM maps show consistent results with the location and temporal evolution of the {\\textit{RHESSI}} HXR sources.\n\n\\begin{figure}\n\t\\plotone{fig_dem_comp.pdf}\n\t\\caption{DEMs and data-space residuals (defined as (model-data)\/error) for four different selected regions: the top of the earlier jet at 17:24 UT (top left), the base of the later jet at 20:47 UT (top right), the top of the later jet at 20:49 UT (bottom left), and a loop to the north of the later jet at 20:53 UT (bottom right). The black lines show results for the DEM inversion that used AIA data only, and the red lines show results for the DEM inversion that used multi-instrument data from AIA, {\\textit{RHESSI}}, and XRT if available. 
The AIA-alone DEMs and the joint DEMs agree qualitatively, but the joint DEMs require a more isothermal and slightly cooler high-temperature component for each source. In the residual plots, asterisks stand for results from the AIA-alone DEMs (among which black asterisks show the residuals in AIA channels that were used for the DEM inversion while blue asterisks show the residuals in {\\textit{RHESSI}} energy bands and possibly XRT Be-thin filter predicted by this DEM), and red triangles stand for results from the joint DEMs. The HXR fluxes predicted by the joint DEMs have a much better agreement with the actual data.}\n\t\\label{fig:dem_ave}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{fig_hxr_comp.pdf}\n\t\\caption{\\textit{Left}: HXR spectrum for the source at the top of the earlier jet deduced from the AIA-alone DEM (black), compared to {\\textit{RHESSI}} measurements (red). \\textit{Right}:\u00a0HXR spectrum for the same source deduced from the joint DEM (black), compared to {\\textit{RHESSI}} measurements with a cross-calibration factor applied (red).\u00a0With a cross-calibration factor of 3.5, the HXR spectrum from the joint DEM could successfully predict the bremsstrahlung continuum and the line feature at 6.7 keV simultaneously.}\n\t\\label{fig:comp}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\plotone{fig_emmaps1.pdf}\n\t\\caption{Temporal evolution of the DEM maps in 11-14 MK for the earlier jet. Hot plasma appeared at the base of the jet during the first few minutes, but starting from $\\sim$17:20 more and more hot plasma was observed near the top of the jet and the top source became dominant at $\\sim$17:22. Color scale is in units of cm$^{-5}$ K$^{-1}$.}\n\t\\label{fig:em1}\t\n\\end{figure}\n\n\\begin{figure}\n\t\\plotone{fig_emmaps2.pdf}\n\t\\caption{Temporal evolution of the DEM maps in 11-14 MK for the later jet. Similar to the earlier event, hot plasma first appeared at the base of the jet, but was also observed near the top of the jet starting from $\\sim$20:45 and the top source became dominant at $\\sim$20:50. Color scale is in units of cm$^{-5}$ K$^{-1}$.}\n\t\\label{fig:em2}\t\n\\end{figure}\n\n\n\\subsection{Jet velocities} \\label{velocities}\n\nIdentifying different velocities associated with the jet will be helpful to differentiate possible mechanisms behind those jets. A common method for velocity estimation is making time-distance plots \\citep[e.g.][]{2016A&A...589A..79M, 2020ApJ...889..183M}. Such plots are usually produced by putting together time slices of the intensity profile along the direction of the jet, in which case the jet velocities (in the plane of the sky) are the slopes. To take into account everything within the width of the jets, here we selected a rectangular region around each jet and we summed the intensities across the width of this region. \n\nFigure \\ref{fig:td1} shows the time-distance plots for the earlier jet using seven EUV filters of AIA and the slit-jaw 1330{\\AA} filter of {\\textit{IRIS}}. Interestingly, the chromospheric filters (AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA}) are the ones where the velocities are most clearly identified and show very consistent results. 
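The mechanics of these time-distance measurements can be sketched in a few lines (the data cube, slit orientation, and the two ridge points used for the slope are placeholders; the conversion of $\sim$725 km per arcsec and the 12 s AIA cadence are standard values):

\begin{verbatim}
import numpy as np

cadence_s = 12.0               # AIA EUV cadence
km_per_pixel = 0.6 * 725.0     # 0.6 arcsec AIA pixels, ~725 km per arcsec

# Placeholder cube intensity[time, y, x]; the jet is assumed to run along y.
intensity = np.random.rand(100, 80, 40)

# Sum across the width of the rectangular cut to build the time-distance map.
td_map = intensity.sum(axis=2)             # shape: (time, distance)

# Velocity from the slope of a bright ridge: two points (frame, pixel).
(t1, d1), (t2, d2) = (10, 5), (20, 35)     # placeholder ridge points
velocity = (d2 - d1) * km_per_pixel / ((t2 - t1) * cadence_s)
print(f"plane-of-sky speed: {velocity:.0f} km/s")
\end{verbatim}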
The 304 {\\AA} filter shows multiple upward velocities ranging from 104 km\/s to 226 km\/s, while the 1330 {\\AA} filter shows upward velocities ranging from 83 km\/s to 404 km\/s (the uncertainties for those velocities are on the order of 10-20$\\%$ considering the pixel size and the temporal cadence of the images). Also, both filters clearly indicate that some plasma returned to the solar surface (possibly) along the same trajectory as the original jet, with downflow velocities of $\\sim$110 km\/s. For the rest of the AIA filters, similar upward and downward velocities as mentioned above can partly be seen in the 171 {\\AA} filter (sensitive to $\\sim$0.6 MK plasma), but could barely be seen in other ones. However, there were also some really fast outflows at the beginning of this jet in the 131{\\AA} filter (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK plasma), which has a velocity of $\\sim$700 km\/s.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_td_jet1_paper.pdf}\n\t\\caption{Time-distance plots of the earlier jet in seven AIA EUV filters and the {\\textit{IRIS}} 1330 {\\AA} filter. Slopes for velocity calculation are shown as dashed lines. The chromospheric filters (AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA}) show various upward velocities around 200 km\/s and downward velocities around 110 km\/s. The AIA 131 {\\AA} filter (sensitive to both $\\sim$0.4 MK and $\\sim$10 MK) shows a faster upward velocity of 694 km\/s at the beginning of the jet. In the rest of the AIA filters, velocities are less apparent. (The black line at $\\sim$17:21 in the 171 {\\AA} plot is due to some instrument issue.)}\n\t\\label{fig:td1}\t\n\\end{figure}\n\nThe time-distance plots for the later jet describe a slightly different picture (Figure \\ref{fig:td2}). The 304 {\\AA} filter again shows multiple upward velocities ranging from 192 km\/s to 251 km\/s, and downward velocities around 130 km\/s (the uncertainty is on the order of 10-20\\%). However, these main upward velocities\u00a0can be clearly identified in all seven AIA filters, including both cool and hot ones. Besides, in the 131 {\\AA} filter, a faster outflow at the beginning of the jet is still identifiable but harder to see compared to the earlier event, and the velocity for this outflow is 377 km\/s. These velocities will be compared to other studies and to models in Section \\ref{sec:v_discuss}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_td_jet2_paper.pdf}\n\t\\caption{Time-distance plots of the later jet in seven AIA EUV filters. Slopes for velocity calculation are shown as dashed lines. Unlike the earlier event, the main upward velocities can be clearly identified in all filters. The AIA 131 {\\AA} filter still shows a faster upward velocity of 377 km\/s at the beginning of the jet, but this velocity is less apparent.}\n\t\\label{fig:td2}\t\n\\end{figure}\n\n\n\\subsection{Imaging spectroscopy} \\label{sec:imag_spec}\n\nTo study the accelerated electron populations in these events, we performed imaging spectroscopy for the four HXR sources observed by {\\textit{RHESSI}}. For each source, a one-minute time interval during which the source reached its maximum HXR intensity was first selected by eye based on the {\\textit{RHESSI}} images. These images were produced using the CLEAN algorithm and detectors 3, 6, 8, and 9. Then we chose a circular region which contained that source and obtained the spectrum for the selected region. 
Finally, we carried out spectral fitting using the OSPEX software package in the energy range of 3-15 keV. \n\nAs mentioned in Section \\ref{sec:DEM}, the comparison of the joint DEM and {\\textit{RHESSI}} measurements suggests a non-thermal component in the HXR spectrum. To further confirm this, we first fitted the spectra with an isothermal model (not shown), but the models always overpredicted the fluxes at the 6.7 keV line complex and had systematically low emissions at energies above 10 keV for all the sources. This indicated that there should be another component in the spectra, either due to a second thermal distribution or a non-thermal distribution. However, the results of fitting for a double thermal model (not shown) had unphysical fit parameters for one of the thermal components, and it again overpredicted the line emission, making this scenario unlikely. Therefore, we confirmed that there should be non-thermal emissions in these events. We then added a thick-target non-thermal component to the fitting (the justification for the thick-target regime will be discussed in Section 4.2), and could obtain good fits across the entire observed energy range. (We used the temperatures from those fits as a reference when generating the initial DEM guess for the joint DEM inversion.) However, due to the limited number of energy bins and the number of free parameters, some of the fit parameters were not well constrained and had uncertainties over 100$\\%$. To further reduce the uncertainties, we performed another fitting with a fixed temperature, which was chosen to be the average temperature of the hot component derived from the joint DEMs. The resulting spectra are shown in Figure \\ref{fig:spec} and the parameters are reported in Table \\ref{tab:spec_fit}. Interestingly, the non-thermal electron power laws in all sources have similar spectral indices around 10 and low energy cutoffs around 9 keV. While these non-thermal power laws are steeper than in most flares, the parameters are consistent with the range found for microflares by \\citet{2008ApJ...677..704H}.\n\n\\begin{figure}\n\t\\plotone{fig_spec_paper.pdf}\n\t\\caption{{\\textit{RHESSI}} spectra for the four HXR sources observed in these two events, each during a one-minute interval when the source approximately reached its maximum HXR intensity. All spectra can be fitted well with an isothermal (blue) plus thick-target (red) model. 
Note that in these fits, the isothermal temperatures were fixed to be the average temperatures of the hot components derived from the joint DEMs.}\n\t\\label{fig:spec}\t\n\\end{figure}\n\n\\begin{deluxetable*}{lcccccc}\n\t\\tablenum{1}\n\t\\tablecaption{Fit parameters for the four {\\textit{RHESSI}} HXR sources assuming an isothermal plus thick-target model \\label{tab:spec_fit}}\n\t\\tablehead{\n\t\t\\colhead{} & \\colhead{Time} & \n\t\t\\colhead{Emission measure} &\n\t\t\\colhead{Temperature} & \\colhead{spectral index} &\n\t\t\\colhead{low energy cutoff} \\\\\n\t\t\\colhead{} & \\colhead{} &\n\t\t\\colhead{($10^{46} \\mathrm{cm}^{-3}$)} &\n\t\t\\colhead{(MK, fixed)} & \\colhead{} & \\colhead{(keV)} \n\t}\n\t\\startdata\n\tJet 1 top source & 17:23:30-17:24:30 & 1.8 $\\pm$ 0.5 & 11.1 & 11.4 $\\pm$ 3.6 & 9.3 $\\pm$ 1.6 \\\\\n\tJet 2 base source & 20:46:00-20:47:00 & 4.7 $\\pm$ 1.0 & 9.9 & 9.1 $\\pm$ 0.9 & 8.5 $\\pm$ 0.9 \\\\\n\tJet 2 top source & 20:50:00-20:51:00 & 4.1 $\\pm$ 0.5 & 10.5 & 10.3 $\\pm$ 1.4 & 9.4 $\\pm$ 1.2 \\\\\n\tJet 2 northern source & 20:53:00-20:54:00 & 3.9 $\\pm$ 1.3 & 9.6 & 8.5 $\\pm$ 0.8 & 8.7 $\\pm$ 1.0 \n\t\\enddata\n\\end{deluxetable*}\n\n\n\n\\section{Discussion} \\label{sec:discussion}\n\\subsection{Jet velocities and driving mechanisms} \\label{sec:v_discuss}\nIn these two events, we observed two types of upward velocities in jets. One type falls within the range of 80-400 km\/s, most clearly seen in the AIA 304 {\\AA} and {\\textit{IRIS}} 1330 {\\AA} filters (sensitive to chromospheric temperatures) and possibly visible in other filters. This type of velocity is consistent with many previous studies of coronal jets \\citep[e.g.][]{1996PASJ...48..123S, 2007PASJ...59S.771S, 2016A&A...589A..79M, 2016ApJ...832L...7P, 2020ApJ...889..183M} where the jet velocities usually range from a few tens of km\/s to $\\sim$500 km\/s, with an average around 200 km\/s. The other velocities, $\\sim$700 km\/s and $\\sim$400 km\/s respectively for the two jets, could only be identified in the 131 {\\AA} filter (sensitive to $\\sim$0.4 MK and hot temperatures $\\sim$10 MK) at the beginning of each event (though harder for the later jet). The velocity of $\\sim$400 km\/s is still within the common range for coronal jets, but it is faster than the other velocities observed in that jet. The velocity of $\\sim$700 km\/s seems to be faster than most of the observed jets. However, such velocities are not rare and have been reported in a few coronal jet observations by XRT \\citep{2007PASJ...59S.771S, 2007Sci...318.1580C}.\n\nOne possible acceleration mechanism for coronal jets is chromospheric evaporation, which is also the responsible mechanism for some plasma flows in solar flares. In this process, the energy released from magnetic reconnection is deposited in the chromosphere, compresses and heats the plasma there, and produces a pressure-driven evaporation outflow on the order of sound speed. In fact, \\citet{1984ApJ...281L..79F} derived a theoretical upper limit for the velocity of this evaporation outflow to be 2.35 times the local sound speed, where the sound speed $c_s$ can be calculated as: $c_{s}=147\\sqrt{\\frac{T}{\\mathrm{1MK}}}$ km\/s assuming an isothermal model \\citep[e.g.][]{2004psci.book.....A}. In the 304 {\\AA} filter (characteristic temperature $10^{4.7}$ K), this upper limit corresponds to a very low speed of 77 km\/s, indicating that the cool plasma is unlikely driven by chromospheric evaporation. 
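For completeness, the quoted limit follows directly from these formulas: at the 304 {\AA} characteristic temperature, $T = 10^{4.7}\,\mathrm{K} \approx 0.05$ MK, so that
\[
c_s \approx 147\sqrt{0.05}\ \mathrm{km\,s^{-1}} \approx 33\ \mathrm{km\,s^{-1}},
\qquad 2.35\,c_s \approx 77\ \mathrm{km\,s^{-1}}.
\]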
Also, common velocities reported by observations of chromospheric evaporation usually fall within the range of tens of km\/s up to 400 km\/s \\citep[e.g.][]{2013ApJ...767...55D, 2015ApJ...811..139T, 2015ApJ...805..167S}, thus chromospheric evaporation seems not able to explain the very fast flow of the earlier jet observed in the hot 131{\\AA} filter. Furthermore, it is expected that the velocity\u00a0would increase\u00a0with the temperature if a jet is generated\u00a0by chromospheric evaporation \\citep{2012ApJ...759...15M}, but here we have seen very consistent velocities in all seven AIA filters that are sensitive to different temperatures in the later event. For all these reasons, if both jets are driven by the same mechanism, that mechanism is likely \\textit{not} chromospheric evaporation but magnetic tension instead. However, it is not clear why the earlier jet shows more complicated and various velocities (even in a single channel) if both jets are driven similarly.\n\n\n\\subsection{Particle acceleration locations} \nIn Section \\ref{sec:imag_spec}, we fitted the {\\textit{RHESSI}} spectra of the four HXR sources with an isothermal plus thick-target model. Here we first justify that the thick-target regime is a reasonable approximation.\n\nThe column depth (defined as $N_s=\\int ndz$ where n is the plasma density) to fully stop an electron of energy E (in units of keV) can be calculated as: $N_s=1.5\\times10^{17}\\mathrm{cm^{-2}} E^2$ \\citep[e.g.][]{krucker2008hard}. Based on this formula, Figure \\ref{fig:stopping_d} plots the relation between the stopping distance and the plasma density for a given electron energy. Under the thick-target regime, according to Table \\ref{tab:spec_fit}, the average electron energy for the source at the top of the earlier jet is $\\sim$10 keV and the density there is $6\\times10^{9}$ $\\mathrm{cm}^{-3}$ (derived from the joint DEM), which corresponds to a distance of 36 arcsec in average that the electrons can travel before fully stopped by the ambient plasma. Similar average electron energies around 10 keV are found for the other three HXR sources in the later event, and the densities of those sources are $(0.8-1.9)\\times10^{10}$ $\\mathrm{cm}^{-3}$, resulting in stopping distances of 10-28 arcsec. Moreover, in Section \\ref{sec:DEM}, we report a possible cross-calibration factor around 3.5 between AIA and {\\textit{RHESSI}}. That factor is not included in the above calculation; however, if the cross-calibration factor is included, all the densities above would be multiplied by $\\sqrt{3.5}$, corresponding to even shorter stopping distances of 5-20 arcsec. In general, these stopping distances are comparable to the size of the HXR sources, meaning that accelerated electrons deposit a considerable portion of their energies into each source. Furthermore, if this is a thin-target regime, the spectral indices would be slightly smaller but the average electron energies would still be $\\sim$10 keV. This would result in very similar stopping distances that are comparable to source sizes, which is not consistent with the thin-target assumption. 
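The stopping-distance estimates above follow from the column-depth formula in a couple of lines; the only extra assumption in this sketch is the plane-of-sky conversion of $\sim$725 km per arcsec.

\begin{verbatim}
KM_PER_ARCSEC = 725.0   # approximate plane-of-sky scale

def stopping_distance_arcsec(E_keV, n_cm3):
    """Distance traveled before collisional stopping, from
    N_s = 1.5e17 cm^-2 * E^2 (E in keV) and d = N_s / n."""
    N_s = 1.5e17 * E_keV ** 2          # cm^-2
    d_cm = N_s / n_cm3                 # cm
    return d_cm / 1e5 / KM_PER_ARCSEC  # arcsec

# ~10 keV electrons in a density of 6e9 cm^-3: a few tens of arcsec,
# comparable to the source sizes discussed in the text.
print(f"{stopping_distance_arcsec(10.0, 6e9):.0f} arcsec")
\end{verbatim}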
Therefore, we conclude that the HXR sources observed in these events can be approximated as thick targets, and mildly accelerated electrons are found at all these locations.\n\nSince the three HXR sources observed in the later event share similar electron distributions, here comes the next question: were the HXR emissions in the later event produced by the same population of accelerated electrons that traveled to different locations, or were they produced by different groups of accelerated electrons individually? In the standard jet models, reconnection happens near the base of the jet, which would require electrons to be accelerated near the base and travel upwards along the magnetic field lines to produce the HXR source at the jet top. However, from the DEM analysis, we find that the densities in the body of the jet (where there were not many HXR emissions) are $\\sim1\\times10^{10}$ $\\mathrm{cm}^{-3}$, so the stopping distance along the jet body is still 20-30 arcsec (for both jets). This is a few times smaller than the distance from the jet base to the jet top; therefore the HXR source at the top of each jet was produced by electrons that were accelerated very close to this source, rather than electrons that traveled far from the primary reconnection site at the jet base. This finding is in line with a similar one made for the powerful X8.3 class flare on September 10, 2017, obtained with an entirely different methodology that employs microwave imaging spectroscopy \\citep{fleishman2022solar}. \n\nAnother possible explanation for the sources at the jet top could be that the jets were actually ejected along large closed loops perpendicular to the plane of the sky rather than the so-called ``open'' field lines. Then the top of the jet is in fact the apex of the loop, which would have higher emissions purely because of the line-of-sight effect. However, even in this scenario the stopping distance along the jet body would remain the same, thus the conclusion of an additional particle acceleration site near the jet top (or loop apex) still holds regardless of jet geometry.\n\nThe HXR source to the north of the later jet appeared last among the three HXR sources, but still during a time when the jet was visible in EUV filters. It is also likely related to the jet because the formation of the jet would change the magnetic configuration of the active region, but neither the electron path nor the density along the path is clear if the energetic electrons traveled from the jet base to the northern location. The typical coronal density for an active region is about $10^{9}$ $\\mathrm{cm^{-3}}$ \\citep[e.g.][]{1961ApJ...133..983N}, corresponding to a stopping distance of 200 arcsec for electrons of 10 keV. Thus, in general situations energetic electrons could travel a decent distance in the corona, but it is also possible that the density in this active region is larger than that typical value. However, due to lack of data, we couldn't determine which was the case for this source.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.75\\textwidth]{fig_stopping_d.pdf}\n\t\\caption{The relation between collisional stopping distances and ambient plasma densities for electrons of certain energies. Red plus signs mark the values for the densities (without a cross-calibration factor) and average energies for the four observed HXR sources, while brown triangles mark the values with a cross-calibration factor applied. 
The stopping distances for these sources are less than a few tens of arcsecs, but for lower densities and\/or higher electron energies, accelerated electrons could travel an appreciable distance in the corona. }\n\t\\label{fig:stopping_d}\t\n\\end{figure}\n\n\n\\subsection{Energy budget}\nInvestigating the partition of different energy components can help to understand the energy release process in these events. Such calculations have been done in the past for a number of flares and CMEs \\citep[e.g.][]{2012ApJ...759...71E, 2015ApJ...802...53A, 2016A&A...588A.116W}, but only for a few jets so far \\citep{2013ApJ...776...16P}. Here we present our estimates of various energy components for the later event, including kinetic energy, gravitational energy, thermal energy, and the energy in non-thermal electrons. We calculated the maximum amount of energy that could be converted into each of the forms above.\n\nThe jet's major eruption started from 20:46 UT, which was visible in all AIA filters and had a speed of $\\sim$260 km\/s (Figure \\ref{fig:td2}). The density of this plasma was derived from its DEM, which is $1.3\\times10^{10}$ $\\mathrm{cm}^{-3}$. Assuming the jet body that contained this group of plasma to be a cylinder, the peak kinetic energy of the jet is $5\\times10^{26}$ erg.\n\nThe maximum height of this jet is $\\sim$80 arcsec; however, as the height increases the amount of plasma that traveled there decreases, and it's not clear what fraction of plasma finally reached the maximum height. Therefore, instead of calculating the maximum gravitational energy of the jet, we set an upper limit of $2\\times10^{26}$ erg, which is the gravitational energy if all the plasma of the major eruption reached a height of 80 arcsec. This upper limit is smaller than the kinetic energy of the jet, meaning the eruption is not ballistic. \n\nThe thermal energy, $E_{th} = 3k_BTnV$, is dominated by contributions from HXR sources. Using the joint DEMs, the peak thermal energy for each HXR source is about $5\\times10^{27}$ erg. This value is consistent with the flare thermal energies found for other jets in \\citet{2020ApJ...889..183M}.\n\nThe energy in non-thermal electrons can be simply estimated as $E_{nonth} = N_eE_{e,ave}$ where $N_e$ is the total number of accelerated electrons and $E_{e,ave}$ is the average electron energy. Adopting the thick-target approximation and using the parameters from Table \\ref{tab:spec_fit}, the non-thermal energy for each HXR source is about $(6-11)\\times10^{29}$ erg. However, this value here is calculated based on {\\textit{RHESSI}} measurements. If we apply the cross-calibration factor between AIA and {\\textit{RHESSI}} to match the calculations of other energy forms, the non-thermal energy for each HXR source becomes $(3-6)\\times10^{29}$ erg.\n\nThe energies of HXR sources in this event can be compared to those in previous studies of flare energetics. \\citet{2012ApJ...759...71E} studied 38 eruptive events (all except one were M or X class flares and most flares were accompanied by a CME), and they found that the flare thermal energies were always smaller than the energies in accelerated particles. Similar results were found in a later study by \\citet{2016A&A...588A.116W}, where the median ratio of thermal energies to non-thermal energies in electrons for 24 (C-to-X class) flares was 0.3. 
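For reference, the order-of-magnitude estimates used in this subsection come from a handful of standard formulas, sketched below for a pure-hydrogen plasma; the cylinder dimensions, maximum height, and input values are illustrative placeholders rather than the measured jet geometry.

\begin{verbatim}
import numpy as np

k_B = 1.38e-16      # erg/K
m_p = 1.67e-24      # g
g_sun = 2.74e4      # cm/s^2 (constant surface gravity assumed)
keV = 1.602e-9      # erg

def kinetic_energy(n, v, V):        return 0.5 * n * m_p * V * v ** 2
def gravitational_energy(n, V, h):  return n * m_p * V * g_sun * h
def thermal_energy(n, T, V):        return 3.0 * n * k_B * T * V
def nonthermal_energy(N_e, E_ave):  return N_e * E_ave * keV

# Placeholder jet: cylinder of radius ~2 arcsec and length ~14 arcsec,
# density 1.3e10 cm^-3, speed 260 km/s, maximum height 80 arcsec.
V = np.pi * (1.5e8) ** 2 * 1.0e9
print(f"E_kin  ~ {kinetic_energy(1.3e10, 2.6e7, V):.1e} erg")
print(f"E_grav < {gravitational_energy(1.3e10, V, 5.8e9):.1e} erg")
\end{verbatim}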
In a subclass of ``cold'' flares \\citep{2018ApJ...856..111L}, the thermal energy is equal (within the uncertainties) to the non-thermal energy deposition \\citep{2016ApJ...822...71F, 2020ApJ...890...75M, 2021ApJ...913...97F}. (Theoretically, the thermal energy cannot be less than the non-thermal energy as the latter one decays into the thermal one.) For our jet event that contains low-C class flares, the thermal energies are more than one order of magnitude smaller than the non-thermal energies. Therefore, the conclusion that the non-thermal energy is always larger than or at least equal to the thermal energy is likely consistent across a wide range of flare classes and regardless of whether the flare is associated with a jet\/CME or not. \n \nHowever, the energy partition between the jet and the associated flares is different from the energy partition between a CME and a flare. For this event, the kinetic\/gravitational energy of the jet is more than one order of magnitude smaller than the energy of the flares (thermal\/non-thermal), while in \\citet{2012ApJ...759...71E} the total energy of the CME is usually significantly larger. (The kinetic energy in confined flares is much smaller \\citep{2021ApJ...913...97F}, though.) This variety could be explained in the minifilament eruption scenario that jets and CMEs are still both parts of the same eruptive events but the energy partition changes with scale, or this could also indicate that there are fundamental differences between jets and CMEs. To further answer this question, future studies with more samples of flare-related jets are needed.\n\nLastly, it should also be noted that there are still other forms of energy that were not considered in the calculations above, such as magnetic energy, wave energy, etc. These energies could also be important components of the event energy budget, but are hard to evaluate here due to limited data.\n\n\n\\subsection{Comparison to the current jet models} \\label{comp_models}\nConsidering the locations of hot plasma as well as the HXR sources, these two jets are interesting examples to be compared with current jet models. On the one hand, the source at the base of the jet is consistent with what is expected from jet models. During the minifilament eruption at the jet base, magnetic reconnection happens close to the bottom of the corona, heating the plasma there directly and generating accelerated electrons near the reconnection site. The downward-traveling energetic electrons radiate bremsstrahlung emissions as they collide with the dense chromosphere, producing a HXR source and\/or further heating the ambient plasma at the base of the jet. On the other hand, processes after a jet's eruption are generally not considered by those models; thus the hot plasma and the HXR source at the top of the jet are not expected. Our observations have shown that additional particle acceleration could happen at other locations besides the jet base. In other words, there could be multiple reconnection and energy release sites in a single jet event. Also, despite the significantly different particle acceleration sites (and even two separate events), the non-thermal electrons share very similar energy distributions. The spectral indices around 10 and the low energy cutoffs around 9 keV suggest that jet reconnection typically produces only mild particle acceleration. 
These low energy cutoffs are similar to those of the cold flares \\citep[e.g.,][]{2020ApJ...890...75M}, while the spectra are much softer in the case of the jets.\n\nAnother interesting point about these events is the relation between hot and cool material. For both jets, the cool ejections observed in the 304 {\\AA} filter were adjacent to the hot ejections observed in the 94 {\\AA} and 131 {\\AA} filters. While past simulations have successfully produced a hot jet and a cool jet (or surge) in a single event, it is generally expected that hot and cool jets are driven through different mechanisms. For example, in the simulation by \\citet{1996PASJ...48..353Y}, the hot jet was accelerated by the pressure gradient while the cool surge was accelerated by magnetic tension. Similarly in an observational study by \\citet{2012ApJ...759...15M}, the hot component (at coronal temperatures) was generated by chromospheric evaporation while the cool component (at chromospheric temperatures) was accelerated by magnetic force. However, though the observation of the earlier jet doesn't conflict with this picture, the later jet had consistent velocities in hot and cool filters, indicating that some of the hot components might be driven by a very similar process as the cool components in that event. Therefore, at least in some cases the hot and cool components must be more closely related, and a jet model should be able to explain this kind of observation as well as those similar to \\citet{2012ApJ...759...15M}.\n\n\n\n\\section{Summary} \\label{sec:summ}\nIn this paper, we present a multi-wavelength analysis of two active region jets that were associated with low C-class flares on November 13, 2014. Key aspects of this study include:\n\n\\begin{enumerate}\n\t\\item In both events, hot ($\\gtrsim$10MK) plasma not only appeared near the base of the jet (which is the location of the primary reconnection site) at the beginning, but also appeared near the top of the jet after a few minutes. \n\t\\item Four {\\textit{RHESSI}} HXR sources were observed: one (at the jet top) in the first event and three (at the jet base, jet top, and a location to the north of the jet) in the later event. All those sources showed evidence of mildly accelerated electrons which had spectral indices around 10 and extended to low energies around 9 keV. \n\t\\item Various jet velocities were identified through time-distance plots, including major upward velocities of $\\sim$250 km\/s and downward velocities of $\\sim$100 km\/s. Fast outflows of $\\sim$700 km\/s or $\\sim$400 km\/s were observed only in the hot AIA 131 {\\AA} filter at the beginning of each jet.\u00a0These velocities indicate that the jets were likely driven by magnetic force.\n\t\\item The HXR source and hot plasma at the base of the jet were expected from current models. However, the HXR sources at the top of the jet were produced by energetic electrons that were accelerated very close to the top location, rather than electrons that were accelerated near the jet base but traveled to the top. This means that there was more than one reconnection and particle acceleration site in each event.\t\n\\end{enumerate}\t\n\nCoronal jets are an important form of solar activity that involves particle acceleration, and they share similarities with larger eruptive events such as CMEs. HXRs can provide important constraints on hot plasma within a coronal jet, as well as unique diagnostics of energetic electron populations. 
To obtain the best constraints for jet models, observations should take advantage of state-of-the-art instruments in different wavebands, but only a few studies have included HXR observations to date. In future work, we would like to extend the method described in this paper to other coronal jets. Those jets could come from the jet database that will be generated by the citizen science project Solar Jet Hunter \\footnote{https:\/\/www.zooniverse.org\/projects\/sophiemu\/solar-jet-hunter} (which was launched through the Zooniverse platform in December 2021). We expect studies with more jet samples to further advance our understanding of particle acceleration in jets.\n\nFurthermore, as shown in this study, HXR sources that are associated with jets could be found in the corona, and they could be faint in some events, thus not identified by current instruments. One solution is to develop direct focusing instruments, such as that demonstrated by the Focusing Optics X-ray Solar Imager (\\textit{FOXSI}) sounding rocket experiment, which will provide better sensitivity and dynamic range for future HXR observations. \n\n\\acknowledgments\nThis work is supported by NASA Heliophysics Guest Investigator grant 80NSSC20K0718. Y.Z. is also supported by the NASA FINESST program 80NSSC21K1387. N.K.P. acknowledges support from NASA's {\\textit{SDO}}\/AIA and HGI grant. We thank Samaiyah Farid for helpful discussions. We are also grateful to the {\\textit{SDO}}\/AIA, {\\textit{RHESSI}}, {\\textit{Hinode}}\/XRT, and {\\textit{IRIS}} teams for their open data policy. {\\textit{Hinode}} is a Japanese mission developed and launched by ISAS\/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and the NSC (Norway). {\\textit{IRIS}} is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe discrete logarithm problem (DLP) was first proposed as a hard\nproblem in cryptography in the seminal article of Diffie and\nHellman~\\cite{DiHe76}. Since then, together with factorization, it has\nbecome one of the two major pillars of public key cryptography. As a\nconsequence, the problem of computing discrete logarithms has\nattracted a lot of attention. From an exponential algorithm in $1976$,\nthe fastest DLP algorithms have been greatly improved during the past\n$35$ years. A first major progress was the realization that the DLP in\nfinite fields can be solved in subexponential time, i.e. $L(1\/2)$\nwhere $L_N(\\alpha)=\\exp\\left(O((\\log N)^\\alpha(\\log\\log\n N)^{1-\\alpha})\\right)$. The next step further reduced this to a\nheuristic $L(1\/3)$ running time in the full range of finite fields,\nfrom fixed characteristic finite fields to prime\nfields~\\cite{Adl79,Cop84,Gor93,Adl94,JoLe06,JLVS07}.\n\nRecently, practical and theoretical advances have been\nmade~\\cite{Jo13faster,GGMZ13,Joux13} with an\nemphasis on small to medium characteristic finite fields and composite\ndegree extensions. The most general and efficient\nalgorithm~\\cite{Joux13} gives a complexity of $L(1\/4+o(1))$ when the\ncharacteristic is smaller than the square root of the extension\ndegree. 
Among the ingredients of this approach, we find the use of a\nvery particular representation of the finite field; the use of the\nso-called {\\em systematic equation}\\footnote{While the terminology is\nsimilar, no parallel is to be made with the systematic equations as\ndefined in early works related to the computation discrete logarithms in\n${\\mathbb F}_{2^n}$, as~\\cite{BlFuMuVa84}.}; and the use of algebraic\nresolution of bilinear polynomial systems in the individual logarithm\nphase.\n\nIn this work, we present a new discrete logarithm algorithm, in\nthe same vein as in~\\cite{Joux13} that uses an asymptotically more\nefficient descent approach. The main result gives a {\\it\n quasi-polynomial} heuristic complexity for the DLP in finite fields\nof small characteristic. By quasi-polynomial, we mean a complexity of\ntype $n^{O(\\log n)}$ where $n$ is the bit-size of the cardinality of\nthe finite field. Such a complexity is smaller than any\n$L(\\epsilon)$ for $\\epsilon>0$. It remains super-polynomial\nin the size of the input, but offers a major asymptotic improvement\ncompared to $L(1\/4+o(1))$.\n\nThe key features of our algorithm are the following.\n\\begin{itemize}\n\\item We keep the field representation and the systematic equations of~\\cite{Joux13}.\n\\item The algorithmic building blocks are elementary. In particular,\n we avoid the use of Gr\u00f6bner basis algorithms.\n\\item The complexity result relies on three key heuristics:\nthe existence of a polynomial representation of the appropriate\nform; the fact that the smoothness probabilities of some non-uniformly\ndistributed\npolynomials are similar to the probabilities for uniformly random\npolynomials of the same degree; and the linear independence of some\nfinite field elements related to the action of $\\PGL_2({\\mathbb F}_{q})$.\n\\end{itemize}\n\nThe heuristics are very close to the ones used in~\\cite{Joux13}. In\naddition to the arguments in favor of these heuristics already given\nin~\\cite{Joux13}, we performed some experiments to validate them on\npractical instances. \\medskip\n\nAlthough we insist on the case of finite fields of small\ncharacteristic, where quasi-polynomial complexity is obtained, our new\nalgorithm improves the complexity of discrete logarithm computations in a\nmuch larger range of finite fields.\n\nMore precisely, in finite fields of the form ${\\mathbb F}_{q^k}$, where $q$\ngrows as $L_{q^k}(\\alpha)$, the complexity becomes\n$L_{q^k}(\\alpha+o(1))$. As a consequence, our algorithm is\nasymptotically faster than the Function Field Sieve algorithm in\nalmost all the range previously covered by this algorithm. Whenever \n$\\alpha<1\/3$, our new algorithm offers the smallest complexity. For\nthe limiting case $L(1\/3,c)$, the Function Field Sieve remains more\nefficient for small values of $c$, and the Number Field Sieve is better\nfor large values of $c$ (see~\\cite{JLVS07}).\n\\bigskip\n\nThis article is organized as follows. In Section~\\ref{sec:main}, we state\nthe main result, and discuss how it can be used to design a complete\ndiscrete logarithm algorithm. In Section~\\ref{sec:csq}, we analyze how\nthis result can be interpreted for various types of finite fields,\nincluding the important case of fields of small characteristic.\nSection~\\ref{sec:descent-one-step} is devoted to the description of our\nnew algorithm. 
It relies on heuristics that are discussed in\nSection~\\ref{sec:heur}, from a theoretical and a practical point of view.\nBefore getting to the conclusion, in Section~\\ref{sec:improvement}, we\npropose a few variants of the algorithm.\n\n\\section{Main result}\n\\label{sec:main}\n\nWe start by describing the setting in which our algorithm applies. It is\nbasically the same as in~\\cite{Joux13}: we need a large enough subfield,\nand we assume that a sparse representation can be found. This is\nformalized in the following definition.\n\n\\begin{definition}\n A finite field $K$ admits a {\\em sparse medium subfield representation} if\n \\begin{itemize}\n \\item it has a subfield of $q^2$ elements for a prime power $q$,\n\t\ti.e. $K$ is isomorphic to ${\\mathbb F}_{q^{2k}}$ with $k\\geq1$;\n \\item there exist two polynomials $h_0$ and $h_1$ over\n ${\\mathbb F}_{q^2}$ of small degree, such that $h_1X^q-h_0$ has a\n degree $k$ irreducible factor.\n \\end{itemize}\n\\end{definition}\n\nIn what follows, we will assume that all the fields under consideration\nadmit a sparse medium subfield representation. Furthermore, we assume that\nthe degrees of the polynomials $h_0$ and $h_1$ are uniformly bounded by a\nconstant $\\delta$. Later, we will provide heuristic arguments for the\nfact that any finite field of the form ${\\mathbb F}_{q^{2k}}$ with $k \\le q+2$\nadmits a sparse medium subfield representation with polynomials $h_0$ and\n$h_1$ of degree at most 2. But in fact, for our result to hold, allowing\nthe degrees of $h_0$ and $h_1$ to be bounded by any constant $\\delta$\nindependent of $q$ and $k$ or even allowing $\\delta$ to grow\nslower than $O(\\log q)$ would be sufficient.\n\nIn a field in sparse medium subfield representation, elements will\nalways be represented as polynomials of degree less than $k$ with\ncoefficients in ${\\mathbb F}_{q^2}$. When we talk about the discrete logarithm of\nsuch an element, we implicitly assume that a basis for this discrete\nlogarithm has been chosen, and that we work in a subgroup whose order has\nno small irreducible factor (we refer to the Pohlig-Hellman\nalgorithm~\\cite{PoHe78} to limit ourselves to this case).\n\n\\begin{prop}\\label{prop:onestep}\n Let $K={\\mathbb F}_{q^{2k}}$ be a finite field that admits a sparse medium subfield\n representation.\n Under the heuristics explained below, there exists an algorithm whose\n complexity is polynomial in $q$ and $k$ and which can be used for the\n following two tasks. \n\n \\begin{enumerate}\n \\item \n Given an element of $K$ represented by a polynomial\n $P\\in{\\mathbb F}_{q^2}[X]$ with $2\\leq \\deg P\\leq k-1$,\n the algorithm returns an expression of\n $\\log P(X)$ as a linear combination of at most $O(kq^2)$\n logarithms $\\log P_i(X)$ with $\\deg P_i \\leq \\lceil\n \\frac12 \\deg P\\rceil$ and of $\\log h_1(X)$.\n\n \\item\n The algorithm returns the logarithm of $h_1(X)$ and \n the logarithms of all the elements of $K$\n of the form $X+a$, for $a$ in ${\\mathbb F}_{q^2}$. \n \\end{enumerate}\n\\end{prop}\n\nBefore the presentation of the algorithm, which is made in Section~\\ref{sec:descent-one-step}, we explain how to use it as a building block for a complete discrete logarithm algorithm.\n\nLet $P(X)$ be an element of $K$ for which we want to compute the discrete\nlogarithm. Here $P$ is a polynomial of degree at most $k-1$ and with\ncoefficients in ${\\mathbb F}_{q^2}$. We start by applying the algorithm of Proposition~\\ref{prop:onestep} to $P$. 
We obtain a relation of the form\n$$ \\log P = e_0 \\log h_1 + \\sum e_i \\log P_i,$$\nwhere the sum has at most $\\kappa q^2 k$ terms for a constant\n$\\kappa$ and the $P_i$'s have degree at most $\\lceil \\frac12 \\deg P\\rceil$.\nThen, we apply\nrecursively the algorithm to the $P_i$'s, thus creating a descent\nprocedure where at each step, a given element $P$ is expressed as a\nproduct of elements, whose degree is at most half the degree of $P$\n(rounded up) and the arity of the descent tree is in $O(q^2 k)$.\n\nAt the end of the process, the logarithm of $P$ is expressed as a linear\ncombination of the logarithms of $h_1$ and of the linear polynomials,\nfor which the logarithms are computed with the algorithm in\nProposition~\\ref{prop:onestep} in its second form.\n\nWe are left with the complexity analysis of the descent\nprocess. Each internal node of the descent tree corresponds to one application of\nthe algorithm of Proposition~\\ref{prop:onestep}, therefore each internal\nnode has a cost which is bounded by a polynomial in $q$ and~$k$. The total cost\nof the descent is therefore bounded by the number of nodes in the descent\ntree times a polynomial in $q$ and $k$. The depth of the descent tree is in\n$O(\\log k)$. The number of nodes of the tree is then less than or equal to\nits arity raised to the power of its depth, which is $(q^2\nk)^{O(\\log k)}$. Since any polynomial in $q$ and\n$k$ is absorbed in the $O()$ notation in the exponent, we obtain the\nfollowing result.\n\n\\begin{theo}\\label{thm}\n Let $K={\\mathbb F}_{q^{2k}}$ be a finite field that admits a sparse medium\n subfield representation. Assuming the same heuristics as in\n Proposition~\\ref{prop:onestep}, any discrete logarithm in $K$ can be\n computed in a time bounded by \n $$ \\max(q,k)^{O(\\log k)}.$$\n\\end{theo}\n\n\n\\section{Consequences for various ranges of parameters}\n\\label{sec:csq}\n\nWe now discuss the implications of Theorem~\\ref{thm} depending on the\nproperties of the finite field ${\\mathbb F}_Q$ where we want to compute discrete\nlogarithms in the first place. The complexities will be expressed in\nterms of $\\log Q$, which is the size of the input.\n\nThree cases are considered. In the first one, the finite field admits a\nsparse medium subfield representation, where $q$ and $k$ are almost\nequal. This is the optimal case. Then we consider the case where the\nfinite field has small (maybe constant) characteristic. And finally, we\nconsider the case where the characteristic is getting larger so that the\nonly available subfield is a bit too large for the algorithm to have an\noptimal complexity.\n\nIn the following, we always assume that for any field of the form\n${\\mathbb F}_{q^{2k}}$, we can find a sparse medium subfield representation.\n\n\\subsection{Case where the field is ${\\mathbb F}_{q^{2k}}$, with $q\\approx k$}\n\nThe finite fields ${\\mathbb F}_Q = {\\mathbb F}_{q^{2k}}$ for which $q$ and $k$ are almost\nequal are tailored for our algorithm. In that case, the complexity of\nTheorem~\\ref{thm} becomes $q^{O(\\log q)}$. Since $Q \\approx q^{2q}$, we\nhave $q=(\\log Q)^{O(1)}$. 
This gives an expression of the form\n$2^{O\\left((\\log \\log Q)^2\\right)}$, which is sometimes called\nquasi-polynomial in complexity theory.\n\n\\begin{cor}\\label{cor1}\n For finite fields of cardinality $Q = q^{2k}$ with $q+O(1)\\geq k$\n and $q=(\\log Q)^{O(1)}$,\n there exists a heuristic algorithm for computing discrete logarithms\n in quasi-polynomial time\n $$ 2^{O\\left((\\log \\log Q)^2\\right)}.$$\n\\end{cor}\n\nWe mention a few cases which are almost directly covered by\nCorollary~\\ref{cor1}. First, we consider the case where $Q=p^n$ with\n$p$ a prime bounded by $(\\log\nQ)^{O(1)}$, and yet large enough so that $n \\le (p+\\delta)$. In this\ncase ${\\mathbb F}_Q$, or possibly ${\\mathbb F}_{Q^2}$ if $n$ is odd, can be represented in\nsuch a way that Corollary~\\ref{cor1} applies.\n\nMuch the same can be said in the case where $n$ is composite and factors\nnicely, so that ${\\mathbb F}_Q$ admits a large enough subfield ${\\mathbb F}_q$ with\n$q=p^m$. This can be used to solve certain discrete logarithms in, say,\n${\\mathbb F}_{2^n}$ for adequately chosen $n$ (much similar to records tackled\nby~\\cite{record1778,record1971,record4080,record6120,record6168}).\n\n\\subsection{Case where the characteristic is polynomial in the input size}\n\nLet now ${\\mathbb F}_Q$ be a finite field whose characteristic $p$ is bounded by\n$(\\log Q)^{O(1)}$, and let $n=\\log Q \/ \\log p$, so that $Q = p^n$. While\nwe have seen that Corollary~\\ref{cor1} can be used to treat some cases,\nits applicability might be hindered by the absence of an appropriately\nsized subfield: $p$ might be as small as $2$,\nand $n$ might not factor adequately. In those cases, we use the same strategy as\nin~\\cite{Joux13} and embed the discrete logarithm problem in ${\\mathbb F}_Q$ into\na discrete logarithm problem in a larger field.\n\nLet $k$ be $n$ if $n$ is odd and $n\/2$ if $n$ is even. Then, we set\n$q = p^{\\lceil \\log_p k \\rceil}$, and we work in the field ${\\mathbb F}_{q^{2k}}$.\nBy construction this field contains ${\\mathbb F}_Q$ (because $p|q$ and $n|2k$) and\nit is in the range of applicability of Theorem~\\ref{thm}. Therefore,\none can solve a discrete logarithm problem in ${\\mathbb F}_Q$ in time\n$\\max(q, k)^{O(\\log k)}$. Rewriting this complexity in terms of $Q$, we get\n$\\log_p(Q)^{O(\\log\\log Q)}$. And finally, we get a similar complexity\nresult as in the previous case. Of course, since we had to embed in a\nlarger field, the constant hidden in the $O()$ is larger than for\nCorollary~\\ref{cor1}.\n\n\\begin{cor}\\label{cor2}\n For finite fields of cardinality $Q$ and characteristic bounded by\n $\\log(Q)^{O(1)}$, there exists a heuristic algorithm for \n computing discrete logarithms in quasi-polynomial time\n $$ 2^{O\\left((\\log \\log Q)^2\\right)}.$$\n\\end{cor}\n\nWe emphasize that the case ${\\mathbb F}_{2^n}$ for a prime $n$ corresponds to\nthis case. A direct consequence of Corollary~\\ref{cor2} is that discrete logarithms in ${\\mathbb F}_{2^n}$ can be computed in\nquasi-polynomial time $2^{O((\\log n)^2)}$.\n\n\\subsection{Case where $q = L_{q^{2k}}(\\alpha)$}\nIf the characteristic of the base field is not so small compared to\nthe extension degree, the complexity of our algorithm does not keep\nits nice quasi-polynomial form. 
However, in almost the whole range of\napplicability of the Function Field Sieve algorithm, our algorithm is\nasymptotically better than FFS.\n\nWe consider here finite fields that can be put into the form ${\\mathbb F}_Q =\n{\\mathbb F}_{q^{2k}}$, where $q$ grows not faster than an expression of the form\n$L_Q(\\alpha)$. In the following, we assume that there is equality, which\nis of course the worst case. The condition can then be rewritten as\n$\\log q = O((\\log Q)^\\alpha(\\log\\log\nQ)^{1-\\alpha})$ and therefore $k = \\log Q \/ \\log q = O((\\log Q \/ \\log\\log\nQ)^{1-\\alpha})$. In particular we have $k\\leq q+\\delta$, so that\nTheorem~\\ref{thm} can be applied and gives a complexity of $q^{O(\\log\nk)}$. This yields the following result.\n\n\\begin{cor}\\label{cor3}\n For finite fields of the form ${\\mathbb F}_Q = {\\mathbb F}_{q^{2k}}$ where $q$ is\n bounded by $L_Q(\\alpha)$, there exists a heuristic algorithm for computing\n discrete logarithms in subexponential time\n $$ L_Q(\\alpha)^{O(\\log \\log Q)}.$$\n\\end{cor}\n\nThis complexity is smaller than $L_Q(\\alpha')$ for any $\\alpha' >\n\\alpha$. Hence, for any $\\alpha<1\/3$, our algorithm is faster than the\nbest previously known algorithm, namely FFS and its variants.\n\n\n\\section{Main algorithm: proof of Proposition~\\ref{prop:onestep}}\n\\label{sec:descent-one-step}\n\nThe algorithm is essentially the same for proving the two points of\nProposition~\\ref{prop:onestep}. The strategy is to find relations between\nthe given polynomial $P(X)$ and its translates by a constant in\n${\\mathbb F}_{q^2}$. Let $D$ be the degree of $P(X)$, that we assume to be at\nleast 1 and at most $k-1$.\n\nThe key to find relations is the {\\em systematic equation}:\n\\begin{equation}\\label{eq:frobenius}\n X^q-X=\\prod_{a\\in {\\mathbb F}_q}(X-a)\\text.\n\\end{equation}\n\nWe like to view Equation~\\eqref{eq:frobenius} as involving\nthe projective line $\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)$. Let $\\ensuremath{\\mathcal S}=\\{(\\alpha,\\beta)\\}$ be a set\nof representatives of the $q+1$ points $(\\alpha:\\beta)\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)$,\nchosen adequately so that the following equality holds.\n\\begin{equation}\n \\label{eq:frobenius-proj}\n X^qY-XY^q=\\prod_{(\\alpha,\\beta)\\in\\ensuremath{\\mathcal S}}(\\beta X-\\alpha Y)\\text.\n\\end{equation}\n\nTo make translates of $P(X)$ appear, we consider the action of {\\em\nhomographies}. \nAny matrix $m = \\begin{pmatrix}a & b\\\\ c& d\\end{pmatrix}$ acts on $P(X)$\nwith the following formula:\n$$m\\cdot P = \\frac{aP+b}{cP+d}.$$\nIn the following, this action will become trivial if the matrix $m$ has\nentries that are defined over ${\\mathbb F}_q$. This is also the case if $m$\nis non-invertible. Finally, it is clear that multiplying all the\nentries of $m$ by a non-zero constant does not change its action on\n$P(X)$. Therefore the matrices of the homographies that we consider are\ngoing to be taken in the following set of cosets:\n$$ \\ensuremath{\\mathcal{P}}_q = \\PGL({\\mathbb F}_{q^2}) \/ \\PGL({\\mathbb F}_q).$$\n(Note that in general $\\PGL_2({\\mathbb F}_q)$ is not a\nnormal subgroup of $\\PGL_2({\\mathbb F}_{q^2})$, so that $\\ensuremath{\\mathcal{P}}_q$ is not a quotient\ngroup.) 
\n\nTo each element $m = \\begin{pmatrix}a & b\\\\ c& d\\end{pmatrix}\\in\n\\ensuremath{\\mathcal{P}}_q$, we associate the equation~\\eqref{eq:Em} obtained by substituting $aP+b$\nand $cP+d$ in place of $X$ and $Y$ in \nEquation~\\eqref{eq:frobenius-proj}.\n\\def\\mathop{\\raise-.0125ex\\hbox{x}}{\\mathop{\\raise-.0125ex\\hbox{x}}}\n\\begin{align*}\n \\tag{$E_m$}\\label{eq:Em}\n(aP+b)^q(cP+d) - (aP+b)(cP+d)^q & =\n \\prod_{(\\alpha,\\beta)\\in\\ensuremath{\\mathcal S}} \\beta(aP+b) - \\alpha(cP+d) \\\\\n & =\\prod_{(\\alpha,\\beta)\\in\\ensuremath{\\mathcal S}}\n (-c\\alpha + a\\beta) P - (d\\alpha - b\\beta) \\\\\n & =\\lambda\\prod_{(\\alpha,\\beta)\\in\\ensuremath{\\mathcal S}}\n P - \\mathop{\\raise-.0125ex\\hbox{x}}(m^{-1} \\cdot (\\alpha:\\beta))\\text.\n\\end{align*}\nThis sequence of formulae calls for a short comment because of an abuse\nof notation in the last expression. First, $\\lambda$ is the constant in\n${\\mathbb F}_{q^2}$ which makes the leading terms of the two sides match. Then,\nthe term $P-\\mathop{\\raise-.0125ex\\hbox{x}}(m^{-1} \\cdot\n(\\alpha:\\beta))$ denotes $P-u$ when $m^{-1} \\cdot (\\alpha:\\beta)=(u:1)$\n(whence we have $u=\\frac{d\\alpha - b\\beta}{-c\\alpha + a\\beta}$), or $1$\nif $m^{-1} \\cdot (\\alpha:\\beta)=\\infty$. The latter may occur since when\n$a\/c$ is in ${\\mathbb F}_q$, the expression $-c\\alpha + a\\beta$ vanishes for a\npoint $(\\alpha:\\beta)\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q})$ so that one of the factors of the\nproduct contains no term in $P(X)$.\n \nHence the right-hand side of Equation~\\eqref{eq:Em} is, up to a\nmultiplicative constant, a product of $q+1$ or $q$ translates of the\ntarget $P(X)$ by elements of\n${\\mathbb F}_{q^2}$. The equation obtained is actually related to the set of\npoints $m^{-1}\\cdot\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)\\subset \\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})$.\n\\medskip\n\n\nThe polynomial on the left-hand side of~\\eqref{eq:Em} can be rewritten as\na smaller degree equivalent. For this, we use the\nspecial form of the defining polynomial: in $K$ we have $X^q \\equiv\n\\frac{h_0(X)}{h_1(X)}$. Let us denote by $\\tilde{a}$ the element $a^q$ when $a$\nis any element of ${\\mathbb F}_{q^2}$. Furthermore, we write\n$\\tilde{P}(X)$ the polynomial $P(X)$ with all its coefficients\nraised to the power $q$. The left-hand side of~\\eqref{eq:Em} is\n$$(\\tilde{a}\\tilde{P}(X^q)+\\tilde{b})(cP(X)+d)\n- (aP(X) + b)(\\tilde{c}\\tilde{P}(X^q)+\\tilde{d}),$$\nand using the defining equation for the field $K$, it is congruent to\n$$\n\\ensuremath{\\mathcal{L}}_m \\mathrel{:=} \\left(\\tilde{a}\\tilde{P}\\left(\\frac{h_0(X)}{h_1(X)}\\right)+\\tilde{b}\\right)(cP(X)+d)\n- (aP(X) +\nb)\\left(\\tilde{c}\\tilde{P}\\left(\\frac{h_0(X)}{h_1(X)}\\right)+\\tilde{d}\\right).\n$$\nThe denominator of $\\ensuremath{\\mathcal{L}}_m$ is a\npower of~$h_1$ and its numerator has degree at most $(1+\\delta) D$ where\n$\\delta=\\max(\\deg h_0,\\deg h_1)$. We say that $m\\in\\ensuremath{\\mathcal{P}}_q$ yields a\nrelation if this numerator of $\\ensuremath{\\mathcal{L}}_m$ is $\\lceil D\/2 \\rceil$-smooth. \n\nTo any $m\\in\\ensuremath{\\mathcal{P}}_q$, we associate a row vector $v(m)$ of dimension $q^2+1$ in the following\nway. Coordinates are indexed by $\\mu\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})$, and the value\nassociated to $\\mu\\in{\\mathbb F}_{q^2}$ is $1$ or $0$ depending on whether\n$P-\\mathop{\\raise-.0125ex\\hbox{x}}(\\mu)$ appears in the right-hand side of Equation~\\eqref{eq:Em}. 
Note that\nexactly $q+1$ coordinates are $1$ for each $m$. Equivalently, we may write\n\\begin{equation}\\label{eq:v(m)}\nv(m)_{\\mu\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})}=\\left\\{\n \\begin{array}{l}\n 1\\text{ if }\\mu=m^{-1}\\cdot(\\alpha:\\beta) \\text{ with }\n\t (\\alpha:\\beta)\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q),\\\\\n 0\\text{ otherwise}.\n \\end{array}\n\\right.\n\\end{equation}\n\nWe associate to the polynomial $P$ a matrix $H(P)$ whose rows are\nthe vectors $v(m)$ for which $m$ yields a relation, taking at most one\nmatrix $m$ in each coset of $\\ensuremath{\\mathcal{P}}_q$. The validity of Proposition~\\ref{prop:onestep} crucially relies on the\nfollowing heuristic.\n\n\\begin{heuristic}\\label{heu:fullrank}\n For any $P(X)$, the set of rows $v(m)$ for cosets\n$m\\in\\ensuremath{\\mathcal{P}}_q$ that yield a relation form a matrix which has full rank $q^2+1$.\n\\end{heuristic}\n\nAs we will note in Section~\\ref{sec:heur}, the matrix $H(P)$ is\nheuristically expected to have $\\Theta(q^3)$ rows, where the implicit\nconstant depends on $\\delta$. This means that for our decomposition\nprocedure to work, we rely on the fact that $q$ is large enough\n(otherwise $H(P)$ may have less than $q^2+1$ rows, which precludes the\npossibility that it have rank $q^2+1$).\n\\medskip\n\nThe first point of Proposition~\\ref{prop:onestep}, where we descend a\npolynomial $P(X)$ of degree $D$ at least 2, follows by linear algebra\non this matrix.\nSince we assume that the matrix has full rank, \nthen the vector $(\\ldots,0,1,0,\\ldots)$ with $1$\ncorresponding to $P(X)$ can be written as a linear combination of the rows.\nWhen doing this linear combination on the equations~\\eqref{eq:Em} corresponding to\n$P$ we write $\\log P(X)$ as a linear combination of $\\log P_i$ where\n$P_i(x)$ are the elements occurring in the\nleft-hand sides of the equations. Since there are $O(q^2)$ columns, the elimination process\ninvolves at most $O(q^2)$ rows, and since each row corresponds to\nan equation~\\eqref{eq:Em}, it involves at most $\\deg \\ensuremath{\\mathcal{L}}_m\\leq (1+\\delta)D$ polynomials in the\nleft-hand-side\\footnote{This estimate of the number of irreducible\n factors is a pessimistic upper bound. In practice, one expects to\n have only $O(\\log D)$ factors on average. Since the crude estimate\n does not change the overall complexity, we keep it that way to avoid\n adding another heuristic.}. In total, the polynomial $D$ is\nexpressed by a linear combination of at most $O(q^2D)$ polynomials of\ndegree less than $\\lceil D\/2\\rceil$. The logarithm of $h_1(X)$ is also\ninvolved, as a denominator of $\\ensuremath{\\mathcal{L}}_m$. We have not made precise the\nconstant in ${\\mathbb F}_{q^2}^*$ which occurs to take care of the leading\ncoefficients. Since discrete logarithms in ${\\mathbb F}_{q^2}^*$ can certainly be\ncomputed in polynomial time in $q$, this is not a problem.\n\nSince the order of $\\PGL_2({\\mathbb F}_{q^i})$ is $q^{3i}-q^i$, the set of cosets\n$\\ensuremath{\\mathcal{P}}_q$ has $q^3+q$ elements. For each $m \\in\\ensuremath{\\mathcal{P}}_q$, testing\nwhether~\\eqref{eq:Em}\nyields a relation amounts to some polynomial manipulations and a \nsmoothness test. All of them can be done in polynomial time in $q$ and\nthe degree of $P(X)$ which is bounded by $k$. 
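To make this concrete, one possible rendering of the test applied to a single $m$ is sketched below in Python, using SageMath's polynomial arithmetic over ${\mathbb F}_{q^2}$; the function name and the exact way the numerator of $\ensuremath{\mathcal{L}}_m$ is assembled are our own choices and are given only as an illustration.\n\begin{verbatim}\ndef yields_relation(m, P, h0, h1, q):\n    # m = ((a, b), (c, d)) with entries in F_{q^2}; P, h0, h1 are SageMath\n    # polynomials over F_{q^2} (e.g. built with GF(q**2) and PolynomialRing).\n    # Returns True when the numerator of L_m is ceil(D\/2)-smooth.\n    (a, b), (c, d) = m\n    D = P.degree()\n    frob = lambda t: t**q                  # Frobenius x -> x^q on F_{q^2}\n    # N = h1^D * Ptilde(h0\/h1), with Ptilde = P with coefficients raised to the power q\n    N = sum(frob(ci) * h0**i * h1**(D - i) for i, ci in enumerate(P.list()))\n    numerator = ((frob(a) * N + frob(b) * h1**D) * (c * P + d)\n                 - (a * P + b) * (frob(c) * N + frob(d) * h1**D))\n    bound = (D + 1) \/\/ 2                   # ceil(D\/2)\n    return all(g.degree() <= bound for g, _ in numerator.factor())\n\end{verbatim}\n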
Finally, the linear algebra\nstep can be done in $O(q^{2\omega})$ using asymptotically fast matrix\nmultiplication algorithms, or alternatively $O(q^5)$ operations using\nsparse matrix techniques.\nIndeed, we have $q+1$\nnon-zero entries per row and a size of $q^2+1$.\nTherefore, the overall cost is polynomial in $q$ and $k$ as claimed.\n\medskip\n\nFor the second part of Proposition~\ref{prop:onestep} we replace $P$ by $X$ during the\nconstruction of the matrix. In that case, both sides of the\nequations~\eqref{eq:Em} involve only linear polynomials.\nHence we obtain a linear system whose unknowns\nare $\log (X+a)$ with $a\in{\mathbb F}_{q^2}$. Since Heuristic~\ref{heu:fullrank}\nwould give us only the full rank of the system corresponding to the\nright-hand sides of the equations~\eqref{eq:Em}, we have to rely on a\nspecific heuristic for this step:\n\begin{heuristic}\label{heu:linfullrank}\n The linear system constructed from all the equations~\eqref{eq:Em}\n for $P(X)=X$ has full rank.\n\end{heuristic}\nAssuming that this heuristic holds, we can\nsolve the linear system and obtain the discrete logarithms of the linear\npolynomials and of $h_1(X)$. \n\n\section{Supporting the heuristic argument in the proof}\n\label{sec:heur}\n\nFor Heuristic~\ref{heu:fullrank}, we propose two approaches to support this\nheuristic. Both allow us to gain some confidence in the validity of\nthe heuristic, but of course neither affects the heuristic nature of this\nstatement.\n\nFor the first line of justification, we denote by $\ensuremath{\mathcal{H}}$ the matrix of all\nthe $\#\ensuremath{\mathcal{P}}_q=q^3+q$ vectors $v(m)$ defined as in\nEquation~\eqref{eq:v(m)}. Associated to a polynomial~$P$,\nSection~\ref{sec:descent-one-step} defines the matrix\n$H(P)$ formed of the\nrows $v(m)$ such that the numerator of $\ensuremath{\mathcal{L}}_m$ is smooth. We will give\nheuristics that $H(P)$ has $\Theta(q^3)$ rows and then prove that $\ensuremath{\mathcal{H}}$ has\nrank $q^2+1$, which of course does not prove that its submatrix $H(P)$ has full rank. \n\nIn order to estimate the number of rows of $H(P)$ we assume that the\nnumerator of $\ensuremath{\mathcal{L}}_m$ has\nthe same probability to be $\lceil \frac{D}{2}\rceil$-smooth as a random\npolynomial of the same degree. In this paragraph, we assume that the\ndegrees of $h_0$ and $h_1$ are bounded by $2$, merely to avoid awkward\nnotations; the result holds for any constant bound $\delta$.\nThe degree of the numerator of $\ensuremath{\mathcal{L}}_m$ is then bounded by $3D$, so we have\nto estimate\nthe probability that a polynomial in ${\mathbb F}_{q^2}[X]$ of degree $3D$ is\n$\lceil \frac{D}{2}\rceil$-smooth. For any prime power $q$ and integers\n$1\leq m\leq n$, we denote by $N_q(n,m)$ the number of $m$-smooth monic\npolynomials of degree $n$. Using analytic methods, Panario et\nal. gave a precise estimate of this quantity (Theorem~$1$\nof~\cite{FGP98}):\n\begin{equation}\label{eq:Flajolet}\n\tN_{q}(n,m)=q^n \rho\left(\frac{n}{m}\right)\left(1+O\left(\frac{\log\nn}{m}\right)\right),\n\end{equation}\nwhere $\rho$ is Dickman's function defined as the unique continuous\nfunction such that $\rho(u)=1$ on $[0,1]$ and $u\rho'(u)=-\rho(u-1)$ for\n$u>1$. We stress that the constant $\kappa$ hidden in the $O()$ notation\nis independent of $q$.\nIn our case, we are interested in the value of $N_{q^2}(3D, \lceil\n\frac{D}{2}\rceil)$. 
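Although no closed form for $\rho$ is available, its values are easily obtained numerically from the delay differential equation above. The short Python sketch below (a simple trapezoidal integration, included only for concreteness and not part of the original analysis) gives $\rho(6)\approx 2\cdot 10^{-5}$, the constant that appears in the bound derived next.\n\begin{verbatim}\nimport numpy as np\n\ndef dickman_rho(u_max=6.0, h=1e-3):\n    # tabulate rho on [0, u_max] from u * rho'(u) = -rho(u - 1), rho = 1 on [0, 1]\n    n = int(round(u_max \/ h)) + 1\n    u = np.linspace(0.0, u_max, n)\n    rho = np.ones(n)\n    lag = int(round(1.0 \/ h))            # grid offset corresponding to u - 1\n    for i in range(lag, n - 1):\n        # explicit trapezoidal step; the right-hand side only uses lagged values\n        rho[i + 1] = rho[i] - 0.5 * h * (rho[i - lag] \/ u[i]\n                                         + rho[i + 1 - lag] \/ u[i + 1])\n    return u, rho\n\nu, rho = dickman_rho()\nprint(rho[-1])                           # rho(6), approximately 2e-5\n\end{verbatim}\n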
Let us call $D_0$ the least\ninteger such that $1+\\kappa\\left(\\frac{\\log (3D)}{\\lceil D\/2\\rceil}\\right)$\nis at least $1\/2$. For $D>D_0$, we will use the\nformula~\\eqref{eq:Flajolet}; and for $D\\le D_0$, we will use the crude\nestimate $N_q(n,m) \\ge N_q(n,1) = q^n\/n!$. Hence the smoothness\nprobability of $\\ensuremath{\\mathcal{L}}_m$ is at least\n$\\min\\left(\\frac{1}{2}\\rho(6),1\/(3D_0)!\\right)$.\n\nMore generally, if $\\deg h_0$ and $\\deg h_1$ are bounded by a constant\n$\\delta$ then we have a smoothness probability of $\\rho(2\\delta+2)$ times\nan absolute constant. Since we have $q^3+q$ candidates and a constant\nprobability of success, $H(P)$ has $\\Theta(q^3)$ rows.\n\n\n\nNow, unless some theoretical obstruction\noccurs, we expect a matrix over ${\\mathbb F}_\\ell$ to have full rank with\nprobability at least $1-\\frac{1}{\\ell}$. The matrix $\\ensuremath{\\mathcal{H}}$ is however peculiar, and does enjoy\nregularity properties which are worth noticing.\nFor instance, we have the following proposition.\n\\begin{prop}\n \\label{prop:bigmat-fullrank}\nLet $\\ell$ be a prime not dividing $q^3-q$. Then the matrix\n$\\ensuremath{\\mathcal{H}}$ over ${\\mathbb F}_\\ell$ has full rank $q^2+1$.\n\\end{prop}\n\\begin{proof}\n We may obtain this result in two ways. First, \n\\ensuremath{\\mathcal{H}}\\ is\nthe incidence matrix of a $3-(q^2+1,q+1,1)$ combinatorial design called\n\\emph{inversive plane} (see e.g.~\\cite[Theorem 9.27]{Stinson03}). As such\nwe obtain the identity $$\\ensuremath{\\mathcal{H}}^T\\ensuremath{\\mathcal{H}}=(q+1)(J_{q^2+1}-(1-q)I_{q^2+1})$$\n(see~\\cite[Theorem 1.13 and Corollary 9.6]{Stinson03}), where\n$J_n$ is the $n\\times n$ matrix with all entries equal to one, and $I_n$\nis the $n\\times n$ identity matrix. This readily gives the result exactly\nas announced.\n\n We also provide an elementary proof of the Proposition.\n We have a\n bijection between rows of $\\ensuremath{\\mathcal{H}}$ and the different possible image\n sets of the projective line $\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)$ within $\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})$,\n under injections of the form $(\\alpha:\\beta)\\mapsto\n m^{-1}\\cdot(\\alpha:\\beta)$. All these $q^3+q$ image sets have size\n $q+1$, and by symmetry all points of $\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})$ are reached\n equally often. Therefore, the sum of all rows of $\\ensuremath{\\mathcal{H}}$ is the\n vector whose coordinates are all equal to $\\frac1{1+q^2}(q^3+q)(q+1)=q^2+q$.\n\n Let us now consider the sum of the rows in $\\ensuremath{\\mathcal{H}}$ whose first\n coordinate is $1$ (as we have just shown, we have $q^2+q$ such rows).\n Those correspond to image sets of $\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)$ which contain one\n particular point, say $(0:1)$. The value of the sum for any other\n coordinate indexed by e.g.\\ $Q\\in\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_{q^2})$ is the number of\n image sets $m^{-1}\\cdot\\ensuremath{\\mathbb{P}}^1({\\mathbb F}_q)$ which contain both $(0:1)$ and\n $Q$, which we prove is equal to $q+1$ as follows. Without loss of generality, we may assume $Q=\\infty=(1:0)$.\n We need to count the relevant\n homographies $m^{-1}\\in\\PGL_2({\\mathbb F}_{q^2})$, modulo\n $\\PGL_2({\\mathbb F}_q)$-equivalence $m\\equiv hm$. 
By\n $\\PGL_2({\\mathbb F}_q)$-equivalence, we may without loss of generality assume\n that $m^{-1}$ fixes $(0:1)$ and $(1:0)$.\n Letting $m^{-1}=\n\\begin{pmatrix}a&b\\\\c&d\\end{pmatrix}$, we obtain $(b:d)=(0:1)$ and\n $(a:c)=(1:0)$, whence $b=c=0$, and both $a,d\\not=0$. We may\n normalize to $d=1$, and notice that multiplication of $a$ by a scalar\n in ${\\mathbb F}_q^*$ is absorbed in $\\PGL_2({\\mathbb F}_q)$-equivalence. Therefore\n the number of suitable $m$ is $\\#{{\\mathbb F}_{q^2}^*}\/{{\\mathbb F}_q^*}=q+1$.\n\n\n These two facts show that the row span of $\\ensuremath{\\mathcal{H}}$\n contains the vectors $(q^2+q, \\ldots, q^2+q)$ and $(q^2+q, q+1,\n \\ldots, q+1)$. The vector $(q^3-q,0,\\ldots,0)$ is obtained as a linear\n combination of these two vectors, which suffices to prove that\n $\\ensuremath{\\mathcal{H}}$ has full rank, since the same reasoning holds\n for any coordinate.\n\n\\end{proof}\n\n\nProposition~\\ref{prop:bigmat-fullrank}, while encouraging, is clearly not\nsufficient. We are, at the moment, unable to provide a proof of a\nmore useful statement. On the experimental side, it is reasonably easy to\nsample arbitrary subsets of the rows of $\\ensuremath{\\mathcal{H}}$ and check for their rank.\nTo this end, we propose the following experiment. We have considered\nsmall values of $q$ in the range $[16,\\ldots,64]$, and made~50 random picks of\nsubsets $S_i\\subset\\ensuremath{\\mathcal{P}}_q$, all of size exactly $q^2+1$. For each we\nconsidered the matrix of the corresponding linear system, which is made of\nselected rows of the matrix \\ensuremath{\\mathcal{H}}, and computed its determinant $\\delta_i$.\nFor all values of $q$ considered,\nwe have observed the following facts.\n\\begin{itemize}\n \\item First, all square matrices considered had full rank over \\ensuremath{\\mathbb{Z}}.\n Furthermore, their determinants had no common factor apart\n possibly from those appearing in the factorization of $q^3-q$ as\n predicted by Proposition~\\ref{prop:bigmat-fullrank}. In fact,\n experimentally it seems that only the factors of $q+1$ are\n causing problems.\n \\item We also explored the possibility that modulo some primes, the\n determinant could vanish with non-negligible probability. We thus\n computed the pairwise GCD of all~50 determinants computed, for\n each $q$. 
Again, the only prime factors appearing in the GCDs\n were either originating from the factorization of $q^3-q$, or\n sporadically from the birthday paradox.\n\\end{itemize}\n \\begin{table}\n\\begin{center}\n \\begin{minipage}[t]{0.5\\textwidth}\n\\begin{tabular}{c|c|l|l}\n $q$ & \\#trials & in $\\gcd(\\{\\delta_i\\})$ &\n in $\\gcd(\\delta_i, \\delta_j)$\\\\\n \\hline\n16 & 50 & 17 & 691\\\\\n17 & 50 & 2, 3 & 431, 691\\\\\n19 & 50 & 2, 5 & none above $q^2$\\\\\n23 & 50 & 2, 3 & none above $q^2$\\\\\n25 & 50 & 2, 13 & none above $q^2$\\\\\n27 & 50 & 2, 7 & 1327\\\\\n29 & 50 & 2, 3, 5 & none above $q^2$\\\\\n31 & 50 & 2 & 1303, 3209\\\\\n32 & 50 & 3, 11 & none above $q^2$\\\\\n\n \n\\end{tabular}\n \\end{minipage}%\n \\begin{minipage}[t]{0.5\\textwidth}\n\\begin{tabular}{c|c|l|l}\n $q$ & \\#trials & in $\\gcd(\\{\\delta_i\\})$ &\n in $\\gcd(\\delta_i, \\delta_j)$\\\\\n \\hline\n37 & 50 & 2, 19 & 2879\\\\\n41 & 50 & 2, 3, 7 & none above $q^2$\\\\\n43 & 50 & 2, 11 & none above $q^2$\\\\\n47 & 50 & 2, 3 & none above $q^2$\\\\\n49 & 50 & 2, 5 & none above $q^2$\\\\\n53 & 50 & 2, 3 & none above $q^2$\\\\\n59 & 50 & 2, 3, 5 & none above $q^2$\\\\\n61 & 50 & 2, 31 & none above $q^2$\\\\\n64 & 50 & 5, 13 & none above $q^2$\\\\\n\\end{tabular}\n \\end{minipage}%\n\n\\caption{\\label{tab:experiment1}Prime factors appearing in determinant of\nrandom square submatrices of \\ensuremath{\\mathcal{H}}\\ (for one given set of random trials)}\n\\end{center}\n \\end{table}\n These results are \nsummarized in\ntable~\\ref{tab:experiment1}, where the last column omits small prime\nfactors below $q^2$.\nOf course, we remark that considering square submatrices is a more demanding check than\nwhat Heuristic~\\ref{heu:fullrank} suggests, since our algorithm only\nneeds a slightly larger matrix of size $\\Theta(q^3)\\times(q^2+1)$ to have\nfull rank.\n\\medskip\n\nA second line of justification is more direct and natural, as it is\npossible to implement the algorithm outlined in\nSection~\\ref{sec:descent-one-step}, and verify that it does provide the\ndesired result. A \\textsc{Magma} implementation validates this claim, and\nhas been used to implement descent steps for an example field of\ndegree~$53$ over ${\\mathbb F}_{53^2}$. 
An example step in this context is given\nfor applying our algorithm to a polynomial of degree~10, attempting to\nreduce it to polynomials of degree~6 or less.\nAmong the 148,930 elements\nof $\\ensuremath{\\mathcal{P}}_q$, it sufficed to consider only 71,944 matrices $m$, of which about 3.9\\%\nled to relations, for a minimum sufficient number of relations equal to\n$q^2+1=2810$ (as more than half of the elements of $\\ensuremath{\\mathcal{P}}_q$ had not even\nbeen examined at this point, it is clear that getting more relations was\neasy---we did not have to).\nAs the defining polynomial\nfor the finite field considered was constructed with $\\delta=\\deg\nh_{0,1}=1$, all left-hand sides involved\nhad degree 20.\nThe polynomials appearing in their\nfactorizations had the\nfollowing degrees (the number in brackets give the number of distinct\npolynomials found for each degree): 1(2098), 2(2652), 3(2552), 4(2463), 5(2546), 6(2683).\nOf course this tiny example size uses no\noptimization, and is only intended to check the validity of\nProposition~\\ref{prop:onestep}.\n\n\\bigskip\n\nAs for Heuristic~\\ref{heu:linfullrank}, it is already present\nin~\\cite{Joux13} and~\\cite{GGMZ13}, so this is not a new heuristic.\nJust like for Heuristic~\\ref{heu:fullrank}, it is based on the fact that\nthe probability that a left-hand side is $1$-smooth and yields a relation\nis constant. Therefore, we have a system with $\\Theta(q^3)$ relations\nbetween $O(q^2)$ indeterminates, and it seems reasonable to expect that\nit has full rank. On the other hand, there is not as much algebraic\nstructure in the linear system as in Heuristic~\\ref{heu:fullrank}, so that\nwe see no way to support this heuristic apart from testing it on several\ninputs. This was already done (including for record computations)\nin~\\cite{Joux13} and~\\cite{GGMZ13}, so we do not elaborate on our own\nexperiments that confirm again that Heuristic~\\ref{heu:linfullrank} seems\nto be valid except for tiny values of $q$.\n\n\\paragraph{An obstruction to the heuristics.}\n\nAs noted by Cheng, Wan and Zhuang~\\cite{traps13}, the irreducible factors\nof $h_1X^q-h_0$ other than the degree $k$ factor that is used to define\n${\\mathbb F}_{q^{2k}}$ are problematic. Let $P$ be such a problematic polynomial.\nThe fact that it divides the defining equation implies that it also\ndivides the $\\ensuremath{\\mathcal{L}}_m$ quantity that is involved when trying to build a\nrelation that relates $P$ to other polynomials. Therefore the first part\nof Proposition~\\ref{prop:onestep} can not hold for this $P$. Similarly, if\n$P$ is linear, its presence will prevent the second part of\nProposition~\\ref{prop:onestep} to hold since the logarithm of $P$ can not\nbe found with the technique of Section~\\ref{sec:descent-one-step}.\nWe present here a technique to deal with the problematic polynomials.\n(The authors of~\\cite{traps13} proposed another solution to keep the\nquasi-polynomial nature of algorithm.)\n\n\n\\begin{prop}\\label{solvetrap}\nFor each problematic polynomial $P$ of degree $D$, we can find a linear relation\nbetween $\\log P$, $\\log h_1$ and $O(D)$ logarithms of polynomials of degree at\nmost $(\\delta-1)D$ which are not problematic.\n\\end{prop}\n\n\\begin{proof}\nLet $P$ be an irreducible factor of $h_1X^q-h_0$ of degree $D$. 
Let us\nconsider $P^q$; by reducing modulo $h_1X^q-h_0$ and clearing\ndenominators, there exists a polynomial $A(X)$ such that\n\begin{equation}\label{eq:freerel}\nh_1^D P^q = h_1^D\tilde{P}\left(\frac{h_0}{h_1}\right)\n+(h_1X^q-h_0)A(X).\n\end{equation}\nSince $P$ divides two of the terms of this equality, it must also\ndivide the third one, namely the polynomial $\ensuremath{\mathcal R} =\nh_1^D\tilde{P}\left(h_0\/h_1\right)$. Let $v_P\ge 1$ be the valuation of\n$P$ in $\ensuremath{\mathcal R}$. In the finite field ${\mathbb F}_{q^{2k}}$ we obtain the following\nequality between logarithms:\n\begin{equation*}\n\t(q-v_P)\log P = -D\log h_1 +\sum_i e_i \log Q_i,\n\end{equation*}\nwhere $Q_i$ are the irreducible factors of $\ensuremath{\mathcal R}$ other than $P$ and $e_i$\ntheir valuations in~$\ensuremath{\mathcal R}$. A polynomial $Q_i$ cannot be problematic:\notherwise, it would divide the right-hand side of\nEquation~\eqref{eq:freerel}, and therefore also the left-hand side, which\nis impossible.\nSince $v_P\leq \frac{\deg\n\ensuremath{\mathcal R}}{\deg P}\leq \delta$ is much smaller than $q$, the coefficient\n$q-v_P$ is non-zero, and since the degrees of the $Q_i$ sum to at most\n$(\delta-1)D$, this is the announced relation.\n\end{proof}\n\nWhen $\delta\leq 2$, the polynomials $Q_i$ have degree at most $D$. Hence, for each\nproblematic polynomial of degree $D>1$, it will be possible to rewrite its logarithm in terms of\nlogarithms of non-problematic polynomials of at most the same degree that\ncan be descended in the usual way. Similarly, each problematic\npolynomial of degree 1 can have its logarithm rewritten in terms of the\nlogarithms of other non-problematic linear polynomials. Adding these\nrelations to the ones obtained in Section~\ref{sec:descent-one-step}, we\nexpect to have a full-rank linear system.\n\nIf $\delta>2$, we need to rely on an additional heuristic. Indeed, when\ndescending the $Q_i$ that have a degree potentially larger than the\ndegree $D$ of $P$, we could hit again the problematic polynomial we started\nwith, and it could be that the coefficient in front of $\log P$ in the\nsystem vanishes. More generally, taking into account all the problematic\npolynomials, if applying Proposition~\ref{solvetrap} to them yields\npolynomials $Q_i$ of higher degrees, it could be that descending those\ncreates loops, so that the logarithms of some of the problematic\npolynomials could not be computed. We expect this event to be very\nunlikely. Since in all our experiments it was always possible to obtain\n$\delta=2$, we did not investigate further.\n\n\paragraph{Finding appropriate $h_0$ and $h_1$.}\n\nOne key fact about the algorithm is the existence of two polynomials\n$h_0$ and $h_1$ in ${\mathbb F}_{q^2}[X]$ such that $h_1(X)X^q-h_0(X)$ has an\nirreducible factor of degree $k$. A partial solution is due to\nJoux~\cite{Joux13} who showed how to construct such polynomials when\n$k\in\{q-1,q,q+1\}$. \nNo such deterministic construction is known in the general\ncase, but experiments show that one can apparently choose $h_0$ and $h_1$ of\ndegree at most $2$. We performed an experiment for every odd prime\npower $q$ in $[3,\ldots,1000]$ and every $k\leq q$ and found that we could\nselect $a\in{\mathbb F}_{q^2}$ such that $X^q+X^2+a$ has an irreducible factor\nof degree $k$. 
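This search is easy to reproduce. A minimal Python sketch using SageMath's finite-field and polynomial routines is given below; the function name and the brute-force enumeration over $a$ are ours and are meant only as an illustration of the experiment just described.\n\begin{verbatim}\nfrom sage.all import GF, PolynomialRing\n\ndef find_a(q, k):\n    # search for a in F_{q^2} such that X^q + X^2 + a (that is, h_1 = 1 and\n    # h_0 = -(X^2 + a)) has an irreducible factor of degree k\n    Fq2 = GF(q**2, 'w')\n    X = PolynomialRing(Fq2, 'X').gen()\n    for a in Fq2:\n        f = X**q + X**2 + a\n        if any(g.degree() == k for g, _ in f.factor()):\n            return a\n    return None\n\end{verbatim}\n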
Finally, note that the result is similar to a commonly made\nheuristic in discrete logarithm algorithms: for fixed\n$f\\in{\\mathbb F}_{q^2}[X,Y]$ and random $g\\in{\\mathbb F}_{q^2}[X,Y]$, the polynomial\n$\\text{Res}_Y(f,g)$ behaves as a random polynomial of same degree with\nrespect to the degrees of its irreducible factors.\n\n\n\n\n\\section{Some directions of improvement}\n\\label{sec:improvement}\nThe algorithm can be modified in several ways. On the one hand one can\nobtain a better complexity if one proves\na stronger result on the smoothness probability. On the other\nhand, without changing the complexity, one can obtain a version which\nshould behave better in practice. \n\n\\subsection{Complexity improvement}\nHeuristic~\\ref{heu:fullrank} tells that a rectangular matrix with $\\Theta(q)$\ntimes more rows than columns has full rank. It seems reasonable to expect\nthat only a constant times more rows than columns would be enough to get\nthe full rank properties (as is suggested by the experiments proposed in\nSection~\\ref{sec:heur}). Then, it means that we expect to have a lot of\nchoices to select the best relations, in the sense that their left-hand\nsides split into irreducible factors of degrees as small as possible.\n\nOn average, we expect to be able to try $\\Theta(q)$ relations for each row\nof the matrix. So, assuming that the numerators of $\\ensuremath{\\mathcal{L}}_m$ behave like\nrandom polynomials of similar degrees, we have to evaluate the\nexpected smoothness that we can hope for after trying $\\Theta(q)$\npolynomials of degree $(1+\\delta)D$ over ${\\mathbb F}_{q^2}$. Set $u=\\log q \/\n\\log\\log q$, so that $u^u\\approx q$. According to~\\cite{FGP98} it is then possible to replace $\\lceil\nD\/2\\rceil$ in\nProposition~\\ref{prop:onestep} by the value $O(D\\log\\log q\/\\log q)$.\n\nThen, the discussion leading to Theorem~\\ref{thm} can be changed to\ntake this faster descent into account. We keep the same estimate for the\narity of each node in the tree, but the depth is now only in $\\log k \/\n\\log\\log q$. Since this depth ends up in the exponent, the resulting\ncomplexity in Theorem~\\ref{thm} is then\n$$ \\max(q, k)^{O(\\log k \/ \\log\\log q)}.$$\n\n\\subsection{Practical improvements}\nBecause of the arity of the descent tree, the breadth eventually\nexceeds the number of polynomials below some degree bound. It\nmakes no sense, therefore, to use the descent procedure beyond\nthis point, as the recovery of discrete logarithms of all these\npolynomials is better achieved as a pre-computation. Note that this\ncorresponds to the computations of the $L(1\/4+\\epsilon)$ algorithm which starts by\npre-computing the logarithms of polynomials up to degree $2$.\nIn our case, we could in principle go up to degree $O(\\log q)$ without\nchanging the complexity.\n\\medskip\n\nWe propose another practical improvement in the case where we would like\nto spend more time descending a given polynomial $P$ in order to improve\nthe quality of the descent tree rooted at $P$. \nThe set of polynomials appearing in the right-hand side of\nEquation~\\eqref{eq:Em} in Section~\\ref{sec:descent-one-step} is\n$\\{P-\\lambda\\}$, because in the factorization of $X^q-X$, we\nsubstitute $X$ with $m\\cdot P$ for homographies~$m$. In fact, we\nmay apply $m$ to $(P:P_1)$ for any polynomial $P_1$ whose degree\ndoes not exceed that of $P$. 
In the right-hand sides, we will have only\nfactors of form $P - \\lambda P_1$ for $\\lambda$ in ${\\mathbb F}_{q^2}$.\nOn the left-hand sides, we have polynomials of the same degree as before,\nso that the smoothness probability is expected to be the same.\nNevertheless, it is possible to test several $P_1$ polynomials, and to\nselect the one that leads to the best tree.\n\nThis strategy can also be useful in the following context (which will not\noccur for large enough $q$):\nit can\nhappen that for some triples $(q,D,D')$ one has $N_{q^2}(3D,D')\/q^n\\approx 1\/q$. In\nthis case we have no certainty that we can descend a degree-$D$\npolynomial to degree $D'$, but we can hope that at least one of the\n$P_1$ allows to descend.\n\nFinally, if one decides to use several auxiliary $P_1$ polynomials to descend\na polynomial $P$, it might be interesting to take a set of polynomials\n$P_1$ with an arithmetic structure, so that the smoothness tests on the\nleft-hand sides can benefit from a sieving technique.\n\n\\section{Conclusion}\nThe algorithm presented in this article achieves a significant improvement\nof the asymptotic complexity of discrete logarithm in finite fields, in\nalmost the whole range of parameters where the Function Field Sieve was\npresently the most competitive algorithm. Compared to existing\napproaches, and in particular to the line of recent\nworks~\\cite{Jo13faster,GGMZ13}, the practical relevance of our algorithm\nis not clear, and will be explored by further work. \n\nWe note that the analysis of the algorithm presented here is heuristic, as discussed\nin Section~\\ref{sec:heur}. Some of the heuristics we stated,\nrelated to the properties of matrices $H(P)$ extracted from the\nmatrix $\\ensuremath{\\mathcal{H}}$, seem accessible to more solid justification. It seems\nplausible to have the validity of algorithm rely on the sole heuristic of\nthe validity of the smoothness estimates.\n\nThe crossing point between the $L(1\/4)$ algorithm and our quasi-polynomial\none is not determined yet. One of the key factors which hinders the practical\nefficiency of this algorithm is the $O(q^2D)$ arity of the descent tree,\ncompared to the $O(q)$ arity achieved by techniques based on Gr\u00f6bner\nbases~\\cite{Jo13faster} at the expense of a $L(1\/4+\\epsilon)$ complexity.\nAdj et al.~\\cite{AMOR13} proposed to mix the two algorithms\nand deduced that the new descent technique must be used for cryptographic\nsizes. Indeed, by estimating the time required to\ncompute discrete logarithms in ${\\mathbb F}_{3^{6\\cdot 509}}$, they showed the weakness\nof some pairing-based cryptosystems. \n\n\\ifanon\n\\else\n\\section*{Acknowledgements}\nThe authors would like to thank Daniel J. Bernstein for his comments on\nan earlier version of this work, and for pointing out to us the possible\nuse of asymptotically fast linear algebra for solving the linear systems\nencountered.\n\n\\fi\n\n\\bibliographystyle{splncs03}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbhyb b/data_all_eng_slimpj/shuffled/split2/finalzzbhyb new file mode 100644 index 0000000000000000000000000000000000000000..f958f82829e5ede8674e5a9134cbd872db565f18 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbhyb @@ -0,0 +1,5 @@ +{"text":"\\section*{Abstract}\nIn the current work, a problem-splitting approach and a scheme motivated by transfer learning is applied to a structural health monitoring problem. 
The specific\nproblem in this case is that of localising damage on an aircraft wing. The original experiment is described, together with the initial approach, in which a neural \nnetwork was trained to localise damage. The results were not ideal, partly because of a scarcity of training data, and partly because of the difficulty in \nresolving two of the damage cases. In the current paper, the problem is split into two sub-problems and an increase in classification accuracy is obtained. The \nsub-problems are obtained by separating out the most difficult-to-classify damage cases. A second approach to the problem is considered by adopting ideas from \ntransfer learning (usually applied in much deeper networks) to see if a network trained on the simpler damage cases can help with feature extraction in the more\ndifficult cases. The transfer of a fixed trained batch of layers between the networks is found to improve classification by making the classes more separable in the feature\nspace and to speed up convergence.\n\n\textbf{Key words: Structural health monitoring (SHM), machine learning, classification, problem splitting, transfer learning.}\n\n\t\n\section{Introduction}\n\label{sec:intro}\n\nStructural health monitoring (SHM) refers to the process of implementing a damage detection strategy for aerospace, civil or mechanical engineering infrastructure \cite{farrar2012structural}. Here, damage is defined as changes introduced into a system\/structure, either intentionally or unintentionally, that affect current or \nfuture performance of the system. Detecting damage is becoming more and more important in modern societies, where everyday activities depend increasingly on \nengineering systems and structures. On the one hand, safety has to be assured, both for users and for equipment or machinery existing within these structures. On \nthe other hand, infrastructure is often designed for a predefined lifetime and damage occurrence may reduce the expected lifetime and have a huge economic impact \nas a result of necessary repairs or even rebuilding or decommissioning. Damage can be visible on or in structures, but more often it is not, and has to be inferred from \nsignals measured by sensors placed on them.\n\nAn increasingly useful tool in SHM is {\em machine learning} (ML) \cite{farrar2012structural}. In many current applications large sets of data are gathered by sensors \nor generated by models and these can be exploited to gain insight into structural dynamics and materials engineering. Machine learning is employed because of its \nefficiency in classification, function interpolation and prediction using data. Data-driven models are built and used to serve SHM purposes. These models can also be \nused to further understand how structures react to different conditions and explain their physics. However, one of the main drawbacks of such methods is the need for \nlarge datasets. ML models may have many parameters which are established during {\em training} on data which may need to span all the health conditions of interest for the\ngiven structure or system. Larger datasets assist in better tuning of the models as far as accuracy and generalisation are concerned. However, even if large datasets are available, sometimes there are very few observations on damaged states, which are important in SHM. 
In the current paper, increased \naccuracy of a data-driven SHM classifier will be discussed in terms of two strategies: splitting the problem into two sub-problems and attempting transfer of information\nbetween the two sub-problems in a manner motivated by transfer learning \\cite{Pan2010}.\n\nTransfer learning is the procedure of taking knowledge from a source domain and task and applying it to a different domain and task to help improve performance on the \nsecond task \\cite{Pan2010}. Transfer learning is useful, as we know that a model trained on a dataset can not naturally be applied on another due to difference in data distribution, but can be further tuned to also apply on the second dataset. An accurate representation of the difference between traditional and transfer learning schemes can be seen in Figure \\ref{fig:ml_schemes}. \nThe SHM problem herein will be addressed using neural networks \\cite{Bishop:1995:NNP:525960}, for which transfer learning has been proven quite efficient (although \nusually in deeper learning architectures \\cite{GabrielPuiCheongFung2006,AlMubaid2006}). Due to the layered structure of the networks, after having created a model for a task, transferring a part of it (e.g.\\ some \nsubset of the layers) is easy. The method is used in many disciplines, such as computer vision \\cite{oquab2014learning,shin2016deep}. The most commonly-used learners\nare Convolutional Neural Networks (CNNs), which can be very slow to train and may need a lot of data, which in many cases can be hard to obtain (e.g.\\ labelled \nimages). These problems can be dealt with by using the fixed initial layers of pre-trained models to extract features of images, and then train only the last layers to \nclassify in the new context. In this way, both the number of trainable parameters and the need for huge datasets and computation time are reduced. Another topic that \ntransfer learning has been used in is natural language processing (NLP) \\cite{DBLP:journals\/corr\/BingelS17}, where the same issues of lack of labelled data and large \namounts of training time are dealt with by transferring of pre-trained models into new tasks. Further examples of the benefits of transfer learning can be found in web \ndocument classification \\cite{GabrielPuiCheongFung2006,AlMubaid2006}; in these cases, in newly-created web sites, lack of labelled data occurs. To address this problem, \neven though the new web sites belong to a different domain than the training domain of the existing sites, the same models can be used to help classify documents in the \nnew websites.\n\nIn the context of the current work, transfer learning is considered in transferring knowledge from one sub-problem to the other by introducing pre-trained layers into\nnew classifiers. The classification problem that will be presented is related to damage class\/location. A model trained to predict a subset of the damage classes (source \ntask) with data corresponding of that subset (source domain), will be used to boost performance of a second classifier trained to identify a different subset of damage\nstates. 
\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.95\\linewidth]{Figure_1a}\n\t\t\\caption{}\n\t\t\\label{fig:trad_ml}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.95\\linewidth]{Figure_1b}\n\t\t\\caption{}\n\t\t\\label{fig:transfer_ml}\n\t\\end{subfigure}\\\\\n\t\\caption{Traditional (a) and transfer (b) learning schemes (following \\cite{Pan2010}).}\n\t\\label{fig:ml_schemes}\n\\end{figure}\n\\section{Problem description}\n\\label{sec:problem_desc}\n\nSimilar to the aforementioned applications, in SHM machine learning is also used for classification and regression. In data driven SHM one tries to identify features that will reveal whether a structure is damaged or what type of damage is present and so, labelled data are necessity. Therefore, in SHM applications lack of labelled data about damage location or severity is a drawback. SHM problems can be categorised in many ways but are often broken down according to the hierarchical structure proposed by Rytter \\cite{rytter1993vibrational}: \n\n\\begin{enumerate}\n\t\\item Is there damage in the system ({\\em existence})?\n\t\\item Where is the damage in the system ({\\em location})?\n\t\\item What kind of damage is present ({\\em type\/classification})?\n\t\\item How severe is the damage ({\\em extent\/severity})?\n\t\\item How much useful (safe) life remains ({\\em prognosis})?\n\\end{enumerate}\n\nA common approach to the first level is to observe the structure in its normal condition and try to find changes in features extracted from measured signals that \nare sensitive to damage. This approach is called {\\em novelty detection} \\cite{Worden1997,WORDEN2000}, and it has some advantages and disadvantages. The main advantage \nis that it is usually an {\\em unsupervised} method, that is only trained on data that are considered to be from the undamaged condition of the structure, without \na specific target class label. These methods are thus trained to detect any {\\em changes} in the behaviour of the elements under consideration, which can be a \ndisadvantage, since structures can change their behaviour for benign reasons, like changes in their environmental or operational conditions; such benign changes or\n{\\em confounding influences} can raise false alarms.\n\nIn this work a problem of damage localisation is considered (at Level 2 in Rytter's hierarchy \\cite{rytter1993vibrational}); the structure of interest being a wing of \na Gnat trainer aircraft. The problem is one of supervised-learning, as the data for all damage cases were collected and a classification model was trained accordingly. Subsequently, the classifier was used to predict the damage class of newly-presented data. The features used as inputs to the classifier were novelty indices calculated \nbetween frequency intervals of the transmissibilities of the normal condition of the structure (undamaged state) and the testing states. The transmissibility between \ntwo points of a structure is given by equation (\\ref{eq:transmissibility}), and this represents the ratio of two response spectra. This feature is useful because it \ndescribes the response of the structure in the frequency domain, without requiring any knowledge of the frequency content of the excitation. 
The transmissibility is\ndefined as,\n\n\\begin{equation} \n\\label{eq:transmissibility}\n T_{ij} = \\frac{FRF_i}{FRF_j} = \n \\frac{\\frac{\\mathcal{F}_{i}}{\\mathcal{F}_{excitation}}}{\\frac{\\mathcal{F}_{j}}{\\mathcal{F}_{excitation}}} = \\frac{\\mathcal{F}_{i}}{\\mathcal{F}_{j}}\n\\end{equation}\nwhere, $\\mathcal{F}_{i}$ is the Fourier Transform of the signal given by the $i^{th}$ sensor and $FRF_i$ is the {\\em Frequency Response Function} (FRF) at the $i$th \npoint. \n\nThe experiment was set up as described in \\cite{Worden2007}. The wing of the aircraft was excited with a Gaussian white noise using an electrodynamic shaker attached \non the bottom surface of the wing. The configuration of the sensors placed on the wing can be seen in Figure \\ref{fig:sensors}. Responses were measured with \naccelerometers on the upper surface of the wing, and the transmissibilities between each sensor and the corresponding reference sensor were calculated. The \ntransmissibilities were recorded in the 1-2 kHz range, as this interval was found to be sensitive to the damage that was going to be introduced to the structure. Each transmissibility contained 2048 spectral lines.\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[scale=0.60]{Figure_2.png}\n\t\\caption{Configuration of sensors on the Gnat aircraft wing \\cite{manson2003experimental}.}\n\t\\label{fig:sensors}\n\\end{figure}\n\nInitially, the structure was excited in its normal condition, i.e.\\ with no introduced damage. The transmissibilities of this state were recorded and subsequently, \nto simulate damage, several panels were removed from the wing, one at a time. In each panel removal, the wing was excited again with white Gaussian noise and the transmissibilities were recorded. The panels that were removed are shown in Figure \\ref{fig:panels}. Each panel has a different size, varying from 0.008 to 0.08m$^{2}$ \nand so the localisation of smaller panels becomes more difficult, since their removal affects the transmissibilities less than the bigger panels. The measurements were \nrepeated 200 times for each damage case, ultimately leading to 1800 data points belonging to nine different damage cases\/classes. The data were separated into training, \nvalidation and testing sub sets, each having 66 points per damage case.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\includegraphics[scale=0.70]{Figure_3.png}\n\t\\caption{Schematic showing wing panels removed to simulate the nine damage cases \\cite{manson2003experimental}.}\n\t\\label{fig:panels}\n\\end{figure}\n\nFor the purposes of damage localisation, features had to be selected which would be sensitive to the panel removals; this was initially done manually \\cite{manson2003experimental}, selecting by visual `engineering judgement' the intervals of the transmissibilities that appeared to be more sensitive to damage and \ncalculating the novelty indices of each state by comparison with the transmissibilities of the undamaged state. The novelty indices were computed using the \nMahalanobis squared-distance (MSD) $D^2_{\\zeta}$ of the feature vectors $\\mathbf{x_{\\zeta}}$, which in this case contained the magnitudes of transmissibility spectral \nlines. 
The MSD is defined by,\n\n\\begin{equation} \n\\label{eq:mahal_dist}\n D_{\\zeta}^{2} = (\\mathbf{x_{\\zeta}} - \\mathbf{\\overline{x}})^{T} S^{-1}(\\mathbf{x_{\\zeta}} - \\mathbf{\\overline{x}})\n\\end{equation}\nwere $\\mathbf{\\overline{x}}$ is the sample mean on the normal condition feature data, and $S$ is the sample covariance matrix.\n\nAfter selecting `by eye' the most important features for damage detection \\cite{manson2003experimental}, a genetic algorithm was used \\cite{Worden2008} to choose the \nmost sensitive features, in order to localise\/classify the damage. Finally, nine features were chosen as the most sensitive and an MLP neural network \\cite{Bishop:1995:NNP:525960} with nine nodes in the input layer, ten nodes in the hidden layer and nine nodes in the decision layer was trained. The confusion matrix \nof the resulting classifier is shown in the Table \\ref{Tab:Initial_network_conf_mat}. It can be seen that the misclassification rate is very low and that the damage \ncases that are most confused are the ones where the missing panel is Panel 3 or Panel 6, which were the smallest ones.\n\n\n\\section{Problem splitting}\n\\label{sec:splitting}\n\nAs mentioned in \\cite{tarassenko1998guide}, the rule-of-thumb for a network that generalises well is that it should be trained with at least ten samples per weight \nof the network. The aforementioned network had 180 trainable weights (and another 19 bias terms) so the 596 training samples are not ideal for the neural network. As a solution, \na splitting of the original problem into two sub-problems is considered here to try and reduce the misclassification rate on the testing data even further. The dataset \nis split into two parts, one containing all the damage cases except Panels 3 \\& 6 and the second containing the rest of the data. Subsequently, two neural network \nclassifiers were trained separately on the new datasets. This was thought to be a good practice, since the panels are the smallest, and their removal affects the \nnovelty indices less than the rest of the panel removals. The impact is that the points appear closer to each other in the feature space, and are swamped by points \nbelonging to other classes, so the initial classifier cannot separate them efficiently. By assigning the tasks to different classifiers, an increase in performance \nis expected, especially in the case of separating the two smallest panel classes.\n\n\\begin{table}[h!]\n\t\\centering\n\t\\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| }\n\t\t\\hline\n\t\tPredicted panel & 1 & 2 & 3& 4& 5 & 6 & 7 & 8 & 9\\\\\n\t\t\\hline\n\t\tMissing panel 1& 65 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\\\\n\t\tMissing panel 2& 0 & 65 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n\t\tMissing panel 3& 1 & 0 & 62 & 0 & 0 & 1 & 0 & 1 & 1 \\\\\n\t\tMissing panel 4 & 0 & 0 & 0 & 66 & 0 & 0 & 0 & 0 & 0 \\\\\n\t\tMissing panel 5& 0 & 0 & 0 & 0 & 66 & 0 & 0 & 0 & 0 \\\\\n\t\tMissing panel 6& 0 & 3 & 0 & 0 & 0 & 62 & 0 & 1 & 0 \\\\\n\t\tMissing panel 7& 0 & 0 & 0 & 0 & 0 & 0 & 66 & 0 & 0 \\\\\n\t\tMissing panel 8& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 65 & 0 \\\\\n\t\tMissing panel 9& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 66 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Confusion Matrix of neural network classifier, test set, total accuracy: 98.14\\% \\cite{Worden2008}}\n\t\\label{Tab:Initial_network_conf_mat}\n\\end{table}\n\nTo illustrate the data feature space, a visualisation is attempted here. 
Since the data belong to a nine-dimensional feature space, principal component analysis (PCA) \nwas performed on the data and three of the principal components, explaining 71\\% of total variance, are plotted in scatter plots shown in Figure \\ref{fig:all_data_pcs}. \nPoints referring to data corresponding to the missing panels 3 and 6 (grey and magenta points respectively) are entangled with other class points causing most of the misclassification rate shown above.\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.98\\linewidth]{Figure_4a}\n\t\t\\caption{}\n\t\t\\label{fig:all_data_pcs}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.98\\linewidth]{Figure_4b}\n\t\t\\caption{}\n\t\t\\label{fig:7_classes_pcs}\n\t\\end{subfigure}\\\\\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.98\\linewidth]{Figure_4c}\n\t\t\\caption{}\n\t\t\\label{fig:2_classes_pcs}\n\t\\end{subfigure}\n\t\\caption{Principal components of all samples (a), samples excepting panels 3 and 6 (b) and samples of panels 3 and 6(c).}\n\t\\label{fig:first_pcs}\t\n\\end{figure}\n\nRandom initialisation was followed for the neural networks. Initial values of the weights and biases of the networks were sampled from a normal zero-mean distribution. The two networks were initialised several times and trained for different sizes of the hidden layer to find the ones with optimal structure for the \nnewly-defined problems. After randomly initialising and training multiple neural networks for both cases and keeping the ones with the minimum loss function value \nthe best architectures were found to be networks with nine nodes in the hidden layer for both cases and seven output nodes for the first dataset and two for the \nsecond. The loss function used in training was the categorical cross-entropy function given by,\n\n\\begin{equation} \n\\label{eq:categorical_crossentropy}\n L(y, \\hat{y}) = -\\frac{1}{N}\\sum_{i=1}^{N}\\sum_{j=1}^{n_{cl}} [y_{i, j}log \\hat{y}_{i, j} + (1 - y_{i, j}) log (1 - \\hat{y}_{i, j})]\n\\end{equation}\n\nIn Equation (\\ref{eq:categorical_crossentropy}), $N$ is the number of samples during training, $n_{cl}$ is the number of possible classes, $\\hat{y}_{i,j}$ the estimated probability that the $i$th point belongs to the $j$th class and $y_{i, j}$ is 1 if the $i$th sample belongs to the $j$th class, otherwise it is 0.\n\nConfusion matrices on the test sets for the classifiers are shown in Tables \\ref{Tab:First_set_conf_mat} and \\ref{Tab:Second_set_conf_mat}. By splitting the dataset \ninto two subsets the total accuracy is slightly increased from 98.14\\% to 98.82\\%. This is best considered in terms of classification error, which has been reduced from\n1.86\\% to 1.18\\%, and this is an important reduction in SHM terms. Reduction of the number of trainable parameters has certainly contributed to this improvement, since \nthe amount of training data is small. Performance on the task of separating only the two smallest panel classes was also increased because it is an easier task for the \nclassifier than trying to discriminate them among the panel removals with greater impact on the novelty indices. 
This fact is also clear in Figure \ref{fig:2_classes_pcs}, \nwhere the principal components of samples belonging to the classes of missing Panels 3 and 6 are clearly separable.\n\n\begin{table}[ht!]\n\t\centering\n\t\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}|p{1.0cm}| }\n\t\t\hline\n\t\tPredicted panel & 1 & 2 & 4& 5 & 7 & 8 & 9\\\n\t\t\hline\n\t\tMissing panel 1& 65 & 1 & 0 & 0 & 0 & 0 & 0 \\\n\t\tMissing panel 2& 0 & 63 & 1 & 0 & 0 & 0 & 2\\\n\t\tMissing panel 4 & 1 & 0 & 65 & 0 & 0 & 0 & 0 \\\n\t\tMissing panel 5& 0 & 0 & 0 & 66 & 0 & 0 & 0 \\\n\t\tMissing panel 7& 0 & 0 & 0 & 0 & 66 & 0 & 0 \\\n\t\tMissing panel 8& 1 & 0 & 0 & 0 & 0 & 65 & 0 \\\n\t\tMissing panel 9& 1 & 0 & 0 & 0 & 0 & 0 & 65 \\\n\t\t\hline\n\t\end{tabular}\n\t\caption{Confusion Matrix of neural network classifier trained on the first dataset, test set, total accuracy: 98.48\%}\n\t\label{Tab:First_set_conf_mat}\n\end{table}\n\n\begin{table}[ht!]\n\t\centering\n\t\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}\n\t\t\hline\n\t\tPredicted panel & 3 & 6 \\\n\t\t\hline\n\t\tMissing panel 3& 66 & 0 \\\n\t\tMissing panel 6& 0 & 66 \\\n\t\t\hline\n\t\end{tabular}\n\t\caption{Confusion Matrix of neural network classifier trained on the second dataset, test set, total accuracy: 100\%}\n\t\label{Tab:Second_set_conf_mat}\n\end{table}\n\n\section{Knowledge transfer between the two problems}\n\label{sec:transfer}\n\nHaving split the problem into two sub-problems, a scheme motivated by transfer learning in deeper learners was examined. The idea was to establish whether the features\nextracted at the hidden layer in one problem could be used for the other. In transfer learning terminology, the seven-class problem specifies the source domain and \ntask, while the two-class problem gives the target domain and task. The transfer is carried out by using the fixed input and hidden layers from the classifier in\nthe source task, as the input and hidden layers of the target task; this means that only the weights between the hidden and output layers remain to be trained for\nthe target task. This strategy reduces the number of parameters considerably. The functional form of the network for the source task is given by,\n\n\begin{equation} \n\label{eq:network_output}\n \mathbf{y} = f_{0}(W_{2}f_{1}(W_{1}\mathbf{x} + b_{1}) + b_{2})\n\end{equation}\nwhere $f_0$ and $f_1$ are the non-linear activation functions of the output layer and the hidden layer respectively, $W_{1,2}$ are the weight matrices of the \ntransformations between the layers, $b_{1, 2}$ are the bias vectors of the layers, $\mathbf{x}$ is the input vector and $\mathbf{y}$ the output vector. The {\em softmax} \nfunction is chosen to be the activation function of the decision layer, as this is appropriate to a classification problem. The prediction of the network, concerning \nwhich damage class the sample belongs to, is the index that maximises the output vector $\mathbf{y}$; the outputs are interpreted as the {\em a posteriori} probabilities\nof class membership, so this leads to a Bayesian decision rule. Loosely speaking, one can think of the transformation between the hidden and output layers as the actual\nclassifier, and the transformation from the input layer into the hidden layer as a map to latent states in which the classes are more easily separable. 
In the context\nof deep networks, the hope is that the earlier layers carry out an automated feature extraction which facilitates an eventual classifier. In the deep context, transfer\nbetween problems is carried out by simply copying the `feature extraction' layers directly into the new network, and only training the later classificatiion layers. The\nsimple idea explored here, is whether that strategy helps in the much more shallow learner considered in this study. The transfer is accomplished by copying the weights $W_1$\nand biases $b_1$ from sub-problem one directly into the network for sub-problem two, and only training the weights $W_2$ and biases $b_2$.\n\nAs before, multiple neural networks were trained on the first dataset. In a transfer learning scheme, it is even more important that models should not be overtrained, \nsince that will make the model too case-specific and it would be unlikely for it to carry knowledge to other problems. To achieve this for the current problem, an early stopping strategy \nwas followed. Models were trained until a point were the value of the loss function decreases less than a percentage of the current value. An example of this can be \nseen in Figure \\ref{fig:early_stoping} where instead of training the neural network for 1000 epochs, training stops at the point indicated with the red arrow.\n\n\\begin{figure}[ht!]\n \t\\centering\n \t\\includegraphics[scale=0.70]{Figure_5}\n \t\\caption{Training and validation loss histories and the point of early stopping (red arrow).}\n \t\\label{fig:early_stoping}\n\\end{figure}\n\nAfter multiple networks were trained following the early stopping scheme above, the network with the lowest value on validation loss was determined and the transfer \nlearning scheme was applied to the second problem. The nonlinear transformation given by the transition from the input layer to the hidden layer was applied on the data \nof the second dataset. Consequently, another neural network was trained on the transformed data, having only one input layer and one output\/decision layer. To comment \non the effect of the transformation, another two-layer network was trained on the original second dataset and the results were compared. 
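As an illustration of this transfer step, a minimal sketch using the Keras library is given below; the layer indexing, the optimiser and the variable names are assumptions made for the example rather than details taken from the study. Freezing the copied hidden layer is equivalent to transforming the data once and training a single-layer network: only the weights $W_2$ and biases $b_2$ of the decision layer of Equation~\eqref{eq:network_output} are updated on the target data.\n\begin{verbatim}\nfrom tensorflow import keras\n\ndef transfer_classifier(source_model, n_target_classes=2):\n    # reuse the trained input-to-hidden map of the source network (assumed to be\n    # its first Dense layer) and train only a new softmax decision layer\n    hidden = source_model.layers[0]\n    hidden.trainable = False             # keep W_1 and b_1 fixed\n    model = keras.Sequential([\n        hidden,\n        keras.layers.Dense(n_target_classes, activation='softmax'),\n    ])\n    model.compile(optimizer='adam', loss='categorical_crossentropy',\n                  metrics=['accuracy'])\n    return model\n\n# usage: model = transfer_classifier(source_model); model.fit(X_target, y_target, ...)\n\end{verbatim}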
\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}\n\t\t\\hline\n\t\tPredicted panel & 3 & 6 \\\\\n\t\t\\hline\n\t\tMissing panel 3& 65 & 1 \\\\\n\t\tMissing panel 6& 2 & 64 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Confusion Matrix of neural network classifier trained on the original data of the second dataset, test set, total accuracy: 97.72\\%}\n\t\\label{Tab:conf_original_data_2_layer}\n\\end{table}\n\n\\begin{table}[ht!]\n\t\\centering\n\t\\begin{tabular}{ |p{3cm}||p{1.0cm}|p{1.0cm}|}\n\t\t\\hline\n\t\tPredicted panel & 3 & 6 \\\\\n\t\t\\hline\n\t\tMissing panel 3& 65 & 1 \\\\\n\t\tMissing panel 6& 3 & 63 \\\\\n\t\t\\hline\n\t\\end{tabular}\n\t\\caption{Confusion Matrix of neural network classifier trained on the transformed data of the second dataset, test set, total accuracy: 96.96\\%}\n\t\\label{Tab:conf_trans_data_2_layer}\n\\end{table}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_6a}\n\t\t\\caption{}\n\t\t\\label{fig:first_dataset_features_pcs}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_6b}\n\t\t\\caption{}\n\t\t\\label{fig:first_dataset_features_pcs_transed}\n\t\\end{subfigure}\n\t\\caption{Principal components of original features of the first dataset (a) and transformed features (b).}\n\t\\label{fig:first_pcs_1}\t\n\\end{figure}\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_7a}\n\t\t\\caption{}\n\t\t\\label{fig:second_dataset_features_pcs}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.8\\linewidth]{Figure_7b}\n\t\t\\caption{}\n\t\t\\label{fig:second_dataset_trans_features_1_pcs}\n\t\\end{subfigure}\n\t\\caption{Principal components of original features of the second dataset (a) and transformed features (b).}\n\t\\label{fig:second_pcs}\t\n\\end{figure}\n\nThe confusion matrices of the two neural networks on the testing data are given in Tables \\ref{Tab:conf_original_data_2_layer} and \\ref{Tab:conf_trans_data_2_layer}; \nthe misclassification rates are very similar. However, it is interesting to also look at the effect of the transfer on the convergence rate of the network trained \non the transferred data and also to illustrate the feature transformation on the first and the second datasets.\n\n\\begin{figure}[ht!]\n\t\\centering\n\t\\includegraphics[scale=0.5]{Figure_8.png}\n\t\\caption{Loss histories of transferred model: train(blue), validation(cyan) and model trained on initial data: train(red), validation(magenta).}\n\t\\label{fig:loss_histories}\n\\end{figure}\n\nThe training histories of the two models can be seen in Figure \\ref{fig:loss_histories}. It is clear that the loss history of the model with transformed data (blue \nand cyan lines) converges faster, especially in the initial part of the training, and it also reaches a lower minimum value for the loss function in the same number \nof training epochs. This can be explained by looking at the effect of the learnt transformation on the data. In Figures \\ref{fig:second_pcs} and \\ref{fig:first_pcs_1} \nthis effect is illustrated. 
Note that the points are different from those in Figure \\ref{fig:first_pcs}, because principal component analysis was performed this time \non the data normalised to the interval $[-1, 1]$ for the neural network training. The transformation spreads out the points of the original problem (first dataset) in \norder to make their separation by the decision layer easier; it clearly accomplishes the same result on the second dataset as well. The points in Figure \\ref{fig:second_dataset_trans_features_1_pcs} are spread out compared to the initial points and thus their separation by the single-layer neural network is easier. \nFurthermore, the points lie further away from the required decision boundary, which explains both the faster training convergence and the lower minimum achieved. \nIn contrast to its effect on the first dataset, the transformation does not concentrate points of the same class in specific areas of the feature space of the second dataset. \nIn Figure \\ref{fig:first_dataset_features_pcs_transed} the points are both spread out and gathered more closely according to the class to which they belong. This probably means that \nonly a part of the physics of the problem is transferred to the second problem through this specific transformation.\n\n\n\\section{Discussion and Conclusions}\n\\label{sec:conclusions}\n\nFor the SHM classification (location) problem considered here, splitting the dataset into two subsets contributed to increasing the classification accuracy by a \nsmall percentage. This result was explained by the lesser effect that the small panel removals had on the novelty index features. This issue arose because the \npoints representing these classes were close to each other and also to points from other classes -- those corresponding to large panel removal\/damage. By treating \nthe two damage cases as separate problems, perfect accuracy was achieved in the task of classifying damage to the small panels, and there was also a small increase \nin the performance of the classifier tasked with identifying the more severe damage states.\n\nA crude form of transfer learning was also investigated. After the neural network classifier had been trained on the first dataset of the seven damage cases, \ntransfer of its knowledge to the second sub-problem was considered. This was accomplished by copying the first two layers of the first classifier -- the `feature \nextraction' layers -- directly into the second classifier and only training the connections from the hidden layer to the output. The result is not particularly \nprofound; the transfer does yield a good classifier, even with the smaller set of trainable parameters, but it is not as good as training the network from scratch.\nThe result is interesting, because it is clear that the source network is carrying out a feature clustering and cluster separation on the source data which is still\nuseful when the target data are presented. This suggests that the main issue with the small-panel damage classification is that the data are masked by the close \npresence of the large-panel data. Separating out the small-panel data is the obvious answer. The results are also of interest because they illustrate, in a `toy' example, how\nthe early layers in deeper networks manipulate features automatically in order to improve the ultimate classification step. 
The other benefit of the \nseparation into sub-problems was the faster convergence of the network training.\n\n\\section{Acknowledgement}\n\\label{sec:ack}\n\nThe authors would like to acknowledge Graeme Manson for providing the data used. Moreover, the authors would like to acknowledge the support of the Engineering and Physical Sciences Research Council (EPSRC) and the European Union (EU). G.T. is supported by funding from the EU's Horizon 2020 research and innovation programme under the Marie Sk\u0142odowska-Curie grant agreement DyVirt (764547). The other authors are supported by EPSRC grants EP\/R006768\/1, EP\/R003645\/1, EP\/R004900\/1 and EP\/N010884\/1.\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\vspace{-0.25cm}\nAn intelligent system that interacts with its environment faces the problem of \\emph{continual learning}, in which new experience is constantly being acquired, while old experience may still be relevant \\cite{ring1997child}. An effective continual learning system must optimize for two potentially conflicting goals. First, when a previously encountered scenario is revisited, performance should immediately be good, ideally as good as it was historically. Second, maintenance of old skills should not inhibit rapid acquisition of a new skill. These simultaneous constraints -- maintaining the old while still adapting to the new -- represent the challenge known as the \\emph{stability-plasticity} dilemma \\cite{grossberg1982does}.\n\nThe quintessential failure mode of continual learning is \\emph{catastrophic forgetting}, in which new knowledge supplants old knowledge, resulting in high plasticity but little stability. Within biological systems, hippocampal replay has been proposed as a systems-level mechanism to reduce catastrophic forgetting and improve generalization, as in the theory of complementary learning systems \\cite{mcclelland1998complementary}. This process complements local consolidation of past experience at the level of individual neurons and synapses, which is also believed to be present in biological neural networks \\cite{benna2016computational, leimer2019synaptic}.\n\nWithin artificial continual learning systems, deep neural networks have become ubiquitous tools and increasingly are applied in situations where catastrophic forgetting becomes relevant. In reinforcement learning (RL) settings, this problem is often circumvented by using massive computational resources to ensure that all tasks are learned simultaneously, instead of sequentially. Namely, in simulation or in self-play RL, fresh data can be generated on demand and trained on within the same minibatch, resulting in a stable data distribution.\n\nHowever, as RL is increasingly applied to continual learning problems in industry or robotics, situations arise where gathering new experience is expensive or difficult, and thus simultaneous training is infeasible. Instead, an agent must be able to learn from one task at a time, and the sequence in which tasks occur is not under the agent's control. In fact, boundaries between tasks will often be unknown -- or tasks will deform continuously and not have definite boundaries at all \\cite{kaplanis2019policy}. 
Such a paradigm for training eliminates the possibility of simultaneously acting upon and learning from several tasks, and leads to the possibility of catastrophic forgetting.\n\nThere has recently been a surge of interest in methods for preventing catastrophic forgetting in RL, inspired in part by neuroscience \\cite{kaplanis2018continual,kirkpatrick2017overcoming,rusu2016progressive, schwarz2018progress}. Remarkably, however, such work has focused on synaptic consolidation approaches, while possibilities of experience replay for reducing forgetting have largely been ignored, and it has been unclear how replay could be applied in this context within a deep RL framework.\n\nWe here demonstrate that replay can be a powerful tool in continual learning. We propose a simple technique, Continual Learning with Experience And Replay (CLEAR), mixing on-policy learning from novel experiences (for plasticity) and off-policy learning from replay experiences (for stability). For additional stability, we introduce behavioral cloning between the current policy and its past self. While it can be memory-intensive to store all past experience, we find that CLEAR is just as effective even when memory is severely constrained. Our approach has the following advantages:\n\\begin{itemize}\n \\item \\textbf{Stability and plasticity.} CLEAR performs better than state-of-the-art (Elastic Weight Consolidation, Progress \\& Compress), almost eliminating catastrophic forgetting.\n \\item \\textbf{Simplicity.} CLEAR is much simpler than prior methods for reducing forgetting and can also be combined easily with other approaches.\n \\item \\textbf{No task information.} CLEAR does not rely upon the identity of tasks or the boundaries between them, by contrast with prior art. This means that CLEAR can be applied to a much broader range of situations, where tasks are unknown or are not discrete.\n\\end{itemize}\n\n\\vspace{-0.25cm}\n\\section{Related work}\n\\vspace{-0.25cm}\nThe problem of catastrophic forgetting in neural networks has long been recognized \\cite{grossberg1982does}, and it is known that rehearsing past data can be a satisfactory antidote for some purposes \\cite{french1999catastrophic, mcclelland1998complementary}. Consequently, in the supervised setting that is the most common paradigm in machine learning, forgetting has been accorded less attention than in cognitive science or neuroscience, since a fixed dataset can be reordered and replayed as necessary to ensure high performance on all samples.\n\nIn recent years, however, there has been renewed interest in overcoming catastrophic forgetting in RL and in supervised learning from streaming data, where it is not possible simply to reorder the data (see e.g.~\\cite{ hayes2018memory, lopez2017gradient, parisi2018continual, rios2018closed}). Current strategies for mitigating catastrophic forgetting have primarily focused on schemes for protecting the parameters inferred in one task while training on another. For example, in Elastic Weight Consolidation (EWC) \\cite{kirkpatrick2017overcoming}, weights important for past tasks are constrained to change more slowly while learning new tasks. Progressive Networks \\cite{rusu2016progressive} freezes subnetworks trained on individual tasks, and Progress \\& Compress \\cite{schwarz2018progress} uses EWC to consolidate the network after each task has been learned. 
Kaplanis et al.~\\cite{kaplanis2018continual} treat individual synaptic weights as dynamical systems with latent dimensions \/ states that protect information. Outside of RL, Zenke et al.~\\cite{zenke2017continual} develop a method similar to EWC that maintains estimates of the importance of weights for past tasks, Li and Hoiem \\cite{li2017learning} leverage a mixture of task-specific and shared parameters, and Milan et al.~\\cite{milan2016forget} develop a rigorous Bayesian approach for estimating unknown task boundaries. Notably all these methods assume that task identities or boundaries are known, with the exception of \\cite{milan2016forget}, for which the approach is likely not scalable to highly complex tasks.\n\nRehearsing old data via experience replay buffers is a common technique in RL, but such methods have largely been driven by the goal of data-efficient learning on single tasks \\cite{gu2017interpolated,lin1992self, mnih2015human}. Research in this vein has included prioritized replay for maximizing the impact of rare experiences \\cite{schaul2015prioritized}, learning from human demonstration data seeded into a buffer \\cite{hester2017deep}, and methods for approximating replay buffers with generative models \\cite{shin2017continual}. A noteworthy use of experience replay buffers to protect against catastrophic forgetting was demonstrated in Isele and Cosgun \\cite{isele2018selective} on toy tasks, with a focus on how buffers can be made smaller. Previous works \\cite{gu2017interpolated,o2016combining, wang2016sample} have explored mixing on- and off-policy updates in RL, though these were focused on speed and stability in individual tasks and did not examine continual learning.\n\nWhile it may not be surprising that replay can reduce catastrophic forgetting to some extent, it is remarkable that, as we show, it is powerful enough to outperform state-of-the-art methods. There is a marked difference between reshuffling data in supervised learning and replaying past data in RL. Notably, in RL, past data are typically leveraged best by \\emph{off-policy} algorithms since historical actions may come from an out-of-date policy distribution. Reducing this deviation is our motivation for the behavioral cloning component of CLEAR, inspired by work showing the power of policy consolidation \\cite{kaplanis2019policy}, self-imitation \\cite{oh2018self}, and knowledge distillation \\cite{furlanello2018born}.\n\n\\vspace{-0.25cm}\n\\section{The CLEAR Method}\n\\vspace{-0.25cm}\nCLEAR uses actor-critic training on a mixture of new and replayed experiences. We employ distributed training based on the Importance Weighted Actor-Learner Architecture presented in \\cite{espeholt2018impala}. Namely, a single learning network is fed experiences (both novel and replay) by a number of acting networks, for which the weights are asynchronously updated to match those of the learner. Training proceeds as in \\cite{espeholt2018impala} by the \\emph{V-Trace} off-policy learning algorithm, which uses truncated importance weights to correct for off-policy distribution shifts. While V-Trace was designed to correct for the lag between the parameters of the acting networks and those of the learning network, we find it also successfully corrects for the distribution shift corresponding to replay experience. 
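\n\nAs a schematic summary, and not the actual implementation (which follows~\\cite{espeholt2018impala}), the extra ingredients of CLEAR can be sketched as follows: each learner batch mixes novel trajectories from the actors with trajectories sampled from a replay buffer (50-50 by default), and two behavioral-cloning penalties, defined formally below, are added for the replayed data only. All names in the sketch are illustrative assumptions.\n\n\\begin{verbatim}\n# Schematic sketch of the penalties CLEAR adds for replayed\n# experience (names are placeholders; see the definitions below).\nimport math\n\ndef cloning_losses(pi_theta, mu, v_theta, v_replay):\n    # pi_theta, mu: dicts mapping actions to (nonzero)\n    # probabilities under the current and historical policies.\n    kl = sum(mu[a] * math.log(mu[a] \/ pi_theta[a]) for a in mu)\n    value_l2 = (v_theta - v_replay) ** 2\n    return kl, value_l2\n\\end{verbatim}\n\n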
Our network architecture and training hyperparameters are chosen to match those in~\\cite{espeholt2018impala} and are not further optimized.\n\nFormally, let $\\theta$ denote the network parameters, $\\pi_\\theta$ the (current) policy of the network over actions $a$, $\\mu$ the policy generating the observed experience, and $h_s$ the hidden state of the network at time $s$. Then, the V-Trace target $v_s$ is given by:\n$$v_s:=V(h_s) + \\sum_{t=s}^{s+n-1} \\gamma^{t-s}\\left(\\prod_{i=s}^{t-1} c_i\\right)\\delta_t V,$$\nwhere $\\delta_t V:= \\rho_t\\left(r_t + \\gamma V(h_{t+1}) - V(h_t)\\right)$ for truncated importance sampling weights $c_i := \\min(\\bar{c}, \\frac{\\pi_\\theta(a_i| h_i)}{\\mu(a_i|h_i)})$, and $\\rho_t = \\min(\\bar{\\rho}, \\frac{\\pi_\\theta(a_t| h_t)}{\\mu(a_t|h_t)})$ (with $\\bar{c}$ and $\\bar{\\rho}$ constants). The policy gradient loss is:\n$$L_\\text{policy-gradient} := -\\rho_s\\log\\pi_\\theta(a_s| h_s)\\left(r_s+\\gamma v_{s+1} - V_\\theta(h_s)\\right).$$\nThe value function update is given by the L2 loss, and we regularize policies using an entropy loss:\n$$L_\\text{value} := \\left(V_\\theta(h_s) - v_s\\right)^2,\\quad L_\\text{entropy} := \\sum_a \\pi_\\theta(a|h_s) \\log \\pi_\\theta(a|h_s).$$\nThe loss functions $L_\\text{policy-gradient}$, $L_\\text{value}$, and $L_\\text{entropy}$ are applied both for new and replay experiences. In addition, we add $L_\\text{policy-cloning}$ and $L_\\text{value-cloning}$ for replay experiences only. In general, our experiments use a 50-50 mixture of novel and replay experiences, though performance does not appear to be very sensitive to this ratio. Further implementation details are given in Appendix \\ref{sec:methods}.\n\nIn the case of replay experiences, two additional loss terms are added to induce behavioral cloning between the network and its past self, with the goal of preventing network output on replayed tasks from drifting while learning new tasks. We penalize (1) the KL divergence between the historical policy distribution and the present policy distribution, (2) the L2 norm of the difference between the historical and present value functions. Formally, this corresponds to adding the loss functions:\n$$L_\\text{policy-cloning} := \\sum_a \\mu(a|h_s) \\log\\frac{\\mu(a|h_s)}{\\pi_\\theta(a|h_s)},\\quad L_\\text{value-cloning} := ||V_\\theta(h_s) - V_\\text{replay}(h_s)||_2^2.$$\nNote that computing $\\text{KL}[\\mu||\\pi_\\theta]$ instead of $\\text{KL}[\\pi_\\theta||\\mu]$ ensures that $\\pi_\\theta(a|h_s)$ is nonzero wherever the historical policy is as well.\n\n\\vspace{-0.25cm}\n\\section{Results}\n\\vspace{-0.25cm}\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.33]{figures\/separate.png}\n \\includegraphics[scale=0.33]{figures\/simultaneous.png}\n \\includegraphics[scale=0.33]{figures\/sequential.png}\n \\caption{Separate, simultaneous, and sequential training: the $x$-axis denotes environment steps summed across all tasks and the $y$-axis episode score. In ``Sequential'', thick line segments are used to denote the task currently being trained, while thin segments are plotted by evaluating performance without learning. In simultaneous training, performance on \\texttt{explore\\_object\\_locations\\_small} is higher than in separate training, an example of modest constructive interference. In sequential training, tasks that are not currently being learned exhibit very dramatic catastrophic forgetting. 
See Appendix \\ref{sec:cumsum} for plots of the same data, showing cumulative performance.}\n \\label{fig:sequential}\n \\vspace{-0.5em}\n\\end{figure*}\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[scale=0.5]{figures\/clear.png}\n \\includegraphics[scale=0.5]{figures\/clear_without_behavioral_cloning.png}\n \\caption{Demonstration of CLEAR on three DMLab tasks, which are trained cyclically in sequence. CLEAR reduces catastrophic forgetting so significantly that sequential tasks train almost as well as simultaneous tasks (compare to Figure \\ref{fig:sequential}). When the behavioral cloning loss terms are ablated, there is still reduced forgetting from off-policy replay alone. As above, thicker line segments are used to denote the task that is currently being trained. See Appendix \\ref{sec:cumsum} for plots of the same data, showing cumulative performance.}\n \\vspace{-1.5em}\n \\label{fig:clear}\n\\end{figure*}\n\n\\subsection{Catastrophic forgetting vs.~interference}\n\\vspace{-0.25cm}\n\\label{subsec:interference}\nOur first experiment (Figure \\ref{fig:sequential}) was designed to distinguish between two distinct concepts that are sometimes conflated, \\emph{interference} and \\emph{catastrophic forgetting}, and to emphasize the outsized role of the latter as compared to the former. Interference occurs when two or more tasks are incompatible (\\emph{destructive interference}) or mutually helpful (\\emph{constructive interference}) within the same model. Catastrophic forgetting occurs when a task's performance goes down not because of incompatibility with another task but because another task overwrites it within the model. As we aim to illustrate, the two are independent phenomena, and while interference may happen, forgetting is ubiquitous.\\footnote{As an example of (destructive) interference, learning how to drive on the right side of the road may be difficult while also learning how to drive on the left side of the road, because of the nature of the tasks involved. Forgetting, by contrast, results from sequentially training on one and then another task -- e.g.,~learning how to drive and then not doing it for a long time.}\n\nWe considered a set of three distinct tasks within the DMLab set of environments \\cite{beattie2016deepmind}, and compared three paradigms by which a network may be trained to perform these three tasks: (1) training networks on the individual tasks \\emph{separately}, (2) training a single network on examples from all tasks \\emph{simultaneously} (which permits interference among tasks), and (3) training a single network \\emph{sequentially} on examples from one task, then the next task, and so on cyclically. Across all training protocols, the total amount of experience for each task was held constant. Thus, for separate networks training on separate tasks, the $x$-axis in our plots shows the total number of environment frames summed across all tasks. For example, the three-million-frame mark corresponds to one million frames of experience on each of tasks one, two, and three. This allows a direct comparison to simultaneous training, in which the same network was trained on all three tasks.\n\nWe observe that in DMLab, there is very little difference between separate and simultaneous training. This indicates minimal interference between tasks. If anything, there is slight constructive interference, with simultaneous training performing marginally better than separate training. 
We assume this is a result of (i) commonalities in image processing required across different tasks, and (ii) certain basic exploratory behaviors, e.g.,~moving around, that are advantageous across tasks. (By contrast, destructive interference might result from incompatible behaviors or from insufficient model capacity.) \n\nBy contrast, there is a large difference between either of the above modes of training and sequential training, where performance on a task decays immediately when training switches to another task -- that is, catastrophic forgetting. Note that the performance of the sequential training appears at some points to be greater than that of separate training. This is purely because in sequential training, training proceeds exclusively on a single task, then exclusively on another task. For example, the first task quickly increases in performance since the network is effectively seeing three times as much data on that task as the networks training on separate or simultaneous tasks.\n\\begin{figure*}[htb]\n\\centering\n \\begin{tabular}{l|c|c|c}\n &\\small{\\texttt{explore...}}&\\small{\\texttt{rooms\\_collect...}}&\\small{\\texttt{rooms\\_keys...}}\\\\ \\hline\n Separate&29.24&8.79&19.91\\\\ \\hline\n Simultaneous&32.35&8.81&20.56\\\\ \\hline \\hline\n Sequential (no CLEAR)&17.99&5.01&10.87\\\\ \\hline\n CLEAR (50-50 new-replay) &\\textbf{31.40}&\\textbf{8.00}&18.13\\\\ \\hline\n CLEAR w\/o behavioral cloning&28.66&7.79&16.63\\\\ \\hline\n CLEAR, 75-25 new-replay&30.28&7.83&17.86\\\\ \\hline\n CLEAR, 100\\% replay&31.09&7.48&13.39\\\\ \\hline\n CLEAR, buffer 5M&30.33&\\textbf{8.00}&18.07\\\\ \\hline\n CLEAR, buffer 50M&30.82&7.99&\\textbf{18.21}\\\\\n \\end{tabular}\n \\caption{Quantitative comparison of the final cumulative performance between standard training (``Sequential (no CLEAR)'') and various versions of CLEAR on a cyclically repeating sequence of DMLab tasks. We also include the results of training on each individual task with a separate network (``Separate'') and on all tasks simultaneously (``Simultaneous'') instead of sequentially (see Figure \\ref{fig:sequential}). As described in Section \\ref{subsec:interference}, these are no-forgetting scenarios and thus present upper bounds on the performance expected in a continual learning setting, where tasks are presented sequentially. Remarkably, CLEAR achieves performance comparable to ``Separate'' and ``Simultaneous'', demonstrating that forgetting is virtually eliminated. See Appendix \\ref{sec:cumsum} for further details and plots.}\n \\vspace{-1.5em}\n \\label{tab:dmlab}\n\\end{figure*}\n\\vspace{-0.25cm}\n\\subsection{Stability}\n\\vspace{-0.25cm}\nWe here demonstrate the efficacy of CLEAR for diminishing catastrophic forgetting (Figure \\ref{fig:clear}). We apply CLEAR to the cyclically repeating sequence of DMLab tasks used in the preceding experiment. Our method effectively eliminates forgetting on all three tasks, while preserving overall training performance (see ``Sequential'' training in Figure \\ref{fig:sequential} for reference). When the task switches, there is little, if any, dropoff in performance when using CLEAR, and the network picks up immediately where it left off once a task returns later in training. 
Without behavioral cloning, the mixture of new experience and replay still reduces catastrophic forgetting, though the effect is reduced.\n\nIn Figure \\ref{tab:dmlab}, we perform a quantitative comparison of the performance for CLEAR against the performance of standard training on sequential tasks, as well as training on tasks separately and simultaneously. In order to perform a comparison that effectively captures the overall performance during continual learning (including the effect of catastrophic forgetting), the reward shown for time $t$ is the average $(1\/t)\\sum_{s0}$\nsuch that if one can protect $B$ vertices at each time\nstep (instead of just $1$), then there is a protection\nstrategy where none of the leaves of the tree\ncatches fire.\nIn this context, $B$ is referred to as\nthe \\emph{number of firefighters}.\n\n\nBoth the Firefighter problem and RMFC---both restricted\nto trees as defined above---are known to be computationally\nhard problems.\nMore precisely, Finbow, King, MacGillivray and\nRizzi~\\cite{FinbowKingMacGillivrayRizzi2007} showed \nNP-hardness for the Firefighter problem on trees with \nmaximum degree three.\nFor RMFC on trees, it is NP-hard to decide\nwhether one firefighter,\ni.e., $B=1$, is sufficient~\\cite{king_2010_firefighter};\nthus, unless $\\textsc{P}=\\textsc{NP}$, there is\nno (efficient) approximation algorithm with an approximation\nfactor strictly better than $2$.\n\nOn the positive side, several approximation algorithms\nhave been suggested for the Firefighter problem and RMFC.\nHartnell and Li~\\cite{HartnellLi2000} showed that a \nnatural greedy algorithm is a $\\frac{1}{2}$-approximation for the\nFirefighter problem. This approximation guarantee\nwas later improved by Cai, Verbin\nand Yang~\\cite{CaiVerbinYang2008} to $1-\\frac{1}{e}$,\nusing a natural linear programming (LP) relaxation\nand dependent randomized rounding.\nIt was later observed by Anshelevich, Chakrabarty, Hate \nand Swamy~\\cite{AnshelevichChakrabartyHateSwamy2009}\nthat the Firefighter problem on\ntrees can be interpreted as a monotone submodular\nfunction maximization (SFM) problem subject to a\npartition matroid constraint.\nThis leads to alternative ways to obtain a\n$(1-\\frac{1}{e})$-approximation by using a recent\n$(1-\\frac{1}{e})$-approximation for monotone SFM subject\nto a matroid\nconstraint~\\cite{vondrak_2008_optimal,calinescu_2011_maximizing}.\nThe factor $1-\\frac{1}{e}$ was later only improved for\nvarious restricted tree\ntopologies (see~\\cite{IwaikawaKamiyamaMatsui2011})\nand hence, for arbitrary trees,\nthis is the best known approximation factor to date.\n\nFor RMFC on trees, Chalermsook and\nChuzhoy~\\cite{ChalermsookChuzhoy2010} presented an\n$O(\\log^* n)$-approximation, where $n=|V|$ is the\nnumber of vertices.\\footnote{\n$\\log^* n$ denotes the minimum number $k$ of\nlogs of base two that have to be nested such\nthat $\\underbrace{\\log\\log\\dots\\log}_{k \\text{ logs}} n \\leq 1$.}\nTheir algorithm is based on a natural\nlinear program which is a straightforward\nadaptation of the one used in~\\cite{CaiVerbinYang2008}\nto get a $(1-\\frac{1}{e})$-approximation for the\nFirefighter problem on trees.\n\n\n\nWhereas there are still considerable gaps between\ncurrent hardness results and approximation algorithms\nfor both the Firefighter problem and RMFC on trees, \nthe currently best approximations essentially match\nthe integrality gaps of the underlying LPs.\nMore precisely,\nChalermsook and Vaz~\\cite{chalermsook_2016_new}\nshowed that for any 
$\\epsilon >0$, the canonical LP used\nfor the Firefighter problem on trees has an integrality\ngap of $1-\\frac{1}{e}+\\epsilon$. This generalized\na previous result by Cai, Verbin and Yang~\\cite{CaiVerbinYang2008},\nwho showed the same gap if the integral solution is required\nto lie in the support of an optimal LP solution.\nFor RMFC on trees, the integrality gap\nof the underlying LP\nis~$\\Theta(\\log^* n)$~\\cite{ChalermsookChuzhoy2010}.\n\n\nIt remained open to what extent these integrality\ngaps may reflect the approximation hardnesses of the\nproblems.\nThis question is motivated by two related problems\nwhose hardnesses of approximation indeed match the\nabove-mentioned integrality gaps for the Firefighter\nproblem and RMFC.\nIn particular, many versions of monotone SFM subject\nto a matroid constraint---which we recall was shown\nin~\\cite{AnshelevichChakrabartyHateSwamy2009}\nto capture the Firefighter problem on\ntrees as a special\ncase---are\nhard to approximate to within a factor of\n$1-1\/e+\\epsilon$ for any constant $\\epsilon >0$.\nThis includes the problem of maximizing an explicitly\ngiven coverage function subject to a single cardinality\nconstraint, as shown by Feige~\\cite{feige_1998_threshold}.\nMoreover, as highlighted in~\\cite{ChalermsookChuzhoy2010},\nthe Asymmetric $k$-center problem is similar in nature\nto RMFC, and has an approximation hardness of\n$\\Theta(\\log^* n)$.\n\n\nThe goal of this paper is to fill the gap between\ncurrent approximation ratios and hardness results\nfor the Firefighter problem and RMFC on trees.\nIn particular, we present approximation ratios\nthat nearly match the hardness results, thus showing\nthat both problems can be approximated to factors\nthat are substantially better than the integrality\ngaps of the natural LPs.\nOur results are based on several new techniques,\nwhich may be of independent interest.\n\n\n\n\\subsection{Our results}\n\n\nOur main results show\nthat both the Firefighter\nproblem and RMFC admit strong approximations\nthat essentially match known hardness bounds,\nshowing that approximation factors can be achieved that\nare substantially stronger than the integrality\ngaps of the natural LPs.\nIn particular, we obtain the following result\nfor RMFC.\n\\begin{theorem}\\label{thm:O1RMFC}\nThere is a $12$-approximation for RMFC.\n\\end{theorem}\nRecalling that RMFC is hard to approximate\nwithin any factor better than $2$, the above\nresult is optimal up to a constant factor,\nand improves on the previously best\n$O(\\log^* n)$-approximation of\nChalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}.\n\n\nMoreover, our main result for the Firefighter problem\nis the following, which, in view of NP-hardness \nof the problem, is essentially best possible in\nterms of approximation guarantee.\n\\begin{theorem}\\label{thm:PtasFF}\nThere is a PTAS for the Firefighter problem\non trees.\\footnote{A polynomial\ntime approximation scheme (PTAS) is\nan algorithm that, for any constant\n$\\epsilon > 0$, returns in polynomial time\na $(1-\\epsilon)$-approximate solution.}\n\\end{theorem}\n\nNotice that the Firefighter problem does not admit\nan FPTAS\\footnote{A fully polynomial time approximation\nscheme (FPTAS) is a PTAS with running\ntime polynomial in the input size and $\\frac{1}{\\epsilon}$.}\nunless $\\textsc{P}=\\textsc{NP}$, since the optimal\nvalue of any Firefighter problem on a tree of\n$n$ vertices is bounded by $O(n)$.\\footnote{\nThe nonexistence of FPTASs unless 
$\\textsc{P}=\\textsc{NP}$\ncan often be derived easily from strong\nNP-hardness. Notice that the Firefighter problem\nis indeed strongly NP-hard because\nits input size is $O(n)$, in which case NP-hardness is\nequivalent to strong NP-hardness.\n}\nWe introduce several new techniques that allow us\nto obtain approximation factors well beyond \nthe integrality gaps of the natural LPs,\nwhich have been a barrier for previous approaches.\nWe start by providing an overview of these techniques.\n\n\n\\smallskip\n\nDespite the fact that we obtain\napproximation\nfactors beating the integrality gaps, the natural LPs\nplay a central role in our approaches.\nWe start by introducing general transformations\nthat allow for transforming the Firefighter problem\nand RMFC into a more compact and better structured\nform, only losing small factors in terms of\napproximability.\nThese transformations by themselves do not decrease\nthe integrality gaps.\nHowever, they allow us to identify small\nsubstructures, over which we can\noptimize efficiently, and having an optimal solution\nto these subproblems we can define\na residual LP with small integrality gap.\n\nSimilar high-level approaches,\nlike guessing a constant-size\nbut important subset of an optimal solution are well-known\nin various contexts to decrease \nintegrality gaps of natural LPs. The best-known example\nmay be classic PTASs for the knapsack problem, where the\nintegrality gap of the natural LP can be decreased to\nan arbitrarily small constant by first guessing a constant\nnumber of heaviest elements of an optimal solution.\nHowever, our approach differs substantially \nfrom this standard enumeration idea.\nApart from the above-mentioned transformations which,\nas we will show later, already lead to new results\nfor both RMFC and the Firefighter problem, we\nwill introduce new combinatorial approaches to gain information\nabout a \\emph{super-constant} subset of an optimal solution.\nIn particular, for the RMFC problem we define\na recursive enumeration\nalgorithm which, despite being very slow for enumerating all\nsolutions, can be shown to reach a good subsolution \nwithin a small recursion depth that can be reached in\npolynomial time.\nThis enumeration procedure explores the space step by\nstep, and at each step we first solve an LP that determines\nhow to continue the enumeration in the next step.\nWe think that this LP-guided enumeration\ntechnique may be of independent interest.\nFor the Firefighter problem, we use a well-chosen\nenumeration procedure to identify a polynomial\nnumber of additional constraints to be added to the\nLP, that improves its integrality gap to\n$1-\\epsilon$.\n\n\n\n\n\n\n\n\n\n\\subsection{Further related results}\nIwaikawa, Kamiyama and Matsui~\\cite{IwaikawaKamiyamaMatsui2011}\nshowed that the approximation guarantee of $1-\\frac{1}{e}$ can be improved \nfor some restricted families of trees, in particular of low maximum degree.\nAnshelevich, Chakrabarty, Hate and \nSwamy~\\cite{AnshelevichChakrabartyHateSwamy2009} studied the approximability\nof the Firefighter problem in general graphs, which they prove admits no \n$n^{1-\\epsilon}$-approximation for any $\\epsilon > 0$, unless\n$\\textsc{P}=\\textsc{NP}$. In a \ndifferent model, where the protection also spreads through the graph \n(the \\emph{Spreading Model}), the authors show that the problem \nadmits a polynomial $(1-\\frac{1}{e})$-approximation on general graphs. 
\nMoreover, for RMFC, an $O(\\sqrt n)$-approximation for general \ngraphs and an $O(\\log n)$-approximation for directed layered\ngraphs is presented.\nThe\nlatter result was obtained independently by Chalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}.\nKlein, Levcopoulos and Lingas~\\cite{KleinLevcopoulosLingas2014} introduced a \ngeometric variant of the Firefighter problem, proved its NP-hardness and provided \na constant-factor approximation algorithm. \nThe Firefighter problem and RMFC are natural special cases\nof the Maximum Coverage Problem with \nGroup Constraints (MCGC)~\\cite{ChekuriKumar2004} and the \nMultiple Set Cover problem (MSC)~\\cite{ElkinKortsarz2006}, respectively. \nThe input in MCGC is a set system consisting of a finite set $X$\nof elements with nonnegative weights, a\ncollection of subsets $\\mathcal{S} = \\{S_1, \\cdots, S_k\\}$ of $X$ and \nan integer $k$. The sets in $\\mathcal{S}$ are partitioned into \ngroups $G_1, \\cdots, G_l\\subseteq \\mathcal{S}$.\nThe goal is to pick a\nsubset $H\\subseteq \\mathcal{S}$ of $k$ \nsets from $\\mathcal{S}$ whose union covers elements of total\nweight as large as possible with the \nadditional constraint that $|H \\cap G_j| \\leq 1$\nfor all $j\\in [l]\\coloneqq \\{1,\\dots, l\\}$. In MSC,\ninstead of the fixed bounds for groups and the parameter $k$, the goal is\nto choose a subset $H\\subseteq \\mathcal{S}$\nthat covers $X$ completely, while \nminimizing $\\max_{j\\in [l]}|H \\cap G_j|$. The Firefighter\nproblem and RMFC can naturally be interpreted as special cases of the latter\nproblems with a laminar set system $\\mathcal{S}$.\n\n\nThe Firefighter problem admits polynomial time algorithms in some restricted classes \nof graphs. Finbow, King, MacGillivray and Rizzi~\\cite{FinbowKingMacGillivrayRizzi2007}\nshowed that, while the problem is NP-hard on trees with maximum degree three, \nwhen the fire starts at a vertex with degree two in a subcubic tree, the problem is solvable \nin polynomial time. Fomin, Heggernes and van Leeuwen~\\cite{FominHeggernes_vanLeeuwen2012}\npresented polynomial algorithms for interval graphs, split graphs,\npermutation graphs and $P_k$-free graphs.\n\nSeveral sub-exponential exact algorithms were developed for the Firefighter\nproblem on trees. Cai, Verbin and Yang~\\cite{CaiVerbinYang2008} presented\na $2^{O(\\sqrt{n}\\log n)}$-time algorithm. 
\nFloderus, Lingas and Persson~\\cite{FloderusLingasPersson2013} presented a\nsimpler algorithm with a slightly better running time, as well as a\nsub-exponential algorithm for general graphs in the\nspreading model and an $O(1)$-approximation in planar graphs\nunder some further conditions.\n\nAdditional directions of research on the Firefighter problem include parameterized\ncomplexity (Cai, Verbin and Yang~\\cite{CaiVerbinYang2008}, Bazgan, Chopin and \nFellows~\\cite{BazganChopinFellows2011}, Cygan, Fomin and \nvan Leeuwen~\\cite{CyganFomin_vanLeeuwen2012} and \nBazgan, Chopin, Cygan, Fellows, Fomin and \nvan Leeuwen~\\cite{BazganChopinCyganFellowsFomin_van_Leeuwen2014}), generalizations\nto the case of many initial fires and many firefighters (Bazgan, Chopin and \nRies~\\cite{BazganChopinRies2013} and Costa, Dantas, Dourado, Penso and \nRautenbach~\\cite{CostaDantasDouradoPensoRautenbach2013}),\nand the study of potential strengthenings of the canonical LP for\nthe Firefighter problem on trees (Hartke~\\cite{Hartke2006} and Chalermsook and Vaz~\\cite{chalermsook_2016_new}).\n\nComputing the \\emph{Survivability} of a graph is \na further problem closely related to Firefighting\nthat has attracted considerable attention\n(see~\\cite{CaiWang2009,CaiChengVerbinZhou2010,Pralat2013,Esperet_Van_Den_HeuvelMaffraySipma2013,Gordinowicz2013,KongZhangWang2014}).\nFor a graph $G$ and a parameter\n$k\\in \\mathbb{Z}_{\\geq 0}$, the $k$-survivability of $G$ \nis the average fraction of nodes that one can save with $k$ firefighters\nin $G$, when the fire starts at a random node.\n\n\n\nFor further references we refer the reader to the survey of Finbow and \nMacGillivray~\\cite{FinbowMacGillivray2009}.\n\n\n\\subsection{Organization of the paper}\n\n\n\nWe start by introducing the classic linear programming\nrelaxations for the Firefighter problem and RMFC\nin Section~\\ref{sec:preliminaries}.\nSection~\\ref{sec:overview} outlines our main\ntechniques and algorithms. 
Some \nproofs and additional discussion\nare deferred to later sections, namely\nSection~\\ref{sec:proofsCompression}, providing\ndetails on a compression technique that is\ncrucial for both our algorithms, Section~\\ref{sec:proofsFF},\ncontaining proofs for results related to the\nFirefighter problem, and Section~\\ref{sec:proofsRMFC},\ncontaining proofs for results related to RMFC.\nFinally, Appendix~\\ref{apx:trans} contains some basic\nreductions showing how to reduce different variations\nof the Firefighter problem to each other.\n\n\n\n\n\n\n\n\n\\section{Classic LP relaxations and preliminaries}\n\\label{sec:preliminaries}\n\nInterestingly, despite the fact that we\nobtain approximation factors considerably\nstronger than the known integrality gaps of the\nnatural LPs,\nthese LPs still play a central role in our approaches.\nWe thus start by introducing the natural LPs together with\nsome basic notation and terminology.\n\nLet $L\\in \\mathbb{Z}_{\\geq 0}$ be the \\emph{depth} of\nthe tree, i.e., the largest\ndistance---in terms of number of edges---between $r$\nand any other vertex in $G$.\nHence, after at most $L$ time steps, the fire\nspreading process will halt.\nFor $\\ell\\in [L]:=\\{1,\\dots, L\\}$, let\n$V_\\ell\\subseteq V$ be the set of all vertices of\ndistance $\\ell$ from $r$, which we call the\n\\emph{$\\ell$-th level} of the instance.\nFor brevity, we use $V_{\\leq \\ell} = \\cup_{k=1}^\\ell V_k$,\nand we define in the same spirit $V_{\\geq \\ell}$, \n$V_{< \\ell}$, and $V_{> \\ell}$.\nMoreover, we denote by\n$\\Gamma \\subseteq V$ the set of all leaves\nof the tree, and for any $u\\in V$, the set\n$P_u\\subseteq V\\setminus \\{r\\}$ denotes\nthe set of all vertices on the unique $u$-$r$ path\nexcept for the root $r$.\n\nThe relaxation for RMFC used in~\\cite{ChalermsookChuzhoy2010}\nis the following:\n\\begin{equation}\\label{eq:lpRMFC}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\min & B & & \\\\\n & x(P_u) &\\geq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &B\\cdot \\ell\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}},\n\\end{array}\\tag{$\\mathrm{LP_{RMFC}}$}\n\\end{equation}\nwhere $x(U):=\\sum_{u\\in U} x(u)$ for any $U\\subseteq V\\setminus \\{r\\}$.\nIndeed, if one enforces $x\\in \\{0,1\\}^{V\\setminus \\{r\\}}$\nand $B\\in \\mathbb{Z}$\nin the above relaxation, an exact description of RMFC is\nobtained where $x$ is the characteristic vector of the\nvertices to be protected and $B$ is the number of\nfirefighters:\nThe constraints $x(P_u)\\geq 1$ for $u\\in \\Gamma$ enforce \nthat for each leaf $u$, a vertex between $u$ and $r$\nwill be protected, which makes sure that $u$ will not\nbe reached by the fire;\nmoreover, the\nconstraints $x(V_{\\leq \\ell})\\leq B\\cdot \\ell$\nfor $\\ell\\in [L]$ describe the vertex sets that can be\nprotected given $B$ firefighters per time step\n(see~\\cite{ChalermsookChuzhoy2010} for more details).\nAlso, as already highlighted in~\\cite{ChalermsookChuzhoy2010},\nthere is an optimal solution to RMFC (and also to the Firefighter\nproblem), that protects with the firefighters available at\ntime step $\\ell$ only vertices in $V_\\ell$.\nHence, the above relaxation\ncan be transformed into one with same optimal objective value\nby replacing the constraints\n$x(V_{\\leq \\ell})\\leq B\\cdot \\ell$ \\;$\\forall\\ell\\in [L]$\nby the constraints\n$x(V_\\ell) \\leq B$ \\;$\\forall \\ell\\in [L]$.\n\n\nThe natural LP relaxation for the 
Firefighter\nproblem, which leads to the previously best\n$(1-1\/e)$-approximation presented in~\\cite{CaiVerbinYang2008},\nis obtained analogously.\nDue to higher generality, and even more importantly\nto obtain more flexibility\nin reductions to be defined later, we work on a slight\ngeneralization of the Firefighter problem on trees,\nextending it in two ways:\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item Weighted version: vertices $u\\in V\\setminus \\{r\\}$ have\nweights $w(u)\\in \\mathbb{Z}_{\\geq 0}$, and the goal\nis to maximize the total weight of vertices not catching\nfire.\nIn the classical Firefighter problem all weights are one.\n\n\\item General budgets\/firefighters:\nWe allow for having a different number of\nfirefighters at each time step, say $B_\\ell \\in \\mathbb{Z}_{>0}$\nfirefighters for time step $\\ell\\in [L]$.\\footnote{Without\nloss of generality we exclude $B_\\ell=0$, since a level\nwith zero budget can be eliminated through a simple\ncontraction operation. For more details we refer to\nthe proof of Theorem~\\ref{thm:compressionFF} which,\nas a sub-step, eliminates zero-budget levels.\n}\n\\end{enumerate}\nIndeed, the above generalizations are mostly for convenience\nof presentation, since general budgets can be reduced to\nunit budgets (see Appendix~\\ref{apx:trans} for a proof):\n\\begin{lemma}\\label{lem:genBudgetsToUnit}\nAny weighted Firefighter problem on trees with $n$ vertices\nand general budgets can be transformed efficiently \ninto an equivalent weighted Firefighter problem with\nunit-budgets and $O(n^2)$ vertices.\n\\end{lemma}\nWe also show in Appendix~\\ref{apx:trans} that\nup to an arbitrarily small error in terms\nof objective, any weighted Firefighter instance can be\nreduced to a unit-weighted one.\nIn what follows, we always assume to deal with\na weighted Firefighter instance if not specified\notherwise. Regarding the budgets, we will be\nexplicit about whether we work with unit or\ngeneral budgets, since some techniques are easier\nto explain in the unit-budget case, even though\nit is equivalent to general budgets by\nLemma~\\ref{lem:genBudgetsToUnit}.\n\n\\smallskip\n\nAn immediate extension of the LP relaxation\nfor the unit-weighted unit-budget Firefighter\nproblem used in~\\cite{CaiVerbinYang2008}---which\nis based on an IP formulation\npresented in~\\cite{macgillivray_2003_firefighter}---leads\nto the\nfollowing LP relaxation \nfor the weighted Firefighter problem\nwith general budgets. \nFor $u\\in V$, we denote by $T_u\\subseteq V$\nthe set of all vertices in the subtree starting\nat $u$ and including $u$, i.e., all vertices $v$\nsuch that the unique $r$-$v$ path in $G$ contains $u$.\n\n\\begin{equation}\\label{eq:lpFF}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\max & \\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u) & & \\\\\n & x(P_u) &\\leq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &\\sum_{i=1}^\\ell B_i\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\[1.5em]\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}.\n\\end{array}\\tag{$\\mathrm{LP_{FF}}$}\n\\end{equation}\nThe constraints\n$x(P_u)\\leq 1$ exclude redundancies, i.e., a vertex\n$u$ is forbidden of being protected if another vertex above\nit, on the $r$-$u$ path, is already protected. 
This\nelimination of redundancies allows for writing the objective\nfunction as shown above.\n\nWe recall that the integrality gap of~\\ref{eq:lpRMFC}\nwas shown to be $\\Theta(\\log^* n)$~\\cite{ChalermsookChuzhoy2010},\nand the integrality gap of~\\ref{eq:lpFF} is\nasymptotically $1-1\/e$\n(when $n\\to \\infty$)~\\cite{chalermsook_2016_new}.\n\n\\smallskip\n\nThroughout the paper, all logarithms are of base $2$ if\nnot indicated otherwise.\nWhen using big-$O$ and related notations\n(like $\\Omega, \\Theta, \\ldots$), we will always\nbe explicit about the dependence on \nsmall error terms $\\epsilon$---as used when talking\nabout $(1-\\epsilon)$-approximations---and not consider\nit to be part of the hidden constant.\nTo make statements where $\\epsilon$ is part of the\nhidden constant, we will use the notation\n$O_{\\epsilon}$ and likewise\n$\\Omega_{\\epsilon}, \\Theta_{\\epsilon},\\ldots$.\n\n\n\n\n\n\n\n\\section{Overview of techniques and algorithms}\n\\label{sec:overview}\n\nIn this section, we present our main technical\ncontributions and outline our algorithms.\nWe start by introducing a compression technique in\nSection~\\ref{subsec:compression} that works\nfor both RMFC and the Firefighter problem and allows\nfor transforming any instance to one on a tree with\nonly logarithmic depth.\nOne key property we achieve with compression,\nis that we can later use (partial)\nenumeration techniques with exponential running\ntime in the depth of the tree. \nHowever, compression on its own already leads to\ninteresting results. In particular, it allows\nus to obtain a QPTAS for\nthe Firefighter problem, and a quasipolynomial time\n$2$-approximation for RMFC.\\footnote{The running time\nof an algorithm is \\emph{quasipolynomial} if it is\nof the form $2^{\\polylog(\\langle \\mathrm{input} \\rangle)}$,\nwhere $\\langle \\mathrm{input} \\rangle$ is the input\nsize of the problem. A QPTAS is an algorithm that,\nfor any constant $\\epsilon >0$, returns\na $(1-\\epsilon)$-approximation in quasipolynomial\ntime.}\nHowever, it seems highly\nnon-trivial to transform these quasipolynomial time\nprocedures to efficient ones. \n\n\n\nTo obtain the claimed results, we develop two \n(partial) enumeration methods to reduce the integrality\ngap of the LP.\nIn Section~\\ref{subsec:overviewFirefighter}, we provide\nan overview of our PTAS for the Firefighter problem,\nand Section~\\ref{subsec:overviewRMFC} presents \nour $O(1)$-approximation for RMFC.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Compression}\\label{subsec:compression}\n\nCompression is a technique that is applicable to both\nthe Firefighter problem and RMFC. It allows for reducing\nthe depth of the input tree at a very small loss in\nthe objective.\nWe start by discussing compression in the context of\nthe Firefighter problem. \n\nTo reduce the depth of the tree, we will\nfirst do a sequence of what we call \\emph{down-pushes}.\nEach down-push acts on two levels $\\ell_1,\\ell_2\\in [L]$\nwith $\\ell_1 < \\ell_2$ of the tree, and moves the budget $B_{\\ell_1}$\nof level $\\ell_1$ down to $\\ell_2$, i.e., the new\nbudget of level $\\ell_2$ will be $B_{\\ell_1}+B_{\\ell_2}$,\nand the new budget of level $\\ell_1$ will be $0$.\nClearly, down-pushes only restrict our options for\nprotecting vertices. 
However, we can show that\none can do a sequence of down-pushes such that\nfirst, the optimal objective value of the new instance\nis very close to the one of the original instance,\nand second,\nonly $O(\\log L)$ levels have non-zero budgets.\nFinally, levels with $0$-budget can easily be removed\nthrough a simple contraction operation, thus leading\nto a new instance with only $O(\\log L)$ depth.\n\nTheorem~\\ref{thm:compressionFF} below\nformalizes our main compression\nresult for the Firefighter problem, which we state for\nunit-budget Firefighter instances for simplicity.\nSince Lemma~\\ref{lem:genBudgetsToUnit}\nimplies that every general-budget\nFirefighter instance with $n$ vertices can\nbe transformed into a unit-budget Firefighter instance\nwith $O(n^2)$ vertices---and thus $O(n^2)$\nlevels---Theorem~\\ref{thm:compressionFF} can also be used to reduce\nany Firefighter instance on $n$ vertices to one\nwith $O(\\frac{\\log n}{\\delta})$\nlevels, by losing a factor of at most $1-\\delta$\nin terms of objective.\n\n\n\\begin{theorem}\\label{thm:compressionFF}\nLet $\\mathcal{I}$ be a unit-budget Firefighter instance \non a tree with depth $L$, and let $\\delta\\in (0,1)$.\nThen one can efficiently construct a general budget\nFirefighter instance $\\overline{\\mathcal{I}}$ with depth\n$L'=O(\\frac{\\log L}{\\delta})$, and such that\nthe following holds, where $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))$\nand $\\val(\\mathsf{OPT}(\\mathcal{I}))$ are the optimal values of\n$\\overline{\\mathcal{I}}$ and $\\mathcal{I}$, respectively.\n\n\\smallskip\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item\n$\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))\n \\geq (1-\\delta) \\val(\\mathsf{OPT}(\\mathcal{I}))$, and\n\n\\item any solution to $\\overline{\\mathcal{I}}$ can be\ntransformed efficiently into a solution of $\\mathcal{I}$\nwith same objective value.\n\\end{enumerate}\n\\end{theorem}\n\n\n\n\nFor RMFC we can use a very similar compression technique\nleading to the following.\n\n\\begin{theorem}\\label{thm:compressionRMFC}\nLet $G=(V,E)$ be a rooted tree of depth $L$.\nThen one can construct efficiently a rooted\ntree $G'=(V',E')$ with $|V'|\\leq |V|$ and\ndepth $L'=O(\\log L)$, such that:\n\n\\smallskip\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item If the RMFC problem on $G$ has a \nsolution with budget $B\\in \\mathbb{Z}_{> 0}$ at\neach level, then the RMFC problem on $G'$\nwith non-uniform budgets, where level $\\ell \\geq 1$\nhas a budget of $B_\\ell=2^{\\ell} \\cdot B$, has a solution.\n\n\\item Any solution to the RMFC problem on $G'$,\nwhere level $\\ell$ has budget $B_\\ell=2^{\\ell} \\cdot B$,\ncan be transformed efficiently into an RMFC solution\nfor $G$ with budget $2B$.\n\\end{enumerate}\n\n\\end{theorem}\n\nInterestingly, the above compression results already\nallow us to obtain strong quasipolynomial approximation\nalgorithms for the Firefighter problem and RMFC,\nusing dynamic programming.\nConsider for example the RMFC problem. We can first guess\nthe optimal budget $B$, which can be done efficiently\nsince $B\\in \\{1,\\dots, n\\}$. 
Consider now the instance\n$G'$ claimed by Theorem~\\ref{thm:compressionRMFC}\nwith budgets $B_\\ell = 2^{\\ell} B$.\nBy Theorem~\\ref{thm:compressionRMFC},\nthis RMFC instance is feasible\nand any solution to it can be converted to one of\nthe original RMFC problem with budget $2B$.\nIt is not hard to see that, for the fixed budgets $B_\\ell$,\none can solve the RMFC problem on $G'$ in quasipolynomial\ntime using a bottom-up dynamic programming approach.\nMore precisely, starting with the leaves and moving\nup to the root, we compute for each vertex $u\\in V$\nthe following table. Consider a subset of the available\nbudgets, which can be represented as a vector\n$q\\in [B_1]\\times \\dots \\times [B_{L'}]$. For each such\nvector $q$ we want to know whether or not using the sub-budget\ndescribed by $q$ allows for disconnecting $u$ from all\nleaves below it.\nSince $L'=O(\\log L)$ and the size of each budget $B_\\ell$\nis at most the number of vertices, the table size is\nquasipolynomial.\nMoreover, one can check that these tables can be\nconstructed bottom-up in quasipolynomial time.\nHence, this approach leads to a quasipolynomial time\n$2$-approximation for RMFC. We recall that there is no\nefficient approximation algorithm with an approximation\nratio strictly below $2$,\nunless $\\textsc{P}=\\textsc{NP}$.\nA similar dynamic programming approach for the Firefighter\nproblem on a compressed instance leads to a\nQPTAS.\n\nHowever, our focus is on efficient algorithms, and it\nseems non-trivial to transform the\nabove quasipolynomial time dynamic programming approaches\ninto efficient procedures. To obtain our results,\nwe therefore combine\nthe above compression techniques with\nfurther approaches to be discussed next.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Overview of PTAS for Firefighter problem}\n\\label{subsec:overviewFirefighter}\n\nDespite the fact that~\\ref{eq:lpFF} has a large integrality\ngap\n---which can be shown to be the case even after\ncompression\\footnote{This follows from the fact\nthat through compression with some\nparameter $\\delta\\in (0,1)$, both the optimal\nvalue and optimal LP value change at most\nby a $\\delta$-fraction.}---%\nit is a crucial tool in our PTAS.\nConsider a general-budget\nFirefighter instance,\nand let $x$ be a vertex solution to~\\ref{eq:lpFF}.\nWe say that a vertex $u\\in V\\setminus \\{r\\}$ is\n\\emph{$x$-loose}, or simply \\emph{loose}, if\n$u\\in \\operatorname{supp}(x):=\\{v\\in V\\setminus \\{r\\} \\mid x(v) > 0\\}$\nand $x(P_u) < 1$.\nAnalogously, we call a vertex $u\\in V\\setminus \\{r\\}$\n\\emph{$x$-tight}, or simply \\emph{tight}, if\n$u\\in \\operatorname{supp}(x)$ and $x(P_u)=1$.\nHence, $\\operatorname{supp}(x)$ can be partitioned into\n$\\operatorname{supp}(x)=V^{\\mathcal{L}} \\cup V^{\\mathcal{T}}$,\nwhere $V^{\\mathcal{L}}$ and $V^{\\mathcal{T}}$\nare the set of all loose and tight vertices, respectively.\nUsing a sparsity argument for vertex solutions\nof~\\ref{eq:lpFF} we can bound the number of\n$x$-loose vertices.\n\n\n\\begin{lemma}\\label{lem:sparsityFF}\nLet $x$ be a vertex solution to~\\ref{eq:lpFF}\nfor a Firefighter problem with general budgets.\nThen the number of $x$-loose vertices is at\nmost $L$, the depth of the tree.\n\\end{lemma}\n\nHaving a vertex solution $x$ to~\\ref{eq:lpFF},\nwe can consider a simplified LP obtained from~\\ref{eq:lpFF}\nby only allowing to protect vertices that are $x$-tight.\nA simple yet useful property of $x$-tight vertices is that\nfor any $u,v\\in V^{\\mathcal{T}}$ with 
$u\\neq v$\nwe have $u\\not\\in P_v$. Indeed, if $u\\in P_v$, then\n$x(P_u) \\leq x(P_v) - x(v) < x(P_v)=1$ because $x(v)>0$.\nHence, no two tight vertices lie on the same leaf-root path.\nThus, when restricting~\\ref{eq:lpFF} to $V^{\\mathcal{T}}$,\nthe path constraints $x(P_u) \\leq 1$ for $u\\in \\Gamma$ transform\ninto trivial constraints requiring $x(v)\\leq 1$ for\n$v\\in V^{\\mathcal{T}}$, and one can easily observe that the\nresulting constraint system is totally unimodular because\nit describes a laminar matroid constraint given by the\nbudget constraints (see~\\cite[Volume B]{Schrijver2003} for more details on\nmatroid optimization).\nRe-optimizing over this LP we get an integral solution\nof objective value at least\n$\\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u)\n - \\sum_{u\\in V^{\\mathcal{L}}} x_u w(T_u)$,\nbecause the restriction of $x$ to $V^{\\mathcal{T}}$\nis still feasible for the new LP.\n\n\nIn particular, if $\\sum_{u\\in V^{\\mathcal{L}}} x_u w(T_u)$\nwere at most $\\epsilon\\cdot\\val(\\mathsf{OPT})$, where\n$\\val(\\mathsf{OPT})$ is the optimal value of the instance,\nthen this would lead to a PTAS.\nClearly, this is not true in general, since it would contradict\nthe $(1-\\frac{1}{e})$-integrality gap of~\\ref{eq:lpFF}.\nIn the following, we will present techniques to limit\nthe loss in terms of LP-value when re-optimizing\nonly over variables corresponding to tight vertices $V^{\\mathcal{T}}$.\n\n\nNotice that when we work with a compressed instance, by\nfirst invoking Theorem~\\ref{thm:compressionFF} with $\\delta=\\epsilon$,\nwe have $|V^{\\mathcal{L}}|=O(\\frac{\\log N}{\\epsilon})$, where $N$ is the number of\nvertices in the original instance. Hence, a PTAS would\nbe achieved if for all $u\\in V^{\\mathcal{L}}$, we had\n$w(T_u) = \\Theta(\\frac{\\epsilon^2}{\\log N})\\cdot \\val(\\mathsf{OPT})$.\nOne way to achieve this in quasipolynomial time is\nto first guess a subset of $\\Theta(\\frac{\\log N}{\\epsilon^2})$ many\nvertices of an optimal solution with highest impact,\ni.e., among all vertices $u\\in \\mathsf{OPT}$ we\nguess those with largest $w(T_u)$.\nThis technique has been used in various other settings\n(see for example~\\cite{ravi_1996_constrained,grandoni_2014_new}\nfor further details)\n and leads to another QPTAS for the Firefighter problem.\nAgain, it is unclear how \nthis QPTAS could be turned into an efficient procedure.\n\n\nThe above discussion motivates investigating vertices\n$u\\in V\\setminus \\{r\\}$ with \n$w(T_u) \\geq \\eta$ for some\n$\\eta = \\Theta(\\frac{\\epsilon^2}{\\log N}) \\val(\\mathsf{OPT})$.\nWe call such vertices \\emph{heavy};\nlater, we will provide an explicit definition of $\\eta$\nthat does not depend on the unknown $\\val(\\mathsf{OPT})$ and\nis explicit about the hidden constant.\nLet $H=\\{u\\in V\\setminus \\{r\\} \\mid w(T_u) \\geq \\eta\\}$\nbe the set of all heavy vertices.\nObserve that $G[H\\cup \\{r\\}]$---i.e., the induced subgraph\nof $G$ over the vertices $H\\cup\\{r\\}$---is a subtree of\n$G$, which we call the \\emph{heavy tree}.\n\n\nRecall that by the above discussion, if we work on a\ncompressed instance with $L=O(\\frac{\\log N}{\\epsilon})$ levels,\nand if an optimal vertex solution to~\\ref{eq:lpFF}\nhas no loose vertices that are heavy, then an integral\nsolution can be obtained of value at \nleast $1-\\epsilon$ times the LP value.\nHence, if we were able to guess the\nheavy vertices contained in an optimal solution,\nthe integrality gap of the reduced problem\nwould be small\nsince no heavy 
vertices are left in the LP,\nand can thus not be loose anymore.\n\nWhereas there are too many options\nto enumerate over all possible subsets\nof heavy vertices that an optimal solution\nmay contain, we will do a coarser\nenumeration.\nMore precisely, we will partition\nthe heavy vertices into $O_{\\epsilon}(\\log N)$\nsubpaths and guess for each subpath whether\nit contains a vertex of $\\mathsf{OPT}$.\nFor this to work out we need\nthat the heavy tree has a very\nsimple topology;\nin particular, it should only have\n$O_{\\epsilon}(\\log N)$ leaves.\nWhereas this does not hold in general,\nwe can enforce it by a further transformation\nmaking sure that $\\mathsf{OPT}$ saves a\nconstant-fraction of $w(V)$ which---as we\nwill observe next---indeed limits the\nnumber of leaves of the heavy tree to $O_{\\epsilon}(\\log N)$.\nFurthermore, this transformation is useful to\ncomplete our definition of heavy vertices\nby explicitly defining the threshold $\\eta$.\n\n\n\\begin{lemma}\\label{lem:pruning}\nLet $\\mathcal{I}$ be a general-budget Firefighter instance\non a tree $G=(V,E)$ with weights $w$.\nThen for any $\\lambda\\in \\mathbb{Z}_{\\geq 1}$,\none can efficiently construct\na new Firefighter instance\n$\\overline{\\mathcal{I}}$ on a subtree $G'=(V',E')$ of $G$\nwith same budgets,\nby starting from $\\mathcal{I}$ and applying\nnode deletions and weight reductions, such that\n\\smallskip\n\\begin{enumerate}[nosep, label=(\\roman*)]\n \\item\\label{item:pruningSmallLoss}\n $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}})) \\geq\n \\left(1 - \\frac{1}{\\lambda}\\right)\n \\val(\\mathsf{OPT}(\\mathcal{I}))$, and\n\n \\item\\label{item:pruningLargeOpt}\n $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}})) \\geq\n \\frac{1}{\\lambda} w'(V')$,\nwhere $w'\\leq w$ are the vertex weights\nin instance $\\overline{\\mathcal{I}}$.\n\\end{enumerate}\n\\smallskip\nThe deletion of $u\\in V$ corresponds to removing the whole\nsubtree below $u$ from $G$, i.e., all vertices in $T_u$.\n\\end{lemma}\n\nSince Lemma~\\ref{lem:pruning} constructs a new instance\nusing only node deletions and weight reductions, any\nsolution to the new instance is also a solution to the\noriginal instance of at least the same objective value.\n\nOur PTAS for the Firefighter problem first applies the\ncompression Theorem~\\ref{thm:compressionFF} with $\\delta=\\epsilon\/3$\nand then Lemma~\\ref{lem:pruning} with\n$\\lambda = \\lceil\\frac{3}{\\epsilon} \\rceil$ to obtain\na general budget Firefighter instance on a tree $G=(V,E)$.\nWe summarize the properties of this new instance $G=(V,E)$\nbelow. 
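\nBoth parameter choices are made so that the losses of the two preprocessing steps combine gracefully. As a quick sanity check for the constants appearing in the properties summarized below (using only $\\epsilon\\in (0,1]$), we have\n\\begin{equation*}\n\\left\\lceil\\frac{3}{\\epsilon}\\right\\rceil^{-1}\n\\geq \\left(\\frac{3}{\\epsilon}+1\\right)^{-1}\n\\geq \\frac{\\epsilon}{4}\n\\qquad\\text{and}\\qquad\n\\left(1-\\frac{\\epsilon}{3}\\right)\\left(1-\\left\\lceil\\frac{3}{\\epsilon}\\right\\rceil^{-1}\\right)\n\\geq \\left(1-\\frac{\\epsilon}{3}\\right)^{2}\n\\geq 1-\\frac{2}{3}\\epsilon,\n\\end{equation*}\nwhere the first estimate is used together with point~\\ref{item:pruningLargeOpt} of Lemma~\\ref{lem:pruning}, and the second one combines the $(1-\\delta)$-loss of the compression (Theorem~\\ref{thm:compressionFF} with $\\delta=\\epsilon\/3$) with the $(1-\\frac{1}{\\lambda})$-loss of point~\\ref{item:pruningSmallLoss} of Lemma~\\ref{lem:pruning}.\n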
As before, to avoid confusion, we denote by $N$\nthe number of vertices of the \noriginal instance.\n\n\n\\begin{property}\\leavevmode\n\\label{prop:preprocessedFF}\n\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item The depth $L$ of $G$ satisfies $L=O(\\frac{\\log N}{\\epsilon})$.\n\n\\item $\\val(\\mathsf{OPT}) \\geq \\lceil\\frac{3}{\\epsilon} \\rceil^{-1} w(V)\n \\geq \\frac{1}{4}\\epsilon w(V)$.\n\n\\item The optimal value $\\val(\\mathsf{OPT})$ of the new instance is\nat least a $(1-\\frac{2}{3}\\epsilon)$-fraction of the\noptimal value of the original instance.\n\n\\item Any solution to the new instance can be transformed\nefficiently into a solution of the original instance\nof at least the same value.\n\\end{enumerate}\n\\end{property}\n\nHence, to obtain a PTAS for the original instance, it\nsuffices to obtain, for any $\\epsilon >0$, a \n$(1-\\frac{\\epsilon}{3})$-approximation for an instance\nsatisfying Property~\\ref{prop:preprocessedFF}.\nIn what follows, we assume to work with an instance\nsatisfying Property~\\ref{prop:preprocessedFF} and show\nthat this is possible.\n\n\nDue to the lower bound on $\\val(\\mathsf{OPT})$ provided\nby Property~\\ref{prop:preprocessedFF}, we now define\nthe threshold\n$\\eta = \\Theta(\\frac{\\epsilon}{\\log N}) \\val(\\mathsf{OPT})$\nin terms of $w(V)$ by\n\\begin{equation*}\n\\eta = \\frac{1}{12} \\frac{\\epsilon^2}{L} w(V),\n\\end{equation*}\nwhich implies that we can afford losing $L$ times a weight\nof $\\eta$, which will sum up to a total loss of at most\n$\\frac{1}{12}\\epsilon^2 w(V) \\leq \\frac{1}{3} \\epsilon \\val(\\mathsf{OPT})$,\nwhere the inequality is due to\nProperty~\\ref{prop:preprocessedFF}.\n\nConsider again the heavy tree $G[H\\cup \\{r\\}]$. Due to\nProperty~\\ref{prop:preprocessedFF} its topology is quite\nsimple. 
More precisely, the heavy tree has\nonly $O(\\frac{\\log N}{\\epsilon^3})$ leaves.\nIndeed, each leaf $u\\in H$ of the heavy tree fulfills\n$w(T_u) \\geq \\eta$, and two different leaves $u_1,u_2\\in H$\nsatisfy $T_{u_1} \\cap T_{u_2} = \\emptyset$; since the\ntotal weight of the tree is $w(V)$, the heavy tree\nhas at most\n$w(V)\/\\eta = 12 L \/ \\epsilon^2 = O(\\frac{\\log N}{\\epsilon^3})$\nmany leaves.\n\nIn the next step, we define a well-chosen\nsmall subset $Q$ of heavy vertices\nwhose removal (together with $r$) from $G$ will\nbreak $G$ into components of weight at most $\\eta$.\nSimultaneously, we choose $Q$ such that removing\nit together with $r$ from the heavy tree breaks\nit into paths, over which we will do an enumeration\nlater.\n\n\n\\begin{lemma}\\label{lem:setQ}\nOne can efficiently determine a set $Q\\subseteq H$\nsatisfying the following.\n\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $|Q|=O(\\frac{\\log N}{\\epsilon^3})$.\n\\item $Q$ contains all leaves and all vertices\nof degree at least $3$ of the heavy tree,\nexcept for the root $r$.\n\\item Removing $Q\\cup\\{r\\}$ from $G$ leads to a\ngraph $G[V\\setminus (Q\\cup \\{r\\})]$ where each\nconnected component has vertices whose weight\nsums up to at most $\\eta$.\n\\end{enumerate}\n\n\\end{lemma}\n\n\nFor each vertex $q\\in Q$, let $H_q\\subseteq H$ be\nall vertices that are visited when traversing the\npath $P_q$ from $q$ to $r$\nuntil (but not including) the next\nvertex in $Q\\cup \\{r\\}$.\nHence, $H_q$ is a subpath of the heavy tree such\nthat $H_q\\cap Q = \\{q\\}$, which we call for brevity\na \\emph{$Q$-path}.\nMoreover the set of all $Q$-paths partitions $H$.\n\n\nWe use an enumeration procedure to determine on which\n$Q$-paths to protect a vertex. Since $Q$-paths are subpaths\nof leaf-root paths, we can assume that at most one vertex\nis protected in each $Q$-path.\nOur algorithm enumerates over all $2^{|Q|}$ possible\nsubsets $Z \\subseteq Q$, where $Z$\nrepresents the $Q$-paths on which we will protect a\nvertex. 
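\nTo make the decomposition concrete, the following Python sketch computes the heavy vertices, a set $Q$ in the spirit of Lemma~\\ref{lem:setQ}, and the corresponding $Q$-paths. It assumes (as a hypothetical interface, not notation from the paper) that the tree is given by parent\/child dictionaries together with a vertex-weight dictionary; moreover, it only collects the leaves and branching vertices of the heavy tree, i.e., it omits the additional vertices that Lemma~\\ref{lem:setQ} adds to $Q$ to enforce the component-weight bound in its third property.\n\\begin{verbatim}\ndef heavy_decomposition(parent, children, weight, root, eta):\n    # Sketch only (not the construction from the proof of Lemma setQ).\n    # All ancestors of a heavy vertex are heavy, so the heavy tree simply\n    # inherits the parent relation of G.\n    subtree_w = {}\n    def dfs(u):  # w(T_u) via a post-order traversal\n        subtree_w[u] = weight.get(u, 0) + sum(dfs(c) for c in children.get(u, []))\n        return subtree_w[u]\n    dfs(root)\n\n    H = {u for u in subtree_w if u != root and subtree_w[u] >= eta}\n    heavy_children = {u: [c for c in children.get(u, []) if c in H] for u in H}\n\n    # leaves and branching vertices of the heavy tree (the root is excluded)\n    Q = {u for u in H if len(heavy_children[u]) != 1}\n\n    def q_path(q):  # walk towards the root until the next vertex of Q or the root\n        path, u = [q], parent[q]\n        while u != root and u not in Q:\n            path.append(u)\n            u = parent[u]\n        return path\n\n    return H, Q, {q: q_path(q) for q in Q}\n\\end{verbatim}\nThe decomposition is computed once; the enumeration over the subsets $Z\\subseteq Q$ then drives the linear programs described next.\n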
Incorporating this guess into~\\ref{eq:lpFF},\nwe get the following linear program~\\ref{eq:lpFFZ}:\n\n\\begin{equation}\\label{eq:lpFFZ}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\max & \\sum_{u\\in V\\setminus \\{r\\}} x_u w(T_u) & & \\\\\n & x(P_u) &\\leq &1 &\\forall u\\in \\Gamma \\\\\n & x(V_{\\leq \\ell}) &\\leq &\\sum_{i=1}^\\ell B_i\\hspace*{2em}\n &\\forall \\ell\\in [L]\\\\\n & x(H_q) &= &1 &\\forall q\\in Z\\\\\n & x(H_q) &= &0 &\\forall q\\in Q\\setminus Z\\\\\n & x &\\in &\\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}.\n\\end{array}\\tag{$\\mathrm{LP_{FF}}(Z)$}\\labeltarget{eq:lpFFZtarget}\n\\end{equation}\n\n\nWe start with a simple observation regarding~\\ref{eq:lpFFZ}.\n\\begin{lemma}\\label{lem:isFaceFF}\nThe polytope over which~\\ref{eq:lpFFZ} optimizes is a face\nof the polytope describing the feasible\nregion of~\\ref{eq:lpFF}.\nConsequently, any vertex solution of~\\ref{eq:lpFFZ} is a\nvertex solution of~\\ref{eq:lpFF}.\n\\end{lemma}\n\\begin{proof}\nThe statement immediately follows by observing that\nfor any $q\\in Q$, the inequalities $x(H_q)\\leq 1$\nand $x(H_q)\\geq 0$ are valid inequalities\nfor~\\ref{eq:lpFF}.\nNotice that $x(H_q)\\leq 1$ is a valid inequality\nfor~\\ref{eq:lpFF} because $H_q$ is a subpath of\na leaf-root path, and the load on any leaf-root\npath is limited to $1$ in~\\ref{eq:lpFF}.\n\\end{proof}\n\nAnalogously to~\\ref{eq:lpFF} we define loose and tight\nvertices for a solution to~\\ref{eq:lpFFZ}.\nA crucial implication of Lemma~\\ref{lem:isFaceFF} is that\nLemma~\\ref{lem:sparsityFF} also applies to any vertex\nsolution of~\\ref{eq:lpFFZ}.\n\nWe will show in the following that for any choice of\n$Z\\subseteq Q$, the integrality gap of~\\ref{eq:lpFFZ}\nis small and we can efficiently obtain an integral solution of\nnearly the same value as the optimal value of~\\ref{eq:lpFFZ}.\nOur PTAS then follows by enumerating all $Z\\subseteq Q$\nand considering the set $Z\\subseteq Q$ of \nall $Q$-paths on which $\\mathsf{OPT}$ protects a vertex.\nThe low integrality gap of~\\ref{eq:lpFFZ} will follow from the fact\nthat we can now limit the impact of loose vertices.\nMore precisely, any loose vertex outside of the heavy\ntree has LP contribution at most $\\eta$ by definition of\nthe heavy tree. Furthermore, for each loose vertex $u$\non the heavy tree, which lies on some $Q$-path $H_q$,\nits load $x(u)$ can be moved to the single tight vertex\non $H_q$. As we will show, such a load redistribution\nwill decrease the LP-value by at most $\\eta$, due to our choice of $Q$.\n\n\nWe are now ready to state our $(1-\\frac{\\epsilon}{3})$-approximation\nfor an instance satisfying Property~\\ref{prop:preprocessedFF},\nwhich, as discussed, implies a PTAS for the Firefighter problem.\nAlgorithm~\\ref{alg:FF} describes our\n$(1-\\frac{\\epsilon}{3})$-approximation.\n\n\n\n\\begin{algorithm}\n\n\\begin{enumerate}[rightmargin=1em]\n\\item Determine heavy vertices $H=\\{u\\in V \\mid w(T_u) \\geq \\eta\\}$,\nwhere $\\eta=\\frac{1}{12} \\frac{\\epsilon^2}{L} w(V)$.\n\n\\item Compute $Q\\subseteq H$ using Lemma~\\ref{lem:setQ}.\n\n\\item For each $Z\\subseteq Q$, obtain an optimal vertex solution\nto~\\ref{eq:lpFFZ}. 
Let $Z^*\\subseteq Q$ be a set for which the\noptimal value\nof~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}\nis largest among\nall subsets of $Q$, and let $x$ be an optimal vertex\nsolution to~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}.\n\n\\item\\label{algitem:reoptFF}\nLet $V^{\\mathcal{T}}$ be the $x$-tight vertices.\nObtain an optimal vertex solution\nto~\\ref{eq:lpFF} restricted to variables\ncorresponding to vertices in $V^{\\mathcal{T}}$.\nThe solution will be a $\\{0,1\\}$-vector, being the\ncharacteristic vector of a\nset $U\\subseteq V^{\\mathcal{T}}$ which we return.\n\\end{enumerate}\n\n\n\\caption{A $(1-\\frac{\\epsilon}{3})$-approximation for\na general-budget Firefighter instance satisfying\nProperty~\\ref{prop:preprocessedFF}.}\n\\label{alg:FF}\n\\end{algorithm}\n\n\nThe following statement completes the proof of\nTheorem~\\ref{thm:PtasFF}.\n\\begin{theorem}\\label{thm:PtasFFProp}\nFor any general-budget Firefighter instance satisfying\nProperty~\\ref{prop:preprocessedFF},\nAlgorithm~\\ref{alg:FF} computes efficiently a feasible\nset of vertices $U\\subseteq V\\setminus \\{r\\}$ to protect\nthat is a $(1-\\frac{\\epsilon}{3})$-approximation. \n\\end{theorem}\n\\begin{proof}\nFirst observe that the linear program solved in\nstep~\\ref{algitem:reoptFF} will indeed lead to\na characteristic vector with only $\\{0,1\\}$-components.\nThis is the case since no two $x$-tight vertices\ncan lie on the same leaf-root path. Hence, as discussed\npreviously, the linear program~\\ref{eq:lpFF} restricted\nto variables corresponding to $V^{\\mathcal{T}}$ is totally\nunimodular; indeed, the leaf-root path constraints $x(P_u)\\leq 1$\nfor $u\\in \\Gamma$\nreduce to $x(v)\\leq 1$ for $v\\in V^{\\mathcal{T}}$, and\nthe remaining LP corresponds to a linear program over a laminar\nmatroid, reflecting the budget constraints.\nMoreover, the set $U$ is clearly budget-feasible since \nthe budget constraints are enforced by~\\ref{eq:lpFF}.\nAlso, Algorithm~\\ref{alg:FF} runs in polynomial time\nbecause $|Q|=O(\\frac{\\log N}{\\epsilon^3})$\nby Lemma~\\ref{lem:setQ} and hence,\nthe number of subsets of $Q$ is bounded by\n$N^{O(\\frac{1}{\\epsilon^3})}$.\n\n\nIt remains to show that $U$ is a\n$(1-\\frac{\\epsilon}{3})$-approximation.\nLet $\\mathsf{OPT}$ be an optimal solution to the considered\nFirefighter instance with value $\\val(\\mathsf{OPT})$.\nObserve first that the value $\\nu^*$\nof~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}\nsatisfies $\\nu^* \\geq \\val(\\mathsf{OPT})$, because\none of the sets $Z\\subseteq Q$ corresponds to\n$\\mathsf{OPT}$, namely $Z=\\{q\\in Q \\mid H_q\\cap \\mathsf{OPT} \\neq \\emptyset\\}$,\nand for this $Z$ the characteristic vector\n$\\chi^{\\mathsf{OPT}}\\in \\{0,1\\}^{V\\setminus \\{r\\}}$\nof $\\mathsf{OPT}$ is feasible\nfor~\\ref{eq:lpFFZ}.\nWe complete the proof of Theorem~\\ref{thm:PtasFFProp}\nby showing that the value $\\val(U)$ of $U$ satisfies\n$\\val(U) \\geq (1-\\frac{\\epsilon}{3}) \\nu^*$.\nFor this we show how to transform an optimal solution\n$x$ of~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}\ninto a solution $y$\nto~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$}\nwith $\\operatorname{supp}(y) \\subseteq V^{\\mathcal{T}}$\nand such that the objective value $\\val(y)$ of $y$ satisfies\n$\\val(y)\\geq (1-\\frac{\\epsilon}{3}) \\nu^*$.\n\nLet $V^{\\mathcal{L}} \\subseteq \\operatorname{supp}(x)$ be the set of\n$x$-loose vertices, and let $H$ be all heavy vertices,\nas usual. 
To obtain $y$, we start with $y=x$\nand first set $y(u)=0$ for each $u\\in V^{\\mathcal{L}}\\setminus H$.\nMoreover, for each $u\\in V^{\\mathcal{L}}\\cap H$\nwe do the following. Being part of the heavy vertices and\nfulfilling $x(u)>0$, the vertex $u$\nlies on some $Q$-path $H_{q_u}$ for some $q_u\\in Z^*$.\nBecause $x(H_{q_u})=1$, there is a tight vertex\n$v\\in H_{q_u}$. We move the $y$-value from vertex\n$u$ to vertex $v$, i.e., $y(v) = y(v)+y(u)$ and\n$y(u)=0$. This finishes the construction of $y$.\nNotice that $y$ is feasible\nfor~\\hyperlink{eq:lpFFZtarget}{$\\mathrm{LP_{FF}(Z^*)}$},\nbecause it was obtained from $x$ by reducing values\nand moving values to lower levels.\n\nTo upper bound the reduction of the LP-value when\ntransforming $x$ into $y$, we show that the modification\ndone for each loose vertex $u\\in V^{\\mathcal{L}}$ decreased\nthe LP-value by at most $\\eta$.\nClearly, for each $u\\in V^{\\mathcal{L}}\\setminus H$,\nsince $u$ is not heavy we have $w(T_u)\\leq \\eta$; thus\nsetting $y(u)=0$ will have an impact of at most $\\eta$\non the LP value.\nSimilarly, for $u\\in V^{\\mathcal{L}}\\cap H$, moving the\n$y$-value of $u$ to the tight vertex $v\\in H_{q_u}$ decreases\nthe LP objective value by\n\\begin{equation*}\ny(u) \\cdot \\left(w(T_u) - w(T_{v})\\right)\n\\leq\nw(T_u) - w(T_v)\n= w(T_u \\setminus T_{v})\n\\leq \\eta,\n\\end{equation*}\nwhere the last inequality follows by observing\nthat the vertices of $T_u \\setminus T_{v}\\subseteq T_u\\setminus T_{q_u}$\nall lie in the\nsame connected component of $G[V\\setminus (Q\\cup \\{r\\})]$,\nand thus have a total weight of at most $\\eta$\nby Lemma~\\ref{lem:setQ}.\n\nHence,\n$\\val(x) - \\val(y) \\leq |V^{\\mathcal{L}}|\n \\cdot \\eta \\leq L\\cdot \\eta$,\nwhere the second inequality holds because\n$|V^{\\mathcal{L}}| \\leq L$\nby Lemma~\\ref{lem:sparsityFF}, which applies here\ndue to Lemma~\\ref{lem:isFaceFF}.\nThis completes the proof, since\n\\begin{align*}\n\\val(y) &= \\val(x) + \\left(\\val(y) - \\val(x)\\right)\n\\geq \\val(\\mathsf{OPT}) + \\val(y) - \\val(x)\n\\geq \\val(\\mathsf{OPT}) - L\\cdot \\eta\\\\\n&= \\val(\\mathsf{OPT}) - \\frac{1}{12}\\epsilon^2 w(V)\n\\geq \\left(1-\\frac{1}{3}\\epsilon\\right) \\val(\\mathsf{OPT}),\n\\end{align*}\nwhere the last inequality\nis due to Property~\\ref{prop:preprocessedFF}.\n\n\\end{proof}\n\n\n\n\n\n\\subsection{Overview of $O(1)$-approximation for RMFC}\n\\label{subsec:overviewRMFC}\n\nOur $O(1)$-approximation for RMFC also uses the natural LP,\ni.e.,~\\ref{eq:lpRMFC}, as a crucial tool to guide the algorithm.\nThroughout this section we will work on a compressed instance\n$G=(V,E)$ of RMFC, obtained through Theorem~\\ref{thm:compressionRMFC}.\nHence, the number of levels is $L=O(\\log N)$, where $N$ is the\nnumber of vertices of the original instance. 
Furthermore, the\nbudget on level $\\ell\\in [L]$ is given by $B_\\ell = 2^{\\ell} B$.\nThe advantage of working with a compressed instance for\nRMFC is twofold.\nFirst, we will again apply sparsity reasonings to limit in certain\nsettings the number of loose (badly structured) vertices by the\nnumber of levels of the instance.\nSecond, the fact that low levels---i.e., levels far away from\nthe root---have high budget, will allow\nus to protect a large number of loose vertices by only\nincreasing $B$ by a constant.\n\n\nFor simplicity, we work with a slight variation \nof~\\ref{eq:lpRMFC}, where we replace, for $\\ell\\in [L]$,\nthe budget constraints\n$x(V_{\\leq \\ell}) \\leq \\sum_{i=1}^{\\ell} B_i$\nby $x(V_\\ell) \\leq B_\\ell$.\nFor brevity, we define\n\\begin{equation*}\nP_B = \\left\\{x\\in \\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}\n \\;\\middle\\vert\\;\n x(V_\\ell) \\leq B\\cdot 2^\\ell \\;\\;\\forall \\ell\\in [L]\n \\right\\}.\n\\end{equation*}\nAs previously mentioned (and shown\nin~\\cite{ChalermsookChuzhoy2010}), the resulting LP\nis equivalent to~\\ref{eq:lpRMFC}.\nFurthermore, since the budget $B$ for a feasible RMFC solution\nhas to be chosen integral, we require $B\\geq 1$.\nHence, the resulting linear relaxation asks to find\nthe minimum $B\\geq 1$ such that \nthe following polytope is non-empty:\n\\begin{equation*}\n\\bar{P}_B = P_B \\cap\n \\left\\{x\\in \\mathbb{R}^{V\\setminus \\{r\\}}_{\\geq 0}\n\\;\\middle\\vert\\;\nx(P_u)\\geq 1 \\;\\;\\forall u\\in \\Gamma\\right\\}.\n\\end{equation*}\n\n\nWe start by discussing approaches to partially round a\nfractional point $x\\in \\bar{P}_B$, for some fixed budget $B\\geq 1$.\nAny leaf $u\\in \\Gamma$ is fractionally cut off from\nthe root through the $x$-values on $P_u$. A crucial property\nwe derive and exploit is that leaves that are \n(fractionally) cut off from $r$ largely on low levels,\ni.e., there is high $x$-value on $P_u$ on vertices\nfar away from the root, can be cut off from the root\nvia a set of vertices to be protected that are budget-feasible\nwhen increasing $B$ only by a constant.\nTo exemplify the above statement, consider the level\n$h=\\lfloor \\log L \\rfloor$ as a threshold to define\ntop levels $V_\\ell$ as those with indices $\\ell\\leq h$\nand bottom levels when $\\ell > h$. For any leaf\n$u \\in \\Gamma$,\nwe partition the path $P_u$ into its top\npart $P_u \\cap V_{\\leq h}$ and its bottom part\n$P_u \\cap V_{> h}$. Consider all leaves that are cut\noff in bottom levels by at least $0.5$ units:\n$W=\\{u\\in \\Gamma \\mid x(P_u\\cap V_{> h}) \\geq 0.5\\}$.\nWe will show that there is a subset of\nvertices $R\\subseteq V_{>h}$ on bottom levels\nto be protected that\nis feasible for budget $\\bar{B}=2B+1 \\leq 3B$ and cuts off\nall leaves in $W$ from the root.\nWe provide a brief sketch why this result holds,\nand present a formal proof later.\nIf we set all entries of $x$ on top levels $V_{\\leq h}$\nto zero, we get a vector $y$ with $\\operatorname{supp}(y) \\subseteq V_{>h}$\nsuch that $y(P_u) \\geq 0.5$ for $u\\in W$. Hence, $2y$ fractionally\ncuts off all vertices in $W$ from the root and is feasible\nfor budget $2B$. 
To increase sparsity, we can replace $2y$ by\na vertex $\\bar{z}$ of the polytope\n\\begin{equation*}\nQ=\\left\\{z\\in \\mathbb{R}_{\\geq 0}^{V\\setminus \\{r\\}}\n \\;\\middle\\vert\\;\n z(V_\\ell) \\leq 2B\\cdot 2^\\ell \\;\\;\\forall \\ell\\in [L],\n z(V_{\\leq h}) = 0, z(P_u)\\geq 1 \\;\\;\\forall u\\in W\\right\\},\n\\end{equation*}\nwhich describes possible ways to cut off $W$ from $r$\nonly using levels $V_{> h}$, and $Q$ is non-empty\nsince $2y\\in Q$.\nUsing a sparsity argument analogous to the\none used for the Firefighter problem, we can show that\n$\\bar{z}$ has no more than $L$ many $\\bar{z}$-loose vertices.\nThus, we can first include all $\\bar{z}$-loose vertices\nin the set $R$ of vertices to be protected by increasing\nthe budget of each level $\\ell > h$ by at most\n$L\\leq 2^{h+1} \\leq 2^\\ell$.\nThe remaining vertices in $\\operatorname{supp}(\\bar{z})$ are well structured\n(no two of them lie on the same leaf-root path), and an\nintegral solution can be obtained easily.\nThe new budget value is $\\bar{B}=2B+1$, where the ``$+1$''\nterm pays for the loose vertices.\n\nThe following theorem formalizes the above reasoning\nand generalizes it in two ways. First, for a leaf $u\\in \\Gamma$\nto be part of $W$, we required it to have a total $x$-value\nof at least $0.5$ within the bottom levels; we will allow\nfor replacing $0.5$ by an arbitrary threshold $\\mu\\in (0,1]$.\nSecond, the level $h$ defining what is top and bottom\ncan be chosen to be of the form $h=\\lfloor \\log^{(q)} L\\rfloor$\nfor $q\\in \\mathbb{Z}_{\\geq 0}$, where\n$\\log^{(q)} L \\coloneqq\n\\log\\log\\dots\\log L$ is the value obtained by\ntaking $q$ many logs of $L$, and\nby convention we set $\\log^{(0)}L \\coloneqq L$.\nThe generalization in terms of $h$ can be thought of as\niterating the above procedure on the RMFC instance\nrestricted to $V_{\\leq h}$.\n\n\n\\begin{theorem}\\label{thm:bottomCover}\nLet $B\\in \\mathbb{R}_{\\geq 1}$, $\\mu \\in (0,1]$,\n$q\\in \\mathbb{Z}_{\\geq 1}$, and\n$h = \\lfloor \\log^{(q)} L\\rfloor$.\nLet $x\\in P_B$ with $\\operatorname{supp}(x)\\subseteq V_{> h}$,\nand we define $W=\\{u\\in \\Gamma \\mid x(P_u) \\geq \\mu\\}$.\nThen one can efficiently compute\na set $R\\subseteq V_{>h}$ such that\n\\smallskip\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $R\\cap P_u \\neq \\emptyset \\quad \\forall u\\in W$, and\n\\item $\\chi^R \\in P_{B'}$, where $B'= \\frac{q}{\\mu}B + 1$\nand $\\chi^R\\in \\{0,1\\}^{V\\setminus \\{r\\}}$ is the\ncharacteristic vector of $R$.\n\\end{enumerate}\n\n\\end{theorem}\n\n\nTheorem~\\ref{thm:bottomCover} has several interesting\nconsequences.\nIt immediately implies an\nLP-based $O(\\log^* N)$-approximation for RMFC, thus\nmatching the currently best approximation result\nby Chalermsook and Chuzhoy~\\cite{ChalermsookChuzhoy2010}:\nIt suffices to start with an optimal LP solution $B\\geq 1$\nand $x\\in \\bar{P}_B$ and invoke the above theorem with\n$\\mu=1$, $q=1+\\log^* L$.\nNotice that by definition of $\\log^*$ we have\n$\\log^* L = \\min \\{\\alpha \\in \\mathbb{Z}_{\\geq 0} \\mid\n\\log^{(\\alpha)} L \\leq 1\\}$; hence\n$h=\\lfloor \\log^{(1+\\log^* L)} L\\rfloor = 0$, implying that\nall levels are bottom levels.\nSince the integrality gap of the LP\nis~$\\Omega(\\log^* N)=\\Omega(\\log^* L)$,\nTheorem~\\ref{thm:bottomCover} captures the limits of what\ncan be achieved by techniques based on the standard LP.\n\nInterestingly, Theorem~\\ref{thm:bottomCover} also implies\nthat the $\\Omega (\\log^* L)$ integrality gap is only\ndue to the top levels 
of the instance. More precisely, if,\nfor any $q=O(1)$ and $h=\\lfloor \\log^{(q)} L \\rfloor$,\none would know what vertices an optimal solution $R^*$ protects\nwithin the levels $V_{\\leq h}$, then a constant-factor\napproximation for RMFC follows easily \nby solving an LP on the\nbottom levels $V_{> h}$ and using Theorem~\\ref{thm:bottomCover}\nwith $\\mu=1$\nto round the obtained solution.\n\n\nAlso, using Theorem~\\ref{thm:bottomCover} it is not hard\nto find constant-factor approximation algorithms for RMFC\nif the optimal budget $B_\\mathsf{OPT}$ is large enough, say\n$B \\geq \\log L$.\\footnote{Actually, the argument we\npresent in the following works for any\n$B = \\log^{(O(1))}L$. However, we later only\nneed it for $B\\geq \\log L$ and thus focus \non this case.}\nThe main idea is to solve the LP and define\n$h=\\lfloor \\log L \\rfloor$. Leaves that are largely\ncut off by $x$ on bottom levels can be handled using\nTheorem~\\ref{thm:bottomCover}. For the remaining leaves,\nwhich are cut off mostly on top levels, we can resolve an\nLP only on the top levels $V_{\\leq h}$ to cut them off.\nThis LP solution is sparse and contains at most $h\\leq B$\nloose nodes. Hence, all loose vertices can be selected\nby increasing the budget by at most $h\\leq B$, leading\nto a well-structured residual problem for which one can\neasily find an integral solution.\nThe following theorem summarizes this discussion.\nA formal proof for Theorem~\\ref{thm:bigBIsGood}\ncan be found in Section~\\ref{sec:proofsRMFC}. \n\n\\begin{theorem}\\label{thm:bigBIsGood}\nThere is an efficient algorithm that computes a\nfeasible solution to a (compressed) instance of\nRMFC with budget $B\\leq 3 \\cdot \\max\\{\\log L, B_{\\mathsf{OPT}}\\}$.\n\\end{theorem}\n\n\n\n\n\\medskip\n\nIn what follows, we therefore assume $B_\\mathsf{OPT} < \\log L$\nand present an efficient way to partially\nenumerate vertices to be protected on top levels, \nleading to the claimed $O(1)$-approximation.\n\n\n\\subsubsection*{Partial enumeration algorithm}\n\nThroughout our algorithm, we set \n\\begin{equation*}\nh=\\lfloor\\log^{(2)} L\\rfloor\n\\end{equation*}\nto be the threshold level defining top vertices $V_{\\leq h}$\nand bottom vertices $V_{> h}$.\nWithin our enumeration procedure we will solve LPs\nwhere we explicitly include some vertex set\n$A\\subseteq V_{\\leq h}$ to be part of the protected\nvertices, and also exclude some set $D\\subseteq V_{\\leq h}$\nfrom being protected. Our enumeration works by growing\nthe sets $A$ and $D$ throughout the algorithm.\nWe thus define the following LP for two disjoint\nsets $A,D \\subseteq V_{\\leq h}$:\n\\begin{equation}\\label{eq:lpRMFCAD}\n\\begin{array}{*2{>{\\displaystyle}r}c*2{>{\\displaystyle}l}}\n\\min & B & & & \\\\\n & x &\\in & \\bar{P}_B & \\\\\n & B &\\geq & 1 & \\\\\n & x(u) &= & 1 & \\quad \\forall u\\in A\\\\\n & x(u) &= & 0 & \\quad \\forall u\\in D\\enspace .\\\\\n\\end{array}\\tag{$\\mathrm{LP(A,D)}$}\\labeltarget{eq:lpRMFCADtarget}\n\\end{equation}\nNotice that~\\ref{eq:lpRMFCAD} is indeed an LP even though the\ndefinition of $\\bar{P}_B$ depends on $B$ (but it does so linearly).\n\nThroughout our enumeration procedure, the disjoint\nsets $A, D \\subseteq V_{\\leq h}$ that we consider are\nalways such that for any $u\\in A\\cup D$, we have\n$P_u\\setminus\\{u\\} \\subseteq D$. In other words, the vertices\n$A\\cup D \\cup \\{r\\}$ form the vertex set of a subtree\nof $G$ such that no root-leaf path contains two vertices\nin $A$. 
We call a disjoint pair of\nsets $A,D\\subseteq V_{\\leq h}$ with this property\na \\emph{clean pair}.\n\n\n\nBefore formally stating our enumeration procedure,\nwe briefly discuss the main idea behind it.\nLet $\\mathsf{OPT}\\subseteq V\\setminus \\{r\\}$ be an optimal solution\nto our (compressed) RMFC instance corresponding to some\nbudget $B_{\\mathsf{OPT}} \\in \\mathbb{Z}_{\\geq 1}$. We assume without loss\nof generality that $\\mathsf{OPT}$ does not contain redundancies, i.e.,\nthere is precisely one vertex of $\\mathsf{OPT}$ on each leaf-root\npath.\nAssume that we already guessed some clean pair\n$A,D \\subseteq V_{\\leq h}$ of vertex sets to be\nprotected and not to be protected, respectively,\nand that this guess is compatible with $\\mathsf{OPT}$, i.e.,\n$A\\subseteq \\mathsf{OPT}$ and $D\\cap \\mathsf{OPT}=\\emptyset$.\nLet $(x,B)$ be an optimal solution to~\\ref{eq:lpRMFCAD}.\nBecause we assume that the sets $A$ and $D$ are compatible with\n$\\mathsf{OPT}$, we have $B\\leq B_{\\mathsf{OPT}}$ because\n$(B_\\mathsf{OPT}, \\chi^\\mathsf{OPT})$ is feasible for \\ref{eq:lpRMFCAD}. We define\n\\begin{equation*}\nW_x = \\left\\{u\\in \\Gamma \\;\\middle\\vert\\;\n x(P_u \\cap V_{> h}) \\geq \\frac{2}{3}\\right\\}\n\\end{equation*}\nto be the set of leaves cut off from the root\nby an $x$-load of at least $\\mu=\\frac{2}{3}$\nwithin bottom levels.\nFor each $u\\in \\Gamma\\setminus W_x$,\nlet $f_u\\in V_{\\leq h}$ be the vertex closest\nto the root among all vertices in\n$(P_u \\cap V_{\\leq h}) \\setminus D$, and we define\n\\begin{equation}\\label{eq:defFx}\nF_x = \\{f_u \\mid u\\in \\Gamma\\setminus W_x\\} \\setminus A.\n\\end{equation}\nNotice that by definition, no two vertices of $F_x$ lie on\nthe same leaf-root path.\nFurthermore, every leaf $u\\in \\Gamma\\setminus W_x$\nis part of the subtree\n$T_f$ for precisely one $f\\in F_x$.\nThe main motivation for considering $F_x$ is that to guess\nvertices in top levels, we can show that it suffices\nto focus on vertices\nlying below some vertex in $F_x$, i.e., vertices\nin the set $Q_x = V_{\\leq h} \\cap (\\cup_{f\\in F_x} T_{f})$.\nTo exemplify this, we first consider the special case\n$\\mathsf{OPT}\\cap Q_x = \\emptyset$, which will also play\na central role later in the analysis of our algorithm.\nWe show that for this case we can get an\n$O(1)$-approximation to RMFC, even though we may only\nhave guessed a proper subset $A\\subsetneq \\mathsf{OPT}\\cap V_{\\leq h}$\nof the $\\mathsf{OPT}$-vertices within the top levels.\n\n\n\\begin{lemma}\\label{lem:goodEnum}\nLet $(A, D)$ be a clean pair of\nvertices that is compatible with $\\mathsf{OPT}$, i.e.,\n$A\\subseteq \\mathsf{OPT}, D\\cap \\mathsf{OPT} = \\emptyset$,\nand let $x$ be an optimal solution\nto~\\ref{eq:lpRMFCAD}.\nMoreover, let $(y,\\bar{B})$ be an optimal solution to\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,V_{\\leq h} \\setminus A)}$}.\nThen, if $\\mathsf{OPT}\\cap Q_x=\\emptyset$, we have\n$\\bar{B}\\leq \\frac{5}{2} B_{\\mathsf{OPT}}$.\n\nFurthermore, if $\\mathsf{OPT}\\cap Q_x = \\emptyset$,\nby applying Theorem~\\ref{thm:bottomCover}\nto $y\\wedge \\chi^{V_{> h}}$ with $\\mu=1$ and $q=2$, a set \n$R\\subseteq V_{> h}$ is obtained such that\n$R\\cup A$ is a feasible solution to RMFC with respect\nto the budget $6 \\cdot B_{\\mathsf{OPT}}$.\\footnote{For two vectors\n$a,b\\in \\mathbb{R}^n$ we denote by $a\\wedge b\\in \\mathbb{R}^n$\nthe component-wise minimum of $a$ and $b$.}\n\\end{lemma}\n\\begin{proof}\nNotice that $\\mathsf{OPT}\\cap 
Q_x=\\emptyset$\nimplies that for each $u\\in \\Gamma \\setminus W_x$,\nwe either have $A\\cap P_u \\neq \\emptyset$ and thus\na vertex of $A$ cuts $u$ off from the root, or\nthe set $\\mathsf{OPT}$ contains a vertex on $P_u \\cap V_{>h}$.\nIndeed, consider a leaf $u\\in \\Gamma \\setminus W_x$\nsuch that $A\\cap P_u = \\emptyset$.\nThen\n$\\mathsf{OPT}\\cap Q_x = \\emptyset$ implies that no vertex\nof $T_{f_u}\\cap V_{\\leq h}$ is part of $\\mathsf{OPT}$.\nFurthermore, $P_{f_u}\\setminus T_{f_u} \\subseteq D$\nbecause $(A,D)$ is a clean pair and $f_u$ is the\ntopmost vertex on $P_u$ that is not in $D$.\nTherefore, $\\mathsf{OPT} \\cap P_u \\cap V_{\\leq h} = \\emptyset$,\nand since $\\mathsf{OPT}$ must contain a vertex in $P_u$, we must\nhave $\\mathsf{OPT}\\cap P_u \\cap V_{>h}\\neq \\emptyset$.\n\nHowever, this observation implies\nthat $z=\\frac{3}{2}(x\\wedge \\chi^{V_{>h}})\n+(\\chi^{\\mathsf{OPT}} \\wedge \\chi^{V_{>h}})+\\chi^A$\nsatisfies\n$z(P_u) \\geq 1$ for all $u\\in \\Gamma$.\nMoreover we have $z\\in P_{\\frac{3}{2}B+B_{\\mathsf{OPT}}}$\ndue to the following.\nFirst, $x\\wedge \\chi^{V_{>h}} \\in P_B$ and\n$\\chi^{\\mathsf{OPT}} \\in P_{B_{\\mathsf{OPT}}}$, which implies\n$z-\\chi^A\\in P_{\\frac{3}{2}B+B_{\\mathsf{OPT}}}$.\nFurthermore, $\\chi^A\\in P_B$, and the vertices in\n$A$ are all on levels $V_{\\leq h}$ which are disjoint\nfrom the levels on which vertices in \n$\\operatorname{supp}(z-\\chi^A)\\subseteq V_{>h}$ lie,\nand thus do not compete\nfor the same budget.\nHence, $(z,\\frac{3}{2}B+B_{\\mathsf{OPT}})$ is feasible for\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,V_{\\leq h} \\setminus A)}$},\nand thus\n$\\bar{B} \\leq \\frac{3}{2}B + B_{\\mathsf{OPT}} \\leq \\frac{5}{2} B_{\\mathsf{OPT}}$,\nas claimed.\n\nThe second part of the lemma follows in a straightforward\nway from Theorem~\\ref{thm:bottomCover}.\nObserve first that each leaf $u\\in \\Gamma$ is either\nfully cut off from the root by $y$ on only top levels\nor only bottom levels because $y$ is a $\\{0,1\\}$-solution\non the top levels $V_{\\leq h}$, since on top levels it was\nfixed to $\\chi^A$ because it is a solution to\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,V_{\\leq h} \\setminus A)}$}.\nReusing the notation in Theorem~\\ref{thm:bottomCover},\nlet $W=\\{u\\in \\Gamma \\mid (y\\wedge \\chi^{V_{> h}})(P_u) \\geq 1\\}$\nbe all leaves cut off from the root by $y\\wedge \\chi^{V_{>h}}$.\nBy the above discussion, every leaf is thus either part of $W$\nor it is cut off from the root by vertices in $A$. \nTheorem~\\ref{thm:bottomCover} guarantees that $R\\subseteq V_{>h}$\ncuts off all leaves in $W$ from the root, and hence, $R\\cup A$\nindeed cuts off all leaves from the root.\nMoreover, by Theorem~\\ref{thm:bottomCover}, the set\n$R\\subseteq V_{> h}$ is feasible with respect to the\nbudget $5B_{\\mathsf{OPT}} +1 \\leq 6 B_{\\mathsf{OPT}}$.\nFurthermore, $A$ is feasible for budget $B_{\\mathsf{OPT}}$ because\nit is a subset of $\\mathsf{OPT}$. 
Since $A\\subseteq V_{\\leq h}$\nand $R\\subseteq V_{> h}$ are on disjoint levels, the\nset $R\\cup A$ is feasible for the budget $6 B_{\\mathsf{OPT}}$.\n\\end{proof}\n\n\nOur final algorithm is based on a recursive enumeration\nprocedure that computes a polynomial\ncollection of clean pairs $(A,D)$\nsuch that there is one pair $(A,D)$ in the collection\nwith a corresponding LP solution $x$ of \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,D)}$}\nsatisfying that the triple $(A,D,x)$ fulfills the conditions of\nLemma~\\ref{lem:goodEnum}, and thus leading to a\nconstant-factor approximation.\nOur enumeration algorithm\n\\hyperlink{alg:enumRMFCtarget}{$\\mathrm{Enum}(A,D,\\gamma)$}\nis described below.\nIt contains a parameter $\\gamma\\in \\mathbb{Z}_{\\geq 0}$\nthat bounds the recursion depth of the enumerations.\n\n\\smallskip\n\n{\n\\renewcommand{\\thealgocf}{}\n\\begin{algorithm}[H]\n\\SetAlgorithmName{$\\bm{\\mathrm{Enum}(A,D,\\gamma)}$\n\\labeltarget{alg:enumRMFCtarget}\n}{}\n\n\\begin{enumerate}[rightmargin=1em]\n\\item Compute optimal solution $(x,B)$ to \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,D)}$}.\n\n\\item\\label{item:stopWhenBLarge}\n\\textbf{If} $B > \\log L$\\textbf{:} \\textbf{stop}.\nOtherwise, continue with step~\\ref{item:addTriple}.\n\n\\item\\label{item:addTriple}\nAdd $(A,D,x)$ to the family of triples to be considered.\n\n\\item\\label{item:enumRecCall} \\tm{if}{\\textbf{I}}%\n\\textbf{f} $\\gamma\\neq 0$ \\textbf{:}\n\\hfill \\texttt{\/\/recursion depth not yet reached \\quad}\n\n\\quad \\tm{for}{\\textbf{F}}\\textbf{or $u\\in F_x$:}\n\\hfill \\texttt{\/\/$F_x$ is defined as in~\\eqref{eq:defFx} \\quad}\n\n\\quad\\quad Recursive call to $\\mathrm{Enum}(A\\cup\\{u\\},D,\\gamma-1)$.\\\\\n\\quad\\quad \\tm[overlay]{end}{}Recursive call\nto $\\mathrm{Enum}(A,D\\cup \\{u\\},\\gamma-1)$.\n\n\\begin{tikzpicture}[overlay, remember picture]\n\\draw (if) ++ (0,-0.5em) |- ($(if |- end) + (0.2,-0.2)$);\n\\draw (for) ++ (0,-0.5em) |- ($(for |- end) + (0.2,-0.1)$);\n\\end{tikzpicture}\n\n\\vspace{-1.5em}\n\n\\end{enumerate}\n\n\n\\caption{Enumerating triples $(A,D,x)$ to find one \nsatisfying the conditions of Lemma~\\ref{lem:goodEnum}.\n}\n\\label{alg:enumRMFC}\n\n\\end{algorithm}\n\\addtocounter{algocf}{-1}\n}%\n\n\\smallskip\n\n\n\nNotice that for any clean pair $(A,D)$ and $u\\in F_x$,\nthe two pairs $(A\\cup \\{u\\}, D)$ and $(A, D\\cup \\{u\\})$\nare clean, too. Hence, if we start \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D,\\gamma)$}\nwith a clean pair $(A,D)$, we will encounter\nonly clean pairs during all recursive calls.\n\nThe key property of the above enumeration procedure\nis that only a small recursion\ndepth $\\gamma$ is needed for the enumeration algorithm \nto explore a good triple $(A,D,x)$, which satisfies\nthe conditions of Lemma~\\ref{lem:goodEnum}, if we\nstart with the trivial clean pair $(\\emptyset, \\emptyset)$.\nFurthermore, due to step~\\ref{item:stopWhenBLarge},\nwe always have $B\\leq \\log L$ whenever the\nalgorithmm is in step~\\ref{item:enumRecCall}. 
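\nFor concreteness, the recursion can be summarized by the following Python sketch; the two helper routines it takes as arguments are hypothetical placeholders for an LP solver for\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A,D)}$}\nand for a routine computing $F_x$ as defined in~\\eqref{eq:defFx}.\n\\begin{verbatim}\nimport math\n\ndef enum(A, D, gamma, L, solve_lp, compute_F, triples):\n    # Sketch of Enum(A, D, gamma).\n    x, B = solve_lp(A, D)          # step 1: optimal solution to LP(A, D)\n    if B > math.log2(L):           # step 2: stop if the LP bound is too large\n        return\n    triples.append((A, D, x))      # step 3: record the triple (A, D, x)\n    if gamma != 0:                 # step 4: branch on every vertex u in F_x\n        for u in compute_F(x, A, D):\n            enum(A | {u}, D, gamma - 1, L, solve_lp, compute_F, triples)\n            enum(A, D | {u}, gamma - 1, L, solve_lp, compute_F, triples)\n\n# The recursion is started with empty sets A and D and with recursion\n# depth gamma = 2 * (log L)^2 * loglog L, as in the lemma that follows.\n\\end{verbatim}\nIn particular, the test in step~\\ref{item:stopWhenBLarge} is what maintains the bound $B\\leq \\log L$ at every node of the recursion tree.\n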
As we will see\nlater, this allows us to prove that $|F_x|$ is small, which\nwill limit the width of our recursive calls, and leads to\nan efficient procedure as highlighted in the following Lemma.\n\n\n\\begin{lemma}\\label{lem:enumWorks}\nLet $\\bar{\\gamma}= 2(\\log L)^2 \\log^{(2)} L$.\nThe enumeration procedure \\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nruns in polynomial time.\nFurthermore, if $B_\\mathsf{OPT} \\leq \\log L$, then\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} will\nencounter a triple $(A,D,x)$ satisfying\nthe conditions of Lemma~\\ref{lem:goodEnum}, i.e.,\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item $(A,D)$ is a clean pair,\n\\item $A\\subseteq \\mathsf{OPT}$,\n\\item $D\\cap \\mathsf{OPT} = \\emptyset$, and\n\\item $\\mathsf{OPT}\\cap Q_x = \\emptyset$.\n\\end{enumerate}\n\\end{lemma}\n\n\nHence, combining Lemma~\\ref{lem:enumWorks} and\nLemma~\\ref{lem:goodEnum} completes our enumeration procedure\nand implies the following result.\n\n\\begin{corollary}\\label{cor:summaryEnum}\nLet $\\mathcal{I}$ be an RMFC instance on $L$ levels\non a graph $G=(V,E)$ with budgets $B_\\ell = 2^\\ell \\cdot B$.\nThen there is a procedure with running time polynomial\nin $2^L$, returning\na solution $(Q,B)$ for $\\mathcal{I}$, where\n$Q\\subseteq V\\setminus \\{r\\}$ is a set of vertices\nto protect that is feasible for budget $B$,\nsatisfying the following:\nIf the optimal budget $B_{\\mathsf{OPT}}$ for $\\mathcal{I}$ satisfies\n$B_{\\mathsf{OPT}} \\leq \\log L$, then $B\\leq 6 B_\\mathsf{OPT}$.\n\\end{corollary}\n\\begin{proof}\nIt suffices to run \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} to\nfirst efficiently obtain a family of triples\n$(A_i,D_i,x_i)_i$, where $(A_i, D_i)$ is a clean pair,\nand $x_i$ is an optimal solution to\n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A_i,D_i)}$}.\nBy Lemma~\\ref{lem:enumWorks}, one of these triples\nsatisfies the conditions of Lemma~\\ref{lem:goodEnum}.\n(Notice that these conditions cannot be checked since\nit would require knowledge of $\\mathsf{OPT}$.)\nFor each triple $(A_i,D_i,x_i)$ we obtain a corresponding\nsolution for $\\mathcal{I}$ following the construction\ndescribed in Lemma~\\ref{lem:goodEnum}. 
More precisely,\nwe first compute an optimal solution $(y_i,\\bar{B}_i)$ to \n\\hyperlink{eq:lpRMFCADtarget}{$\\mathrm{LP(A_i,V_{\\leq h} \\setminus A_i)}$}.\nThen, by applying Theorem~\\ref{thm:bottomCover} to\n$y_i\\wedge \\chi^{V_{> h}}$ with $\\mu=1$ and $q=2$,\na set of vertices\n$R_i\\subseteq V_{> h}$ is obtained such that\n$R_i\\cup A_i$ is feasible for $\\mathcal{I}$ for some\nbudget $B_i$.\nAmong all such sets $R_i\\cup A_i$, we return the one\nwith minimum $B_i$.\nBecause Lemma~\\ref{lem:enumWorks} guarantees that\none of the triples $(A_i, D_i, x_i)$ satisfies the\nconditions of Lemma~\\ref{lem:goodEnum}, we have by\nLemma~\\ref{lem:goodEnum} that the best protection\nset $Q=R_j\\cup A_j$ among all $R_i\\cup A_i$ has a\nbudget $B_j$ satisfying $B_j \\leq 6 B_{\\mathsf{OPT}}$.\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection*{Summary of our $O(1)$-approximation for RMFC}\n\nStarting with an RMFC instance $\\mathcal{I}^{\\mathrm{orig}}$\non a tree with $N$ vertices, we\nfirst apply our compression result, Theorem~\\ref{thm:compressionRMFC},\nto obtain an RMFC instance $\\mathcal{I}$ on a graph $G=(V,E)$ with depth\n$L=O(\\log N)$, and non-uniform budgets $B_\\ell = 2^\\ell B$\nfor $\\ell\\in [L]$.\nLet $B_{\\mathsf{OPT}}\\in \\mathbb{Z}_{\\geq 1}$ be the optimal\nbudget value for $B$ for instance $\\mathcal{I}$%\n---recall that $B=B_{\\mathsf{OPT}}$ in instance $\\mathcal{I}$\nimplies that level $\\ell\\in [L]$ has budget $2^{\\ell} \\cdot B_{\\mathsf{OPT}}$---%\nand let $B_{\\mathsf{OPT}}^{\\mathrm{orig}}$\nbe the optimal budget for $\\mathcal{I}^{\\mathrm{orig}}$.\nBy Theorem~\\ref{thm:compressionRMFC}, we have\n$B_{\\mathsf{OPT}} \\leq B_{\\mathsf{OPT}}^{\\mathrm{orig}}$, and any solution\nto $\\mathcal{I}$ using budget $B$ can efficiently be transformed\ninto one of $\\mathcal{I}^{\\mathrm{orig}}$ of budget\n$2B$.\n\nWe now invoke\nTheorem~\\ref{thm:bigBIsGood} and Corollary~\\ref{cor:summaryEnum}.\nBoth guarantee that a solution to $\\mathcal{I}$ with certain properties\ncan be computed efficiently.\nAmong the two solutions derived from Theorem~\\ref{thm:bigBIsGood}\nand Corollary~\\ref{cor:summaryEnum}, we consider the one\n$(Q,B)$ with lower budget $B$, where $Q\\subseteq V\\setminus \\{r\\}$\nis a set of vertices to protect, feasible for budget\n$B$.\nIf $B\\geq \\log L$, then Theorem~\\ref{thm:bigBIsGood} implies\n$B\\leq 3 B_{\\mathsf{OPT}}$, otherwise Corollary~\\ref{cor:summaryEnum}\nimplies $B\\leq 6 B_{\\mathsf{OPT}}$. Hence, in any case we have\na $6$-approximation for $\\mathcal{I}$. As mentioned before,\nTheorem~\\ref{thm:compressionRMFC} implies that the solution\n$Q$ can efficiently be transformed into a solution for the\noriginal instance $\\mathcal{I}^{\\mathrm{orig}}$ that is\nfeasible with respect to the budget\n$2 B \\leq 12 B_{\\mathsf{OPT}} \\leq 12 B^{\\mathrm{orig}}_{\\mathsf{OPT}}$,\nthus implying Theorem~\\ref{thm:O1RMFC}.\n\n\n\n\\section{Details on compression results}\\label{sec:proofsCompression}\n\nIn this section, we present the proofs for our compression results,\nTheorem~\\ref{thm:compressionFF} and Theorem~\\ref{thm:compressionRMFC}.\nWe start by proving Theorem~\\ref{thm:compressionFF}. The same ideas are used\nwith a slight adaptation in the proof of Theorem~\\ref{thm:compressionRMFC}. 
\n\nWe call an instance $\\overline{\\mathcal{I}}$ obtained from \nan instance $\\mathcal{I}$ by a sequence of down-push operations\na \\emph{push-down of} $\\mathcal{I}$.\nWe prove Theorem~\\ref{thm:compressionFF} by proving\nthe following result, of which Theorem~\\ref{thm:compressionFF}\nis an immediate consequence, as we will soon show.\n\n\\begin{theorem}\\label{thm:compressionDownPush}\nLet $\\mathcal{I}$ be a unit-budget Firefighter instance\nwith depth $L$, and let $\\delta\\in (0,1)$.\nThen one can efficiently construct a push-down\n$\\overline{\\mathcal{I}}$\nof $\\mathcal{I}$ such that\n\\smallskip\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item\\label{item:closeToOPT}\n $\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$, and\n\n\\item\n$\\overline{\\mathcal{I}}$ has nonzero budget\non only $O(\\frac{\\log L}{\\delta})$ levels.\n\\end{enumerate}\n\\end{theorem}\n\nBefore we prove Theorem~\\ref{thm:compressionDownPush}, we show\nhow it implies \nTheorem~\\ref{thm:compressionFF}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:compressionFF}]\nWe start by showing \nhow levels of zero budget can be removed \nthrough the following \\emph{contraction operation}. \nLet $\\ell \\in \\{2,\\dots, L\\}$ be a level whose budget\nis zero. For each vertex\n$u \\in V_{\\ell-1}$ we contract all edges from $u$\nto its children and increase the\nweight $w(u)$ of $u$ by the sum of the weights\nof all of its children.\nFormally, if $u$ has children $v_1, \\dots, v_k\\in V_\\ell$,\nthe vertices $u,v_1, \\dots, u_k$ are replaced by a single\nvertex $z$ with weight $w(z) = w(u) + \\sum_{i=1}^k w(v_i)$,\nand $z$ is adjacent to the parent of $u$ and to all children\nof $v_1,\\dots, v_k$.\nOne can easily observe that this is an ``exact''\ntransformation in the sense that any solution before\nthe contraction remains one after contraction\nand vice versa (when identifying the vertex $z$\nin the contracted version with $v$);\nmoreover, solutions before and\nafter contraction have the same value.\n\nNow, by first applying Theorem~\\ref{thm:compressionDownPush}\nand then applying the latter contraction operations level by\nlevel to all levels\n$\\ell\\in \\{2,\\dots, L\\}$\nwith zero budget (in an arbitrary order),\nwe obtain an equivalent instance with the desired \ndepth, thus satisfying the conditions of\nTheorem~\\ref{thm:compressionFF}.\n\\end{proof}\n\n\nIt remains to prove Theorem~\\ref{thm:compressionDownPush}.\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:compressionDownPush}]\nConsider a unit-budget Firefighter instance on a tree\n$G=(V,E)$ with depth $L$.\nThe push-down $\\overline{\\mathcal{I}}$ that we construct\nwill have nonzero budgets precisely on the following\nlevels $\\mathcal{L} \\subseteq [L]$:\n\\begin{equation*}\n\\mathcal{L} = \\left\\{\\left\\lceil(1+\\delta)^j\\right\\rceil\n \\;\\middle\\vert\\; j\\in \\left\\{0,\\dots,\n \\left\\lfloor\\frac{\\log L}{\\log(1+\\delta)}\n \\right\\rfloor\\right\\}\\right\\}\n \\cup \\{L\\}.\n\\end{equation*}\nFor simplicity, let $\\mathcal{L}= \\{\\ell_1,\\dots, \\ell_k\\}$\nwith $1=\\ell_1 < \\ell_2 < \\dots < \\ell_k=L$.\nHence,\n$k=O(\\frac{\\log L}{\\log(1+\\delta)})\n = O(\\frac{\\log L}{\\delta})$. The push-down\n$\\overline{\\mathcal{I}}$ is obtained by pushing\nany budget on a level not in $\\mathcal{L}$ down\nto the next level in $\\mathcal{L}$. 
Formally,\nfor $i\\in [k]$, the budget $B_{\\ell_i}$\nat level $\\ell_i$ is given by\n$B_{\\ell_i} = \\ell_i - \\ell_{i-1}$, where\nwe set $\\ell_{0}=0$.\nMoreover, $B_\\ell=0$ for\n$\\ell\\in [L]\\setminus \\mathcal{L}$.\nClearly, the instance $\\overline{\\mathcal{I}}$ can be\nconstructed efficiently. Furthermore, the number\nof levels with nonzero budget is equal to\n$k=O(\\frac{\\log L}{\\delta})$ as desired. It remains\nto show point~\\ref{item:closeToOPT}\nof Theorem~\\ref{thm:compressionDownPush}.\n\nTo show~\\ref{item:closeToOPT}, consider an optimal\nredundancy-free solution $S^*\\subseteq V$ of $\\mathcal{I}$; hence,\n$\\val(\\mathsf{OPT}(\\mathcal{I})) = \\sum_{u\\in S^*} w(T_u)$ and\nno two vertices of $S^*$ lie on the same leaf-root path.\nWe will show that there is a feasible solution\n$\\overline{S}$ to $\\overline{\\mathcal{I}}$ such that\n$\\overline{S}\\subseteq S^*$ and the value of\n$\\overline{S}$ is at least $(1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$.\nNotice that since $S^*$ is redundancy-free, any subset\nof $S^*$ is also redundancy-free. Hence, the value of\nthe set $\\overline{S}$ to construct will be equal\nto $\\sum_{u\\in \\overline{S}} w(T_u)$.\nThe set $S^*$ being (budget-)feasible for $\\mathcal{I}$\nimplies \n\\begin{equation}\\label{eq:SStarFeasible}\n|S^*\\cap V_{\\leq \\ell}| \\leq \\ell\n \\quad \\forall \\ell\\in [L].\n\\end{equation}\nAnalogously, a set $S\\subseteq V$ is feasible for\n$\\overline{\\mathcal{I}}$ if and only if\n\\begin{equation}\\label{eq:SFeasibleFull}\n|S\\cap V_{\\leq \\ell}| \\leq \\sum_{i=1}^\\ell B_i\n \\quad \\forall \\ell\\in [L].\n\\end{equation}\nHence, we want to show that there is a set $\\overline{S}$\nsatisfying the above system and such that\n$\\sum_{u\\in \\overline{S}}w(T_u)\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$.\nNotice that in~\\eqref{eq:SFeasibleFull}, the constraint\nfor any $\\ell\\in [L-1]$ such that $B_{l+1}=0$ is\nredundant due to the constraint for level $\\ell+1$\nwhich has the same right-hand side but a larger\nleft-hand side.\nThus, system~\\eqref{eq:SFeasibleFull} is equivalent\nto the following system\n\\begin{equation}\\label{eq:SFeasibleShort}\n\\begin{aligned}\n|S\\cap V_{\\leq \\ell_{i+1}-1}| &\\leq \\ell_{i} \n \\quad \\forall i\\in [k-1],\\\\\n|S\\cap V| &\\leq L.\n\\end{aligned}\n\\end{equation}\nTo show that there is a good subset\n$\\overline{S}\\subseteq S^*$ that\nsatisfies~\\eqref{eq:SFeasibleShort} we use a\npolyhedral approach.\nObserve that~\\eqref{eq:SFeasibleFull} is the\nconstraint system of a laminar matroid\n(see~\\cite[Volume B]{Schrijver2003} for more information on matroids).\nHence,\nthe convex hull of all characteristic vectors\n$\\chi^S\\in \\{0,1\\}^V$ of sets $S\\subseteq S^*$\nsatisfying~\\eqref{eq:SFeasibleShort} is given\nby the following polytope\n\\begin{equation*}\nP = \\left\\{\nx\\in [0,1]^V \\;\\middle\\vert\\;\n\\begin{minipage}[c]{0.4\\linewidth}\n\\vspace{-1em}\n\\begin{align*}\nx(V_{\\leq \\ell_{i+1}-1}) &\\leq \\ell_{i} \\;\\;\\forall i\\in [k-1],\\\\\nx(V) &\\leq L,\\\\\nx(V\\setminus S^*) &= 0\n\\end{align*}\n\\end{minipage}\n\\right\\}.\n\\end{equation*}\nAlternatively, to see that $P$ indeed\ndescribes the correct polytope,\nwithout relying on matroids, one can observe that its\nconstraint matrix is totally unimodular because it\nhas the consecutive-ones property with respect to the\ncolumns.\n\n\nThus there exists a set $\\overline{S}\\subseteq S^*$ with\n$\\sum_{u\\in \\overline{S}} w(T_u) \\geq 
(1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I}))$\nif and only if\n\\begin{equation}\\label{eq:polSb}\n\\max\\left\\{\\sum_{u\\in S^*} x(u)\\cdot\n w(T_u) \\;\\middle\\vert\\;\n x\\in P\\right\\}\\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I})).\n\\end{equation}\nTo show~\\eqref{eq:polSb}, and thus complete the proof,\nwe show that $y=\\frac{1}{1+\\delta} \\chi^{S^*}\\in P$.\nThis will indeed imply~\\eqref{eq:polSb} since the\nobjective value of $y$ satisfies\n\\begin{equation*}\n\\sum_{u\\in S^*} y(u) \\cdot w(T_u) =\n \\frac{1}{1+\\delta}\\val(\\mathsf{OPT}(\\mathcal{I}))\n \\geq (1-\\delta)\\val(\\mathsf{OPT}(\\mathcal{I})).\n\\end{equation*}\n\nTo see that $y\\in P$, notice that\n$y(V\\setminus S^*)=0$ and\n$y(V) = \\frac{1}{1+\\delta} |S^*|\n\\leq \\frac{1}{1+\\delta} L \\leq L$, where the\nfirst inequality follows by $S^*$\nsatisfying~\\eqref{eq:SStarFeasible} for $\\ell=L$.\nFinally, for $i\\in [k-1]$, we have\n\\begin{align*}\ny(V_{\\leq \\ell_{i+1}-1}) &=\n \\frac{1}{1+\\delta}\n |S^* \\cap V_{\\leq \\ell_{i+1}-1}|\n\\leq \\frac{1}{1+\\delta}(\\ell_{i+1}-1),\n\\end{align*}\nwhere the inequality follows from $S^*$\nsatisfying~\\eqref{eq:SStarFeasible}\nfor $\\ell=\\ell_{i+1}-1$.\nIt remains to show $\\ell_{i+1} -1 \\leq (1+\\delta)\\ell_i$\nto prove $y\\in P$.\nLet $\\alpha \\in \\mathbb{Z}_{\\geq 0}$ be the smallest\ninteger for which we have\n$\\ell_{i+1} = \\lceil (1+\\delta)^{\\alpha}\\rceil$. In\nparticular, this implies\n$\\ell_{i}=\\lceil (1+\\delta)^{\\alpha-1}\\rceil$. We\nthus obtain\n\\begin{equation*}\n\\ell_{i+1} - 1 \\leq (1+\\delta)^{\\alpha}\n = (1+\\delta) (1+\\delta)^{\\alpha-1}\n \\leq (1+\\delta) \\ell_i,\n\\end{equation*}\nas desired.\n\n\\end{proof}\n\n\n\n\nWe conclude with the proof of Theorem~\\ref{thm:compressionRMFC}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:compressionRMFC}.]\n\n We start by describing the construction of $G' = (V',E')$. As is the case\nin the proof of Theorem~\\ref{thm:compressionFF}, we first change the \nbudget assignment of the instance and then contract all levels with zero budgets.\nNotice that, for a given budget $B$ per layer,\nwe can consider an RMFC instance as a Firefighter instance,\nwhere each leaf $u\\in \\Gamma$ has weight $w(u)=1$, and all other\nweights are zero. Since our goal is to save all leaves, we want\nto save vertices of total weight $|\\Gamma|$.\n\n\nFor simplicity of presentation we assume that $L$ is a power of $2$. This assumption does\nnot compromise generality, as one can always augment the original tree with one path starting from the root and going down to level\n$2^{\\lceil\\log L\\rceil}$.\n\nThe set of levels in which the transformed instance will have\nnonzero budget is \n\\begin{equation*}\n\\mathcal{L} = \\left\\{2^j-1 \\,\\middle\\vert\\, j\\in \\{1,\\ldots, \\log L \\} \\right\\}.\n\\end{equation*}\nHowever, instead of down-pushes we will do \\emph{up-pushes} were\nbudget is moved upwards. More precisely, \nthe budget of any level $\\ell\\in [L]\\setminus \\mathcal{L}$\nwill be assigned to the first level in $\\mathcal{L}$ that\nis above $\\ell$, i.e., has a smaller index than $\\ell$.\nAs for the Firefighter case, we now remove all $0$-budget\nlevels using contraction, which will lead to a new\nweight function $w'$ on the vertices. Since our goal\nis to save the weight of the whole tree,\nwe can remove for each vertex $u$ with $w'(u) > 0$, the\nsubtree below $u$. 
This does not change the problem since\nwe have to save $u$, and thus will anyway also save its subtree.\nThis finishes our construction of $G'=(V',E')$, and the task\nis again to remove all leaves of $G'$.\nNotice that $G'$ has $L' \\leq |\\mathcal{L}| = \\log L $\nmany levels, and level $\\ell\\in [L']$ has a budget of\n$B 2^{\\ell}$ as desired.\nAnalogous to the\ndiscussion for compression in the context of the Firefighter \nproblem we have that if the original problem is feasible,\nthen so is the RMFC problem on $G'$ with\nbudgets $B 2^{\\ell}$.\nIndeed, before performing the contraction operations (which\ndo not change the problem), the original RMFC problem was\na push-down of the one we constructed.\n\n\nSimilarly, one can observe that before contraction,\nthe instance we obtained is itself a push-down of\nthe original instance with budgets $2B$ on each level.\nHence, analogously to the compression result for\nthe Firefighter case, any solution to the RMFC problem\non $G'$ can \nefficiently be transformed into a solution to the original\nRMFC problem on $G$ with budgets $2B$ on each level.\n\n\\end{proof}\n\n\n\n\n\n\\section{Missing details for Firefighter PTAS}\\label{sec:proofsFF}\n\nIn this section we present the missing proofs for our PTAS for the\nFirefighter problem.\n\n\nWe start by proving Lemma~\\ref{lem:sparsityFF}, showing that\nany vertex solution $x$ to \\ref{eq:lpFF} has\nfew $x$-loose vertices.\nMore precisely, the proof below shows that the number\nof $x$-loose vertices is upper bounded by the number\nof tight budget constraints.\nThe precise same reasoning used in the proof of\nLemma~\\ref{lem:sparsityFF} can also be applied\nin further contexts, in particular for the RMFC problem.\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:sparsityFF}}\n\nLet $x$ be a vertex of the polytope defining the feasible set\nof~\\ref{eq:lpFF}.\nHence, $x$ is uniquely defined by\n$|V\\setminus\\{r\\}|$ many linearly independent and tight\nconstraints of this polytope.\nNotice that the tight constraints can be partitioned into\nthree groups:\n\\begin{enumerate}[label=(\\roman*),nosep]\n\\item Tight nonnegativity constraints, one for\neach vertex in $\\mathcal{F}_1=\\{u\\in V\\setminus \\{r\\} \\mid x(u) = 0\\}$.\n\n\\item Tight budget constraints, one for each level in\n$\\mathcal{F}_2 = \\{\\ell\\in [L] \\mid x(V_{\\leq \\ell})=\\sum_{i=1}^\\ell B_i\\}$.\n\n\\item Tight leaf constraints, one for each vertex in\n$\\mathcal{F}_3 = \\{u\\in \\Gamma \\mid x(P_u) = 1\\}$.\n\\end{enumerate}\nDue to potential degeneracies of the polytope describing\nthe feasible set of~\\ref{eq:lpFF} there may be several\noptions to describe $x$ as the unique solution to\na full-rank linear subsystem of the constraints described\nby $\\mathcal{F}_1 \\cup \\mathcal{F}_2 \\cup \\mathcal{F}_3$.\nWe consider a system that contains all tight\nnonnegativity constraints, i.e.,\nconstraints corresponding to $\\mathcal{F}_1$, and\ncomplement these constraints with arbitrary subsets\n$\\mathcal{F}'_2\\subseteq \\mathcal{F}_2$ and \n$\\mathcal{F}'_3\\subseteq \\mathcal{F}_3$ of\nbudget and leaf constraints that lead to a full rank\nlinear system corresponding to the constraints\n$\\mathcal{F}_1 \\cup \\mathcal{F}'_2 \\cup \\mathcal{F}'_3$.\nHence\n\\begin{equation}\\label{eq:fullRankSys}\n|\\mathcal{F}_1| + |\\mathcal{F}'_2| + |\\mathcal{F}'_3| = |V| - 1.\n\\end{equation}\n\n\nLet $V^{\\mathcal{L}}\\subseteq \\operatorname{supp}(x)$\nand $V^{\\mathcal{T}}\\subseteq \\operatorname{supp}(x)$\nbe the $x$-loose and 
$x$-tight vertices, respectively.\nWe first show $|\\mathcal{F}'_3|\\leq |V^{\\mathcal{T}}|$.\nFor each leaf $u\\in \\mathcal{F}'_3$, let $f_u\\in V^\\mathcal{T}$ be \nthe first vertex on the unique $u$-root path that is part of\n$\\operatorname{supp}(x)$. In particular, if $u\\in \\operatorname{supp}(x)$ then $f_u=u$.\nClearly, $f_u$ must be an $x$-tight vertex because\nthe path constraint with respect to $u$ is tight.\nNotice that for any distinct vertices $u_1,u_2\\in \\mathcal{F}'_3$,\nwe must have $f_{u_1}\\neq f_{u_2}$. Assume by sake of\ncontradiction that $f_{u_1}= f_{u_2}$. However, this implies\n$\\chi^{P_{u_1}} - \\chi^{P_{u_2}}\\in \\spn(\\{\\chi^{v} \\mid v\\in \\mathcal{F}_1\\})$, since \n$P_{u_1} \\Delta P_{u_2} := (P_{u_1} \\setminus P_{u_2})\\cup (P_{u_2} \\setminus P_{u_1}) \\subseteq \\mathcal{F}_1$, and leads to a contradiction\nbecause we exhibited a linear dependence among the constraints\ncorresponding to $\\mathcal{F}'_3$ and $\\mathcal{F}_1$.\nHence, $f_{u_1}\\neq f_{u_2}$ which implies that the\nmap $u \\mapsto f_u$ from $\\mathcal{F}'_3$ to $V^{\\mathcal{T}}$\nis injective and thus\n\\begin{equation}\\label{eq:boundLeafConstr}\n|\\mathcal{F}'_3| \\leq |V^{\\mathcal{T}}|.\n\\end{equation}\nWe thus obtain\n\\begin{align*}\n|\\operatorname{supp}(x)| &= |V|-1-|\\mathcal{F}_1|\n && \\text{($\\operatorname{supp}(x)$ consists of all $u\\in V\\setminus \\{r\\}$ with\n $x(u)\\neq 0$, i.e., $u\\not\\in \\mathcal{F}_1$)}\\\\\n &= |\\mathcal{F}'_2| + |\\mathcal{F}'_3|\n && \\text{(by~\\eqref{eq:fullRankSys})}\\\\\n &\\leq |\\mathcal{F}'_2| + |V^{\\mathcal{T}}|\n && \\text{(by~\\eqref{eq:boundLeafConstr})},\n\\end{align*}\nwhich leads to the desired result since\n\\begin{equation*}\n|V^{\\mathcal{L}}| = |\\operatorname{supp}(x)| - |V^{\\mathcal{T}}|\n \\leq |\\mathcal{F}'_2| \\leq L.\n\\end{equation*}\n\n\n\n\n\n\\qed\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:pruning}}\nWithin this proof we focus on protection sets where the budget available\nfor any level is spent on the same level (and not a later one).\nAs discussed, there is always an optimal protection set\nwith this property.\n\nLet $B_\\ell \\in \\mathbb{Z}_{\\geq 0}$ be the budget available at level $\\ell\\in [L]$ and let \n$\\lambda_\\ell = \\lambda B_\\ell$.\n We construct the tree $G'$ using the following greedy procedure. Process\nthe levels of $G$ from the first one to the last one. At every level $\\ell\\in [L]$,\npick $\\lambda_\\ell$ vertices $u^\\ell_1, \\cdots, u^\\ell_{\\lambda_\\ell}$ at the $\\ell$-th \nlevel of $G$ greedily, i.e., pick each next vertex such that the subtree corresponding to that \nvertex has largest weight among all remaining vertices in the level. 
\nAfter each selection of a vertex the greedy procedure can no longer \nselect any vertex in the corresponding subtree in subsequent iterations.\\footnote{\nFor $\\lambda=1$ this procedure produces a set of vertices, which comprise\na $\\frac{1}{2}$-approximation for the Firefighter problem, as it coincides\nwith the greedy algorithm of Hartnell and Li~\\cite{HartnellLi2000}.}\n\nNow, the tree $G'$ is constructed by deleting from $G$ any vertex\nthat is both not contained in any subtree $T_{u^\\ell_i}$, and not \ncontained in any path $P_{u^\\ell_i}$ for $\\ell\\in [L]$ and $i\\in [\\lambda_\\ell]$.\nIn other words, if $U\\subseteq V$ is the set of all leaves\nof $G$ that were disconnected from the root by the greedy\nalgorithm, then we consider the subtree of $G$ induced\nby the vertices $\\cup_{u\\in U}P_u$.\nFinally, the weights of vertices on the paths \n$P_{u^\\ell_i} \\setminus \\{u^\\ell_i\\}$ for $\\ell\\in [L]$ and $i\\in [\\lambda_\\ell]$ are reduced\nto zero. This concludes the construction of $G'=(V',E')$ and the new weight function $w'$. Denote\nby $D_\\ell = \\{u^\\ell_1,\\cdots, u^\\ell_{\\lambda_\\ell}\\}$ the set of vertices chosen by the\ngreedy procedure in level $\\ell$, and let $D=\\cup_{\\ell\\in [L]} D_{\\ell}$.\nObserve that by construction we have that each vertex\nwith non-zero weight is in the subtree of a vertex in $D$, i.e.,\n$$\nw'(V') = \\sum_{u\\in D} w'(T'_u).\n$$\nThe latter immediately implies point~\\ref{item:pruningLargeOpt}\nof Lemma~\\ref{lem:pruning} because the vertices $D$ can\nbe partitioned into $\\lambda$ many vertex sets that are\nbudget-feasible and can thus be protected in a Firefighter solution.\nHence an optimal solution to the Firefighter problem\non $G'$ covers at least a $\\frac{1}{\\lambda}$-fraction of the total\nweight of $G'$.\n\n\nIt remains to prove point~\\ref{item:pruningSmallLoss} of the Lemma.\nLet $S^* = S^*_1\\cup \\cdots \\cup S^*_L$ be the vertices protected in some optimal\nsolution in $G$, where $S^*_\\ell \\subseteq V_\\ell$ are the vertices protected in level $\\ell$ (and\nhence $|S^*_\\ell| \\leq B_\\ell$). \nWithout loss of generality, we assume that $S^*$ is redundancy-free.\nFor distinct vertices $u,v\\in V$ we say that $u$ \\emph{covers} $v$ if $v\\in T_u \\setminus \\{u\\}$.\n\nFor $\\ell \\in [L]$, let $I_\\ell = S^*_l \\cap D_\\ell$ be the set of vertices protected \nby the optimal solution that are also chosen by the greedy algorithm in level $\\ell$.\nFurthermore, let $J_\\ell \\subseteq S^*_\\ell$\nbe the set of vertices of the optimal solution that are \ncovered by vertices chosen by the greedy algorithm in earlier\niterations, i.e.,\n$J_\\ell = S^*_\\ell \\cap \\bigcup_{u\\in D_1\\cup\\cdots\\cup D_{\\ell -1}} T_u$. \nFinally, let $K_\\ell = S^*_\\ell \\setminus (I_\\ell \\cup J_\\ell)$ be all other\noptimal vertices in level $\\ell$. Clearly, $S^*_\\ell = I_\\ell \\cup J_\\ell \\cup K_\\ell$ \nis a partition of $S^*_\\ell$.\n\nConsider a vertex $u\\in K_\\ell$ for some $\\ell\\in [L]$. From the guarantee of the greedy \nalgorithm it holds that for every vertex $v\\in D_\\ell$ we have $w'(T_v) = w(T_v) \\geq w(T_u)$. \nThe same does not necessarily hold for covered vertices. \nOn the other hand, covered vertices\nare contained in $G'$ with their original weights. 
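\n\nFor concreteness, the greedy selection used in this construction can be sketched as follows (Python; the adjacency-list representation, the array names and the memoized weight computation are illustrative choices and not part of the lemma):\n\\begin{verbatim}\ndef subtree_weight(u, children, weight, memo):\n    # w(T_u): total weight of the subtree rooted at u (memoized)\n    if u not in memo:\n        memo[u] = weight[u] + sum(subtree_weight(v, children, weight, memo)\n                                  for v in children[u])\n    return memo[u]\n\ndef greedy_select(children, level, weight, budget, lam, L):\n    # children[u]: child list; level[u]: level of u (root r = 0 at level 0);\n    # weight[u]: w(u); budget[l]: B_l; lam: the (integer) parameter lambda\n    memo, blocked, chosen = {}, set(), []\n    by_level = {l: [] for l in range(1, L + 1)}\n    for u in level:\n        if u != 0:\n            by_level[level[u]].append(u)\n    for l in range(1, L + 1):                    # process levels top-down\n        cand = [u for u in by_level[l] if u not in blocked]\n        cand.sort(key=lambda u: subtree_weight(u, children, weight, memo),\n                  reverse=True)\n        for u in cand[:lam * budget[l]]:         # lambda_l = lambda * B_l picks per level\n            chosen.append(u)\n            stack = [u]                          # T_u becomes unavailable from now on\n            while stack:\n                v = stack.pop()\n                blocked.add(v)\n                stack.extend(children[v])\n    return chosen                                # the vertices u^l_i of the construction\n\\end{verbatim}\nConstructing $G'$ and $w'$ from the returned vertices, i.e., keeping only the subtrees and root paths of the chosen vertices and zeroing the weights on the path interiors, is then immediate.\n\n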
We exploit these two \nproperties to prove the existence of a solution in $G'$\nof almost the same weight as $S^*$.\n\nTo prove the existence of a good solution we construct\na solution $A = A_1 \\cup \\cdots \\cup A_L$ with $A_\\ell \\subseteq V_\\ell$ and $|A_\\ell| \\leq B_\\ell$\nrandomly, and prove a bound on its expected quality.\nWe process the levels of the tree $G'$ top-down to construct $A$ step\nby step.\nThis clearly does not compromise generality. Recall that we only need to prove the \nexistence of a good solution, and not compute it efficiently. We can hence assume the\nknowledge of $S^*$ in the construction of $A$. To this end assume that all levels\n$\\ell' < \\ell$ were already processed, and the corresponding sets $A_{\\ell'}$ were\nconstructed. The set $A_{\\ell}$ is constructed as follows:\n\n\\begin{enumerate}\n\\item Include in $A_\\ell$ all vertices in $I_\\ell$.\n\\item Include in $A_\\ell$ all vertices in $J_\\ell$ that are not \ncovered by vertices in $A_1\\cup \\cdots \\cup A_{\\ell-1}$ (vertices selected so far).\n\\item Include in $A_\\ell$ a \\emph{uniformly random subset} of $|K_\\ell|$ vertices\nfrom $D_\\ell \\setminus I_\\ell$.\n\\end{enumerate}\n\nIt is easy to verify that the latter algorithm returns a redundancy-free solution, as no two\nchosen vertices in $A$ lie on the same path to the root. Next, we show that the expected\nweight of vertices saved by $A$ is at least $(1-\\frac{1}{\\lambda})\\val(\\mathsf{OPT}(\\overline{\\mathcal{I}}))$, \nwhich will prove our claim, since then at least one solution has the desired quality.\n\nSince we only need a bound on the expectation we can focus on a single level $\\ell \\in [L]$ \nand show that the contribution of vertices in $A_\\ell$ is in expectation at least $1-\\frac{1}{\\lambda}$\ntimes the contribution of the vertices in $S^*_\\ell$. Observe that the vertices in $I_\\ell$ are\ncontained both in $S^*_\\ell$ and in $A_\\ell$, hence it suffices to show that the contribution\nof $A_\\ell \\setminus I_\\ell$ is at least $1-\\frac{1}{\\lambda}$ times the contribution \nof $S^*_\\ell \\setminus I_\\ell$, in expectation. Also, recall that every vertex in $D_\\ell$\ncontributes at least as much as any vertex in $K_\\ell$, by the greedy selection rule. It follows\nthat the $|K_\\ell|$ randomly selected vertices in $A_\\ell$ have at least as much contribution\nas the vertices in $K_\\ell$. Consequently, to prove the claim is suffices to bound the \nexpected contribution of vertices in $A_\\ell \\cap J_\\ell$ with respect to the contribution of\n$J_\\ell$. Since $A_\\ell \\cap J_\\ell \\subseteq J_\\ell$ it suffices to show that every vertex\n$u\\in J_\\ell$ is also present in $A_\\ell$ with probability at least $1-\\frac{1}{\\lambda}$.\n\nTo bound the latter probability we make use of the random choices in the construction\nof $A$ as follows. Let $\\ell' < \\ell$ be the level at which for some $w\\in D_{\\ell'}$ it \nholds that $u\\in T_w$. In other words, $\\ell'$ is the level that contains the ancestor \nof $u$ that was chosen by the greedy construction of $G'$. Now, since $S^*$ is redundancy-free,\nand by the way that $A$ is constructed, it holds that if $u\\not\\in A_\\ell$ \nthen $w\\in A_{\\ell'}$, namely if $u$ is covered, it can only be covered by the \nunique ancestor $w$ of $u$ that was chosen in the greedy construction of $G'$. Furthermore,\nin such a case the vertex $w$ was selected randomly in the third step of the $\\ell'$-th\niteration. 
Put differently, the probability that the vertex $u$ is covered \nis exactly the probability that its ancestor $w$ is chosen randomly to be part of $A_{\\ell'}$.\nSince these vertices are chosen to be a random subset of $|K_{\\ell'}|$ vertices from the set $D_{\\ell'}\\setminus I_{\\ell'}$,\nthis probability is at most \n$$\n\\frac{|K_{\\ell'}|}{|D_{\\ell'}| - |I_{\\ell'}|} =\n\\frac{|K_{\\ell'}|}{\\lambda B_{\\ell'} - |I_{\\ell'}|} \\leq \n\\frac{1}{\\lambda}, \n$$\nwhere the last inequality follows from $|K_{\\ell'}| + |I_{\\ell'}| \\leq B_{\\ell'}$.\nThis implies that $u\\in A_\\ell$ with probability of at least $1-\\frac{1}{\\lambda}$, as required\nand concludes the proof of the lemma.\n\n\n\\qed\n\n\n\n\n\n\\subsubsection*{Proof of Lemma~\\ref{lem:setQ}}\n\n\n\nWe construct the set $Q$ in two phases as follows. First we construct \na set $\\overline Q \\subseteq H$ of vertices fulfilling the first and the third properties, i.e.,\nit will satisfy $|\\overline Q| = O(\\frac{\\log N}{\\epsilon^3})$, as well as the property that\n$G[V\\setminus \\overline Q\\cup \\{r\\}]$ has connected components each of weight at most $\\eta$. Then,\nwe add to $\\overline Q$ all vertices of $H$ of degree at least three to arrive\nat the final set $Q$.\n\nIt will be convenient to define heavy vertices and heavy tree with respect to any \nsubtree $G'= (V', E')$ of $G$ which contains the root $r$. Concretely, we \ndefine $H_{G'} = \\{u\\in V'\\setminus \\{r\\} \\,\\mid\\, w(T'_u)\\geq \\eta\\}$ \nto be the set of $G'$-heavy vertices. The $G'$-heavy tree is the\nsubtree $G'[H_{G'} \\cup \\{r\\}]$ of $G'$. Observe that $H = H_G$ and that\n$H_{G'} \\subseteq H$ for every subtree $G'$ of $G$.\n\nTo construct $\\overline Q$ we process the tree $G$ in a bottom-up \nfashion starting with $\\overline Q = \\emptyset$. We will also remove\nparts of the tree in the end of every iteration. The first iteration \nstarts with $G' = G$. In every iteration that starts with tree $G'$, include in \n$\\overline Q$ an arbitrary leaf $u\\in H_{G'}$ of the heavy tree and remove $u$ and all vertices\nin its subtree from $G'$. The procedure ends when there is\neither no heavy vertex in $G'$ anymore, or when $r$ is the\nonly heavy vertex in $G'$.\n\nLet us verify that the claimed properties indeed hold. The fact that \n$|\\overline Q| = O(\\frac{\\log N}{\\epsilon^3})$ follows from the fact that at each iteration \nwe remove a $G'$-heavy vertex including all its subtree from the \ncurrent tree $G'$. This implies that the total weight of the tree $G'$\ndecreases by at least $\\eta$ in every iteration. Since we only include one \nvertex in every iteration we have\n$|\\overline Q| \\leq \\frac{w(V)}{\\eta} = O(\\frac{\\log N}{\\epsilon^3})$.\n\nThe third property follows from the fact that we always remove a leaf\nof the $G'$-heavy tree. Observe that the connected components of \n$G[V\\setminus (\\overline Q \\cup \\{r\\})]$ are contained in the subtrees\nwe disconnect in every iteration in the construction of $\\overline Q$.\nBy definition of $G'$-heavy leaves, in any such iteration where \na $G'$-heavy leaf $u$ is removed from the tree, these parts have weight \nat least $\\eta$, but any subtree rooted at any descendant of $u$ has\nweight strictly smaller than $\\eta$ (otherwise this descendant would\nbe $G'$-heavy as well, contradicting the assumption that it has a\n$G'$-heavy leaf $u$ as an ancestor). 
Now, since $u$ is included in $\\overline Q$,\nthe connected components are exactly these subtrees, so the property indeed holds.\n\nTo construct $Q$ and conclude the proof it remains to include in $\\overline Q$\nall remaining nodes of degree at least three in the heavy tree. The \nfact that also all leaves of the heavy tree are included in $Q$ is\nreadily implied by the construction of $\\overline Q$, so the second property \nholds for $Q$. Clearly, by removing more vertices from the heavy tree, the sizes\nof connected components only get smaller, so $Q$ also satisfies the third\ncondition, since $\\overline Q$ already did. Finally, the number of \nvertices of degree at least three in the heavy tree is strictly\nless than the number of its leaves, which is $O(\\frac{\\log N}{\\epsilon^3})$;\nfor otherwise a contradiction would occur since the tree would\nhave an average degree of at least $2$.\nThis implies that, in\ntotal, $|Q| = O(\\frac{\\log N}{\\epsilon^3})$,\nso the first property also holds.\n\nTo conclude the proof of the lemma it remains to note that the latter\nconstruction can be easily implemented in polynomial time.\n\n\\qed\n\n\n\n\\section{Missing details for $O(1)$-approximation\nfor RMFC}\\label{sec:proofsRMFC}\n\nThis section contains the missing proofs for our\n$12$-approximation for RMFC.\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:bottomCover}}\n\n\n\nTo prove Theorem~\\ref{thm:bottomCover} we first show\nthe following result, based on which Theorem~\\ref{thm:bottomCover}\nfollows quite directly.\n\n\n\\begin{lemma}\\label{lem:sliceCover}\nLet $B\\in \\mathbb{R}_{\\geq 1}$, $\\eta\\in (0,1]$,\n$k \\in \\mathbb{Z}_{\\geq 1}$, and\n$\\ell_1 = \\lfloor \\log^{(k)} L \\rfloor$,\n$\\ell_2 = \\lfloor \\log^{(k-1)} L \\rfloor$.\nLet $x\\in P_B$ with\n$\\operatorname{supp}(x)\\subseteq V_{(\\ell_1,\\ell_2]}\n \\coloneqq V_{>\\ell_1} \\cap V_{\\leq \\ell_2}$,\nand we define $Y = \\{u\\in \\Gamma \\mid x(P_u) \\geq \\eta\\}$.\nThen one can efficiently compute a\nset $R\\subseteq V_{(\\ell_1,\\ell_2]}$ such\nthat\n\\smallskip\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item\\label{item:scHitPath}\n$R\\cap P_u \\neq \\emptyset \\quad \\forall u\\in Y$, and\n\n\\item\\label{item:scBudgetOk}\n$\\chi^R\\in P_{\\bar{B}}$,\nwhere $\\bar{B} = \\frac{1}{\\eta} B + 1$.\n\\end{enumerate}\n\\end{lemma}\n\n\nWe first observe that Lemma~\\ref{lem:sliceCover} indeed\nimplies Theorem~\\ref{thm:bottomCover}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:bottomCover}]\nFor $k=1,\\dots, q$, let\n$\\ell_1^k = \\lfloor \\log^{(k)} L\\rfloor$ and\n$\\ell_2^k = \\lfloor \\log^{(k-1)} L\\rfloor$, and we define\n$x^k\\in P_B$ by $x^k = x \\wedge \\chi^{V_{(\\ell_1^k, \\ell_2^k]}}$.\nHence, $x=\\sum_{k=1}^q x^k$.\nFor each $k\\in [q]$, we apply Lemma~\\ref{lem:sliceCover} to\n$x^k$ with $\\eta = \\frac{\\mu}{q}$ to obtain a set\n$R^k \\subseteq V_{(\\ell_1^k, \\ell_2^k]}$ satisfying\n\\begin{enumerate}[nosep, label=(\\roman*)]\n\\item $R^{k}\\cap P_u \\neq \\emptyset$ \\quad\n$\\forall u\\in Y^k=\\{u\\in \\Gamma \\mid x^k(P_u) \\geq \\eta\\}$, and\n\n\\item $\\chi^{R^k} \\in P_{\\bar{B}}$, where\n$\\bar{B} \\coloneqq \\frac{1}{\\eta} B + 1 = \\frac{q}{\\mu} B + 1\n \\eqqcolon B'$.\n\\end{enumerate}\nWe claim that $R=\\cup_{k=1}^q R^k$ is a set satisfying\nthe conditions of Theorem~\\ref{thm:bottomCover}.\nThe set $R$ clearly satisfies $\\chi^R \\in P_{B'}$\nsince $\\chi^{R^k}\\in P_{B'}$ for $k\\in [q]$\nand the sets $R^k$ are on disjoint levels.\nFurthermore, for each $u\\in W=\\{v\\in \\Gamma 
\\mid x(P_v)\\geq \\mu\\}$\nwe indeed have $P_u\\cap R\\neq\\emptyset$ due to the following.\nSince $x=\\sum_{k=1}^q x^k$ and $x(P_u) \\geq \\mu$ there exists\nan index $j\\in [q]$ such that $x^j(P_u) \\geq \\eta = \\frac{\\mu}{q}$,\nand hence $P_u \\cap R \\supseteq P_u \\cap R^j \\neq \\emptyset$.\n\n\\end{proof}\n\n\n\n\nThus, it remains to prove Lemma~\\ref{lem:sliceCover}.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:sliceCover}]\\leavevmode\n\nLet $\\tilde{B} = \\frac{1}{\\eta} B$.\nWe start by determining an optimal vertex solution $y$\nto the linear program $\\min\\{z(V\\setminus \\{r\\}) \\mid z\\in Q\\}$,\nwhere\n\\begin{equation*}\nQ = \\{z\\in P_{\\tilde{B}}\n \\mid z(u) = 0\\;\\forall u\\in V\n \\setminus (V_{(\\ell_1,\\ell_2]} \\cup \\{r\\}),\\;\\;\nz(P_u) \\geq 1 \\;\\forall u\\in Y\\}.\n\\end{equation*}\nNotice that $Q\\neq \\emptyset$\nsince $\\frac{1}{\\eta} x \\in Q$; hence, the above\nLP is feasible.\nFurthermore, notice that $y(P_u)\\leq 1$ for $u\\in \\Gamma$;\nfor otherwise, there is a vertex $v\\in \\operatorname{supp}(y)$ such that\n$y(P_v) > 1$, and hence $y - \\epsilon \\chi^{\\{v\\}}\\in Q$\nfor a small enough $\\epsilon >0$, violating that \n$y$ is an \\emph{optimal} vertex solution.\n\n\nLet $V^{\\mathcal{L}}$ be all $y$-loose vertices.\nWe will show that the set\n\\begin{equation*}\nR = V^{\\mathcal{L}} \\cup \\{u\\in V\\setminus \\{r\\} \\mid y(u)=1\\}\n\\end{equation*}\nfulfills the properties claimed by the lemma.\nClearly, $R\\subseteq V_{(\\ell_1,\\ell_2]}$ since\n$\\operatorname{supp}(y) \\subseteq V_{(\\ell_1,\\ell_2]}$.\n\nTo see that condition~\\ref{item:scHitPath}\nholds, let $u\\in Y$, and notice that we have $y(P_u)=1$.\nEither $|P_u \\cap \\operatorname{supp}(y)| =1$, in which case\nthe single vertex $v$ in $P_u\\cap \\operatorname{supp}(y)$ satisfies\n$y(u)=1$ and is thus contained in $R$; or $|P_u\\cap \\operatorname{supp}(y)| > 1$,\nin which case $P_u\\cap V^{\\mathcal{L}} \\neq \\emptyset$ which again\nimplies $R\\cap P_u \\neq \\emptyset$.\n\n\nTo show that $R$ satisfies~\\ref{item:scBudgetOk},\nwe have to show that $R$ does not exceed the budget\n$\\bar{B}\\cdot 2^\\ell = (\\frac{1}{\\eta}B + 1) 2^\\ell$ of any\nlevel $\\ell\\in \\{\\ell_1+1,\\dots, \\ell_2\\}$.\nWe have\n\\begin{align*}\n|R\\cap V_\\ell| \\leq y(V_\\ell) + |V^{\\mathcal{L}}|\n\\leq \\tilde{B} 2^\\ell + |V^{\\mathcal{L}}|\n= \\frac{1}{\\eta} B 2^\\ell + |V^{\\mathcal{L}}|,\n\\end{align*}\nwhere the second inequality follows from $y\\in Q$.\nTo complete the proof it suffices to show\n$|V^{\\mathcal{L}}| \\leq 2^\\ell$.\nThis follows by a sparsity reasoning analogous to\nLemma~\\ref{lem:sparsityFF} implying that the number\nof $y$-loose vertices is bounded by the number\nof tight budget constraints, and thus\n\\begin{equation}\\label{eq:budgetBoundsFirstStep}\n|V^{\\mathcal{L}}| \\leq \\ell_2 - \\ell_1 \\leq \\ell_2\n = \\lfloor \\log^{(k-1)} L \\rfloor.\n\\end{equation}\nFurthermore,\n\\begin{align*}\n2^\\ell &\\geq 2^{\\ell_1+1} = 2^{\\lfloor \\log^{(k)} L \\rfloor + 1}\n\\geq 2^{\\log^{(k)} L} = \\log^{(k-1)} L,\n\\end{align*}\nwhich, together with~\\eqref{eq:budgetBoundsFirstStep},\nimplies $|V^{\\mathcal{L}}| \\leq 2^\\ell$ and thus\ncompletes the proof.\n\n\\end{proof}\n\n\n\\subsection*{Proof of Theorem~\\ref{thm:bigBIsGood}}\n\nLet $(y,B)$ be an optimal solution to the RMFC\nrelaxation $\\min\\{B \\mid x\\in \\bar{P}_B\\}$\nand let $h=\\lfloor \\log L \\rfloor$.\nHence, $B\\leq B_\\mathsf{OPT}$.\nWe invoke Theorem~\\ref{thm:bottomCover} with respect\nto the vector 
$y\\wedge \\chi^{V_{>h}}$ and $\\mu=0.5$\nto obtain a set $R_1\\subseteq V_{>h}$ satisfying\n\\begin{enumerate}[nosep,label=(\\roman*)]\n\\item $R_1\\cap P_u\\neq \\emptyset\n\\quad\\forall u\\in W$,\nand\n\n\\item $\\chi^{R_1} \\in P_{2B+1}$,\n\\end{enumerate}\nwhere\n$W = \\{u\\in \\Gamma \\mid y(P_u\\cap V_{>h}) \\geq 0.5\\}$.\nHence, $R_1$ cuts off all leaves in $W$ from\nthe root by only protecting vertices on\nlevels $V_{> h}$ and using budget bounded by\n$2B+1 \\leq 3B \\leq 3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\n\n\nWe now focus on the leaves $\\Gamma \\setminus W$,\nwhich we will cut off from the root by protecting\na vertex set $R_2 \\subseteq V_{\\leq h}$ feasible\nfor budget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\nLet $(z,\\bar{B})$ be an optimal vertex\nsolution to the\nfollowing linear program\n\\begin{equation}\\label{eq:reoptTop}\n\\min\\left\\{\\bar{B} \\;\\middle\\vert\\;\nx\\in P_{\\bar{B}},\\; \nx(P_u) = 1 \\;\\forall u\\in \\Gamma\\setminus W\n\\right\\}.\n\\end{equation}\nFirst, notice that~\\eqref{eq:reoptTop} is feasible\nfor $\\bar{B}\\leq 2B$. This follows by observing\nthat the vector $q= 2(y\\wedge \\chi^{V_{\\leq h}})$\nsatisfies $q\\in P_{2 B}$ since $y\\in P_B$.\nMoreover, for $u\\in \\Gamma\\setminus W$,\nwe have\n\\begin{equation*}\nq(P_u) = 2 y(P_u \\cap V_{\\leq h})\n= 2 (1-y(P_u \\cap V_{> h})) > 1,\n\\end{equation*}\nwhere the last inequality follows from\n$y(P_u\\cap V_{>h}) < 0.5$ because\n$u\\in \\Gamma\\setminus W$.\nFinally, there exists a vector\n$q' < q$ such that\n$q'(P_u) =1$ for $u\\in \\Gamma\\setminus W$.\nThe vector $q'$ can be obtained from $q$ by\nsuccessively reducing values on vertices\n$v\\in \\operatorname{supp}(q)$ satisfying\n$q(P_v) > 1$.\nThis shows that $(q',2B)$ is a feasible\nsolution to~\\eqref{eq:reoptTop} and hence\n$\\bar{B} \\leq 2B$.\n\nConsider the set of all $z$-loose vertices\n$V^{\\mathcal{L}}=\\{u\\in \\operatorname{supp}(z) \\mid z(P_u)<1\\}$.\nWe define\n\\begin{equation*}\nR_2 = V^{\\mathcal{L}} \\cup \n\\{u\\in \\operatorname{supp}(z) \\mid z(u)=1\\}.\n\\end{equation*}\nNotice that for each $u\\in \\Gamma\\setminus W$,\nthe set $R_2$ contains a vertex on the\npath from $u$ to the root. Indeed, either\n$|\\operatorname{supp}(z)\\cap P_u|=1$ in which case there\nis a vertex $v\\in P_u$ with $z(v)=1$, which is\nthus contained in $R_2$, or $|\\operatorname{supp}(z)\\cap P_u|>1$\nin which case the vertex $v\\in \\operatorname{supp}(z)\\cap P_u$\nthat is closest to the root among all vertices in\n$\\operatorname{supp}(z)\\cap P_u$ is a $z$-loose vertex.\nHence, the set $R=R_1\\cup R_2$ cuts off all leaves\nfrom the root. 
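\n\nThe extraction of $R_2$ from the vertex solution $z$, namely keeping the variables set to one and adding all $z$-loose vertices, is the same rounding step as in the proof of Lemma~\\ref{lem:sliceCover}; a minimal sketch, assuming the tree is given by a hypothetical parent map and the solution as a dictionary:\n\\begin{verbatim}\ndef round_vertex_solution(z, parent, eps=1e-9):\n    # z      : dict, vertex -> value of the vertex solution (root excluded)\n    # parent : dict, vertex -> parent vertex, with parent[root] = None\n    def z_on_root_path(u):\n        total = 0.0\n        while u is not None:              # sum z over the u-root path P_u\n            total += z.get(u, 0.0)\n            u = parent[u]\n        return total\n\n    support = [u for u, val in z.items() if val > eps]\n    ones    = {u for u in support if z[u] > 1.0 - eps}\n    loose   = {u for u in support if z_on_root_path(u) < 1.0 - eps}\n    return ones | loose                   # this is the set R_2\n\\end{verbatim}\n\n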
It remains to show that it is\nfeasible for budget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$.\n\nUsing an analogous sparsity reasoning as in\nLemma~\\ref{lem:sparsityFF}, we obtain that\n$|V^{\\mathcal{L}}|$ is bounded by the number\nof tight budget constraints, which is at most\n$h=\\lfloor \\log L \\rfloor \\leq \\log L$.\nHence, for any level $\\ell\\in [h]$, we have\n\\begin{align*}\n|R_2 \\cap V_\\ell| &\\leq |V^{\\mathcal{L}}| + z(V_\\ell) \\\\\n &\\leq \\log L + 2^\\ell \\bar{B} && \\text{($(z,\\bar{B})$ feasible\nfor~\\eqref{eq:reoptTop})}\\\\\n &\\leq \\log L + 2^\\ell \\cdot (2 B) && \\text{($\\bar{B}\\leq 2B$)}\\\\\n &\\leq 2^\\ell \\cdot (3 \\max\\{\\log L, B_\\mathsf{OPT}\\}).\n && \\text{($B\\leq B_\\mathsf{OPT}$)}\n\\end{align*}\nThus, both $R_1$ and $R_2$ are budget-feasible for\nbudget $3 \\max\\{\\log L, B_\\mathsf{OPT}\\}$, and since they\ncontain vertices on disjoint levels, $R=R_1\\cup R_2$\nis feasible for the same budget.\n\n\\qed\n\n\n\n\n\n\\subsection*{Proof of Lemma~\\ref{lem:enumWorks}}\n\nTo show that the running time of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nis polynomial, we show that there is only a polynomial\nnumber of recursive calls to\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D,\\gamma)$}. Notice that the number\nof recursive calls done in one execution of\nstep~\\ref{item:enumRecCall} of the algorithm is equal\nto $2 |F_x|$.\nWe thus start by upper bounding $|F_x|$ for any solution\n$(x,B)$ to \\ref{eq:lpRMFCAD} with $B < \\log L$.\nConsider a vertex $f_u\\in F_x$, where\n$u\\in \\Gamma\\setminus W_x$.\nSince $u$ is a leaf not in $W_x$, we have\n$x(P_u \\cap V_{\\leq h}) > \\frac{1}{3}$, and\nthus\n\\begin{equation*}\nx(T_{f_u}\\cap V_{\\leq h}) > \\frac{1}{3}\n\\quad \\forall f_u \\in F_x.\n\\end{equation*}\nBecause no two vertices of $F_x$ lie on the same\nleaf-root path, the sets $T_{f_u} \\cap V_{\\leq h}$\nare all disjoint for different $f_u\\in F_x$,\nand hence\n\\begin{align*}\n\\frac{1}{3}|F_x| &< \\sum_{f \\in F_x} x(T_{f}\\cap V_{\\leq h})\\\\\n &\\leq x(V_{\\leq h})\n && \\text{(disjointness of sets $T_{f}\\cap V_{\\leq h}$\n for different $f \\in F_x$})\\\\\n &\\leq \\sum_{\\ell=1}^h 2^\\ell B\n && \\text{($x$ satisfies budget constraints of~\\ref{eq:lpRMFCAD} )}\\\\\n &< 2^{h+1} B\\\\\n &< 2 (\\log L)^2.\n && \\text{($h=\\lfloor \\log^{(2)} L \\rfloor$ and $B < \\log L$)}\n\\end{align*}\nSince the recursion depth is\n$\\bar{\\gamma}=2(\\log L)^2 \\log^{(2)} L$,\nthe number of recursive calls is bounded by\n\\begin{align*}\nO\\left((2 |F_x|)^{\\bar{\\gamma}}\\right) &= \n(\\log L)^{O((\\log L)^2 \\log^{(2)} L)}\n=2^{o(L)} = o(N),\n\\end{align*}\nthus showing that\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nruns in polynomial time.\n\nIt remains to show that \\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} finds\na triple satisfying the conditions of Lemma~\\ref{lem:goodEnum}.\nFor this we identify a particular execution path of the\nrecursive procedure \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$} that,\nat any point in the algorithm, will maintain a clean\npair $(A,D)$ that is compatible with $\\mathsf{OPT}$,\ni.e., $A\\subseteq \\mathsf{OPT}$ and $D\\cap \\mathsf{OPT} = \\emptyset$.\nAt the beginning of the algorithm we clearly have\ncompatibility with $\\mathsf{OPT}$ since $A=D=\\emptyset$.\nTo identify the execution path we are 
interested\nin, we highlight which recursive call we want to follow\ngiven that we are on the execution path.\nHence, consider a clean pair $(A,D)$\nthat is compatible with $\\mathsf{OPT}$ and assume we are\nwithin the execution of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D,\\gamma)$}.\nLet $(x,B)$ be an optimal solution to~\\ref{eq:lpRMFCAD}.\nNotice that $B \\leq B_\\mathsf{OPT} \\leq \\log L$, because\n$(A,D)$ is compatible with $\\mathsf{OPT}$.\nIf $\\mathsf{OPT}\\cap Q_x=\\emptyset$, then $(A,D,x)$ fulfills the\nconditions of Lemma~\\ref{lem:goodEnum} and we are done.\nHence, assume $\\mathsf{OPT}\\cap Q_x \\neq \\emptyset$, and\nlet $f \\in F_x$ be such that\n$\\mathsf{OPT}\\cap T_{f}\\cap V_{\\leq h}\\neq \\emptyset$.\nIf $f \\in \\mathsf{OPT}$, then consider the execution path\ncontinuing with the call of \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A\\cup \\{f\\},D,\\gamma-1)$}; otherwise,\nif $f\\not\\in \\mathsf{OPT}$, we focus on the call of\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(A,D\\cup \\{f\\},\\gamma-1)$}.\nNotice that compatibility with $\\mathsf{OPT}$ is maintained\nin both cases.\n\nTo show that the thus identified execution path of \n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}\nindeed leads to a triple satisfying the conditions\nof Lemma~\\ref{lem:goodEnum}, we measure progress\nas follows. For any clean pair $(A,D)$\ncompatible with $\\mathsf{OPT}$, we define\na potential function $\\Phi(A,D)\\in \\mathbb{Z}_{\\geq 0}$\nin the following way.\nFor each $u\\in \\mathsf{OPT}\\cap V_{\\leq h}$,\nlet $d_u\\in \\mathbb{Z}_{\\geq 0}$\nbe the distance of $u$ to the first vertex in\n$A\\cup D \\cup \\{r\\}$ when following the unique\n$u$-$r$ path. We define\n$\\Phi(A,D)= \\sum_{u\\in \\mathsf{OPT} \\cap V_{\\leq h}} d_u$.\nNotice that as long as we have a triple $(A,D,x)$\non our execution path that does\nnot satisfy the conditions of Lemma~\\ref{lem:goodEnum},\nthen the next triple $(A',D',x')$ on our execution\npath satisfies $\\Phi(A',D') < \\Phi(A,D)$.\nHence, either we will encounter a triple on our\nexecution path satisfying\nthe conditions of Lemma~\\ref{lem:goodEnum}\nwhile still having a strictly positive potential,\nor we will encounter a triple $(A,D,x)$ compatible\nwith $\\mathsf{OPT}$ and $\\Phi(A,D)=0$, which implies\n$\\mathsf{OPT}\\cap V_{\\leq h} = A$,\nand we thus correctly guessed all vertices of\n$\\mathsf{OPT}\\cap V_{\\leq h}$ implying that\nthe conditions of Lemma~\\ref{lem:goodEnum}\nare satisfied for the triple $(A,D,x)$.\nSince $\\Phi(A,D)\\geq 0$ for any compatible clean\npair $(A,D)$, this implies that a triple\nsatisfying the conditions of Lemma~\\ref{lem:goodEnum}\nwill be encountered if the recursion depth $\\bar{\\gamma}$\nis at least $\\Phi(\\emptyset,\\emptyset)$.\nTo evaluate $\\Phi(\\emptyset,\\emptyset)$ we have to compute\nthe sum of the distances of all\nvertices $u\\in \\mathsf{OPT}\\cap V_{\\leq h}$\nto the root. The distance of $u$ to the root is at\nmost $h$ since $u\\in V_{\\leq h}$. Moreover, \n$|\\mathsf{OPT} \\cap V_{\\leq h}| < 2^{h+1} B_{\\mathsf{OPT}}$\ndue to the budget constraints. 
Hence,\n\\begin{align*}\n\\Phi(\\emptyset, \\emptyset)\n &< h \\cdot 2^{h+1} \\cdot B_{\\mathsf{OPT}}\\\\\n &\\leq 2 \\log^{(2)} L \\cdot (\\log L)^2\n && \\text{($h=\\lfloor \\log^{(2)} L \\rfloor$ and $B_\\mathsf{OPT} \\leq \\log L$)}\\\\\n &= \\bar{\\gamma},\n\\end{align*}\nimplying that a triple fulfilling the conditions of\nLemma~\\ref{lem:goodEnum} is encountered by\n\\hyperlink{alg:enumRMFCtarget}%\n{$\\mathrm{Enum}(\\emptyset,\\emptyset,\\bar{\\gamma})$}.\n\n\n\\qed\n\n\n\n\n\n\n\n\n\n\\section*{Acknowledgements}\nWe are grateful to Noy Rotbart for many stimulating discussions\nand for bringing several relevant references to our attention.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCosmological observations have confirmed the big bang cosmology and determined the cosmological parameters precisely~\\cite{Ade:2015xua}. The matter contents of the Universe may be phenomenologically given by the standard model particles, the cosmological constant $\\Lambda$, and cold dark matter (CDM). However, the theoretical explanation of the origin of the extra ingredients, dark matter and dark energy, is still lacked. The theoretically expected value of the cosmological constant is too large to explain the present accelerating expansion. An alternative idea is that the acceleration is obtained by a potential of a scalar field instead of $\\Lambda$, and this idea is often called the quintessence model~\\cite{Caldwell:1997ii}. This scalar field could be originated from the gravity sector~\\cite{Fujii:2003pa}. A large class of scalar-tensor theories and $f(R)$ theories can be recast in the form of a theory of a canonical scalar field with a potential after the conformal transformation $\\tilde{g}_{\\mu\\nu}=A^2(\\phi) g_{\\mu\\nu}$ and the field redefinition $\\Phi=\\Phi(\\phi)$ where $\\Phi$ is the canonically normalized field. The metric $\\tilde{g}_{\\mu\\nu}$ is called the Jordan frame metric which the standard model particles are minimally coupled to whereas $g_{\\mu\\nu}$ is the Einstein frame metric in which the gravitational action is given by the Einstein-Hilbert action. In this case, the scalar field has the non-minimal coupling to the matter fields via the coupling function $A$.\n\n\nDark matter is also one of the biggest mystery of the modern cosmology. Although many dark matter candidates have been proposed in the context of the particle physics, any dark matter particles have not been discovered yet~\\cite{Agashe:2014kda,Ackermann:2015zua,Ahnen:2016qkx,Ackermann:2015lka,Khachatryan:2014rra,Conrad:2017pms}. The existence of dark matter is confirmed via only gravitational interactions. Hence, exploring dark matter candidate in the context of gravity is also a considerable approach. Not only dark energy but also dark matter could be explained by modifications of gravity. For instance, a natural extension of general relativity is a theory with a massive graviton (see \\cite{deRham:2014zqa} for a review). If a graviton obtains a mass, the massive graviton can be a dark matter candidate~\\cite{Dubovsky:2004ud,Pshirkov:2008nr,Aoki:2016zgp,Babichev:2016hir,Babichev:2016bxi,Aoki:2017cnz}.\n\n\nA viable dark matter scenario has to explain the present abundance of dark matter which usually leads to a constraint on a production scenario. However, a question arises: why are the energy densities of dark matter and baryon almost the same? 
If baryon and dark matter are produced by a common mechanism, almost the same abundance could be naturally obtained. On the other hand, if productions of the two are not related but independent, the coincidence might indicate that two energy densities are tuned to be the same order of the magnitude by a mechanism after the productions.\n\n\nIn the present paper, we shall combine two ideas of the modifications of gravity by using the proposal of \\cite{DeFelice:2017oym}: the non-minimal coupling of $\\phi$ and the existence of the massive graviton. We call this theory the chameleon bigravity theory which contains three types of gravitational degrees of freedom: the massless graviton, the massive graviton, and the chameleon field $\\phi$. We identify the massive graviton with dark matter. Since dark matter is originated from the gravity sector, the coupling between $\\phi$ and the dark matter may be given by a different way from the matter sector. We promote parameters in the graviton mass terms to functions of $\\phi$~\\cite{D'Amico:2012zv,Huang:2012pe}, giving rise to a new type of coupling between $\\phi$ and dark matter. In this case, as discussed in \\cite{DeFelice:2017oym}, the field value of $\\phi$ depends on the environment due to the non-minimal coupling as with the chameleon field \\cite{Khoury:2003aq,Khoury:2003rn}, which makes the graviton mass to depend on the environment.\n\n\nWe find that the ratio between energy densities of dark matter and baryon is dynamically adjusted to the observed value by the motion of $\\phi$ and then the ratio at the present is independent of the initial value. Hence, our model can explain the coincidence of the abundance of dark matter and baryon. Furthermore, if the potential of $\\phi$ is designed to be dark energy, the chameleon field $\\phi$ can give rise to the present acceleration of the universe. Both dark energy and dark matter are explained by the modifications of gravity in our model. \n\n\nThe paper is organized as follows. We introduce the chameleon bigravity theory in Sec.~\\ref{sec_bigravity}. In Sec.~\\ref{sec_Fri}, we show the Friedmann equation regarding the massive graviton is dark matter. We also point out the reason why the dark matter-baryon ratio can be naturally explained in the chameleon bigravity theory if we consider the massive graviton as dark matter. Some analytic solutions are given in Sec.~\\ref{sec_analytic} and numerical solutions are shown in Sec.~\\ref{sec_numerical}. These solutions reveal that the observed dark matter-baryon ratio is indeed dynamically obtained independently from the initial ratio. We summarize our results and give some remarks in Sec. \\ref{summary}. In Appendix, we detail the derivation of the Friedmann equation.\n\n\n\\section{Chameleon bigravity theory}\n\\label{sec_bigravity}\n\nWe consider the chameleon bigravity theory in which the mass of the massive graviton depends on the environment~\\cite{DeFelice:2017oym}. The action is given by\n\\begin{align}\nS&=\\int d^4x \\sqrt{-g} \\Biggl[ \\frac{M_g^2}{2}R[g]-\\frac{1}{2}\n K^2(\\phi) \ng^{\\mu\\nu}\\partial_{\\mu}\\phi\\partial_{\\nu}\\phi\n\\nn\n&\\qquad \\qquad \\qquad \\quad +M_g^2m^2 \\sum_{i=0}^4 \\beta_i (\\phi) U_i[s] \\Biggl]\n\\nn\n&+\\frac{M_f^2}{2}\\int d^4 x\\sqrt{-f}R[f]+S_{\\rm m}[\\tilde{g},\\psi]\\,, \\label{action}\n\\end{align}\nwhere $\\phi$ is the chameleon field and $S_{\\rm m}$ is the matter action. The functions $K(\\phi)$ and $\\beta_i(\\phi)$ are arbitrary functions of $\\phi$. 
The matter fields universally couple to the Jordan frame metric $\\tilde{g}_{\\mu\\nu}=A^2(\\phi)g_{\\mu\\nu}$ with a coupling function $A(\\phi)$. The potentials $U_i[s] \\, (i=0,\\cdots, 4)$ are the elementary symmetric polynomials of the eigenvalues of the matrix $s^{\\mu}{}_{\\nu}$ which is defined by the relation~\\cite{deRham:2010ik,deRham:2010kj,Hassan:2011zd}\n\\begin{align}\ns^{\\mu}{}_{\\alpha}s^{\\alpha}{}_{\\nu}=g^{\\mu\\alpha}f_{\\alpha\\nu}\\,.\n\\end{align}\nThe potential of $\\phi$ is not added explicitly since the couplings between $\\phi$ and the potentials $U_i$ yield the potential of $\\phi$ and thus an additional potential is redundant.\n\nNote that the field $\\phi$ is not a canonically normalized field. The canonical field $\\Phi$ is given by the relation\n\\begin{align}\nd \\Phi = K(\\phi) d\\phi \\,,\n\\end{align}\nby which the function $K$ does not appear explicitly in the action when we write down the theory in terms of $\\Phi$. Since $\\beta_i$ and $A$ are arbitrary functions, we can set $K=1$ by the redefinitions of $\\beta_i$ and $A$ without loss of generality. Nevertheless, we shall retain $K$ and discuss the general form of the action \\eqref{action}.\n\nIn general, the functions $\\beta_i(\\phi)$ can be chosen independently. In the present paper, however, we consider the simplest model such that $\\beta_i(\\phi)=-c_i f(\\phi)$ where $c_i$ are constant while $f(\\phi)$ is a function of $\\phi$. As we will see in next section, the graviton mass and the potential of $\\phi$ around the cosmological background are given by\n\\begin{align}\nm_T^2(\\phi)&:=\\frac{1+\\kappa}{\\kappa}m^2 f(\\phi)(c_1+2c_2+c_3)\n\\,, \\\\\nV_0(\\phi)&:=m^2M_p^2 f(\\phi)(c_0+3c_1+3c_2+c_3)\n\\,, \\label{bare_potential}\n\\end{align}\nwith $\\kappa=M_f^2\/M_g^2$ and $M_p^2=M_g^2+M_f^2$. In this case, both the potential form of $\\phi$ and the $\\phi$-dependence of the graviton mass are determined by $f(\\phi)$ only.\\footnote{Since we have absorbed the potential of $\\phi$ in the mass term of the graviton, $m_T^2M_p$ and $V_0$ seem to be a same order of magnitude. However, $m_T^2M_p^2$ and $V_0$ are not necessary to be the same order because they represent different physical quantities. Indeed, we will assume $V_0 \\ll m_T^2 M_p^2$.} Note that $V_0$ is the bare potential of $\\phi$. The effective potential of $\\phi$ is given by not only $V_0$ but also the amplitude of the massive graviton as well as the energy density of matter due to the non-minimal couplings (see Eq.~\\eqref{effective_potential}).\n\n\n\n\n\n\\section{Basic equations}\n\\label{sec_Fri}\nIn this section, we derive the basic equations to discuss the cosmological dynamics in the model \\eqref{action} supposing that the massive graviton is dark matter. We assume the coherent dark matter scenario in which dark matter is obtained from the coherent oscillation of the zero momentum mode massive gravitons~\\cite{Aoki:2017cnz}. Since the zero momentum mode of the graviton corresponds to the anisotropy of the spacetime, we study the Bianchi type I universe instead of the Friedmann-Lema{\\^i}tre-Robertson-Walker (FLRW) universe. The ansatz of the spacetime metrics are\n\\begin{align}\nds_g^2&=-dt^2 +a^2[ e^{4\\sigma_g} dx^2+e^{-2\\sigma_g}(dy^2+dz^2)]\\,, \\label{Bianchi_g} \\\\\nds_f^2&=\\xi^2\\left[ -c^2 dt^2+a^2\\{ e^{4\\sigma_f} dx^2+e^{-2\\sigma_f}(dy^2+dz^2)\\} \\right] \\,, \\label{Bianchi_f}\n\\end{align}\nwhere $\\{a,\\xi,c,\\sigma_g,\\sigma_f\\}$ are functions of the time $t$. 
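\n\nFor this diagonal ansatz the matrix $s^{\\mu}{}_{\\nu}$ is itself diagonal, with eigenvalues $(\\xi c,\\ \\xi e^{2(\\sigma_f-\\sigma_g)},\\ \\xi e^{-(\\sigma_f-\\sigma_g)},\\ \\xi e^{-(\\sigma_f-\\sigma_g)})$, so that the potentials $U_i[s]$ are the elementary symmetric polynomials of these four numbers. A minimal numerical sketch (the parameter values below are purely illustrative):\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import combinations\n\ndef elementary_symmetric(eigs):\n    # U_0, ..., U_4 of the four eigenvalues of s (U_0 = 1 by convention)\n    return [1.0] + [sum(np.prod(c) for c in combinations(eigs, k))\n                    for k in range(1, 5)]\n\nxi, c, sig_g, sig_f = 0.8, 1.0, 1.0e-2, 0.5e-2   # illustrative values\nd    = sig_f - sig_g\neigs = [xi*c, xi*np.exp(2.0*d), xi*np.exp(-d), xi*np.exp(-d)]\nprint(elementary_symmetric(eigs))                # [U_0, U_1, U_2, U_3, U_4]\n\\end{verbatim}\n\n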
We assume the matter field is a perfect fluid whose energy-momentum tensor is given by\n\\begin{align}\nT^{\\mu}{}_{\\nu}=A^4(\\phi)\\times {\\rm diag}[-\\rho(t),P(t),P(t),P(t)]\\,, \\label{Tmunu}\n\\end{align}\nwhere $\\rho$ and $P$ are the energy density and the pressure in the Jordan frame, respectively. The conservation law of the matter field is\n\\begin{align}\n\\dot{\\rho}+3\\frac{(Aa)^{\\cdot}}{Aa}(\\rho+P)=0\\,, \\label{conservation}\n\\end{align}\nwhere a dot is the derivative with respect to $t$.\n\nAs shown in \\cite{Maeda:2013bha,Aoki:2017cnz}, the small anisotropies $\\sigma_g$ and $\\sigma_f$ can be a dark matter component of the universe in the bimetric model without the chameleon field $\\phi$. We generalize their calculations to those in the present model \\eqref{action}. All equations under the ansatz \\eqref{Bianchi_g} and \\eqref{Bianchi_f} are summarized in Appendix. Here, we only show the Friedmann equation and the equations of motion of the massive graviton and the chameleon field because other equations are not important for the following discussion.\n\n\nWe assume the graviton mass $m_T$ is larger than the Hubble expansion rate $H:=\\dot{a}\/a$. After expanding the equations in terms of anisotropies and a small parameter $\\epsilon:=H\/m_T$, the Friedmann equation is given by\n\\begin{align}\n3M_p^2H^2= \\rho A^4+\\frac{1}{2}\n K^2 \n\\dot{\\phi}^2+V_0+\\frac{1}{2}\\dot{\\varphi}^2+\\frac{1}{2}m_T^2\\varphi^2 \\,, \\label{Fri_no_h}\n\\end{align}\nwhere $\\varphi$ is the massive graviton which is given by a combination of the anisotropies $\\sigma_g$ and $\\sigma_f$ (see Eqs.~\\eqref{graviton1} and \\eqref{graviton2}).\nThe equations of motion of the massive graviton $\\varphi(t)$ and the chameleon field $\\phi(t)$ are \n\\begin{align}\n\\ddot{\\varphi}+3H \\dot{\\varphi}+m_T^2(\\phi) \\varphi &=0 \\,, \\label{eq_varphi}\\\\\n K \\left( \\ddot{\\phi}+3H\\dot{\\phi} \\right)\n + \\dot{K} \\dot{\\phi} + \\frac{\\partial V_{\\rm eff}}{ \\partial \\phi}&=0\\,, \\label{eq_cham}\n\\end{align}\nwhere the effective potential of the chameleon field is given by\n\\begin{align}\nV_{\\rm eff}:= V_0(\\phi)+\\frac{1}{2}m_T^2(\\phi)\\varphi^2 + \\frac{1}{4}A^4(\\phi) (\\rho-3p)\n\\,. \\label{effective_potential}\n\\end{align}\nNote that, although the bigravity theory contains the degree of freedom of the massless graviton (see Eq.~\\eqref{Friedmann_eq}), we neglect the contribution to the Friedmann equation from the massless graviton because the energy density of the massless graviton decreases faster than those of other fields. The effect of the massless graviton is not important for our discussions.\n\n\nWe notice that the basic equations \\eqref{Fri_no_h}, \\eqref{eq_varphi} and \\eqref{eq_cham} are exactly the same as the equations in the theory with two scalar fields given by the action\n\\begin{align}\nS=\\int d^4x \\sqrt{-g} \\Biggl[ &\\frac{M_p^2}{2} R[g]\n-\\frac{1}{2}K^2(\\phi) (\\partial \\phi)^2 -V_0(\\phi) \\nn\n&-\\frac{1}{2} (\\partial \\varphi)^2 -\\frac{1}{2}m_T^2(\\phi) \\varphi^2 \\Biggl]+S_m [\\tilde{g},\\psi]\n\\,. \\label{another_action}\n\\end{align}\nThe cosmological dynamics in \\eqref{action} with $H \\gg m_T$ can be reduced into that in \\eqref{another_action}. Our results obtained below can be straightforward generalized even in the case of \\eqref{another_action} up to the discussion about the cosmological dynamics. The action \\eqref{another_action} gives a toy model of the chameleon bigravity theory. 
However, the equivalence between \\eqref{action} and \\eqref{another_action} holds only for the background dynamics of the universe in $H\\gg m_T$. The equivalence between the two actions does not hold for small-scale perturbations around the cosmological background~\\cite{Aoki:2017cnz}. \n\n\nWe first consider a solution $\\phi=\\phi_{\\rm min}=$ constant which is realized when\n\\begin{align}\n\\frac{\\partial V_{\\rm eff}}{\\partial \\phi}\n=\\alpha_f \\left[ V_0 +\\frac{1}{2}m_T^2 \\varphi^2 \\right] +\\alpha_A (\\rho-3P) A^4=0\n\\,,\n\\label{phi=const}\n\\end{align}\nwhere\n\\begin{align}\n\\alpha_A :=\\frac{1}{K}\\frac{ d \\ln A}{d \\phi}=\\frac{ d \\ln A}{d \\Phi}\\,, \\quad\n\\alpha_f:=\\frac{1}{K}\\frac{d \\ln f}{d \\phi}=\\frac{ d \\ln f}{d \\Phi} \\,.\n\\end{align}\nThe equation \\eqref{phi=const} is not always compatible with $\\phi=$ constant since each term in \\eqref{phi=const} has different time dependence in general. Nonetheless, as we shall see below, they can be compatible with each other if $\\epsilon \\ll 1 $. In other words, a common constant value of $\\phi_{\\rm min}$ can be a solution all the way from the radiation dominant (RD) epoch to the matter dominant (MD) epoch of the universe. When the chameleon field is constant, the bare potential $V_0$ acts as a cosmological constant which has to be subdominant in the RD and the MD eras. The constant $\\phi$ implies that the graviton mass does not vary and thus we obtain\n\\begin{align}\n\\langle \\dot{\\varphi}^2 \\rangle_T= \\langle m_T^2 \\varphi^2 \\rangle_T \\propto a^{-3}\n\\,,\\label{eqn:scaling-mTconst}\n\\end{align}\nwhere $\\langle\\cdots \\rangle_T$ represents the time average over an oscillation period. The massive gravitons behave like a dark matter component of the universe. When we focus on the time scales much longer than $m_T^{-1}$, $m_T^2 \\varphi^2$ in Eq.~\\eqref{phi=const} can be replaced with $\\langle m_T^2 \\varphi^2 \\rangle_T$, which scales as \\eqref{eqn:scaling-mTconst}. Since $\\rho-3P$ also scales as $\\propto a^{-3}$ in the RD and the MD, the decaying laws of $\\rho-3P$ and $m_T^2 \\varphi^2$ in \\eqref{phi=const} are the same in this case. Hence, when the oscillation timescale of the massive graviton is much shorter than the timescale of the cosmic expansion, i.e., $\\epsilon \\ll 1 $, $\\phi=$ constant can be a solution all the way from the RD to the MD. The value of $\\phi_{\\rm min}$ is determined by simply solving Eq.~\\eqref{phi=const}.\n\n\nSupposing that the massive graviton is the dominant component of dark matter, Eq.~\\eqref{phi=const} in the RD and MD eras is replaced with\n\\begin{align}\n\\left( \\alpha_f \\rho_G+2\\alpha_A \\rho_{b} \\right) A^4=0\\,,\n\\end{align}\nwhere $\\rho_b$ is the baryon energy density and we have ignored $V_0$. The energy density of massive graviton in the Jordan frame is defined by\n\\begin{align}\n\\rho_G:=\\frac{1}{2}A^{-4}\\langle \\dot{\\varphi}^2+m_T^2\\varphi^2 \\rangle_T=A^{-4}m_T^2\\langle \\varphi^2 \\rangle_T\\,,\n\\end{align}\nwhich depends on the chameleon $\\phi$. Therefore, if $\\alpha_A$ and $\\alpha_f$ are assumed to be $\\alpha_A\/\\alpha_f \\simeq -5\/2$, the ratio between dark matter and baryon is automatically tuned to be the observational value. The dark matter-baryon ratio could be naturally explained without any fine-tuning of the productions of dark matter and baryon.\n\n\n\nNeedless to say, the initial value of $\\phi$ must not be at the bottom of the effective potential $(\\phi=\\phi_{\\rm min})$. 
We shall study the dynamics of $\\phi$ and discuss whether $\\phi$ approaches $\\phi_{\\rm min}$ before the MD era of the universe. \nAlthough we do not assume $\\phi$ is constant, we assume $\\phi$ does not rapidly move so that the graviton mass varies adiabatically\n\\begin{align}\n\\frac{\\dot{m}_T}{m_T^2} \\ll 1 \\,.\n\\label{adiabatic_condition}\n\\end{align}\nUnder the adiabatic condition \\eqref{adiabatic_condition} we can take the adiabatic expansion for the massive graviton:\n\\begin{align}\n\\varphi=u(t)\\cos\\left[ \\int m_T[\\phi(t)] dt \\right]+\\cdots\\,,\n\\end{align}\nwith a slowly varying function $u(t)$. The adiabatic condition \\eqref{adiabatic_condition} is indeed viable for $\\epsilon \\ll 1$ since we will see the time dependence of $m_T$ is given by a power law of $a$ (see Eq.~\\eqref{time_dep_m} for example). The time average over an oscillation period yields $\\langle \\dot{\\varphi}^2 \\rangle_T=\\langle m_T^2 \\varphi^2 \\rangle_T=m_T^2 u^2\/2$. \n\n\nAfter taking the time average over an oscillation period under the adiabatic condition, the equations are reduced into\n\\begin{align}\n3M_p^2H^2= A^4 \\rho_r +A^4 \\rho_b +\\frac{1}{2} K^2 \\dot{\\phi}^2 +V_0+ \\frac{1}{2}m_T^2 u^2 \\,, \\label{Fri}\n\\end{align}\nand\n\\begin{align}\n K \\left( \\ddot{\\phi}+3H \\dot{\\phi} \\right)\n +\\dot{K} \\dot{\\phi} +\\alpha_f V_0 \n& \\nn\n+\\frac{1}{4}\\alpha_f m_T^2 u^2+\\alpha_A A^4\\rho_b&=0\n\\,, \\label{eq_phi} \\\\\n4\\dot{u}+6H u+ \\alpha_f u K\\dot{\\phi}&=0\n\\,, \\label{eq_u}\n\\end{align} \nwhere $\\rho_r$ and $\\rho_b$ are the energy densities of radiation and baryon which decrease as $\\rho_r\\propto (aA)^{-4}$ and $\\rho_b \\propto (aA)^{-3}$ because of the conservation equation. The dynamics of the scale factor $a$, the chameleon field $\\phi$, and the amplitude of the massive graviton $u$ are determined by solving these three equations.\n\n\nBy using the density parameters, the Friedmann equation is rewritten as\n\\begin{align}\n1=\\Omega_r+\\Omega_b+\\Omega_{\\phi}+\\Omega_G \\,,\n\\end{align}\nwith\n\\begin{align}\n\\Omega_r&:=\\frac{A^4\\rho_r}{3M_p^2H^2} \n\\,, \\\\\n\\Omega_b&:=\\frac{A^4\\rho_b}{3M_p^2H^2} \n\\,, \\\\\n\\Omega_{\\phi}&:=\\frac{\\dot{\\phi}^2+2V_0}{6M_p^2H^2}\n\\,, \\\\\n\\Omega_G&:=\\frac{m_T^2 u^2}{6M_p^2H^2}\\,.\n\\end{align}\nWe also introduce the total equation of state parameter in the Einstein frame\n\\begin{align}\nw_E:=-1-\\frac{2\\dot{H}}{3H^2}\\,.\n\\end{align}\n\nThe above quantities are defined in the Einstein frame. Since the matter fields minimally couple with the Jordan frame metric, the observable universe is expressed by the Jordan frame metric. Hence, we also define the Hubble expansion rate and the effective equation of state parameter in the Jordan frame as\n\\begin{align}\nH_J&:=\\frac{(Aa)^{\\cdot}}{A^2a} \\,, \\\\\nw_{\\rm tot}&:=-1-\\frac{2\\dot{H}_J}{3AH_J^2}\\,.\n\\end{align}\n\n\n\n\n\n\n\n\n\n\\section{Analytic solutions}\n\\label{sec_analytic}\nIn this section we show some analytic solutions under the simplest case\n\\begin{align}\nK=1\\,, \\quad A=e^{\\beta\\phi\/M_p}\\,, \\quad\nf=e^{-\\lambda \\phi\/M_p}\\,, \\label{model_A}\n\\end{align}\nwith the dimensionless constants $\\beta$ and $\\lambda$. This model yields that the coupling strengths $\\alpha_A$ and $\\alpha_f$ are constant.\nWe consider four stages of the universe: the radiation dominant era, around the radiation-matter equality, the matter dominant era, and the accelerate expanding era. 
The analytic solutions are found in each stages of the universe as follows.\n\n\\subsection{Radiation dominant era}\nWe first consider the regime when the contributions to the Friedmann equation from baryon and dark matter are subdominant, that is, $\\Omega_b, \\Omega_G \\ll 1$. The Hubble expansion rate is then determined by the energy densities of radiation and $\\phi$. Since the effective potential of $\\phi$ are determined by the energy densities of baryon and dark matter, in this situation the potential force can be ignored compared with the Hubble friction term ($V_0$ is assumed to be always ignored during both radiation and matter dominations).\nThen, we obtain\n\\begin{align}\n\\dot{\\phi}\\propto a^{-3}\n\\,,\n\\end{align}\nwhich indicates that the field $\\phi$ loses its velocity due to the Hubble friction and then $\\phi$ becomes a constant $\\phi_i$. We can ignore $\\Omega_{\\phi}$ and then find the standard RD universe. At some fixed time deep in the radiation dominant era, we therefore set $\\phi=\\phi_i$ as the initial condition of $\\phi$. We shall then denote the initial values of the energy densities of baryon and the massive graviton as $\\rho_{b,i}$ and $\\rho_{G,i}$, respectively. \n\n\nNote that this constant initial value of $\\phi$ is not necessary to coincide with the potential minimum $\\phi=\\phi_{\\rm min}$, i.e., $\\phi_i \\neq \\phi_{\\rm min}$. The ratio $\\rho_{G,i}\/\\rho_{b,i}$ is not tuned to be five at this stage.\n\n\n\n\\subsection{Following-up era}\nWe then discuss the era just before radiation-matter equality in which we cannot ignore the potential force for $\\phi$. As discussed in the previous subsection, we find $\\phi=\\phi_i$ in the RD universe. When the potential force for $\\phi$ becomes relevant, the chameleon field $\\phi$ starts to evolve into the potential minimum $\\phi=\\phi_{\\rm min}$. Due to the motion of $\\phi$, the smaller one of $\\rho_G$ and $\\rho_b$ follows up the larger one. We obtain $\\rho_G\/\\rho_b=-2\\alpha_A\/\\alpha_f$ when the chameleon field reaches the minimum $\\phi_{\\rm min}$. We call this era of the universe the following-up era.\n\nIf the initial value $\\phi_i$ is close to the potential minimum $\\phi_{\\rm min}$, the dark matter-baryon ratio is already tuned to be almost the value $-2\\alpha_A\/\\alpha_f$, which we set to $\\sim 5$, and thus we do not need to discuss this case. We therefore study the case with $\\phi_i < \\phi_{\\rm min}$ and the case with $\\phi_i > \\phi_{\\rm min}$ (which correspond to $\\rho_{G,i} \\gg \\rho_{b,i}$ and $\\rho_{b,i} \\gg \\rho_{G,i}$, respectively). We shall discuss them in order.\n\n\n\n\\subsubsection{$\\rho_{G,i}\\gg \\rho_{b,i}$ before the equal time}\n\\label{DM>>b}\nIf dark matter (i.e., massive gravitons) is over-produced, the equations are reduced to\n\\begin{align}\n\\ddot{\\phi}+3H \\dot{\\phi}-\\frac{\\lambda}{4M_p} m_T^2 u^2 =0\\,,\n\\\\\n3M_p^2 H^2 =A^4 \\rho_r +\\frac{1}{2}\\dot{\\phi}^2+3m_T^2 u^2\n\\,,\n\\end{align}\nand \\eqref{eq_u}, where we have ignored the contributions from baryon. 
The system admits a scaling solution\n\\begin{align}\n\\phi&=\\frac{M_p}{\\lambda} \\ln t +{\\rm constant}\\,,\n\\nn\nu&\\propto t^{-1\/2}\n\\,,\n\\nn\na&\\propto t^{1\/2}\n\\,, \\label{scaling_DM}\n\\end{align}\nwhere the density parameters in the Einstein frame are given by\n\\begin{align}\n\\Omega_G=\\frac{4}{3\\lambda^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{2}{3 \\lambda^2}\n\\,, \\quad\n\\Omega_r=1-\\frac{2}{\\lambda^2}\n\\,.\n\\end{align}\nThe effective equation of state parameter in the Jordan frame is given by\n\\begin{align}\nw_{\\rm tot}=\\frac{\\lambda-2\\beta}{3(\\lambda+2\\beta)} \\,,\n\\end{align}\nand then $w_{\\rm tot} =-2\/9$ if $2\\beta=5\\lambda$. This solution exists only when $\\lambda^2>2$ since the density parameter has to be $0<\\Omega_r<1$. \n\n\nFor this scaling solution, the graviton mass decreases as\n\\begin{align}\nm_T^2 \\propto a^{-2}\\,, \\label{time_dep_m}\n\\end{align}\nwhich guarantees the adiabatic condition \\eqref{adiabatic_condition} when $\\epsilon \\ll 1$.\nThe energy density of massive gravitons in the Einstein frame decreases as\n\\begin{align}\nA^4 \\rho_G =\\frac{1}{2}m_T^2 u^2 \\propto a^{-4}\n\\,.\n\\end{align}\nOn the other hand, the energy density of baryon in the Einstein frame ``increases'' as\n\\begin{align}\nA^4 \\rho_b \\propto A a^{-3} \\propto a^{-3+2\\frac{\\beta}{\\lambda}}\n\\,,\n\\end{align}\n(For example, we obtain $A^4 \\rho_b \\propto a^2$ when $2\\beta = 5 \\lambda$). Therefore, even if baryon is negligible at initial, the baryon energy density grows and then it cannot be ignored when the energy density of baryon becomes comparable to that of dark matter.\n\nNote that the Jordan frame energy density of baryon, $\\rho_b$, always decays as $a_J^{-3}$ where $a_J=Aa$ is the scale factor of the Jordan frame metric. The quantity $A^4 \\rho_b$ is the energy density in the Einstein frame.\n\nIn the Einstein frame, the interpretation of the peculiar behavior of $A^4\\rho_G$ and $A^4\\rho_b$ is that the energy density of massive gravitons is converted to that of baryon through the motion of the chameleon field $\\phi$. Although we have considered the non-relativistic massive gravitons, the energy density of that in the Einstein frame behaves as radiation which implies that the field $\\phi$ removes the energy of massive gravitons (indeed, the graviton mass decreases due to the motion of $\\phi$). The removed energy is transferred into baryon via the non-minimal coupling.\n\nDuring the scaling solution, the massive graviton never dominates over radiation because both energy densities of the massive graviton and radiation obey the same decaying law $A^4\\rho_r, A^4\\rho_G \\propto a^4$. Hence, the field $\\phi$ can reach the bottom of the effective potential before the MD era. After reaching the bottom of the effective potential, the standard decaying laws for matters $A^4\\rho_r \\propto a^{-4}$ and $A^4\\rho_G, A^4 \\rho_b \\propto a^{-3}$ are recovered, then the usual dynamics of the universe is obtained with the observed dark matter-baryon ratio. \n\nWe note that the following-up of the baryon energy density can be realized even if the scaling solution does not exist $(\\lambda^2<2)$. 
The dynamics of this case is numerically studied in Sec.~\\ref{sec_numerical}.\n\n\n\n\\subsubsection{$\\rho_{b,i}\\gg \\rho_{G,i}$ before the equal time}\n\\label{b>>DM}\nIn this case, the equations for the scale factor and $\\phi$ form a closed system given by\n\\begin{align}\n\\ddot{\\phi}+3H\\dot{\\phi}+\\frac{\\beta}{M_p} A^4 \\rho_b=0\\,,\n\\\\\n3M_p^2 H^2=A^4 \\rho_r+ A^4 \\rho_b+\\frac{1}{2}\\dot{\\phi}^2\n\\,. \n\\end{align}\nThe scaling solution is then found as\n\\begin{align}\n\\phi&=-\\frac{M_p}{2\\beta} \\ln t +{\\rm constant} \\,,\n\\nn\na &\\propto t^{1\/2}\n\\,,\n\\end{align}\nin which the density parameters are\n\\begin{align}\n\\Omega_b=\\frac{1}{3 \\beta^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{1}{6\\beta^2}\n\\,, \\quad\n\\Omega_r=1-\\frac{1}{2\\beta^2}\n\\,,\n\\end{align}\nwhere $\\beta$ has to satisfy $\\beta^2>1\/2$.\n\nDuring this scaling solution, the universe does not expand in the Jordan frame. Although the Einstein frame scale factor expands as the RD universe, $a\\propto t^{1\/2}$, the Jordan frame scale factor is given by\n\\begin{align}\na_J=aA={\\rm constant}\n\\,.\n\\end{align}\n\nThe solution for $u$ is found by substituting the scaling solution into \\eqref{eq_u}. We obtain\n\\begin{align}\nu\\propto a^{-\\frac{3}{2}+\\frac{\\lambda}{4\\beta}}\n\\,,\\quad\nm_T^2 \\propto a^{\\lambda\/\\beta}\n\\,,\n\\end{align}\nand then the energy density of massive graviton varies as\n\\begin{align}\nA^4 \\rho_G \\propto a^{-3+\\lambda\/2\\beta}\\,.\n\\end{align}\nThe adiabatic condition \\eqref{adiabatic_condition} is guaranteed when $\\epsilon \\ll 1$.\nWhen $2\\beta \\simeq 5 \\lambda$, the graviton mass roughly increases as $m_T^2 \\propto a^{2\/5}$ and the energy density of massive gravitons in the Einstein frame decreases as $A^4\\rho_G \\propto a^{-14\/5}$. Therefore, even if the energy density of massive gravitons is significantly lower than that of baryon, the correct dark matter-baryon ratio is realized in time since the energy density of massive gravitons decreases slower than that of baryon. \n\n\n\n\\subsection{Matter dominant era}\nAfter $\\phi$ reaches the potential minimum $\\phi_{\\rm min}$, the chameleon field $\\phi$ does not move during the MD universe. As shown in \\cite{Aoki:2017cnz}, when $\\phi$ is constant, the massive graviton behaves as CDM and then the standard MD universe is obtained.\n\n\\subsection{Accelerating expanding era}\n\\label{sec_acc}\nAfter the MD era, the universe must show the accelerating expansion due to dark energy. Although one can introduce a new field to obtain the acceleration, we consider a minimal scenario such that the chameleon field itself is dark energy, i.e., the accelerating expansion is realized by the potential $V_0$. When $V_0$ becomes relevant to the dynamics of $\\phi$, the chameleon field again rolls down which leads to a decreasing of $m_T$. As a result, the energy density of massive gravitons rapidly decreases and then we can ignore the contributions from massive gravitons. 
The basic equation during the accelerating expansion is thus given by\n\\begin{align}\n3M_p^2 H^2=A^4 \\rho_b+\\frac{1}{2}\\dot{\\phi}^2 +V_0 \n\\,, \\label{Fri_DE} \\\\\n\\ddot{\\phi}+3H\\dot{\\phi}-\\frac{\\lambda}{M_p}V_0+\\frac{\\beta}{M_p}A^4 \\rho_b=0\n\\,, \\label{eq_phi_DE}\n\\end{align}\nwhich yield a scaling solution\n\\begin{align}\n\\phi&=\\frac{2M_p}{\\lambda} \\ln t +{\\rm constant}\n\\,, \\nn\na&\\propto t^{\\frac{2}{3}(1+\\beta\/\\lambda)} \\,,\n\\end{align}\nin which\n\\begin{align}\n\\Omega_b=\\frac{\\lambda^2+\\beta \\lambda -3}{(\\beta+\\lambda)^2}\n\\,, \\quad\n\\Omega_{\\phi}=\\frac{\\beta^2+\\beta \\lambda +3}{(\\beta+\\lambda)^2}\n\\,,\n\\end{align}\nand\n\\begin{align}\nw_{\\rm tot}=-\\frac{2\\beta}{4\\beta+\\lambda}\n\\,.\n\\end{align}\nThe scaling solution exists when\n\\begin{align}\n\\lambda(\\beta+\\lambda)>3\n\\,. \\label{inequality_DE}\n\\end{align}\nFor $2\\beta=5\\lambda$, we find $w_{\\rm tot}=-5\/11$ and the inequality \\eqref{inequality_DE} is reduced into $\\lambda^2 >6\/7$.\n\nThe amplitude of the massive graviton is given by\n\\begin{align}\nu \\propto t^{-\\frac{1}{2}(1+2\\beta\/\\lambda)}\n\\,,\n\\end{align}\nand then the density parameter of massive graviton decreases as\n\\begin{align}\n\\Omega_G\\propto t^{-1-2\\beta\/\\lambda}\n\\,.\n\\end{align}\nThe energy density of massive graviton gives just a negligible contribution during this scaling solution which guarantees the equations \\eqref{Fri_DE} and \\eqref{eq_phi_DE}. \n\n\nOn the other hand, when $\\lambda^2<6\/7$, the non-minimal coupling is small so that the field $\\phi$ can be approximated as a standard quintessence field. As a result, the acceleration is obtained by the slow-roll of $\\phi$ and then the dark energy dominant universe is realized.\n\n\n\n\n\n\n\n\n\n\\section{Cosmic evolutions}\n\\label{sec_numerical}\nIn this section, we numerically solve the equations \\eqref{Fri}-\\eqref{eq_u}. We discuss two cases, the over-produced case ($\\rho_{G,i} \\gg \\rho_{b,i}$) and the less-produced case ($\\rho_{G,i}\\ll \\rho_{b,i}$), in order.\n\n\n\\subsection{Over-produced case}\nFirst, we consider the over-produced case $\\rho_{G,i} \\gg \\rho_{b,i}$. We assume \\eqref{model_A} which we call Model A. A cosmological dynamics is shown in Fig.~\\ref{fig_modelA}. We set $\\rho_{G,i}\/\\rho_{b,i}=\\Omega_{G,i}\/\\Omega_{b,i}=200$ at the initial of the numerical calculation. Although dark matter is initially over-produced, the energy density of baryon follows up that of dark matter and then we obtain $\\rho_G\/\\rho_b\\simeq 5$ when $a_J=Aa\\sim 10^{-4}$ where we normalize the Jordan frame scale factor $a_J$ so that $\\Omega_{\\phi}|_{a_J=1}=0.7$. We note the following-up of $\\rho_b$ is obtained even if $\\lambda^2>2$ is not satisfied (In Fig.~\\ref{fig_modelA}, we set $\\lambda^2=(6\/5)^2<2$).\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=7cm,angle=0,clip]{Model_A.eps}\n\\caption{The evolution of the density parameters and the total equation of state parameters in terms of the Jordan frame scale factor $a_J=Aa$ which is normalized to be $\\Omega_{\\phi}|_{a_J=1}=0.7$. We set $\\beta=3$ and $\\lambda=2\\beta\/5=6\/5$ in Model A \\eqref{model_A}. 
We assume the initial ratio between dark matter and baryon as $\\rho_{G,i}\/\\rho_{b,i}=\\Omega_{G,i}\/\\Omega_{b,i}=200$ with $\\phi_i=0$.\n}\n\\label{fig_modelA}\n\\end{figure}\n\n\n\\begin{figure}[tbp]\n\\centering\n\\includegraphics[width=7cm,angle=0,clip]{phi.eps}\n\\caption{The evolution of the chameleon field $\\phi$ in Model A and Model B with $\\rho_{G,i}\/\\rho_{b,i}=200$.\n}\n\\label{fig_phi}\n\\end{figure}\n\nThe dynamics of the universe is precisely tested by the CMB observations after the decoupling time $a_J \\simeq 10^{-3}$. The evolutions of the total equation of state parameters are shown in Fig.~\\ref{fig_modelA}. The dynamics of the observable universe is represented by the Jordan frame quantity $w_{\\rm tot}$ because the visible matters couple with the Jordan frame metric. On the other hand, since the dark matter (i.e., massive gravitons) is originated from the the gravity sector, dark matter feels the dynamics of the Einstein frame whose equation of state parameter is denoted by $w_E$. Although the large deviation of dynamics from the standard cosmological one appears before the decoupling time $a_J \\lesssim 10^{-3}$, the standard dust dominant universe is recovered around the decoupling time.\n\n\nWhen we increase the values of $\\beta$ and $\\lambda$, the deviation from the standard evolution is amplified which is caused by the oscillation of $\\phi$ around $\\phi_{\\rm min}$ as shown in Fig,~\\ref{fig_phi}. Since the Jordan frame scale factor is given by $a_J=Aa=a e^{\\beta\\phi\/M_p}$, the oscillation of $\\phi$ yields the oscillation of $a_J$ which is amplified by increasing of $\\beta$.\n\n\nFig.~\\ref{fig_modelA} does not show the dark energy ``dominant'' universe even in the accelerating phase. Instead, the acceleration is realized by the scaling solution as explained in Sec.~\\ref{sec_analytic}. If this scaling solution can pass the observational constraints, it might give an answer for the other coincidence problem of dark energy: why the present dark energy density is almost same as that of matter? However, the cosmological dynamics after the decoupling time is strongly constrained by the observations. Thus, the dark energy model with the scaling solution should have a severe constraint (see \\cite{Amendola:1999er,Amendola:2003eq} for examples). Furthermore, the large coupling $\\alpha_A \\gtrsim M_p^{-1}$ leads to that the Compton wavelength of the chameleon field has to be less than Mpc to screen the fifth force in the Solar System~\\cite{Wang:2012kj}; however, the coupling functions \\eqref{model_A} require the Gpc scale Compton wavelength to give the current accelerating expansion.\n\n\n\nWe then provide a model in which the couplings $\\alpha_A$ and $\\alpha_f$ are initially large but they become small in time. This behavior is realized by the model\n\\begin{align}\nK^2=(1-\\phi^2\/M^2)^{-1} , \\, A=e^{\\beta\\phi\/M_p} ,\\,\nf=e^{-\\lambda \\phi\/M_p}, \\label{model_B}\n\\end{align}\nwhich we call Model B. The only difference from Model A is that $K$ is a function of $\\phi$. If the amplitude of the field $\\phi$ is small at initial $(\\phi \\ll M)$, Model B gives the same behavior as Model A. After $\\phi$ starts to roll and then $|\\phi| \\rightarrow M$, the kinetic function $K$ increases which causes the decreasing of the non-minimal couplings $\\alpha_A,\\alpha_f \\rightarrow 0$ (see Figs.~\\ref{fig_phi} and \\ref{fig_modelB}). 
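For readers who want to reproduce the qualitative behaviour discussed in this section, the following is a minimal sketch (not the code used to produce the figures) that integrates the Einstein-frame equations quoted in Sec.~\ref{b>>DM} for the simplified case $K=1$, $A=e^{\beta\phi/M_p}$, keeping only radiation, baryons and the chameleon field and neglecting the massive-graviton and potential terms; all parameter values and initial conditions are illustrative.
\begin{verbatim}
# Minimal background sketch (illustrative only): Einstein-frame Friedmann and
# chameleon equations for the rho_b >> rho_G case, with K = 1 and
# A = exp(beta*phi/Mp).  Uses A^4 rho_r ~ a^-4 and A^4 rho_b ~ A a^-3.
import numpy as np
from scipy.integrate import solve_ivp

beta, Mp = 3.0, 1.0                  # illustrative coupling, Planck units
rho_r0, rho_b0 = 1.0, 1e-3           # illustrative densities at a = 1

def rhs(t, y):
    a, phi, dphi = y
    A = np.exp(beta * phi / Mp)
    rho_r = rho_r0 * a**-4           # A^4 rho_r
    rho_b = rho_b0 * A * a**-3       # A^4 rho_b
    H = np.sqrt((rho_r + rho_b + 0.5 * dphi**2) / (3.0 * Mp**2))
    ddphi = -3.0 * H * dphi - (beta / Mp) * rho_b
    return [a * H, dphi, ddphi]

sol = solve_ivp(rhs, (1.0, 1e8), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)

# On the scaling attractor the density parameters should approach
# Omega_b = 1/(3 beta^2), Omega_phi = 1/(6 beta^2), Omega_r = 1 - 1/(2 beta^2).
a, phi, dphi = sol.y[:, -1]
A = np.exp(beta * phi / Mp)
rho_r, rho_b = rho_r0 * a**-4, rho_b0 * A * a**-3
total = rho_r + rho_b + 0.5 * dphi**2
print(rho_b / total, 0.5 * dphi**2 / total, rho_r / total)
\end{verbatim}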
\nNote that the field value is restricted in the range $-M<\\phit_j,j\\neq k|H_k)\\mathcal{P}(H_k)\n\\end{eqnarray}\\par\nIn general, the priori probability of different hypotheses can be assumed as uniformly distributed ($\\mathcal{P}(H_i)=\\mathcal{P}(H_j), i \\neq j$), and the conditional probability in (\\ref{ErrorProb}) is assumed to be equal \\cite{KayV2} for symmetry, that is\n\\begin{eqnarray}\n\\mathcal{P}(t_1 > t_2|H_1) =\n \\mathcal{P}(t_2 > t_1|H_2)\n\\end{eqnarray}\nthen the error probability is\n\\begin{eqnarray}\n\\mathcal{P}_E &=& 1 - \\mathcal{P}(t_1 > t_2|H_1\n.\n\\end{eqnarray}\nThus the probability can be analyzed with respect to the conditional probability $\\mathcal{P}( t_1> t_2|H_1)$ under the $H_1$ hypothesis.\nThe statistics (\\ref{OMPDetct}) under the $H_1$ hypothesis have the following joint distributions\n\\begin{eqnarray}\n\n\\left[ t_1 \\quad t_2 \\right]^T\n\t\t& \\thicksim & \\mathcal{N}(\\bm \\mu_{1,2},\\bm \\Sigma_{1,2})\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\bm \\mu_{1,2}=\\left[ \\begin{array}{c}\n\t\t\t \\frac{1}{2}\\|\\bm{\\Phi s_1}\\|_2^2 \\\\\n\t\t\t \\langle\\bm{\\Phi s_2},\\bm{\\Phi s_1}\\rangle-\\frac{1}{2}\\|\\bm{\\Phi s_2}\\|_2^2\n\t\t\\end{array} \\right] \\nonumber\n\\end{eqnarray}\n\\begin{equation}\n\\bm \\Sigma_{1,2} =\\sigma^2\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\|\\bm{\\Phi^T\\Phi s_1}\\|_2^2 &\\langle \\bm{\\Phi^T\\Phi s_1},\\bm{\\Phi^T\\Phi s_2}\\rangle \\\\\n\t\t\t\t\t\t\t\t\t\\langle \\bm{\\Phi^1\\Phi s_2},\\bm{\\Phi^T\\Phi s_1}\\rangle &\\|\\bm{\\Phi^T\\Phi s_2}\\|_2^2\n\t\t\t\t\t\t\t\t\\end{array}\\right].\\nonumber\n\\end{equation}\nUsing the property of Gaussian distribution, the probability of false classification is then\n\\begin{eqnarray}\n\\mathcal{P}_E \n&=&Q(\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{2\\sigma\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2})\\nonumber\n\\end{eqnarray}\nand $Q(x)=\\int_{x}^{\\infty} {\\frac{1}{\\sqrt{2\\pi}}\\exp{(-\\frac{t^2}{2})}\\mathrm{d}t}$.\n\\end{proof}\n\\par As a matter of fact, if there is more than 2 hypotheses in the Compressive Classification problem (\\ref{CompressHypo}), the error probability of the Compressive Classifier (\\ref{OMPDetct}) and (\\ref{MatchFitler}) may not be so explicit as (\\ref{CompressDetect}) for the statistical correlation between different $t_i$'s in (\\ref{OMPDetct}). But similar techniques can be utilized and same results can be deduced, we will discuss these m-ary ($m>2$) hypotheses scenarios in the next section.\n\\par So in order to analyze the error probability (\\ref{CompressDetect}) of classifier (\\ref{OMPDetct}) without the constraint of row-orthogonality to measurement matrices and for all possible k-sparse signals $\\bm s_i,i=1,2$, we will have to focus on\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\label{MainTarget}\n\\end{equation}\nfor all k-sparse signals $\\bm{s_1,s_2}\\in\\Lambda_k=\\{\\bm{s} \\in\\mathbb{R}^N,\\|\\bm{s}\\|_0\\leq k\\}$. 
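For concreteness, this quantity can be probed numerically. The sketch below (illustrative dimensions, a Gaussian $\bm\Phi$, and noise $\bm n\sim\mathcal N(0,\sigma^2\bm I)$ as in the model above) compares the empirical error rate of the classifier (\ref{OMPDetct}) under $H_1$ with the closed form $Q\big(\frac{\|\bm{\Phi(s_1-s_2)}\|_2^2}{2\sigma\|\bm{\Phi^T\Phi(s_1-s_2)}\|_2}\big)$.
\begin{verbatim}
# Monte-Carlo check (illustrative sizes) of the two-hypothesis error probability
# P_E = Q(||Phi(s1-s2)||_2^2 / (2*sigma*||Phi^T Phi (s1-s2)||_2)) for the
# classifier t_i = <y, Phi s_i> - 0.5*||Phi s_i||_2^2,  y = Phi(s_1 + n).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, n, k, sigma = 128, 32, 4, 0.5                 # illustrative dimensions

Phi = rng.standard_normal((n, N)) / np.sqrt(n)   # generic (non-tight) matrix

def sparse_signal():
    s = np.zeros(N)
    s[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    return s

s1, s2 = sparse_signal(), sparse_signal()
diff = s1 - s2
analytic = norm.sf(np.linalg.norm(Phi @ diff)**2 /
                   (2 * sigma * np.linalg.norm(Phi.T @ (Phi @ diff))))  # Q = sf

trials, errors = 20000, 0
for _ in range(trials):
    y = Phi @ (s1 + sigma * rng.standard_normal(N))   # H_1 holds
    t1 = y @ (Phi @ s1) - 0.5 * np.linalg.norm(Phi @ s1)**2
    t2 = y @ (Phi @ s2) - 0.5 * np.linalg.norm(Phi @ s2)**2
    errors += int(t2 > t1)
print(errors / trials, analytic)
\end{verbatim}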
\\par\nIn the cases where measurement matrices satisfying row-orthogonality ($\\bm{\\Phi\\Phi^T}=\\bm I$), (\\ref{MainTarget}) is then reduced to\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} = \\|\\bm{\\Phi(s_1-s_2)}\\|_2.\n\\end{equation}\nAnd this is what Davenport \\cite{Davenport06detectionand}\\cite{SPComp} and Zahedi \\cite{Ramin2010}\\cite{Zahedi201264} analyzed in their publications.\n\n\\section{measurement Matrices and the Error Probability of Compressive Signal Classification}\n\\label{sec:main}\n\\par Although there has been plenty of works about the performance analysis of Compressive Classification, all these works have the same row-orthogonality presumption, but without a theoretical explanation. However, what we believe is that there exist other important reasons for the row-orthogonal condition to be necessary. Here is our main result of this paper:\n\\begin{Theorem}\nIn the Compressive Classification problem (\\ref{CompressHypo}), by tightening or row-orthogonalizing the measurement matrix $\\bm \\Phi \\in \\mathbb{R}^{n \\times N},n < N$, the error probability (\\ref{CompressDetect}) of the classifier (\\ref{OMPDetct}) will be reduced, which means\n\\begin{equation}\\label{MainIneq}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\leq\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n\\end{equation}\nwhere $\\bm \\Phi \\in \\mathbb{R}^{n \\times N}$ is the arbitrary measurement matrix, and $\\bm {\\hat \\Phi} \\in \\mathbb{R}^{n \\times N}$ is the equi-norm tight frame measurement matrix row-orthogonalized from $\\bm \\Phi$.\n\\end{Theorem}\n\\begin{proof}\nAccording to Section 2, the error probability (\\ref{CompressDetect}) is determined by the following expression (\\ref{MainTarget}):\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\nonumber\n\\end{equation}\nfor all k-sparse signals $\\bm s_1, \\bm s_2$, where $\\bm \\Phi$ is an arbitrary measurement matrix satisfying RIP. \\par\nAccording to the basic presumptions of $\\bm \\Phi$ in (\\ref{CompressHypo}), the arbitrary under-determined measurement matrix $\\bm \\Phi \\in \\mathbb{R}^{n \\times N}$, $n < N$ has full row rank, thus the singular value decomposition of $\\bm \\Phi$ is\n\\begin{equation}\\label{singular}\n\\bm \\Phi = \\bm{U \\left[ \\Sigma_n \\quad O\\right] V^T}\n\\end{equation}\nHere $\\bm \\Sigma_n \\in \\mathbb{R}^{n \\times n}$ is a diagonal matrix with each element $\\bm \\Phi$'s singular value $\\sigma_j \\neq 0$ $(1\\leq j \\leq n)$, and $\\bm U \\in \\mathbb{R}^{n \\times n}$, $\\bm V \\in \\mathbb{R}^{N \\times N}$ are orthogonal matrices composed of $\\bm \\Phi$'s left and right singular vectors. \\par\nIf an arbitrary equi-norm measurement matrix $\\bm \\Phi$ is transformed into an equi-norm tight frame $\\bm{\\hat \\Phi}$, we do orthogonalization to its row vectors, which is equivalent as:\n\\begin{equation}\\label{tighten}\n\\bm{\\hat \\Phi} = \\sqrt{c} \\cdot \\bm{U\\Sigma_n^{-1}U^T\\Phi} = \\sqrt{c}\\cdot \\bm{U \\left[ I_n \\quad O\\right] V^T}\n\\end{equation}\nWhere $\\bm U \\in \\mathbb{R}^{n \\times n}$, $\\bm V \\in \\mathbb{R}^{N \\times N}$ are $\\bm \\Phi$'s singular vector matrices. In a word, row-orthogonalization is equivalent to transforming all singular values of $\\bm \\Phi$ into equal ones. 
Thus $\\bm{\\hat \\Phi \\hat \\Phi^T = c \\cdot I_n}$, where $c>0$ is a certain constant for normalization.\\par\nThen\n\\begin{eqnarray}\\label{TF}\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n=\n\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\bm{I_n} &\\bm O\n\t\t\t\t\t\t\t\t\\end{array}\\right] \\bm V^T(\\bm{s_1-s_2})\\|_2.\n\\end{eqnarray}\nAnd for arbitrary measurement matrix $\\bm \\Phi$ that may not be row-orthogonal, we have\n\\begin{eqnarray}\n\\lefteqn{\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2}} \\nonumber \\\\\n&=& \\frac{\\|\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2^2}\n\t\t\t{\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n^2 & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2} .\\label{singularfrac}\n\\end{eqnarray}\nIf we denote $\\bm{V^T(s_1-s_2)}$ by $\\bm u^{(1,2)}$, where $\\bm u^{(1,2)} = [u_1, u_2, \\cdots ,u_N]^T$. Then (\\ref{singularfrac}) becomes\n\\begin{eqnarray}\n\\frac{\\|\\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n & \\bm O \\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2^2}\n\t\t\t{\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\t\t\t\\bm \\Sigma_n^2 & \\bm O\\end{array} \\right]\\bm{V^T(s_1-s_2)}\\|_2}\n\t\t\t\t\t\t\t\t\t\t\t\t= \\frac{\\sum_{j=1}^n {\\sigma_j^2 u^{2}_j}}{\\sqrt{\\sum_{j=1}^n {\\sigma_j^4 u^{2}_j}}}\\nonumber \\\\\n\\leq \\sqrt{\\sum_{j=1}^n {u^{2}_j}}=\n\\| \\left[ \\begin{array}{cc}\n\t\t\t\t\t\t\t\t\t\\bm{I_n} &\\bm O\n\t\t\t\t\t\t\t\t\\end{array}\\right] \\bm V^T(\\bm{s_1-s_2})\\|_2 . \\label{CIneq}\n\\end{eqnarray}\nThe last inequality is derived from the Cauchy-Schwarz Inequality, combining (\\ref{singularfrac}), (\\ref{CIneq}) with (\\ref{TF}), then we have\n\\begin{equation}\n\\frac{\\|\\bm{\\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\Phi^T\\Phi(s_1-s_2)}\\|_2} \\leq\n\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2^2}{\\|\\bm{\\hat \\Phi^T\\hat \\Phi(s_1-s_2)}\\|_2}\n\\end{equation}\nwhich means that row-orthogonalization makes (\\ref{MainTarget}) larger and thus brings lower error probability.\nThe condition when the equality holds is that\n\\begin{eqnarray}\\label{condition}\n\\bm{\\left[\\Sigma_n^2\\quad O\\right]}\\bm{V^T}(\\bm{s_1-s_2})\n=c \\cdot \\bm{\\left[I_n\\quad O\\right]}\\bm{V^T}(\\bm{s_1-s_2})\n\\end{eqnarray}\nwhere $c>0$ is a certain constant.\\par\nIt is obvious that the equality in (\\ref{MainIneq}) holds for all k-sparse signals $\\bm s_1$ and $\\bm s_2$, if and only if $\\bm{\\Sigma_n^2=c \\cdot I_n}$, which means\n\\begin{equation}\n\\bm{\\Phi\\Phi^T} = c \\cdot \\bm I_n\\label{TightCond}\n\\end{equation}\n\\par So the result of (\\ref{MainIneq}) means that when arbitrary under-determined measurement matrices $\\bm \\Phi \\in \\mathbb{R}^{n \\times N},n < N$ are transformed into an equi-norm tight frame, i.e. 
row-ortho-gonalized, the equality in (\\ref{MainIneq}) will hold, then the value of (\\ref{MainTarget}) will increase, which means improvement of the performance of Compressive Classifier (\\ref{OMPDetct}).\n\\end{proof}\\par\nThe constant $c>0$ above is an amplitude constant for normalization and can take any value, with the equi-norm presumption of measurement matrices, the following corollary can be deduced:\n\\begin{Corollary}\nIf a matrix $\\bm \\Phi\\in \\mathbb{R}^{n \\times N}, n < N$ form an equi-norm tight frame, that is $\\bm{\\Phi\\Phi^T}=c\\cdot I_n$ and the column vectors satisfy $\\|\\bm \\phi_i\\|_2=\\|\\bm \\phi_j\\|_2=\\psi$, then $c = \\frac{N}{n}\\psi^2$.\n\\end{Corollary}\n\\begin{proof}\nIf $\\bm \\Phi$ has equal column norms and satisfies $\\bm{\\Phi\\Phi^T}=c\\cdot I_n$, then\n\\begin{eqnarray}\ntr(\\bm{\\Phi^T\\Phi})=N\\cdot \\psi ^2= tr(\\bm{\\Phi\\Phi^T})=n\\cdot c.\n\\end{eqnarray}\nAs a result, $c=\\frac{N}{n}\\psi^2$.\n\\end{proof}\\par\nIf we let $c = 1$, then we can get $\\|\\bm \\phi_i\\|_2=\\|\\bm \\phi_j\\|_2=\\sqrt{n\/N}$, which coincides with the results of \\cite{Ramin2010} and \\cite{Zahedi201264}.\n\n\\par Before the end of this section, some important discussions are believed to be necessary here.\n\\par Remark 1:\nFurther analysis of the result of Theorem 2 indicates that, when the measurement matrices of the commonly used Compressive Classifier (\\ref{OMPDetct}) are \"tightened\", i.e. row-orthogonalized, then the inequality (\\ref{MainIneq}) becomes equality, and the corresponding error probability will become\n\\begin{eqnarray}\\label{MFeqivalent}\n\\mathcal{P}_E(\\hat{\\bm \\Phi})=\n Q(\\frac{\\|\\bm{\\hat \\Phi(s_1-s_2)}\\|_2}{2 c^{-1\/2}\\cdot \\sigma})\\nonumber \\\\\n= Q(\\frac{\\|\\bm{P_{\\Phi^T} (s_1-s_2)}\\|_2}{2 c^{-1\/2}\\cdot \\sigma})\n\\end{eqnarray}\nwhere $\\bm{P_{\\Phi^T} = \\Phi^T ( \\Phi \\Phi^T)^{-1} \\Phi}$. The last equality is derived from (\\ref{singular}) and (\\ref{tighten}), which equals, as a matter of fact, the error probability of the General Matched Filter Classifier \\cite{SPComp}\\cite{PHDThesis}:\n\\begin{eqnarray}\n\\hat t_i=\n\\langle\\bm{y},\\bm{(\\Phi \\Phi^T)^{-1}\\Phi}\\bm{s_i}\\rangle-\\frac{1}{2}\\|\\bm{P_{\\Phi^T} s_i} \\|_2^2,\\quad i=1,2. \\label{MFDetct}\n\\end{eqnarray}\nThus Theorem 2 and (\\ref{MFeqivalent}) indicates that equi-norm tight frames will improve the Compressive Classifier (\\ref{OMPDetct}) to the level of General Matched Filter Classifier (\\ref{MFDetct}) in the sense of error probability. Although it is obvious that row-orthogonality (\\ref{TightCond}) sufficiently ensures (\\ref{MFDetct}) to become equivalent as (\\ref{OMPDetct}), the necessity with row-orthogonality, or \"tightness\", of the equivalence between the Compressive Classifier (\\ref{OMPDetct}) and the General Matched Filter Classifier (\\ref{MFDetct}) is not so explicit but demonstrated by Theorem 2 and (\\ref{MFeqivalent}). \n\\par Besides, the error probability (\\ref{MFeqivalent}) coincides with Davenport's \\cite{Davenport06detectionand} and \\cite{SPComp} and Zahedi's \\cite{Ramin2010} and \\cite{Zahedi201264}, where they constrained row-orthogonality $\\bm{\\Phi \\Phi^T = c \\cdot I}$ and set $c = 1$. 
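Indeed, for $c=1$ the sufficiency direction noted above is a one-line check:
\begin{eqnarray}
\bm{\Phi\Phi^T}=\bm I \;\Rightarrow\; \bm{(\Phi\Phi^T)^{-1}\Phi}=\bm\Phi,\qquad
\|\bm{P_{\Phi^T} s_i}\|_2^2=\bm{s_i^T\Phi^T\Phi\Phi^T\Phi s_i}=\|\bm{\Phi s_i}\|_2^2 ,\nonumber
\end{eqnarray}
so that $\hat t_i$ in (\ref{MFDetct}) reduces to $t_i$ in (\ref{OMPDetct}) for every $i$.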
So Theorem 2 explains the benefits of using the row-orthogonal constraint to do Compressive Classification.\n Similar discussions about the improvement of equi-norm tight frames to oracle estimators can be found in \\cite{UniTightFrame}, which is another good support of the advantage of \"tight\".\n\\par Remark 2:\nAs is mentioned in last section, when there are more than 2 hypotheses, the m-ary ($m>2$) compressive classification problem model becomes:\n\\begin{equation}\\label{CompressHypo_mary}\n\\bm{y} =\n\\left \\{ \\begin{array}{ll}\n\t\\bm{\\Phi}(\\bm{s_1} + \\bm{n}) & \\text{Hypothesis $H_1$} \\\\\n\t\\bm{\\Phi}(\\bm{s_2} + \\bm{n}) & \\text{Hypothesis $H_2$} \\\\\n\t\\cdots & \\cdots \\\\\n\t\\bm{\\Phi}(\\bm{s_m} + \\bm{n}) & \\text{Hypothesis $H_m$} \\\\\n\n\\end{array} \\right. .\n\\end{equation}\nUsing the same Compressive Classifier\n\\begin{eqnarray}\nt_i\n=\\langle\\bm{y},\\bm{\\Phi}\\bm{s_i}\\rangle-\\frac{1}{2}\\langle \\bm{\\Phi s_i},\\bm{\\Phi s_i} \\rangle,\\quad i=1,2,\\cdots ,m .\n\\end{eqnarray}\nThe corresponding error probability will be \n\\begin{eqnarray}\n\\mathcal{P}_E \n=1 - \\mathcal{P}( t_T> t_i,\\forall i \\neq T|H_T) ,\\quad T = 1,2,\\cdots,m.\n\\end{eqnarray}\nCombined with the Union Bound of probability theory, the error probability then satisfies\n\\begin{eqnarray}\n\\mathcal{P}_E \\leq\n\\sum_{i \\neq T}^m Q(\\frac{\\|\\bm{\\Phi(s_T-s_i)}\\|_2^2}{2\\sigma\\|\\bm{\\Phi^T\\Phi(s_T-s_i)}\\|_2}), T = 1,2,\\cdots,m .\\label{mErrorProb}\n\\end{eqnarray}\nThe error probability (\\ref{mErrorProb}) is similar to (\\ref{CompressDetect}) except for the inequality due to the use of union bound. In fact, it may be difficult to get any more accurate result than (\\ref{mErrorProb}), because of the statistical correlation between different $t_i$'s. It may not be persuasive to conclude the error probability's decrease brought by equi-norm tight frames, using the same proof in Theorem 2 in this m-ary scenario. because of the inequality in (\\ref{mErrorProb}); however, simulation results in the next section will demonstrate that equi-norm tight frames are still better in the m-mary ($m>2$) Compressive Classification scenario.\n\\par Remark 3: In comparison with the the work of Zahedi in \\cite{Ramin2010} and \\cite{Zahedi201264}, where Equiangular Tight Frames (ETFs) are proved to have the best worst-case performance among all tight frames (row-orthogonal constrained matrices), we just give the proof that for general under-determined measurement matrices, tightening can bring performance improvement for Compressive Classification. Our job is different from theirs, because all of Zahedi's analysis is based on the constraint that the measurement matrices are tight, or row-orthogonal, and the advantage of Equiangular Tight Frames (ETFs, \\cite{Waldron2009}) is that ETFs have the best worst-case (maximum of the minimum) performance among all tight frames of same dimensions, while our result shows that when arbitrary measurement matrices is \"tightened\", i.e. transformed into equi-norm tight frames, the performance of Compressive Classification will get improved. Nonetheless, the existence and construction of ETFs of some certain dimensions remains an open problem (\\cite{Waldron2009}), while doing row-orthogonalization for arbitrary matrices is very easy and practical. 
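As an illustration of how cheap this step is, the sketch below (illustrative dimensions, normalization constant $c=1$) row-orthogonalizes a Gaussian matrix through its SVD, as in (\ref{tighten}), and checks the inequality (\ref{MainIneq}) on random sparse difference vectors.
\begin{verbatim}
# Illustrative sketch: "tighten" an arbitrary measurement matrix by
# row-orthogonalization (equalizing its singular values) and check the
# inequality of Theorem 2 on random sparse difference vectors s1 - s2.
import numpy as np

rng = np.random.default_rng(1)
N, n, k = 128, 32, 4

Phi = rng.standard_normal((n, N))            # arbitrary full-row-rank matrix
U, S, Vt = np.linalg.svd(Phi, full_matrices=False)
Phi_hat = U @ Vt                             # = U [I_n  O] V^T, i.e. c = 1

def target(M, d):                            # ||M d||_2^2 / ||M^T M d||_2
    return np.linalg.norm(M @ d)**2 / np.linalg.norm(M.T @ (M @ d))

for _ in range(1000):
    d = np.zeros(N)
    idx = rng.choice(N, 2 * k, replace=False)
    d[idx] = rng.standard_normal(2 * k)      # difference of two k-sparse signals
    assert target(Phi, d) <= target(Phi_hat, d) + 1e-9
print("inequality (MainIneq) held on all trials")
\end{verbatim}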
So our results provided a convenient approach to improve the performance of compressive classifiers.\n\\par\n\n\\section{Simulations}\n\\label{sec:simulation}\n\n\\begin{figure}\n\\centering\n\n\n\t\n\t\n\t\n\n\n\n\t\n\t\n\t\n\n\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_2Hypo.eps}\n\t\t\\caption{Monte-Carlo simulation of 2-ary compressive classification error probability using non-tight frames and tight frames (for $k=1$ sparse signals)}\n\t\t\\label{figure1}\n\t\t\\end{minipage}\n\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_2Hypok10.eps}\n\t\t\\caption{Monte-Carlo simulation of 2-ary compressive classification error probability using non-tight frames and tight frames (for $k=10$ sparse signals)}\n\t\t\\label{figure2}\n\t\t\\end{minipage}\n\t\n\t\\end{figure}\n\t\\begin{figure}\n\t\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_10Hypo.eps}\n\t\t\\caption{Monte-Carlo simulation of 10-ary compressive classification error probability using non-tight frames and tight frames (for $k=1$ sparse signals)}\n\t\t\\label{figure3}\t\t\n\t\\end{minipage}\\\\\n\t\t\\begin{minipage}[htbp]{0.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=3.6in]{MonteCaro_10Hypok10.eps}\n\t\t\\caption{Monte-Carlo simulation of 10-ary compressive classification error probability using non-tight frames and tight frames (for $k=10$ sparse signals)}\n\t\t\\label{figure4}\t\t\n\t\\end{minipage\n\\end{figure}\n\\par In this section the main result of theorem 2 is verified by Monte-Carlo simulations. In the simulation some arbitrary $k=1$ and $k=10$ sparse signals are generated and classified using non-tight frames and tight frames. The Gaussian Random Matrices are chosen to be the non-tight frames, and the row-orthogonalized ones from those random matrices are chosen as the tight frames. Here we choose $N=500$, and the error probabilities of both 2-ary Compressive Classification and 10-ary Compressive Classification are demonstrated in Fig.\\ref{figure1} and Fig.\\ref{figure3} for $k=1$ sparse signals, and Fig.\\ref{figure2} and Fig.\\ref{figure4} for $k=10$ sparse signals, with the number of measurements $n$ ranging from 100 to 450 and signal to noise ratios $\\bm{\\|s_i\\|_2^2}\/\\sigma^2$ from 5 dB to 20 dB. Each error probability is calculated from average of 10000 independent experiments with tight or non-tight measurement matrices.\n\\par The simulation shows that equi-norm tight frames transformed from general Gaussian Random Matrices have better Compressive Classification performance than those non-tight Gaussian Random Matrices within $n$'s whole range, both for 2-ary classification and m-ary ($m>2$) classification scenarios, which is the benefit that \"tightening\" brings.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\\par This paper deals with the performance improvement of a commonly used Compressive Classifier (\\ref{OMPDetct}). We prove that the transformation to equi-norm tight Frames from arbitrary measurement matrices will reduce the probability of false classification of the commonly used Compressive Classifier, thus improve the classification performance to the level of the General Matched Filter Classifier (\\ref{MFDetct}), which coincides with the row-orthogonal constraint commonly used before. 
Although there are other proofs that among all equi-norm tight frames the Equiangular Tight Frames (ETFs) achieve best worst-case classification performance, the existence and construction of ETFs of some dimensions is still an open problem.\nAs the construction of equi-norm tight frames from arbitrary matrices is much simple and practical, the conclusion of this paper can also provide a convenient approach to implement an improved measurement matrix for Compressive Classification. \n\n\n\n\n\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdiqt b/data_all_eng_slimpj/shuffled/split2/finalzzdiqt new file mode 100644 index 0000000000000000000000000000000000000000..e565572b99dd90fc6de06138b9247d2b0b9c7143 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdiqt @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nIn many computational settings, one wishes to transmit a $d$-dimensional real-valued vector.\nFor example, in distributed and federated learning scenarios, multiple participants (a.k.a. \\emph{clients}) in distributed SGD send gradients to a parameter server that averages them and updates the model parameters accordingly~\\cite{mcmahan2017communication}.\nIn these applications and others (e.g., traditional machine learning methods such K-Means and power iteration~\\cite{pmlr-v70-suresh17a} or other methods such as geometric monitoring~\\cite{icde2021}), sending \\emph{approximations} of vectors may suffice. Moreover, the vectors' dimension $d$ is often large (e.g., in neural networks, $d$ can exceed a billion~\\cite{NIPS2012_6aca9700,shoeybi2019megatron,NEURIPS2019_093f65e0}), so sending compressed vectors is appealing.\n\nIndeed, recent works have studied how to send vector approximations using representations that use a small number of bits per entry (e.g.,~\\cite{pmlr-v70-suresh17a,ben2020send,wen2017terngrad,NIPS2017_6c340f25,konevcny2018randomized,caldas2018expanding}). \\new{Further, recent work has shown direct training time reduction from compressing the vectors to one bit per coordinate~\\cite{bai2021gradient}.}\nMost relevant to our work are solutions that address the distributed mean estimation problem. For example, \\cite{pmlr-v70-suresh17a} uses the randomized Hadamard transform followed by stochastic quantization (a.k.a. randomized rounding).\nWhen each of the $n$ clients transmits $O(d)$ bits, their Normalized Mean Squared Error (NMSE) is bounded by $O\\big(\\frac{\\log d}{n}\\big)$.\nThey also show a $O(\\frac{1}{n})$ bound with $O(d)$ bits via variable-length encoding, albeit at a higher computational cost.\nThe sampling method of~\\cite{konevcny2018randomized} yields an $O\\parentheses{\\frac{r\\cdot R}{n}}$ NMSE bound using $d(1+o(1))$ bits \\emph{in expectation}, where $r$ is each coordinate's representation length and $R$ is the normalized average variance of the sent vectors.\nRecently, researchers proposed to use Kashin's representation~\\cite{caldas2018expanding,lyubarskii2010uncertainty,iaab006}. Broadly speaking, it allows representing a $d$-dimensional vector in (a higher) dimension~$\\lambda\\cdot d$ for some $\\lambda>1$ using small coefficients. 
This results in an $O\\Big({\\frac{\\lambda^2}{(\\sqrt\\lambda -1)^4\\cdot n}}\\Big)$ NMSE bound, \\mbox{where each client transmits $\\lambda \\cdot d(1+o(1))$ bits~\\cite{iaab006}.}\n\\textcolor{black}{A recent work~\\cite{davies2021new} suggested an algorithm where if all clients' vectors have pairwise distances of at most $y\\in\\mathbb R$ (i.e., for any client pair $\\mathfrak c_{1},\\mathfrak c_{2}$, it holds that $\\norm{x_{(\\mathfrak c_1)}-x_{(\\mathfrak c_2)}}_2\\le y$), the resulting MSE is $O(y^2)$ (which is tight with respect to y) using $O(1)$ bits per coordinate on average. This solution provides a stronger MSE bound when vectors are sufficiently close \\mbox{(and thus $y$ is small) but does not improve the worst-case guarantee.}}\n\n\\ifdefined\n\\vbox{We step back and focus on approximating $d$-dimensional vectors \nusing $d(1+o(1))$ bits (e.g., one bit per dimension and a lower order overhead). \nWe develop novel biased and unbiased compression techniques based on (uniform as well as structured) random rotations in high-dimensional spheres. Intuitively, after a rotation, the coordinates are identically distributed, allowing us to estimate each coordinate with respect to the resulting distribution. \nOur algorithms do not require expensive operations, such as variable-length encoding or computing the Kashin's representation, and are fast and easy to implement.\nWe obtain an $O\\parentheses{\\frac{1}{n}}$ NMSE bound using $d(1+o(1))$ bits, regardless of the coordinates' representation length, improving over previous works. \nEvaluation results indicate that this translates to a consistent \\mbox{improvement over the state of the art in different distributed and federated learning tasks.}}\n\\else\nWe step back and focus on approximating $d$-dimensional vectors \nusing $d(1+o(1))$ bits (e.g., one bit per dimension and a lower order overhead). \nWe develop novel biased and unbiased compression techniques based on (uniform as well as structured) random rotations in high-dimensional spheres. Intuitively, after a rotation, the coordinates are identically distributed, allowing us to estimate each coordinate with respect to the resulting distribution. \nOur algorithms do not require expensive operations, such as variable-length encoding or computing the Kashin's representation, and are fast and easy to implement.\nWe obtain an $O\\parentheses{\\frac{1}{n}}$ NMSE bound using $d(1+o(1))$ bits, regardless of the coordinates' representation length, improving over previous works. \nEvaluation results indicate that this translates to a consistent \\mbox{improvement over the state of the art in different distributed and federated learning tasks.}\n\\fi\n\n\\section{Problem Formulation and Notation}\n\n\\T{1b~-~Vector Estimation.} \nWe start by formally defining the \\emph{1b~-~vector estimation} problem.\nA sender, called Buffy\\xspace, gets a real-valued vector $x\\in\\mathbb R^d$ and sends it using a $d(1+o(1))$ \nbits message (i.e., asymptotically \\emph{one bit per coordinate}).\nThe receiver, called Angel\\xspace, uses the message to derive an estimate $\\widehat x$ of the original vector $x$.\nWe are interested in the quantity $\\norm{x-\\widehat x}_2^2$, which is the sum of squared errors (SSE), and its expected value, the Mean Squared Error (MSE). 
For ease of exposition, we hereafter assume that $x\\neq 0$ as this special case can be handled with one additional bit.\nOur goal is to minimize the \\emph{vector}-NMSE (denoted \\emph{vNMSE}), defined as the normalized \\mbox{MSE, i.e., $\\frac{\\mathbb E\\brackets{\\norm{x-\\widehat x}_2^2}}{\\norm{x}_2^2}$~.}\n\n\n\\T{1b~-~Distributed Mean Estimation.}\nThe above problem naturally generalizes to the \\emph{1b~-~Distributed Mean Estimation} problem. Here, we have a set of $n{\\,\\in\\,}\\mathbb N$ \\emph{clients} and a \\emph{server}. Each client $\\mathfrak c\\in\\set{1,\\ldots,n}$ has its own vector $x_{(\\mathfrak c)} {\\,\\in\\,}\\mathbb R^d$, which it sends using a $d(1{+}o(1))$-bits message to the server. \nThe server then produces an estimate $\\widehat {x_{\\text{avg}}}{\\,\\in\\,}\\mathbb R^d$ of the average $x_{\\text{avg}}=\\frac{1}{n}\\sum_{\\mathfrak c=1}^n x_{(\\mathfrak c)}$ with the goal of minimizing {its \\emph{NMSE}, defined as the average estimate's MSE normalized by the average norm of the clients' original vectors, i.e., $\\frac{\\mathbb{E}\\brackets{\\norm{x_{\\text{avg}}-\\widehat {x_{\\text{avg}}}}_2^2}}{\\frac{1}{n}\\cdot\\sum_{\\mathfrak c=1}^n\\norm{x_{(\\mathfrak c)}}_2^2}$~.}\n\n\n\\T{Notation.} We use the following notation and definitions throughout the paper:\n\n\\textit{Subscripts.} $x_i$ denotes the $i$'th \\emph{coordinate} of the vector $x$, to distinguish it from client $\\mathfrak c$'s vector $x_{(\\mathfrak c)}$. \n\n\\textit{Binary-sign.} For a vector $x\\in \\mathbb R^d$, we denote its binary-sign function as $\\text{sign}(x)$, where $\\text{sign}(x)_i=1$ if $x_i\\ge 0$ and $\\text{sign}(x)_i=-1$ if $x_i < 0$. \n\n\\textit{Unit vector.} For any (non-zero) real-valued vector $x \\in \\mathbb R^d$, we denote its normalized vector by $\\breve{x} = \\frac{x}{\\norm{x}_2}$. That is, $\\breve{x}$ and $x$ has the same direction and it holds that $\\norm{\\breve{x}}_2=1$.\n\n\\textit{Rotation Matrix.} \nA matrix $R\\in\\mathbb R^{d\\times d}$ is a rotation matrix if $R^{T} R=I$. The set of all rotation matrices is \\mbox{denoted as $\\mathcal O(d)$.\nIt follows that $\\forall R \\in \\mathcal{O}(d){:\\,}det(R) \\in \\set{-1,1}$ and $\\forall x \\in\\mathbb R^d{:\\,} \\norm{x}_2 {=} \\norm{Rx}_2$.}\n\n\\textit{Random Rotation.}\nA random rotation $\\mathcal{R}$ is a distribution over all random rotations in $\\mathcal O(d)$.\nFor ease of exposition, we abuse the notation and given $x\\in\\mathbb R^d$ denote the random rotation of $x$ by $\\mathcal R(x)=Rx$, where $R$ is drawn from $\\mathcal R$. Similarly, $\\mathcal R^{-1}(x)=R^{-1}x=R^Tx$ is the inverse rotation.\n\n\\textit{Rotation Property.}\nA quantity that determines the guarantees of our algorithms is~${\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R, x}^d =\\frac{{\\norm{\\mathcal R(\\breve x)}_1^2}}{d}}$ (note the use of the $L_1$ norm).\nWe show that rotations with high $\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R, x}^d$ values yield better estimates.\n\n\\T{Shared Randomness.}\nWe assume that Buffy\\xspace and Angel\\xspace have access to shared randomness, e.g., by agreeing on a common PRNG seed. 
Shared randomness is studied both in communication complexity (e.g., \\cite{newman1991private}) and in communication reduction in machine learning systems (e.g., \\cite{pmlr-v70-suresh17a,ben2020send}).\nIn our context, it means that \\mbox{Buffy\\xspace and Angel\\xspace can generate the same random rotations without communication.}\n\n\\section{The \\mbox{DRIVE}\\xspace Algorithm}\n\n\\ifdefined\nWe start by presenting \\mbox{DRIVE}\\xspace (Deterministically RoundIng randomly rotated VEctors), a novel 1b~-~Vector Estimation algorithm.\nFirst, we show how to minimize \\mbox{DRIVE}\\xspace's vNMSE. Then, we show how to make \\mbox{DRIVE}\\xspace unbiased, extending it to the 1b~-~Distributed \\mbox{Mean Estimation problem.}\n\nIn \\mbox{DRIVE}\\xspace, Buffy\\xspace uses shared randomness to sample a rotation matrix $R\\sim\\mathcal R$ and rotates the vector $x\\in\\mathbb R^d$ by computing $\\mathcal R(x)=R x$.\n\\else\nWe start by presenting \\mbox{DRIVE}\\xspace (Deterministically RoundIng randomly rotated VEctors), a novel 1b~-~Vector Estimation algorithm.\nLater, we extend \\mbox{DRIVE}\\xspace to the 1b~-~Distributed Mean Estimation problem.\nIn \\mbox{DRIVE}\\xspace, Buffy\\xspace uses shared randomness to sample a rotation matrix $R\\sim\\mathcal R$ and rotates the vector $x\\in\\mathbb R^d$ by computing $\\mathcal R(x)=R x$.\n\\fi\nBuffy\\xspace then calculates $S$, a scalar quantity we explain below. Buffy\\xspace then sends $\\big(S,\\text{sign}(\\mathcal R(x))\\big)$ to Angel\\xspace. \nAs we discuss later, sending $\\big(S,\\text{sign}(\\mathcal R(x))\\big)$ requires $d(1+o(1))$ bits.\nIn turn, Angel\\xspace computes $\\widehat {\\mathcal R(x)}=S\\cdot\\text{sign}(\\mathcal R(x)) \\in\\set{-S,+S}^d$.\nIt then uses the shared randomness to generate the same rotation matrix\nand employs the inverse {rotation, i.e., estimates $\\widehat x = \\mathcal R^{-1}(\\widehat {\\mathcal R(x)})$.\nThe pseudocode of \\mbox{DRIVE}\\xspace appears in Algorithm~\\ref{code:alg1}.}\n\n\nThe properties of \\mbox{DRIVE}\\xspace depend on the rotation $\\mathcal R$ and the \\emph{scale parameter} $S$.\nWe consider both uniform rotations, that provide stronger guarantees, and structured rotations that are orders of magnitude faster to compute. \nAs for the scale $S = S({x,R})$, its exact formula determines the characteristics of \\mbox{DRIVE}\\xspace's estimate, e.g., having minimal vNMSE or being unbiased. \nThe latter allows us to apply \\mbox{DRIVE}\\xspace to the 1b~-~Distributed Mean Estimation (Section~\\ref{subsec:drive_dme}) and get an NMSE that decreases proportionally to the number of clients. \n\n\\begin{algorithm}[t]\n\\caption{~\\mbox{DRIVE}\\xspace}\n\\label{code:alg1}\n\\begin{multicols}{2}\n\\begin{algorithmic}[1]\n \\Statex \\hspace*{-4mm}\\textbf{Buffy\\xspace:}\n \n \\State Compute $\\mathcal R(x),\\ S$.\\textcolor{white}{$\\big($}\n \n \\State Send $\\big(S,\\text{sign}(\\mathcal R(x))\\big)$ to Angel\\xspace.\\textcolor{white}{$\\widehat {\\mathcal R(x)}$}\n\\end{algorithmic}\n\\columnbreak\n\\begin{algorithmic}[1]\n\\Statex \\hspace*{-4mm}\\textbf{Angel\\xspace:}\n\\State Compute $\\widehat {\\mathcal R(x)}=S\\cdot\\text{sign}\\big(\\mathcal R(x)\\big)$.\n\\State Estimate $\\widehat x = \\mathcal R^{-1}\\big(\\widehat {\\mathcal R(x)}\\big)$.\n\\end{algorithmic}\n\\end{multicols}\n\\end{algorithm}\nWe now prove a general result on the SSE of \\mbox{DRIVE}\\xspace that applies to any random rotation $\\mathcal R$ and any vector $x\\in\\mathbb R^d$. 
\nIn the following sections, we use this result to obtain the vNMSE \\mbox{when considering specific rotations and scaling methods as well as analyzing their guarantees.}\n\\begin{theorem}\\label{thm:theoreticalAlg}\nThe SSE of \\mbox{DRIVE}\\xspace is: $\\norm{x-\\widehat x}_2^2=\\norm{x}_2^2-2 \\cdot S \\cdot\\norm{\\mathcal R(x)}_1 + d \\cdot S^2$~.\n\\end{theorem}\n\\begin{proof}\nThe SSE in estimating $\\mathcal R(x)$ using $\\widehat {\\mathcal R(x)}$ equals that of estimating $x$ using $\\widehat x$. Therefore,\n\\ifdefined\n\\begin{multline}\n\\norm{x-\\widehat x}_2^2 = \\norm{\\mathcal R(x-\\widehat x)}_2^2 = \\norm{\\mathcal R(x)-\\mathcal R(\\widehat x)}_2^2 = \\norm{{\\mathcal R(x)}-\\widehat {\\mathcal R(x)}}_2^2\\\\\n= \\norm{\\mathcal R(x)}_2^2 - 2 \\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} + {\\norm{\\widehat {\\mathcal R(x)}}_2^2} = \\norm{x}_2^2-2\\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} + \\norm{\\widehat {\\mathcal R(x)}}_2^2.\\label{eq:thm_main}\n\\end{multline}\n\\else\n\\\\\n$\\hspace*{0.0cm}\\norm{x-\\widehat x}_2^2 = \\norm{\\mathcal R(x-\\widehat x)}_2^2 = \\norm{\\mathcal R(x)-\\mathcal R(\\widehat x)}_2^2 = \\norm{{\\mathcal R(x)}-\\widehat {\\mathcal R(x)}}_2^2$\\\\\n$\\hspace*{1.0cm}= \\norm{\\mathcal R(x)}_2^2 - 2 \\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} + {\\norm{\\widehat {\\mathcal R(x)}}_2^2} = \\norm{x}_2^2-2\\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} + \\norm{\\widehat {\\mathcal R(x)}}_2^2.\\refstepcounter{equation}\\hfill\\mbox{(\\theequation)}\\label{eq:thm_main}$\n\\fi\nNext, we have that,\n\\ifdefined\n\\begin{align}\n \\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} &= \\sum_{i=1}^d \\mathcal R(x)_i\\cdot \\widehat {\\mathcal R(x)}_i = S \\cdot\\sum_{i=1}^d \\mathcal R(x)_i\\cdot \\text{sign}\\big(\\mathcal R(x)_i\\big) = S \\cdot\\norm{\\mathcal R(x)}_1\\ ,\\label{eq:thm_inner_product_1}\\\\\n\\norm{\\widehat {\\mathcal R(x)}}_2^2 &= \\sum_{i=1}^d \\widehat {\\mathcal R(x)}_i^2 = d\\cdot S^2~.\\label{eq:thm_inner_product_2}\n\\end{align}\n\\else\n\n$\\angles{\\mathcal R(x),\\widehat {\\mathcal R(x)}} = \\sum_{i=1}^d \\mathcal R(x)_i\\cdot \\widehat {\\mathcal R(x)}_i = S \\cdot\\sum_{i=1}^d \\mathcal R(x)_i\\cdot \\text{sign}\\big(\\mathcal R(x)_i\\big) = S \\cdot\\norm{\\mathcal R(x)}_1\\ ,\\refstepcounter{equation}\\hfill\\mbox{(\\theequation)}\\label{eq:thm_inner_product_1}$\\\\\n$\\hspace*{0.85cm}\\norm{\\widehat {\\mathcal R(x)}}_2^2 = \\sum_{i=1}^d \\widehat {\\mathcal R(x)}_i^2 = d\\cdot S^2~.\\refstepcounter{equation}\\hfill\\mbox{(\\theequation)}\\label{eq:thm_inner_product_2}$\n\\fi\n\nSubstituting Eq.~\\eqref{eq:thm_inner_product_1} and Eq.~\\eqref{eq:thm_inner_product_2} in Eq.~\\eqref{eq:thm_main} yields the result.\n\\end{proof}\n\\section{\\mbox{DRIVE}\\xspace With a Uniform Random Rotation}\n\nWe first consider the thoroughly studied uniform random rotation (e.g., ~\\cite{mezzadri2006generate,wedderburn1975generating,heiberger1978generation,stewart1980efficient,tanner1982remark}), which we denote by $\\mathcal R_U$. The sampled matrix is denoted by $R_U\\sim\\mathcal R_U$, that is, $\\mathcal R_U(x)=R_U\\cdot x$.\nAn appealing property of a uniform random rotation is that, as we show later, it admits a scaling that results in a low constant vNMSE even with unbiased estimates.\n\n\n\\subsection{1b~-~Vector Estimation}\n\nUsing Theorem \\ref{thm:theoreticalAlg}, we obtain the following result. 
The result holds for any rotation, including $\\mathcal R_U$.\n\\begin{lemma}\\label{cor:biased_alg1_vNMSE}\nFor any $x\\in\\mathbb R^d$, \\mbox{DRIVE}\\xspace's SSE is minimized by $S=\\frac{\\norm{\\mathcal R(x)}_1}{d}$ (that is, $S=\\frac{\\norm{Rx}_1}{d}$ is determined after $R\\sim\\mathcal R$ is sampled). This yields a vNMSE of $\\frac{\\mathbb E\\brackets{\\norm{x-\\widehat x}_2^2}}{\\norm{x}_2^2}=1 - \\mathbb E\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R,x}^d}$.\n\\end{lemma}\n\\begin{proof}\nBy Theorem \\ref{thm:theoreticalAlg}, to minimize the SSE we require\n\\ifdefined\n\\begin{align*}\n \\hspace*{.4cm}\\frac{\\partial}{\\partial S}\\big({\\norm{x}_2^2-2 \\cdot S \\cdot\\norm{\\mathcal R(x)}_1 + d \\cdot S^2}\\big) = -2 \\cdot \\norm{\\mathcal R(x)}_1 + 2 \\cdot d \\cdot S = 0~,\n\\end{align*}\n\\else\n\\vspace*{1mm}\\\\\n$\\hspace*{0.9cm}\\frac{\\partial}{\\partial S}\\big({\\norm{x}_2^2-2 \\cdot S \\cdot\\norm{\\mathcal R(x)}_1 + d \\cdot S^2}\\big) = -2 \\cdot \\norm{\\mathcal R(x)}_1 + 2 \\cdot d \\cdot S = 0~,$\\\\\\vspace*{1mm}\n\\fi\nleading to $S=\\frac{\\norm{\\mathcal R(x)}_1}{d}$. Then, the SSE of \\mbox{DRIVE}\\xspace becomes:\n\\ifdefined\n{\n\\begin{multline*}\n\\norm{x-\\widehat x}_2^2 = \\norm{x}_2^2-2\\cdotS \\cdot\\norm{\\mathcal R(x)}_1+d\\cdot S^2 = \n\\norm{x}_2^2-2\\cdot\\frac{\\norm{\\mathcal R(x)}_1^2}{d} + d \\cdot \\frac{\\norm{\\mathcal R(x)}_1^2}{d^2}\n \\\\=\n\\norm{x}_2^2 - \\frac{\\norm{\\mathcal R(x)}_1^2}{d}\n=\n\\norm{x}_2^2 - \\frac{\\norm{x}_2^2 \\cdot \\norm{\\mathcal R(\\breve x)}_1^2}{d}\n= \\norm{x}_2^2 \\big(1-\\frac{\\norm{\\mathcal R(\\breve x)}_1^2}{d}\\,\\big)~.\n\\end{multline*}\n}\n\\else\n\\\\\n$\\hspace*{0.9cm}\\norm{x-\\widehat x}_2^2 = \\norm{x}_2^2-2\\cdotS \\cdot\\norm{\\mathcal R(x)}_1+d\\cdot S^2 = \n\\norm{x}_2^2-2\\cdot\\frac{\\norm{\\mathcal R(x)}_1^2}{d} + d \\cdot \\frac{\\norm{\\mathcal R(x)}_1^2}{d^2}$\\\\\n$\\hspace*{2.33cm}=\n\\norm{x}_2^2 - \\frac{\\norm{\\mathcal R(x)}_1^2}{d}\n=\n\\norm{x}_2^2 - \\frac{\\norm{x}_2^2 \\cdot \\norm{\\mathcal R(\\breve x)}_1^2}{d}\n= \\norm{x}_2^2 \\big(1-\\frac{\\norm{\\mathcal R(\\breve x)}_1^2}{d}\\,\\big)~.$\n\n\\fi\n{Thus, the normalized SSE is $\\frac{\\norm{x-\\widehat x}_2^2}{\\norm{x}_2^2}=1{-}\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R,x}^d$. Taking expectation yields the result. \\qedhere}\n\\end{proof}\nInterestingly, for the uniform random rotation, $\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_U,x}^d$ follows the same distribution for all $x$. \nThis is because, by the definition of $\\mathcal R_U$, it holds that $\\mathcal R_U(\\breve x)$ is distributed uniformly over the unit sphere for any $x$.\nTherefore \\mbox{DRIVE}\\xspace's vNMSE depends only on the dimension $d$.\nWe next analyze the vNMSE attainable by the best possible $S$, as given in Lemma~\\ref{cor:biased_alg1_vNMSE}, when the algorithm uses $\\mathcal R_U$ and is not required to be unbiased. In particular, we state the following theorem whose\n\\ifdefined\n \\mbox{proof appears in Appendix \\ref{app:biased_drive_vNMSE}. }\n\\else\n proof appears in Appendix \\ref{app:biased_drive_vNMSE} \\mbox{(all appendices appear in the Supplementary Material and the extended paper version~\\cite{vargaftik2021drive}). 
}\n\\fi\n\\begin{restatable}{theorem}{biaseddrivevNMSE}\\label{thm:biased_drive_vNMSE}\nFor any $x \\in \\mathbb R^d$, the vNMSE of \\mbox{DRIVE}\\xspace with $S=\\frac{\\norm{\\mathcal R_U(x)}_1}{d}$ is\n$\\parentheses{1 - \\frac{2}{\\pi}} \\parentheses{ {1-\\frac{1}{d}}}$~.\n\\end{restatable}\n\n\n\n\\subsection{1b~-~Distributed Mean Estimation}\\label{subsec:drive_dme}\nAn appealing property of \\mbox{DRIVE}\\xspace with a uniform random rotation, established in this section, is that with a proper scaling parameter $S$, the estimate is unbiased. \nThat is, for any $x \\in \\mathbb R^d$, our scale guarantees that $\\mathbb E\\brackets{\\widehat x} = x$.\nUnbiasedness is useful when generalizing to the Distributed Mean Estimation problem. \nIntuitively, when $n$ clients send their vectors, any biased algorithm would result in an NMSE that may not decrease with respect to $n$. For example, if they have the same input vector, the bias would remain after averaging. Instead, an unbiased encoding algorithm has the property that when all clients \\mbox{act (e.g., use different PRNG seeds) independently, the NMSE decreases proportionally to $\\frac{1}{n}$.}\n\nAnother useful property of uniform random rotation is that its distribution is unchanged when composed with other rotations. We use it in \\mbox{the following theorem's proof, given in Appendix~\\ref{app:drive_is_unbiased}.}\n\n\\begin{restatable}{theorem}{driveisunbiased}\\label{theorem:drive_is_unbised}\nFor any $x \\in \\mathbb R^d$, set $S = \\frac{\\norm{x}_2^2}{\\norm{\\mathcal R_U(x)}_1}$. Then \\mbox{DRIVE}\\xspace satisfies $\\mathbb{E} [\\widehat x] = x$.\n\\end{restatable}\n\n\\mbox{Now, we proceed to obtain vNMSE guarantees for \\mbox{DRIVE}\\xspace's unbiased estimate.}\n\\begin{lemma}\\label{cor:alg1_unbiased_vNMSE}\nFor any $x\\in\\mathbb R^d$, \\mbox{DRIVE}\\xspace with $S = \\frac{\\norm{x}_2^2}{\\norm{\\mathcal R_U(x)}_1}$ has a vNMSE of \n$\\mathbb E\\brackets{{\\frac{1}{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_U,x}^d}}} {-} 1$~.\n\\begin{proof}\nBy Theorem~\\ref{thm:theoreticalAlg}, the SSE of the algorithm satisfies:\n\\ifdefined\n{\n\\begin{equation*}\n\\begin{aligned}\n&\\norm{x}_2^2-2\\cdotS \\cdot\\norm{R_U \\cdot x}_1+d\\cdot S^2 = \n\\norm{x}_2^2-2\\cdot \\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot x}_1} \\cdot\\norm{R_U \\cdot x}_1+ d \\cdot \\parentheses{\\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot x}_1}}^2\n \\\\\n&= d \\cdot \\parentheses{\\frac{\\norm{x}_2^2}{\\norm{x}_2\\norm{R_U \\cdot \\breve x}_1}}^2 {-} \\norm{x}_2^2 = d \\cdot \\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot \\breve x}_1^2} {-} \\norm{x}_2^2 = \\norm{x}_2^2 \\cdot \\parentheses{\\parentheses{\\frac{d}{\\norm{R_U \\cdot \\breve x}_1^2}} {-} 1}~.\n\\end{aligned} \n\\end{equation*}\n}\n\\else\n\\begin{equation*}\n\\small\n\\begin{aligned}\n&\\norm{x}_2^2-2\\cdotS \\cdot\\norm{R_U \\cdot x}_1+d\\cdot S^2 = \n\\norm{x}_2^2-2\\cdot \\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot x}_1} \\cdot\\norm{R_U \\cdot x}_1+ d \\cdot \\parentheses{\\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot x}_1}}^2\n \\\\\n&= d \\cdot \\parentheses{\\frac{\\norm{x}_2^2}{\\norm{x}_2\\norm{R_U \\cdot \\breve x}_1}}^2 {-} \\norm{x}_2^2 = d \\cdot \\frac{\\norm{x}_2^2}{\\norm{R_U \\cdot \\breve x}_1^2} {-} \\norm{x}_2^2 = \\norm{x}_2^2 \\cdot \\parentheses{\\parentheses{\\frac{d}{\\norm{R_U \\cdot \\breve x}_1^2}} {-} 1}~.\n\\end{aligned} \n\\end{equation*}\n\\fi\n\\mbox{Normalizing by $\\norm{x}_2^2$ and taking expectation over $R_U$ concludes the proof.}\n\\end{proof}\n\\end{lemma}\nOur goal is 
to derive an upper bound on the above expression and thus upper-bound the vNMSE. Most importantly, we show that even though the estimate is unbiased and we use only a single bit per coordinate, the vNMSE does not increase with the dimension and is bounded by a small constant. In particular, in Appendix \\ref{app:anbiased_drive_large_deviation}, we prove the following:\n\n\\begin{restatable}{theorem}{vNMSEofunbiaseddrive}\\label{thm:vNMSEofunbiaseddrive}\nFor any $x \\in\\ \\mathbb R^d$, the vNMSE of \\mbox{DRIVE}\\xspace with $S = \\frac{\\norm{x}_2^2}{\\norm{\\mathcal R_U(x)}_1}$\nsatisfies:\\\\\n\\mbox{$(i)$ For all $d \\ge 2$, it is at most $2.92$. $(ii)$ For all $d \\ge 135$, it is at most $\\frac{\\pi}{2} - 1 + \\sqrt{\\frac{{(6\\pi^3-12\\pi^2)}\\cdot\\ln d+1}{d}}$.}\n\\end{restatable}\n\nThis theorem yields strong bounds on the vNMSE. For example, the vNMSE is lower than $1$ for $d \\ge 4096$ and lower than $0.673$ for $d \\ge 10^5$. Finally, we obtain the following corollary,\n\\begin{corollary}\\label{cor:deviation_res_3}\nFor any $x\\in\\mathbb R^d$, the vNMSE tends to $\\frac{\\pi}{2} - 1 \\approx 0.571$ as $d\\to\\infty$~. \n\\end{corollary}\n\nRecall that \\mbox{DRIVE}\\xspace's above scale $S$ is a function of both $x$ and the sampled $R_U$. An alternative approach is to \\emph{deterministically} set $S$ to \n$\\frac{\\norm{x}_2^2}{\\mathbb E\\brackets{\\norm{\\mathcal R_U(x)}_1}}$.\nAs we prove in Appendix~\\ref{app:l1_expected_value}, the resulting scale is $\\frac{\\norm{x}_2\\cdot (d-1)\\cdot \\mathrm{B}(\\frac{1}{2},\\frac{d-1}{2})}{2d}$, where $\\mathrm{B}$ is the Beta function. \nInterestingly, this scale no longer depends on $x$ but only on its norm.\nIn the appendix, we also prove that the resulting vNMSE is bounded by $\\frac{\\pi}{2} - 1$ \\emph{for any $d$}. \nIn practice, we find that the benefit is marginal.\n\nFinally, with a vNMSE guarantee for the unbiased estimate by \\mbox{DRIVE}\\xspace, we obtain the following key result for the\n1b~-~Distributed Mean Estimation problem, whose proof appears in Appendix~\\ref{app:dme}.\nWe note that this result guarantees (e.g., see~\\cite{beznosikov2020biased}) that distributed SGD, where the participants' gradients are compressed with \\mbox{DRIVE}\\xspace, converges at the same asymptotic rate as without compression.\n\n\n\n\\begin{restatable}{theorem}{cordme}\\label{cor:dme}\nAssume $n$ clients, each with its own vector $x_{(\\mathfrak c)}{\\,\\in\\,}\\mathbb R^d$. Let each client independently sample $R_{U,\\mathfrak c}{\\,\\sim\\,}\\mathcal R_U$ and set its scale to $\\frac{\\norm{x_{(\\mathfrak c)}}_2^2}{\\norm{R_{U,\\mathfrak c}\\cdot x_{(\\mathfrak c)}}_1}$. Then, the server average estimate's NMSE satisfies:\n$\\frac{\\mathbb{E}\\brackets{\\norm{x_{\\text{avg}}-\\widehat {x_{\\text{avg}}}}_2^2}}{\\frac{1}{n}{\\cdot}\\sum_{\\mathfrak c=1}^n\\norm{x_{(\\mathfrak c)}}_2^2} {\\,=\\,} \\frac{\\mathit{vNMSE}}{n}$, {where \\textit{vNMSE} is given by Lemma~\\ref{cor:alg1_unbiased_vNMSE} and is bounded by Theorem~\\ref{thm:vNMSEofunbiaseddrive}.} \n\\end{restatable}\nTo the best of our knowledge, \\mbox{DRIVE}\\xspace is the first algorithm with a provable NMSE of $O(\\frac{1}{n})$ for the 1b~-~Distributed Mean Estimation problem (i.e., with $d(1+o(1))$ bits). \nIn practice, we use only $d+O(1)$ bits to implement \\mbox{DRIVE}\\xspace. 
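A compact reference sketch of this encoding at small $d$ (uniform rotation drawn via a QR factorization, unbiased scale $S=\frac{\norm{x}_2^2}{\norm{\mathcal R_U(x)}_1}$; illustrative only, not an optimized implementation) is:
\begin{verbatim}
# Minimal sketch of DRIVE with a uniform random rotation and the unbiased scale
# S = ||x||_2^2 / ||R x||_1 (small d only; illustrative).  Each client sends d
# sign bits plus one scalar; the server averages the decoded vectors, and the
# empirical NMSE should shrink roughly like 1/n.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 256, 32

def uniform_rotation(seed):                       # Haar-distributed orthogonal matrix
    g = np.random.default_rng(seed)
    Q, R = np.linalg.qr(g.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def encode(x, seed):                              # Buffy: rotate, scale, keep signs
    R = uniform_rotation(seed)
    z = R @ x
    return np.dot(x, x) / np.linalg.norm(z, 1), np.sign(z)

def decode(S, signs, seed):                       # Angel: rescale and rotate back
    R = uniform_rotation(seed)                    # same seed = shared randomness
    return R.T @ (S * signs)

X = rng.standard_normal((n_clients, d))           # clients' vectors
est = np.mean([decode(*encode(X[c], 100 + c), 100 + c)
               for c in range(n_clients)], axis=0)
nmse = np.sum((X.mean(axis=0) - est)**2) / np.mean(np.sum(X**2, axis=1))
print(nmse)                                       # roughly (pi/2 - 1)/n_clients
\end{verbatim}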
We use the $d(1+o(1))$ notation to ensure compatibility with the theoretical results; see Appendix \\ref{app:MessageRepresentationLength} for a discussion.\n\n\n\n\\section{Reducing the vNMSE with \\mbox{DRIVE$^+$}\\xspace}\\label{sec:drive_plus}\n\nTo reduce the vNMSE further, we introduce the \\mbox{DRIVE$^+$}\\xspace algorithm.\nIn \\mbox{DRIVE$^+$}\\xspace, we also use a scale parameter, denoted $\\sc=\\sc(x, R)$ to differentiate it from the scale $S$ of \\mbox{DRIVE}\\xspace.\nHere, instead of reconstructing the rotated vector in a symmetric manner, i.e., $\\widehat {\\mathcal R(x)} \\in S \\cdot \\{-1,1\\}^d$, we have that $\\widehat {\\mathcal R(x)} \\in \\sc \\cdot \\{ c_1, c_2\\}^d$ where $c_1,c_2$ are computed using K-Means clustering with $K=2$ over the $d$ entries of the rotated vector $\\mathcal R(x)$. \nThat is, $c_1,c_2$ are chosen to minimize the SSE over any choice of two values. \nThis does not increase the (asymptotic) time complexity over the random rotations considered in this paper as solving K-Means for the special case of one-dimensional data is deterministically solvable in $O(d \\log d)$ (e.g.,~\\cite{gronlund2017fast}).\nNotice that \\mbox{DRIVE$^+$}\\xspace still requires $d(1+o(1))$ bits as we communicate $\\parentheses{\\sc \\cdot c_1, \\sc \\cdot c_2}$ and a single bit per coordinate, indicating its nearest centroid.\nWe defer the pseudocode and analyses of \\mbox{DRIVE$^+$}\\xspace to Appendix~\\ref{app:drive_plus_app}. We show that with proper scaling, for both the 1b~-~Vector Estimation and 1b~-~Distributed Mean Estimation \\mbox{problems, \\mbox{DRIVE$^+$}\\xspace yields guarantees that are at least as strong as those of \\mbox{DRIVE}\\xspace.}\n\n\n\\section{\\mbox{DRIVE}\\xspace with a Structured Random Rotation}\\label{sec:Hadamard}\n\nUniform random rotation generation usually relies on QR factorization (e.g., see \\cite{pytorchqrfact}), which requires $O(d^3)$ time and $O(d^2)$ space. \nTherefore, uniform random rotation can only be used in practice to rotate low-dimensional vectors. This is impractical for neural network architectures with many millions of parameters.\nTo that end, we continue to analyze \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace with the (randomized) Hadamard transform, a.k.a. \\emph{structured} random rotation~\\cite{pmlr-v70-suresh17a,ailon2009fast}, that admits a fast \\emph{in-place}, \n\\ifdefined\n\\mbox{parallelizable, $O(d\\log d)$ time implementation~\\cite{fino1976unified,uberHadamard}. We start with a few definitions.}\n\\else\n\\mbox{parallelizable, $O(d\\log d)$ time implementation~\\cite{fino1976unified,uberHadamard,openSource}. We start with a few definitions.}\n\\fi\n\n\\begin{definition}\nThe Walsh-Hadamard matrix (\\cite{horadam2012Hadamard}) $H_{2^k}\\in\\{+1,-1\\}^{2^{k}\\times 2^{k}}$ is recursively defined via: \\mbox{ \n$ \\small\nH_{2^k}{=} \\begin{pmatrix}\nH_{2^{k-1}} & H_{2^{k-1}} \\\\\nH_{2^{k-1}} & -H_{2^{k-1}}\n\\end{pmatrix} \n$ and $H_1 {=} \\begin{pmatrix} 1 \\end{pmatrix}$.\nAlso, $(\\frac{1}{\\sqrt d}H) \\cdot (\\frac{1}{\\sqrt d}H)^T {=} I$ and $\\mathit{det}(\\frac{1}{\\sqrt d}H) \\in [-1,1]$.} \n\\end{definition}\n\n\\ifdefined\n\\vbox{\n\\fi\n\\begin{definition}\nLet $R_H$ denote the rotation matrix $\\frac{HD}{\\sqrt d}\\in\\mathbb R^{d\\times d}$, where $H$ is a Walsh-Hadamard matrix and $D$ is a diagonal matrix whose diagonal entries are i.i.d. Rademacher random variables (i.e., taking values uniformly in $\\pm 1$). 
\nThen $\\mathcal R_H(x)=R_H \\cdot x = \\frac{1}{\\sqrt d} H \\cdot (x_1 \\cdot D_{11}, \\dots, x_d \\cdot D_{dd})^T$ is the randomized Hadamard transform of $x$ and $\\mathcal R_H^{-1}(x)=R^T_H \\cdot x = \\frac{DH}{\\sqrt d} \\cdot x$ is the inverse transform\n\\end{definition}\n\\ifdefined\n}\n\\fi\n\n\n\n\\subsection{1b~-~Vector Estimation}\n\nRecall that the vNMSE of \\mbox{DRIVE}\\xspace, when minimized using $S=\\frac{\\norm{\\mathcal R (x)}_1}{d}$, is $1 - \\mathbb E\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R,x}^d}$ (see Lemma~\\ref{cor:biased_alg1_vNMSE}). We now bound this quantity of \\mbox{DRIVE}\\xspace with a structured random rotation.\n\\begin{lemma}\\label{lem:hadamard_biased}\nFor any dimension $d\\geq 2$ and vector $x \\in \\mathbb R^d$, the vNMSE of \\mbox{DRIVE}\\xspace with a structured random rotation and scale $S=\\frac{\\norm{\\mathcal R_H (x)}_1}{d}$ is: $1 - \\mathbb E\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_H,x}^d} \\le \\frac{1}{2}$.\\vspace{-1mm}\n\\end{lemma}\n\\begin{proof}\nObserve that for all $i$, \n$\\mathbb E\\brackets{\\abs{\\mathcal R_H(x)_i}} = \\mathbb E\\big[\\big|{\\sum_{j=1}^d \\frac{x_j}{\\sqrt{d}} H_{ij} D_{jj}}\\big|\\big]$. \nSince $\\set{H_{ij} D_{jj} \\mid j\\in[d]}$ are i.i.d. Rademacher random variables we can use the Khintchine inequality~\\cite{khintchine1923dyadische,szarek1976best} which implies that $\\frac{1}{\\sqrt{2d}} \\cdot \\norm x_2\n\\le \\mathbb E\\brackets{\\abs{\\mathcal R_H(x)_i}} \\le \\frac{1}{\\sqrt{d}} \\cdot \\norm x_2$\n(see~\\cite{filmus2012khintchine,latala1994best} for simplified proofs).\nWe conclude that: \n\\ifdefined\n{\n\\begin{equation*}\n\\begin{aligned}\n\\mathbb E\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_H,x}^d} = \\frac{1}{d} \\cdot \\mathbb E\\brackets{ \\norm{\\mathcal R_H(\\breve x)}_1^2} \\ge \\frac{1}{d} \\cdot \\mathbb E\\brackets{ \\norm{\\mathcal R_H(\\breve x)}_1}^2 \\ge \\frac{1}{d} \\cdot \\Big(\\sum_{i=1}^d \\frac{1}{\\sqrt{2d}}\\Big)^2 = \\frac{1}{2}~.\n\\end{aligned} \n\\end{equation*}\n}\n\\else\n\\\\\n\\hspace*{1.2cm}\n$\\mathbb E\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_H,x}^d} = \\frac{1}{d} \\cdot \\mathbb E\\brackets{ \\norm{\\mathcal R_H(\\breve x)}_1^2} \\ge \\frac{1}{d} \\cdot \\mathbb E\\brackets{ \\norm{\\mathcal R_H(\\breve x)}_1}^2 \\ge \\frac{1}{d} \\cdot (\\sum_{i=1}^d \\frac{1}{\\sqrt{2d}})^2 = \\frac{1}{2}~.$\\\\\n\\fi\nThis bound is sharp since for $d\\ge 2$ we have that \n$\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_H,x}^d = \\frac{1}{2}$ for $x=(\\frac{1}{\\sqrt 2}, \\frac{1}{\\sqrt 2},0,\\ldots,0)^T$~.\n\\end{proof}\n\\vspace{-1mm}\nObserve that unlike for the uniform random rotation, $\\mathbb{E}\\brackets{\\ensuremath{\\mathcal L}\\xspace_{\\mathcal R_H,x}^d}$ depends on $x$. We also note that this bound of $\\frac{1}{2}$ applies to \\mbox{DRIVE$^+$}\\xspace (with scale $\\sc=1$) as we show in Appendix~\\ref{app:1bDrive+}.\n\n\\subsection{1b~-~Distributed Mean Estimation}\\label{sec:dme_hadamard_subsec}\n\nFor an arbitrary $x \\in \\mathbb R^d$ and $\\mathcal R$, \nand in particular for $\\mathcal{R}_H$, \nthe estimates of \\mbox{DRIVE}\\xspace cannot be made unbiased. For example,\nfor $x=(\\frac{2}{3}, \\frac{1}{3})^T$ we have that $\\text{sign}(\\mathcal R_H(x))=(D_{11},D_{11})^T$ and thus $\\widehat{\\mathcal R_H(x)} = S\\cdot (D_{11},D_{11})^T$. 
\nThis implies that $\\widehat x = \\mathcal R_H^{-1}(\\widehat{\\mathcal R_H(x)}) = \\frac{1}{\\sqrt 2} \\cdot D\\cdot H\\cdot S\\cdot (D_{11},D_{11})^T=\\sqrt 2 \\cdot S\\cdot D\\cdot (D_{11}, 0)^T =$ $\\sqrt 2 \\cdot S\\cdot (D_{11}^2,0)^T=(\\sqrt 2 \\cdot S ,0)^T$. \\mbox{Therefore, $\\mathbb E[\\widehat x]\\neq x$ regardless of the scale.}\n\n\nNevertheless, we next provide evidence that, when the input vector is high dimensional and admits finite moments, a structured random rotation performs similarly to a uniform random rotation, yielding all the appealing aforementioned properties. Indeed, it is a common observation that machine learning workloads and, in particular, neural network gradients are governed by such distributions (e.g., lognormal \\cite{chmiel2020neural} or normal \\cite{banner2018post,ye2020accelerating}).\n\nWe seek to show that at high dimensions, the distribution of $\\mathcal R_H(x)$ is sufficiently similar to that of $\\mathcal R_U(x)$. \nBy definition, the distribution of $\\mathcal R_U(x)$ is that of a point distributed uniformly at random on a sphere. \nPrevious studies of this distribution for high dimensions (e.g., \\cite{spruill2007asymptotic,diaconis1987dozen,rachev1991approximate,stam1982limit}) have shown that individual coordinates of $\\mathcal R_U(x)$ converge to the same normal distribution and that these coordinates are ``weakly'' dependent in the sense that the joint distribution of every $O(1)$-sized subset of coordinates is similar to that of independent normal variables for large $d$. \n\nWe hereafter assume that $x=(x_1, \\ldots, x_d)$, where the $x_i$s are i.i.d. and that $\\mathbb{E}[x_j^2]=\\sigma^2$ and $\\mathbb{E}[\\abs{x_j}^3]=\\rho < \\infty$ for all $j$. We show that $\\mathcal R_H (x)_i$ converges to the same normal distribution for all $i$.\nLet $F_{i,d}(x)$ be the cumulative distribution function (CDF) of $\\frac{1}{\\sigma} \\cdot \\mathcal R_H (x)_i$ and $\\Phi$ be the CDF of the standard normal \\mbox{distribution.\nThe following lemma, proven in Appendix~\\ref{app:proofOfBerryEsseen}, shows the convergence.}\n\\begin{restatable}{lemma}{proofOfBerryEsseen}\\label{lem:proofOfBerryEsseen}\n\\mbox{For all $i$, $\\mathcal R_H (x)_i$ converges to a normal variable: $\\sup_{x\\in \\mathbb R}\\abs{F_{i,d}(x)-\\Phi(x)} \\le \\frac{0.409\\cdot \\rho}{\\sigma^3 \\sqrt d}$.}\n\\end{restatable}\n\n\\ifdefined\n\\vbox{\n\\fi\nWith this result, we continue to lay out evidence for the ``weak dependency'' among the coordinates. We do so by calculating the moments of their joint distribution in increasing subset sizes, showing that these moments converge to those of independent normal variables. Previous work has shown that a structured random rotation on vectors with specific distributions results in ``weakly dependent'' normal variables. This line of research~\\cite{rader1969new,thomas2013parallel,herendi1997fast} utilized the Hadamard transform for a different purpose. Their goal was to develop a computationally cheap method to generate independent normally distributed variables from simpler (e.g., uniform) distributions.
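For concreteness, the following is a minimal NumPy sketch of the randomized Hadamard rotation used in this section; it is an illustrative reconstruction only (the function names and the iterative in-place transform are our own choices, not the implementation evaluated later), but it reflects the $O(d\\log d)$ cost and the Rademacher sign flips by $D$ discussed above.
\\begin{verbatim}
import numpy as np

def fwht(v):
    # In-place fast Walsh-Hadamard transform of a length-2^k
    # vector, i.e., v <- H v, in O(d log d) time.
    d = len(v)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v

def randomized_hadamard(x, seed=0):
    # R_H(x) = (1 / sqrt(d)) * H * D * x, with D an i.i.d.
    # Rademacher diagonal; d must be a power of two.
    rng = np.random.default_rng(seed)
    diag = rng.choice([-1.0, 1.0], size=len(x))
    return fwht(x * diag) / np.sqrt(len(x)), diag

def inverse_randomized_hadamard(y, diag):
    # R_H^{-1}(y) = D * H * y / sqrt(d)  (H is symmetric).
    return diag * fwht(y.copy()) / np.sqrt(len(y))
\\end{verbatim}
A round trip \\texttt{inverse\\_randomized\\_hadamard(*randomized\\_hadamard(x))} recovers $x$ up to floating-point error. With this computational picture in place, we return to the weak-dependency argument.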
We apply their analysis to our setting.\n\\ifdefined\n}\n\\fi\n\n\n\\mbox{We partially rely on the following observation that the Hadamard matrix satisfies.}\n\n\n\\begin{observation}\\label{obs:Hadamard_rows}(\\cite{rader1969new})\nThe Hadamard product (coordinate-wise product), $H_{\\angles{i}}\\circ H_{\\angles{\\ell}}$,\nof two rows $H_{\\angles{i}},H_{\\angles{\\ell}}$ in the Hadamard matrix yields another row at the matrix $H_{\\angles{i}}\\circ H_{\\angles{\\ell}} = H_{\\angles{1+(i-1) \\oplus (\\ell-1)}}$. Here, $(i-1) \\oplus (\\ell-1)$ is the bitwise {xor of the $(\\log d)$-sized binary representation of $(i-1)$ and $(\\ell-1)$}. \nIt follows that $\\sum_{j=1}^d H_{ij}H_{\\ell j}=\\sum_{j=1}^d (H_{\\angles{i}}\\circ H_{\\angles{\\ell}})_{{j}} = \\sum_{j=1}^d H_{1+(i-1) \\oplus (\\ell-1), j}$~.\n\\end{observation}\n\n\nWe now analyze the moments of the rotated variables, starting with the following observation. It follows from {the sign-symmetry of $D$ and matches the joint distribution of i.i.d. normal variables.}\n\\begin{observation}\nAll odd moments containing $\\mathcal R_H(x)$ entries are $0$. That is,\\\\ \n$\\hspace*{1.7cm}\\forall q\\in\\mathbb N, \\forall {i_1,\\ldots,i_{2q+1}}\\in\\set{1,\\ldots,d}:\\mathbb E\\brackets{\\mathcal R_H(x)_{i_1}\\cdot \\ldots\\cdot \\mathcal R_H(x)_{i_{2q+1}}}=0$.\n\\end{observation}\n\nTherefore, we need to examine only even moments. We start with showing that the second moments also match with the distribution of independent normal variables.\n\\begin{lemma}\nFor all $i\\neq \\ell$ it holds that $\\mathbb{E} \\brackets{(\\mathcal R_H \\cdot x)_i \\cdot (\\mathcal R_H \\cdot x)_\\ell} = 0$, whereas $\\mathbb{E} \\brackets{(\\mathcal R_H \\cdot x)_i^2} = \\sigma^2$.\n\\end{lemma}\n\\begin{proof}\n{Since $\\set{D_{jj}\\mid j\\in\\set{1,\\ldots,d}}$ are sign-symmetric and i.i.d., }\n$\n\\mathbb{E} \\big[(\\mathcal R_H \\cdot x)_i \\cdot (\\mathcal R_H \\cdot x)_\\ell\\big] = \\mathbb{E} \\big[{\\frac{1}{d} (\\sum_{j=1}^d x_j H_{ij} D_{jj}) \\cdot (\\sum_{j=1}^d x_j H_{\\ell j} D_{jj})}\\big] = \\mathbb{E} \\brackets{x_j^2} \\cdot \\frac{1}{d} \\cdot \\sum_{j=1}^d H_{ij} H_{\\ell j}$.\nNotice that $\\sum_{j=1}^d H_{1j}{\\,=\\,}d$ \\mbox{ and $\\sum_{j=1}^d H_{ij}=0$ for all $i>1$. Thus, by \\Cref{obs:Hadamard_rows} we get $0$ if $i \\neq \\ell$ and $\\sigma^2$ otherwise.\\qedhere} \n\\end{proof}\nWe have established that the coordinates are pairwise uncorrelated. Similar but more involved analysis shows that the same trend continues under the assumption of the existence of $x$'s higher moments. In Appendix \\ref{app:Hadamard_4th_moments} we analyze the 4th moments showing that they indeed approach the 4th moments of independent normal variables with a rate of $\\frac{1}{d}$; the reader is referred to \\cite{rader1969new} for further intuition and higher moments analysis.\nWe therefore expect that using \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace with Hadamard transform will yield similar results to that of a uniform random rotation at high dimensions and when the input vectors respect the finite moments assumption.\n\n\\begin{figure}[]\n\\centering\n\\centerline{\\includegraphics[width=\\textwidth, trim=70 93 70 0, clip]{figures\/hadamard_vs_rr_4.pdf}}\n\\ifdefined\\else\\vspace*{-2mm}\\fi\n\\caption{Distributed mean estimation comparison: each data point is averaged over $10^4$ trials. 
In each trial, \\emph{the same} (randomly sampled) vector is sent by $n=10$ clients.}\n\\ifdefined\\else\\vspace*{-3mm}\\fi\n\\label{fig:theory_is_cool}\n\\end{figure}\n\nIn addition to the theoretical evidence, in Figure \\ref{fig:theory_is_cool}, we show experimental results comparing the measured NMSE for the 1b~-~Distributed Mean Estimation problem with $n=10$ clients (all given the \\emph{same} vector so biases do not cancel out) for \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace using both uniform and structured random rotations over three different distributions. The results indicate that all variants yield similar NMSEs in reasonable dimensions, \\mbox{in line with the theoretical guarantee of Corollary \\ref{cor:deviation_res_3} and Theorem~\\ref{cor:dme}.}\n\n\n\n\n\n\n\n\n\n\n\\section{Evaluation}\\label{sec:Evaluation}\n\nWe evaluate \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace, comparing them to standard and recent state-of-the-art techniques. We consider classic distributed learning tasks as well as federated learning tasks (e.g., where the data distribution is not i.i.d. and clients may change over time). All the distributed tasks are implemented over PyTorch \\cite{NIPS2019_9015} and all the federated tasks are implemented over TensorFlow Federated \\cite{tensorflowfed}.\nWe focus our comparison on vector quantization algorithms and recent sketching techniques and exclude sparsification methods (e.g.,~\\cite{konecy2017federated, WangSLCPW18, NEURIPS2019_d9fbed9d,NEURIPS2018_b440509a}) and methods that involve client-side memory since these can often work in conjunction with our algorithms.\n\n\\vbox{\n\\paragraph{\\textbf{{Datasets}.\\quad}}\\label{p:datasets}\nWe use MNIST~\\cite{lecun1998gradient,lecun2010mnist}, EMNIST~\\cite{cohen2017emnist}, CIFAR-10 and CIFAR-100 \\cite{krizhevsky2009learning} for image classification tasks; a next-character-prediction task using the Shakespeare dataset \\cite{shakespeare}; and a next-word-prediction \\mbox{task using the Stack Overflow dataset \\cite{stackoverflowdb}. Additional details appear in Appendix~\\ref{app:utilizedAssets}.}\n\n\\paragraph{\\textbf{{Algorithms}.\\quad}} \n\n\\addtocounter{footnote}{-1}\n\\new{Since our focus is on the distributed mean estimation problem and its federated and distributed learning applications, we run \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace with the unbiased scale quantities.\\footnotemark}\n}\n\n\nWe compare against several alternative algorithms: (1) \\emph{FedAvg} \\cite{mcmahan2017communication} that uses the full vectors (i.e., each coordinate is represented using a 32-bit float); (2) Hadamard transform followed by 1-bit stochastic quantization (SQ) \\cite{pmlr-v70-suresh17a,konevcny2018randomized}; (3) Kashin's representation followed by 1-bit stochastic quantization \\cite{caldas2018expanding}; (4) \\emph{TernGrad}~\\cite{wen2017terngrad}, which clips coordinates larger than 2.5 times the standard deviation, then performs 1-bit stochastic quantization on the absolute values and separately sends their signs and the maximum coordinate for scale (we note that TernGrad is a low-bit variant of a well-known algorithm called \\emph{QSGD}~\\cite{NIPS2017_6c340f25}, and we use TernGrad since we found it to perform better in our experiments);\\footnotetext{For \\mbox{DRIVE}\\xspace the scale is $S=\\frac{\\norm{x}_2^2}{\\norm{\\mathcal R(x)}_1}$ (see Theorem~\\ref{theorem:drive_is_unbised}). 
For \\mbox{DRIVE$^+$}\\xspace the scale is $\\sc=\\frac{\\norm{x}_2^2}{\\norm{c}_2^2}$, where $c\\in\\set{c_1,c_2}^d$ is the vector indicating the nearest centroid to each coordinate in $\\mathcal R(x)$ (see Section~\\ref{sec:drive_plus}).}\\footnote{\\new{When restricted to two quantization levels, TernGrad is identical to QSGD's max normalization variant with clipping (slightly better due to the ability to represent 0).}} and (5-6)~\\emph{Sketched-SGD}~\\cite{ivkin2019communication} and \\emph{FetchSGD}~\\cite{rothchild2020fetchsgd}, which are both count-sketch \\cite{charikar2002finding} based algorithms designed for distributed and federated learning, respectively.\n\n\n\n\n \\begin{table}[]\n \\resizebox{\\textwidth}{!}{%\n \\begin{tabular}{r|l|l|l|l|l|l|}\n \\cline{2-7}\n \\multicolumn{1}{l|}{Dimension ($d$)} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Hadamard\\\\ + 1-bit SQ\\end{tabular}} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Kashin\\\\ + 1-bit SQ\\end{tabular}} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Drive \\\\ (Uniform)\\end{tabular}} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Drive$^+$\\\\ (Uniform)\\end{tabular}} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Drive\\\\ (Hadamard)\\end{tabular}} & \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Drive$^+$ \\\\ (Hadamard)\\end{tabular}} \\\\ \\hline\n \\multicolumn{1}{|r|}{128} & 0.5308, {\\textit{0.34}} & 0.2550, \\textit{2.12} & 0.0567, \\textit{40.4} & \\textbf{0.0547}, \\textit{40.7} & 0.0591, \\textit{0.36} & 0.0591, \\textit{0.72} \\\\ \\hline\n \\multicolumn{1}{|r|}{8,192} & 1.3338, {\\textit{0.57}} & 0.3180, \\textit{3.42} & \\textbf{0.0571}, \\textit{5088} & \\textbf{0.0571}, \\textit{5101} & \\textbf{0.0571}, {\\textit{0.60}} & \\textbf{0.0571}, \\textit{1.06} \\\\ \\hline\n \\multicolumn{1}{|r|}{524,288} & 2.1456, {\\textit{0.79}} & 0.3178, \\textit{4.69} & --- & --- & \\textbf{0.0571}, \\textit{0.82} & \\textbf{0.0571}, \\textit{1.35} \\\\ \\hline\n \\multicolumn{1}{|r|}{33,554,432} & 2.9332, {\\textit{27.1}} & 0.3179, \\textit{332} & --- & --- & \\textbf{0.0571}, {\\textit{27.2}} & \\textbf{0.0571}, \\textit{37.8} \\\\ \\hline\n \\end{tabular}%\n }\n \\caption{Empirical NMSE and average per-vector encoding time (in milliseconds, on an RTX 3090 GPU) for distributed mean estimation with $n=10$ clients (same as in Figure~\\ref{fig:theory_is_cool}) and Lognormal(0,1) distribution. Each entry is a (NMSE, \\textit{time}) tuple and the most accurate result is highlighted in \\textbf{bold}.\n }\\label{tbl:weAreFast}\n \\ifdefined\\else\\vspace*{-2mm}\\fi\n \\end{table} \n\nWe note that Hadamard with 1-bit stochastic quantization is our most fair comparison, as it uses the same number of bits as \\mbox{DRIVE$^+$}\\xspace (and slightly more than \\mbox{DRIVE}\\xspace) and has similar computational costs. This contrasts with Kashin's representation, where both the number of bits and the computational costs are higher. \nFor example, a standard TensorFlow Federated implementation (e.g., see ``\\textsc{class KashinHadamardEncodingStage}'' hyperparameters at \\cite{tensorflowfedkashincode}) uses a minimum of $1.17$ bits per coordinate, and three iterations of the algorithm resulting in five Hadamard transforms for each vector.\nAlso, note that TernGrad uses an extra bit per coordinate for sending the sign. Moreover, \\mbox{the clipping performed by TernGrad is a heuristic procedure, which is orthogonal to our work. 
}\n\n\nFor each task, we use a subset of datasets and the most relevant competition.\nDetailed configuration information and additional results appear in Appendix \\ref{appendix:additional_simulations}.\nWe first evaluate the vNMSE-Speed tradeoffs and then proceed to federated and distributed learning experiments.\n\n\n\n\n\n\n\n\\paragraph{\\textbf{vNMSE-Speed Tradeoff}.\\quad}\nAppearing in Table~\\ref{tbl:weAreFast}, the results show that our algorithms offer the lowest NMSE and that the gap increases with the dimension. \nAs expected, \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace with uniform rotation are more accurate for small dimensions but are significantly slower.\nSimilarly, \\mbox{DRIVE}\\xspace is as accurate as \\mbox{DRIVE$^+$}\\xspace, and both are significantly more accurate than Kashin (by a factor of 4.4$\\times$-5.5$\\times$) and Hadamard (9.3$\\times$-51$\\times$) with stochastic quantization. \nAdditionally, \\mbox{DRIVE}\\xspace is 5.7$\\times$-12$\\times$ faster than Kashin and about as fast as Hadamard.\nIn Appendix~\\ref{app:additional_speed_results} we discuss the result\n, give the complete experiment specification, and provide measurements on a commodity machine. \n\n\\new{We note that the above techniques, including \\mbox{DRIVE}\\xspace, are more computationally expensive than linear-time solutions like TernGrad. Nevertheless, \\mbox{DRIVE}\\xspace's computational overhead becomes insignificant for modern learning tasks. For example, our measurements suggest that it can take 470~ms for computing the gradient on a ResNet18 architecture (for CIFAR100, batch size = 128, using NVIDIA GeForceGTX 1060 (6GB) GPU) while the encoding of \\mbox{DRIVE}\\xspace (Hadamard) takes 2.8~ms. That is, the overall computation time is only increased by 0.6\\% while the error reduces significantly. Taking the transmission and model update times into consideration would reduce the importance of the compression time further.}\n\n\n\\paragraph{\\textbf{{Federated Learning}.\\quad}}\nWe evaluate over four tasks: (1)~EMNIST over customized CNN architecture with two convolutional layers with ${\\approx}1.2M$ parameters \\cite{caldas2019leaf}; (2)~CIFAR-100 over \\mbox{ResNet-18}~\\cite{krizhevsky2009learning}\n; (3)~a~next-character-prediction task using the Shakespeare dataset \\cite{mcmahan2017communication}; (4)~a~next-word-prediction task using the Stack Overflow dataset \\cite{reddi2020adaptive}. Both (3) and (4) use LSTM recurrent models \\cite{Hochreiter1997LongSM} with\n${\\approx}820K$ and ${\\approx}4M$ parameters,\nrespectively. We use code, client partitioning, models, hyperparameters, and validation metrics from the federated learning benchmark of \\cite{reddi2020adaptive}. \n\n\\new{The results are depicted in Figure~\\ref{fig:federated_dnn}. \nWe observe that in all tasks, \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace have accuracy that is competitive with that of the baseline, FedAvg. In CIFAR-100, TernGrad and \\mbox{DRIVE}\\xspace provide the best accuracy. For the other tasks, \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace have the best accuracy, while the best alternative is either Kashin + 1-bit SQ or TernGrad, depending on the task. Hadamard + 1-bit SQ, which is the most similar to our algorithms (in terms of both bandwidth and compute), provides lower accuracy in all tasks. 
Additional details and hyperparameter \\mbox{configurations are presented in Appendix~\\ref{app:fl_details}.}}\n\n\n\\paragraph{\\textbf{{Distributed CNN Training}.\\quad}}\nWe evaluate distributed CNN training with 10 clients in two configurations: (1) CIFAR-10 dataset with ResNet-9; (2) CIFAR-100 with ResNet-18 \\cite{krizhevsky2009learning, he2016deep}. \n\\new{In both tasks, \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace have similar accuracy to FedAvg, closely followed by Kashin + 1-bit SQ. The other algorithms are less accurate, with Hadamard + 1-bit SQ being better than Sketched-SGD and TernGrad for both tasks.}\nAdditional details and {hyperparameter configurations are presented in Appendix \\ref{app:dl_details}. Figure~\\ref{fig:distributed-dnn} depicts the results.}\n\n\\begin{figure}[]\n\\centering\n\\centerline{\\includegraphics[width=\\textwidth, trim=10 45 0 0, clip]{figures\/Federated-DNN.pdf}}\n\\ifdefined \n \\vspace*{-2mm}\n\\fi\n\\caption{\n Accuracy per round on various federated learning tasks. Smoothing is done using a rolling mean with a window size of 150. The second row zooms-in on the last 50 rounds.}\n\\label{fig:federated_dnn}\n\\ifdefined \n \\vspace*{-2mm}\n\\fi\n\\end{figure}\n\n\n\\begin{figure}[]\n\\centering\n\\centerline{\\includegraphics[width=\\textwidth, trim=0 45 0 0, clip]{figures\/Distributed-DNN.pdf}}\n\\ifdefined \n \\vspace*{-2mm}\n\\fi\n\\caption{\n \n \\mbox{Accuracy per round on distributed learning tasks, with a zoom-in on the last 50 rounds.}\n}\n\\ifdefined \n \\vspace*{-4mm}\n\\fi\n\\label{fig:distributed-dnn}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\paragraph{\\textbf{{Evaluation Summary}.\\quad}}\nOverall, it is evident that \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace consistently offer markedly favorable results in comparison to the alternatives in our setting. Kashin's representation appears to offer the best competition, albeit at somewhat higher computational complexity and bandwidth requirements. The lesser performance of the sketch-based techniques is attributed to the high noise of the sketch under such a low ($d(1+o(1))$ bits) communication requirement. 
This is because the \\mbox{number of counters they can use is too low, making too many coordinates map into each counter.}\n\\new{In Appendix~\\ref{subsection:power_iteration_appendix}, we also compare \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace to state of the art techniques over K-Means and Power Iteration tasks for 10, 100, and 1000 clients, yielding similar trends.}\n\n\n\\ifdefined\n\\begin{table}[]\n\\resizebox{\\textwidth}{!}{%\n\\renewcommand{\\arraystretch}{1.6}\n\\begin{tabular}{c|c|c|c|c|}\n\\cline{2-5}\n\\multirow{2}{*}{} & \\multirow{1}{*}{Scale} & \\multicolumn{3}{c|}{Rotation} \\\\ \\cline{3-5} Problem\n & $S$ & \\multicolumn{2}{c|}{Uniform} & Hadamard \\\\ \\hline\n\\multicolumn{1}{|l|}{1b - VE} & $\\frac{\\norm{\\mathcal R(x)}_1}{d}$ & \\multicolumn{2}{c|}{vNMSE $= \\parentheses{1 - \\frac{2}{\\pi}} \\parentheses{ {1-\\frac{1}{d}}}$} & vNMSE $\\le \\frac{1}{2}$ \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{2}{*}{1b - DME}} & \\multirow{2}{*}{$\\frac{\\norm{x}_2^2}{\\norm{\\mathcal R(x)}_1}$} & \\multicolumn{2}{c|}{$(i)$ $d \\ge 2 \\implies$ NMSE $\\le \\frac{1}{n} \\cdot 2.92$} & \\multirow{2}{*}{---} \\\\\n\\multicolumn{1}{|l|}{} & & \\multicolumn{2}{l|}{$(ii)$ \n$d \\ge 135 \\implies$ NMSE $\\le \\frac{1}{n} \\cdot \\left(\\frac{\\pi}{2} - 1 + \\sqrt{\\frac{{(6\\pi^3-12\\pi^2)}\\cdot\\ln d+1}{d}}\\right)$} & \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{\\new{Summary of the proven error bounds for \\mbox{DRIVE}\\xspace.}}\n\\label{tab:summary_proven_bounds}\n\\end{table}\n\\else\n\\begin{table}[]\n\\resizebox{\\textwidth}{!}{%\n\\renewcommand{\\arraystretch}{1.6}\n\\begin{tabular}{c|c|c|c|c|}\n\\cline{2-5}\n\\multirow{2}{*}{} & \\multirow{1}{*}{Scale} & \\multicolumn{3}{c|}{Rotation} \\\\ \\cline{3-5} Problem\n & $S$ & \\multicolumn{2}{c|}{Uniform} & Hadamard \\\\ \\hline\n\\multicolumn{1}{|l|}{1b - VE} & $\\frac{\\norm{\\mathcal R(x)}_1}{d}$ & \\multicolumn{2}{c|}{vNMSE $= \\parentheses{1 - \\frac{2}{\\pi}} \\parentheses{ {1-\\frac{1}{d}}}$} & vNMSE $\\le \\frac{1}{2}$ \\\\ \\hline\n\\multicolumn{1}{|l|}{\\multirow{1}{*}{1b - DME}} & \\multirow{1}{*}{$\\frac{\\norm{x}_2^2}{\\norm{\\mathcal R(x)}_1}$} & \\multicolumn{2}{c|}{NMSE $\\le \\frac{1}{n} \\cdot 2.92$;\\quad $d \\ge 135 \\implies$ NMSE $\\le \\frac{1}{n} \\cdot \\left(\\frac{\\pi}{2} - 1 + \\sqrt{\\frac{{(6\\pi^3-12\\pi^2)}\\cdot\\ln d+1}{d}}\\right)$} & \\multirow{1}{*}{---}\\\\ \\hline\n\\end{tabular}%\n}\n\\caption{\\new{Summary of the proven error bounds for \\mbox{DRIVE}\\xspace.}}\n\\label{tab:summary_proven_bounds}\n\\end{table}\n\\fi\n\n\\section{\\new{Discussion}}\n\n\\ifdefined\nIn this section, we overview few limitations and future research directions for \\mbox{DRIVE}\\xspace.\n\\fi\n\n\\paragraph{\\textbf{{Proven Error Bounds}.\\quad}} We summarize the proven error bounds in Table \\ref{tab:summary_proven_bounds}. Since \\mbox{DRIVE}\\xspace (Hadamard) is generally not unbiased (as discussed in Section \\ref{sec:dme_hadamard_subsec}), we cannot establish a formal guarantee for the 1b - DME problem when using Hadamard. It is a challenging research question whether there exists \\mbox{other structured rotations with low computational complexity and stronger guarantees.}\n\n\\paragraph{\\textbf{{Input Distribution Assumption}.\\quad}} The distributed mean estimation analysis of our Hadamard-based variants is based on an assumption (Section \\ref{sec:dme_hadamard_subsec}) about the vector distributions. 
While machine learning workloads, and DNN gradients in particular (e.g.,~\\cite{chmiel2020neural,banner2018post,ye2020accelerating}), were observed to follow such distributions, this assumption may not hold for other applications.\n\nFor such cases, we note that \\mbox{DRIVE}\\xspace is compatible with the error feedback (EF) mechanism~\\cite{seide20141, karimireddy2019error}, which ensures convergence and recovers the convergence rate of non-compressed SGD. Specifically, as is evident from Lemma \\ref{lem:hadamard_biased}, any scale $\\frac{\\norm{\\mathcal R (x)}_1}{d} \\le S \\le 2 \\cdot \\frac{\\norm{\\mathcal R (x)}_1}{d}$ is sufficient to respect the \\emph{compressor} (i.e., \\emph{bounded variance}) assumption. For completeness, in Appendix \\ref{app:exp:ef}, we perform EF experiments comparing \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace to other compression techniques that use EF.\n\n\n\\paragraph{\\textbf{{Varying Communication Budget}.\\quad}} \nUnlike some previous works, we do not establish guarantees for our algorithms when more than one bit per coordinate is used. Extending \\mbox{DRIVE}\\xspace to other communication budgets and understanding the resulting guarantees is thus an interesting direction for future work. We refer the reader to~\\cite{vargaftik2021communication} for initial steps in that direction. \n\n\n\\paragraph{\\textbf{{Entropy Encoding}.\\quad}}\n\nEntropy encoding methods (such as Huffman coding) can further compress vectors of values when the values are not uniformly distributed. We have compared DRIVE against stochastic quantization methods using entropy encoding in the challenging setting for DRIVE where all vectors are identical (see Table \\ref{tbl:weAreFast} for further description). The results appear in Appendix \\ref{subsec:app:ee}, where DRIVE still outperforms these methods. We also note that, when computation allows and when using \\mbox{DRIVE}\\xspace with multiple bits per entry, \\mbox{DRIVE}\\xspace can also be enhanced by entropy encoding techniques. We describe some initial results for this setting in~\\cite{vargaftik2021communication}.\n\n\n\n\\paragraph{\\textbf{{Structured Data}.\\quad}} When the data is highly sparse, skewed, or otherwise structured, one can leverage that for compression. We note that some techniques that exploit sparsity or structure can be used in conjunction with our techniques. For example, one may transmit only non-zero entries or Top-K entries \\mbox{while compressing these using \\mbox{DRIVE}\\xspace to reduce communication overhead even further.}\n\n\\paragraph{\\textbf{{Compatibility With Distributed All-Reduce Techniques}.\\quad}} Quantization techniques, including \\mbox{DRIVE}\\xspace, may introduce overheads in the context of All-Reduce (depending on the network architecture and communication patterns). In particular, if every node in a cluster uses a different rotation, \\mbox{DRIVE}\\xspace will not allow for efficient in-path aggregation without decoding the vectors. Further, the computational overhead of the receiver increases by a $\\log d$ factor as each vector has to be decoded separately before an average can be computed. It is an interesting future direction for \\mbox{DRIVE}\\xspace to understand how to minimize such potential overheads. For example, one can consider bucketizing co-located workers and applying DRIVE's quantization only to cross-rack traffic. \n\n\n\n\\section{Conclusions} \\label{sec:conclusions}\n{\nIn this paper, we studied the vector and distributed mean estimation problems.
These problems are applicable to distributed and federated learning, where clients communicate real-valued vectors (e.g., gradients) to a server for averaging.\nTo the best of our knowledge, our algorithms are the first with a provable error of $O(\\frac{1}{n})$ for the 1b~-~Distributed Mean Estimation problem (i.e., with $d(1+o(1))$ bits). \nAs shown in~\\cite{iaab006}, any algorithm that uses $O(d)$ shared random bits (e.g., our Hadamard-based variant) has a vNMSE of $\\Omega(1)$, i.e., \\mbox{DRIVE}\\xspace and \\mbox{DRIVE$^+$}\\xspace are asymptotically optimal; additional discussion is given in Appendix~\\ref{app:lower_bounds}.\nOur experiments, carried over various tasks and datasets, indicate that our algorithms improve over the state of the art. \nAll the results presented in this paper are fully reproducible by our source code, \navailable at~\\cite{openSource}.\n}\n\n\n\n\n\\ifspacinglines\n\\fi\n\n\n\n\n\\begin{ack}\nMM was supported in part by NSF grants CCF-2101140, CCF-2107078, CCF-1563710, and DMS-2023528. MM and RBB were supported in part by a gift to the Center for Research on Computation and Society at Harvard University. AP was supported in part by the Cyber Security Research Center at Ben-Gurion University of the Negev.\nWe thank Moshe Gabel, Mahmood Sharif, Yuval Filmus and, the anonymous reviewers for helpful comments and suggestions.\n\\end{ack}\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\nTo incorporate knowledge in real-world question-answering systems, knowledge base question answering (KBQA) utilizes a background knowledge base (KB) as the source of answers to factoid natural language questions. Leveraging the versatility of KB query languages like SPARQL \\citep{prud2008sparql}, many previous works \\citep{unger2012template, yahya2012natural} adopted a semantic parsing paradigm for KBQA, in which questions are converted to equivalent SPARQL queries and answers are given by executing the queries in KB. Regarding the intrinsic graph structure of SPARQLs, some works further reduced such procedure as generating the query graph of SPARQLs w.r.t. questions. However, these methods either require auxiliary tools (e.g. AMR in \\citealp{kapanipathi2021leveraging}, constituency tree in \\citealp{hu2021edg}, dependency tree in \\citealp{hu2017answering}) causing potential cascading errors, or rely on predefined templates \\citep{cui2019kbqa, athreya2021template} limiting their expressiveness and generalization abilities.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{pics\/two_stage_2.pdf}\n \\caption{Generating a query graph (bottom) by two stages to represent the SPARQL (right-top). At graph structure generation stage, node-extraction generates all graph nodes while graph-composition adds unlabeled edges between proper nodes. Then, the relation extraction stage decides the specific predicate of each edge.}\n \\label{fig:two_stage}\n\\end{figure}\n\n\\begin{comment}\nTo address these, efforts were made on devising independent pipelines for query graph construction \\citep{lin2021deep}. As in Figure \\ref{fig:two_stage}, these pipelines usually involve a node extraction (NE) module to detect the mentions of all nodes in query graph and link entity mentions, a graph composition (GC) module to connect related nodes given by NE, and a relation extraction (RE) module deciding the KB predicate corresponding to each edge added in GC. 
Here, we observe strong causal effects between NE and GC, e.g. edges connected by GC are valid only between the node mentions extracted in NE, making GC decisions highly dependent on NE. To this regard, previous works \\citep{zhang2021namer, ravishankar2021two} that perform NE and GC separately without causal-modelling may fell short in deeply comprehending the correlated tasks and accurately generating query graphs.\n\\end{comment}\nTo address these, efforts were made on devising independent pipelines for query graph construction \\citep{lin2021deep}. As in Figure \\ref{fig:two_stage}, these pipelines usually involve a node extraction (NE) module to detect the mentions of all nodes in query graph and link entity mentions, a graph composition (GC) module to connect related nodes given by NE, and a relation extraction (RE) module deciding the KB predicate corresponding to each edge added in GC. In this framework, two drawbacks exist in previous works: 1) we observe strong causal effects between NE and GC, e.g. edges connected by GC are valid only between the node mentions extracted in NE, making GC decisions highly dependent on NE. To this regard, previous works \\citep{zhang2021namer, ravishankar2021two} that perform NE and GC separately without causal-modelling may fall short in deeply comprehending the correlated tasks and accurately generating query graphs. 2) GC is commonly modelled as a sequence-generation in prior methods, either through generative decoder \\citep{shen2019multi, chen2021outlining} or via stage-transition \\citep{yih2015semantic, hu2018state}. However, sequence-modelling generally undergoes sequence ambiguity and exposure bias \\citep{zhang-etal-2019-bridging} that harms model accuracy.\n\n\\begin{comment}\nIn this work, we formalize the generation of query graph in a two-staged manner as in Figure \\ref{fig:two_stage}. At the first stage, we adopt a novel causal-enhanced table-filling model to jointly complete NE and GC, resulting in a query graph structure representing the connectivity of all nodes. More specifically, inspired by \\citet{chen-etal-2020-exploring-logically}, we utilize a label transfer mechanism to facilitate the acquisition of causality between NE and GC. Further, unlike prior methods that generate graph structure iteratively either through generative decoder \\citep{shen2019multi, chen2021outlining} or via stage-transition \\citep{yih2015semantic, hu2018state}, we apply a table-filler to decode all edges simultaneously. In turn, it naturally circumvents the sequence ambiguity and exposure bias \\citep{zhang-etal-2019-bridging} of iterative decoding. For the second stage, we propose a beam-search-based relation extraction algorithm to determine the predicate that binds to each graph edge. Differ from prior works, we perform candidate predicate retrieval and ranking alternately for each edge, limiting the candidate scale linearly w.r.t. KB degree and making the algorithm scalable for large-scale KBs like DBpedia.\n\\end{comment}\n\nIn this work, we formalize the generation of query graph in a two-staged manner as in Figure \\ref{fig:two_stage}. At the first stage, we tackle the aforesaid weaknesses by a novel causal-enhanced table-filling model to jointly complete NE and GC, resulting in a query graph structure representing the connectivity of all nodes. 
More specifically, inspired by \\citet{chen-etal-2020-exploring-logically}, we utilize a label transfer mechanism to facilitate the acquisition of causality between NE and GC (which solves drawback 1 above). Further, we apply a table-filler to decode all edges simultaneously, which naturally circumvents the ambiguity and bias of iterative decoding (and solves drawback 2). For the second stage, we propose a beam-search-based relation extraction algorithm to determine the predicate that binds to each graph edge. Differ from prior works, we perform candidate predicate retrieval and ranking alternately for each edge, limiting the candidate scale linearly w.r.t. KB degree and making the algorithm scalable for large-scale KBs like DBpedia.\n\n\\begin{comment}\nIn short, the contributions of this paper are: we formalize the construction of query graph as two stages, for the graph structure generation stage, we devise a causal-enhanced table-filler to grasp intrinsic causal effects and avoid exposure bias; for the relation extraction stage, we present an efficient beam-search algorithm scalable for large KBs; our method outperforms previous state-of-the-arts on LC-QuAD 1.0, a prominent KBQA benchmark, by a large margin ($\\sim\\!\\!17\\%$), further experiments verifies the effectiveness of our approach.\n\\end{comment}\nIn short, the major contributions of this paper are: 1) to our knowledge, we are the first to model GC as a table-filling process, which prevents the ambiguity and bias in prior works; 2) we model the intrinsic causal effects in KBQA to grasp subtask correlations and improve pipeline integrity; 3) our method outperforms previous state-of-the-arts on LC-QuAD 1.0, a prominent KBQA benchmark, by a large margin ($\\sim\\!\\!17\\%$), further experiments verifies the effectiveness of our approach.\n\n\\section{Preliminaries}\n\\subsection{Problem Setting}\nWe solve KBQA in a semantic parsing way, given a question (left-top in Figure \\ref{fig:two_stage}), we generate a SPARQL query (right-top in Figure \\ref{fig:two_stage}) to represent its semantics and answer the question by executing the query in KB. By definition, SPARQL describes a query graph with each triple in its body referring to a graph edge; by matching the graph pattern in KB, certain KB entries binding to the query graph can be processed as query results (e.g. in Table \\ref{tab:trigger_words} for SELECT queries, all entries binding to the \"select\" node are results; for JUDGE queries, the existence of matched entries determines the boolean result). Hence, our task is further specified as constructing the query graph (bottom of Figure \\ref{fig:two_stage}) of a question to represent its corresponding SPARQL.\n\n\\begin{table}[t]\n\\centering\n\\begin{comment}\n\\resizebox{1.\\columnwidth}{!}{\n\\begin{tabular}{c c c}\n \\toprule\n \\bfseries Type & \\bfseries Example SPARQL & \\bfseries Trigger Words\\\\\n \\cmidrule(lr){1-3}\n JUDGE & ask \\{dbr:New\\_York a dbo:City\\} & did, is, ...\\\\\n COUNT & select count(?x) \\{?x a dbo:City\\} & how many, ... 
\\\\\n SELECT & select ?x \\{?x a dbo:City\\} & \/ \\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{Supported query types and their examples trigger words.}\n\\end{comment}\n\n\\resizebox{0.8\\columnwidth}{!}{\n\\begin{tabular}{c c}\n \\toprule\n \\bfseries Type & \\bfseries Example SPARQL\\\\\n \\cmidrule(lr){1-2}\n JUDGE & ask \\{dbr:New\\_York a dbo:City\\} \\\\\n COUNT & select count(?x) \\{?x a dbo:City\\} \\\\\n SELECT & select ?x \\{?x a dbo:City\\} \\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{Supported query types.}\n\n\\label{tab:trigger_words}\n\\end{table}\n\n\\subsection{Methodology Overview}\nIllustrated by Figure \\ref{fig:two_stage}, we construct the query graph in two stages. In the graph structure generation stage (bottom-left in Figure \\ref{fig:two_stage}), we extract all graph nodes by finding the mention of each node in question and its tag among $\\{ variable,entity,type \\}$, e.g. the mention and tag for the node $?class$ is \"class\" and variable, respectively. Further, we link all non-variable nodes to KB entries, e.g. the $type$ node with mention \"person\" links to \\textit{dbo:person} in Figure \\ref{fig:two_stage}. Also, we decide the target (\"select\") node of the graph and add undirected edges between the nodes that are connected in the query graph, resulting in a graph structure representing the connectivity of all nodes.\n\nSince all edges above are undirected and unlabeled, we fill in the exact KB predicate of each edge in the relation extraction stage (bottom-right in Figure \\ref{fig:two_stage}) to construct a complete query graph. \n\nFinally, we compose a SPARQL w.r.t. the query graph as output. Note that the body of the SPARQL exactly corresponds to the query graph, so only the SPARQL header is yet undetermined. Like \\citealp{hu2021edg}, we collect frequent trigger words in the train data to classify questions into COUNT, JUDGE or SELECT queries as in Table \\ref{tab:trigger_words} (e.g. a question beginning with \"is\" triggers JUDGE). Thus, an entire SPARQL can now be formed. In the following sections, we expatiate our methodology for the two aforementioned stages.\n\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.999\\linewidth]{pics\/graph_structure_generation_4k_cut.png}\n \\caption{Causal-enhanced table-filling model for graph structure generation. The label-to-node and table-to-edge correspondence is illustrated by the poker and fruit symbols respectively.}\n \\label{fig:graph_structure_generation}\n\\end{figure*}\n\n\n\\section{Graph Structure Generation (GSG)}\n\\label{sec:gsg}\nThe overview of the model proposed for graph structure generation is illustrated by Figure \\ref{fig:graph_structure_generation}. As discussed in Section \\ref{sec:intro}, the model jointly deals with node extraction and graph composition via causal-modelling, which is detailed in this section below.\n\n\\subsection{Node Extraction (NE)}\nNode extraction discovers all nodes in the query graph, i.e. \\{\\textit{?person}, \\textit{?class}, \\textit{dbr:Swinhoe's\\_Crake}, \\textit{dbo:person}\\} in Figure \\ref{fig:two_stage}. We represent a node by its mention and tag, i.e. (\"person\", \\textit{variable}), (\"class\", \\textit{variable}), (\"Swinhoe's Crake\", \\textit{entity}) and (\"person\", \\textit{type}) for each node respectively.\n\nThis goal can naturally be achieved by multi-class sequence labeling. 
More specifically, let $\\mathbf{Q}\\in \\mathbb{N}^{n}$ be the question (token ids) with length $n$, we first encode it into hidden features $\\mathbf{H}_{rb}$ by a RoBERTa \\citep{liu2019roberta} encoder $E_{rb}: \\mathbb{N}^{n} \\rightarrow \\mathbb{R}^{n\\times h_{rb}}$ with hidden size $h_{rb}$:\n\\[ \\mathbf{H}_{rb} = E_{rb}(\\mathbf{Q}) \\in \\mathbb{R}^{n\\times h_{rb}} \\]\nThen, $\\mathbf{H}_{rb}$ is projected by a fully-connected-network (FCN) $E_{ne}: \\mathbb{R}^{n\\times h_{rb}} \\rightarrow \\mathbb{R}^{n\\times |L|}$ into $\\mathbf{Y}_{ne}$ in label space:\n\\[ \n\\mathbf{Y}_{ne} = E_{ne}(\\mathbf{H}_{rb}) \\in \\mathbb{R}^{n\\times |L|}\n\\]\n$L=\\{O\\}\\cup\\{B,I\\}\\times\\{V,E,T,VT\\}$ is the label set denoting the mention span of variables (V), entities (E), types (T), or overlapping variable and type (VT). Now, the label prediction of each token can be given by $\\mathbf{P}_{ne}=argmax(\\mathbf{Y}_{ne})$; also, given the gold token labels $\\mathbf{G}_{ne}\\in \\mathbb{N}^{n}$ (Figure \\ref{fig:graph_structure_generation} top), a model for NE can be trained by optimizing:\n\\[ \\ell_{ne} = -\\frac{1}{n} \\sum\\limits_{i=1}^n log(\\text{softmax}(\\mathbf{Y}_{ne})[i; \\mathbf{G}_{ne}[i]]) \\]\nWhere $[\\cdot ]$ denotes tensor indexing.\n\nAfter detecting all node mentions and tags, we link each non-variable node to KB entries by DBpedia Lookup and a mention-to-type dictionary built on train data to align the graph structure with KB. See Appendix \\ref{sec:appendix_linking} for more details in node linking.\n\n\\subsection{Graph Composition (GC)}\n\\label{sec:gc}\n\\begin{comment}\nAfter node extraction, all nodes in the query graph remain unconnected. To form the structure of the query graph, graph composition insert unlabeled and undirected edges between the nodes that are related in the query graph, leaving the specific predicate of each edge yet unresolved. Formerly, graph composition is commonly modelled as a sequence-generation process representing an edge by one or several sequence elements. To learn the generation of edge sequence, previous works train reward functions for stage-transition \\citep{yih2015semantic, hu2018state} or generative decoder models \\citep{shen2019multi, zhang2021namer}. Despite the strong expressiveness, modelling graph composition by a sequence usually suffers from two issues: 1) while the edge sequence is ordered, edges in the query graph are a set without order. For a graph with two edges $e_1$ and $e_2$, both sequence $e_1{\\text -}e_2$ and $e_2{\\text -}e_1$ correctly represents the edges in the graph, but they are distinct from the perspective of sequence-generation. As a result, the edge set itself becomes ambiguous for the sequence, which confuses the model when comprehending a sequence and potentially decelerates the convergence. 2) As discussed by \\citealp{zhang-etal-2019-bridging}, without extra augmentation, sequence-generation generally endures an exposure bias between training and inference, harming the model's accuracy when predicting. Hence, a robust model should address the issues above properly.\n\\end{comment}\nAfter node extraction, all nodes in the query graph remain unconnected. To form the structure of the query graph, graph composition inserts unlabeled and undirected edges between the nodes that are related in the query graph, leaving the specific predicate of each edge yet unresolved. 
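Before discussing how these edges are predicted, we note that the node-extraction head defined above admits a compact implementation. The following sketch is purely illustrative (the module and variable names are ours, and we assume a HuggingFace-style RoBERTa encoder exposing \\texttt{last\\_hidden\\_state}); it is not the exact training code.
\\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

NUM_LABELS = 9  # |L| = {O} + {B, I} x {V, E, T, VT}

class NodeExtractor(nn.Module):
    # Token-level tagger producing Y_ne: encoder features -> per-token label logits.
    def __init__(self, encoder, hidden_size, num_labels=NUM_LABELS):
        super().__init__()
        self.encoder = encoder                          # E_rb, e.g., a RoBERTa model
        self.proj = nn.Linear(hidden_size, num_labels)  # the FCN E_ne

    def forward(self, input_ids, attention_mask):
        h_rb = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.proj(h_rb)                          # (batch, n, |L|)

    def loss(self, y_ne, gold_labels):
        # equals -1/n * sum_i log softmax(Y_ne)[i; G_ne[i]], averaged over the batch
        return F.cross_entropy(y_ne.transpose(1, 2), gold_labels)
\\end{verbatim}
We now return to how the edges between these nodes are predicted.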
Formerly, graph composition is commonly modelled as a edge-sequence-generation process via stage-transition \\citep{yih2015semantic, hu2018state} or generative decoders \\citep{shen2019multi, chen2021outlining}. Despite the strong expressiveness, modelling graph composition by a sequence usually suffers from two issues: 1) while the edge sequence is ordered, edges in the query graph are a set without order. For a graph with two edges $e_1$ and $e_2$, both sequence $e_1{\\text -}e_2$ and $e_2{\\text -}e_1$ correctly represents the edges in the graph, but they are distinct from the perspective of sequence-generation. As a result, the edge set itself becomes ambiguous for the sequence, which confuses the model when comprehending a sequence and potentially decelerates the convergence. 2) As discussed by \\citealp{zhang-etal-2019-bridging}, without extra augmentation, sequence-generation generally endures an exposure bias between training and inference, harming the model's accuracy when predicting. Hence, a robust model should address the issues above properly.\n\nHere, we model graph composition by a table-filling process to decide all edges simultaneously involving no sequence-generation, which naturally circumvents all issues above. Let $\\mathbf{H}_{gc}\\in \\mathbb{R}^{n\\times h_{gc}}$ be the hidden features for graph composition (the full definition of $\\mathbf{H}_{gc}$ with causal-modelling is given in Section \\ref{sec:causal_modelling}; without causal-modelling, we simply have $\\mathbf{H}_{gc}=\\mathbf{H}_{rb}$), we adopt a biaffine attention model \\citep{dozat2016deep, wang2021unire} to convert $\\mathbf{H}_{gc}$ into a table denoting the relationship between each token pair. More specifically, through two multi-layer-perceptrons (MLP) $E_{head}$ and $E_{tail}: \\mathbb{R}^{n\\times h_{gc}} \\rightarrow \\mathbb{R}^{n\\times h_{bi}}$, we first project $\\mathbf{H}_{gc}$ into head ($\\mathbf{H}_{head}$) and tail ($\\mathbf{H}_{tail}$) features:\n\\[ \\mathbf{H}_{\\{head,tail\\}} = E_{\\{head,tail\\}}(\\mathbf{H}_{gc}) \\in \\mathbb{R}^{n\\times h_{bi}} \\]\nThen, for $\\forall 1\\leq i,j\\leq n$, the biaffine attention is performed between the head features of the i\\textsuperscript{th} token $\\mathbf{h}_{head}^{(i)}$ and the tail features of the j\\textsuperscript{th} token $\\mathbf{h}_{tail}^{(j)}$, producing $\\mathbf{s}_{i,j}\\in \\mathbb{R}^2$ representing the probability that an edge exists between the i\\textsuperscript{th} and j\\textsuperscript{th} token:\n\\[ \\mathbf{s}_{i,j} = \\text{softmax}(\\text{Biaff}(\\mathbf{h}_{head}^{(i)}, \\mathbf{h}_{tail}^{(j)})) \\]\n\\[ \\text{Biaff}(\\mathbf{x},\\mathbf{y}) := \\mathbf{x}^T\\mathbf{U}_1\\mathbf{y} + \\mathbf{U}_2(\\mathbf{x}\\oplus \\mathbf{y}) + \\mathbf{b} \\]\nAs $\\mathbf{U}_1\\in\\mathbb{R}^{2\\times h_{bi}\\times h_{bi}}$, $\\mathbf{U}_2\\in\\mathbb{R}^{2\\times 2h_{bi}}$ and $\\mathbf{b}\\in\\mathbb{R}^2$ are trainable parameters, $\\oplus$ denotes concatenation. Combining all scores by $\\mathbf{Y}_{gc} = (\\mathbf{s}_{i,j})_{(1\\leq i,j\\leq n)}\\in \\mathbb{R}^{n\\times n\\times 2}$, we now have a table describing the edge existence likelihood between any two tokens.\n\nAt training, we first obtain the boolean gold table $\\mathbf{G}_{gc}\\in \\mathbb{B}^{n\\times n}$, for every connected node pair in the query graph, the element in $\\mathbf{G}_{gc}$ corresponding to any pair of tokens belonging to the mentions of the two nodes respectively is set to 1 (resulting in several rectangles of 1s). 
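Before completing the description of the gold table $\\mathbf{G}_{gc}$, we sketch the biaffine scorer defined by the equations above. This is an illustrative reconstruction only (the activation, the initialization, and merging $\\mathbf{U}_2$ and $\\mathbf{b}$ into one linear layer are our own choices), not the released implementation.
\\begin{verbatim}
import torch
import torch.nn as nn

class BiaffineTableFiller(nn.Module):
    # Scores every token pair (i, j) with a 2-way (edge / no-edge) distribution.
    def __init__(self, h_gc, h_bi):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(h_gc, h_bi), nn.GELU())  # E_head
        self.tail_mlp = nn.Sequential(nn.Linear(h_gc, h_bi), nn.GELU())  # E_tail
        self.U1 = nn.Parameter(torch.randn(2, h_bi, h_bi) * 0.01)
        self.lin = nn.Linear(2 * h_bi, 2)   # plays the role of U2(x (+) y) + b

    def forward(self, h_gc):                # h_gc: (batch, n, h_gc)
        head = self.head_mlp(h_gc)          # (batch, n, h_bi)
        tail = self.tail_mlp(h_gc)
        # bilinear term head_i^T U1 tail_j, for every pair (i, j) and both classes
        bilinear = torch.einsum('bih,chk,bjk->bijc', head, self.U1, tail)
        n = h_gc.size(1)
        pairs = torch.cat([head.unsqueeze(2).expand(-1, -1, n, -1),
                           tail.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        scores = bilinear + self.lin(pairs)  # (batch, n, n, 2)
        return scores.softmax(dim=-1)        # the table Y_gc
\\end{verbatim}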
Also, we prefix the question with a special [CLS] token and connect it with the target node to represent the \"select\" edge; for ASK queries without target nodes, a [SEP] token is suffixed and connected with [CLS]. Note that since the graph structure is undirected, $\\mathbf{G}_{gc}$ is a symmetric matrix. An example of $\\mathbf{G}_{gc}$ can be found in Figure \\ref{fig:graph_structure_generation}. With $\\mathbf{G}_{gc}$, we can train the table-filler by $\\ell_{tb}$:\n\\[ \\ell_{tb} = -\\frac{1}{n^2} \\sum\\limits_{i=1}^n\\sum\\limits_{j=1}^n log(\\mathbf{Y}_{gc}[i;j;\\mathbf{G}_{gc}[i;j]]) \\]\nFollowing \\citealp{wang2021unire}, we also introduce $\\ell_{sym}$ to grasp the table symmetry. Finally, we optimize $\\ell_{gc} = \\ell_{tb} + \\ell_{sym}$ to train a model for GC.\n\\[ \\ell_{sym} = \\frac{1}{n^2} \\sum\\limits_{i=1}^n\\sum\\limits_{j=1}^n\\sum\\limits_{k=1}^2 |\\mathbf{Y}_{gc}[i;j;k]-\\mathbf{Y}_{gc}[j;i;k]| \\]\n\nAt inference, for each pair of nodes given by NE, we average the rectangle area in $\\mathbf{Y}_{gc}$ corresponding to the mentions of the node pair as its edge existence probability. The node pairs with a probability higher than 0.5 are connected. This threshold is selected intuitively to denote an edge is more likely to exist against to not exist, though we argue that the prediction is insensitive to any threshold in reasonable range (e.g. 0.3$\\sim$0.7).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.92\\linewidth]{pics\/causal_full.pdf}\n \\caption{Modelling NE and GC with (a) and without (b) causality, as X, Y\\textsubscript{NE}, and Y\\textsubscript{GC} denotes question, NE predictions, and GC predictions. Model (c) learns the causal effects by a label transfer module.}\n \\label{fig:causal_graphs}\n\\end{figure}\n\n\n\\subsection{Causal Modelling NE and GC}\n\\label{sec:causal_modelling}\nUp to now, NE and GC are treated as separate tasks that fail to model the intrinsic causal effects between them (e.g. edges in Y\\textsubscript{GC} only exist between the mentions detected in NE). Here, we model such causality by a mediation assumption in Figure \\ref{fig:causal_graphs}(b) denoting the causal dependence of GC on both question and NE prediction by edge X\\textrightarrow Y\\textsubscript{GC} and Y\\textsubscript{NE}\\textrightarrow Y\\textsubscript{GC} respectively. To grasp this causal graph, we devise a label transfer \\citep{chen-etal-2020-exploring-logically} module to enable the transfer of NE predictions to GC, i.e. representing Y\\textsubscript{NE}\\textrightarrow Y\\textsubscript{GC}, in Figure \\ref{fig:causal_graphs}(c).\n\nIn detail, we sample NE predictions $\\widetilde{\\mathbf{Y}_{ne}}$ by gumbel softmax \\cite{nie2018relgan} with $\\boldsymbol{g}\\!\\!\\sim$Gumbel(0,1) and temperature $\\tau$.\n\\[ \\widetilde{\\mathbf{Y}_{ne}} = \\text{softmax}((\\mathbf{Y}_{ne}+\\boldsymbol{g})\/\\tau) \\in \\mathbb{R}^{n\\times |L|} \\]\n$\\widetilde{\\mathbf{Y}_{ne}}$ is then embedded by label embedding $\\mathbf{W}_{le}\\in \\mathbb{R}^{|L|\\times h_{le}}$ and concatenated with $\\mathbf{H}_{rb}$ to form $\\mathbf{H}_{gc}$ in Section \\ref{sec:gc} with $h_{gc}\\!=\\!h_{rb}+h_{le}$:\n\\[ \\mathbf{H}_{gc} = \\mathbf{H}_{rb} \\oplus (\\widetilde{\\mathbf{Y}_{ne}} \\mathbf{W}_{le})\\in \\mathbb{R}^{n\\times h_{gc}} \\]\nNow, by minimizing $\\ell_{gsg}\\!=\\!\\ell_{ne}\\!+\\!\\ell_{gc}$, a joint model for NE and GC can be obtained. 
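A minimal sketch of this label-transfer step is given below; it is illustrative only (the temperature value and tensor names are ours), relying on the standard \\texttt{gumbel\\_softmax} of PyTorch.
\\begin{verbatim}
import torch
import torch.nn.functional as F

def causal_gc_features(y_ne, h_rb, label_emb, tau=1.0):
    # Differentiable label transfer: sample NE labels with Gumbel-softmax and
    # concatenate their embeddings with the encoder states,
    # H_gc = H_rb (+) (sampled Y_ne * W_le).
    y_ne_soft = F.gumbel_softmax(y_ne, tau=tau, hard=False)  # (batch, n, |L|)
    h_le = y_ne_soft @ label_emb                             # label_emb = W_le: (|L|, h_le)
    return torch.cat([h_rb, h_le], dim=-1)                   # (batch, n, h_rb + h_le)

# Joint training then minimizes l_gsg = l_ne + (l_tb + l_sym) end to end,
# so gradients from the graph-composition table also reach the NE head.
\\end{verbatim}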
In this model, GC receives NE labels to learn the causal effects from NE, while NE gets feedback through differentiable label transfer to further aid GC decision. In this sense, our model improves the integrity of graph structure generation compared with separately modelling each subtask or simple multitasking.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.999\\linewidth]{pics\/re.pdf}\n \\caption{Candidate retrieval and ranking framework for relation extraction.}\n \\label{fig:re_overview}\n\\end{figure}\n\n\\section{Relation Extraction (RE)}\n\\label{sec:re}\nAs shown in Figure \\ref{fig:re_overview}, relation extraction (RE) conducts candidate retrieval and ranking in turn for each edge in graph structure $S$ to decide its predicate. For a question $q$, an edge $e$ connecting nodes $n_1$ and $n_2$ with mention $m_1,\\!m_2$ respectively, candidate retrieval recalls a set of predicates $P$ that can be bound to $e$. Note that unlike $e$, each predicate in $P$ is directional. Then, candidate ranking \\texttt{Rank}($P$,$q$,$m_1$,$m_2$) gives each predicate a score. This section details this procedure.\n\\paragraph{Candidate Ranking}\n\\label{sec:re_cand_rank}\nFor each $p_i\\in P$, we encode it together with $q,\\!m_1,\\!m_2$ by a RoBERTa encoder and pool them to $0\\!\\leq\\! s_i\\!\\leq\\! 1$ to score the predicate. If the direction of $p_i$ is $n_1\\!\\!\\rightarrow\\! n_2$, we join $q,m_1,m_2,p_i$ sequentially by [SEP] token as model input; otherwise (direction $n_2\\!\\!\\rightarrow\\! n_1$), the join order is $q,m_2,m_1,p_i$. By giving $s_i$ to each candidate, we can get the most proper predicates for $e$ by selecting those with highest scores. More details on training the ranking model can be found in Appendix \\ref{sec:appendix_re_ranking}.\n\\paragraph{Candidate Retrieval}\n\\citealp{zhang2021namer} proposed a straightforward way to retrieve candidates: if either $n_1$ or $n_2$ is a non-variable node, the predicates around that node in KB are viewed as candidates; otherwise, they trace $n_1$ or $n_2$ in other graph edges with non-variable nodes and view the predicates k-hop away from that node in KB as candidates (e.g. predicates 2-hop away from \\textit{dbo:person} are candidates for $?class\\text{-}?person$ in Figure \\ref{fig:re_overview}). 
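A schematic version of this retrieval strategy over a generic adjacency view of the KB is shown below; the dictionary-based \\texttt{kb\\_adj} structure and placeholder URIs are our simplification, whereas a production system would issue the corresponding SPARQL queries against DBpedia instead.
\\begin{verbatim}
from collections import defaultdict

# Adjacency view of the KB: node -> list of (predicate, neighbor) pairs.
kb_adj = defaultdict(list)
# e.g., kb_adj["dbo:person"] = [("dbo:somePredicate", "dbr:SomeEntity"), ...]

def khop_candidate_predicates(grounded_node, k):
    # Baseline retrieval: all predicates reachable within k hops of a
    # non-variable (grounded) node are candidates for the edge.
    frontier, candidates = {grounded_node}, set()
    for _ in range(k):
        next_frontier = set()
        for node in frontier:
            for predicate, neighbor in kb_adj[node]:
                candidates.add(predicate)
                next_frontier.add(neighbor)
        frontier = next_frontier
    return candidates
\\end{verbatim}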
We view this as the baseline in latter experiments.\n\n\\begin{algorithm}[t]\n\\caption{BeamSearchRE}\\label{alg:re_bs}\n\\LinesNumbered\n\\small\n\n\\SetKwFunction{FR}{Sample}\n\\SetKwFunction{FI}{inference}\n\\SetKwFunction{FG}{getNodePairsFromGraph}\n\\SetKwFunction{FC}{Retrieve}\n\\SetKwFunction{FCR}{Rank}\n\n\\begin{comment}\n\\KwIn{Question $q$, Query graph structure $S$, beam width $b$}\n\\KwOut{A beam of query graphs $\\mathcal{B}$}\n$\\mathcal{B}\\leftarrow\\{\\epsilon\\}$\\;\n$\\mathcal{S}_{undef}\\leftarrow\\mathcal{S}$\\;\n\\While{$\\mathcal{S}_{undef}\\ne\\varnothing$}{\n $\\mathcal{B}'\\leftarrow\\{\\epsilon\\}$\\;\n $e=(n_1,n_2)\\leftarrow$\\FR$(\\mathcal{S}_{undef})$\\;\n \\For{$G\\in\\mathcal{B}$\n \n }{\n $\\mathcal{P}\\leftarrow$\\FC$(G, n_1, n_2)$\\;\n \n $\\mathcal{C}=\\{(p_i,s_i)\\}\\leftarrow$\\FCR$(\\mathcal{P}, q, m_1, m_2)$\\;\n $\\mathcal{B'}\\leftarrow\\mathcal{B'}\\cup \\{G\\}\\times\\mathcal{C}$\\;\n }\n $\\mathcal{B}\\leftarrow\\mathcal{B'}.\\text{topk}(b)$\\;\n $\\mathcal{S}_{undef}.\\text{remove}(e)$\\;\n \n \n \n \n \n \n}\n\\end{comment}\n\n\\KwIn{Question $q$, Query graph structure $S$, beam width $b$}\n\\KwOut{A beam of query graphs $B$}\n$B\\leftarrow\\{\\{\\}\\}$;\\tcp*[h]{Start with an empth graph}\n\n$S_{pend}\\leftarrow S$;\\tcp*[h]{All edges are pending}\n\n\\While{$S_{pend}\\ne\\varnothing$}{\n $B'\\leftarrow\\{\\}$\\;\n \\tcp{Select a pending edge}\n $e=(n_1,n_2)\\leftarrow$\\FR$(S_{pend})$\\;\n \\For{$G\\in B$\n \n }{\n \n $P\\leftarrow$\\FC$(G, n_1, n_2)$\\;\n \n \\tcp{$n_1\/n_2$ has mention $m_1\/m_2$}\n $C=\\{(p_i,s_i)\\}\\leftarrow$\\FCR$(P, q, m_1, m_2)$\\;\n \n \\tcp{Extend previous beams}\n \\For{$(p_i,s_i)\\in C$}{\n $B'\\leftarrow B'\\cup \\{G \\cup \\{(n_1, n_2, p_i, s_i)\\} \\}$;\n }\n }\n $B\\leftarrow B'.\\text{topk}(b)$;\\tcp*[h]{Set up new beams}\n \n \\tcp{Mark \\textit{e} as determined}\n $S_{pend}\\leftarrow S_{pend}\\setminus \\{e\\}$\\;\n \n \n \n \n \n \n}\n\\end{algorithm}\n\nHowever, this results in a candidate scale O($n^k$)\\footnote{n is the node degree in KB, k is the edge number in $S$}, making it unscalable to multi-hop queries (k$\\uparrow$) and large KBs (n$\\uparrow$). Here, we propose Algorithm \\ref{alg:re_bs} to limit the scale to O(n). We start by selecting an edge between $n_1^a$ and $n_1^b$ containing a non-variable node (e.g. edge \\textit{?class}-\\textit{dbr:Swinhoe's\\_Crake} in Figure \\ref{fig:re_overview}), retrieving all adjacent predicates of that node in KB and use \\texttt{Rank} to select the most proper predicate $p_1$ (e.g. dbp:named\\_by) of score $s_1$, this forms a subgraph $G$=\\{($n_1^a$,$n_1^b$,$p_1$)\\} with only one edge whose score is $s_1$. Then, we sample another edge between $n_2^a$ and $n_2^b$ (e.g. \\textit{?class}-\\textit{?person}) and retrieve its candidates $P$ based on $G$ (e.g. $G$ already entails \\textit{?class}=\\textit{dbr:bird}, so all neighbors of \\textit{dbr:bird} forms $P$), this process is denoted as \\texttt{Retrieve}($G,n_2^a,n_2^b$). Now, we use \\texttt{Rank} to select $p_2$ of score $s_2$ from $P$, add ($n_2^a$,$n_2^b$,$p_2$) to subgraph $G$ and update its score as $s_1*s_2$. Repeating this loop until all edges are bound with a predicate, we finally form a query graph.\n\nNote that for each edge, the candidate scale given by \\texttt{Retrieve} is O(n), since it is always among the neighbors of one or several KB nodes. 
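For intuition, the greedy (beam width $b\\!=\\!1$) version of this loop can be sketched as follows; \\texttt{retrieve} and \\texttt{rank} stand for the \\texttt{Retrieve} and \\texttt{Rank} procedures above, and the edge and node attributes are deliberately simplified placeholders rather than our actual data structures.
\\begin{verbatim}
def greedy_relation_extraction(question, edges, retrieve, rank):
    # Bind one predicate per pending edge, conditioning each retrieval on the
    # edges already grounded; 'edges' come from the graph structure generation stage.
    graph, score = [], 1.0
    pending = sorted(edges, key=lambda e: e.num_variable_nodes)  # grounded edges first
    for edge in pending:
        n1, n2 = edge.nodes
        candidates = retrieve(graph, n1, n2)   # O(deg) predicates around grounded nodes
        scored = rank(candidates, question, n1.mention, n2.mention)
        predicate, s = max(scored, key=lambda pair: pair[1])
        graph.append((n1, n2, predicate))
        score *= s
    return graph, score
\\end{verbatim}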
Also, to improve the recall of query graphs, this process can trivially be extended as a beam search with each step maintaining a beam of subgraphs $B$, ordering each subgraph by $\\prod_i s_i$ as in Algorithm \\ref{alg:re_bs}.\n\n\\begin{table}[t]\n\\centering\n\\resizebox{.99\\columnwidth}{!}{\n\\begin{tabular}{c l c c c}\n \\toprule\n \\bfseries Type & \\bfseries Methods & \\bfseries P & \\bfseries R & \\bfseries F1\\\\\n \\cmidrule(lr){1-5}\n \\multirow{2}{*}{\\bfseries I} &\n NSQA \\citep{kapanipathi2021leveraging} & .448 & .458 & .445 \\\\\n & EDGQA \\citep{hu2021edg} & .505 & .560 & .531 \\\\\n \\cmidrule(lr){1-5}\n \\multirow{4}{*}{\\bfseries II} &\n QAmp \\citep{vakulenko2019message} & .250 & .500 & .330 \\\\\n & NAMER \\citep{zhang2021namer} & .438 & .438 & .435 \\\\\n & STaG-QA \\citep{ravishankar2021two} & \\bfseries.745 & .548 & .536 \\\\\n & \\bfseries Crake (ours) & .722 & \\bfseries.731 & \\bfseries.715 \\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{End-to-end performance on LC-QuAD 1.0 test set. I\/II stands for methods with\/without aux tools. We re-implement NAMER since its results on LC-QuAD is not provided; however, NAMER suffers from severe timeout issues on DBpedia to limit its performance, so we restrict each candidate query to run at most 45s in practice (which already requires $\\sim\\!$15h for a complete evaluation run).}\n\\label{tab:e2e_eval}\n\\end{table}\n\n\\section{Experiments}\n\\label{sec:exp}\n\\paragraph{Dataset} We adopt LC-QuAD 1.0 \\citep{trivedi2017lc}, a predominant open-domain English KBQA benchmark based on DBpedia \\citep{auer2007dbpedia} 2016-04, to test the performance of our system. We randomly sample 200 questions from train data as dev set and follow the raw test set, resulting in a 4800\/200\/1000 train\/dev\/test split. More details on the dataset can be found in Appendix \\ref{sec:appendix_dataset}. Like \\citealp{zhang2021namer}, we do not experiment on multiple datasets due to the high annotation cost involved, however, we conduct no dataset-specific optimizations in this work, so we consider the large improvements on LC-QuAD and detailed discussions sufficient to prove our effectiveness.\n\\paragraph{Annotation}\n\\label{sec:annotation}\nWe annotate the dataset with the mention of each node in query graph, e.g. the mention \"class\" and \"person\" for the node \\textit{?class} and \\textit{dbo:person} respectively in Figure \\ref{fig:two_stage}. With the annotation, we obtain the gold data ($G_{ne},\\!G_{gc}$) to train our models. Appendix \\ref{sec:appendix_annotation} details the annotation process. \n\n\\paragraph{Baselines}\nWe evaluate our method against existing works both with and without auxiliary tools. With aux tools, \\citealp{kapanipathi2021leveraging} constructs query graphs based on the AMR of questions; \\citealp{hu2021edg} designs rules on constituency tree to aid query graph formation. For independent pipelines without aux tools, \\citealp{vakulenko2019message} parses URI mentions from the question to match with KB via confidence score passing; \\citealp{ravishankar2021two} combines a generative graph-skeleton decoder with entity and relation detector to form a query; \\citealp{zhang2021namer} co-trains a pointer generator with the node extractor to build a query graph, it's worth to note that this work also requires the node-to-mention \\nameref{sec:annotation} for training.\n\n\\paragraph{Setup}\nWe utilize the RoBERTa-large released by huggingface \\citep{wolf-etal-2020-transformers} as our encoder. 
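For reference, such an encoder can be instantiated with the \\texttt{transformers} library as follows (a generic usage sketch, not our training script; the example question is arbitrary):
\\begin{verbatim}
from transformers import RobertaModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")
encoder = RobertaModel.from_pretrained("roberta-large")   # hidden size h_rb = 1024

batch = tokenizer("Which class does Swinhoe's Crake belong to?", return_tensors="pt")
h_rb = encoder(**batch).last_hidden_state                 # shape (1, n, 1024)
\\end{verbatim}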
All experiments are averaged on two runs on an NVIDIA A40 GPU. For the GSG model, we train for at most 500 epochs (\\textasciitilde6 GPU-hours) and report the best checkpoint on dev set; for the RE model, we train for 20 epochs (\\textasciitilde16 GPU-hours) and report the final checkpoint. For hyperparameters, we find no apparent performance variance on dev set as long as the values are in reasonable range (e.g. $64\\!\\leq\\! h_{le}\\!\\leq\\!1024$, $1e\\text{-}6\\!\\leq\\! lr_{gsg}\\!\\leq\\!2e\\text{-}5$) so no further tuning is involved. See the full setting in Appendix \\ref{sec:appendix_hyper}.\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{1.85\\columnwidth}{!}{\n\\begin{tabular}{l l c c c c c c c c}\n \\toprule\n \\multirow{2}{*}{\\bfseries Methods} &\n \\multirow{2}{*}{\\bfseries Decoder Parameters} &\n \\multicolumn{3}{c}{\\bfseries NE Accuracy} &\n \\multicolumn{2}{c}{\\bfseries GSG Accuracy} &\n \\multicolumn{3}{c}{\\bfseries End-to-end}\\\\\n \\cmidrule(lr){3-5}\n \\cmidrule(lr){6-7}\n \\cmidrule(lr){8-10} & &\n P & R & F1 &\n EM & Actual &\n P & R & F1\\\\\n \\cmidrule(lr){1-10}\n Seq2seq & 76.67M\\space($\\times1$) & .895 & .901 & .897 & .695 & .768 & .653 & .674 & .654\\\\\n TF & 0.66M\\space\\space\\space($\\times1\/100$) & .895 & .901 & .897 & .728 & .795 & .655 & .674 & .657\\\\\n TF+SMTL & 0.66M\\space\\space\\space($\\times1\/100$) & .901 & .904 & .902 & .735 & .805 & .665 & .684 & .667\\\\\n \n \\cmidrule(lr){1-10}\n TF+Causal & 3.03M\\space\\space\\space($\\times1\/25$) & \\bfseries.909 & \\bfseries.914 & \\bfseries.911 & \\bfseries.755 & \\bfseries.828 & \\bfseries.677 & \\bfseries.696 & \\bfseries.680\\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{Experiments on table-filling and causal-modelling. Seq2seq and TF adopt a generative decoder and table-filler in GC respectively, while both deal with NE and GC by separate models. TF+SMTL (simple multitask learning) co-trains NE and GC by directly adding losses without modelling their intrinsic causal effects. TF+Causal denotes our full approach which models the causal effects between NE and GC by label transfer. We report the node-level P\/R\/F1 in NE, the exact-match (EM) and actual accuracy (that ignores variable mentions in judging accuracy) in GSG, and the overall answer-level P\/R\/F1 on LC-QuAD 1.0 dev set for comparison.}\n\\label{tab:ablation}\n\\end{table*}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.999\\linewidth]{pics\/learning_curve.pdf}\n \\caption{EM accuracy of GSG during training. See the meaning of each series in Table \\ref{tab:ablation}.}\n \\label{fig:learn_curve}\n\\end{figure}\n\n\\subsection{End-to-end Evaluation}\nAs shown in Table \\ref{tab:e2e_eval}, our method, Crake, outperforms all former methods by a large $\\!\\sim\\!$17\\% margin on F1, becoming the new SoTA of LC-QuAD 1.0. Surpassing methods requiring aux tools (I) on all metrics, we present the effectiveness of independent pipelines (II) that avoid cascading errors. Also, we achieve consistent answer precision and recall to surpass other methods in II on F1, showing the superiority of our pipeline design, which is further discussed in the sections below.\n\n\\subsection{Effects of Tabel-Filling}\n\nAs explained in Section \\ref{sec:gc}, modelling GC as a sequence-generation causes a few issues that can be overcome by table-filling. Specifically, the sequence ambiguity confuses the learning process and requires large decoders to grasp the sequence generation policy, which may slow down the convergence. 
Besides, the exposure bias harms the decoding accuracy of the model at inference. In this section, we verify these effects experimentally. To enable a comparison with sequence-generation, we construct a generative decoder following \citealp{zhang2021namer} as the baseline, which sequentially generates the connected node pairs in the graph structure to represent the edges. We train the generative model under the same settings (e.g. learning rate, warmup, epochs, etc.), resulting in the performance of \texttt{Seq2seq} in Table \ref{tab:ablation}.\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=.93\linewidth]{pics\/case_study_3.pdf}\n \caption{Case study on the effects of causal-modelling.}\n \label{fig:case_study}\n\end{figure}\n\nCompared with the table-filling model (i.e. \texttt{TF} in Table \ref{tab:ablation}), \texttt{Seq2seq} falls short in the accuracy of the graph structure, indicating the negative effect of exposure bias on prediction accuracy. Meanwhile, \texttt{TF} requires only 1\/100 of \texttt{Seq2seq}'s parameters to achieve comparable or better results; we attribute this to the removal of sequence ambiguity, which frees the model from acquiring the complex and ambiguous scheme of sequence-generation. This speculation is further supported by Figure \ref{fig:learn_curve}, in which \texttt{TF} converges distinctly faster than \texttt{Seq2seq}, since deciding all edges simultaneously is a well-defined and easier task to learn. Thus, compared with sequence-modelling, handling GC via table-filling reduces model size and accelerates training, which is essential for real-world applications.\n\n\subsection{Effects of Causal-Modelling}\n\nWe propose a joint model to learn the NE-GC causalities in Section \ref{sec:causal_modelling}. To assess its effects, we compare it with two alternatives in Table \ref{tab:ablation}: 1) using two separate models for NE and GC (\texttt{TF} in Table \ref{tab:ablation}) like \citealp{ravishankar2021two}; 2) co-training NE and GC by sharing the encoder and adding up the losses (\texttt{TF+SMTL} in Table \ref{tab:ablation}) like \citealp{shen2019multi}. As shown, co-training consistently surpasses the separate models by grasping the knowledge shared between NE and GC; nevertheless, our causal-modelling approach (\texttt{TF+Causal}) further outperforms co-training. In detail, although \texttt{TF+Causal} achieves results similar to \texttt{TF+SMTL} on NE, it attains better accuracy for the overall GSG (NE+GC) and excels in the end-to-end metrics. Therefore, we infer that causal-modelling improves the integrity of the GSG stage by expressing the internal causalities between its subtasks. To better understand this, we perform a case study in Figure \ref{fig:case_study}, in which \texttt{TF} fails to realize that \"skier\" also corresponds to a type node; in contrast, \texttt{TF+SMTL} extracts all nodes correctly by learning both NE and GC labels, but still fails to generate a correct graph structure. Finally, \texttt{TF+Causal} utilizes the VT tag of \"skier\" in the NE predictions and correctly connects the II-IV edge in GC. 
Thus, Figure \\ref{fig:case_study} demonstrates the usage of causal effects to reach higher accuracy in GSG.\n\n\\subsection{Analysis on Beam-Search RE}\n\n\\begin{table}[t]\n\\centering\n\\resizebox{1.\\columnwidth}{!}{\n\\begin{tabular}{l c c c c c c}\n \\toprule\n \\multirow{2}{*}{\\bfseries Methods} & \n \\multicolumn{3}{c}{\\bfseries Accuracy} &\n \\multicolumn{3}{c}{\\bfseries Efficiency}\\\\\n \\cmidrule(lr){2-4}\n \\cmidrule(lr){5-7} & \\bfseries P & \\bfseries R & \\bfseries F1 & \\bfseries 1-hop & \\bfseries 2-hop & \\bfseries 3-hop\\\\\n \\cmidrule(lr){1-7}\n Baseline & .560 & .566 & .556 & \\bfseries0.12s & 42.4s & 84.2s\\\\\n BeamSearch & \\bfseries.677 & \\bfseries.696 & \\bfseries.680 & \\bfseries0.12s & \\bfseries1.06s & \\bfseries2.72s\\\\\n \\bottomrule\n\\end{tabular}}\n\\caption{Performance comparison between our beam-search RE algorithm and its baseline in Section \\ref{sec:re}. Accuracy refers to the answer-level P\/R\/F1, efficiency is measured by the average run time on 1\/2\/3-hop queries.}\n\\label{tab:re_ablation}\n\\end{table}\n\nIn this section, we compare our beam-search RE algorithm with its baseline. As stated in Section \\ref{sec:re}, by alternately performing retrieval and ranking on each edge (rather than retrieving the candidates of every edge before ranking), our approach lowers the scale of candidate predicates on multi-hop queries to get better efficiency, which is verified in Table \\ref{tab:re_ablation}. In detail, \\texttt{BeamSearch} costs substantially less time than \\texttt{Baseline} in 2 and 3-hop queries (note that for 1-hop queries, two methods reduce to a same process with similar time costs). Since \\texttt{BeamSearch} only operates on the neighbors of certain KB nodes, it avoids the retrieval of 2-hop neighbors, which requires considerable time on DBpedia, to improve efficiency. In addition, by pruning off useless candidates in \\texttt{Baseline}, \\texttt{BeamSearch} also achieves higher overall KBQA accuracy in Table \\ref{tab:re_ablation}.\nTherefore, Algorithm \\ref{alg:re_bs} transcends previous methods to reveal an efficient and accurate solution for ranking-based RE scalable to KB size and query complexity.\n\n\n\\section{Related Works}\n\\paragraph{KBQA via Semantic Parsing} A mainstream to solve KBQA is semantic parsing \\citep{yih2016value} which converts a question to a KB query to get answers. Due to the graph-like structure of KB queries, prior works construct query graphs to represent queries in semantic parsing. Among them, some works \\citep{zafar2018formal, chen2021formal} only focus on predicting the graph structure given node inputs. To perform end-to-end QA, \\citealp{hu2017answering} leverages the dependency parsing tree to match KB subgraphs for answers; \\citealp{kapanipathi2021leveraging} builds the query graph by transforming and linking the AMR \\citep{banarescu2012abstract} of the question; \\citealp{hu2021edg} uses the constituency tree to compose an entity description graph representing the query graph structure. Requiring aux tools or data structures, these works may be subjected to cascading errors. \\citealp{yih2015semantic} overcomes this by an independent stage-transition framework to generate the query graph, \\citealp{hu2018state} extends the transitions to express more complex graphs. Besides, \\citealp{zhang2021namer} adopts a pointer generator to decode graph structure, \\citealp{ravishankar2021two} generates the query skeleton by a seq2seq decoder. 
Unlike these methods that model the query graph as a sequence (via state transitions or a generative decoder), we decode all edges at once via a table-filler in graph structure generation.\n\n\paragraph{Modelling causal effects} Causality arises in various deep-learning scenarios between multiple channels or subtasks, and existing works model it for better performance. \citealp{niu2021counterfactual} mitigates the false causal effects in VQA \citep{antol2015vqa} to overcome language bias; \citealp{zeng2020counterfactual} dispels the incorrect causalities from different input channels of NER by generating counterfactual examples. \citealp{chen-etal-2020-exploring-logically} utilizes the inter-subtask causalities to improve multitask learning for JERE \citep{li2014incremental}, ABSA \citep{kirange2014aspect}, and LJP. Unlike them, we formulate and utilize the internal causal effects in KBQA.\n\n\n\section{Conclusion}\nIn this work, we formalize the generation of query graphs in KBQA as two stages, namely graph structure generation (GSG) and relation extraction (RE). In GSG, we propose a table-filling model for graph composition to avoid the ambiguity and bias of sequence-modelling; meanwhile, we encode the inherent causal effects within GSG by a label-transfer block to improve the integrity of the stage. In RE, we introduce an effective beam-search algorithm that retrieves and ranks predicates for each edge in turn, which turns out to be scalable to large KBs and multi-hop queries. Consequently, our approach substantially surpasses previous state-of-the-art methods in KBQA, revealing the effectiveness of our pipeline design. Detailed experiments also validate the effects of all our contributions.\n\n\n\section{Limitations}\nAdmittedly, our approach has certain limitations, as discussed below.\n\paragraph{Query Expressiveness} Like most semantic parsing systems, we do not cover all the operations of SPARQL, which limits our capability to compose queries with complex \texttt{filter} clauses or property paths. To keep the system concise, this paper focuses only on constructing the triples of the multi-hop query graph; we plan to incorporate more functions into Crake in the future to improve the expressiveness of the system.\n\paragraph{Annotation Cost} Training models with node mentions requires expensive manual annotation, which is impractical for us to conduct on every popular KBQA dataset. As explained in Section \ref{sec:exp}, without dataset-specific optimization, we believe the significant gains presented are adequate to verify our contributions. Further, we expect to reduce such costs in two directions in the future: 1) some modules of our framework (e.g. NE) are generalizable to other English questions, giving them the potential to be transferred to other datasets without re-training; 2) few-shot \citep{wang2020generalizing} and active \citep{aggarwal2014active} learning techniques help a model reach competitive performance with a small portion of annotated data, and they can be explored in our framework to reduce annotation cost.\n\n\section*{Acknowledgements}\n\nThis work was supported by National Key R\&D Program of China (2020AAA0105200) and NSFC under grant U20A20174. The corresponding author of this work is Lei Zou (zoulei@pku.edu.cn). We would like to thank Zhen Niu and Sen Hu for their kind assistance on this work. 
We also appreciate anonymous reviewers for their valuable comments and advises.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Methodology}\n\\label{sect_methodology}\n\nIn order to facilitate our analysis, we start by making the following assumptions that are standard in the multi-armed bandits literature.\n\\begin{assumption} For all $u \\in [N]$, $i \\in [M]$ and $t \\in \\mathbb{N}$, the rewards $R_{t, u, i}$ are independent and $\\eta$-sub-Gaussian with mean $\\theta^*_{u, i} \\in [0, B]$.\n\n\\label{rew_assumptio}\n\\end{assumption}\n\nTo model the dependency between the mean rewards obtained from different user-item pairs, we employ the following assumption. We first present our algorithm and theoretical results under the general setting given by this assumption, and specialize for the setting of the collaborative filtering in following sections.\n\\begin{assumption}\n\\label{low_assum}\nThe mean reward matrix $\\vect{\\Theta}^*$ belongs to a known structure set $\\mathcal{L} \\subseteq \\mathbb{R}^{N \\times M}$.\n\\end{assumption}\n\nIn order to make use of initial historical data possibly available to the provider, we assume that the algorithm has access to an initial rough estimate $\\overline{\\vect{\\Theta}}$ that satisfies $\\|\\overline{\\vect{\\Theta}} - \\vect{\\Theta}^* \\|_\\text{F} \\leq G$. Such an estimate can be constructed using an off-the-shelf low-rank matrix completion algorithm on the initialization data. If such observations are not readily available at the time of initialization, they can be obtained by randomly sampling some of the user-item allocation pairs once. It is worth to note that one can also set $\\overline{\\vect{\\Theta}} = \\vect{0}$ and let $G$ be some number satisfying $\\|\\vect{\\Theta}^* \\|_\\text{F} \\leq G$.\n\n\\vspace{-3pt}\n\\begin{algorithm}\n\\caption{Structured Combinatorial Multi-Armed Bandit}\n\\begin{algorithmic}\n\\Require horizon $T$, initial estimate $\\overline{\\vect{\\Theta}} \\in \\mathbb{R}^{N \\times M}$ with $\\|\\overline{\\vect{\\Theta}} - \\vect{\\Theta}^* \\|_\\text{F} \\leq G$.\n\\For{$t = 1, 2, \\dots, T$}\n\\State Find the regularized least squares estimate $ \\widehat{\\vect{\\Theta}}_t = \\argmin_{\\vect{\\Theta} \\in \\mathcal{L}} \\left \\{ L_{2,t}(\\vect{\\Theta}) + \\gamma \\|\\vect{\\Theta} - \\overline{\\vect{\\Theta}} \\|_2^2 \\right \\}$\n\\State Construct the confidence set $\\mathcal{C}_t = \\{ \\vect{\\Theta} \\in \\mathcal{L} : \\|\\vect{\\Theta} - \\widehat{\\vect{\\Theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}\\}$\n\\State Compute the action vector $\\vect{X}_t = \\argmax_{\\vect{X} \\in \\mathcal{X}_t} \\max_{\\vect{\\Theta} \\in \\mathcal{C}_t} \\; \\langle \\vect{X}, \\vect{\\Theta} \\rangle$\n\\State Play the arms $\\mathcal{A}_t$ according to $\\vect{X}_t$ \n\\State Observe $R_{t, u, i}$ for all $(u, i) \\in \\mathcal{A}_{t}$\n\\EndFor\n\\end{algorithmic}\n\\label{alg_low}\n\\end{algorithm}\n\\vspace{-3pt}\n\nOur method summarized in Algorithm \\ref{alg_low} follows the standard OFU (Optimism in Face of Uncertainty) principle \\cite{abbasi_2011}. 
It maintains a confidence set $\\mathcal{C}_t$ which contains the true parameter $\\vect{\\Theta}^*$ with high probability and chooses the allocation $\\vect{X}_t$ according to\n\\begin{equation}\n \\vect{X}_t = \\argmax_{\\vect{X} \\in \\mathcal{X}_t} \\left \\{ \\max_{\\vect{\\Theta} \\in \\mathcal{C}_t} \\; \\langle \\vect{X}, \\vect{\\Theta} \\rangle \\right \\}\n\\label{low_oful}\n\\end{equation}\nTypically, the faster the confidence set $\\mathcal{C}_t$ shrinks, the lower regret we have. However, the main difficulty is to construct a series of $\\mathcal{C}_t$ that leverage the combinatorial observation model as well as the structure of the parameter so that we have low regret bounds. In this work, we consider constructing confidence sets that are centered around the regularized least square estimates. We let the cumulative squared prediciton error at time $t$ be\n\\begin{equation*}\n L_{2,t}(\\vect{\\Theta}) = \\sum_{\\tau=1}^{t-1} \\sum_{(u, i) \\in \\mathcal{A}_\\tau} (\\theta_{ui} - R_{\\tau, u, i})^2,\n\\end{equation*}\nand define the regularized least squares estimate at time $t$ as\n\\begin{equation}\n \\widehat{\\vect{\\Theta}}_t = \\argmin_{\\vect{\\Theta} \\in \\mathcal{L}} \\left \\{ L_{2,t}(\\vect{\\Theta}) + \\gamma \\|\\vect{\\Theta} - \\overline{\\vect{\\Theta}} \\|_2^2 \\right \\}.\n \\label{least_squares_estimate_low}\n\\end{equation}\nThen, the confidence sets take the form $\\mathcal{C}_t := \\{ \\vect{\\Theta} \\in \\mathcal{L} : \\|\\vect{\\Theta} - \\widehat{\\vect{\\Theta}}_t \\|_{2, E_t} \\leq \\sqrt{\\beta_t}\\}$ where $\\beta_t$ is an appropriately chosen confidence parameter, and the regularized empirical 2-norm $\\| \\cdot \\|_{2, E_t}$ is\n\\begin{equation*}\n \\| \\vect{\\Delta} \\|_{2, E_t}^2 := \\sum_{u=1}^{N} \\sum_{i=1}^{M} (n_{t, u, i} + \\gamma) (\\Delta_{ui})^2,\n\\end{equation*}\nwhere $n_{t, u, i} := \\sum_{\\tau=1}^{t-1} \\mathds{1} \\{(u,i) \\in \\mathcal{A}_\\tau\\}$ is the number of times item $i$ has been allocated to user $u$ before time $t$ (excluding time $t$).\nHence, the empirical 2-norm is a measure of discrepancy that weighs the entries depending on how much they have been explored. Roughly speaking, since the confidence ellipsoid constructed using the 2-norm is wider in directions that are not yet well-explored, the OFU step described in \\ref{low_oful} is more inclined to make allocations that include the corresponding user-item pairs. 
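To make this weighting concrete, the following short Python sketch (toy dimensions; \texttt{counts} plays the role of $n_{t,u,i}$ and \texttt{gamma} of $\gamma$) evaluates the regularized empirical 2-norm of a deviation matrix:\n\begin{verbatim}\nimport numpy as np\n\nN, M, gamma = 4, 5, 1.0\ncounts = np.zeros((N, M))      # n_{t,u,i}: allocations of item i to user u\ncounts[0, 0] = 10.0            # one well-explored user-item pair\ndelta = np.ones((N, M))        # a deviation Theta - Theta_hat\n\ndef reg_empirical_norm(delta, counts, gamma):\n    # || Delta ||_{2,E_t}^2 = sum_{u,i} (n_{t,u,i} + gamma) * Delta_{u,i}^2\n    return np.sqrt(np.sum((counts + gamma) * delta ** 2))\n\nprint(reg_empirical_norm(delta, counts, gamma))\n\end{verbatim}\nEntries with larger counts contribute more to the norm, so the confidence ellipsoid is correspondingly narrower along well-explored pairs and wider along unexplored ones.\n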
In order to obtain low-regret guarantees for the allocations, the first step is to choose correct $\\beta_t$ parameter such that $\\mathcal{C}_t$ will contain the true parameter $\\vect{\\Theta}^*$ for all $t$ with high probability.\nIn order to take advantage of the structure of the arms, we let $\\mathcal{N}(\\mathcal{F}, \\alpha, \\| \\cdot \\|_{\\text{F}})$ denote the $\\alpha$-covering number of $\\mathcal{F}$ in the Frobenious-norm $\\| \\cdot \\|_{\\text{F}}$, and let\n\\begin{equation*}\n \\beta_t^*(\\delta, \\alpha, \\gamma) := 8 \\eta^2 \\log \\left(\\mathcal{N}(\\mathcal{L}, \\alpha, \\| \\cdot \\|_{\\text{F}}) \/ \\delta \\right) + 2 \\alpha t NM \\left [ 8 B + \\sqrt{8 \\eta^2 \\log(4NM t^2\/\\delta)} \\right] + 4 \\gamma G^2.\n\\end{equation*}\nThen, the following Lemma establishes that if we set $\\beta_t = \\beta_t^*(\\delta, \\alpha, \\gamma)$, the resulting confidence sets have the desired properties.\n\\begin{lemma} For any $\\delta > 0$, $\\alpha > 0$, $\\gamma > 0$, let $\\widehat{\\vect{\\Theta}}_t$ be the regularized least squares estimate given in \\ref{least_squares_estimate_low}. If the confidence sets are given as\n\\begin{equation}\n \\mathcal{C}_t := \\{ \\vect{\\Theta} \\in \\mathcal{L} : \\|\\vect{\\Theta} - \\widehat{\\vect{\\Theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}\\},\n \\label{conf_sets_low}\n\\end{equation}\nthen with probability at least $1 - 2 \\delta$, $\\mathcal{C}_t \\ni \\vect{\\Theta}^*$, for all $t \\in \\mathbb{N}$.\n\\end{lemma}\n\nFinally, we show that if the structured combinatorial bandits algorithm follows the OFU allocations given in \\eqref{low_oful} while constructing the confidence sets according to \\eqref{conf_sets_low}, it obtains the following overall regret guarantee:\n\\begin{theorem}\\label{thm_alloc_regret}\nUnder Assumptions \\ref{rew_assumptio} and \\ref{low_assum}, for any $\\delta > 0$, $\\alpha > 0$, $\\gamma \\geq 1$, with probability $1 - 2\\delta$, the cumulative regret of Algorithm \\ref{alg_low} is bounded by\n\\begin{equation*}\n \\mathcal{R}(T, \\pi) \\leq \\sqrt{ 8 N M \\beta_T^* (\\delta, \\alpha, \\gamma) T \\log \\left(1 + T \/ \\gamma \\right) }.\n\\end{equation*}\n\\end{theorem}\n\n\\subsection{Low-Rank COMbinatorial Bandits (LR-COMB)}\n\n\\label{sect_lrcb}\n\nAs common in collaborative filtering settings, the correlation between users and arms can be captured through a matrix factorization model that leads to a low-rank mean reward matrix. \nEach user $u$ (item $i$) is associated with a feature vector $\\vect{p}_u$ ($\\vect{q}_i$) in a shared $R$-dimensional space (typically $R \\ll M, N$), and the mean reward of each user-item allocation pair is given by $\\theta_{ui}^* = \\vect{p}_u^\\textrm{T} \\vect{q}_i$. Consequently, the mean reward matrix satisfies the factorization $\\vect{\\Theta}^* = \\vect{P} \\vect{Q}^\\textrm{T}$ for some $\\vect{P} \\in \\mathbb{R}^{N \\times R}$ and $\\vect{Q} \\in \\mathbb{R}^{M \\times R}$. 
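For intuition, a toy instance of such a factorized mean-reward matrix can be generated as in the sketch below (the dimensions, seed, and the choice $B = R$ are arbitrary and only serve to keep the entries in $[0, B]$):\n\begin{verbatim}\nimport numpy as np\n\nN, M, R = 100, 50, 5\nB = float(R)                   # with P, Q in [0, 1], entries of P Q^T lie in [0, B]\nrng = np.random.default_rng(0)\nP = rng.uniform(0.0, 1.0, size=(N, R))\nQ = rng.uniform(0.0, 1.0, size=(M, R))\nTheta = P @ Q.T                # mean reward matrix, rank at most R\nassert Theta.min() >= 0.0 and Theta.max() <= B\nassert np.linalg.matrix_rank(Theta) <= R\n\end{verbatim}\n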
Based on this observation and the boundedness condition given in Assumption \\ref{rew_assumptio}, we can choose the structure set $\\mathcal{L}$ as\n\\begin{equation}\n \\mathcal{L} = \\{ \\vect{\\Theta} \\in \\mathds{R}^{N \\times M} : \\text{rank}(\\vect{\\Theta}) \\leq R, \\theta_{ui} \\in [0, B], \\forall u,i\\}.\n \\label{low_l}\n\\end{equation}\n\nThen, Lemma \\ref{lemma_covering} in the appendix shows that the covering number for $\\mathcal{L}$ given in equation \\eqref{low_l} is upper bounded by $\\log \\mathcal{N}(\\mathcal{L}, \\alpha, \\| \\cdot \\|_{\\text{F}}) \\leq (N + M + 1) R \\log ( 9B \\sqrt{NM} \/ \\alpha )$. Therefore, the regret guarantee for a setting with low-rank mean reward matrix becomes:\n\\begin{theorem}[Regret of LR-COMB] \\label{low_rank_regret_thm}\nUnder Assumption \\ref{rew_assumptio} and Assumption \\ref{low_assum} with $\\mathcal{L}$ given in \\eqref{low_l}, the Algorithm \\ref{alg_low} achieves cumulative regret\n\\begin{equation}\n \\mathcal{R}(T, \\pi) = \\widetilde{\\mathcal{O}} \\left( \\sqrt{N M (N+M) RT} \\right),\n\\end{equation}\n\\end{theorem}\nwhere $\\widetilde{\\mathcal{O}}$ is the big-O notation, ignoring the poly-logarithmic factors of $N, M, T, R$. \n\nIn comparison, if we were to ignore the low-rank structure between the mean rewards obtained from user-item allocation pairs and apply the standard combinatorial bandit algorithms (e.g., CUCB \\cite{chen_2013}), we would suffer \\smash{$\\widetilde{\\mathcal{O}} ( N M \\sqrt{T} )$} regret \\cite{kveton_2015}. Since $R \\ll M, N$ in many applications of collaborative filtering, our algorithm significantly outperforms this naive approach. As common in the literature of combinatorial bandits, one possible approach to improve upon our theoretical analysis might be by assuming a problem setting where at most $K$ of the arms can be played in each round. However, our current analysis techniques do not allow us to incorporate and leverage such an assumption together with the low-rank structure of collaborative filtering. \n\n\\textbf{Implementation via Matrix Factorization: }\n\\label{section_mf}\nIn order to efficiently solve optimization problems \\eqref{low_oful} and \\eqref{least_squares_estimate_low} in large scales, we take advantage of the matrix factorization model. As a result, we factorize $\\vect{\\Theta} = \\vect{P} \\vect{Q}^\\textrm{T}$ where $\\vect{P} \\in \\mathbb{R}^{N \\times R}$ and $\\vect{Q} \\in \\mathbb{R}^{M \\times R}$, and solve the problems by optimizing over $\\vect{P}$ and $\\vect{Q}$ rather than directly optimizing over $\\vect{\\Theta}$. Even if the problem \\eqref{least_squares_estimate_low} is not convex in the joint variable ($\\vect{P}$, $\\vect{Q}$), it is convex in $\\vect{P}$ for fixed $\\vect{Q}$ and it is convex in $\\vect{Q}$ for fixed $\\vect{P}$. Therefore, an alternating minimization algorithm becomes a feasible choice to find a reasonable solution for the least squares problem. Similarly, an alternating minimization approach is also useful to solve the problem \\eqref{low_oful}. We can fix an allocation $\\vect{X}$ and minimize over $\\vect{P}$ and $\\vect{Q}$. Then, for fixed $\\vect{P}$ and $\\vect{Q}$, the allocation $\\vect{X}$ is determined through the dual decomposition mechanism described in the section \\ref{sect_opt_allocations}. We call the resulting algorithm LR-COMB with Matrix Factorization and present it as Algorithm \\ref{alg_mf} in the Appendix. 
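As a minimal illustration of this alternating scheme, the sketch below performs one least-squares pass over the user factors with the item factors held fixed; the regularizer toward $\overline{\vect{\Theta}}$ and the confidence-set constraint of Algorithm \ref{alg_mf} are omitted for brevity, \texttt{obs} is a list of observed $(u, i, r)$ triples, and \texttt{lam} is a small ridge term added only for numerical stability (an assumption of this sketch, not part of the algorithm).\n\begin{verbatim}\nimport numpy as np\n\ndef update_user_factors(P, Q, obs, lam=0.1):\n    # obs: observed (user, item, reward) triples from past allocations\n    N, R = P.shape\n    by_user = {u: [] for u in range(N)}\n    for u, i, r in obs:\n        by_user[u].append((i, r))\n    for u, rows in by_user.items():\n        if not rows:\n            continue\n        A = np.stack([Q[i] for i, _ in rows])   # item factors of observed items\n        y = np.array([r for _, r in rows])      # corresponding rewards\n        # ridge-regularized normal equations for the user factor p_u\n        P[u] = np.linalg.solve(A.T @ A + lam * np.eye(R), A.T @ y)\n    return P\n\end{verbatim}\nThe item factors $\vect{Q}$ are updated symmetrically with $\vect{P}$ held fixed, and the two updates are alternated until the objective stops decreasing.\n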
\n\n\\section{Implementation via Matrix Factorization}\n\nThe following algorithm describes an efficient implementation of our Low-Rank Combinatorial Bandit algorithm using matrix factorization. Note that converged $\\widehat{\\vect{\\Theta}}_t$ and $\\vect{X}_t$ are not necessarily the optimum solution for problems \\eqref{low_oful} and \\eqref{least_squares_estimate_low} since the problems are not convex. However, the alternating optimization algorithm guarantees that, in each iteration, the objective value only decreases for \\eqref{least_squares_estimate_low}. Similarly, the objective value for \\eqref{low_oful} increases in each iteration of the alternating optimization.\n\n\\begin{algorithm}\n\\caption{LR-COMB with Matrix Factorization}\n\\begin{algorithmic}\n\\Require horizon $T$, initial estimate $\\overline{\\vect{\\Theta}} \\in \\mathbb{R}^d$ with $\\|\\overline{\\vect{\\Theta}} - \\vect{\\Theta}^* \\|_\\text{F} \\leq G$, parameters $\\delta, \\alpha > 0$, $\\gamma \\geq 1$.\n\\For{$t = 1, 2, \\dots, T$}\n\\State randomly initialize $\\widehat{\\vect{P}}$ and $\\widehat{\\vect{Q}}$\n\\While{convergence criterion not satisfied}\n \\State $\\widehat{\\vect{P}} \\gets \\argmin_{\\vect{P} \\in \\mathbb{R}^{N \\times R}} \\left \\{ \\sum_{\\tau=1}^{t-1} \\sum_{(u, i) \\in \\mathcal{A}_\\tau} (\\vect{p}_u^\\textrm{T} \\vect{q}_i - R_{\\tau, u, i})^2 + \\gamma \\|\\vect{P} \\vect{Q}^\\textrm{T} - \\overline{\\vect{\\Theta}} \\|_\\text{F}^2 \\right \\}$\n \\State $\\widehat{\\vect{Q}} \\gets \\argmin_{\\vect{Q} \\in \\mathbb{R}^{M \\times R}} \\left \\{ \\sum_{\\tau=1}^{t-1} \\sum_{(u, i) \\in \\mathcal{A}_\\tau} (\\vect{p}_u^\\textrm{T} \\vect{q}_i - R_{\\tau, u, i})^2 + \\gamma \\|\\vect{P} \\vect{Q}^\\textrm{T} - \\overline{\\vect{\\Theta}} \\|_\\text{F}^2 \\right \\}$\n\\EndWhile\n\\State $\\widehat{\\vect{\\Theta}}_t \\gets \\widehat{\\vect{P}} \\widehat{\\vect{Q}}^\\textrm{T}$\n\\State $\\vect{X} \\gets \\mathbb{1}_{N \\times M}$, $\\vect{P} \\gets \\widehat{\\vect{P}}$, $\\vect{Q} \\gets \\widehat{\\vect{Q}}$\n\\While{convergence criterion not satisfied}\n\\While{convergence criterion not satisfied}\n\\State $\\vect{P} \\gets \\argmax_{\\vect{P} \\in \\mathbb{R}^{N \\times R}} \\langle \\vect{X}, \\vect{P} \\vect{Q}^\\textrm{T} \\rangle$ s.t. $\\|\\vect{P} \\vect{Q}^\\textrm{T} - \\widehat{\\vect{\\Theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}$\n\\State $\\vect{Q} \\gets \\argmax_{\\vect{Q} \\in \\mathbb{R}^{M \\times R}} \\langle \\vect{X}, \\vect{P} \\vect{Q}^\\textrm{T} \\rangle$ s.t. 
$\\|\\vect{P} \\vect{Q}^\\textrm{T} - \\widehat{\\vect{\\Theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}$\n\\EndWhile\n\n\\State $\\vect{\\Theta} \\gets \\vect{P} \\vect{Q}^\\textrm{T}$\n\n\\While{convergence criterion not satisfied}\n \\For{$u \\in [N]$} \n \\State $\\vect{x}_u\\gets \\argmax_{\\vect{x}} \\left \\{ \\vect{x}^\\textrm{T} (\\vect{\\theta}_u - \\vect{\\lambda}) \\middle | \\vect{x} \\in \\{0, 1\\}^{M}, \\vect{x}^\\textrm{T} \\mathbb{1}_M \\leq d_{t, u} \\right \\} $\n \\EndFor \n \\State $\\vect{\\lambda} \\gets \\left[ \\vect{\\lambda} - \\alpha \\left( \\vect{c}_t - \\sum_{u = 1}^{N} \\vect{x}_u \\right) \\right]^{+} $\n\\EndWhile\n\\State $\\vect{X} \\gets [\\vect{x}_1, \\vect{x}_2, \\dots, \\vect{x}_N]^\\text{T}$\n\\EndWhile\n\\State $\\vect{X}_t \\gets \\vect{X}$\n\\State Play the arms $\\mathcal{A}_t$ according to $\\vect{X}_t$ \n\\State Observe $R_{t, u, i}$ for all $(u, i) \\in \\mathcal{A}_{t}$\n\\EndFor\n\\end{algorithmic}\n\\label{alg_mf}\n\\end{algorithm}\n\n\\section{Relaxation of Integer Program}\n\\label{appendix_num}\n\nA traditional linear integer program (IP) in matrix form is formulated as\n\\begin{align}\n\\begin{split}\n \\max_{\\mathbf{x}} &\\; \\mathbf{t}^\\mathrm{T} \\mathbf{x}\\\\\n \\text{s.t.} &\\; \\mathbf{A} \\mathbf{x} \\leq \\mathbf{b}\\\\\n &\\; \\mathbf{x} \\in \\mathds{Z}_+^d\\\\\n \\label{int_prog}\n\\end{split}\n\\end{align}\n\nThis problem can be relaxed to a linear program by dropping the integral constraints (setting $\\mathbf{x} \\in \\mathds{R}_+^d$). The integrality gap of an integer program is defined as the difference between the optimal values of the integer program in (IP) and its relaxed linear program. When the vector $\\mathbf{b}$ is integral and the matrix $\\mathbf{A}$ is totally unimodular (all entries are 1, 0, or -1 and every square sub-minor has determinant of +1 or -1) then the integrality gap is zero and the solution of the relaxed linear program is integer valued \\cite{bertsekas_1991}. 
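As a quick numerical illustration of this fact, the sketch below builds a tiny allocation instance of the form \eqref{int_prog}, drops the integrality constraints, and solves the relaxation with \texttt{scipy.optimize.linprog}; because the constraint matrix constructed here is totally unimodular and $\mathbf{b}$ is integral, the simplex solution comes back integer valued (toy sizes and random rewards are for illustration only).\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\nN, M = 3, 4                                   # users, items (toy sizes)\nrng = np.random.default_rng(0)\nt = rng.uniform(0.0, 1.0, size=N * M)         # vectorized mean rewards\ncap = np.ones(M)                              # item capacities c_t\ndem = 2.0 * np.ones(N)                        # user demands    d_t\n\n# A x <= b with A = [1_N^T kron I_M ; I_N kron 1_M^T], as in the text\nA = np.vstack([np.kron(np.ones((1, N)), np.eye(M)),\n               np.kron(np.eye(N), np.ones((1, M)))])\nb = np.concatenate([cap, dem])\n\n# maximize t^T x  ==  minimize -t^T x, with x >= 0 and integrality dropped\nres = linprog(-t, A_ub=A, b_ub=b, bounds=(0, None), method='highs-ds')\nassert np.allclose(res.x, np.round(res.x))    # vertex solution is integral\n\end{verbatim}\n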
\n\nHence, we can solve \\eqref{int_prog} by instead solving the following relaxed linear program:\n\\begin{align}\n\\begin{split}\n \\max_{\\mathbf{x}} &\\; \\mathbf{t}^\\mathrm{T} \\mathbf{x}\\\\\n \\text{s.t.} &\\; \\mathbf{A} \\mathbf{x} \\leq \\mathbf{b}\\\\\n &\\; \\mathbf{x} \\in \\mathds{R}_+^d\\\\\n\\end{split}\n\\end{align}\n\nFor a matrix $\\vect{A}$ whose rows can be partitioned into two disjoint sets $\\mathcal{C}$ and $\\mathcal{D}$, the following four conditions together are sufficient for $\\vect{A}$ to be totally unimodular \\cite{heller_1957}:\n\\begin{enumerate}[nosep, labelindent= 2pt, align= left, labelsep=0.4em, leftmargin=*]\n \\item Every entry in $\\vect{A}$ is $0$, $+1$, or $-1$.\n \\item Every column of $\\vect{A}$ contains at most two non-zero entries.\n \\item If two non-zero entries in a column of $\\vect{A}$ have the same sign, then the row of one is in $\\mathcal{C}$ , and the other in $\\mathcal{D}$ .\n \\item If two non-zero entries in a column of $\\vect{A}$ have opposite signs, then the rows of both are in $\\mathcal{C}$ , or both in $\\mathcal{D}$ .\n\\end{enumerate}\n\nIn the setting of resource allocation, we can write problem \\eqref{integer_num} equivalently as problem \\eqref{int_prog} where $\\mathbf{x} = \\text{vec}(\\mathbf{X})$, $\\mathbf{t} = \\text{vec}(\\mathbf{\\Theta^*})$, $\\mathbf{A}$ and $\\mathbf{b}$ are given as\n\\begin{equation}\n \\mathbf{A} = \n \\begin{bmatrix}\n \\mathds{1}_N^\\mathrm{T} \\otimes \\mathbf{I}_M\\\\\n \\mathbf{I}_N \\otimes \\mathds{1}_M^\\mathrm{T}\\\\\n \\end{bmatrix}\n \\qquad \n \\mathbf{b} = \n \\begin{bmatrix}\n \\vect{c}_t\\\\\n \\vect{d}_t\\\\\n \\end{bmatrix}\n\\label{def_Ab}\n\\end{equation}\n\nFor matrix $\\vect{A}$ given in \\eqref{def_Ab}, we can set $\\mathcal{C}$ to be the set of first $M$ rows corresponding to the capacity constraints, and $\\mathcal{D}$ to be the set of remaining rows corresponding to the demand constraints. Since this $\\vect{A}$ matrix satisfies the conditions of the proposition for sets $\\mathcal{C}$ and $\\mathcal{D}$, we obtain that $\\mathbf{A}$ is totally unimodular. Finally, since the vector $\\mathbf{b}$ is integral and the matrix $\\mathbf{A}$ is totally unimodular, the integrality gap is zero.\n\n\\section{Structured combinatorial multi-armed bandits}\n\\label{SCMAB}\n\nFor the ease of exposition, we present our proofs in the following setting with $d$ structured arms.\n\nWe consider a CMAB (Combinatorial Multi Armed Bandit) problem setting with $d$ arms associated with a set of independent random rewards $R_{t, i}$ for $i \\in [d]$ and $t \\in \\mathbb{N}$. Assume that the set of rewards $\\{R_{t, i} | t \\in \\mathbb{N}\\}$ associated with arm $i$ are $\\eta$-sub-Gaussian with mean $\\theta_i^* \\in [0, B]$. Let $\\vect{\\theta}^* = (\\theta_1, \\theta_2, \\dots, \\theta_d)$ be the vector of expectations of all arms and assume we know that it belongs to a structure set $\\mathcal{F} \\subseteq \\mathds{R}^{d}$. 
We further assume that we have access to an initial rough estimate $\\overline{\\vect{\\theta}}$ that satisfies $\\|\\overline{\\vect{\\theta}} - \\vect{\\theta}^* \\|_2 \\leq G$ (one can also set $\\overline{\\vect{\\theta}} = \\vect{0}$ and let $G$ be some number satisfying $\\|\\vect{\\theta}^* \\|_2 \\leq G$.).\n\nAt each round $t$, a subset of arms $\\mathcal{A}_t \\subseteq [d]$ are played and the individual outcomes of arms in $\\mathcal{A}_t$ are revealed.\nThe total reward at round $t$ is the sum of the rewards obtained from all arms in $\\mathcal{A}_t$. \nLetting $\\vect{e}_{i} \\in \\mathbb{R}^{d}$ denote the zero-one vector with a single one at the $i$-th entry, \ndefine the action vector for time $t$ as $\\vect{x}_{t} = \\sum_{i \\in \\mathcal{A}_t} \\vect{e}_{i}$. \nConsequently, $\\vect{x}_{t}$ becomes a zero-one vector with ones at entries $\\mathcal{A}_t$ and zeros everywhere else. \nThe problem contains a constraint that any valid action $\\vect{x}_{t}$ must belong to a (time-varying) constraint set $\\mathcal{X}_{t} \\subseteq \\{0, 1\\}^{d}$. \n\nThe optimum allocation $\\vect{x}^*_t$ at time $t$ is given by $\\vect{x}^*_t \\in \\argmax_{\\vect{x} \\in \\mathcal{X}_t} \\; \\langle \\vect{x}, \\vect{\\theta}^* \\rangle$. We denote by $H_t$ the history $\\{\\vect{x}_{\\tau}, (R_{\\tau, i})_{i \\in \\mathcal{A}_{\\tau}}\\}_{\\tau = 1}^{t-1}$ of observations available when choosing the next action $\\vect{x}_{t}$. Let $\\pi$ be a policy which takes the action $\\vect{x}_{t}$ using the history $H_t$. Then, the $T$ period regret of a policy $\\pi$ is the random variable $ \\mathcal{R}(T, \\pi) = \\sum_{t = 1}^{T} \\left[ \\langle \\vect{x}^*_t, \\vect{\\theta}^*\\rangle - \\langle \\vect{x}_{t}, \\vect{\\theta}^* \\rangle \\right] $.\n\n\\subsection{OFU for Structured Combinatorial Bandits}\n\nWe present our algorithm in the setting where the mean reward vector $\\vect{\\theta}^*$ belongs to a structure set $\\mathcal{F} \\subseteq \\mathds{R}^{d}$. 
Then, we analyze the algorithm to establish performance guarantees.\n\n\\begin{algorithm}\n\\caption{OFU for Structured Combinatorial Bandits}\\label{alg:cap}\n\\begin{algorithmic}\n\\Require horizon $T$, initial estimate $\\overline{\\vect{\\theta}} \\in \\mathbb{R}^d$ with $\\|\\overline{\\vect{\\theta}} - \\vect{\\theta}^* \\|_2 \\leq G$, parameters $\\delta, \\alpha > 0$, $\\gamma \\geq 1$.\n\\For{$t = 1, 2, \\dots, T$}\n\\State Find the least squares estimate $ \\widehat{\\vect{\\theta}}_t = \\argmin_{\\vect{\\theta} \\in \\mathcal{F}} \\left \\{ L_{2,t}(\\vect{\\theta}) + \\gamma \\|\\vect{\\theta} - \\overline{\\vect{\\theta}} \\|_2^2 \\right \\}$\n\\State Construct the confidence set $\\mathcal{C}_t = \\{ \\vect{\\theta} \\in \\mathcal{F} : \\|\\vect{\\theta} - \\widehat{\\vect{\\theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}\\}$\n\\State Compute the action vector $\\vect{x}_t = \\argmax_{\\vect{x} \\in \\mathcal{X}_t} \\max_{\\vect{\\theta} \\in \\mathcal{C}_t} \\; \\langle \\vect{x}, \\vect{\\theta} \\rangle$\n\\State Play the arms $\\mathcal{A}_t$ according to $\\vect{x}_t$ \n\\State Observe $(R_{\\tau, i})_{i \\in \\mathcal{A}_{\\tau}}$\n\\EndFor\n\\end{algorithmic}\n\\label{alg_comb}\n\\end{algorithm}\n\nThe algorithm maintains a confidence set $\\mathcal{C}_t$ that contains the true parameter $\\vect{\\theta}^*$ with high probability and chooses the action $\\vect{x}_t$ according to\n\\begin{equation}\n (\\vect{x}_t, \\widetilde{\\vect{\\theta}}_t) = \\argmax_{(\\vect{x}, \\vect{\\theta}) \\in \\mathcal{X}_t \\times \\mathcal{C}_t} \\; \\langle \\vect{x}, \\vect{\\theta} \\rangle\n\\end{equation}\n\nThe confidence sets that we construct are centered around the regualarized least square estimates defined next. We first let the cumulative squared prediciton error at time $t$ be\n\\begin{equation}\n L_{2,t}(\\vect{\\theta}) = \\sum_{\\tau=1}^{t-1} \\sum_{i \\in \\mathcal{A}_\\tau} (\\theta_i - R_{\\tau, i})^2\n\\end{equation}\nand define the regularized least squares estimate at time $t$ as\n\\begin{equation}\n \\widehat{\\vect{\\theta}}_t = \\argmin_{\\vect{\\theta} \\in \\mathcal{F}} \\left \\{ L_{2,t}(\\vect{\\theta}) + \\gamma \\|\\vect{\\theta} - \\overline{\\vect{\\theta}} \\|_2^2 \\right \\}\n\\end{equation}\n\nThen, the confidence sets take the form $\\mathcal{C}_t := \\{ \\vect{\\theta} \\in \\mathcal{F} : \\|\\vect{\\theta} - \\widehat{\\vect{\\theta}}_t \\|_{2, E_t} \\leq \\sqrt{\\beta_t}\\}$ where $\\beta_t$ is an appropriately chosen confidence parameter, and the regularized empirical 2-norm $\\| \\cdot \\|_{2, E_t}$ is defined by \n\\begin{equation*}\n \\| \\vect{\\Delta} \\|_{2, E_t}^2 := \\sum_{\\tau=1}^{t-1} \\sum_{i \\in \\mathcal{A}_\\tau} \\langle \\vect{\\Delta}, \\vect{e}_{i} \\rangle^2 + \\gamma \\| \\vect{\\Delta} \\|_{2}^2 = \\sum_{i=1}^{d} (n_{t, i} + \\gamma) (\\Delta_{i})^2\n\\end{equation*}\nwhere $n_{t, i} := \\sum_{\\tau=1}^{t-1} \\mathds{1} \\{i \\in \\mathcal{A}_t\\}$ denotes the number of times arm $i$ has been pulled before time $t$ (excluding time $t$). 
For future reference, we also define the (non-regularized) empirical 2-norm $\\| \\cdot \\|_{2, \\widetilde{E}_t}$ by \n\\begin{equation*}\n \\| \\vect{\\Delta} \\|^2_{2, \\widetilde{E}_t} := \\sum_{\\tau=1}^{t-1} \\sum_{i \\in \\mathcal{A}_\\tau} \\langle \\vect{\\Delta}, \\vect{e}_{i} \\rangle^2 = \\sum_{i=1}^{d} (n_{t, i}) (\\Delta_{i})^2\n\\end{equation*}\n\nNote that the regularized empirical 2-norm is related to (non-regularized) empirical 2-norm as\n\\begin{equation*}\n \\| \\vect{\\Delta} \\|^2_{2, E_t^2} = \\| \\vect{\\Delta} \\|^2_{2, \\widetilde{E}_t} + \\gamma \\| \\vect{\\Delta} \\|^2_{2}\n\\end{equation*}\n\nBy Lemma \\ref{lemma_lower_bound}, we establish that for any $\\vect{\\theta} \\in \\mathbb{R}^{d}$, \n\\begin{equation}\n \\mathds{P} \\left( L_{2,t}(\\vect{\\theta}) \\geq L_{2,t}(\\vect{\\theta}^*) + \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} - 4 \\eta^2 \\log(1\/\\delta) \\quad ,\\forall t \\in \\mathbb{N} \\right) \\geq 1 - \\delta\n\\end{equation}\n\nHence, with high probability, $\\vect{\\theta}$ can achieve lower squared error than $\\vect{\\theta}^*$ only if the empirical deviation $\\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t}$ is less than $8 \\eta^2 \\log(1\/\\delta)$. \n\nIn order to make this property hold uniformly for all $\\vect{\\theta}$ in a subset $\\mathcal{C}_t$ of $\\mathcal{F}$, we discretize $\\mathcal{C}_t$ at some discretization scale $\\alpha$ and apply a union bound for this finite discretization set. Let $\\mathcal{N}(\\mathcal{F}, \\alpha, \\| \\cdot \\|_{2})$ denote the $\\alpha$-covering number of $\\mathcal{F}$ in the 2-norm $\\| \\cdot \\|_{2}$, and let \n\\begin{equation}\n \\beta_t^*(\\delta, \\alpha, \\gamma) := 8 \\eta^2 \\log \\left(\\mathcal{N}(\\mathcal{F}, \\alpha, \\| \\cdot \\|_{2}) \/ \\delta \\right) + 2 \\alpha t \\sqrt{d} \\left [ 8 B + \\sqrt{8 \\eta^2 \\log(4d t^2\/\\delta)} \\right] + 4 \\gamma G^2\n\\end{equation}\n\nThen, Lemma \\ref{lemma_confidence} shows that if we set $\\beta_t = \\beta_t^*(\\delta, \\alpha)$, the confidence sets $\\mathcal{C}_t$ contain the true parameter $\\vect{\\theta}^*$ for all $t$ with high probability. Following the construction of the confidence sets, the next step is to obtain the overall regret guarantee. As given in Corollary \\ref{corr_regret_order}, we find that the regret of Algorithm \\ref{alg_comb} satisfies\n\\begin{equation}\n \\mathcal{R}(T, \\pi) = \\widetilde{\\mathcal{O}} \\left( \\sqrt{\\eta^2 d \\log \\left(\\mathcal{N}(\\mathcal{F}, T^{-1}, \\| \\cdot \\|_{2})\\right) T} \\right)\n\\end{equation}\n\n\\begin{remark}\nIn the literature of linear bandits, the typical observation model is such that each action $\\vect{x}_{t}$ results in a single reward feedback with mean $\\langle \\vect{x}_{t}, \\vect{\\theta}^* \\rangle$ and sub-Gaussianity parameter $d \\eta^2$ (since each arm has a $\\eta^2$-sub-Gaussian reward). Therefore, that observation model can only obtain $\\widetilde{\\mathcal{O}} ( \\sqrt{\\eta^2 d^2 \\log \\left(\\mathcal{N}(\\mathcal{F}, T^{-1}, \\| \\cdot \\|_{2})\\right) T} )$ regret guarantee. However, in our setting, the observations are sets of independent rewards $\\{R_{t, i}\\}_{i \\in \\mathcal{A}_t}$ where each element is $\\eta^2$-sub-Gaussian. 
Due to this richer nature of the observation model, we are able to achieve lower regret guarantees than only observing a single cumulative reward.\n\\end{remark}\n\n\\section{Proofs for Confidence Sets}\n\\label{pf_conf_sets}\n\n\\subsection{Martingale Exponential Inequalities}\n\nWe start with preliminary results on martingale exponential inequalities.\n\nConsider a sequence of random variables $(Z_n)_{n \\in \\mathds{N}}$ adapted to the filtration $(\\mathcal{H}_n)_{n \\in \\mathds{N}}$. Assume $\\mathds{E}[\\exp (\\lambda Z_i)]$ is finite for all $\\lambda$. Define the conditional mean $\\mu_i = \\mathds{E}[Z_i | \\mathcal{H}_{i-1}]$, and define the conditional cumulant generating function of the centered random variable $[Z_i - \\mu_i]$ by $\\psi_i(\\lambda) := \\log \\mathds{E}[ \\exp (\\lambda [Z_i - \\mu_i]) | \\mathcal{H}_{i-1}]$. Let \n\\begin{equation*}\n M_n(\\lambda) = \\exp \\left \\{ \\sum_{i=1}^{n} \\lambda [Z_i - \\mu_i] - \\psi_i(\\lambda) \\right \\} \n\\end{equation*}\n\n\\begin{lemma}\n$(M_n (\\lambda))_{n \\in \\mathds{N}}$ is a martingale with respect to the filtration $(\\mathcal{H}_n)_{n \\in \\mathds{N}}$, and $\\mathds{E}[M_n (\\lambda)] = 1$.\n\\label{martingale}\n\\end{lemma}\n\n\\begin{proof}\nBy definition, we have\n\\begin{equation*}\n \\mathds{E}[M_1 (\\lambda) | \\mathcal{H}_0] = \\mathds{E}[\\exp \\{ \\lambda [Z_1 - \\mu_1] - \\psi_1(\\lambda) \\} | \\mathcal{H}_0] = 1 \n\\end{equation*}\nThen, for any $n \\geq 2$,\n\\begin{align*}\n \\mathds{E}[M_n (\\lambda) | \\mathcal{H}_{n-1}] &= \\mathds{E}[M_{n-1}(\\lambda) \\exp \\{ \\lambda [Z_n - \\mu_n] - \\psi_n(\\lambda) \\} | \\mathcal{H}_{n-1}] \\\\\n &= M_{n-1}(\\lambda) \\mathds{E}[\\exp \\{ \\lambda [Z_n - \\mu_n] - \\psi_n(\\lambda) \\} | \\mathcal{H}_{n-1}]\\\\\n &= M_{n-1}(\\lambda)\\\\\n\\end{align*}\nsince $M_{n-1}(\\lambda)$ is a measurable function of the filtration $\\mathcal{H}_{n-1}$.\n\\end{proof}\n\n\\begin{lemma}\nFor all $x \\geq 0$ and $\\lambda \\geq 0$, \n\\begin{equation*}\n\\mathds{P} \\left(\\sum_{i=1}^{n} \\lambda Z_i \\leq x + \\sum_{i=1}^{n} [\\lambda \\mu_i + \\psi_i(\\lambda)] \\quad ,\\forall t \\in \\mathds{N} \\right) \\geq 1 - e^{-x}\n\\end{equation*}\n\\label{exp_martingale}\n\\end{lemma}\n\n\\begin{proof}\nFor any $\\lambda$, $(M_n (\\lambda))_{n \\in \\mathds{N}}$ is a martingale with respect to $(\\mathcal{H}_n)_{n \\in \\mathds{N}}$ and $\\mathds{E}[M_n (\\lambda)] = 1$ by Lemma \\ref{martingale}. For arbitrary $x \\geq 0$, define $\\tau_x = \\inf \\{ n\\geq 0 | M_n(\\lambda) \\geq x \\}$ and note that $\\tau_x$ is a stopping time corresponding to the first time $M_n$ crosses the boundary $x$. Since $\\tau$ is a stopping time with respect to $(\\mathcal{H}_n)_{n \\in \\mathds{N}}$, we have $\\mathds{E}[M_{\\tau_x \\wedge n}(\\lambda)] = 1$. 
Then, by Markov's inequality\n\\begin{equation*}\n x \\mathds{P} (M_{\\tau_x \\wedge n}(\\lambda) \\geq x) \\leq \\mathds{E}[M_{\\tau_x \\wedge n}(\\lambda)] = 1\n\\end{equation*}\n\nNoting that the event $\\{ M_{\\tau_x \\wedge n}(\\lambda) \\geq x \\} = \\bigcup_{k=1}^n \\{M_{k}(\\lambda) \\geq x\\} $, we have\n\\begin{equation*}\n\\mathds{P} \\left( \\bigcup_{k=1}^n \\{M_{k}(\\lambda) \\geq x\\} \\right) \\leq \\frac{1}{x}\n\\end{equation*}\n\nTaking the limit as $n \\to \\infty$, and applying monotone convergence theorem shows that $\\mathds{P} \\left( \\bigcup_{k=1}^\\infty \\{M_{k}(\\lambda) \\geq x\\} \\right) \\leq \\frac{1}{x}$ or $\\mathds{P} \\left( \\bigcup_{k=1}^\\infty \\{M_{k}(\\lambda) \\geq e^x\\} \\right) \\leq e^{-x}$. Then, by definition of $M_k (\\lambda)$, we conclude\n\\begin{equation*}\n \\mathds{P} \\left( \\bigcup_{k=1}^\\infty \\left \\{ \\sum_{i=1}^{n} \\lambda [Z_i - \\mu_i] - \\psi_i(\\lambda) \\geq x \\right \\} \\right) \\leq e^{-x}\n\\end{equation*}\n\n\\end{proof}\n\n\\subsection{Proofs for the construction of confidence sets}\n\n\\begin{lemma} For any $\\delta > 0$ and $\\vect{\\theta} \\in \\mathbb{R}^{d}$,\n\\begin{equation}\n \\mathds{P} \\left( L_{2,t}(\\vect{\\theta}) \\geq L_{2,t}(\\vect{\\theta}^*) + \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} - 4 \\eta^2 \\log(1\/\\delta) \\quad ,\\forall t \\in \\mathbb{N} \\right) \\geq 1 - \\delta\n\\end{equation}\n\n\\label{lemma_lower_bound}\n\\end{lemma}\n\n\\begin{proof}\n\nLet $\\mathcal{H}_{t-1}$ be the $\\sigma$-algebra generated by $(H_t, \\mathcal{A}_t)$ and let $\\mathcal{H}_0 = \\sigma(\\emptyset, \\Omega)$. Then, define $\\epsilon_{t, i} := R_{t, i} - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta}^*\\rangle$ for all $t \\in \\mathds{N}$ and $i \\in \\mathcal{A}_t$. By previous assumptions, $\\mathds{E} [\\epsilon_{t, i} | \\mathcal{H}_{t-1}] = 0$ and $\\mathds{E} [\\exp (\\lambda \\epsilon_{t, u}) | \\mathcal{H}_{t-1} ] \\leq \\exp \\left(\\frac{\\lambda^2 \\eta^2}{2} \\right)$ for all $t$.\n\nDefine $Z_{t, i} = (R_{t, i} - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta}^*\\rangle)^2 - (R_{t, i} - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta}\\rangle)^2$. 
Then, we have\n\\begin{align*}\nZ_{t, i} &= - (\\langle \\mathbf{X}_{t, i}, \\vect{\\theta}\\rangle - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta}^*\\rangle)^2 + 2 \\epsilon_{t, i} (\\langle \\mathbf{X}_{t, i}, \\vect{\\theta}\\rangle - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta}^*\\rangle)\\\\\n&= - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle^2 + 2 \\epsilon_{t, i} \\langle \\mathbf{X}_{t, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle\n\\end{align*}\n\nTherefore, the conditional mean and conditional cumulant generating function satisfy\n\\begin{align*}\n \\mu_{t, i} &:= \\mathds{E}[Z_{t, i} | \\mathcal{H}_{t-1}] = - \\langle \\mathbf{X}_{t, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle^2\\\\\n \\psi_t(\\lambda) &:= \\log \\mathds{E}[ \\exp (\\lambda [Z_{t, i} - \\mu_{t, i}]) | \\mathcal{H}_{t-1}]\\\\\n &= \\log \\mathds{E}[ \\exp (2 \\lambda \\langle \\mathbf{X}_{t, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle \\epsilon_{t, i} ) | \\mathcal{H}_{t-1}]\\\\\n &\\leq \\frac{(2 \\lambda \\langle \\mathbf{X}_{t, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle)^2 \\eta^2}{2}\n\\end{align*}\n\nApplying Lemma \\ref{exp_martingale} shows that for all $x \\geq 0$ and $\\lambda \\geq 0$,\n\\begin{equation*}\n\\mathds{P} \\left(\\sum_{\\tau=1}^{t-1} \\sum_{u \\in \\mathcal{A}_\\tau} Z_{\\tau, i} \\leq \\frac{x}{ \\lambda} + \\sum_{\\tau=1}^{t-1} \\sum_{u \\in \\mathcal{A}_\\tau} \\langle \\mathbf{X}_{\\tau, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle^2 (2 \\lambda \\eta^2 - 1) \\quad ,\\forall t \\in \\mathds{N} \\right) \\geq 1 - e^{-x}\n\\end{equation*}\n\nNote that we have $\\sum_{\\tau=1}^{t-1} \\sum_{u \\in \\mathcal{A}_\\tau} Z_{\\tau, i} = L_{2,t}(\\vect{\\theta}^*) - L_{2,t}(\\vect{\\theta})$,\n\nand $\\sum_{\\tau=1}^{t-1} \\sum_{u \\in \\mathcal{A}_\\tau} \\langle \\mathbf{X}_{\\tau, i}, \\vect{\\theta} - \\vect{\\theta}^*\\rangle^2 = \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t}$. \n\nThen, choosing $\\lambda = \\frac{1}{4 \\eta^2}$ and $x = \\log \\frac{1}{\\delta}$ gives\n\\begin{equation*}\n \\mathds{P} \\left( L_{2,t}(\\vect{\\theta}) \\geq L_{2,t}(\\vect{\\theta}^*) + \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} - 4 \\eta^2 \\log(1\/\\delta) \\quad ,\\forall t \\in \\mathbb{N} \\right) \\geq 1 - \\delta\n\\end{equation*}\n\n\\end{proof}\n\n\\begin{lemma}\nIf $\\vect{\\theta}^{\\alpha} \\in \\mathcal{F}^{\\alpha}$ satisfies $\\|\\vect{\\theta} - \\vect{\\theta}^{\\alpha}\\|_{2} \\leq \\alpha$, then with probability at least $1 - \\delta$,\n\\begin{equation}\n \\left | \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta}^{\\alpha} \\|^2_{2, \\widetilde{E}_t} - \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} + L_{2,t}(\\vect{\\theta}) - L_{2,t}(\\vect{\\theta}^{\\alpha}) \\right | \\leq \\alpha t d \\left [ 8 B + \\sqrt{8 \\eta^2 \\log(4d t^2\/\\delta)} \\right]\n\\label{eqn_of_discr_lemma}\n\\end{equation}\n\\label{discr_lemma}\n\\end{lemma}\n\n\\begin{proof}\n\nSince any two $\\vect{\\theta}, \\vect{\\theta}^{\\alpha} \\in \\mathcal{F}$ satisfy $\\|\\vect{\\theta} - \\vect{\\theta}^{\\alpha}\\|_{2} \\leq \\sqrt{d} B$, it is enough to consider $\\alpha \\leq \\sqrt{d} B$. 
We find \n\\begin{align*}\n \\sum_{ i = 1 }^{d} | \\langle \\vect{\\theta}, \\vect{e}_{i} \\rangle^2 - \\langle \\vect{\\theta}^{\\alpha}, \\vect{e}_{i} \\rangle^2 | &\\leq \\max_{ \\|\\vect{\\Delta}\\|_2 \\leq \\alpha } \\left \\{\\sum_{ i = 1 }^{d} \\left | \\theta_{i}^2 - (\\theta_{i}+\\Delta_i)^2 \\right | \\right \\}\\\\\n &= \\max_{ \\|\\vect{\\Delta}\\|_2 \\leq \\alpha } \\left \\{\\sum_{ i = 1 }^{d}\\left | 2 \\theta_{i} \\Delta_i + \\Delta_i^2 \\right | \\right \\}\\\\\n &\\leq \\max_{ \\|\\vect{\\Delta}\\|_2 \\leq \\alpha } \\left \\{ 2 \\sum_{ i = 1 }^{d}\\left | \\theta_{i} \\Delta_i \\right | + \\sum_{ i = 1 }^{d} \\Delta_i^2 \\right \\}\\\\\n &\\leq \\max_{ \\|\\vect{\\Delta}\\|_2 \\leq \\alpha } \\left \\{ 2 B \\|\\vect{\\Delta}\\|_1 + \\|\\vect{\\Delta}\\|_2^2 \\right \\}\\\\\n &\\leq 2 B \\sqrt{d} \\alpha + \\alpha^2\n\\end{align*}\nTherefore, it implies\n\\begin{align*}\n \\sum_{ i = 1 }^{d} | \\langle \\vect{\\theta} - \\vect{\\theta}^*, \\vect{e}_{i} \\rangle^2 - \\langle \\vect{\\theta}^{\\alpha} - \\vect{\\theta}^*, \\vect{e}_{i} \\rangle^2 | &= \\sum_{ i = 1 }^{d} \\left| \\langle \\vect{\\theta}, \\vect{e}_{i} \\rangle^2 - \\langle \\vect{\\theta}^{\\alpha}, \\vect{e}_{i} \\rangle^2 + 2 \\langle \\vect{\\theta}^*, \\vect{e}_{i} \\rangle \\langle \\vect{\\theta}^{\\alpha} - \\vect{\\theta}, \\vect{e}_{i} \\rangle \\right| \\\\\n &\\leq 2 B \\sqrt{d} \\alpha + \\alpha^2 + 2 B \\|\\vect{\\theta} - \\vect{\\theta}^{\\alpha}\\|_{1} \\\\\n &\\leq 4 B \\sqrt{d} \\alpha + \\alpha^2\n\\end{align*}\n\nSimilarly, for any $t$, we have\n\\begin{align*}\n \\sum_{ i = 1 }^{d} | \\left( R_{t,i} - \\langle \\vect{\\theta}, \\vect{e}_{i} \\rangle \\right)^2 -\\left( R_{t,i} - \\langle \\vect{\\theta}^{\\alpha}, \\vect{e}_{i} \\rangle \\right)^2 | &= \\sum_{ i = 1 }^{d} \\left| 2 R_{t,i} \\langle \\vect{\\theta}^{\\alpha} - \\vect{\\theta}, \\vect{e}_{i} \\rangle + \\langle \\vect{\\theta}, \\vect{e}_{i} \\rangle^2 - \\langle \\vect{\\theta}^{\\alpha}, \\vect{e}_{i} \\rangle^2 \\right| \\\\\n &\\leq 2 \\sum_{ i = 1 }^{d} \\left| R_{t,i} \\right | \\left| \\langle \\vect{\\theta}^{\\alpha} - \\vect{\\theta}, \\vect{e}_{i} \\rangle \\right| + 2 B \\sqrt{d} \\alpha + \\alpha^2\\\\\n &\\leq 2 \\| \\vect{\\theta}^{\\alpha} - \\vect{\\theta}\\|_{\\infty} \\sum_{ i = 1 }^{d} | R_{t,i} | + 2 B \\sqrt{d} \\alpha + \\alpha^2\\\\\n &\\leq 2 \\alpha \\sum_{ i = 1 }^{d} | R_{t,i} | + 2 B \\sqrt{d} \\alpha + \\alpha^2\n\\end{align*}\n\nSumming over $t$ and noting that $\\mathcal{A}_t \\subseteq [d]$, the left hand side of \\eqref{eqn_of_discr_lemma} is bounded by\n\\begin{equation*}\n \\sum_{\\tau = 1}^{t-1} \\left( \\frac{1}{2} \\left[ 4 B \\sqrt{d} \\alpha + \\alpha^2 \\right] + 2 \\alpha \\sum_{ i = 1 }^{d} | R_{t,i} | + 2 B \\sqrt{d} \\alpha + \\alpha^2 \\right) \\leq \\alpha \\sum_{\\tau = 1}^{t-1} \\left(6 B \\sqrt{d} + 2 \\sum_{ i = 1 }^{d} | R_{\\tau,i} | \\right)\n\\end{equation*}\n\nBecause $\\epsilon_{\\tau, i}$ is $\\eta$-sub-Gaussian, $\\mathds{P} \\left( |\\epsilon_{\\tau, i}| > \\sqrt{2 \\eta^2 \\log(2\/\\delta) }\\right) \\leq \\delta$. By a union bound, $\\mathds{P} \\left( \\exists \\tau, i \\text{ s.t. } |\\epsilon_{\\tau, i}| > \\sqrt{2 \\eta^2 \\log(4d \\tau^2\/\\delta) }\\right) \\leq \\frac{\\delta d}{2} \\sum_{\\tau = 1}^{\\infty} \\frac{1}{d \\tau^2} \\leq \\delta$. Since $| R_{\\tau,i} | \\leq B + | \\epsilon_{\\tau, i} |$, we have $| R_{\\tau,i} | \\leq B + \\sqrt{2 \\eta^2 \\log(4d \\tau^2\/\\delta)}$ with probability at least $1 - \\delta$. 
Consequently, the bound for the discretization error becomes\n\\begin{equation*}\n\\alpha t d \\left [ 8 B + 2 \\sqrt{2 \\eta^2 \\log(4d t^2\/\\delta)} \\right]\n\\end{equation*}\n\n\n\n\\end{proof}\n\n\n\\begin{lemma} For any $\\delta > 0$, $\\alpha > 0$ and $\\gamma > 0$, if\n\\begin{equation}\n \\mathcal{C}_t = \\{ \\vect{\\theta} \\in \\mathcal{F} : \\|\\vect{\\theta} - \\widehat{\\vect{\\theta}}_t \\|_{2, E_t} \\leq \\sqrt{ \\beta_t^*(\\delta, \\alpha, \\gamma)}\\}\n\\end{equation}\nfor all $t \\in \\mathbb{N}$, then\n\\begin{equation}\n \\mathds{P} \\left( \\vect{\\theta}^* \\in \\mathcal{C}_t \\quad ,\\forall t \\in \\mathbb{N} \\right) \\geq 1 - 2\\delta\n\\end{equation}\n\\label{lemma_confidence}\n\\end{lemma}\n\n\\begin{proof}\n\nLet $\\mathcal{F}^{\\alpha} \\subset \\mathcal{F}$ be an $\\alpha$-cover of $\\mathcal{F}$ in the 2-norm so that for any $\\vect{\\theta} \\in \\mathcal{F}$, there exists $\\vect{\\theta}^{\\alpha} \\in \\mathcal{F}^{\\alpha}$ such that $\\|\\vect{\\theta} - \\vect{\\theta}^{\\alpha}\\|_{2} \\leq \\alpha$. By a union bound applied to Lemma \\ref{lemma_lower_bound}, with probability at least $1 - \\delta$,\n\\begin{equation*}\n L_{2,t}(\\vect{\\theta}^{\\alpha}) - L_{2,t}(\\vect{\\theta}^*) \\geq \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta}^{\\alpha} \\|^2_{2, \\widetilde{E}_t} - 4 \\eta^2 \\log(|\\mathcal{F}^{\\alpha}|\/\\delta) \\quad ,\\forall \\vect{\\theta}^{\\alpha} \\in \\mathcal{F}^{\\alpha}, t \\in \\mathbb{N}\n\\end{equation*}\n\nTherefore, with probability at least $1 - \\delta$, for all $\\vect{\\theta} \\in \\mathcal{F}, t \\in \\mathbb{N}$,\n\\begin{align*}\n L_{2,t}(\\vect{\\theta}) - L_{2,t}(\\vect{\\theta}^*) \\geq & \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} - 4 \\eta^2 \\log(|\\mathcal{F}^{\\alpha}|\/\\delta) \\\\\n &+ \\min_{\\vect{\\theta}^{\\alpha} \\in \\mathcal{F}^{\\alpha}} \\left \\{ \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta}^{\\alpha} \\|^2_{2, \\widetilde{E}_t} - \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} + L_{2,t}(\\vect{\\theta}) - L_{2,t}(\\vect{\\theta}^{\\alpha}) \\right \\}\n\\end{align*}\n\nBy Lemma \\ref{discr_lemma}, with probability at least $1 - 2 \\delta$,\n\\begin{equation*}\n L_{2,t}(\\vect{\\theta}) - L_{2,t}(\\vect{\\theta}^*) \\geq \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} - D_t\n\\end{equation*}\nwhere $D_t := 4 \\eta^2 \\log(|\\mathcal{F}^{\\alpha}|\/\\delta) + \\alpha t d \\left [ 8 B + \\sqrt{8 \\eta^2 \\log(4d t^2\/\\delta)} \\right]$.\n\nAdding the regularization terms to both sides, we obtain\n\\begin{equation*}\n L_{2,t}(\\vect{\\theta}) + \\gamma \\|\\vect{\\theta} - \\overline{\\vect{\\theta}} \\|_2^2 - L_{2,t}(\\vect{\\theta}^*) - \\gamma \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 \\geq \\frac{1}{2} \\|\\vect{\\theta}^* - \\vect{\\theta} \\|^2_{2, \\widetilde{E}_t} + \\gamma \\|\\vect{\\theta} - \\overline{\\vect{\\theta}} \\|_2^2 - D_t - \\gamma \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 \n\\end{equation*}\n\nNote the definition of the least square estimate $\\widehat{\\vect{\\theta}}_t = \\argmin_{\\vect{\\theta} \\in \\mathcal{F}} \\left \\{ L_{2,t}(\\vect{\\theta}) + \\gamma \\|\\vect{\\theta} - \\overline{\\vect{\\theta}} \\|_2^2 \\right \\}$. 
By letting $\\vect{\\theta} = \\widehat{\\vect{\\theta}}_t$, the left hand side becomes non-positive, and hence \n\\begin{equation*}\n\\frac{1}{2} \\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|^2_{2, \\widetilde{E}_t} \\leq D_t + \\gamma \\left( \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 - \\|\\widehat{\\vect{\\theta}}_t - \\overline{\\vect{\\theta}} \\|_2^2 \\right)\n\\end{equation*}\n\nThen, \n\\begin{equation*}\n\\frac{1}{2} \\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|^2_{2, \\widetilde{E}_t} + \\gamma \\left( \\|\\widehat{\\vect{\\theta}}_t - \\overline{\\vect{\\theta}} \\|_2^2 + \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 \\right) \\leq D_t + 2 \\gamma \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2\n\\end{equation*}\n\nBy triangle inequality we have $ \\|\\widehat{\\vect{\\theta}}_t - \\overline{\\vect{\\theta}} \\|_2 + \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2 \\geq \\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|_2$. Taking squares on both sides, we obtain $ \\|\\widehat{\\vect{\\theta}}_t - \\overline{\\vect{\\theta}} \\|_2^2 + \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 \\geq \\frac{1}{2} \\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|_2^2$. Then, noting that $\\| \\vect{\\Delta} \\|^2_{2, E_t^2} = \\| \\vect{\\Delta} \\|^2_{2, \\widetilde{E}_t} + \\gamma \\| \\vect{\\Delta} \\|^2_{2}$, we have\n\\begin{equation*}\n\\frac{1}{2} \\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|^2_{2, E_t} \\leq D_t + 2 \\gamma \\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2\n\\end{equation*}\n\nLastly, using the inequality $\\|\\vect{\\theta}^* - \\overline{\\vect{\\theta}} \\|_2^2 \\leq G^2$,\n\\begin{equation*}\n\\|\\vect{\\theta}^* - \\widehat{\\vect{\\theta}}_t \\|^2_{2, E_t} \\leq 8 \\eta^2 \\log(|\\mathcal{F}^{\\alpha}|\/\\delta) + 2 \\alpha t d \\left [ 8 B + \\sqrt{8 \\eta^2 \\log(4d t^2\/\\delta)} \\right] + 4 \\gamma G^2\n\\end{equation*}\n\nTaking the infimum over the size of $\\alpha$-covers, we obtain the final result.\n\n\\end{proof}\n\n\n\\section{Proofs for Regret Bounds}\n\\label{pf_regrets}\n\nThroughout this section we will use the shorthand $\\beta_t = \\beta_t^*(\\delta, \\alpha, \\gamma)$.\n\nWe start with the definitions of weighted inner product and norms.\n\n\\begin{definition}\nFor a symmetric positive definite matrix $\\vect{W} \\in \\mathbb{R}^{d \\times d}$, define\n\\begin{itemize}\n \\item $\\vect{W}$-inner product of two vectors $\\vect{x}, \\vect{y} \\in \\mathbb{R}^{d}$ as $\\langle \\vect{x}, \\vect{y} \\rangle_{\\vect{W}} := \\langle \\vect{W} \\vect{x}, \\vect{y} \\rangle$\n \\item $\\vect{W}$-norm of a vector $\\vect{x} \\in \\mathbb{R}^{d}$ as $\\|\\vect{x}\\|_{\\vect{W}} := \\sqrt{\\langle \\vect{x}, \\vect{x} \\rangle_{\\vect{W}}}$.\n\\end{itemize}\n\\end{definition}\n\nThen, the regularized empirical 2-norm of a vector $\\vect{z} \\in \\mathbb{R}^d$ can be written as \n\\begin{equation}\n \\| \\vect{z} \\|_{2, E_t} = \\| \\vect{z} \\|_{\\vect{A}_t}\n\\end{equation}\nwhere $\\vect{A}_t$ is a diagonal matrix with diagonal entries $\\{ (n_{t, 1} + \\gamma), (n_{t, 2} + \\gamma), \\dots, (n_{t, d} + \\gamma) \\}$. \n\nRecall that $n_{t, i} = \\sum_{\\tau=1}^{t-1} \\mathds{1} \\{i \\in \\mathcal{A}_t\\}$ denotes the number of times arm $i$ has been pulled before time $t$ (excluding time $t$).\n\n\\begin{lemma}\nLet $x \\in \\mathcal{X}$ and $\\Theta \\in \\mathcal{F}_t$. 
Then,\n\\begin{equation}\n |\\langle \\vect{\\theta} - \\widehat{\\vect{\\theta}}_t^{\\text{LS}}, \\vect{x} \\rangle| \\leq w \\sqrt{\\beta_t}\n\\end{equation}\nwhere $w = \\| \\vect{x} \\|_{\\vect{A}_t^{-1}}$ is the \"confidence width\" of an action $\\vect{x}$ at time $t$.\n\\label{lemma_conf_width}\n\\end{lemma}\n\n\\begin{proof} Let $\\vect{\\Delta} = \\vect{\\theta} - \\widehat{\\vect{\\theta}}_t^{\\text{LS}}$. Then,\n\\begin{align*}\n |\\langle \\vect{\\Delta} , \\vect{x} \\rangle| &= |\\vect{\\Delta} ^\\textrm{T} \\vect{x} |\\\\\n &= |\\vect{\\Delta}^\\textrm{T} \\vect{A}_t^{1\/2} \\vect{A}_t^{-1\/2} \\vect{x}|\\\\\n &= |(\\vect{A}_t^{1\/2}\\vect{\\Delta})^\\textrm{T} \\vect{A}_t^{-1\/2} \\vect{x}|\\\\\n &\\leq \\| \\vect{A}_t^{1\/2}\\vect{\\Delta} \\| \\|\\vect{A}_t^{-1\/2} \\vect{x}\\|\\\\\n &= \\| \\vect{\\Delta} \\|_{\\vect{A}_t} \\| \\vect{x} \\|_{\\vect{A}_t^{-1}}\\\\\n &= w \\| \\vect{\\Delta} \\|_{2, E_t}\\\\\n &\\leq w \\sqrt{\\beta_t} \n\\end{align*}\n\\end{proof}\n\nDefine the widths of the allocations as\n\\begin{equation}\n w_t := \\| \\vect{x}_t \\|_{\\vect{A}_t^{-1}} \\qquad \\text{and} \\qquad w_{t, i} := \\| \\vect{e}_i \\|_{\\vect{A}_t^{-1}}\n\\label{def_widths}\n\\end{equation}\n\n\\begin{lemma}\nFor any $t \\in \\mathbb{N}$, we have the identity\n\\begin{equation*}\n w_t^2 = \\sum_{i \\in \\mathcal{A}_t} w_{t, i}^2\n\\end{equation*}\n\\label{lemma_width_identity}\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n w_t^2 &= \\langle \\vect{x}_t, \\vect{x}_t \\rangle_{\\vect{A}_t^{-1}}\\\\\n &= \\left\\langle \\vect{A}_t^{-1} \\sum_{i \\in \\mathcal{A}_t} \\vect{e}_i, \\sum_{i \\in \\mathcal{A}_t} \\vect{e}_i \\right\\rangle \\\\\n &= \\sum_{i \\in \\mathcal{A}_t} \\sum_{j \\in \\mathcal{A}_t} \\left\\langle \\vect{A}_t^{-1} \\vect{e}_i, \\vect{e}_j \\right\\rangle\\\\\n &= \\sum_{i \\in \\mathcal{A}_t} \\left\\langle \\vect{A}_t^{-1} \\vect{e}_i, \\vect{e}_i \\right\\rangle\\\\\n &= \\sum_{i \\in \\mathcal{A}_t} w_{t, i}^2\n\\end{align*}\nwhere the penultimate step follows because $\\left\\langle \\vect{A}_t^{-1} \\vect{e}_i, \\vect{e}_j \\right\\rangle = 0$ for $i \\neq j$.\n\\end{proof}\n\n\\begin{lemma}\nLet the regret at time $t$ be $r_t = \\langle \\vect{x}_t^*, \\vect{\\theta}^* \\rangle - \\langle \\vect{x}_t, \\vect{\\theta}^* \\rangle$. If $\\vect{\\theta}^* \\in \\mathcal{C}_t$, then\n\\begin{equation*}\n r_t \\leq 2 w_t \\sqrt{\\beta_t}\n\\end{equation*}\n\\label{lemma_regret_ub}\n\\end{lemma}\n\n\\begin{proof}\nBy the choice of $(\\vect{x}_t, \\widetilde{\\vect{\\theta}}_t)$, we have\n\\begin{align*}\n \\langle \\vect{x}_t, \\widetilde{\\vect{\\theta}}_t \\rangle = \\max_{(\\vect{x}, \\vect{\\theta}) \\in \\mathcal{X}_t \\times \\mathcal{C}_t} \\langle \\vect{x}, \\vect{\\theta} \\rangle \\geq \\langle \\vect{x}_t^*, \\vect{\\theta}^* \\rangle\n\\end{align*}\nwhere the inequality uses $\\vect{\\theta}^* \\in \\mathcal{C}_t$. 
Hence,\n\\begin{align*}\n r_t &= \\langle \\vect{x}_t^*, \\vect{\\theta}^* \\rangle - \\langle \\vect{x}_t, \\vect{\\theta}^* \\rangle\\\\\n &\\leq \\langle \\vect{x}_t, \\widetilde{\\vect{\\theta}}_t - \\vect{\\theta}^* \\rangle\\\\\n &= \\langle \\vect{x}_t, \\widetilde{\\vect{\\theta}}_t - \\widehat{\\vect{\\theta}}_t^{\\text{LS}} \\rangle + \\langle \\vect{x}_t, \\widehat{\\vect{\\theta}}_t^{\\text{LS}} - \\vect{\\theta}^* \\rangle\\\\\n &\\leq 2 w_t \\sqrt{\\beta_t}\n\\end{align*}\n\nwhere the last step follows from Lemma \\ref{lemma_conf_width}.\n\\end{proof}\n\nNext, we show that the confidence widths do not grow too fast.\n\n\\begin{lemma}\nFor every $t$,\n\\begin{equation}\n \\log \\det \\vect{A}_{t+1} = d \\log \\gamma + \\sum_{\\tau = 1}^{t} \\sum_{i \\in \\mathcal{A}_\\tau} \\log(1 + w_{\\tau,i}^2) \n\\end{equation}\n\\label{lemma_logdet_multiply}\n\\end{lemma}\n\n\\begin{proof} By the definition of $\\vect{A}_t$, we have\n\\begin{align*}\n \\det \\vect{A}_{t+1} &= \\det \\left( \\vect{A}_{t} + \\sum_{i \\in \\mathcal{A}_t} \\vect{e}_{i} \\vect{e}_{i}^\\textrm{T} \\right)\\\\\n &= \\det \\left( \\vect{A}_{t}^{1\/2} \\left( \\vect{I} + \\vect{A}_{t}^{-1\/2}\\left( \\sum_{i \\in \\mathcal{A}_t} \\vect{e}_{i} \\vect{e}_{i}^\\textrm{T} \\right) \\vect{A}_{t}^{-1\/2} \\right) \\vect{A}_{t}^{1\/2} \\right) \\\\\n &= \\det (\\vect{A}_{t}) \\det\\left( \\vect{I} + \\sum_{i \\in \\mathcal{A}_t} \\vect{A}_{t}^{-1\/2} \\vect{e}_{i} \\vect{e}_{i}^\\textrm{T} \\vect{A}_{t}^{-1\/2} \\right)\n\\end{align*}\n\nEach $\\vect{A}_{t}^{-1\/2} \\vect{e}_{i} \\vect{e}_{i}^\\textrm{T} \\vect{A}_{t}^{-1\/2}$ term has zeros everywhere except one entry on the diagonal and that non-zero entry is equal to $w_{t, i}^2$. Furthermore, the location of the non-zero entry is different for each term. Hence,\n\\begin{equation*}\n \\det\\left( \\vect{I} + \\sum_{i \\in \\mathcal{A}_t} \\vect{A}_{t}^{-1\/2} \\vect{e}_{i} \\vect{e}_{i}^\\textrm{T} \\vect{A}_{t}^{-1\/2} \\right) = \\prod_{i \\in \\mathcal{A}_t} (1 + w_{t, i}^2)\n\\end{equation*}\n\nTherefore, we have\n\\begin{equation*}\n \\log \\det \\vect{A}_{t+1} = \\log\\det \\vect{A}_{t} + \\sum_{i \\in \\mathcal{A}_t} \\log(1 + w_{t, i}^2)\n\\end{equation*}\nSince $\\vect{A}_1 = \\gamma \\vect{I}$, we have $\\log \\det \\vect{A}_1 = d \\log \\gamma$ and the result follows by induction.\n\\end{proof}\n\n\\begin{lemma}\nFor all $t$, $\\log \\det \\vect{A}_t \\leq d \\log (t + \\gamma - 1)$.\n\\label{lemma_logdet_upper}\n\\end{lemma}\n\\begin{proof}\nNoting that $\\vect{A}_t$ is a diagonal matrix with diagonals $(n_{t, i} + \\gamma)$, and that each arm can be pulled at most once per round,\n\\begin{align*}\n \\tr \\vect{A}_t &= d \\gamma + \\sum_{i = 1}^{d} n_{t, i} \\\\\n &\\leq d \\gamma + d (t-1) \\\\\n &= d(t + \\gamma - 1)\n\\end{align*}\nNow, recall that $\\tr \\vect{A}_t$ equals the sum of the eigenvalues of $\\vect{A}_t$. On the other hand, $\\det(\\vect{A}_t)$ equals the product of the eigenvalues. Since $\\vect{A}_t$ is positive definite, its eigenvalues are all positive. Subject to these constraints, $\\det(\\vect{A}_t)$ is maximized when all the eigenvalues are equal, so that $\\det(\\vect{A}_t) \\leq (\\tr \\vect{A}_t \/ d)^d \\leq (t + \\gamma - 1)^d$, and the desired bound follows by taking logarithms. \n\\end{proof}\n\n\\begin{lemma}\nLet $\\gamma \\geq 1$. Then, for all $t$, we have\n\\begin{equation*}\n \\sum_{\\tau = 1}^{t} \\sum_{i \\in \\mathcal{A}_\\tau} w_{\\tau, i}^2 \\leq 2 d \\log \\left(1 + \\frac{t}{\\gamma} \\right)\n\\end{equation*}\n\\label{lemma_width_sum_ub}\n\\end{lemma}\n\n\\begin{proof}\nNote that $0 \\leq w_{\\tau, i}^2 \\leq 1$, if $\\gamma \\geq 1$.
Using the inequality $y \\leq 2 \\log(1 + y)$ for $0 \\leq y \\leq 1$, we have\n\\begin{align*}\n \\sum_{\\tau = 1}^{t} \\sum_{i \\in \\mathcal{A}_\\tau} w_{\\tau, i}^2 &\\leq 2 \\sum_{\\tau = 1}^{t} \\sum_{i \\in \\mathcal{A}_\\tau} \\log (1 + w_{\\tau, i}^2)\\\\\n &= 2 \\log \\det \\vect{A}_{t+1} - 2 d \\log \\gamma\\\\\n &\\leq 2 d \\log \\left(1 + \\frac{t}{\\gamma} \\right)\n\\end{align*}\nwhere the last two lines follow from Lemmas \\ref{lemma_logdet_multiply} and \\ref{lemma_logdet_upper} respectively.\n\\end{proof}\n\n\\begin{lemma}\nLet the instantaneous regret at time $t$ be $r_t = \\langle \\vect{x}_t^*, \\vect{\\theta}^* \\rangle - \\langle \\vect{x}_t, \\vect{\\theta}^* \\rangle$. If $\\gamma \\geq 1$ and $\\vect{\\theta}^* \\in \\mathcal{C}_t$ for all $t \\leq T$, then\n\\begin{equation*}\n \\sum_{t = 1}^{T} r_t^2 \\leq 8 \\beta_T d \\log \\left(1 + \\frac{T}{\\gamma} \\right)\n\\end{equation*}\n\\label{lemma_square_regret}\n\\end{lemma}\n\n\\begin{proof}\n Assuming that $\\vect{\\theta}^* \\in \\mathcal{C}_t$ for all $t \\leq T$,\n\\begin{align*}\n \\sum_{t = 1}^{T} r_t^2 &\\leq \\sum_{t = 1}^{T} 4 w_t^2 \\beta_t\\\\\n &\\leq 4 \\beta_T \\sum_{t = 1}^{T} w_t^2\\\\\n &= 4 \\beta_T \\sum_{t = 1}^{T} \\sum_{i \\in \\mathcal{A}_t} w_{t, i}^2 \\\\\n &\\leq 8 \\beta_T d \\log \\left(1 + \\frac{T}{\\gamma} \\right)\n\\end{align*}\nwhere the first step follows from Lemma \\ref{lemma_regret_ub}, second step follows from the definition of $\\beta_t$, third step uses the identity given in Lemma \\ref{lemma_width_identity} and the last step is due to Lemma \\ref{lemma_width_sum_ub}.\n\\end{proof}\n\n\\begin{theorem}\nLet $\\gamma \\geq 1$. Then, with probability at least $1 - 2 \\delta$, the T period regret is bounded by\n\\begin{equation*}\n \\mathcal{R}(T, \\pi) \\leq \\sqrt{ 8 d \\beta_T^* (\\delta, \\alpha, \\gamma) T \\log \\left(1 + \\frac{T}{\\gamma} \\right) }\n\\end{equation*}\nwhere\n\\begin{equation}\n \\beta_T^*(\\delta, \\alpha, \\gamma) = 8 \\eta^2 \\log \\left(\\mathcal{N}(\\mathcal{F}, \\alpha, \\| \\cdot \\|_{2}) \/ \\delta \\right) + 2 \\alpha d T \\left(8 B + \\sqrt{8 \\eta^2 \\log(4 d T^2 \/ \\delta)} \\right) + 4 \\gamma G^2\n\\end{equation}\n\\label{thm_regret}\n\\end{theorem}\n\n\\begin{proof}\nAssuming that $\\vect{\\theta}^* \\in \\mathcal{C}_t$ for all $t \\leq T$,\n\\begin{align*}\n \\mathcal{R}(T, \\pi) &= \\sum_{t = 1}^{T} r_t \\\\\n &\\leq \\left( T \\sum_{t = 1}^{T} r_t^2 \\right)^{1\/2}\\\\\n &\\leq \\left( 8 d \\beta_T T \\log \\left(1 + \\frac{T}{\\gamma} \\right) \\right)^{1\/2}\n\\end{align*}\nwhere the last step follows from Lemma \\ref{lemma_square_regret}. Then, by Lemma \\ref{lemma_confidence}, $\\vect{\\theta}^* \\in \\mathcal{C}_t$ for all $t \\leq T$ with probability at least $1 - 2 \\delta$. 
Therefore, the bound holds true with probability at least $1 - 2 \\delta$.\n\\end{proof}\n\n\\begin{corollary}\nLetting $\\delta = \\mathcal{O} ((dT)^{-1})$, $\\alpha = \\mathcal{O} ((dT)^{-1})$ and $\\gamma = 1$ results in a regret bound that satisfies\n\\begin{equation}\n \\mathcal{R}(T, \\pi) = \\widetilde{\\mathcal{O}} \\left( \\sqrt{\\eta^2 d \\log \\left(\\mathcal{N}(\\mathcal{F}, T^{-1}, \\| \\cdot \\|_{2})\\right) T} \\right)\n\\end{equation}\n\\label{corr_regret_order}\n\\end{corollary}\n\n\\begin{proof}\nBy Theorem \\ref{thm_regret}, the stated bound holds with probability at least $1 - 2\\delta$, while the regret never exceeds $BdT$; hence, in expectation,\n\\begin{equation*}\n \\mathcal{R}(T, \\pi) \\leq \\sqrt{ 8 d \\beta_T^* (\\delta, \\alpha, \\gamma) T \\log \\left(1 + \\frac{T}{\\gamma} \\right) } + 2 \\delta BdT\n\\end{equation*}\n\nLetting $\\delta = \\mathcal{O} (T^{-1})$, $\\alpha = \\mathcal{O} (T^{-1})$ and $\\gamma = 1$,\n\\begin{equation*}\n \\mathcal{R}(T, \\pi) = \\widetilde{\\mathcal{O}} \\left( \\sqrt{d \\beta_T^* (T^{-1}, T^{-1}, 1) T} \\right)\n\\end{equation*}\n\nNoting that $\\beta_T^* (T^{-1}, T^{-1}, 1) = \\widetilde{\\mathcal{O}} \\left(\\eta^2 \\log \\left(\\mathcal{N}(\\mathcal{F}, T^{-1}, \\| \\cdot \\|_{2})\\right) \\right)$, the proof is complete.\n\n\\end{proof}\n\n\\section{Proofs for OFU-based Allocations}\n\\label{pf_allocation}\n\n\\subsection{Proof of Theorem \\ref{thm_alloc_regret}}\n\nIn the allocation problem, the mean rewards of the arms are given in the matrix $\\vect{\\Theta} \\in \\mathbb{R}^{N \\times M}$. Consider setting $\\vect{\\theta} = \\text{vec}({\\vect{\\Theta}})$ as the mean reward vector for the problem described in Appendix Section \\ref{SCMAB}. Noting that $d = NM$ and $\\|\\cdot\\|_\\text{F} = \\|\\text{vec}(\\cdot)\\|_2$, the proof becomes a direct extension of Theorem \\ref{thm_regret}.\n\n\\subsection{Proof of Theorem \\ref{low_rank_regret_thm}}\n\nThe proof is a direct extension of Corollary \\ref{corr_regret_order} where the covering number is replaced with the following upper bound given for the choice of $\\mathcal{L}$ defined in equation \\eqref{low_l}. We provide the upper bound for the covering number of $\\mathcal{L}$ in the following lemma.\n\n\\begin{lemma}[Covering number for low-rank matrices]\nThe covering number of $\\mathcal{L}$ given in \\eqref{low_l} obeys\n\\begin{equation}\n \\log \\mathcal{N}(\\mathcal{L}, \\alpha, \\| \\cdot \\|_{\\text{F}}) \\leq (N + M + 1) R \\log \\left( \\frac{9B \\sqrt{NM}}{\\alpha} \\right)\n\\end{equation}\n\\label{lemma_covering}\n\\end{lemma}\n\n\\begin{proof}\nThis proof is modified from \\cite{candes_2011}. Let $\\mathcal{S} = \\{ \\vect{\\Theta} \\in \\mathbb{R}^{N \\times M} : \\text{rank}(\\vect{\\Theta}) \\leq R, \\|\\vect{\\Theta}\\|_\\text{F} \\leq 1\\}$. We will first show that there exists an $\\epsilon$-net $\\mathcal{S}^\\epsilon$ for the Frobenius norm obeying \n\\begin{equation*}\n |\\mathcal{S}^\\epsilon| \\leq \\left(9 \/ \\epsilon \\right)^{(N+M+1)R}\n\\end{equation*} \n\nFor any $\\vect{\\Theta} \\in \\mathcal{S}$, singular value decomposition gives $\\vect{\\Theta} = \\vect{U} \\vect{\\Sigma} \\vect{V}^\\textrm{T}$, where $\\|\\vect{\\Sigma}\\|_{\\text{F}} \\leq 1$. We will construct an $\\epsilon$-net for $\\mathcal{S}$ by covering the set of permissible $\\vect{U}$, $\\vect{\\Sigma}$ and $\\vect{V}$. Let $\\mathcal{D}$ be the set of diagonal matrices with nonnegative diagonal entries and Frobenius norm less than or equal to one.
We take $\\mathcal{D}^{\\epsilon\/3}$ to be an $\\epsilon\/3$-net for $\\mathcal{D}$ with $|\\mathcal{D}^{\\epsilon\/3}| \\leq (9\/\\epsilon)^R$. Next, let $\\mathcal{O}_{N,R} = \\{\\vect{U} \\in \\mathbb{R}^{N \\times R} : \\vect{U}^\\textrm{T} \\vect{U} = \\vect{I} \\}$. To cover $\\mathcal{O}_{N,R}$, we use the $\\|\\cdot\\|_{1,2}$ norm defined as\n\\begin{equation*}\n \\|\\vect{U}\\|_{1,2} = \\max_{i} \\|\\vect{u_i}\\|_{\\ell_2}\n\\end{equation*}\nwhere $\\vect{u_i}$ denotes the $i$th column of $\\vect{U}$. Let $\\mathcal{Q}_{N, R} = \\{\\vect{U} \\in \\mathbb{R}^{N \\times R} : \\|\\vect{U}\\|_{1,2} \\leq 1\\}$. It is easy to see that $\\mathcal{O}_{{N, R}} \\subset \\mathcal{Q}_{{N, R}}$ since the columns of any $\\vect{U} \\in \\mathcal{O}_{N,R}$ have unit norm. A standard volume argument then shows that there is an $\\epsilon\/3$-net $\\mathcal{O}_{{N, R}}^{\\epsilon\/3}$ for $\\mathcal{O}_{{N, R}}$ in this norm obeying $|\\mathcal{O}_{{N, R}}^{\\epsilon\/3}| \\leq (9\/\\epsilon)^{NR}$. Similarly, let $\\mathcal{P}_{M,R} = \\{\\vect{V} \\in \\mathbb{R}^{M \\times R} : \\vect{V}^\\textrm{T} \\vect{V} = \\vect{I} \\}$. By the same argument, there is an $\\epsilon\/3$-net $\\mathcal{P}_{M, R}^{\\epsilon\/3}$ for $\\mathcal{P}_{M, R}$ obeying $|\\mathcal{P}_{{M, R}}^{\\epsilon\/3}| \\leq (9\/\\epsilon)^{MR}$. We now let $\\mathcal{S}^\\epsilon = \\{ \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\bar{\\vect{V}}^\\textrm{T} : \\bar{\\vect{U}} \\in \\mathcal{O}_{{N, R}}^{\\epsilon\/3}, \\bar{\\vect{V}} \\in \\mathcal{P}_{{M, R}}^{\\epsilon\/3}, \\bar{\\vect{\\Sigma}} \\in \\mathcal{D}^{\\epsilon\/3}\\}$, and remark that $|\\mathcal{S}^\\epsilon| \\leq |\\mathcal{O}_{{N, R}}^{\\epsilon\/3}| |\\mathcal{P}_{{M, R}}^{\\epsilon\/3}| |\\mathcal{D}^{\\epsilon\/3}| \\leq (9\/\\epsilon)^{(N+M+1)R}$. It remains to show that for all $\\vect{\\Theta} \\in \\mathcal{S}$, there exists $\\bar{\\vect{\\Theta}} \\in \\mathcal{S}^\\epsilon$ with $\\|\\vect{\\Theta} - \\bar{\\vect{\\Theta}}\\|_\\text{F} \\leq \\epsilon$.\n\nFix $\\vect{\\Theta} \\in \\mathcal{S}$ and decompose it as $\\vect{\\Theta} = \\vect{U} \\vect{\\Sigma} \\vect{V}^\\textrm{T}$. Then, there exists $\\bar{\\vect{\\Theta}} = \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\bar{\\vect{V}}^\\textrm{T} \\in \\mathcal{S}^\\epsilon$ with $\\bar{\\vect{U}} \\in \\mathcal{O}_{{N, R}}^{\\epsilon\/3}, \\bar{\\vect{V}} \\in \\mathcal{P}_{{M, R}}^{\\epsilon\/3}, \\bar{\\vect{\\Sigma}} \\in \\mathcal{D}^{\\epsilon\/3}$ satisfying $\\|\\vect{U} - \\bar{\\vect{U}}\\|_{1, 2} \\leq \\epsilon\/3$, $\\|\\vect{V} - \\bar{\\vect{V}}\\|_{1, 2} \\leq \\epsilon\/3$ and $\\|\\vect{\\Sigma} - \\bar{\\vect{\\Sigma}}\\|_{\\text{F}} \\leq \\epsilon\/3$.
This gives \n\\begin{align*}\n \\|\\vect{\\Theta} - \\bar{\\vect{\\Theta}}\\|_{\\text{F}} &= \\|\\vect{U} \\vect{\\Sigma} \\vect{V}^\\textrm{T} - \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\bar{\\vect{V}}^\\textrm{T}\\|_{\\text{F}}\\\\\n &= \\|\\vect{U} \\vect{\\Sigma} \\vect{V}^\\textrm{T} - \\bar{\\vect{U}} \\vect{\\Sigma} \\vect{V}^\\textrm{T} + \\bar{\\vect{U}} \\vect{\\Sigma} \\vect{V}^\\textrm{T} - \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\vect{V}^\\textrm{T} + \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\vect{V}^\\textrm{T} - \\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} \\bar{\\vect{V}}^\\textrm{T}\\|_{\\text{F}} \\\\\n &\\leq \\|(\\vect{U} - \\bar{\\vect{U}}) \\vect{\\Sigma} \\vect{V}^\\textrm{T}\\|_{\\text{F}} + \\|\\bar{\\vect{U}} (\\vect{\\Sigma} - \\bar{\\vect{\\Sigma}}) \\vect{V}^\\textrm{T}\\|_{\\text{F}} + \\|\\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} (\\vect{V} - \\bar{\\vect{V}})^\\textrm{T} \\|_{\\text{F}}\n\\end{align*}\n\nFor the first term, since $\\vect{V}$ has orthonormal columns,\n\\begin{align*}\n \\|(\\vect{U} - \\bar{\\vect{U}}) \\vect{\\Sigma} \\vect{V}^\\textrm{T}\\|^2_{\\text{F}} &= \\|(\\vect{U} - \\bar{\\vect{U}}) \\vect{\\Sigma} \\|^2_{\\text{F}} \\\\\n &\\leq \\|\\vect{\\Sigma}\\|^2_{\\text{F}} \\|\\vect{U} - \\bar{\\vect{U}} \\|^2_{1, 2} \\leq (\\epsilon\/3)^2\n\\end{align*}\nBy the same argument, $\\|\\bar{\\vect{U}} \\bar{\\vect{\\Sigma}} (\\vect{V} - \\bar{\\vect{V}})^\\textrm{T} \\|_{\\text{F}} \\leq \\epsilon\/3$ as well. Lastly, $\\|\\bar{\\vect{U}} (\\vect{\\Sigma} - \\bar{\\vect{\\Sigma}}) \\vect{V}^\\textrm{T}\\|_{\\text{F}} = \\|\\vect{\\Sigma} - \\bar{\\vect{\\Sigma}}\\|_{\\text{F}} \\leq \\epsilon \/ 3$. Therefore, $\\|\\vect{\\Theta} - \\bar{\\vect{\\Theta}}\\|_{\\text{F}} \\leq \\epsilon$, showing that $\\mathcal{S}^\\epsilon$ is an $\\epsilon$-net for $\\mathcal{S}$ with respect to the Frobenius norm.\n\nNext, we will construct an $\\alpha$-net for $\\mathcal{L}$ given in equation \\eqref{low_l}. Let $\\kappa = B\\sqrt{NM}$. We start by noting that for all $\\vect{\\Theta} \\in \\mathcal{L}$, the Frobenius norm obeys $\\|\\vect{\\Theta}\\|_\\text{F} \\leq \\kappa$. Then, define $\\vect{X} = \\frac{1}{\\kappa} \\vect{\\Theta} \\in \\mathcal{S}$ and $\\mathcal{L}^\\alpha := \\left\\{ \\kappa \\bar{\\vect{X}} : \\bar{\\vect{X}} \\in \\mathcal{S}^\\epsilon \\right\\}$. We previously showed that for any $\\vect{X} \\in \\mathcal{S}$, there exists $\\bar{\\vect{X}} \\in \\mathcal{S}^\\epsilon$ such that $\\|\\vect{X} - \\bar{\\vect{X}}\\|_\\text{F} \\leq \\epsilon$. Therefore, for any $\\vect{\\Theta} \\in \\mathcal{L}$, there exists $\\bar{\\vect{\\Theta}} = \\kappa \\bar{\\vect{X}} \\in \\mathcal{L}^\\alpha$ such that $\\|\\vect{\\Theta} - \\bar{\\vect{\\Theta}}\\|_\\text{F} \\leq \\kappa \\epsilon$. Setting $\\epsilon = \\alpha \/ \\kappa$, we obtain that $\\mathcal{L}^\\alpha$ is an $\\alpha$-net for $\\mathcal{L}$ with respect to the Frobenius norm. Finally, the size of $\\mathcal{L}^\\alpha$ obeys\n\\begin{equation*}\n |\\mathcal{L}^\\alpha| = |\\mathcal{S}^{\\alpha \/ \\kappa}| \\leq \\left(9 \\kappa \/ \\alpha \\right)^{(N+M+1)R}\n\\end{equation*} \nThis completes the proof.\n\\end{proof}\n\n\\section{Additional Experimental Details}\n\\label{sect_additional_exp}\n\nAll experiments are implemented in Python and carried out on a machine with a 2.3GHz 8-core Intel Core i9 CPU and 16GB of RAM.
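\nFor concreteness, the per-round allocation step can be viewed as a small integer program over binary user-item assignment variables. The snippet below is a minimal illustrative sketch of this step only; the solver choice (PuLP with its bundled CBC backend) and the names \\texttt{theta\\_ucb}, \\texttt{caps} and \\texttt{dems} are assumptions made for this example and do not reflect the exact interface of our implementation.\n\\begin{verbatim}\n# Illustrative sketch: one round's allocation as an integer program.\n# theta_ucb[u][i]: optimistic (UCB-style) estimate of the mean reward of\n# allocating item i to user u; caps[i] = c_{t,i}; dems[u] = d_{t,u}.\nimport pulp\n\ndef allocate(theta_ucb, caps, dems):\n    N, M = len(theta_ucb), len(theta_ucb[0])\n    prob = pulp.LpProblem('allocation', pulp.LpMaximize)\n    x = pulp.LpVariable.dicts('x', (range(N), range(M)), cat='Binary')\n    # objective: total optimistic reward of the selected user-item pairs\n    prob += pulp.lpSum(theta_ucb[u][i] * x[u][i]\n                       for u in range(N) for i in range(M))\n    for i in range(M):  # each item is allocated to at most caps[i] users\n        prob += pulp.lpSum(x[u][i] for u in range(N)) <= caps[i]\n    for u in range(N):  # each user receives at most dems[u] items\n        prob += pulp.lpSum(x[u][i] for i in range(M)) <= dems[u]\n    prob.solve(pulp.PULP_CBC_CMD(msg=False))\n    return [[int(x[u][i].varValue) for i in range(M)] for u in range(N)]\n\\end{verbatim}\nSince the constraint matrix of this transportation-type program is totally unimodular, its LP relaxation has integral optimal vertex solutions, which keeps the allocation step tractable at the problem sizes considered here.\n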
We solve the allocation integer program \\eqref{integer_num} using large-scale mixed integer programming (MIP) solver packages to have efficient computations.\n\n\\textbf{Parameter setup}:\n\\begin{itemize}[labelindent= 0pt, align= left, labelsep=0.4em, leftmargin=*]\n\\item In synthetic data, $\\vect{\\Theta}^*$ is scaled such that $B = 10$.\n\\item Standard deviation of the rewards: $\\eta = 1$.\n\\item In the static setting, $d_{t, u} = 1$ for all $u \\in [N]$.\n\\item In the dynamic setting, $d_{t, u} = 1$ with probability $0.2$, $0$ otherwise, independently for each $u \\in [N]$.\n\\item $C_\\text{max} = \\text{ceil} \\left( \\frac{3}{M} \\sum_{u = 1}^{N} d_{t, u} \\right)$.\n\\item $c_{t, i}$ are uniformly sampled over $\\{1, \\dots, C_\\text{max}\\}$ independently for each $t \\in [T]$ and $i \\in [M]$.\n\n\\end{itemize}\n\nTo complement our discussion on the importance of capacity-aware recommendations, Figure \\ref{fig_cap} shows the effects of having stricter capacity constraints. When $C_\\text{max}$ is low (the capacity constraints are stricter), we see that the performance gap between our proposed method and ICF2 is larger. Therefore, it is more crucial to employ capacity-aware mechanisms in settings with tighter capacity constraints. \n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=0.4\\textwidth]{plots\/capacity_effect.jpeg}\n\\caption{Normalized cumulative reward obtained in $T = 300$ rounds for different choices of $C_{\\text{max}}$ (normalized by the cumulative reward of optimal allocations). Synthetic data in a static setting with $N = 400$, $M = 200$, $R = 10$. For each data point, the experiments are run on $20$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{fig_cap}\n\\end{figure}\n\nIn Figures \\ref{exp_1}, \\ref{exp_2}, \\ref{exp_3} and \\ref{exp_4}, we provide detailed results for different experimental settings described in Section \\ref{sect_exp}. Reward indicates the instantaneous reward obtained in each iteration, regret is the gap between the reward of the optimum allocation and the allocation achieved by the algorithm. Cumulative regret (defined in \\eqref{cumulative_regret}) is the cumulative sum of instantaneous regrets up to iteration $t$. The average cumulative regret is obtained by normalizing the cumulative regret with $1 \/ t$.\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=\\textwidth]{plots\/experiment_1.jpeg}\n\\caption{Experimental results for synthetic data in a static setting with $N = 800$, $M = 400$, $R = 20$. The experiments are run on $10$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{exp_1}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=\\textwidth]{plots\/experiment_2.jpeg}\n\\caption{Experimental results for synthetic data in a dynamic setting with $N = 1000$, $M = 150$, $R = 20$, probability of activity $0.2$. The experiments are run on $10$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{exp_2}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=\\textwidth]{plots\/experiment_3.jpeg}\n\\caption{Experimental results for Restaurant-Customer data in a static setting. 
The experiments are run on $10$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{exp_3}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=\\textwidth]{plots\/experiment_4.jpeg}\n\\caption{Experimental results for MovieLens 100k data in a static setting. The experiments are run on $10$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{exp_4}\n\\end{figure}\n\\section*{Checklist}\n\n\\begin{enumerate}\n\n\\item For all authors...\n\\begin{enumerate}\n \\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?\n \\answerYes{We pose the problem of making recommendations that will achieve optimal allocations in markets with constraints. Then, we formulate this problem as a structured combinatorial bandit and propose a no-regret algorithm.}\n \\item Did you describe the limitations of your work?\n \\answerYes{In Section \\ref{learning_opt_alloc}, we describe that we are studying the posed problem of interactive recommendations for optimum allocations under the assumptions that facilitate our theoretical analysis. In Section \\ref{sect_lrcb}, we indicate that one possible approach to improve upon our theoretical analysis might be by assuming a problem setting where at most some specific number of the arms can be played in each round.}\n \\item Did you discuss any potential negative societal impacts of your work?\n \\answerNA{}\n \\item Have you read the ethics review guidelines and ensured that your paper conforms to them?\n \\answerYes{}\n\\end{enumerate}\n\n\n\\item If you are including theoretical results...\n\\begin{enumerate}\n \\item Did you state the full set of assumptions of all theoretical results?\n \\answerYes{See Assumptions \\ref{rew_assumptio} and \\ref{low_assum} in Section \\ref{sect_methodology}.}\n \\item Did you include complete proofs of all theoretical results?\n \\answerYes{See Appendix \\ref{appendix_num}, \\ref{SCMAB}, \\ref{pf_conf_sets}, \\ref{pf_regrets} and \\ref{pf_allocation}.}\n\\end{enumerate}\n\n\\item If you ran experiments...\n\\begin{enumerate}\n \\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n \\answerYes{The code and data for the simulations are provided in the supplemental material.}\n \\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n \\answerYes{The hyperparameters of experimental settings are given in Appendix \\ref{sect_additional_exp} as well as the captions of Figures \\ref{fig_regrets} and \\ref{fig_cap}. 
Further implementation details can be found in the code.}\n \\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n \\answerYes{See Figure \\ref{fig_regrets} and other figures in Appendix \\ref{sect_additional_exp}.}\n \\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n \\answerYes{See Appendix \\ref{sect_additional_exp}.}\n\\end{enumerate}\n\n\\item If you are using existing assets (e.g., code, data, models) or curating\/releasing new assets...\n\\begin{enumerate}\n \\item If your work uses existing assets, did you cite the creators?\n \\answerYes{The datasets are cited appropriately.}\n \\item Did you mention the license of the assets?\n \\answerNA{}\n \\item Did you include any new assets either in the supplemental material or as a URL?\n \\answerYes{The code and data for the simulations are provided in the supplemental material.}\n \\item Did you discuss whether and how consent was obtained from people whose data you're using\/curating?\n \\answerNA{}\n \\item Did you discuss whether the data you are using\/curating contains personally identifiable information or offensive content?\n \\answerNA{}\n\\end{enumerate}\n\n\n\\item If you used crowdsourcing or conducted research with human subjects...\n\\begin{enumerate}\n \\item Did you include the full text of instructions given to participants and screenshots, if applicable?\n \\answerNA{}\n \\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n \\answerNA{}\n \\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n \\answerNA{}\n\\end{enumerate}\n\n\n\\end{enumerate}\n\n\n\n\n\\section{Conclusion and future directions}\n\n\nIn this paper, we have studied the setting of interactive\nrecommendations that achieve socially optimal allocations under capacity constraints. We have formulated the problem as a low-rank combinatorial multi-armed bandit and proposed an algorithm that enjoys low regret. Building on the ideas founded in this work, we aim to pursue joint recommendation and pricing mechanisms that will achieve optimal allocations in the general problem setting with users actively reacting to the recommendations based on the prices determined by the provider. We believe that this is a practically-relevant question to be resolved because it allows for design of many interesting real-world recommendation applications for settings with associated markets. \n\n\\section{Discussion}\n\\label{sec:discussion}\n\nDescribe how recommendations have become an integral part of our socioeconomic life.\nDescribe how the current implementations miss the key factor of limited service capacities.\nUse examples such as restaurant suggestions, hospital suggestions, supply chain recommendations and business decisions.\nOur goal is to design a recommendation system that is aware of market conditions, service capacities, etc.\n\n\nEl Farol Bar, Kolkata Paisa Restaurant problem, etc (mainly statistical methods in this literature).\nLearning related works: Include the paper found by Efe related to such works, paper by Jordan, etc. \nShow what is missing in each one of them. 
\nFor example, pricing, or capacity constraints, or utilities, etc.\nKelly handles many of these aspects but lacks learning efficiency.\nFor example, there is a huge communication overhead involved in signalling between the network and the users.\n\n\nWe consider the setting of online food delivery as an example to demonstrate our approach.\nConsider a platform that allocates diners to restaurants.\n(Doordash and UberEats are prominent examples of such platforms today.)\nDiners come to the platform seeking to place orders.\nThere are several restaurants registered with the platform ready to serve. \nBesides, there are people who deliver the food from the restaurants to the diners.\nOther possible participants include the stakeholders, regulatory agencies, worker unions, etc.\nThis classification is useful to characterize the relevant attributes that are related to the different components in the network and how they connect with each other. \n\nHere is an example of the relevant attributes and their inter-connectedness.\nThe diners have preferences over the different restaurants.\nAlong with the food choice, the diners also care about the delay in their service, i.e.\nthe time it takes for them to receive their order.\nPreference over restaurants and delays is an important attribute of the diners.\nThese preferences are their private knowledge which might be hard to learn with limited communication.\nBesides, the diners might also be unwilling to report these truthfully.\nWe allow different diners to have different preferences, but they all belong to the same class of having preferences over restaurants and delays.\nOn the other hand, the restaurants have different serving capacities.\nThe number of diners that are being served \nby any given restaurant affects the delay of the diners served by that restaurant \nand in turn the experience of these diners.\n\n\nThere are several more aspects to this problem, for example, the availability of delivery people affects the delays.\nThe delivery people will have different preferences over where they wish to travel in order to deliver.\nFor example, at the end of the day a delivery person would prefer to deliver at a place closer to her home.\nThe restaurants can also have preferences over their idle times, for example, different restaurants would care differently about the lack of orders.\nFor example, restaurants that have online food delivery as their main source of revenue will care more whereas restaurants that rely more on foot traffic to their restaurants will care less.\nBesides, it is a dynamic setting since the availability of delivery people, the preferences of the agents in the network, and the serving capacities of the restaurants are all changing with time.\nWe will take a market-based approach to solve this problem.\n\n\nIn this paper, however, we will restrict our attention to serving the diners to the best extent possible given the constraints coming from the limited serving capacities of the restaurants and the availability of delivery persons.\nWe will only care about the preferences of the diners and treat the other components of the network as capacity constraints.\nWe will relegate the more general problem incorporating the preferences of the other components in the network to future papers.\n\n\\subsection{Notes from Intro}\n\n\n\n\nOur approach is based on integrating methods from network economics (Kelly decentralized network framework) with collaborative filtering (for example, matrix factorization or latent
variable models) and bandit learning (for example, UCB or Thompson sampling). \nOur framework can be thought of as an extension of the market-based methods in \nnetwork resource allocation settings with the additional feature of practical limitations in collecting the required information.\nThis requires striking a balance between opportunistically learning the unknown preferences and allocating the limited items.\n\n\nIntuition:\nIn market settings prices naturally adjust to the demand and the preferences of the agents so that \nwe allocate items to the agents that prefer them the most.\nKelly proposed a framework in the context of bandwidth allocation over the Internet.\nThe system consists of users with preferences and limited resources that can be used to serve the users.\nThe users' preferences are modeled as utility functions over their outcomes,\nand the outcomes are a function of the resource allocation in the system.\nThe problem is formulated as a social welfare maximization one (or equivalently customer satisfaction maximization) subject to the capacity constraints in the system.\nOften, the system operator (platform) does not know the user preferences, only the users know their own preferences (or at least have an idea about them).\nOn the other hand, the users do not know about the system capacity constraints and the system operator is best placed to collect this information.\n\n\n\nIn theory, the above framework should solve our problem.\nHowever, in the framework proposed by Kelly, \nwe would require the agents to respond constantly with their updated bids for different items.\nGiven how large the number of available items can be, \nit is impractical to get such responses from the agents.\nAssuming that the agents' preferences are related to each other through a hidden low-dimensional structure,\nit is natural to use methods from collaborative filtering to resolve this problem.\n\n\n\\subsection{Extending to dynamic settings}\n\nLet $[n] := \\{1, 2, \\dots, n\\}$ be the set of diners, with a typical diner denoted by $i$.\nLet $[m] := \\{1, 2, \\dots, m\\}$ be the set of restaurants, with a typical restaurant denoted by $r$.\nFor each diner-restaurant pair, let $u_{i,r}:\\bbR^+ \\to \\bbR^+$ be the utility function, where $u_{i,r}(\\tau)$ denotes the utility diner $i$ gets from ordering from restaurant $r$ with a service delay of time $\\tau > 0$.\n\n\nAs in the serverless pricing paper, we will discretize the delay times into several tiers or levels.\nLet $[l] := \\{1, 2, \\dots, l\\}$ be the set of delay tiers with a typical tier denoted by $t$.\nFor example, let tier $1$ correspond to a delay of at most $10$ mins, tier $2$ correspond to a delay of more than $10$ mins and less than $30$ mins, tier $3$ correspond to a delay of more than $30$ mins and less than $60$ mins, and so on.\nNote that the intervals in the different tiers can be of different lengths.\nOur framework is general enough to model any of these.\nThe larger the number of intervals, the higher the precision we have over the utility functions, but the harder the computational problem becomes, leading to greater estimation and approximation errors.\nThus, the design of tiers is an important practical problem and will depend on the particular setting and tradeoff concerns.\nWe will assume that we have 6 tiers, each with a length of $10$ mins.\nThus, the maximum delay that we allow is $1$ hr, beyond which we assume that the utility is $0$.\n\n(Include a diagram of utility function and values corresponding to tiers.)\n\n\nLet $U$ be
an $n \\times m \\times l$ tensor, where $U(i,r,t)$ is the utility of diner $i$ if she receives her order from restaurant $r$ within the delay tier $t$.\nThis models the preferences of the diners.\nLet us now model the system constraints.\nWe will first have a stylized model for the system, but then relax it later.\nLet $\\mu_r$ denote the average service rate of restaurant $r$, i.e. if $\\mu_r = 1$ serving\/min then restaurant $r$ can cook 1 serving every min on average.\nGiven the current availability of delivery persons, let $\\delta_{i,r}$ denote the time it takes to deliver the order from restaurant $r$ to diner $i$.\nLet us assume that this is also fixed for the moment, for example, say we use the average delay in delivery time.\n(The general RL framework will be stronger and be able to adapt to changing delivery times and restaurant serving rates. In that case we won't assume that we know these values a priori, but the RL framework will learn and predict these values in an online fashion.)\n\n\n\nI will now describe the dynamic aspect of this problem.\nWe envision the scheduler to operate periodically, say every $1$ min.\nLet us discretize the time into $1$ min intervals starting at time $s = 0$.\nAt any instant, the scheduler maintains a queue of orders that are yet to be scheduled.\nTo elaborate, diners come to our platform and enter their order size. \nThe diners who are currently looking to order from our platform will be called active diners.\nLet $A(s) \\subset [n]$ denote the set of active diners at step $s$.\nEach diner $i$, once she comes to the platform, inputs her order size $J_i$, the amount of food she wishes to order.\nThus, for each active diner, i.e. $i \\in A(s)$, we know the order size $J_i$.\n\n\nOn each operation of the scheduler in step $s$, it comes up with a matching restaurant for each diner, a corresponding delay in service, and the corresponding cost for the order.\n(I will soon describe how the scheduler comes up with these allocations and pricing, but it is similar to the static scheduler in the serverless pricing paper.)\nEach active diner observes this suggestion and either accepts it or rejects it.\nIf the diner accepts the suggestion, she is scheduled for that order and removed from the queue and is not an active diner anymore.\n(I will use the queue and the set of active diners interchangeably, since they are identical in this framework.)\nIf the diner does not accept the suggestion, she is kept in the queue and will get a new suggestion in the next operation of the scheduler.\n(Possible considerations for practical reasons: we don't want to make the diners wait for $1$ min to come up with a new suggestion.
We can consider providing multiple suggestions in each iteration to each diner, or a scheduler that operates at a faster time-scale.)\n\n\nLet the amount of food allocated to user $i$ from restaurant $r$ in tier $t$ by the scheduler be denoted by $x(i,r,t)$.\nThe static problem that the scheduler solves at every operation can be stated as follows:\n\n\\begin{align}\n\\text{\\bf\\underline{SYS}} \\nn \\\\\n\\maxi_{x(i,r,t)\\geq 0} ~~~~~& \\sum_{i=1}^n U(i,R_i,T_i)\\nn \\\\\n\\text{subject to} ~~~~~& \\sum_{i = 1}^n x(i,r,t) \\leq M(r,t), \\forall~ r \\in [m], t \\in [l], \\text{ and }\\label{eq: sys_capacity}\\\\\n& R_i, T_i \\in \\left\\{(r,t) \\in [m]\\times[l]: \\sum_{s = 1}^t x(i,r,s) \\geq J_i\\right\\} ~\\forall~ i \\in [n]. \\label{eq: sys_T_def}\n\\end{align}\n\nHere $M(r,t)$ denotes the serving capacity of restaurant $r$ within delay tier $t$.\n\nLet\n\\[\n\tF(i,r,t) = \\frac{U(i,r,t)}{J_i}.\n\\]\nWe relax the above system problem as follows:\n\n\\begin{align}\n\\text{\\bf \\underline{SYS-LP}}\\nn\\\\\n\\maxi_{x(i,r,t)\\geq 0} ~~~~~&\\sum_{i=1}^n \\sum_{r=1}^m \\sum_{t=1}^l x(i,r,t)F(i,r,t)\\nn \\\\\n\\text{subject to} ~~~~~&\\sum_{t=1}^l \\sum_{r = 1}^m x(i,r,t) \\leq J_i, ~\\forall ~i \\in [n], \\label{tot_job_cons}\\\\\n&\\sum_{i = 1}^n x(i,r,t) \\leq M(r,t), ~\\forall~ r \\in [m], t \\in [l]. \\label{tot_resource_cons}\n\\end{align}\n\n\nThis is an LP problem and can be solved efficiently.\n(By performing sparsity analysis we should show that for most of the users their orders come from a single restaurant in a single tier. \nUse this to provide a bound for the relaxation.)\nFurther, we can use Kelly decomposition on this relaxed system problem to obtain a decentralized version for solving this problem.\nIt also gives rise to price discovery $p(i,r,t)$, which is the market equilibrium price for user $i$ to receive her food from restaurant $r$ within tier $t$.\n\n\n\\section{Experiments}\n\\label{sect_exp}\n\nIn this section, we demonstrate the efficiency of our proposed algorithm by conducting an experimental study over both synthetic and real-world datasets. The goal of our experimental evaluation is twofold: (i) evaluate our algorithm for making online recommendations and allocations in various market settings and (ii) understand the qualitative performance and intuition of our algorithm.\n\n\\textbf{Baseline algorithms:} We demonstrate the performance of our method by comparing it with baseline algorithms. To the best of our knowledge, there are no current approaches specifically designed to make interactive recommendations and allocations considering the capacity constraints. Therefore, based on currently available algorithms, we construct our baselines from methods that are designed for similar goals:\n\\begin{enumerate}[nosep, labelindent= 0pt, align= left, labelsep=0.4em, leftmargin=*]\n \\item \\textbf{ACF:} (Allocations with Collaborative Filtering) It solves the least squares problem \\eqref{least_squares_estimate_low} to estimate the mean rewards obtained from user-item allocation pairs. Then, it makes the best allocation with respect to the estimated parameters at each round.\n \\item \\textbf{CUCB:} It runs the Combinatorial-UCB algorithm \\cite{chen_2013} to decide on allocations without assuming any low-rank structure between the users and items. It views the user-item allocation pairs as arms that have no correlation between them.
In every round, it pulls some subset of the arms according to the capacity and demand constraints.\n \\item \\textbf{ICF:} It runs the Interactive Collaborative Filtering algorithm with linear UCB \\cite{zhao_2013} without considering the capacity constraints. For each user, the algorithm recommends the items from which it estimates the user will obtain the most reward. Since this method does not consider the capacities, the recommendations do not necessarily satisfy the capacity constraints. Therefore, if an item is recommended to more users than its capacity, we assume that only a randomly chosen subset of the assigned users are able to get the item. The users that are not able to get the item do not send any reward feedback to the system.\n \\item \\textbf{ICF2:} It is the same as the ICF method described above, except that the algorithm observes a zero reward ($R_{t, u, i} = 0$) for all the user-item allocations that were not successfully achieved. As a result of the low rewards obtained from allocations that lead to capacity violations, the algorithm learns to avoid violating the capacities.\n\\end{enumerate}\n\n\n\\textbf{Experimental setup and datasets:} We use a synthetic dataset and two real-world datasets to evaluate our approach. For the synthetic data, we generate an (approximately) low-rank random matrix $\\vect{\\Theta}^* \\in \\mathbb{R}^{N \\times M}$ with entries from $[0, B]$. For the real-world data, we consider the following publicly available datasets: Movielens 100k \\cite{harper_2015}, which includes ratings from 943 users on 1682 movies, and the RC (Restaurant and Consumer) dataset \\cite{blanca_2011}, which includes ratings from 138 users on 130 restaurants. As capacity information is not provided in the considered datasets, nor, to the best of our knowledge, in any publicly available recommendation dataset, we instantiate random capacities for all items as described shortly. We consider settings with static and time-varying capacities\/demands. For the static case, we assume that all users request one item at all iterations, and the capacity of each item remains unchanged with time. In the dynamic setting, we allow both the demands $\\vect{d}_t$ and capacities $\\vect{c}_t$ to vary with time $t$. At each allocation round, we consider that each entry of $\\vect{d}_t$ is independently sampled from a fixed probability distribution over $\\{0, 1\\}$. Therefore, while active users (with demand 1) are allocated at most one item, the inactive users (with demand 0) do not get allocated any item. Similarly, each entry of $\\vect{c}_t$ is independently sampled from a uniform distribution over $\\{0, 1, \\dots, C_{\\text{max}}\\}$. At each round $t$, if user $u$ is allocated item $i$, the system observes a reward with normal distribution $\\mathcal{N}(\\theta_{ui}^*, \\eta^2)$. \n \n\\textbf{Results:} We summarize our results in Figure \\ref{fig_regrets}. Further experimental details and results are left to Appendix \\ref{sect_additional_exp}. The observations can be summed up in the following points: (1) LR-COMB (our proposed approach) is able to achieve lower regret than all other baseline methods in all experimental settings. (2) Even though the ACF method performs slightly better than LR-COMB in the initial rounds, it often gets stuck at high-regret allocations, and hence cannot achieve \\emph{no-regret}.
It suffers from large regrets in the long term because it tries to directly exploit the information it acquired so far without making any deliberate exploration. Therefore, we observe the significance of employing a bandit-based approach in achieving a no-regret algorithm. (3) Since CUCB does not leverage the low-rank structure of the parameters, it needs to sample and learn about each user-item allocation pair separately. Hence, it takes much longer for it to learn the optimum allocations. (4) Since ICF does not consider the capacities while making the allocations, it ends up incurring very large regrets. Even if it is able to identify the high-reward allocation pairs via collaborative filtering, the recommendations exceed the respective capacities of the items and we cannot obtain high rewards. (5) One possible ad-hoc approach to mitigate the issues with ICF is to use ICF2, which can indirectly capture the effects of the capacities since it receives zero rewards when the items are not successfully allocated. Nevertheless, ICF2 does not directly use the knowledge of the capacities and hence remains quite suboptimal. Even though it is able to show decent performance in static settings, its performance significantly degrades when the capacities dynamically change with time. \n\n\\vspace{-6 pt}\n\\begin{figure}[ht]\n\\center\n\\includegraphics[width=\\textwidth]{plots\/regrets_all_experiments.jpeg}\n\\caption{Instantaneous regret incurred in each round in different experimental settings. From left to right: (1) synthetic data in a static setting with $N = 800$, $M = 400$, $R = 20$, (2) synthetic data in a dynamic setting with $N = 1000$, $M = 150$, $R = 20$, probability of user activity $0.2$, (3) Restaurant-Customer data in a static setting, (4) Movielens 100k data in a static setting. In all settings, the experiments are run on $10$ problem instances and means are reported together with error regions that indicate one standard deviation of uncertainty.}\n\\label{fig_regrets}\n\\end{figure}\n\n\\section{Introduction}\n\n\n\n\nOnline recommendation systems have become an integral part of our socioeconomic life with the rapid increase in online services that help users discover options matching their preferences. \nDespite providing efficient ways to discover information about the preferences of users, they have played a largely complementary role to searching and browsing with little consideration of the accompanying \\emph{markets} within which recommended items are allocated to the users. Indeed, in many real-world scenarios, recommendations bring about the \\emph{allocation} of the corresponding items in a market that possibly has intrinsic constraints. \nIn particular, recommendations of candidate items that have associated notions of limited \\emph{capacities} naturally give rise to a market setting where users compete for the allocation of the recommended items.\n\nAllocation constraints are common in recommendation contexts.\nA few interesting examples include: (1) Point-of-Interest (PoI) recommendation systems (e.g., restaurants, theme parks, hotels), where the PoI can only accommodate a limited number of visitors, (2) book recommendation systems employed by libraries, where the books recommended to the borrowers have limited copies, (3) route recommendation systems which aim to suggest the optimal road for travelling while avoiding traffic congestion, (4) course recommendation systems for universities, where each recommended course has a limited number of seats.
As similar systems become more ubiquitous and impactful in the broader aspects of daily life, there is a huge application drive and potential for delivering recommendations that respect the requirements of the market. Therefore, it is crucial to consider capacity-aware recommendation systems to maximize the user experience. \n\n\\textbf{Main Challenges: }We model the user preferences as rewards that users obtain by consuming different items, while the social welfare is the aggregate reward over the entire system comprising multiple users with heterogeneous preferences, and a provider who continually recommends items to the users and receives interactive reward feedback from them. The provider aims to maximize the social welfare while respecting the \\emph{time-varying} allocation constraints: indeed we consider system \\emph{dynamics} in terms of user demands and item capacities to be an important aspect of our problem. In the process of identifying the best match between users and target items, the provider encounters two challenges: The first challenge relates to the element of \\emph{recommendation} as the provider needs to make recommendations without exact knowledge of the user preferences ahead of time, and hence has to continue exploring user preferences while continually making recommendations. The second challenge relates to the \\emph{allocation} aspect of the problem induced by the market constraints. Note that even if matching the users with their most preferred items would result in high rewards, such an allocation may not respect the constraints of the market. For example, in a restaurant recommendation setting, if there is a hugely popular restaurant that most people love, a naive recommender would send many users to the same restaurant, causing overcrowding and considerable user dissatisfaction. \n\nThe key to overcoming the (first) challenge of making accurate recommendations is to learn the user preferences from the reward feedback. Since the preferences of different users for different items are highly correlated, it is natural to employ collaborative filtering techniques that have been widely applied in recommender systems \\cite{schafer_2007, bennett_2007, koren_2009, sarwar_01}. In order to learn the user preferences efficiently, previous works have established interactive collaborative filtering systems that query the users with well-chosen recommendations \\cite{kawale_2015, zhao_2013}. Typically, these works consider a setting where a single user arrives to the system at each round and the system makes a recommendation that will match the user's preferences. However, this assumption no longer holds in applications having an associated market structure, as recommendations made to different users in the same time period must also respect the constraints of the market.\n\nThe common strategy to tackling the (second) allocation challenge is through pricing mechanisms that ensure social optimality. Such mechanisms have been studied in economics for two-sided (supply and demand) markets and are called Walrasian auctions \\cite{smith_1991}. In the networking literature, Kelly has also used similar mechanisms to do optimal bandwidth allocation over a network \\cite{kelly_97}. In pricing-based mechanisms, the users choose the items based on their preferences as well as the posted prices. The provider meanwhile successively adjusts these prices in response to the user's demand for the items, so that capacity constraints are satisfied in equilibrium. 
The equilibrium prices ensure that the limited number of items are allocated to users that are expected to obtain the largest reward. However, these mechanisms still require the users to know and evaluate their preferences for \\emph{all} possible items and respond constantly with their updated bids\/demands for each item. This is definitely not a scalable solution for the large-scale system (comprising large numbers of users and items) that we target. Furthermore, this framework assumes that users already know their preferences for all items, which is clearly not true in our setting, where users report their preferences through feedback \\emph{after} being targeted with their recommended items. For this reason, the provider must \\emph{learn} the user preferences in its quest to perform optimal capacity-constrained allocations.\n\n\\begin{wrapfigure}{r}{0.36 \\textwidth} \n\\vspace{-7pt}\n\\includegraphics[width= 0.36 \\textwidth]{plots\/recommendation-system.png}\n\\caption{The provider interactively learns the user preferences to achieve socially optimal capacity-constrained allocations.}\n\\label{system_diagram}\n\\vspace{-8pt}\n\\end{wrapfigure}\nHence, as depicted in Figure \\ref{system_diagram}, the goal of the provider is twofold: (1) to learn the user preferences and make recommendations that will guide the users to choose the items that they are likely to obtain high rewards, (2) to achieve allocations that will satisfy the capacity constraints. To achieve these goals, we envision developing the following market-aware recommendation mechanism for the provider. By recommending items, the provider helps the users to narrow down their options so that users can comprehend and evaluate their preferences among a smaller number of offered items. In addition, being aware of the market structure, the provider carefully determines the item prices that play the role of an intermediary for satisfying the constraints of the market. We believe that this is an important and practically-relevant question to be resolved because it allows for the analysis of many interesting real-world interactive recommendation settings with market constraints. In its full generality, this framework requires us to model the user decisions in a way that will capture the effects of the recommendations and prices that they are presented. In order to avoid the complications introduced by this modelling challenge and to obtain a profound understanding of fundamental aspects of the problem, we begin with focusing our attention on a central question whose solution will be key to making progress towards our longer-term goal of developing a complete framework.\n\n\nSpecifically, we focus our study on these essential aspects of the problem: recommending and allocating the items while interactively learning the user preferences, which to the best of our knowledge has not been addressed in the literature. In essence, we analyze a special case of the mechanism introduced above, by assuming that the provider makes recommendations such that the number of presented choices matches with the number of items the user is willing to consume, so that the users obtain all of the recommended items regardless of their prices. 
Then, the provider's task reduces to deciding on high-reward allocations while satisfying the constraints by allocating each item to at most a certain number of users.\n\n\\textbf{Structured Combinatorial Multi-Armed Bandits: } The provider seeks to choose high-reward allocations subject to the constraints, while actively learning the user preferences by making queries that will give rise to the most informative responses. Therefore, it encounters the well-known \\textit{exploration-exploitation} dilemma. In essence, there exists a trade-off between two competing goals: maximizing social welfare using the historical feedback data, and gathering new information to improve the performance in the future. In the literature of interactive collaborative filtering, this dilemma is typically formulated as a multi-armed bandit problem where each arm corresponds to the allocation of an item to a user \\cite{zhao_2013, barraza_2017, wang_2019}. When an item is allocated to a user, a random reward is obtained and the reward information is fed back to the provider to improve its future allocation strategies. However, in contrast to prior works, our setting further requires that a collection of actions taken for different users satisfy the constraints of the market. \n\nWe formulate our problem as a bandit problem with arms having correlated means, and call it Structured Combinatorial Bandit. Based on the standard OFU (Optimism in the Face of Uncertainty) principle for linear bandits \\cite{dani_2008, abbasi_2011}, we devise a procedure that learns the mean reward values opportunistically so as to solve the system problem of optimal allocation with minimum regret. The estimation method benefits from both the combinatorial nature of the feedback and the dependencies induced by the low-rank structure of the collaborative filtering setting. Moreover, using matrix factorization techniques, the algorithm is efficient even at scale in settings with a large number of users and items. As is standard with OFU-based methods, our algorithm maintains a confidence set of the mean rewards for all user-item pairs. If it has less data about some user-item allocation pair, the confidence set becomes wider in the corresponding direction. Then, due to optimism, the algorithm becomes more inclined to attempt the corresponding allocation pairs to explore and collect more information. \n\n\\textbf{Our contributions: } \n\\begin{itemize}[nosep, labelindent= 0pt, align= left, labelsep=0.4em, leftmargin=*]\n \\item We formulate the problem of making recommendations that will facilitate socially optimal allocation of items with constraints. Our formulation further allows for the analysis of problem settings with dynamic (i.e., time-varying) item capacities and user demands.\n \\item We pose the Structured Combinatorial Bandit problem under generic structural assumptions (not only low-rank) and propose an algorithm that achieves sublinear regret bounds in terms of parameters that depend on the problem-specific structure of the arms.\n \\item For the recommendation setting, we specialize our results to low-rank structures and obtain a Low-Rank Combinatorial Bandit (LR-COMB) algorithm that achieves \\smash{$\\widetilde{\\mathcal{O}} ( \\sqrt{N M (N+M) RT} )$} regret in $T$ rounds for a problem with $N$ users, $M$ items and a rank-$R$ mean reward matrix.\n\\end{itemize}\n\\textbf{Experiments: } We run experiments both on synthetic and real-world datasets to show the efficacy of the proposed algorithms.
Results show that the proposed algorithm obtains significant improvements over naive approaches in solving the problem of recommendation and allocation with constraints.\n\n\\textbf{Related work: }\n\n\n\\begin{itemize}[nosep, labelindent= 0pt, align= left, labelsep=0.4em, leftmargin=*]\n\n\\item \\textbf{Combinatorial Multi-Armed Bandits (CMAB) and Semi-Bandits: } The frameworks of CMAB \\cite{chen_2013, kveton_2015} and semi-bandits \\cite{audibert_2011} model multi-armed bandit problems where the player chooses a subset of arms in each round and observes individual outcomes of the played arms. However, they do not incorporate any structural assumptions about the rewards obtained from the arms, whereas in a collaborative filtering setting like ours, the main promise is to leverage the intrinsic structure between different user-item pairs. To close this gap, we pose the problem of Structured Combinatorial Bandits and devise an algorithm that makes use of the structure of the arms as well. Additionally, the CMAB framework assumes the availability of an oracle that takes the mean rewards of the arms and outputs the optimum subset of arms subject to the selection constraints. Due to the combinatorial nature of the problem, this oracle may not be readily available in general CMAB settings. In our case, due to the special structure of the capacity constraints, we can efficiently solve for the optimum allocations given the mean rewards of the allocation pairs.\n\n\\item \\textbf{Structured Linear Bandits: } Our formulation also parallels the frameworks of structured linear bandits \\cite{johnson_2016, combes_2017} and low-rank linear bandits \\cite{lu_2021}. However, it is distinct from them in its additional ability to capture the combinatorial nature of the problem. In linear bandits, the player only observes the final total reward, but no outcome of any individual arm. Our setup differs from theirs because the player (provider) is able to observe individual outcomes of all played arms. Due to this richer observation model, we can achieve lower regret guarantees than those available in the structured linear bandit literature.\n\n\\item \\textbf{Recommendation with Capacity Constraints: } A few works have used the notion of constrained resources to model and solve the problem of recommendation with capacity constraints \\cite{christakopoulou_2017, makhijani_2019}. However, these works only consider optimizing the recommendation accuracy subject to item usage constraints, without any consideration of interactive mechanisms that discover user preferences through recommendations.\n\n\\item \\textbf{Competing Bandits in Matching Markets: } Another related line of literature studies the stable matching problem in two-sided markets \\cite{liu_2020}. The model assumes that the entities on each side of the market have preference orderings over the other side of the market, and the allocations are driven by these preference orderings rather than by prices. In contrast to our work, these mechanisms require that the entities on at least one side of the market know their preferences over all the entities on the other side. However, in many real-world recommendation and allocation settings, like the examples given above, the explicit preferences are not known ahead of time and can only be discovered through interactions. 
Furthermore, the matching markets only model one-to-one matches, meaning that they do not allow for the items to be allocated for multiple users.\n\\end{itemize}\n\\vspace{-2pt}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Problem setting}\n\\label{sec:setting}\n\nWe use bold font for vectors $\\mathbf{x}$ and matrices $\\mathbf{X}$, and calligraphic font $\\mathcal{X}$ for sets. We denote by $[K]$ the set $\\{1,2, \\dots, K\\}$.\nFor a vector $\\mathbf{x}$, we denote its $i$-th entry by $x_i$ and for a matrix $\\mathbf{X}$, we denote its $(i,j)$-th entry by $x_{ij}$. We denote the Frobenius inner product of two matrices by $\\langle \\vect{A}, \\vect{B} \\rangle = \\tr (\\vect{A}^\\mathrm{T} \\vect{B})$, and the Frobenious norm of a matrix $\\vect{A}$ by $\\|\\vect{A}\\|_\\text{F}$.\n\nSuppose the \\emph{system} has $N$ \\emph{users} and $M$ \\emph{items} in record. \nThe items are \\emph{allocated} to the users in multiple \\emph{rounds} (or \\emph{periods}) denoted by $t \\in \\mathbb{N}$. Allocation of an item $i \\in [M]$ to a user $u \\in [N]$ results in a random \\emph{reward} that has a distribution unknown to the system provider. The expected reward obtained from allocating item $i$ to user $u$ is denoted by $\\theta^*_{ui}$ and these values are collected into the mean reward matrix $\\mathbf{\\Theta}^* \\in \\mathbb{R}^{N \\times M}$. \n\nWe assume that each item has (time-varying) capacity that corresponds to the maximum number of different users it can be allocated to. We denote the capacity of item $i \\in [M]$ by $c_{t,i}$, and collect these values into vectors $\\mathbf{c}_t \\in \\mathbb{R}^{M}$. Similarly, each user has a (time-varying) demand that corresponds to the maximum number of different items it can get allocated. We denote the demand of user $u \\in [N]$ by $d_{t,u}$, and collect these values into vectors $\\mathbf{d}_t \\in \\mathbb{R}^{N}$. Therefore, each item can only be allocated to at most $c_{t,i}$ different users, while each user can only get allocated at most $d_{t,u}$ different types of items in the period $t$. We shall call these the \\emph{allocation constraints}. One can consider the special case where $d_{t,u}$ parameters only take values from $\\{0, 1\\}$ so that each \\emph{active} user gets at most one allocation while the \\emph{inactive} users do not get any allocations.\n\nLet $\\mathbf{X}_{t}$ denote the \\emph{allocation matrix} for round $t$ where the $(u, i)$-th entry is one if user $u$ is allocated item $i$ at round $t$, and zero otherwise. 
Due to the allocation constraints, any admissible $\\mathbf{X}_{t}$ must belong to the set of valid allocation matrices $\\mathcal{X}_t \\subseteq \\{0, 1\\}^{N \\times M}$ defined as:\n\\begin{equation*}\n    \\mathcal{X}_t = \\{ \\mathbf{X} \\in \\{0, 1\\}^{N \\times M} : \\mathbf{X} \\mathds{1}_M \\leq \\mathbf{d}_t \\text{ and } \\mathbf{X}^\\textrm{T} \\mathds{1}_N \\leq \\mathbf{c}_t\\}\n\\end{equation*}\n\\vspace{-1pt}\nwhere the inequalities are entry-wise and $\\mathds{1}_p$ denotes the all-ones vector of size $p$.\n\n\\subsection{Optimal allocations}\n\\label{sect_opt_allocations}\n\nGiven knowledge of the mean reward matrix $\\vect{\\Theta}^*$, the optimal allocation $\\vect{X}^*_t$ at time $t$ can be obtained by solving the integer program:\n\\begin{equation}\n    \\vect{X}^*_t \\in \\argmax_{\\vect{X} \\in \\mathcal{X}_t} \\; \\langle \\vect{X}, \\vect{\\Theta}^* \\rangle\n    \\label{integer_num}\n\\end{equation}\nThis integer program can be relaxed to a linear program by dropping the integrality constraints (setting $0 \\leq x_{ui} \\leq 1$). In Appendix \\ref{appendix_num}, we show that the integrality gap of this problem is zero.\\footnote{The integrality gap is the difference between the optimal values of the integer program and its linear relaxation.} Hence, any integer optimal solution of the relaxed problem is also an optimal solution of the allocation problem. \n\nWhen the provider does not have direct knowledge of the mean rewards associated with user-item allocation pairs, one standard approach is to employ pricing mechanisms \\cite{smith_1991, kelly_97}. \nThe idea is to apply dual decomposition to the (partial) Lagrangian function $ L( \\vect{X}, \\vect{\\lambda}) = \\langle \\vect{X}, \\vect{\\Theta}^* \\rangle + \\vect{\\lambda}^\\textrm{T} ( \\vect{c}_t - \\vect{X}^\\textrm{T} \\mathds{1}_N)$ where $\\vect{\\lambda} \\geq 0$ are the Lagrange multipliers (item prices) associated with the capacity constraints. The allocation problem then decomposes into one problem for each user and one problem for the provider, where the item prices mediate between the subsidiary problems. Each user calculates its demand by maximizing the corresponding component of the Lagrangian for a given set of prices. On the other side, the provider iteratively updates the prices based on the users' demands to achieve the optimal pricing. After many consecutive updates from the users and the provider, the resulting equilibrium ensures that the limited items are allocated to the users that are expected to obtain the largest rewards. (See \\cite{palomar_2006} for further details.)\n\n\nHowever, as discussed in the introduction, this pricing mechanism has limitations in many real-world applications. Most importantly, it requires the user to solve a problem that involves their valuations even for items with which they have no prior experience; in many real-world scenarios, it is infeasible to ask the users to choose among all the items in the system. Secondly, in the process of price discovery, the mechanism asks the users to repeatedly respond to the prices by recomputing their demands. Since it might take many iterations until convergence to the optimal pricing, asking the users to respond so many times would be a burden on them. Furthermore, the final prices found by this iterative mechanism are only guaranteed to be optimal for the problem defined by the capacity $\\vect{c}_t$ and demand $\\vect{d}_t$ parameters at round $t$. 
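\n\nWhether the mean rewards are known, elicited through prices, or estimated from feedback, the allocation step itself is the relaxed version of \\eqref{integer_num}. The sketch below solves it for a small synthetic instance; the instance data and the use of scipy.optimize.linprog are illustrative assumptions rather than the implementation used in the paper.
\\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M = 4, 3
theta = rng.uniform(size=(N, M))      # stand-in for the mean reward matrix
cap = np.array([2, 1, 2])             # item capacities c_t
dem = np.array([1, 2, 1, 1])          # user demands d_t

# Maximize <X, Theta> subject to row sums <= dem and column sums <= cap,
# with 0 <= x_ui <= 1; linprog minimizes, hence the negated objective.
A_rows = np.kron(np.eye(N), np.ones((1, M)))   # demand (row-sum) constraints
A_cols = np.kron(np.ones((1, N)), np.eye(M))   # capacity (column-sum) constraints
res = linprog(-theta.ravel(),
              A_ub=np.vstack([A_rows, A_cols]),
              b_ub=np.concatenate([dem, cap]),
              bounds=[(0, 1)] * (N * M),
              method="highs")

# The vertex returned by the solver is integral for this constraint
# structure (consistent with the zero integrality gap noted above),
# so rounding only cleans up floating-point noise.
X_star = res.x.reshape(N, M).round().astype(int)
print(X_star)
print("optimal expected reward:", float((X_star * theta).sum()))
\\end{verbatim}
Depending on the SciPy version, the dual values of the capacity rows can also be read from the solver output; they play the role of the item prices discussed above.\n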
If the capacities and demands vary with time, the optimal pricing and allocation for the next allocation round $t+1$ will be different and will need to be rediscovered. \n\n\\subsection{Learning the optimal allocations}\n\n\\label{learning_opt_alloc}\n\nTo address the issues discussed in the previous section, we need mechanisms that can find the optimum allocations using fewer and simpler interactions. One solution is to recommend a subset of items along with prices intelligently chosen by the provider. This way, the users will be able to easily evaluate their preferences over the small number of recommended items and decide on their demands without having to consider all items in the system. The provider, in turn, will decide on well-chosen offerings with the correct prices so that it can satisfy the capacity constraints. However, as the provider does not have complete knowledge of the user preferences, it needs to learn the unknown preference parameters $\\vect{\\Theta}^*$ from the user feedback so that it can determine better recommendations as well as the correct prices. Based on the examples of applications provided in the introduction, we believe that the design of such system dynamics is a practically-relevant question to be resolved.\n\nAs a first step in this direction, we restrict our attention to a setting that itself has interesting interactions between learning the user preferences and allocating the items. In order to facilitate our analysis, we consider that the number of choices presented to each user $u$ at round $t$ is limited exactly to their demand $d_{t, u}$ and that users are allocated all of the items that they are recommended. Therefore, the problem essentially reduces to an allocation problem in which users are allocated a set of items directly by the provider instead of choosing between the offerings. Then, after each round of allocation, users provide feedback about the items that they have been allocated so that the provider can enhance its performance in the following rounds. Hence, while the users are allocated items sequentially, the predictions are constantly refined using the reward feedback.\n\nThe provider determines the allocations according to an \\textit{optimistic} estimate of the true mean reward matrix $\\vect{\\Theta}^*$. It solves the allocation problem \\eqref{integer_num} assuming that the estimated parameter is the underlying reward parameter and obtains an estimate of the optimum allocation at each round $t$. Even though these allocations can be suboptimal due to estimation errors, our analysis shows that the cumulative regret incurred by these sequential allocations can only grow sublinearly with the time horizon. Using this approach, we couple the general principle of optimism in the face of uncertainty (OFU) with capacity-aware resource allocation. In the experiments section, we show the importance of this connection by comparing our strategy with algorithms that only focus on one aspect of the problem: a non-OFU algorithm that only aims to maximize momentary performance, and an OFU-based algorithm that is unaware of the capacities.\n\n\\begin{remark}\n\\label{remark_price}\nWhen the allocation problem \\eqref{integer_num} is solved with the estimated parameters, the Lagrange multipliers for the capacity constraints give estimates of the optimum prices of the items. 
As long as the user preferences are estimated well enough, these prices emerging from the provider's problem are such that users who are aware of their preferences for all items would still choose the recommended items. Hence, once the user preferences are learned, the mechanism is able to achieve high-reward allocations that comply with the user incentives under the optimal pricing.\n\\end{remark} \n\n\n\\subsection{Problem formulation}\n\nIn this section, we formulate the provider's problem and its objective. At each time period $t$, the provider chooses multiple user-item allocation pairs collected into a set $\\mathcal{A}_t \\subseteq [N] \\times [M]$. Then, the provider observes a random reward $R_{t, u, i}$ if user $u$ is allocated item $i$ at round $t$. The total reward is the sum of the rewards obtained from the system at all rounds during a time horizon $T$. The task is to repeatedly allocate the items to the users in multiple rounds so that the total expected reward of the system is as close to the reward of the optimal allocation as possible.\n\nLetting $\\mathbf{E}_{u, i} \\in \\mathbb{R}^{N \\times M}$ denote the zero-one matrix with a single one at the $(u, i)$ entry, we can write the indicator matrix for the allocation at time $t$ as $\\mathbf{X}_{t} = \\sum_{(u, i) \\in \\mathcal{A}_t} \\mathbf{E}_{u, i}$. Consequently, $\\mathbf{X}_{t}$ becomes a zero-one matrix with ones at the entries in $\\mathcal{A}_t$ and zeros everywhere else. Note that there is a one-to-one relation between the matrix $\\mathbf{X}_{t}$ and the set $\\mathcal{A}_t$.\n\nWe denote by $H_t$ the history $\\{\\mathbf{X}_{\\tau}, (R_{\\tau, u, i})_{(u, i) \\in \\mathcal{A}_{\\tau}}\\}_{\\tau = 1}^{t-1}$ of observations available to the provider when choosing the next allocation $\\mathbf{X}_{t}$. The allocator employs a policy $\\pi = \\{ \\pi_t | t \\in \\mathbb{N}\\}$, which is a sequence of functions, each mapping the history $H_t$ to an action $\\mathbf{X}_{t}$. Then, the $T$-period cumulative regret of a policy $\\pi$ is the random variable\n\\vspace{-7pt}\n\\begin{equation*} \\label{cumulative_regret}\n    \\mathcal{R}(T, \\pi) = \\sum_{t = 1}^{T} \\left[ \\langle \\mathbf{X}^*_t, \\mathbf{\\Theta}^*\\rangle - \\langle \\mathbf{X}_{t}, \\mathbf{\\Theta}^* \\rangle \\right]\n\\end{equation*}\n\\vspace{-4pt}\nwhere $\\vect{X}^*_t \\in \\argmax_{\\vect{X} \\in \\mathcal{X}_t} \\; \\langle \\vect{X}, \\vect{\\Theta}^* \\rangle$ denotes the optimum allocation at time $t$.\n\n\\subsection{Combinatorial Multi-Armed Bandit Approach}\n\nWe first consider the case without any assumption on the structure of the mean reward matrix, denoted by $R$ in this subsection. In this case, our problem maps to the setting of combinatorial multi-armed bandits. We can consider each allocation pair $(u, i)$ to be an arm with expected reward $R_{ui}$. In total, there are $N M$ arms, and at each iteration we select a subset of arms (a super arm in the context of CMAB). We are constrained to select a subset from $\\mathcal{S} \\subseteq 2^{[NM]}$, where $\\mathcal{S}$ corresponds to the allocations that comply with the problem constraints. \n\nLet $r_R(X)$ denote the expected reward of an allocation $X$ when the reward matrix is given by $R$. 
Then, \n\\begin{equation}\n    r_R(X) = \\sum_{u = 1}^{N} \\sum_{i = 1}^{M} R_{ui} x_{ui}\n\\end{equation}\n\nThis reward structure satisfies the following properties:\n\n\\begin{enumerate}\n    \\item Monotonicity: If $R_{ui} \\leq R_{ui}'$ for all $u \\in [N], i \\in [M]$, then we have $r_R(X) \\leq r_{R'}(X)$.\n    \\item Bounded smoothness: If $\\max_{(u,i) \\in [N]\\times [M]} |R_{ui} - R_{ui}'| \\leq \\Lambda$, then we have $|r_R(X) - r_{R'}(X)| \\leq N \\Lambda$.\n\\end{enumerate}\n\n\\subsection{Low-Rank Multi-Armed Bandit \\& Interactive Collaborative Filtering Approach}\n\nUsing the latent factor model (Hofmann and Puzicha 1999), the reward is assumed to be given by the inner product of user and item feature vectors $p_u$ and $q_i$:\n\\begin{equation}\n    r_{ui} = p_u^\\textrm{T}q_i + \\eta\n\\end{equation}\nwhere $\\eta$ is zero-mean Gaussian observation noise with variance $\\sigma^2$.\n\n\\subsubsection{Distribution of User and Item Feature Vectors}\n\nWe adopt the PMF model (Salakhutdinov and Mnih 2008) to build the distributions of the user and item feature vectors, which are then used to generate the reward values. We define the prior distributions of the user and item feature vectors as zero-mean Gaussians with variances $\\sigma_p^2$\nand $\\sigma_q^2$:\n\\begin{align}\n    p(p_u | \\sigma_p^2) &= \\mathcal{N}(p_u | 0, \\sigma_p^2 I)\\\\\n    p(q_i | \\sigma_q^2) &= \\mathcal{N}(q_i | 0, \\sigma_q^2 I)\n\\end{align}\n\nThen, the conditional distribution of the reward $r_{ui}$ given the user and item feature vectors follows a Gaussian distribution:\n\\begin{equation}\n    p(r_{ui} | p_u, q_i, \\sigma^2) = \\mathcal{N}(r_{ui} | p_u^\\textrm{T} q_i, \\sigma^2)\n\\end{equation}\n\nGiven a set of noisy reward observations $\\{u(k), i(k), r_{u(k) i(k)}\\}_{k = 1}^{K}$, we can obtain the posterior distributions of the user and item feature vectors. 
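\n\nBefore writing out these posteriors, a minimal numerical sketch may help fix ideas: under the Gaussian model above, the MAP estimate of each user feature vector given the item feature vectors reduces to a ridge-regression step. The sizes, the random seed, and the simplifying assumption that every reward entry is observed are illustrative only; the general case conditions on the observed pairs alone.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 20, 15, 3                     # users, items, latent rank (illustrative)
sigma, sigma_p, sigma_q = 0.1, 1.0, 1.0

# Sample the PMF generative model defined above.
P = rng.normal(0.0, sigma_p, size=(N, R))                 # user vectors p_u
Q = rng.normal(0.0, sigma_q, size=(M, R))                 # item vectors q_i
rewards = P @ Q.T + rng.normal(0.0, sigma, size=(N, M))   # noisy r_ui

# MAP estimate of each p_u given Q and user u's rewards: the posterior
# p(p_u | rewards, Q) is Gaussian, so its mode solves a ridge regression
# with regularization sigma^2 / sigma_p^2.
lam = (sigma / sigma_p) ** 2
P_map = np.linalg.solve(Q.T @ Q + lam * np.eye(R), Q.T @ rewards.T).T

relative_err = np.linalg.norm(P_map @ Q.T - P @ Q.T) / np.linalg.norm(P @ Q.T)
print("relative error of the reconstructed mean rewards:", relative_err)
\\end{verbatim}
Alternating this step between the user and item feature vectors yields an ALS-style matrix factorization; the derivation that follows makes the underlying conditional distributions explicit.\n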
Here, we first focus on the conditional distribution user feature vectors, given the current item feature vectors:\n\\begin{align}\n p(P | R, Q, \\sigma^2, \\sigma_p^2, \\sigma_q^2) &= p(P | R, Q, \\sigma^2, \\sigma_p^2)\\\\\n &\\propto p(R | P, Q, \\sigma^2) \\cdot p(P | \\sigma_p^2)\n\\end{align}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n Automated systems that make fast decisions based on visual input, such as autonomous driving, drone control, or smart factories, rely on a very short response time to prevent damage or injury.\n Low-latency network transmission enabled by recent development in networking technologies, such as 5G, allows edge devices with low computing power to offload expensive \\ac{DNN} inference of the vision task to a nearby server.\n \n \n Figure~\\ref{fig:system} depicts a model scenario of an obstacle suddenly emerging in a trajectory of a self-driving car.\n Considering the car's speed of 100 km\/h, if the end-to-end latency of the brake control system increased by 40 ms, for example, due to slow compression, the car would travel an additional 1.1 meters, potentially hitting the obstacle instead of stopping in front of it.\n\n \n \n\n \\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/pdf\/system.pdf}\n \n \n \\caption{The reaction time of an autonomous vehicle control system determines whether it can avoid hitting an obstacle.}\n \\label{fig:system}\n \n \\end{figure}\n\n \n \n \n \n \n \n\n Fast compression of source images is necessary to ensure low latencies over a transfer channel, and one way to decrease the latency is to reduce the codec complexity.\n Removing coding features, however, typically results in decreased vision performance at the same bitrate.\n Since \\acp{DNN} have the ability to learn from the input data, it is possible to retrain them on the compressed dataset (after decompressing it) to overcome the coding efficiency lost by pruning the coding features.\n At the same time, pruning the existing codecs allows to reuse existing hardware support and retraining does not modify the neural network architecture since only the weights change.\n\n \n \n\n \n \n \n \n \n \n \n \n\n \n\n \n \n\n\n Several types of low-complexity codecs exist.\n Real-time texture compression can reach very high encoding speeds compared to other methods at the expense of rather low coding efficiency~\\cite{waveren2007, holub2013, zadnik2022}.\n A ``mezzanine compression'' family of codecs is designed specifically to meet ultra-low latency requirements, with \\acs{JPEG}~XS~\\cite{descampe2021jpeg} as the newest standard in this family.\n The recently standardized \\ac{HTJ2K}~\\cite{htj2k} simplifies the otherwise complex \\acs{JPEG}~2000~\\cite{skodras2001jpeg} with the goal of 10x throughput improvement.\n However, it does not offer such a precise rate allocation as \\acs{JPEG}~XS.\n Traditional hybrid video codecs, such as \\ac{HEVC}~\\cite{sullivan2012} or \\ac{VVC}~\\cite{bross2020} offer advanced coding features and great coding efficiency.\n However, the additional complexity of, for example, the inter or intra prediction and advanced entropy coding can be prohibitive in resource-constrained devices.\n \n \\Ac{JPEG}~\\cite{wallace1992} shares the core transform coding features with hybrid video codecs, but without the additional complexity.\n At the same time, it can deliver sufficient quality for computer vision applications, as shown in the paper.\n\n In this paper, 
we explore the idea of pruning the encoding configurations to reduce the encoding time and latency and compensating the lost vision performance by retraining the vision model with the compressed dataset.\n As two case studies we chose reducing the configuration space of the otherwise very complex \\ac{ASTC}~\\cite{nystad2012} format and \\ac{JPEG}~XS that was designed specifically as a lightweight, low-latency codec.\n \n \n \n Both codecs operate in a constant bitrate mode which is important for ensuring predictable latency.\n \n\n \n \n \n \n\n We evaluate the effect of \\ac{ASTC} compression artifacts on the image classification accuracy of ShuffleNet~\\cite{ma2018shufflenet} V2 and both \\ac{ASTC} and \\ac{JPEG}~XS on the semantic segmentation of LR-ASPP-MobileNetV3~\\cite{howard2019}.\n We compare both to \\ac{JPEG}, and in the case of the segmentation task also to \\ac{HTJ2K} and \\ac{JPEG}~2000.\n\n The contributions of this paper are:\n \\begin{itemize}\n \\item We propose a lightweight \\ac{ASTC} encoder\\footnote{\\url{github.com\/cpc\/simple-texcomp}} that is approximately $2.3\\times$ faster than \\ac{JPEG} on a Samsung S10 smartphone.\n \\item We study how pruning \\ac{JPEG}~XS encoding configurations impacts latency and computer vision performance.\n \n \n \\item We demonstrate that the quality vs. latency tradeoff can be alleviated by retraining the classification and segmentation models with the compressed datasets.\n \\end{itemize}\n\n \n \n \n \n\n \n \n \n \n\n \n \n\n \n \n \n \n\n \n \n \n \n \n \n\n \n \n\n \n\n \n\n\n\n\n\n\n\n\n\\section{Background and Related Work}\n\n\\subsection{Adaptive Scalable Texture Compression}\n\n\\ac{ASTC} is the newest and most flexible texture compression format adopted as an OpenGL extension by the Khronos Group.\nLike other texture compression formats, it quantizes the input block's color space and represents its pixels as indices pointing at one of the quantized colors.\nCompared to the older BCn formats, \\ac{ASTC} supports many configuration options: scaling the input block size, partitioning, different color endpoint modes (CEM), endpoint and weight quantizations, dual-plane encoding, and \\ac{BISE}.\nAn important property of texture compression is a fixed compression ratio and random access: The individual pixels are randomly addressable from the compressed representation without decompressing.\n\nModern \\acp{GPU} have texture fetch units that can perform the decompression online during rendering with a negligible overhead which further enhances the low-latency potential of texture compression.\n\n\n\n\n\n\\subsection{JPEG XS}\n\n\\ac{JPEG}~XS is a wavelet-based mezzanine codec designed primarily for low complexity, low latency, high bandwidth, and high-quality video delivery.\nThe minimal coding unit of \\ac{JPEG}~XS is one precinct whose size can range from less than one pixel line up to several lines of the image.\n\nThe \\ac{JPEG}~XS rate allocation can predict the bitrate precisely, unlike \\ac{HTJ2K} where a precise rate allocation would require a significant additional complexity~\\cite{descampe2021jpeg}.\nThe latest version of the standard also supports direct Bayer data compression~\\cite{richter2021bayer} which can be used to bypass the traditional image processing pipeline at the sensor side and thus save latency.\n\nTo the best of our knowledge, no publicly available JPEG XS encoder currently exists.\nTherefore, we utilized the reference \\ac{JPEG}~XS reference software, version 1.4.0.4 (ISO\/IEC 21122-5:2020).\nIn 
the literature, \\cite{itakura2020} developed a JPEG XS codec capable of running at 60 \\ac{FPS} at 8K resolution on a 64-core AMD EPYC processor.\n\n\n\\subsection{Compression for Computer Vision}\n\\label{subs:cv}\n\n\nSome previous works optimize the perceptual model of \\ac{JPEG} for computer vision~\\cite{xie2019, liu2018}, leading to significant quality improvements.\n\\cite{brummer2020} optimized the global JPEG XS encoding parameters (gains and priorities) to better capture the characteristics of a computer vision target.\nOur approach of retraining the vision model with the compressed dataset is complementary to codec parameter optimizations.\n\n\\cite{zadnik2021} used retraining to recover object detection and semantic segmentation performance of BC1 and YCoCg-BC3~\\cite{waveren2007} texture compression.\nTo the best of our knowledge, no prior work implements a minimal-subset \\ac{ASTC} in the context of computer vision.\n\n\\cite{marie2022expert} proposed a modified loss function of a \\ac{DNN} to achieve more efficient restoration of classification accuracy lost to compression artifacts.\nThey achieved a minor but consistent gain of up to 0.79 \\ac{pp} validation accuracy compared to a simple retraining method used in this work.\n\nThe recent exploration of \\ac{VCM} by \\ac{MPEG} is an effort to develop a coding scheme with both machine and human perception in mind~\\cite{duan2020}.\nThe current development is being built on top of \\ac{VVC} which is a more complex format than what we target in this paper.\nFurthermore, our use case considers only the computer vision performance without the human in the loop.\nJPEG AI \\cite{jpegai2021} also explores compression for both human and computer vision targets, but focuses on utilizing learning-based coding methods.\n\nYet another approach to adapting compression for computer vision is ``feature compression'' which encodes intermediate neural features~\\cite{shao2020bottlenet}.\nFeature compression, however, requires computing some of the convolutional layers on the encoding device which contrasts with our approach of decreasing the encoding complexity.\n\n\\section{Implementation of Pruned Codecs}\n\n\\subsection{ASTC}\n\nDue to the \\ac{ASTC} complexity, exhaustively searching for encoding parameters is not feasible in real time, and such, heuristics must be used to prune the configuration space.\nIn our work, we reduce the configuration space to only one configuration: 5-bit color endpoint and 2-bit and weight quantization with a weight grid of $8\\times5$.\nThe selected configuration showed the lowest per-pixel distortion measured as \\ac{PSNR} on a sample dataset from adjacent configurations without requiring \\ac{BISE}.\n\nSince the only way to scale the \\ac{ASTC} bitrate is to modify the input block size, we implemented both $12\\times12$ and $8\\times8$ input block sizes, implying a \\ac{CR} of 27:1 and 12:1 ($0.\\overline{8}$ and 2.0 \\ac{bpp}), respectively.\n\nThe encoding of a block starts by selecting the endpoints with a small inset similar to~\\cite{waveren2007}.\nThen, ``ideal weights'' are selected by orthogonally projecting the input pixels onto the line defined by the endpoints.\nLastly, the ``ideal weights'' are bilinearly downsampled to the $8\\times5$ grid and quantized into two bits.\n\n\n\\subsection{JPEG XS}\n\n\nThe long encoding time of the reference encoder is caused by the rate allocation algorithm exhaustively computing the bit budget for each precinct at all quantization levels and using all possible 
coding methods.\nWe reduced the number of searched quantizations and coding methods to 13 and 5, respectively, without losing quality as the other combinations were unused in our tests.\n\nTo reduce the number of rate allocation passes further, we disabled the significance flag coding.\nSignificance flag coding detects a run of all-zero ``significance groups'' (groups of 8 adjacent coefficients) that can be encoded with a single flag and requires an additional rate allocation pass.\nDisabling this method brings the number of utilized coding methods from 5 to 3 and removes the need for ``refresh'' passes, significantly reducing the encoding time.\nHowever, it also reduces the coding efficiency, which we try to recover by retraining with the compressed dataset.\nWe kept the coefficient prediction from a previous line, as disabling it would prevent the encoder meet the target bitrate.\n\n\n\\section{Experimental Setup}\n\n\\subsection{Implementation Details}\n\\label{subs:impl}\n\n\nThe \\ac{ASTC} encoder uses ARM NEON intrinsics to vectorize the most significant loops using 8-bit fixed-point representation.\nFor a fair runtime comparison with \\ac{JPEG}, we chose the \\ac{SIMD}-optimized \\texttt{libjpeg-turbo} library\\footnote{\\url{libjpeg-turbo.org} (version 2.1.1)} and developed a wrapper encoder application around the library.\nThe \\ac{JPEG} coding parameters were chosen to match the defaults of the \\texttt{cjpeg} command-line utility: YCbCr color space with 4:2:0 subsampling and no restart intervals.\nThe quality parameter (Q) 45 was selected so that the bitrate of a random sample of 10000 ImageNet images after encoding is the highest possible at or below the rate of \\ac{ASTC} $12\\times12$.\nBoth \\ac{ASTC} and \\ac{JPEG} were evaluated on a single core (A76) of a Samsung S10 smartphone.\n\n\n\nThe \\ac{JPEG}~XS encoder was compiled only for the x86 instruction set and evaluated on a single thread of Intel i7-8650U laptop CPU at a base frequency of 2.1 GHz with disabled frequency scaling.\nFor runtime comparison we chose two open source encoders: \\texttt{grok}\\footnote{\\url{github.com\/GrokImageCompression\/grok} (version 9.7.7)} for \\ac{JPEG}~2000 and \\texttt{OpenJPH}\\footnote{\\url{github.com\/aous72\/OpenJPH} (version 0.9.0)} for \\ac{HTJ2K}, both using irreversible \\ac{DWT}.\n\n\n\n\n\n\n\\subsection{Vision Tasks}\n\nWe evaluated the image classification accuracy of ShuffleNet V2\\footnote{\\url{pytorch.org\/hub\/pytorch_vision_shufflenet_v2}} in $0.5\\times$ and $1.0\\times$ sizes trained on the ImageNet dataset~\\cite{deng2009imagenet} with training hyperparameters derived from \\cite{zhang2018shufflenet} and \\cite{ma2018shufflenet}.\nWe also evaluated a semantic segmentation task with LR-ASPP-MobileNetV3\\footnote{\\url{github.com\/ekzhang\/fastseg} (commit 91238cd)} in both large and small versions trained on the Cityscapes dataset~\\cite{cordts2016}.\nThe Cityscapes images for training were cropped to $\\sqrt{2}$ of the original size in each dimension to avoid running out of \\ac{GPU} memory.\nBoth networks were retrained with the dataset compressed with \\ac{ASTC} to recover the vision performance lost by compression artifacts.\nUnfortunately, the \\ac{JPEG}~XS encoder was not able to encode some of the ImageNet images at the bitrate of $0.\\overline{8}$ \\ac{bpp}.\nTherefore, we evaluated only the segmentation task with this codec.\n\nAll encoders mentioned in the previous subsection were used for quality evaluations, along with 
\\texttt{astcenc}\\footnote{\\url{github.com\/ARM-software\/astc-encoder} (version 3.7)} at the fastest profile for quality evaluations on the Cityscapes dataset.\n\n\\ac{JPEG}-compressed images were used only for retraining the ShuffleNet V2 network.\n\\ac{JPEG}, \\ac{JPEG}~2000, and \\ac{HTJ2K} reach segmentation \\ac{mIoU} within 1.5\\% below the uncompressed result, and retraining is expected to bring the results on par with the uncompressed results, therefore, we did not retrain with these codecs.\n\n\\section{Results}\n\n\\subsection{Quality}\n\n\\paragraph*{Image Classification}\n\n\n\nTable~\\ref{tab:quality_shufflenet} summarizes the highest achieved classification accuracies of ShuffleNet V2 under different conditions.\nWhen the compressed data is used as an input to the network trained on uncompressed data (the ``orig.'' column), the \\ac{ASTC} compression degrades the accuracy by more than 15 \\ac{pp}, while the difference caused by \\ac{JPEG} compression is only 1.3 and 0.6 \\ac{pp}.\nHowever, when retrained with the compressed dataset (the ``retr.'' column), the accuracy decrease for \\ac{ASTC} with $12\\times12$ block size is only 4.9 and 5.0 \\ac{pp} for the $0.5\\times$ and $1.0\\times$ network sizes, respectively.\nThe \\ac{ASTC} $8\\times8$ achieves higher quality than $12\\times12$: only 2.3--1.8 \\ac{pp} accuracy decrease compared to the uncompressed result.\nRetraining with the \\ac{JPEG}-compressed dataset brings a 0.2 \\ac{pp} increase in the classification accuracy of the smaller network.\nThe results show that retraining the larger network does not improve the already high accuracy for \\ac{JPEG}.\n\n\\begin{table}[htbp]\n \\caption{Validation top-1 accuracy of ShuffleNet V2 on ImageNet validation set with JPEG and the proposed ASTC compression with and without retraining.}\n \\label{tab:quality_shufflenet}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{|l|r|rr|rr|}\n \\hline\n & & \\multicolumn{2}{c|}{ 0.5x } & \\multicolumn{2}{c|}{ 1.0x } \\\\\n compression & bpp & orig. & retr. & orig. & retr. 
\\\\\n \\hhline{|=|=|==|==|}\n uncompressed & 24.0 & \\multicolumn{2}{c|}{54.4\\%} & \\multicolumn{2}{c|}{64.3\\%} \\\\\n ASTC 12x12 & 0.89 & -16.8 & -4.9 & -15.1 & -5.0 \\\\\n JPEG Q45 & \\raise.17ex\\hbox{$\\scriptstyle\\mathtt{\\sim}$} 0.89 & -1.3 & -1.1 & -0.6 & -0.7 \\\\\n ASTC 8x8 & 2.00 & -6.4 & -2.3 & -6.6 & -1.8 \\\\\n \\hline\n \\end{tabular}\n \\end{threeparttable}\n\\end{table}\n\n\\paragraph*{Semantic Segmentation}\n\nFigure~\\ref{fig:fastseg_eval} compares rate-distortion curves of multiple encoders according to three metrics: \\ac{PSNR}, \\ac{SSIM}, and validation \\ac{mIoU} of LR-ASPP-MobileNetV3 (large vesion) trained on an uncompressed dataset.\nThe small version of the model shows similar relations between the \\ac{mIoU} curves to the large version, therefore, we omitted it for brevity.\nThe plots show that despite significant \\ac{PSNR} and \\ac{SSIM} differences between \\ac{JPEG}~2000, \\ac{JPEG}, and \\ac{HTJ2K}, the \\ac{mIoU} difference is relatively small.\nDisabling significance flag coding of \\ac{JPEG}~XS (denoted as ``no-sf'') shows a consistent decrease of all metrics in both main and subline profiles.\nSimilarly, when compared to a full-featured \\texttt{astcenc} encoder at the fastest preset, our pruned implementation achieves significantly lower quality.\nBoth \\ac{SSIM} and \\ac{mIoU} metrics decrease rapidly with \\ac{JPEG}~XS at lower bitrates ($0.\\overline{8}$ and 1.0 \\ac{bpp}), especially the subline profile.\n\n\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\tikzstyle{dataline} = [color=black, dashed, mark options=solid, mark=+]\n \\begin{axis}\n [\n yshift = -11cm,\n width=0.95\\linewidth,\n height=6cm,\n \n \n xlabel = {bpp},\n xlabel near ticks,\n ylabel = {PSNR (dB)},\n \n ylabel near ticks,\n grid = both,\n major grid style = {dotted,black!90},\n minor grid style = {dotted,gray!60},\n legend columns = 3,\n legend style = { at = {(0.433, 1.05)}, anchor = south, font = \\scriptsize },\n legend cell align = { left },\n xmin = 0,\n xmax = 4,\n ymin = 30,\n ymax = 55,\n ytick distance = 10,\n minor x tick num = 9,\n minor y tick num = 9,\n ]\n\n \\addplot [dataline, black, mark=x]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/grok_jp2_irev.csv};\n \n \n \\addplot [dataline, blue]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/openjph.csv};\n \\addplot [dataline, green!75!black]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/turbojpeg.csv};\n \\addplot [dataline, solid, black!40!purple!70!green]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/astc_u8.csv};\n \\addplot [dataline, black!40!purple!70!green]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/astcenc_fastest.csv};\n \\addplot [dataline, red]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/jxs_p3.csv};\n \\addplot [dataline, orange!50!brown]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/jxs_p5.csv};\n \\addplot [dataline, solid, red]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/jxs_nosf_p3.csv};\n \\addplot [dataline, solid, orange!50!brown]\n table[x=bpp_mean, y=psnr_mean, col sep=comma] {data\/generated\/jxs_nosf_p5.csv};\n\n \\legend{ J2K (grok), HTJ2K (OpenJPH), JPEG (libjpeg-turbo), ASTC (ours),\n ASTC (astcenc), JXS main, JXS subline, JXS main no-sf, JXS subline no-sf }\n\n \\end{axis}\n \\begin{axis}\n [\n yshift = -16.6cm,\n width=0.95\\linewidth,\n height=6cm,\n xlabel = {bpp},\n xlabel near ticks,\n ylabel = {SSIM 
(-)},\n \n ylabel near ticks,\n grid = both,\n major grid style = {dotted,black!90},\n minor grid style = {dotted,gray!60},\n xmin = 0,\n xmax = 4,\n ymin = 0.89,\n minor x tick num = 9,\n minor y tick num = 4,\n ]\n\n \\addplot [dataline, black, mark=x]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/grok_jp2_irev.csv};\n \n \n \\addplot [dataline, blue]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/openjph.csv};\n \\addplot [dataline, green!75!black]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/turbojpeg.csv};\n \\addplot [dataline, solid, black!40!purple!70!green]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/astc_u8.csv};\n \\addplot [dataline, black!40!purple!70!green]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/astcenc_fastest.csv};\n \\addplot [dataline, red]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/jxs_p3.csv};\n \\addplot [dataline, orange!50!brown]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/jxs_p5.csv};\n \\addplot [dataline, solid, red]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/jxs_nosf_p3.csv};\n \\addplot [dataline, solid, orange!50!brown]\n table[x=bpp_mean, y=ssim_mean, col sep=comma] {data\/generated\/jxs_nosf_p5.csv};\n\n \\end{axis}\n \\begin{axis}\n [\n yshift = -22.2cm,\n width=0.95\\linewidth,\n height=6cm,\n \n \n xlabel = {bpp},\n xlabel near ticks,\n ylabel = {mIoU (-)},\n \n ylabel near ticks,\n grid = both,\n major grid style = {dotted,black!90},\n minor grid style = {dotted,gray!60},\n xmin = 0,\n xmax = 4,\n minor x tick num = 9,\n minor y tick num = 9,\n ]\n\n \\addplot [dataline, black, mark=x]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/grok_jp2_irev.csv};\n \n \n \\addplot [dataline, blue]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/openjph.csv};\n \\addplot [dataline, green!75!black]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/turbojpeg.csv};\n \\addplot [dataline, solid, black!40!purple!70!green]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/astc_u8.csv};\n \\addplot [dataline, black!40!purple!70!green]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/astcenc_fastest.csv};\n \\addplot [dataline, red]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/jxs_p3.csv};\n \\addplot [dataline, orange!50!brown]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/jxs_p5.csv};\n \\addplot [dataline, solid, red]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/jxs_nosf_p3.csv};\n \\addplot [dataline, solid, orange!50!brown]\n table[x=bpp_mean, y=fastseg_large_miou, col sep=comma] {data\/generated\/jxs_nosf_p5.csv};\n \\addplot[mark=none, black, domain=0:4] {0.66539};\n\n \\draw (0, 260) node [right] {\\scriptsize uncompressed};\n\n \\end{axis}\n \\end{tikzpicture}\n \n \\caption{Mean intersection over union (mIoU) of a FastSeg large network (bottom), SSIM (middle) and PSNR (top) of a Cityscapes validation set compressed with different methods.}\n \\label{fig:fastseg_eval}\n \n\\end{figure}\n\nTo recover the large \\ac{mIoU} degradation of \\ac{ASTC} and \\ac{JPEG}~XS at low bitrates, we retrained the network with the compressed datasets.\nTable~\\ref{tab:quality_mobilenet} summarizes the \\ac{mIoU} improvements of retraining LR-ASPP-MobileNetV3 in both small and large 
variants.\nFor \\ac{ASTC} $12\\times12$, retraining brought an improvement of 1.1 and 8.6 \\ac{pp} for the small and large networks, respectively.\nRetraining \\ac{JPEG}~XS in the main profile resulted in \\ac{mIoU} around 2.3--2.6 \\ac{pp} lower than the uncompressed result.\nThe subline profile of \\ac{JPEG}~XS shows a sharp decline in the vision performance without retraining.\nRetraining allows recovering most of the quality back.\nHowever, the results still do not reach the quality of \\ac{ASTC} $12\\times12$.\n\n\n\\begin{table}[htbp]\n \\caption{Mean intersection over union (mIoU) of LR-ASPP-MobileNetV3 with Cityscapes validation set and the JPEG XS compression.}\n \\label{tab:quality_mobilenet}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{|l|r|rr|rr|}\n \\hline\n & & \\multicolumn{2}{c|}{ small } & \\multicolumn{2}{c|}{ large } \\\\\n compression & bpp & orig. & retr. & orig. & retr. \\\\\n \\hhline{|=|=|==|==|}\n uncompressed & 24.0 & \\multicolumn{2}{c|}{61.2\\%} & \\multicolumn{2}{c|}{66.5\\%} \\\\\n JPEG XS (main, sf) & 0.89 & -4.9 & -2.3 & -6.5 & -2.6 \\\\\n JPEG XS (main, no-sf) & 0.89 & -5.8 & -2.7 & -7.4 & -2.3 \\\\\n JPEG XS (subline, sf) & 0.89 & -14.4 & -5.4 & -20.5 & -5.3 \\\\\n JPEG XS (subline, no-sf) & 0.89 & -17.3 & -6.7 & -25.2 & -6.8 \\\\\n ASTC 12x12 & 0.89 & -5.5 & -4.4 & -12.6 & -4.0 \\\\\n ASTC 8x8 & 2.00 & -3.5 & -2.8 & -6.4 & -1.7 \\\\\n \\hline\n \\end{tabular}\n \\end{threeparttable}\n\\end{table}\n\nFigure~\\ref{fig:images} shows segmentations of two challenging scenes after the retrained model inference using compressed images to illustrate the effect of compression on the segmentation result.\nThe first image shows shape deformations caused by the \\ac{ASTC} and \\ac{JPEG}~XS main profile (``p3'') while the network trained with \\ac{JPEG}~XS subline profile (``p5'') fails to detect the people at all.\nIn the second image, all cases detect the person in the foreground but fail to detect some, or all, the people in the distance.\nIt should be emphasized that the MobileNet networks were trained with images containing approximately half of the pixels of the full-resolution images due to \\ac{GPU} memory limitations.\nTherefore, the examples do not correspond to the best predictions achievable with these networks.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/png\/retrain_comp.png}\n \\caption{Visual comparison of LR-ASPP-MobileNetV3 (small version) segmentation of two Cityscapes images. 
From left to right: Original image (brightened), ground truth, segmentation of the model trained by datasets compressed by the ASTC ($12\\times12$), pruned JPEG XS (main profile) and pruned JPEG XS (subline profile) encoders.}\n \\label{fig:images}\n\\end{figure}\n\n\\subsection{Runtime}\n\n\\paragraph*{ASTC Encoder}\n\nTable~\\ref{tab:runtime_arm} compares the average encoding time of a Cityscapes image (resolution $2048\\times1024$) of our \\ac{ASTC} encoder with the block size $12\\times12$ and $8\\times8$ to \\ac{JPEG} with the quality parameters Q 85 and 96.\nThe Q parameters were determined by the same procedure as in Subsection~\\ref{subs:impl} to ensure approximately the same bitrate as \\ac{ASTC} on a random subset of 100 images.\nFor the $12\\times12$ block size, the images were padded to a resolution divisible by a block size of 12 before \\ac{ASTC} encoding.\n\nThe results show our simple \\ac{ASTC} $12\\times12$ encoder is approximately $2.3\\times$ faster than the \\ac{JPEG} encoder based on \\texttt{libjpeg-turbo}.\nThe \\ac{JPEG} decoding was slightly slower than the encoding.\nWhile we did not conduct \\ac{ASTC} decoding measurements, in~\\cite{zadnik2021}, we measured BC1 and YCoCg-BC3 decoding time of an 8K frame as less than 1 ms on a desktop \\ac{GPU}.\n\\ac{ASTC} decoding is more complicated, but the decoding overhead is still expected to be close to negligible in comparison to the encoding.\n\n\\begin{table}[htbp]\n \\caption{Encoding time of the proposed \\ac{ASTC} encoder and \\texttt{libjpeg-turbo} encoder and decoder (ARM A76 single-core).}\n \\label{tab:runtime_arm}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{|l|rr|}\n \\hline\n & \\multicolumn{2}{|c|}{bpp} \\\\\n & \\tl0.89 & \\tl2.0 \\\\\n \\hhline{|=|==|}\n ASTC & 5.8 & 7.0 \\\\\n JPEG (enc, libjpeg-turbo) & 13.3 & 16.7 \\\\\n JPEG (dec, libjpeg-turbo) & 13.6 & 21.5 \\\\\n \\hline\n \\end{tabular}\n \\end{threeparttable}\n\\end{table}\n\n\nFor comparison, we also measured the encoding and decoding time of \\ac{JPEG} at quality parameters 0 and 100 as 11 and 22 ms, and 7 and 32 ms, respectively.\nThese numbers establish the encoding speed bounds of this format.\n\nOn a single core of Intel i7-8650U laptop \\ac{CPU}, the AVX2-optimized \\texttt{astcenc} encoder compressed one Cityscapes image at approximately 44 and 61 ms (block sizes $12\\times12$ and $8\\times8$, respectively) at the fastest preset, suggesting that both \\ac{JPEG} and our pruned \\ac{ASTC} encoders are faster than a traditional \\ac{ASTC} encoder even with the latter evaluated on a more powerful \\ac{CPU}.\n\n\\paragraph*{JPEG XS Encoder}\n\n\nTable~\\ref{tab:runtime_x86} compares \\ac{JPEG}~XS with three encoders: \\ac{JPEG}, \\ac{JPEG}~2000, and \\ac{HTJ2K}.\nThe \\ac{HTJ2K} quantization was determined by a similar procedure as in Subsection~\\ref{subs:impl}.\nThe results show that by disabling the significance flag coding, the encoding time of one Cityscapes frame improved by 22--23\\%, and is only about 9--20\\% slower than \\ac{JPEG}~2000.\n\\ac{HTJ2K} encoding by OpenJPH was 3.8--$6.2\\times$ faster than \\ac{JPEG}~XS without the significance coding flags.\n\\ac{JPEG} by \\texttt{libjpeg-turbo} brought this difference further by almost another order of magnitude.\nKakadu \\ac{HTJ2K} is not publicly available, therefore, we used results published in~\\cite{taubman2019high} and extrapolated them to our setup.\nMore specifically, we scaled the result by the number of pixels from 4K to $2048\\times1024$, multiplied by 4 since 
the original result was obtained on a 4-core machine, and finally scaled to our frequency of 2.1 GHz from the original 3.4 GHz.\nBased on this rough estimation, it seems likely the \\ac{HTJ2K} encoder is capable of reaching encoding throughput close to \\ac{JPEG}.\n\n\\begin{table}[htbp]\n \\caption{Encoding times of JPEG XS, JPEG, JPEG 2000, and HTJ2K encoders on a single core of i7-8650U CPU}\n \\label{tab:runtime_x86}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{|l|rrr|}\n \\hline\n & \\multicolumn{3}{|c|}{bpp} \\\\\n & \\tl0.89 & \\tl2.0 & \\tl3.3 \\\\\n \\hhline{|=|===|}\n JPEG XS (main, sf) & 625 & 654 & 683 \\\\\n JPEG XS (main, no-sf) & 477 & 503 & 531 \\\\\n JPEG (libjpeg-turbo) & 11.3 & 13.5 & 16.0 \\\\\n JPEG 2000 (grok) & 437 & 439 & 441 \\\\\n HTJ2K (OpenJPH) & 73.7 & 105 & 138 \\\\\n HTJ2K (Kakadu~\\cite{taubman2019high}) & - & 14.4\\tnote{*} & - \\\\\n \\hline\n \\end{tabular}\n \\begin{tablenotes}[para]\n \\item[*] extrapolated from the result in the publication\n \\end{tablenotes}\n \\end{threeparttable}\n\\end{table}\n\nTable~\\ref{tab:latency} shows \\ac{JPEG}~XS encoding time and latency at two different bitrates and three profiles: high, main, and subline.\nThe ``precinct'' column denotes the number of lines that form one presinct.\nThe ``enc'' column shows the total frame encoding time and the ``latency'' column shows the time until the first precinct is done encoding and thus represents the minimum theoretical achievable latency.\nWhile the overall frame encoding time does not differ dramatically between presets, the precinct size has a major impact on latency: The precinct of the high profile consisting of three lines shows a latency of 32--33\\% of the total encoding time, while the latency of a half-line precinct of the subline profile is three times smaller portion of the encoding time.\nThus, even without reaching a high throughput, it is possible to achieve latency almost an order of magnitude lower than the encoding time.\nIt should also be noted that the latency includes the wavelet transform over the whole frame and can be further reduced by pipelining it with the rest of the computation.\n\n\\begin{table}[htbp]\n \\caption{Latency and throughput comparison of the pruned JPEG XS reference encoder without significance flag coding.}\n \\label{tab:latency}\n \\centering\n \\begin{threeparttable}\n \\begin{tabular}{|lll|rrr|}\n \\hline\n & precinct & bpp & enc & \\multicolumn{2}{c|}{latency} \\\\\n profile & (lines) & & (ms) & (ms) & (\\%) \\\\\n \\hhline{|===|===|}\n high & 3 & 0.89 & 502 & 167 & 33\\% \\\\\n high & 3 & 2.0 & 528 & 166 & 31\\% \\\\\n main & 2 & 0.89 & 477 & 136 & 29\\% \\\\\n main & 2 & 2.0 & 504 & 137 & 27\\% \\\\\n subline & 0.5 & 0.89 & 443 & 53 & 12\\% \\\\\n subline & 0.5 & 2.0 & 469 & 53 & 11\\% \\\\\n \\hline\n \\end{tabular}\n \\end{threeparttable}\n\\end{table}\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\n\n\nRetraining with the compressed dataset showed the largest improvements when the vision performance without retraining was very low, such as the classification with the \\ac{ASTC} $12\\times12$ and segmentation with the subline \\ac{JPEG}~XS encoders.\nOn the segmentation task, the overall quality decrease without retraining was smaller, because the images are less noisy and less prone to compression artifacts.\nThe vision performance could be further enhanced by improving the retraining process and optimizing encoding parameters for computer vision using one of the methods introduced in 
Subsection~\\ref{subs:cv}.\n\nThe lightweight \\ac{ASTC} encoder achieves a higher encoding speed than \\ac{JPEG}, making it the fastest encoder evaluated.\nOn the other hand, the pruned \\ac{JPEG}~XS reference encoder does not achieve sufficient runtime performance.\nHowever, its low complexity and results from literature~\\cite{itakura2020} suggest a fast implementation is possible.\nThe rate allocation can be further optimized by, for example, replacing the exhaustive search with a binary search, in combination with other heuristics.\nThe second most expensive operation in the reference encoder is the wavelet transform which we did not modify.\nIn the reference encoder, the wavelet transform is performed over the whole frame before the coding of individual precincts.\nHowever, it is possible to interleave the wavelet transform with the precinct coding as the latency of the wavelet transform ranges from a few pixels to 6 lines~\\cite{descampe2021jpeg}.\n\nTo put the results into a practical perspective, let us consider a scenario of compressing a Cityscapes image and sending it over a 500 Mbit\/s commercially available 5G network and an embedded transceiver capable of 2 Mbit\/s.\nAssuming a 1 ms latency budget for encoding and network transfer, the \\ac{ASTC} at the bitrate of $0.\\overline{8}$ would require a latency of 145 and 3.2 lines, respectively, assuming the encoding speed of 5.8 ms\/frame.\nWhile the first case allows partitioning the image into larger chunks, in the second case, as shown in Figure~\\ref{fig:fastseg_eval}, lowering the latency of \\ac{JPEG}~XS comes at a significant quality loss and thus necessitates the compensation by retraining.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\nWe explored decreasing an image encoder complexity to achieve lower latency.\nNamely, we evaluated a lightweight implementation of an \\ac{ASTC} encoder and a pruned version of a \\ac{JPEG}~XS reference encoder.\n\nThe \\ac{ASTC} encoder outperforms \\ac{JPEG} in terms of encoding speed by approximately $2.3\\times$ at the same bitrate.\nWhen retrained with the dataset compressed with ASTC at the lowest bitrate of $0.\\overline{8}$, the classification accuracy was about 5 \\ac{pp}, and the segmentation \\ac{mIoU} 4.4--4.0 \\ac{pp} lower than the output of the networks trained and evaluated without any compression.\n\nThe pruned \\ac{JPEG}~XS reference encoder is not nearly as fast as \\ac{ASTC} and needs more optimizations to be usable for real-time tasks.\nNevertheless, we show that disabling significance flag coding decreases the number of required rate allocation passes, and boosts the encoding speed by 22--23\\% at the cost of only 0.4--0.3 \\ac{pp} of segmentation \\ac{mIoU} after retraining.\n\n\n\\ac{HTJ2K} and \\ac{JPEG} outperform both the tested codecs in terms of vision performance.\nHowever, \\ac{ASTC} still holds the advantage of the fastest coding speed, while \\ac{JPEG}~XS, if sufficiently optimized, is suitable for applications requiring ultra-low latencies.\nTo improve the quality, it is possible to apply computer vision--specific encoding parameter optimizations or improve the retraining process.\n\n\\section*{Acknowledgment}\n\nThe work was financially supported by the Tampere University ITC Graduate School. 
It was also supported by European Union's Horizon 2020 research and innovation program under Grant Agreement No 871738 (CPSoSaware) and in part by the Academy of Finland under Grant 325530.\n\n\n\n\n\n\\balance\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:orgdeb2123}\n\nThose systems that we might classify as living, adaptive, or somehow intelligent all display a fundamental property: they resist or avoid perturbations that would result in their existence becoming unsustainable. This means that they must somehow be able to sense their current state of affairs (\\emph{perception}) and respond appropriately (\\emph{action}). In particular, an adaptive system should sense the relevant aspects of its current environmental state, and form expectations about the consequences of that state. In general, the interaction with the environment will be stochastic, and the statistically optimal method of `sensing' and prediction is Bayesian inference.\n\nTypically, however, the system has no direct access to the external state, only to sense data that indirectly have external causes. Moreover, sense data are often very high-dimensional, and predicting their consequences is underdetermined. As a result, it is common to assume that successful organisms are imbued with some kind of \\emph{generative model} of the process by which external causes generate their sense data. They can then use this model to infer those actions will bring (their beliefs about) their current state closer to those expectations: a process called \\emph{active inference}.\n\nSystems such as these are inherently open, and often their internal models and beliefs are supposed to be structured hierarchically---that is, compositionally. The processes of prediction and action sketched here are naturally bidirectional, and indeed our first contribution in the present work is to show that Bayesian inference is abstractly structured as a \\emph{category of optics} \\citep{Riley2018Categories,Clarke2020Profunctor}, the emerging canonical formalism for (open) bidirectionally structured compositional systems.\n\nThe compositional framework of \\emph{open games} \\citep{Bolt2019Bayesian,Ghani2016Compositional} builds on categories of optics to describe systems of motivated interacting agents, but it is substantially more general than needed for classical game theory: generalized open games naturally describe any bidirectionally structured open systems that can be associated with a measure of fitness. Consequently, such generalized open games provide a natural home for a compositional theory of interacting cybernetic systems, and using our notion of \\emph{Bayesian lens}, we characterize a number of canonical statistical models as \\emph{statistical games}.\n\nHowever, mere open games themselves supply no notion of \\emph{dynamics} mediating the interactions. We therefore introduce the concept of \\emph{dynamical realisation} of an open game (Definition \\ref{def:realisation}), as well as a coherence condition that ensures such a realisation behaves as we would expect from a cybernetic system (Definition \\ref{def:cyber-sys}). We use these concepts to show that two prominent frameworks for active inference instantiate such categories of cybernetic systems.\n\n\\paragraph{Acknowledgements} We thank the organizers of \\emph{Applied Category Theory 2020} for the opportunity to present this work, and the anonymous reviewers for helpful comments and questions. 
We also thank Bruno Gavranovi\u0107, Jules Hedges, and Neil Ghani for stimulating and insightful conversations, and credit Jules Hedges for observing the correct form of the Bayesian \\(\\mathsf{update}\\) map in discussions at SYCO 6.\n\n\\section{Bayesian Updates Compose Optically}\n\\label{sec:org69acce1}\n\nWe begin by proving that Bayesian updates compose according to the `lens' pattern \\citep{Foster2007Combinators} that sits at the heart of categories of open games and other `bidirectional' structures. We first show that Bayesian inversions are `vertical' maps in a fibred category of state-dependent channels. The Grothendieck construction of this structure gives a category of lenses. Open games are commonly defined using the more general `optics' pattern \\citep{Bolt2019Bayesian}, and so we also show that, under the Yoneda embedding, our category of lenses is equivalently a category of optics.\n\nThroughout the paper, we work in a general category of stochastic channels; abstractly, this corresponds to a \\emph{Markov category} \\citep{Fritz2019synthetic} or \\emph{copy-delete category} \\citep{Cho2017Disintegration}. Familiar examples of such categories include \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{D})\\), the Kleisli category of the finitely-supported distribution monad \\(\\mathcal{D}\\), and, for `continuous' probabiliy, \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{G})\\), the Kleisli category of the Giry monad. We will write \\(c^\\dag_\\pi := \\bdag{c}(\\pi)\\) to indicate the \\textbf{Bayesian inversion} of the channel \\(c\\) with respect to a state \\(\\pi\\). Then, given some \\(y \\in Y\\), \\(c^\\dag_\\pi (y)\\) is a new `posterior' distribution on X. We will call \\(c^\\dag_\\pi(y)\\) the \\textbf{Bayesian update} of \\(\\pi\\) along \\(c\\) given \\(y\\).\n\nFor a substantially expanded version of this section, including proofs and background exposition with precise definitions of Bayesian inversion, see the author's \\citep{Smithe2020Bayesian}. We will occasionally here refer to definitions or results in that paper.\n\n\\begin{defn}[State-indexed categories] \\label{def:stat-cat}\nLet \\((\\cat{C}, \\otimes, I)\\) be a monoidal category enriched in a Cartesian closed category \\(\\Cat{V}\\). Define the \\(\\cat{C}\\text{-state-indexed}\\) category \\(\\Fun{Stat}: \\cat{C}\\op \\to \\Cat{V{\\hbox{-}} Cat}\\) as follows. \n\\begin{align}\n\\Fun{Stat} \\;\\; : \\;\\; \\cat{C}\\op \\; & \\to \\; \\Cat{V{\\hbox{-}} Cat} \\nonumber \\\\\nX & \\mapsto \\Fun{Stat}(X) := \\quad \\begin{pmatrix*}[l]\n& \\Fun{Stat}(X)_0 & := \\quad \\;\\;\\; \\cat{C}_0 \\\\\n& \\Fun{Stat}(X)(A, B) & := \\quad \\;\\;\\; \\Cat{V}(\\cat{C}(I, X), \\cat{C}(A, B)) \\\\\n\\id_A \\: : & \\Fun{Stat}(x)(A, A) & := \\quad \n\\left\\{ \\begin{aligned}\n\\id_A : & \\; \\cat{C}(I, X) \\to \\cat{C}(A, A) \\\\\n & \\quad\\;\\;\\: \\rho \\quad \\mapsto \\quad \\id_A\n\\end{aligned} \\right. 
\\label{eq:stat} \\\\\n\\end{pmatrix*} \\\\ \\nonumber \\\\\nf : \\cat{C}(Y, X) & \\mapsto \\begin{pmatrix*}[c]\n\\Fun{Stat}(f) \\; : & \\Fun{Stat}(X) & \\to & \\Fun{Stat}(Y) \\vspace*{0.5em} \\\\\n& \\Fun{Stat}(X)_0 & = & \\Fun{Stat}(Y)_0 \\vspace*{0.5em} \\\\\n& \\Cat{V}(\\cat{C}(I, X), \\cat{C}(A, B)) & \\to & \\Cat{V}(\\cat{C}(I, Y), \\cat{C}(A, B)) \\vspace*{0.125em} \\\\\n& \\alpha & \\mapsto & f^\\ast \\alpha : \\big( \\, \\sigma : \\cat{C}(I, Y) \\, \\big) \\mapsto \\big( \\, \\alpha(f \\klcirc \\sigma) : \\cat{C}(A, B) \\, \\big)\n\\end{pmatrix*} \\nonumber\n\\end{align}\nComposition in each fibre \\(\\Fun{Stat}(X)\\) is given by composition in \\(\\cat{C}\\); that is, by the left and right actions of the profunctor \\(\\Fun{Stat}(X)(-, =) : \\cat{C}\\op \\times \\cat{C} \\to \\Cat{V}\\). Explicitly, given \\(\\alpha : \\Cat{V}(\\cat{C}(I, X), \\cat{C}(A, B))\\) and \\(\\beta : \\Cat{V}(\\cat{C}(I, X), \\cat{C}(B, C))\\), their composite is \\(\\beta \\circ \\alpha : \\Cat{V}(\\cat{C}(I, X), \\cat{C}(A, C)) : = \\rho \\mapsto \\beta(\\rho) \\klcirc \\alpha(\\rho)\\). Since \\(\\Cat{V}\\) is Cartesian, there is a canonical copier \\(\\mathord{\\usebox\\sbcopier} : x \\mapsto (x, x)\\) on each object, so we can alternatively write \\((\\beta \\circ \\alpha)(\\rho) = \\big(\\beta(-) \\klcirc \\alpha(-)\\big) \\circ \\mathord{\\usebox\\sbcopier} \\circ \\rho\\). Note that we indicate composition in \\(\\cat{C}\\) by \\(\\klcirc\\) and composition in the fibres \\(\\Fun{Stat}(X)\\) by \\(\\circ\\).\n\\end{defn}\n\n\\begin{ex} \\label{ex:stat-meas}\nLet \\(\\Cat{V} = \\Cat{Meas}\\) be a `convenient' (\\emph{i.e.}, Cartesian closed) category of measurable spaces, such as the category of quasi-Borel spaces \\citep{Heunen2017Convenient}, let \\(\\mathcal{P} : \\Cat{Meas} \\to \\Cat{Meas}\\) be a probability monad defined on this category, and let \\(\\cat{C} = \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})\\) be the Kleisli category of this monad. Its objects are the objects of \\(\\Cat{Meas}\\), and its hom-spaces \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(A, B)\\) are the spaces \\(\\Cat{Meas}(A, \\mathcal{P} B)\\) \\citep{Fritz2019synthetic}. This \\(\\cat{C}\\) is a monoidal category of stochastic channels, whose monoidal unit \\(I\\) is the space with a single point. Consequently, states of \\(X\\) are just measures (distributions) in \\(\\mathcal{P} X\\). That is, \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(I, X) \\cong \\Cat{Meas}(1, \\mathcal{P} X)\\). Instantiating \\(\\Fun{Stat}\\) in this setting, we obtain:\n\\begin{align}\n\\Fun{Stat} \\;\\; : \\;\\; \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})\\op \\; & \\to \\; \\Cat{V{\\hbox{-}} Cat} \\nonumber \\\\\nX & \\mapsto \\Fun{Stat}(X) := \\quad \\begin{pmatrix*}[l]\n& \\Fun{Stat}(X)_0 & := \\quad \\;\\;\\; \\Cat{Meas}_0 \\\\\n& \\Fun{Stat}(X)(A, B) & := \\quad \\;\\;\\; \\Cat{Meas}(\\mathcal{P} X, \\Cat{Meas}(A, \\mathcal{P} B)) \\\\\n\\id_A \\: : & \\Fun{Stat}(X)(A, A) & := \\quad\n\\left\\{ \\begin{aligned}\n\\id_A : & \\; \\mathcal{P} X \\to \\Cat{Meas}(A, \\mathcal{P} A) \\\\\n & \\;\\;\\; \\rho \\;\\;\\, \\mapsto \\quad \\eta_A\n\\end{aligned} \\right. 
\\label{eq:stat-kl-d} \\\\\n\\end{pmatrix*} \\\\\nc : \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(Y, X) & \\mapsto \\Fun{Stat}(c) \\, := \\hfill\\nonumber\n\\end{align}\n\\begin{equation*}\n\\begin{pmatrix*}[c]\n\\Fun{Stat}(c) \\; : &\\Fun{Stat}(X) & \\to & \\Fun{Stat}(Y) \\vspace*{0.5em} \\\\\n& \\Fun{Stat}(X)_0 & = & \\Fun{Stat}(Y)_0 \\vspace*{0.5em} \\\\\n& \\begin{pmatrix*}[l]\n d^\\dag : & \\mathcal{P} X & \\to \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(A, B) \\\\\n & \\; \\pi & \\mapsto \\quad \\quad d^\\dag_\\pi\n \\end{pmatrix*}\n & \\mapsto &\n \\begin{pmatrix*}\n c^\\ast d^\\dag : \\mathcal{P} Y \\to \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(A, B) \\\\\n \\rho \\quad \\mapsto \\quad d^\\dag_{c \\klcirc \\rho}\n \\end{pmatrix*}\n\\end{pmatrix*} \\nonumber\n\\end{equation*}\nEach \\(\\Fun{Stat}(X)\\) is a category of stochastic channels with respect to measures on the space \\(X\\). We can write morphisms \\(d^\\dag : \\mathcal{P} X \\to \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(A, B)\\) in \\(\\Fun{Stat}(X)\\) as \\(d^\\dag_{(\\cdot)} : A \\xklto{(\\cdot)} B\\), and think of them as generalized Bayesian inversions: given a measure \\(\\pi\\) on \\(X\\), we obtain a channel \\(d^\\dag_\\pi : A \\xklto{\\pi} B\\) with respect to \\(\\pi\\). Given a channel \\(c : Y \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) in the base category of priors, we can pull \\(d^\\dag\\) back along \\(c\\), to obtain a \\(Y\\text{-dependent}\\) channel in \\(\\Fun{Stat}(Y)\\), \\(c^\\ast d^\\dag : \\mathcal{P} Y \\to \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(A, B)\\), which takes \\(\\rho : \\mathcal{P} Y\\) to the channel \\(d^\\dag_{c \\klcirc \\rho} : A \\xklto{c \\klcirc \\rho} B\\) defined by pushing \\(\\rho\\) through \\(c\\) and then applying \\(d^\\dag\\).\n\\end{ex}\n\n\\begin{rmk}\nNote that by taking \\(\\Cat{Meas}\\) to be Cartesian closed, we have \\(\\Cat{Meas}(\\mathcal{P} X, \\Cat{Meas}(A, \\mathcal{P} B)) \\cong \\Cat{Meas}(\\mathcal{P} X \\times A, \\mathcal{P} B)\\) for each \\(X\\), \\(A\\) and \\(B\\), and so a morphism \\(c^\\dag : \\mathcal{P} Y \\to \\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{P})(X, Y)\\) equivalently has the type \\(\\mathcal{P} Y \\times X \\to \\mathcal{P} Y\\). Paired with a channel \\(c : Y \\to \\mathcal{P} X\\), we have something like a Cartesian lens; and to compose such pairs, we can use the Grothendieck construction \\citep{nLab2020Grothendieck,Spivak2019Generalized}.\n\\end{rmk}\n\n\\begin{defn}[$\\Cat{GrLens}_{\\Fun{Stat}}$] \\label{def:stat-lens}\nInstantiating the category of Grothendieck \\(F\\text{-lenses } \\Cat{GrLens}_F\\) (see \\citep{Spivak2019Generalized})\nwith \\(F = \\Fun{Stat} : \\cat{C}\\op \\to \\Cat{V{\\hbox{-}} Cat}\\), we obtain the category \\(\\Cat{GrLens}_\\Fun{Stat}\\) whose objects are pairs \\((X, A)\\) of objects of \\(\\cat{C}\\) and whose morphisms \\((X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, B)\\) are elements of the set\n\\begin{equation}\n\\Cat{GrLens}_\\Fun{Stat} \\big( (X, A), (Y, B) \\big) \\cong \\cat{C}(X, Y) \\times \\Cat{V} \\big( \\cat{C}(I, X), \\cat{C}(B, A) \\big) \\, .\n\\end{equation}\nThe identity \\(\\Fun{Stat}\\text{-lens}\\) on \\((Y, A)\\) is \\((\\id_Y, \\id_A)\\), where by abuse of notation \\(\\id_A : \\cat{C}(I, Y) \\to \\cat{C}(A, A)\\) is the constant map \\(\\id_A\\) defined in \\eqref{eq:stat} that takes any state on \\(Y\\) to the identity on \\(A\\). 
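As a brief computational aside, the Bayesian inversions that these state-dependent channels package can be made concrete in the finite case. The following sketch (Python/NumPy; the helper name and the two-point example are illustrative choices of ours, not part of the formalism) computes the Bayesian update of a prior along a channel for every possible observation, which is exactly the data of a map from states on the domain to channels running backwards.
\begin{verbatim}
import numpy as np

def bayes_invert(c, pi):
    """Bayesian update of the prior pi along the channel c.

    c  : (nx, ny) row-stochastic array, c[x, y] = Pr(y | x)
    pi : (nx,) prior on X
    Returns an (ny, nx) array whose row y is the posterior given y;
    rows with zero evidence are left uniform (the almost-sure caveat).
    """
    joint = pi[:, None] * c              # joint[x, y] = pi(x) c(y | x)
    evidence = joint.sum(axis=0)         # pushforward of pi through c
    post = np.full((c.shape[1], c.shape[0]), 1.0 / c.shape[0])
    ok = evidence > 0
    post[ok] = (joint[:, ok] / evidence[ok]).T
    return post

pi = np.array([0.7, 0.3])                # a prior on X = {0, 1}
c = np.array([[0.9, 0.1],                # a channel X -> Y
              [0.2, 0.8]])
print(bayes_invert(c, pi))               # one posterior on X per observation y
\end{verbatim}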
The sequential composite of \\((c, c^\\dag) : (X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, B)\\) and \\((d, d^\\dag) : (Y, B) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Z, C)\\) is the \\(\\Fun{Stat}\\text{-lens } \\big( (d \\klcirc c), (c^\\dag \\circ c^\\ast d^\\dag) \\big) : (X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Z, C)\\) with \\((d \\klcirc c) : \\cat{C}(X, Z)\\) and where \\((c^\\dag \\circ c^\\ast d^\\dag) : \\Cat{V}\\big(\\cat{C}(I, X), \\cat{C}(C, A)\\big)\\) takes a state \\(\\pi : \\cat{C}(I, X)\\) on \\(X\\) to the channel \\(c^\\dag_{\\pi} \\klcirc \\d^\\dag_{c \\klcirc \\pi}\\). If we think of the notation \\((\\cdot)^\\dag\\) as denoting the operation of forming the Bayesian inverse of a channel (in the case where \\(A = X\\), \\(B = Y\\) and \\(C = Z\\)), then the main result of this section is to show that \\((d \\klcirc c)^\\dag_\\pi \\overset{d \\klcirc c \\klcirc \\pi}{\\sim} c^\\dag_{\\pi} \\klcirc \\d^\\dag_{c \\klcirc \\pi}\\), where \\(\\overset{d \\klcirc c \\klcirc \\pi}{\\sim}\\) denotes \\((d \\klcirc c \\klcirc \\pi)\\text{-almost-equality}\\) \\citep[Definition 2.5]{Smithe2020Bayesian}.\n\\end{defn}\n\nIn order to give an optical form for \\(\\Cat{GrLens}_\\Fun{Stat}\\), we need to find two \\(\\cat{M}\\text{-actegories}\\) with a common category of actions \\(\\cat{M}\\). Let \\(\\hat{\\cat{C}}\\) and \\(\\check{\\cat{C}}\\) denote the categories \\(\\hat{\\cat{C}} := \\Cat{V{\\hbox{-}} Cat}(\\cat{C}\\op, \\Cat{V})\\) and \\(\\check{\\cat{C}} := \\Cat{V{\\hbox{-}} Cat}(\\cat{C}, \\Cat{V})\\) of presheaves and copresheaves on \\(\\cat{C}\\), and consider the following natural isomorphisms.\n\\begin{align}\n\\Cat{GrLens}_\\Fun{Stat} \\big( (X, A), (Y, B) \\big) & \\cong \\cat{C}(X, Y) \\times \\Cat{V} \\big( \\cat{C}(I, X), \\cat{C}(B, A) \\big) \\nonumber \\\\\n& \\cong \\int^{M \\, : \\, \\cat{C}} \\cat{C}(X, Y) \\times \\cat{C}(X, M) \\times \\Cat{V}\\big(\\cat{C}(I, M), \\cat{C}(B, A)\\big) \\nonumber \\\\\n& \\cong \\int^{\\hat{M} \\, : \\, \\hat{\\cat{C}}} \\cat{C}(X, Y) \\times \\hat{M}(X) \\times \\Cat{V}\\big(\\hat{M}(I), \\cat{C}(B, A)\\big) \\label{eq:stat-lens-coend}\n\\end{align}\nThe second isomorphism follows by Yoneda reduction \\citep{Loregian2015This,Roman2020Profunctor}, and the third follows by the Yoneda lemma. We take \\(\\cat{M}\\) to be \\(\\cat{M} := \\hat{\\cat{C}}\\), and define an action \\(\\odot\\) of \\(\\hat{\\cat{C}}\\) on \\(\\check{\\cat{C}}\\) as follows.\n\\begin{defn}[$\\odot$]\nWe give only the action on objects; the action on morphisms is analogous.\n\\begin{equation} \\label{eq:L-action}\n\\begin{aligned}\n\\odot : \\hat{\\cat{C}} & \\to \\Cat{V{\\hbox{-}} Cat}(\\check{\\cat{C}}, \\check{\\cat{C}}) \\\\\n\\hat{M} & \\mapsto\n \\begin{pmatrix*}\n \\hat{M} \\odot - & : & \\check{\\cat{C}} & \\to & \\check{\\cat{C}} \\\\\n & & P & \\mapsto & \\Cat{V}\\big( \\hat{M}(I), P \\big)\n \\end{pmatrix*}\n\\end{aligned}\n\\end{equation}\nFunctoriality of \\(\\odot\\) follows from the functoriality of copresheaves. 
\\qed\n\\end{defn}\n\n\\begin{prop} \\label{prop:stat-actegory}\n\\(\\odot\\) equips \\(\\check{\\cat{C}}\\) with a \\(\\hat{\\cat{C}}\\text{-actegory}\\) structure: unitor isomorphisms \\(\\lambda^{\\odot}_F : 1 \\odot F \\xto{\\sim} F\\) and associator isomorphisms \\(a^{\\odot}_{\\hat{M}, \\hat{N}, F} : (\\hat{M} \\times \\hat{N}) \\odot F \\xrightarrow{\\sim} \\hat{M} \\odot (\\hat{N} \\odot F)\\) for each \\(\\hat{M},\\hat{N}\\) in \\(\\check{\\cat{C}}\\), both natural in \\(F : \\Cat{V{\\hbox{-}} Cat}(\\cat{C}, \\Cat{V})\\).\n\\end{prop}\n\nWe are now in a position to define the category of abstract Bayesian lenses, and show that this category coincides with the category of \\(\\Fun{Stat}\\text{-lenses}\\).\n\\begin{defn}[Bayesian lenses]\nDenote by \\(\\Cat{BayesLens}\\) the category of optics \\(\\Cat{Optic}_{\\times, \\odot}\\) for the action of the Cartesian product on presheaf categories \\(\\times : \\hat{\\cat{C}} \\to \\Cat{V{\\hbox{-}} Cat}(\\hat{\\cat{C}}, \\hat{\\cat{C}})\\) and the action \\(\\odot : \\hat{\\cat{C}} \\to \\Cat{V{\\hbox{-}} Cat}(\\check{\\cat{C}}, \\check{\\cat{C}})\\) defined in \\eqref{eq:L-action}. Its objects \\((\\hat{X}, \\check{Y})\\) are pairs of a presheaf and a copresheaf on \\(\\cat{C}\\), and its morphisms \\((\\hat{X}, \\check{A}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (\\hat{Y}, \\check{B})\\) are abstract \\emph{Bayesian lenses}---elements of the type\n\\begin{equation}\n\\Cat{Optic}_{\\times, \\odot}\\Big((\\hat{X}, \\check{A}), (\\hat{Y}, \\check{B})\\Big)\n= \\int^{\\hat{M} \\, : \\, \\hat{\\cat{C}}} \\hat{\\cat{C}}(\\hat{X}, \\hat{M} \\times \\hat{Y}) \\times \\check{\\cat{C}}(\\hat{M} \\odot \\check{B}, \\check{A})\n\\end{equation}\nGiven \\(v : \\cat{C}(X, Y)\\) and \\(u : \\Cat{V}(\\cat{C}(I, X), \\cat{C}(B, A))\\), we denote the corresponding element of this type by \\(\\optar{v}{u}\\). A Bayesian lens \\((\\hat{X}, \\check{X}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (\\hat{Y}, \\check{Y})\\) is called a \\textbf{simple} Bayesian lens.\n\\end{defn}\n\n\\begin{prop} \\label{prop:bayeslens-are-lenses}\n\\(\\Cat{BayesLens}\\) is a category of lenses; a definition is given in \\citep[\u00a72.2.1]{Smithe2020Bayesian}.\n\\end{prop}\n\n\\begin{prop}[$\\Fun{Stat}\\text{-lenses}$ are Bayesian lenses] \\label{prop:stat-lens-bayeslens}\nLet \\(\\hat{(\\cdot)} : \\cat{C} \\hookrightarrow \\Cat{V{\\hbox{-}} Cat}(\\cat{C}\\op, \\Cat{V})\\) denote the Yoneda embedding and \\(\\check{(\\cdot)} : \\cat{C} \\hookrightarrow \\Cat{V{\\hbox{-}} Cat}(\\cat{C}, \\Cat{V})\\) the coYoneda embedding. Then\n\\begin{equation}\n\\Cat{Optic}_{\\times, \\odot}\\Big((\\hat{X}, \\check{A}), (\\hat{Y}, \\check{B})\\Big)\n\\cong\n\\Cat{GrLens}_\\Fun{Stat} \\Big( (X, A), (Y, B) \\Big)\n\\end{equation}\nso that \\(\\Cat{GrLens}_\\Fun{Stat}\\) is equivalent to the full subcategory of \\(\\Cat{Optic}_{\\times, \\odot}\\) on representable (co)presheaves.\n\\end{prop}\n\n\\begin{rmk}\nWe will often abuse notation by indicating representable objects in \\(\\Cat{BayesLens}\\) by their representations in \\(\\cat{C}\\). That is, we will write \\((X, A)\\) instead of \\((\\hat{X}, \\check{A})\\) where this would be unambiguous.\n\\end{rmk}\n\n\\begin{prop} \\label{prop:bayeslens-smc}\n\\(\\Cat{BayesLens}\\) is a symmetric monoidal category. The monoidal product \\(\\otimes\\) is inherited from \\(\\cat{C}\\); the unit object is the pair \\((I, I)\\) where \\(I\\) is the unit object in \\(\\cat{C}\\). 
For more details on the structure, see \\citep{Riley2018Categories} or \\citep{Moeller2018Monoidal}.\n\\end{prop}\n\n\\begin{defn}[Exact and approximate Bayesian lens]\nLet \\(\\optar{c}{c^\\dag} : (X, X) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, Y)\\) be a simple Bayesian lens. We say that \\(\\optar{c}{c^\\dag}\\) is \\textbf{exact} if \\(c\\) admits Bayesian inversion and, for each \\(\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) such that \\(c \\klcirc \\pi\\) has non-empty support, \\(c^\\dag_\\pi\\) is the Bayesian inversion of \\(c\\) with respect to \\(\\pi\\). Simple Bayesian lenses that are not exact are said to be \\textbf{approximate}.\n\\end{defn}\n\n\\begin{lemma} \\label{lemma:optical-bayes}\nLet \\(\\optar{c}{c^\\dag}\\) and \\(\\optar{d}{d^\\dag}\\) be sequentially composable exact Bayesian lenses. Then the contravariant component of the composite lens \\(\\optar{d}{d^\\dag} \\lenscirc \\optar{c}{c^\\dag} \\cong \\optar{d \\klcirc c}{c^\\dag \\circ c^\\ast d^\\dag}\\) is, up to \\(d \\klcirc c \\klcirc \\pi \\text{-almost-}\\allowbreak\\text{equality}\\), the Bayesian inversion of \\(d \\klcirc c\\) with respect to any state \\(\\pi\\) on the domain of \\(c\\) such that \\(c \\klcirc \\pi\\) has non-empty support. That is to say, \\emph{Bayesian updates compose optically}: \\((d \\klcirc c)^\\dag_\\pi \\overset{d \\klcirc c \\klcirc \\pi}{\\sim} c^\\dag_\\pi \\klcirc d^\\dag_{c \\klcirc \\pi}\\).\n\\end{lemma}\n\n\\section{Open Games for General Optics}\n\\label{sec:org675467e}\n\\label{sec:games}\n\nIn this section, we supply mild generalizations of the structures underlying open games, building on those in \\citep{Bolt2019Bayesian}; at first, then, we consider games over arbitrary categories of optics \\(\\Cat{Optic}_{\\circR, \\circL}\\). Subsequently, we use games over Bayesian lenses (in the category of optics \\(\\Cat{BayesLens}\\) introduced above) to exemplify a number of canonical statistical concepts, such as maximum likelihood estimation and the variational autoencoder, and clarify their compositional structure using the notion of \\emph{optimization game} (Definition \\ref{def:opt-game}). Owing to space constraints, we omit most proofs in this section; they will appear in a full paper expanding the present abstract, and can be supplied at the request of the reader.\n\n\\begin{obs}\nIn the graphical calculus for the compact closed bicategory of profunctors \\(\\Cat{Prof}\\) \\citep{Roman2020Open}, the hom object \\(\\Cat{Optic}_{\\circR, \\circL}((X, A), (Y, B))\\) has the depiction\n\\[\n\\tikzfig{img\/optic-RL-XA-YB}\n\\]\nwhere the types on the wires are the 0-cells of \\(\\Cat{Prof}\\), the monoidal actions \\(\\circR\\) and \\(\\circL\\) are depicted as (co)monoids, and the states and effects are (co)representable functors on the objects \\(X,A,Y,B\\), treated as profunctors.\n\\end{obs}\n\n\\begin{defn}[Generalized context] \\label{def:ctx}\nThe context functor \\(C : \\Cat{Optic}_{\\circR, \\circL}\\op \\times \\Cat{Optic}_{\\circR, \\circL} \\to \\Set\\) takes the pair of optical objects \\(((X, A), (Y, B))\\) to the type with depiction\n\\[\n\\tikzfig{img\/context-RL-XA-YB}\n\\]\nThe triangles depict the (co)presheaves on the monoidal unit \\(I\\) in the underlying actegories. The action on morphisms (\\emph{i.e.}, optics) is by precomposition on the left and postcomposition on the right. Functoriality follows accordingly. 
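The lemma just stated, that Bayesian updates compose optically, can be checked numerically in the finite, full-support case, where the almost-sure equality becomes an exact matrix identity. The sketch below (Python/NumPy; the channel sizes, random seed, and helper names are illustrative choices of ours) inverts a composite channel directly and compares it with the composite of the two inversions taken at the appropriate priors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def normalize(m):
    return m / m.sum(axis=-1, keepdims=True)

def invert(c, pi):
    # c[x, y] = Pr(y | x); returns post[y, x] = Pr(x | y) under the prior pi
    joint = pi[:, None] * c
    return normalize(joint.T)

nx, ny, nz = 3, 4, 5
pi = normalize(rng.random(nx))           # prior on X
c = normalize(rng.random((nx, ny)))      # channel c : X -> Y
d = normalize(rng.random((ny, nz)))      # channel d : Y -> Z

dc = c @ d                               # composite channel X -> Z
lhs = invert(dc, pi)                     # inversion of d o c at pi

d_dag = invert(d, pi @ c)                # inversion of d at the pushforward of pi
c_dag = invert(c, pi)                    # inversion of c at pi
rhs = d_dag @ c_dag                      # their composite, Z -> X

print(np.allclose(lhs, rhs))             # True, up to floating point
\end{verbatim}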
\\qed\n\nWe can compose a context with an optic to obtain a `closed' system, as follows:\n\\[\n\\tikzfig{img\/closed-RL-XA-YB} \\mapsto \n\\tikzfig{img\/closed-RL-XA-YB-composed}\n\\]\n\\end{defn}\n\n\\begin{conjecture} \\label{conj:doubling}\nIt is easy to show that a context on \\(((X,A),(Y,B))\\) is equivalently a state \\((I, I) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} ((X,A),(Y,B))\\) in the monoidal category of `double lenses', \\(\\Cat{Lens}_{\\Cat{Optic}_{\\circR, \\circL}}\\) \\citep{Bolt2019Bayesian}. Rendering this graphically leads us to the following conjecture: categories of double optics are instances of the \\emph{doubling} or \\emph{CP} construction from categorical quantum mechanics (\\emph{cf.} \\citep{Coecke2015Categorical,Coecke2016Categorical}).\n\\end{conjecture}\n\n\\begin{prop} \\label{prop:ctx-nice}\nLet \\(\\cat{C}\\) and \\(\\cat{D}\\) be the (monoidal) actegories underlying \\(\\Cat{Optic}_{\\circR, \\circL}\\), and denote their respective monoidal units by \\(I_{\\cat{C}}\\) and \\(I_{\\cat{D}}\\). If these unit objects are terminal in their respective categories, then the contexts \\(C((X, A), (Y, B))\\) simplify to\n\\[\n\\tikzfig{img\/context-terminal}\n\\]\nwhere we have depicted the representable presheaf on \\(I_{\\cat{D}}\\) as \\(\\mathord{\\usebox\\sbground}\\) to indicate that \\(A\\) is just discarded. Consequently, in this case, a context is just an optic \\((I, B) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, Y)\\).\n\n\\end{prop}\n\n\\begin{defn}[Generalized open game] \\label{def:open-game}\nLet \\((X, A)\\) and \\((Y, B)\\) be objects in any symmetric monoidal category of optics \\(\\Cat{Optic}_{\\circR, \\circL}\\). Let \\(\\Sigma\\) be a \\(\\cat{U}\\text{-category}\\), for any base of enrichment \\(\\cat{U}\\) such that \\(\\cat{U}\\text{-}\\Cat{Prof}\\) is compact closed. An \\textbf{open game} from \\((X, A)\\) to \\((Y, B)\\) with strategies in \\(\\Sigma\\), denoted \\(G : (X, A) \\xto{\\Sigma} (Y, B)\\), is given by:\n\\begin{enumerate}\n\\item a play function \\(P : \\Sigma_0 \\to \\Cat{Optic}_{\\circR, \\circL}((X, A), (Y, B))\\); and\n\\item a best response function \\(B : C((X, A), (Y, B)) \\to \\cat{U}\\text{-}\\Cat{Prof}(\\Sigma, \\Sigma)\\).\n\\end{enumerate}\nGiven a strategy \\(\\sigma : \\Sigma\\), we will often write \\(\\optar{v}{u}_\\sigma\\) or similar to denote its image under \\(P\\). 
A strategy is an \\textbf{equilibrium} in a context \\(\\optar{\\pi}{k}\\) if it is a fixed point of \\(B(\\optar{\\pi}{k})\\).\n\\end{defn}\n\nRoughly speaking, the `best responses' to a strategy \\(\\sigma\\) in a context is are those strategies \\(\\tau\\) such that choosing \\(\\tau\\) would result in performance at the game at least as good as choosing \\(\\sigma\\); equilibrium strategies are those for which such deviation would not improve performance.\n\n\\begin{rmk} \\label{rmk:b-relt}\n\nNote: whereas classic open games use a best-response relation, we categorify that here to a best-response \\emph{relator} (in the terminology of \\citep{Loregian2015This}; \\emph{i.e.}, a `proof-relevant' relation), so that we can describe the trajectories witnessing the computation of equilibria, rather than their mere existence.\n\\end{rmk}\n\n\\begin{prop} \\label{prop:cat-open-games}\nGeneralized open games over the symmetric monoidal category of optics \\(\\Cat{Optic}_{\\circR, \\circL}\\) with strategies enriched in \\(\\cat{U}\\) form a symmetric monoidal category denoted \\(\\Cat{Game}(\\cat{U}, \\circR, \\circL)\\).\n\\end{prop}\n\nSince our games are only a mild generalization of those of \\citep{Bolt2019Bayesian}, we refer the reader to \u00a73.10 of that paper for an idea of the proof of the foregoing proposition, which goes through analogously. The sequential composition of games is given by the sequential composition of optics, with the best response to the composite being the product of the best responses to the factors. Similarly, parallel composition is given by the monoidal product of optics, and the best response to the composite is again the product of the best responses to the factors.\n\nWe now consider some games over \\(\\Cat{BayesLens}\\) that supply the building blocks of the archetypal cybernetic systems to be considered in \\secref{sec:cyber-sys}. For now, we will take the strategies simply to be discrete categories (\\emph{i.e.}, sets), as in the standard formulation of open games. Consequently, we will take the codomain of the best response function to be \\(\\Set(\\Sigma, \\Set(\\Sigma, 2))\\), for each strategy type \\(\\Sigma\\). We assume the ambient category of stochastic channels is semicartesian, so that the monoidal unit is the terminal object.\n\n\\begin{rmk}\nAll the games we will consider henceforth will have play functions whose codomains restrict to the representable subcategory \\(\\Cat{GrLens}_\\Fun{Stat}\\) of \\(\\Cat{BayesLens}\\); in this work, we do not use the extra generality afforded by \\(\\Cat{BayesLens}\\), except insofar as it grants us the use of string diagrams in \\(\\Cat{Prof}\\), which we find helpful for reasoning intuitively about these systems. The generality of optics \\emph{is} however used in the `game-theoretic' games of \\citep{Bolt2019Bayesian}, and in future work we hope to relate the cybernetic systems of this paper to the game-theoretic setting of that earlier work.\n\\end{rmk}\n\n\\begin{rmk} \\label{rmk:atomic-games}\nAll the statistical games considered in this paper will be `atomic' in the sense of \\citep{Bolt2019Bayesian}: in particular, the best response functions we consider will be constant, meaning that, in any context, the set of best strategies does not depend on the `current' choice of strategy. 
Permitting such dependence will be important in future work, however, when we consider how cybernetic systems interact, and hence respond to each other.\n\\end{rmk}\n\n\\begin{ex} \\label{ex:ml-game}\nA Bayesian lens of the form \\((I, I) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\) is fully specified by a state \\(\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\). A context for such a lens is given by a lens \\(\\optar{!}{k} : (I, X) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\) where \\(! : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} I\\) is the unique map and \\(k : X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) is any endochannel on \\(X\\). A \\textbf{maximum likelihood game} is any game whose play function has codomain in Bayesian lenses of this form \\((I, I) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\) for any \\(X : \\cat{C}\\), and whose best response function is isomorphic to\n\\[\nB(\\optar{!}{k}) = \\optar{\\rho}{!}_\\sigma \\mapsto \\left\\{ \\optar{\\pi}{!}_\\tau \\middle| \\pi \\in \\underset{\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X}{\\arg\\max} \\E_{k \\klcirc \\pi} \\left[ \\pi \\right] \\right\\}\n\\]\nwhere \\(\\E\\) is the canonical expectation operator (\\emph{i.e.} algebra evaluation) associated to states in \\(\\cat{C}\\), and where we have written \\(\\optar{\\rho}{!}_\\sigma\\) and \\(\\optar{\\pi}{!}_\\tau\\) to denote the images of the strategies \\(\\sigma\\) and \\(\\tau\\) under the play function. Intuitively, then, the best response is given by the strategy that maximises the likelihood of the state obtained from the context \\(k\\).\n\\end{ex}\n\n\\begin{rmk}\nIn what follows, we assume that the underlying category \\(\\cat{C}\\) of stochastic channels \\emph{admits density functions}. Informally, a density function for a stochastic channel \\(c : X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Y\\) is a measurable function \\(p_c : Y \\times X \\to [0, 1]\\) whose values are the probabilities (or probability densities) \\(p_c(y | x)\\) at each pair \\((y, x) : Y \\times X\\). We say that the value \\(p_c(y | x)\\) is the probability (or probability density) of \\(y\\) \\emph{given} \\(x\\). In a category such as \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{D}_{\\leq 1})\\), whose objects are sets and whose morphisms \\(X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Y\\) are functions \\(X \\to \\mathcal{D}(Y + 1)\\), a density function for \\(c : X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Y\\) is a morphism \\(Y \\otimes X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} I\\); note that in \\(\\mathcal{K}\\mspace{-2mu}\\ell(\\mathcal{D}_{\\leq 1})\\), \\(I\\) is not terminal. In the finitely-supported case, density functions are effectively equivalent to channels, but this is not the case in the continuous setting, where they are of most use. For more on this, see \\citep[\u00a72.1.4]{Smithe2020Bayesian}.\n\\end{rmk}\n\nA natural first generalization of maximum likelihood games takes us from states \\(I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) to channels \\(Z \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\); that is, from `elements' to `generalized elements' in the covariant (forwards) part of the lens. 
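Before doing so, it may help to see the maximum likelihood game above transcribed directly into code for a finite state space. In the sketch below (Python/NumPy), the restriction to a finite family of candidate priors and the particular continuation are simplifications of our own; the game itself optimizes over all states. The best response simply maximizes the expectation of the candidate's density under its own pushforward through the continuation, as in the formula above.
\begin{verbatim}
import numpy as np

def best_response_ml(candidates, k):
    """Best response of a finite maximum likelihood game.

    candidates : list of candidate priors pi on X (1-d arrays)
    k          : (nx, nx) row-stochastic array, the continuation X -> X
    Returns the candidate(s) maximizing the expectation of pi under k o pi.
    """
    def score(pi):
        push = pi @ k                    # the state obtained from the context
        return float(push @ pi)          # expectation of pi's density under it
    scores = [score(pi) for pi in candidates]
    best = max(scores)
    return [pi for pi, s in zip(candidates, scores) if np.isclose(s, best)]

k = np.array([[0.8, 0.2], [0.1, 0.9]])   # an arbitrary continuation
candidates = [np.array([p, 1 - p]) for p in (0.1, 0.5, 0.9)]
print(best_response_ml(candidates, k))
\end{verbatim}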
Unlike Bayesian lenses \\((I, I) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\), lenses \\((Z, Z) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\) admit nontrivial contravariant components, which we think of as generalized Bayesian inversions. Consequently, our first generalization is a notion of `Bayesian inference game'. A context \\(\\optar{\\pi}{k} : (I, X) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Z, X)\\) for a Bayesian lens \\((Z, Z) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\) then constitutes a `prior' state \\(\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Z\\) and a `continuation' channel \\(k : X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) which together witness the closure of the otherwise open system.\n\n\\begin{ex} \\label{ex:simp-inf-game}\nFix a channel \\(c : Z \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\) with associated density function \\(p_c : X \\times Z \\to {\\mathbb{R}}_+\\) and a measure of divergence between states on \\(Z\\), \\(D : \\cat{C}(I, Z) \\times \\cat{C}(I, Z) \\to {\\mathbb{R}}\\). A corresponding (generalized) \\textbf{simple Bayesian inference game} is any game whose play function has codomain \\(\\Cat{BayesLens}((Z, Z), (X, X))\\) and whose best response function is isomorphic to\n\\begin{align*}\nB(\\optar{\\pi}{k}) = \\optar{d}{d'}_\\sigma & \\mapsto\n\\Bigg\\{ \\optar{c}{c'}_\\tau \\bigg| \nc' \\in \\underset{c' : \\Cat{V}(\\cat{C}(I, Z), \\, \\cat{C}(X, Z))}{\\arg\\min} \\E_{x \\sim k \\klcirc c \\klcirc \\pi} \\bigg[ \\E_{z \\sim c'_\\pi(x)} \\left[ - \\log p_c(x | z) \\right]\n + D(c'_\\pi(x), \\pi) \\bigg] \\Bigg\\} \\\\\n= \\optar{d}{d'}_\\sigma & \\mapsto\n\\Bigg\\{ \\optar{c}{c'}_\\tau \\bigg| \nc' \\in \\underset{c' : \\Cat{V}(\\cat{C}(I, Z), \\, \\cat{C}(X, Z))}{\\arg\\min} \\bigg( \\E_{z \\sim c'_\\pi \\klcirc k \\klcirc c \\klcirc \\pi} \\left[ - \\int_X \\log p_c(\\d k \\klcirc c \\klcirc \\pi | z) \\right] \\\\\n& \\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n + D(c'_\\pi \\klcirc k \\klcirc c \\klcirc \\pi, \\pi) \\bigg) \\Bigg\\}\n\\end{align*}\nwhere \\(\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Z\\) and \\(k : X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\), and where the notation \\(z \\sim \\pi\\) means ``\\(z\\) distributed according to the state \\(\\pi\\)''. Note that the second line follows from the first by linearity of expectation.\n\\end{ex}\n\n\\begin{prop}[{\\citep[Thm. 1]{Knoblauch2019Generalized}}]\nWhen \\(D\\) is chosen to be the Kullback-Leibler divergence \\(D_{KL}\\), minimizing the objective function defining a simple Bayesian inference game is equivalent to computing an (exact) Bayesian inversion.\n\\end{prop}\n\n\\begin{cor}\nGiven two Bayesian inference games \\(G : (Z, Z) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, Y)\\) and \\(H : (Y, Y) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\), we can compose them sequentially to obtain a game \\(H \\lenscirc G : (Z, Z) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (X, X)\\), which we will call a \\textbf{hierarchical Bayesian inference game}. 
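As a parenthetical numerical sanity check of the preceding proposition: for a single observation on finite spaces, the objective above with the Kullback-Leibler divergence is the familiar variational free energy, and it is minimized exactly by the Bayesian posterior. The sketch below (Python/NumPy; the sizes, seed, and helper names are our own) verifies this pointwise; the game's objective is the average of this quantity over observations drawn through the continuation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / v.sum()

nz, nx = 3, 4
prior = normalize(rng.random(nz))            # prior pi on Z
c = rng.random((nz, nx))
c /= c.sum(axis=1, keepdims=True)            # channel c : Z -> X, density p_c(x | z)

def free_energy(q, x):
    """Expected negative log-likelihood plus KL(q, prior), for one observation x."""
    return float(q @ (-np.log(c[:, x])) + q @ np.log(q / prior))

x = 2
exact = normalize(prior * c[:, x])           # exact Bayesian posterior for this x
evidence = float(prior @ c[:, x])

print(np.isclose(free_energy(exact, x), -np.log(evidence)))   # True
for _ in range(5):                           # other states do no better
    q = normalize(rng.random(nz))
    assert free_energy(q, x) >= free_energy(exact, x) - 1e-12
print("the exact posterior minimizes the objective")
\end{verbatim}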
It is then an immediate consequence of Lemma \\ref{lemma:optical-bayes} that, in any given context for which the forwards channels admit Bayesian inversion, the best response to the composite game \\(H \\lenscirc G\\) (that is, the optimal inversion of the composite channel) is given simply by (the composition of) the best responses to the factors \\(H\\) and \\(G\\). Consequently, Bayesian inference games are closed under composition.\n\\end{cor}\n\nSimilarly, given a channel \\(c : Z \\otimes Y \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\), we can consider the \\textbf{marginal Bayesian inference game} in which the objective is to compute the inversion of the channel onto just one of the factors \\(Z\\) or \\(Y\\) in the domain.\n\n\\begin{ex}[Variational autoencoder game] \\label{ex:vae-game}\nFix a family \\(\\mathcal{F} \\hookrightarrow \\cat{C}(Z, X)\\) of forward channels and a family \\(\\mathcal{P} \\hookrightarrow \\cat{C}(X, Z)\\) of backward channels such that each \\(c : \\mathcal{F}\\) admits a density function \\(p_c : X \\otimes Z \\to {\\mathbb{R}}_+\\) and each \\(d : \\mathcal{P}\\) admits a density function \\(q : Z \\otimes X \\to {\\mathbb{R}}_+\\); think of these families as determining parameterizations of the channels. We take our strategy type to be \\(\\Sigma = \\mathcal{F} \\times \\mathcal{P}\\). A \\textbf{simple variational autoencoder game} \\((Z,Z) \\xto{\\Sigma} (X,X)\\) is any game with play function \\(P : \\Sigma \\to \\Cat{BayesLens}((Z,Z), (X,X))\\) and whose best response function is isomorphic to\n\\begin{align*}\nB(\\optar{\\pi}{k}) = \\optar{d}{d'}_\\sigma \\mapsto\n\\Bigg\\{ \\optar{c}{c'}_\\tau \\Bigg| (c, c') \\in \\underset{\\substack{c \\in \\mathcal{F}, \\\\ c' \\in \\Cat{V}(\\cat{C}(I, Z), \\, \\mathcal{P})}}{\\arg\\min} \\E_{x \\sim k \\klcirc c \\klcirc \\pi} \\E_{z \\sim c'_\\pi(x)} \\left[ \\log \\frac{q(z|x)}{p_c(x|z)p_\\pi(z)} \\right]\n\\Bigg\\}\n\\end{align*}\nwhere \\(\\pi : I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Z\\) admits a density function \\(p_\\pi : Z \\to {\\mathbb{R}}_+\\), \\(q : Z \\otimes X \\to {\\mathbb{R}}_+\\) is a density function associated to \\(c'_\\pi\\), and \\(k\\) has type \\(X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\).\n\\end{ex}\n\n\\begin{prop} \\label{prop:vae-learns-model}\nA best response to a variational autoencoder game is a stochastic channel \\(c : \\mathcal{F}\\) that maximises the likelihood of the state observed through the continuation \\(k\\) under the assumption that the generative process is in \\(\\mathcal{F}\\), along with an inverse channel \\(c'_\\pi : \\mathcal{P}\\) that best approximates the exact Bayesian inverse \\(c^\\dag_\\pi\\) under the constraint of being in \\(\\mathcal{P}\\).\n\\end{prop}\n\n\\begin{prop} \\label{prop:vae-infers}\nVariational autoencoder games generalize inference games for the Kullback-Leibler divergence. More precisely, the objective function defining autoencoder games is of the same form as that defining inference games \\eqref{ex:simp-inf-game} when \\(D = D_{KL}\\).\n\\end{prop}\n\nThis prompts the following generalization:\n\n\\begin{ex}[Generalized autoencoder game] \\label{ex:ae-game}\nFix two families of channels \\(\\mathcal{F},\\mathcal{P}\\) and a strategy type \\(\\Sigma\\) as in Example \\ref{ex:vae-game}. 
Then a (generalized) \\textbf{simple autoencoder game} \\((Z,Z) \\xto{\\Sigma} (X,X)\\) is any game with play function \\(P : \\Sigma \\to \\Cat{BayesLens}((Z,Z), (X,X))\\) and whose best response function is isomorphic to\n\\begin{align*}\nB(\\optar{\\pi}{k}) = \\optar{d}{d'}_\\sigma \\mapsto\n\\Bigg\\{ \\optar{c}{c'}_\\tau \\Bigg| (c, c') \\in \\underset{\\substack{c \\in \\mathcal{F}, \\\\ c' \\in \\Cat{V}(\\cat{C}(I, Z), \\, \\mathcal{P})}}{\\arg\\min} \\bigg( & \\E_{z \\sim c'_\\pi \\klcirc k \\klcirc c \\klcirc \\pi} \\left[ - \\int_X \\log p_c(\\d k \\klcirc c \\klcirc \\pi | z) \\right] \\\\\n& \\qquad + D(c'_\\pi \\klcirc k \\klcirc c \\klcirc \\pi, \\pi) \\bigg)\n\\Bigg\\}\n\\end{align*}\nwhere \\(\\pi\\) and \\(k\\) have respective types \\(I \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} Z\\) and \\(X \\mathoverlap{\\rightarrow}{\\smallklcirc\\,} X\\), and \\(D\\) is any measure of divergence between states.\n\nAs with Bayesian inference games, we can generalize simple autoencoder games to \\textbf{hierarchical} and \\textbf{marginal} autoencoder games via the corresponding sequential and parallel compositions.\n\\end{ex}\n\nThe foregoing games have been purely statistically formulated, without capturing the motivating feature of an open system as something in interaction with an external environment. Nonetheless, we can model a simple open system of hierarchical \\textbf{active inference} that receives stochastic inputs from an environment and emits actions stochastically into the environment, as follows.\n\\begin{ex}[Active inference game] \\label{ex:ai-game}\nLet \\(\\{S_i\\}_i\\) be set of spaces of sensory data indexed by hierarchical levels of abstraction \\(i\\) (for instance, the levels of abstraction might range from representations of whole objects to fine details about their texture); similarly, let \\(\\{A_i\\}_i\\) be a set of spaces of possible actions similarly hierarchically organized. Consider the marginal autoencoder games \\((S_{i+1} \\otimes A_{i}, S_{i+1}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (S_{i+1}, S_{i+1})\\) and \\((A_{i+1} \\otimes S_{i}, A_{i+1}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (A_{i+1}, A_{i+1})\\) coupled via the symmetric monoidal structure \\(\\otimes\\) of \\(\\cat{C}\\):\n\\[\n\\tikzfig{img\/optic-ai-S} \\qquad\n\\tikzfig{img\/optic-ai-A} \\; \\mapsto \\;\n\\tikzfig{img\/optic-ai}\n\\]\ngiving a composite game \\((S_{i+1} \\otimes A_{i+1}, S_{i+1} \\times A_{i+1}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (S_{i} \\otimes A_{i}, S_{i} \\times A_{i})\\). Recall from \\citep[\u00a7\u00a73.7-3.8]{Bolt2019Bayesian} that a composite game is given by the (sequential and parallel) composition of optics, with best-response given by the product of the best-responses of the factors.\n\nNote that the Bayesian posterior inferred by such a game has independent factors on \\(S_{i+1}\\) and \\(A_{i+1}\\). This is not merely a diagrammatic convenience, but coincides with a common `mean field' simplification in the modelling literature \\citep{Buckley2017free,Kingma2017Variational}. 
The dashed box is a functorial box \\citep{Mellies2006Functorial} depicting the Yoneda embedding; recall that optics in \\(\\Cat{BayesLens}\\) were defined over (co)presheaves, and so here we needed to lift the monoidal product on \\(\\cat{C}\\) into a diagram over its presheaf category \\(\\Cat{Cat}(\\cat{C}\\op, \\Set)\\).\n\nNext, compose these games along the hierarchy indexed by \\(i\\), to obtain a game \\((S_{N} \\otimes A_{N}, S_{N} \\times A_{N}) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (S_{0} \\otimes A_{0}, S_{0} \\times A_{0})\\), such as an element of the following object:\n\\[\n\\tikzfig{img\/optic-ai-hierarchy}\n\\]\nGiven a context with a strong prior about expected sensory states and a continuation that responds to an action of type \\(A_0\\) by feeding back a state on \\(S_0\\), the best response can be shown to be that which selects actions that, under the current state, maximize the likelihood of obtaining the expected `goal' state \\citep{Buckley2017free,Friston2015Active}.\n\\end{ex}\n\n\\begin{rmk}\nWe have framed each of these statistical procedures as optimization problems not only to suggest a link to the utility-maximising agents of game theory, but also because it suggests the use of iterative methods to compute best responses; note that computational tractability is an important motivation in the proof of Proposition \\ref{prop:vae-learns-model}. \n\nThe question of providing such dynamical or, thinking of game composition as an algebra for building complex systems, `coalgebraic' semantics for (generalized) optimization games is the topic of the next section. We first formalize this notion.\n\\end{rmk}\n\n\\begin{defn} \\label{def:opt-game}\nAn \\textbf{optimization game} is any open game whose best response function can be defined by a function of the form \\(\\Sigma \\times C \\xto{\\pi} M \\xto{\\varphi} P\\), where \\(\\Sigma\\) is a strategy type, \\(C\\) a context type, \\(M\\) any space, and \\(P\\) a poset. We call \\(\\varphi\\) the \\textbf{fitness function}, and think of \\(\\pi\\) as projecting systems into a space whose points can be assigned a fitness. The best response function of an optimization game can then be defined by giving the subset of strategies contextually maximizing fitness, for each context \\(c : C\\).\n\\end{defn}\n\n\\section{Cybernetic Systems and Dynamical Realisation}\n\\label{sec:org4ff5161}\n\\label{sec:cyber-sys}\n\nIn this section, we begin to answer the question of precisely how the optimization games of the previous section may be realized in physical systems, such as brains or computers. More formally, this means we seek open dynamical systems whose input and output types correspond to the domain and codomain types of the foregoing games, such that there is a correspondence between the behaviours of the abstract games and their dynamical realisations, and such that the evolutions of the internal states of the dynamical systems correspond to strategic improvements in game-playing: by concentrating on optimization games, a natural measure of such improvement is encoded in the fitness function underlying the best-response relator.\n\nWe do not require that there is a correspondence between internal states of the realisations and strategies for the corresponding games, but we do require that the fitness functions extend to the the total state spaces of the closure of a realisation induced by the context. 
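For orientation, the notion of optimization game is simple enough to render directly. The toy below (Python/NumPy; every concrete choice, including the quadratic fitness and the real line playing the role of both the intermediate space and the poset, is ours) exhibits a projection into a fitness landscape and the best response it induces as a contextual argmax.
\begin{verbatim}
import numpy as np

def project(sigma, context):
    # Sigma x C -> M; here M is the real line and the context is a target value
    return -(sigma - context) ** 2

def fitness(m):
    # M -> P; here the poset P is again just the real line
    return m

def best_response(strategies, context):
    values = [fitness(project(s, context)) for s in strategies]
    top = max(values)
    return {s for s, v in zip(strategies, values) if np.isclose(v, top)}

strategies = np.linspace(-1.0, 1.0, 21)
print(best_response(strategies, context=0.4))    # the strategy nearest the target
\end{verbatim}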
When there \\emph{is} a correspondence between internal states and strategies, we can take advantage of Definition \\ref{def:open-game} and interpret trajectories over the state space as trajectories over strategies witnessing the strategic improvement.\n\nWe begin by sketching categories of dynamical games, and then use these ideas to define preliminary notions of open cybernetic systems and categories thereof. We consider principally single systems whose underlying games are atomic (in the sense of Remark \\ref{rmk:atomic-games}), and leave the study of the behaviour of interacting cybernetic systems to future work. Once more, we omit proofs in this section; they will appear in a paper to follow.\n\n\\begin{defn}[Discrete-time dynamical system over $\\cat{C}$; after \\citep{Schultz2019Dynamical,Clarke2020Profunctor}] \\label{def:dds}\nA \\textbf{discrete-time dynamical system} over \\(\\cat{C}\\) with state space \\(S :\\cat{C}\\), input type \\(A : \\cat{C}\\) and output type \\(B : \\cat{C}\\) is a lens \\((S, S) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (B, A)\\) over \\(\\cat{C}\\), \\emph{i.e.} in the following optical hom object:\n\\begin{align*}\n\\int^{M : \\cat{C}} \\Cat{Comon}(\\cat{C})(S, M \\otimes B) \\times \\cat{C}(M \\otimes A,S)\n\\cong \\Cat{Comon}(\\cat{C})(S, B) \\times \\cat{C}(S \\otimes A, S)\n\\end{align*}\nwhere the isomorphism follows by Yoneda reduction. Note that this requires that the `output' map of the dynamical system is a comonoid homomorphism in \\(\\cat{C}\\) and hence deterministic in a category of stochastic channels.\n\\end{defn}\n\n\\begin{defn}[Category of discrete-time dynamical systems] \\label{def:cat-dds}\nWe define a category \\(\\Cat{Dyn}_{\\cat{C}}\\) whose objects are the objects of \\(\\cat{C}\\) and whose morphisms, denoted \\(A \\xto{S} B\\), are discrete-time dynamical systems; the symbol above the arrow denotes the internal state space. Hom objects are given by\n\\[\n\\Cat{Dyn}_{\\cat{C}}(A, B) = \\sum_{S : \\cat{C}} \\Cat{Comon}(\\cat{C})(S, B) \\times \\cat{C}(S \\otimes A, S) \\, .\n\\]\nIdentity dynamical systems on each \\(A : \\cat{C}\\) are the `no-op' dynamical systems \\(A \\xto{A} A\\) given by identity optics \\(\\id_A : (A, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (A, A)\\). Associativity and unitality of composition is inherited from the category of optics underlying Definition \\ref{def:dds}; a symmetric monoidal structure is similarly inherited. 
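Concretely, such a system can be rendered, in a deliberately naive way, as a pair of functions together with a current state. The sketch below (Python; the class, the example systems, and the particular composite are illustrative choices of ours, with deterministic updates for simplicity) implements the readout and update maps, and one natural sequential composite in which the output of the first system drives the input of the second.
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class System:
    state: Any
    readout: Callable[[Any], Any]            # S -> B (deterministic)
    update: Callable[[Any, Any], Any]        # S x A -> S

    def step(self, a):
        b = self.readout(self.state)         # emit output from the current state
        self.state = self.update(self.state, a)
        return b

def compose(f, g):
    """Composite system whose state space is the product of the two state spaces."""
    return System(
        state=(f.state, g.state),
        readout=lambda s: g.readout(s[1]),
        update=lambda s, a: (f.update(s[0], a),
                             g.update(s[1], f.readout(s[0]))),
    )

acc = System(0.0, readout=lambda s: s, update=lambda s, a: 0.9 * s + a)  # leaky accumulator
thr = System(0, readout=lambda s: s, update=lambda s, a: int(a > 1.0))   # thresholder
both = compose(acc, thr)
print([both.step(1.0) for _ in range(5)])    # the threshold eventually fires
\end{verbatim}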
\\qed\n\\end{defn}\n\n\\begin{defn}[Lenses over dynamical systems; after \\citep{Riley2018Categories}] \\label{def:dyn-lens}\nThe category of (monoidal) lenses over \\(\\cat{C}\\text{-dynamical}\\) systems has as objects pairs \\((X, A)\\) of objects in \\(\\cat{C}\\) and as morphisms, \\textbf{dynamical lenses} \\((X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, B)\\), elements of the type\n\\begin{gather*}\n\\tikzfig{img\/optic-dyn-XA-YB} \\\\\n\\int^{M : \\cat{C}} \\Cat{Dyn}_{\\cat{C}}(X, M \\otimes Y) \\times \\Cat{Dyn}_{\\cat{C}}(M \\otimes B, A) \\\\\n\\cong \\\\\n\\sum_{P,Q : \\cat{C}} \\int^{M : \\cat{C}} \\cat{C}(P \\otimes X, P) \\times \\Cat{Comon}(\\cat{C})(P, M \\otimes Y) \\times \\cat{C}(Q \\otimes M \\otimes B, Q) \\times \\Cat{Comon}(\\cat{C})(Q, A) \\\\\n\\sum_{P,Q : \\cat{C}} \\;\\; \\tikzfig{img\/optic-dyn-full-XA-YB} \\; .\n\\end{gather*}\nThat is, a dynamical lens is a pair of dynamical systems coupled along some `residual' type.\n\\end{defn}\n\n\\begin{rmk}\nAt this point we begin to run into sizes issues. However, for the purposes of this paper, we will simply assume that a satisfactory resolution of these matters is at hand; for instance, that there is a hierarchy of Grothendieck universes such that the coends over (`large') sums in the preceding definition constitute accessible objects.\n\\end{rmk}\n\nWe now expand the definition of context in the dynamical setting. We will see that a dynamical context is simply a closure of an open dynamical system: that is, a `larger' system into which a `smaller' open dynamical system can plug such that the composite is a closed (but still uninitialized) system.\n\n\\begin{prop} \\label{prop:dyn-ctx}\nIf \\(I\\) is terminal in \\(\\cat{C}\\), a context for a dynamical lens \\((X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, B)\\) is an element of the following type, denoted \\(\\tilde{C}\\big((X, A), (Y, B)\\big)\\):\n\\[\n\\sum_{P,Q \\, : \\, \\cat{C}} \\;\\; \\tikzfig{img\/context-dyn-terminal}\n\\]\nInterpreting this diagram, a context for a dynamical lens \\((X, A) \\mathrel{\\ooalign{\\hfil$\\mapstochar\\mkern5mu$\\hfil\\cr$\\to$\\cr}} (Y, B)\\) amounts to an autonomous dynamical system with output type of the form \\(X \\otimes M\\) (for some residual type \\(M\\)), coupled along the residual \\(M\\) to an open dynamical system with input type \\(Y \\otimes M\\) and output type \\(B\\); and the \\(A\\) type is discarded. This is precisely what we should expect from a dynamical analogue of Proposition \\ref{prop:ctx-nice}.\n\\end{prop}\n\n\\begin{defn} \\label{def:dyn-game}\nA \\textbf{dynamical game} is just a generalized open game (\\ref{def:open-game}) over the category of dynamical lenses. We write \\((X, A) \\xto{\\tilde{\\Sigma}, S} (Y, B)\\) to indicate both the strategy type \\(\\tilde{\\Sigma}\\) and state space \\(S\\). Dynamical games form a symmetric monoidal category in the corresponding way. For notational clarity, we will write \\(\\tilde{G}\\) for a dynamical game, \\(\\tilde{P}\\) for its play function, and \\(\\tilde{B}\\) for its best response function.\n\\end{defn}\n\n\\begin{defn}[Dynamical realisation of an open game] \\label{def:realisation}\nLet \\(G : (X, A) \\xto{\\Sigma} (Y, B)\\) be an open game with \\(X, A, Y, B\\) all objects of some symmetric monoidal category \\(\\cat{C}\\). 
A \\textbf{dynamical realisation} of \\(G\\) is a choice of dynamical game \\(\\tilde{G} : (X, A) \\xto{\\tilde{\\Sigma}, S} (Y, B)\\) on the same objects, along with a function \\(\\interp{\\cdot} : C((X, A), (Y, B)) \\to \\tilde{C}((X, A), (Y, B))\\) lifting static contexts to dynamical contexts. Given a context \\(\\optar{\\pi}{k} : C((X, A), (Y, B))\\), we choose a representative \\(\\optar{\\interp{\\pi}}{\\interp{k}} \\cong \\interp{\\optar{\\pi}{k}} : \\tilde{C}((X, A), (Y, B))\\) for its realisation.\n\\end{defn}\n\nA `dynamical context' is an element of the type given in Proposition \\(\\ref{prop:dyn-ctx}\\): a context for a dynamical lens. A `static context' is simply a context for the `static' game that is being dynamically realized. At this stage, we impose no particular requirements on the context realisation function \\(\\interp{\\cdot}\\), except to say that in the intended semantics, \\(\\interp{\\optar{\\pi}{k}}\\) is a (coupled, open) dynamical system that constantly emits the state \\(\\pi\\) and (by some mechanism) realizes the channel \\(k\\). We call such a context \\emph{stationary} as neither \\(\\pi\\) nor \\(k\\) vary in time; future work will generalize the results of this section to \\emph{non-stationary} contexts.\n\n\\begin{defn}[Open cybernetic systems] \\label{def:cyber-sys}\nAn open \\textbf{cybernetic system} is defined by the data:\n\\begin{itemize}\n\\item an open optimization game (Def. \\ref{def:opt-game}) \\(G : (X, A) \\xto{\\Sigma} (Y, B)\\) with \\(X, A, Y, B\\) all objects of some symmetric monoidal category \\(\\cat{C}\\),\n\\item a fitness function \\(\\varphi_G : \\Sigma \\times C \\to M \\xto{\\varphi} F\\) where \\(C = C\\left((X, A), (Y, B)\\right)\\),\n\\item a dynamical realisation \\(\\big(\\tilde{G} : (X, A) \\xto{\\tilde{\\Sigma}, S} (Y, B), \\interp{\\cdot} : C((X, A), (Y, B)) \\to \\tilde{C}((X, A), (Y, B)) \\big)\\) of \\(G\\),\n\\end{itemize}\nsatisfying the following condition for each context \\(\\optar{\\pi}{k} : C((X, A), (Y, B))\\):\n\\begin{itemize}\n\\item there exists a dynamical strategy \\(\\tilde{\\sigma} : \\tilde{\\Sigma}\\), such that\n\\item writing \\(Z\\) for the total state space of the autonomous dynamical system \\(\\interp{\\optar{\\pi}{k}} \\lenscirc \\tilde{P}(\\tilde{\\sigma})\\) induced by the context, there exists a function \\(\\nu : Z \\to M\\) projecting \\(Z\\) into the `fitness landscape' \\(M\\), such that\n\\item there exists a fitness-maximising fixed point \\(\\zeta^\\ast : Z\\), in the sense that\n\\item for some equilibrium strategy of the static system \\(\\sigma^\\ast : \\text{fix } B(\\optar{\\pi}{k})\\), \\(\\varphi(\\nu(\\zeta^\\ast)) \\leq \\varphi_G(\\sigma^\\ast, \\optar{\\pi}{k})\\).\n\\end{itemize}\n\\noindent\nA \\textbf{category of open cybernetic systems} is a category of (generalized) open games such that each game is an open cybernetic system with dynamics realised in the same category \\(\\cat{C}\\), and such that the composite of games is a cybernetic system whose fitness-maximising fixed point projects onto fitness-maximising fixed points of each of the factors in their corresponding local contexts. (See \\citep[\u00a73.7]{Bolt2019Bayesian} for the definition of local context.)\n\\end{defn}\nThe idea here is that, by using the fitness function of the underlying optimization game, the cybernetic condition forces the behaviour of the dynamical realisation to coincide with the process of iteratively improving the strategies deployed by the system in playing the game. 
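A drastically simplified cartoon of this condition, under assumptions entirely of our own choosing (a one-dimensional strategy space coinciding with the state space of the realisation, a quadratic fitness determined by a stationary context, and gradient ascent as the internal dynamics), is sketched below in Python: trajectories of the state witness strategic improvement, and the fixed point they converge to is the fitness-maximizing equilibrium of the underlying optimization game.
\begin{verbatim}
import numpy as np

def fitness(theta, target):
    return -0.5 * (theta - target) ** 2      # fitness of a strategy in a fixed context

def grad_fitness(theta, target):
    return -(theta - target)

def realise(theta0, target, lr=0.3, steps=50):
    theta = theta0
    for _ in range(steps):
        theta = theta + lr * grad_fitness(theta, target)   # state update = improvement
    return theta

target = 1.7                                  # a stationary context
theta_star = realise(theta0=-2.0, target=target)
print(abs(theta_star - target) < 1e-6)        # converged to the equilibrium strategy
print(fitness(theta_star, target) >= fitness(0.0, target))   # and it maximizes fitness
\end{verbatim}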
We summarize the condition in the diagram\n\\begin{equation*}\n\\begin{tikzcd}\n\\Sigma \\times C \\arrow[d, \"\\interp{\\cdot}\"'] \\arrow[r] & M \\arrow[r, \"\\varphi\"] & F \\\\\n\\tilde{\\Sigma} \\times \\tilde{C} \\arrow[r, \"\\text{fix}\"] & Z \\arrow[u] & \n\\end{tikzcd}\n\\end{equation*}\nthough this is in general ill-defined: we do not require a function \\(\\interp{\\cdot} : \\Sigma \\to \\tilde{\\Sigma}\\), and nor do we require that the best response to \\(\\tilde{G}\\) coincides in any way with the best reponse to \\(G\\). Investigating such conditions is the subject of future work; for instance, we may be interested in nested cybernetic systems, such as characterize evolution by natural selection, and how their fitness functions constrain one another. For similar reasons, we are also interested in the case where the fitness function is itself non-stationary.\n\n\\begin{rmk} \\label{rmk:kubernetes}\nThe codomain category of the cybernetic realisation functor is in general much larger than the domain category of static games, and often it makes sense to consider dynamical games in this codomain category as if they were dynamical realisations of static games, even if in fact there is no static game to which they could correspond. For instance, adaptive systems in physical environments are in general not realisations of static games because their contexts are irreducibly dynamical and thus not the dynamical realisation of a static context; but over short time intervals, it can be productive to treat such systems as realisations of static games. In continuous time (not treated here), it is even possible to consider dynamical games that are indeed realisations of games that are static when represented in a smoothly varying coordinate system. The free-energy framework of Theorem \\ref{thm:cyber-fep} is an example of a category of cybernetic systems with a rich underlying category of dynamic games.\n\\end{rmk}\n\nA classic category of open cybernetic systems is found in the computational neuroscience literature, as summarized in the following theorem.\n\n\\begin{thm} \\label{thm:cyber-fep}\nConsider the subcategory of \\(\\Cat{BayesLens}\\) spanned by finite-dimensional Euclidean spaces, with morphisms generated (under sequential and parallel composition) by the (variational) autoencoder and inference games whose forwards and backwards channels emit Gaussian measures with high-precision. The (discrete-time) free-energy framework for action and perception \\citep{Buckley2017free} instantiates a category of open cybernetic systems realising games over this subcategory.\n\\end{thm}\n\n\\begin{rmk}\nTypical presentations of `active inference' under the free-energy principle are excessively complicated by the lack of attention paid to compositionality. Because the free-energy framework instantiates a \\emph{category} of open cybernetic systems, a radically simplified compositional presentation is possible. Such a presentation forms a companion to the present work.\n\\end{rmk}\n\n\\begin{cor} \\label{cor:bx-cortex}\nThe free-energy framework has been used to supply a computational explanation for the pervasive bidirectionality of cortical circuits in the mammalian brain \\citep{Bastos2012Canonical,Friston2010free}. 
A corollary of Theorem \\ref{thm:cyber-fep} is that this bidirectionality is furthermore justified by the abstract structure of Bayesian inference and its dynamical realisation: because Bayesian updates compose optically, a cybernetic system realising Bayesian inference compositionally must instantiate this structure. We note also that the parallel interacting bidirectional structure of the active inference game (Example \\ref{ex:ai-game}) is reproduced in the cortex.\n\\end{cor}\n\nThe free-energy framework realisation of autoencoder games is not unique; an alternative is found in machine learning.\n\n\\begin{thm} \\label{thm:cyber-vae}\nConsider the subcategory of \\(\\Cat{BayesLens}\\) spanned by finite-dimensional Euclidean spaces, with morphisms generated (under sequential and parallel composition) by the (variational) autoencoder and inference games whose forwards and backwards channels emit exponential-family measures. The deep (variational) autoencoder framework \\citep{Kingma2017Variational} instantiates a category of open cybernetic systems realising games over this subcategory.\n\\end{thm}\n\nIncreasingly, the variational autoencoder framework is used to model complete agents in machine learning, rather than merely dynamically realise static inference or learning problems. Indeed, thinking of the `free-energy framework' as a collection of cybernetic realisations of autoencoder and active-inference games, the demonstration of the following corollary of Theorem \\ref{thm:cyber-vae} is unsurprising:\n\n\\begin{cor} \\label{cor:deep-ai}\nThe ``deep active inference agent'' \\citep{Ueltzhoeffer2018Deep} is a cybernetic system realising an active inference game in the variational autoencoder framework.\n\\end{cor}\n\nWe have heretofore concentrated on `variational Bayesian' realisations of the games introduced in \\secref{sec:games}, as they most strikingly fit the language of optimization used there. But we expect any other family of approximate inference methods to supply a corresponding category of cybernetic systems. We thus make the following conjecture.\n\n\\begin{conjecture}\nConsider the subcategory of \\(\\Cat{BayesLens}\\) spanned by finite-dimensional smooth manifolds, with morphisms generated (under sequential and parallel composition) by the generalized autoencoder and inference games. We expect sampling algorithms, such as Markov chain Monte Carlo, to supply a corresponding category of open cybernetic systems of interest.\n\\end{conjecture}\n\nFinally, we provide further justification for Remark \\ref{rmk:b-relt}.\n\n\\begin{obs} \\label{prop:sigma-traj}\nConsider a variational autoencoder, realised as in Theorem \\ref{thm:cyber-vae}. By choosing the parameterizations \\(\\mathcal{F},\\mathcal{P}\\) of the forwards and backwards channels to coincide with the state spaces of their dynamical realisations, and the (static) play function \\(P\\) to take a parameter vector to the corresponding channel, the dynamical realisation induces a trajectory over the strategy space. Such trajectories organize into sheaf whose sections are trajectories of arbitrary length \\citep{Schultz2019Dynamical}, spans of which are again just (generalized) dynamical systems; these spans are equivalently profunctors \\citep{Benabou2000Distributors}. 
We can thus define a best-response function valued in profunctors whose elements are trajectories witnessing deviations of strategies to `better' strategies, and whose dynamical equilibria correspond precisely to the equilibria of the `static' best response function.\n\\end{obs}\n\n\\paragraph{On-going and Future Work}\n\\label{sec:org420ea85}\n\nThe structures sketched in this paper are merely first steps towards a categorical theory of cybernetics. In particular, since the first draft of this work was written, we have come to believe that the preliminary notions presented here of dynamical realisation, and by extension of open cybernetic system, are substantially less elegant than they could be. On-going work is focusing on this issue. We hope that a consequence of this refinement will be that the treatment of \\emph{interacting} cybernetic systems is simplified. In this new setting, we will also treat non-stationary systems in dynamical contexts and in continuous time, thereby supplying a general compositional treatment of (amongst other things) the `free-energy' framework.\n\nFinally, with respect to applications, we are interested in using these tools to realise game-theoretic games and to investigate the connections between repeated games and dynamical realisation. There are deep links with reinforcement learning to be explored, and we seek a setting for the study of nested and mutli-agent (`ecological') systems.\n\n\\bibliographystyle{eptcs}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfjpy b/data_all_eng_slimpj/shuffled/split2/finalzzfjpy new file mode 100644 index 0000000000000000000000000000000000000000..1c939a53810103c751fb15dee145247ea9746a6e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfjpy @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLet $(v_{t})_{t\\in\\mathbb Z}$ be a complex circularly symmetric Gaussian stationary process with zero mean and covariance function $(r_k)_{k\\in\\mathbb Z}$ with $r_k=\\mathbb{E}[v_{t+k}v^*_{t}]$ and $r_k\\to0$ as $k\\to\\infty$. We observe $N$ independent copies of $(v_t)_{t\\in\\mathbb Z}$ over the time window $t\\in\\{0,\\ldots,T-1\\}$, and stack the observations in a matrix $V_T=[ v_{n,t} ]_{n,t = 0}^{N-1, T-1}$. This matrix can be written as $V_T=W_TR_T^{1\/2}$, where $W_T\\in\\mathbb{C}^{N\\times T}$ has independent $\\mathcal{CN}(0,1)$ (standard circularly symmetric complex Gaussian) entries and $R_T^{1\/2}$ is any square root of the Hermitian nonnegative definite Toeplitz $T\\times T$ matrix\n\\begin{equation*}\nR_T \\triangleq \\left[ r_{i-j} \\right]_{0\\leq i,j\\leq T-1} = \\begin{bmatrix}\nr_0 & r_{1} & \\ldots & r_{T-1} \\\\ \nr_{-1} & \\ddots & \\ddots & \\vdots \\\\ \n\\vdots & \\ddots & \\ddots & r_{1}\\\\ \nr_{1-T} & \\ldots & r_{-1} & r_0\n\\end{bmatrix}.\n\\end{equation*}\nA classical problem in signal processing is to estimate $R_T$ from the observation of $V_T$. \nWith the growing importance of multi-antenna array processing, there has recently been a renewed interest for this estimation problem in the regime of large system dimensions, {\\it i.e.} for both $N$ and $T$ large. 
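For concreteness, the following short Python sketch (an illustration only, not part of the derivations below) generates $N$ independent observation windows according to the model $V_T = W_T R_T^{1/2}$, using the autoregressive covariance $[R_T]_{k,l}=a^{|k-l|}$ that also appears in the numerical illustrations of the later sections; the Hermitian square root of $R_T$ is taken as one admissible choice of $R_T^{1/2}$, and all helper names and sizes are ours.
\begin{verbatim}
import numpy as np

def toeplitz_covariance(r):
    # Illustrative helper: build R_T = [r_{i-j}] from r = (r_0,...,r_{T-1}),
    # completing negative lags through r_{-k} = r_k^*.
    T = len(r)
    k = np.subtract.outer(np.arange(T), np.arange(T))      # k = i - j
    r = np.asarray(r, dtype=complex)
    return np.where(k >= 0, r[np.abs(k)], np.conj(r[np.abs(k)]))

def simulate_V(N, T, r, rng):
    # V_T = W_T R_T^{1/2}, with W_T having i.i.d. CN(0,1) entries and
    # R_T^{1/2} the Hermitian nonnegative square root of R_T.
    R = toeplitz_covariance(r)
    w, U = np.linalg.eigh(R)
    R_half = U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T
    W = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
    return W @ R_half

rng = np.random.default_rng(0)
a, T, N = 0.6, 64, 32                    # illustrative AR(1) parameter and sizes
V = simulate_V(N, T, a ** np.arange(T), rng)
\end{verbatim}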
\n\nAt the core of the various estimation methods for $R_T$ are the biased and unbiased estimates $\\hat{r}_{k,T}^b$ and $\\hat{r}_{k,T}^u$ for $r_k$, respectively, defined by\n\\begin{align*}\n\t\\hat{r}_{k,T}^b &= \\frac{1}{NT}\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n\t\\hat{r}_{k,T}^u &= \\frac{1}{N(T-|k|)} \\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1}\n\\end{align*}\nwhere $\\mathbbm{1}_A$ is the indicator function on the set $A$.\nDepending on the relative rate of growth of $N$ and $T$, the matrices $\\widehat{R}_{T}^b = [\\hat{r}_{i-j,T}^b]_{0 \\leq i,j \\leq T-1}$ and $\\widehat{R}_{T}^u = [\\hat{r}_{i-j,T}^u]_{0 \\leq i,j \\leq T-1}$ may not satisfy $\\Vert R_T - \\widehat{R}_{T}^b \\Vert \\overset{\\rm a.s.}{\\longrightarrow} 0$ or $\\Vert R_T - \\widehat{R}_{T}^u \\Vert \\overset{\\rm a.s.}{\\longrightarrow} 0$. An important drawback of the biased entry-wise estimate lies in its inducing a general asymptotic bias in $\\widehat{R}_{T}^b$; as for the unbiased entry-wise estimate, it may induce too much inaccuracy in the top-right and bottom-left entries of $\\widehat{R}_{T}^u$. The estimation paradigm followed in the recent literature generally consists instead in building banded or tapered versions of $\\widehat{R}_{T}^b$ or $\\widehat{R}_{T}^u$ ({\\it i.e.} by weighting down or discarding a certain number of entries away from the diagonal), exploiting there the rate of decrease of $r_k$ as $k\\to\\infty$ \\cite{WuPour'09,BickLev'08a,LamFan'09,XiaoWu'12,CaiZhangZhou'10,CaiRenZhou'13}.\nSuch estimates use the fact that $\\Vert R_T-R_{\\gamma(T),T}\\Vert \\to 0$ with $R_{\\gamma,T} = [ [R_T]_{i,j} \\mathbbm{1}_{|i-j| \\leq \\gamma}]$ for some well-chosen functions $\\gamma(T)$ (usually satisfying $\\gamma(T)\\to \\infty$ and $\\gamma(T)\/T\\to 0$) and restrict the study to the consistent estimation of $R_{\\gamma(T),T}$. The aforementioned articles concentrate in particular on choices of functions $\\gamma(T)$ that ensure optimal rates of convergence of $\\Vert R_T-\\widehat{R}_{\\gamma(T),T}\\Vert$ for the banded or tapered estimate $\\widehat{R}_{\\gamma(T),T}$.\nThese procedures, although theoretically optimal, however suffer from several practical limitations. First, they assume the {\\it a priori} knowledge of the rate of decrease of $r_k$ (and restrict these rates to specific classes). Then, even if this were indeed known in practice, being asymptotic in nature, the results do not provide explicit rules for selecting $\\gamma(T)$ for practical finite values of $N$ and $T$. Finally, the operations of banding and tapering do not guarantee the positive definiteness of the resulting covariance estimate.\n\nIn the present article, we consider instead that the only constraint about $r_k$ is $\\sum_{k=-\\infty}^\\infty |r_k|<\\infty$ and estimate $R_T$ from the standard (non-banded and non-tapered) estimates $\\widehat{R}_T^b$ and $\\widehat{R}_T^u$. The consistence of these estimates, in general invalid, shall be enforced here by the choice $N,T\\to\\infty$ with $N\/T\\to c\\in(0,\\infty)$. This setting is more practical in applications as long as both the finite values $N$ and $T$ are sufficiently large and of similar order of magnitude.\nAnother context where a non banded Toeplitz rectification of the estimated \ncovariance matrix leads to a consistent estimate in the spectral norm is \nstudied in \\cite{val-loub-icassp14}. 
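In terms of implementation, the entry-wise estimates $\hat{r}_{k,T}^b$ and $\hat{r}_{k,T}^u$ and the associated Toeplitz matrices $\widehat{R}_T^b$ and $\widehat{R}_T^u$ studied below may be sketched as follows (a minimal Python illustration in the notation of the previous sketch, not an optimized implementation; for large $T$ the lag sums could be computed more efficiently, e.g. by FFT-based correlation).
\begin{verbatim}
import numpy as np

def lag_sum(V, k):
    # sum_{n,t} v_{n,t+k} v_{n,t}^*  over  0 <= t+k <= T-1,  for k >= 0
    N, T = V.shape
    return np.sum(V[:, k:] * np.conj(V[:, :T - k]))

def covariance_estimates(V):
    # Illustrative helper returning (R_hat_biased, R_hat_unbiased), built from
    # hat r^b_{k,T} = lag_sum/(N T)  and  hat r^u_{k,T} = lag_sum/(N (T-|k|)),
    # completed by hat r_{-k} = conj(hat r_k) and arranged in Toeplitz form.
    N, T = V.shape
    s = np.array([lag_sum(V, k) for k in range(T)])
    r_b = s / (N * T)
    r_u = s / (N * (T - np.arange(T)))
    k = np.subtract.outer(np.arange(T), np.arange(T))       # k = i - j
    R_b = np.where(k >= 0, r_b[np.abs(k)], np.conj(r_b[np.abs(k)]))
    R_u = np.where(k >= 0, r_u[np.abs(k)], np.conj(r_u[np.abs(k)]))
    return R_b, R_u
\end{verbatim}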
\n\nOur specific contribution lies in the establishment of concentration inequalities for the random variables $\\Vert R_T-\\widehat{R}_T^b\\Vert$ and $\\Vert R_T-\\widehat{R}_T^u\\Vert$. It is shown specifically that, for all $x>0$, $-\\log \\mathbb{P}[\\Vert R_T-\\widehat{R}_T^b \\Vert> x ]= O(T)$ and $-\\log \\mathbb{P}[\\Vert R_T-\\widehat{R}^u_T\\Vert > x ]= O(T\/ \\log T)$. Aside from the consistence in norm, this implies as a corollary that, as long as $\\limsup_T\\Vert R_T^{-1}\\Vert<\\infty$, for $T$ large enough, $\\widehat{R}^u_T$ is positive definite with outstanding probability ($\\widehat{R}^b_T$ is nonnegative definite by construction).\n\nFor application purposes, the results are then extended to the case where $V_T$ is changed into $V_T+P_T$ for a rank-one matrix $P_T$. Under some conditions on the right-eigenspaces of $P_T$, we show that the concentration inequalities hold identically. The application is that of a single source detection (modeled through $P_T$) by an array of $N$ sensors embedded in a temporally correlated noise (modeled by $V_T$). To proceed to detection, $R_T$ is estimated from $V_T+P_T$ as $\\widehat{R}_T^b$ or $\\widehat{R}_T^u$, which is used as a whitening matrix, before applying a generalized likelihood ratio test (GLRT) procedure on the whitened observation. Simulations corroborate the theoretical consistence of the test. \n\nThe remainder of the article is organized as follows. The concentration inequalities for both biased and unbiased estimates are exposed in Section~\\ref{unperturbed}. The generalization to the rank-one perturbation model is presented in Section~\\ref{sig+noise} and applied in the practical context of source detection in Section~\\ref{detect}.\n\n{\\it Notations:} The superscript $(\\cdot)^{\\sf H}$ denotes Hermitian transpose, $\\left\\| X \\right\\|$ stands for the spectral norm for a matrix and Euclidean norm for a vector, and $\\| \\cdot \\|_\\infty$ is the $\\sup$ norm of a function. The notations ${\\cal N}(a,\\sigma^2)$ and ${\\cal CN}(a,\\sigma^2)$ represent the real and complex circular Gaussian distributions with mean $a$ and variance $\\sigma^2$. For $x \\in \\mathbb{C}^{m}$, $D_x=\\diag (x)=\\diag (x_0, \\ldots, x_{m-1} )$ is the diagonal matrix having on its diagonal the elements of the vector $x$.\nFor $x=[x_{-(m-1)},\\ldots,x_{m-1}]^{\\sf T} \\in \\mathbb{C}^{2m+1}$, the matrix ${\\cal T}(x) \\in \\mathbb{C}^{m \\times m}$ is the Toeplitz matrix built from $x$ with entries $[{\\cal T}(x)]_{i,j}=x_{j-i}$.\nThe notations $\\Re(\\cdot)$ and $\\Im(\\cdot)$ stand for the real and the imaginary parts respectively.\n\n\n\\section{Performance of the covariance matrix estimators} \n\\label{unperturbed} \n\n\\subsection{Model, assumptions, and results}\n\\label{subsec-model} \n\nLet $(r_k)_{k\\in\\mathbb Z}$ be a doubly \ninfinite sequence of covariance coefficients. For any $T \\in \\mathbb N$, let \n$R_T = \\mathcal T( r_{-(T-1)},\\ldots, r_{T-1})$, a Hermitian nonnegative definite matrix. \nGiven $N = N(T) > 0$, consider the \nmatrix model \n\\begin{equation}\nV_T = [ v_{n,t} ]_{n,t = 0}^{N-1, T-1} = W_T R_T^{1\/2}\n\\label{model1}\n\\end{equation}\nwhere $W_T = [ w_{n,t} ]_{n,t = 0}^{N-1, T-1}$ has independent ${\\cal CN}(0,1)$ entries. 
\nIt is clear that $r_k = \\mathbb{E} [v_{n,t+k} v_{n,t}^*]$ for any $t$, $k$, and $n \\in \\{0,\\ldots, N-1\\}$.\n\nIn the following, we shall make the two assumptions below.\n\\begin{assumption} \n\\label{ass-rk} \nThe covariance coefficients $r_k$ are absolutely summable and $r_0 \\neq 0$. \n\\end{assumption} \nWith this assumption, the covariance function \n\\[\n{\\boldsymbol\\Upsilon}(\\lambda) \\triangleq \n\\sum_{k=-\\infty}^\\infty r_k e^{-\\imath k\\lambda}, \\quad \n\\lambda \\in [0, 2\\pi) \n\\] \nis continuous on the interval $[0, 2\\pi]$. Since $\\| R_T \\| \\leq \\| \\boldsymbol\\Upsilon \\|_\\infty$ (see \\emph{e.g.} \\cite[Lemma 4.1]{Gray'06}), Assumption \\ref{ass-rk} \nimplies that $\\sup_T \\| R_T \\| < \\infty$. \n\nWe assume the following asymptotic regime which will be simply denoted as ``$T\\rightarrow\\infty$'':\n\\begin{assumption} \n\\label{ass-regime} \n$T \\rightarrow \\infty$ and $N\/T \\rightarrow c > 0$.\n\\end{assumption} \n\n\nOur objective is to study the performance of two estimators of the \ncovariance function frequently considered in the literature. These estimators are defined as \n\\begin{align}\n\\label{est-b} \n\\hat{r}_{k,T}^b&= \n\\frac{1}{NT} \\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}\nv_{n,t+k} v_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n\\label{est-u} \n\\hat{r}_{k,T}^u&=\\frac{1}{N(T-|k|)}\n\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1} v_{n,t+k} v_{n,t}^* \n\\mathbbm{1}_{0 \\leq t+k \\leq T-1}.\n\\end{align}\nSince $\\mathbb{E} \\hat{r}_{k,T}^b = ( 1 - |k|\/T ) r_k$ and $\\mathbb{E} \\hat{r}_{k,T}^u = r_k$,\nthe estimate $\\hat{r}_{k,T}^b$ is biased while $\\hat{r}_{k,T}^u$ is unbiased. \nLet also \n\\begin{align}\n\t\\label{est-Rb}\n\\widehat R^b_T &\\triangleq {\\cal T}\\left( \\hat{r}_{-(T-1),T}^b, \\ldots, \n\\hat{r}_{(T-1),T}^b \\right) \\\\\n\t\\label{est-Ru}\n\\widehat R^u_T &\\triangleq {\\cal T} \\left( \\hat{r}_{-(T-1),T}^u, \\ldots, \n\\hat{r}_{(T-1),T}^u \\right).\n\\end{align}\nA well known advantage of $\\widehat R^b_T$ over $\\widehat R^u_T$ as an estimate of $R_T$ is its structural nonnegative definiteness.\nIn this section, results on the spectral behavior of these matrices are provided under the form of concentration inequalities on $\\| \\widehat R^b_T - R_T \\|$ and $\\| \\widehat R^u_T - R_T \\|$: \n\\begin{theorem}\n\\label{th-biased} \nLet Assumptions~\\ref{ass-rk} and \\ref{ass-regime} hold true and let $\\widehat R^b_T$ be defined as in \\eqref{est-Rb}.\nThen, for any $x>0$,\n\\begin{equation*}\n\\mathbb{P} \\left[\\norme{\\widehat R_T^b - R_T} > x \\right] \\leq\n\\exp \\left( -cT \\left( \\frac{x}{\\| \\boldsymbol\\Upsilon \\|_\\infty} - \n \\log \\left( 1 + \\frac{x}{\\| \\boldsymbol\\Upsilon \\|_\\infty} \\right) + o(1) \\right)\n\\right)\n\\end{equation*}\nwhere $o(1)$ is with respect to $T$ and depends on $x$.\n\\end{theorem}\n\\begin{theorem}\n\\label{th-unbiased} \nLet Assumptions~\\ref{ass-rk} and \\ref{ass-regime} hold true and let $\\widehat R^u_T$ be defined as in \\eqref{est-Ru}.\nThen, for any $x>0$,\n\\begin{equation*}\n\\mathbb{P} \\left[\\norme{\\widehat R_T^u - R_T} > x \\right] \n\\leq \n\\exp \\left(- \\frac{cTx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \\right) \n\\end{equation*}\nwhere $o(1)$ is with respect to $T$ and depends on $x$.\n\\end{theorem}\n\nA consequence of these theorems, obtained by the Borel-Cantelli lemma, is that $\\| \\widehat R^b_T - R_T \\| \\to 0$ and \n$\\| \\widehat R^u_T - R_T \\| \\to 0$ almost surely as $T \\to \\infty$. 
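This almost sure convergence can be probed numerically. The short experiment below reuses the helper functions toeplitz_covariance, simulate_V and covariance_estimates from the earlier sketches, with purely illustrative parameters, and reports the average spectral-norm errors of the two estimators as $T$ grows, in the same spirit as the empirical evaluation of the error probabilities presented at the end of this section.
\begin{verbatim}
import numpy as np
# Assumes toeplitz_covariance, simulate_V and covariance_estimates
# from the previous sketches are in scope; parameters are illustrative.
rng = np.random.default_rng(1)
a, c, runs = 0.6, 0.5, 100                 # AR(1) parameter, ratio N/T, MC runs
for T in (32, 64, 128):
    N = int(c * T)
    r = a ** np.arange(T)
    R = toeplitz_covariance(r)
    err_b = err_u = 0.0
    for _ in range(runs):
        V = simulate_V(N, T, r, rng)
        R_b, R_u = covariance_estimates(V)
        err_b += np.linalg.norm(R_b - R, 2) / runs    # average spectral norm errors
        err_u += np.linalg.norm(R_u - R, 2) / runs
    print(T, err_b, err_u)                 # both errors shrink as T grows
\end{verbatim}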
\n\nThe slower rate of decrease of $T\/\\log(T)$ in the unbiased estimator exponent may be interpreted by the increased inaccuracy in the estimates of $r_k$ for values of $k$ close to $T-1$. \n\nWe now turn to the proofs of Theorems \\ref{th-biased} and \\ref{th-unbiased},\nstarting with some basic mathematical results that will be needed throughout \nthe proofs. \n\n\\subsection{Some basic mathematical facts} \n\n\\begin{lemma}\n\\label{lm-fq} \nFor $x,y \\in \\mathbb{C}^{m}$ and $A \\in \\mathbb{C}^{m\\times m}$,\n\\[\n\\left| x^{\\sf H} A x - y^{\\sf H} A y \\right| \\leq \n\\norme{A} (\\norme{x}+\\norme{y})\\norme{x-y}.\n\\]\n\\end{lemma}\n\\begin{proof}\n\\begin{align*} \n\\left| x^{\\sf H} A x - y^{\\sf H} A y \\right| &= \n\\left| x^{\\sf H} A x - y^{\\sf H} A x + y^{\\sf H} A x - y^{\\sf H} A y \\right| \\\\\n&\\leq \\left| (x - y)^{\\sf H} A x \\right| + \\left| y^{\\sf H} A (x - y) \\right| \\\\\n&\\leq \\norme{A}(\\norme{x} + \\norme{y}) \\norme{x-y}.\n\\end{align*}\n\\end{proof}\n\n\\begin{lemma} \n\\label{chernoff} \nLet $X_0, \\ldots, X_{M-1}$ be independent $\\mathcal{CN}(0,1)$ random \nvariables. Then, for any $x > 0$, \n\\[\n\\mathbb{P} \\left[ \\frac1M \\sum_{m=0}^{M-1} (|X_m|^2 - 1) > x \\right] \n\\leq \\exp \\left( -M ( x - \\log(1+x) ) \\right) . \n\\] \n\\end{lemma}\n \n\\begin{proof} \nThis is a classical Chernoff bound. Indeed, given $\\xi \\in (0,1)$, we have\nby the Markov inequality \n\\begin{align*}\n\\mathbb{P} \\Bigl[ M^{-1} \\sum_{m=0}^{M-1} (|X_m|^2 - 1) > x \\Bigr] &= \\mathbb{P}\\left[ \\exp \\left( \\xi \\sum_{m=0}^{M-1} |X_m|^2 \\right) \n> \\exp \\xi M (x+1) \\right] \\\\\n&\\leq \\exp(-\\xi M (x+1)) \n\\mathbb{E} \\left[ \\exp \\left( \\xi \\sum_{m=0}^{M-1} |X_m|^2 \\right) \\right] \\\\\n&= \\exp \\left(- M \\left( \\xi(x+1) + \\log(1-\\xi) \\right) \\right) \n\\end{align*} \nsince $\\mathbb{E} \\left[\\exp (\\xi |X_m|^2) \\right] = 1\/(1-\\xi)$. The result follows upon \nminimizing this expression with respect to $\\xi$. \n\\end{proof}\n\n\\subsection{Biased estimator: proof of Theorem \\ref{th-biased}} \\label{biased}\nDefine\n\\begin{align*}\n\\widehat \\Upsilon^b_T(\\lambda) &\\triangleq \n\\sum_{k=-(T-1)}^{T-1} \\hat{r}_{k,T}^b e^{\\imath k \\lambda} \n\\\\\n\\Upsilon_T(\\lambda) &\\triangleq \n\\sum_{k=-(T-1)}^{T-1} r_k e^{\\imath k \\lambda}.\n\\end{align*} \nSince $\\widehat{R}_T^b-R_T$ is a Toeplitz matrix, from \\cite[Lemma 4.1]{Gray'06}, \n\\begin{equation*}\n\\norme{\\widehat{R}_T^b-R_T} \\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda) \\right| \n\\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) \\right| +\n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda) \\right|. \n\\end{equation*}\nBy Kronecker's lemma (\\cite[Lemma 3.21]{Kallenberg'97}), the rightmost term at the right-hand side satisfies \n\\begin{equation}\n\\label{determ-biased} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) - \\Upsilon_T(\\lambda)\\right| \n\\leq \\sum_{k=-(T-1)}^{T-1} \\frac{|k r_k|}{T} \n\\xrightarrow[T\\to\\infty]{} 0. \n\\end{equation} \nIn order to deal with the term \n$\\sup_{\\lambda \\in [0,2\\pi)} | \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) |$, \ntwo ingredients will be used. 
The first one is the \nfollowing lemma (proven in Appendix \\ref{anx-lm-qf}):\n\n\\begin{lemma}\n\\label{lemma_d_quad}\nThe following facts hold:\n\\begin{align*} \n\\widehat \\Upsilon_T^b(\\lambda) &= d_T(\\lambda)^{\\sf H}\\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) \\\\\n\\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) &= d_T(\\lambda)^{\\sf H} R_T d_T(\\lambda)\n\\end{align*} \nwhere $d_T(\\lambda)=1\/\\sqrt{T}\\left[1, e^{- \\imath\\lambda}, \\ldots, \ne^{-\\imath(T-1)\\lambda} \\right]^{\\sf T}$.\n\\end{lemma}\n\nThe second ingredient is a Lipschitz property of the function \n$\\| d_T(\\lambda) - d_T(\\lambda') \\|$ seen as a function of $\\lambda$. \nFrom the inequality $|e^{-\\imath t\\lambda}-e^{-\\imath t\\lambda'}| \n\\leq t|\\lambda-\\lambda'|$, we indeed have\n\\begin{equation} \n\\label{lipschitz} \n\\| d_T(\\lambda) - d_T(\\lambda') \\| = \n\\sqrt{\\frac{1}{T} \\sum_{t=0}^{T-1} |e^{-\\imath t\\lambda}-\ne^{-\\imath t\\lambda'}|^2} \\leq \\frac{T|\\lambda-\\lambda'|}{\\sqrt{3}} . \n\\end{equation}\n\nNow, denoting by $\\lfloor \\cdot \\rfloor$ the floor function and choosing \n$\\beta > 2$, define ${\\cal I}=\\left\\{0, \\ldots, \\lfloor T^{\\beta} \\rfloor - 1 \\right\\}$.\nLet $\\lambda_i=2 \\pi \\frac{i}{\\lfloor T^{\\beta} \\rfloor }$, $i \\in {\\cal I}$, be a regular discretization of the \ninterval $[0, 2\\pi]$. \nWe write \n\\begin{align*}\n&\\underset{\\lambda \\in [0, 2\\pi)}{\\text{sup}} \\left| \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) \\right| \n \\\\&\\leq \n\\underset{i \\in {\\cal I}}{\\text{max}} \n\\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \\Bigl(\\left| \\widehat \\Upsilon_T^b(\\lambda) \n- \\widehat \\Upsilon_T^b(\\lambda_i) \\right| + \\left| \\widehat \\Upsilon_T^b(\\lambda_i) \n- \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right| + \\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \n- \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) \\right|\\Bigr)\\\\ &\\leq \n\\underset{i \\in {\\cal I}}{\\text{max}} \n\\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\widehat \\Upsilon_T^b(\\lambda_i) \\right| + \\underset{i \\in {\\cal I}}{\\text{max}} \\left| \\widehat \\Upsilon_T^b(\\lambda_i) \n- \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right| + \\underset{i \\in {\\cal I}}{\\text{max}} \n\\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) \\right| \\\\ &\\triangleq \\chi_1 + \\chi_2 + \\chi_3 . \n\\end{align*} \nWith the help of Lemma \\ref{lemma_d_quad} and \\eqref{lipschitz}, \nwe shall provide concentration inequalities on the random terms $\\chi_1$ \nand $\\chi_2$ and a bound on the deterministic term $\\chi_3$. \nThis is the purpose of the three following lemmas. Herein and in the \nremainder, $C$ denotes a positive constant independent of $T$. This constant \ncan change from an expression to another. \n\n\\begin{lemma}\n\\label{chi1} \nThere exists a constant $C > 0$ such that for any $x > 0$ and any $T$ large\nenough, \n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_1 > x \\right]\n\\leq \\exp \\Biggl( - cT^2\\Biggl( \\frac{x T^{\\beta-2}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} \n- \\log \\frac{x T^{\\beta-2}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} - 1 \\Biggr) \\Biggr) . 
\n\\end{equation*}\n\\end{lemma} \n\n\\begin{proof}\nUsing Lemmas \\ref{lemma_d_quad} and \\ref{lm-fq} along with \n\\eqref{lipschitz}, we have \n\\begin{align*} \n\\left| \\widehat \\Upsilon_T^b(\\lambda) - \\widehat \\Upsilon_T^b(\\lambda_i) \\right|\n&= \\left| d_T(\\lambda)^{\\sf H}\\frac{V_T^{\\sf H} V_T}{N} d_T(\\lambda) \n - d_T(\\lambda_i)^{\\sf H} \\frac{V_T^{\\sf H} V_T}{N}d_T(\\lambda_i) \\right| \\\\\n&\\leq 2 N^{-1} \\norme{d_T(\\lambda) - d_T(\\lambda_i)} \\| R_T\\| \\norme{W_T^{\\sf H} W_T} \\\\\n&\\leq C | \\lambda - \\lambda_i | \\| \\boldsymbol\\Upsilon\\|_\\infty \\norme{W_T^{\\sf H} W_T} . \n\\end{align*} \nFrom $\\| W_T^{\\sf H} W_T \\| \\leq \\tr(W_T^{\\sf H} W_T)$ and \nLemma~\\ref{chernoff}, assuming $T$ large enough so that $f(x,T) \\triangleq x T^{\\beta-1} \/ (C N \\| \\boldsymbol\\Upsilon\\|_\\infty)$ satisfies \n$f(x,T) \\geq 1$, we then obtain \n\\begin{align*}\n\\mathbb{P} \\left[ \\chi_1 > x \\right] &\\leq \n\\mathbb{P} \\left[ \nC \\| \\boldsymbol\\Upsilon\\|_\\infty T^{-\\beta} \n\\sum_{t=0}^{T-1}\\sum_{n=0}^{N-1} |w_{n,t}|^2 > x \\right] \\\\\n&= \\mathbb{P} \\left[ \n\\frac{1}{NT} \\sum_{n,t} (| w_{n,t} |^2 -1 ) > f(x,T) - 1 \\right] \\\\\n&\\leq \\exp( - NT( f(x,T) - \\log f(x,T) - 1 ) ) . \n\\end{align*} \n\\end{proof}\n\n\\begin{lemma} \n\\label{chi2} \nThe following inequality holds\n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_2 > x \\right] \\leq\n2T^{\\beta} \\exp \\Biggl( - c T \\Biggl( \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} - \n\\log \\Bigl( 1 + \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} \\Bigr) \\Biggr) \\Biggr). \n\\end{equation*}\n\\end{lemma} \n\n\\begin{proof}\nFrom the union bound we obtain:\n\\begin{align*} \n\\mathbb{P} \\left[ \\chi_2 > x \\right]\n&\\leq \\sum_{i=0}^{\\lfloor T^{\\beta} \\rfloor - 1} \n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^b(\\lambda_i) \n - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right| > x \\right].\n\\end{align*} \nWe shall bound each term of the sum separately. Since \n\\begin{equation*} \n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^b(\\lambda_i) \n - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right| > x \\right] \n= \\mathbb{P} \\left[ \\widehat \\Upsilon_T^b(\\lambda_i) \n - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) > x \\right] +\n\\mathbb{P} \\left[ - \\left( \\widehat \\Upsilon_T^b(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right) > x \\right] \n\\end{equation*}\nit will be enough to deal with the first right-hand side term as the second \none is treated similarly.\nLet $\\eta_T(\\lambda_i) \\triangleq W_T q_T(\\lambda_i) = \n\\left[ \\eta_{0,T}(\\lambda_i), \\ldots, \\eta_{N-1,T}(\\lambda_i) \\right]^{\\sf T}$ \nwhere $q_T(\\lambda_i) \\triangleq R_T^{1\/2} d_T(\\lambda_i)$. Observe that \n$\\eta_{k,T}(\\lambda_i) \\sim \\mathcal{CN}(0,\\| q_T(\\lambda_i) \\|^2 I_N)$. We know from Lemma~\\ref{lemma_d_quad} that \n\\begin{equation}\n\\widehat \\Upsilon_T^b(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) = \n\\frac 1N \\left( \\| \\eta_T(\\lambda_i) \\|^2 - \\mathbb{E} \\| \\eta_T(\\lambda_i) \\|^2 \\right).\n\\label{Epsilon_eta}\n\\end{equation}\nFrom (\\ref{Epsilon_eta}) and Lemma~\\ref{chernoff}, we therefore get \n\\begin{equation*} \n\\mathbb{P}\\left[ \\widehat \\Upsilon_T^b(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) > x \\right]\n\\leq \\exp \\Biggl( -N \\Biggl( \\frac{x}{\\| q_T(\\lambda_i) \\|^2} - \n \\log\\Bigl( 1 + \\frac{x}{\\| q_T(\\lambda_i) \\|^2} \\Bigr) \\Biggr) \\Biggr). 
\n\\end{equation*} \nNoticing that $\\| q_T(\\lambda_i) \\|^2 \\leq \\|\\boldsymbol\\Upsilon\\|_\\infty$ and that the function $f(x) = x - \\log \\Bigl( 1 + x \\Bigr)$ is increasing for $x>0$, we get the result.\n\\end{proof} \n\nFinally, the bound for the deterministic term $\\chi_3$ is provided by the following lemma:\n\\begin{lemma}\n\\label{chi3} \n$\\displaystyle{ \n\\chi_3 \\leq C \\| \\boldsymbol\\Upsilon\\|_\\infty T^{-\\beta + 1}\n}$. \n\\end{lemma} \n\\begin{proof} \nFrom Lemmas \\ref{lemma_d_quad} and \\ref{lm-fq} along with \n\\eqref{lipschitz}, we obtain\n\\begin{align*} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^b(\\lambda_i) \\right|\n&= \\left| d_T(\\lambda)^{\\sf H} R_T d_T(\\lambda) - d_T(\\lambda_i)^{\\sf H} R_T d_T(\\lambda_i) \\right| \\\\\n&\\leq 2 \\norme{R_T} \\norme{d_T(\\lambda) - d_T(\\lambda_i)} \\\\\n&\\leq C \\| \\boldsymbol\\Upsilon\\|_\\infty | \\lambda - \\lambda_i | T. \n\\end{align*} \nFrom $\\underset{i \\in {\\cal I}}{\\text{max}} \\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n| \\lambda - \\lambda_i | = \\lambda_{i+1} - \\lambda_i = T^{-\\beta}$ we get the result.\n\\end{proof}\n\nWe now complete the proof of Theorem \\ref{th-biased}. From \n\\eqref{determ-biased} and Lemma~\\ref{chi3}, we get\n\\begin{equation*}\n\\mathbb{P} \\left[\\norme{\\widehat R_T^b - R_T} > x \\right] = \n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right]. \n\\end{equation*} \nGiven a parameter $\\epsilon_T \\in [0,1]$, we can write (with some slight notation abuse)\n\\begin{equation*}\n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right] \\leq\n\\mathbb{P}\\left[ \\chi_1 > x \\epsilon_T \\right] + \\mathbb{P}\\left[\\chi_2 > x (1 - \\epsilon_T) + o(1) \\right].\n\\end{equation*}\nWith the results of Lemmas \\ref{chi1} and \\ref{chi2}, setting $\\epsilon_T=1\/T$, we get\n\\begin{align*}\n\\mathbb{P}\\left[ \\chi_1 + \\chi_2 > x + o(1) \\right] &\\leq\n\\mathbb{P}\\left[ \\chi_1 > \\frac{x}{T} \\right] + \\mathbb{P}\\left[ \\chi_2 > x (1 - \\frac{x}{T}) + o(1) \\right] \\\\\n&\\leq\n\\exp \\Bigl( - cT^2 \\Bigl( \\frac{x T^{\\beta-3}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} \n- \\log \\frac{x T^{\\beta-3}}{C \\| \\boldsymbol\\Upsilon\\|_\\infty} - 1 \\Bigr) \\Bigr) \\\\\n&+ \\exp \\Bigl( - cT \\Bigl( \\frac{x\\left( 1 - \\frac{1}{T} \\right)}{\\|\\boldsymbol\\Upsilon\\|_\\infty} - \n\\log \\Bigl( 1 + \\frac{x\\left( 1 - \\frac{1}{T} \\right)}{\\|\\boldsymbol\\Upsilon\\|_\\infty} \\Bigr) + o(1) \\Bigr) \\Bigr) \\\\\n&=\n\\exp \\Bigl( - cT \\Bigl( \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} - \n\\log \\Bigl( 1 + \\frac{x}{\\|\\boldsymbol\\Upsilon\\|_\\infty} \\Bigr) + o(1) \\Bigr) \\Bigr) \n\\end{align*}\nsince $\\beta>2$.\n\n\\subsection{Unbiased estimator: proof of Theorem \\ref{th-unbiased}}\\label{unbiased}\nThe proof follows basically the same main steps as for Theorem~\\ref{th-biased} \nwith an additional difficulty due to the scaling terms $1\/(T-|k|)$.\n\nDefining the function \n\\[ \n\\widehat \\Upsilon^u_T(\\lambda) \\triangleq \n\\sum_{k=-(T-1)}^{T-1} \\hat{r}_{k,T}^u e^{ik \\lambda}\n\\]\nwe have\n\\begin{equation*} \n\\norme{\\widehat{R}_T^u-R_T} \\leq \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\Upsilon_T(\\lambda) \\right| = \n\\underset{\\lambda\\in[0,2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \n \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right|\n\\end{equation*} \nsince 
$\\Upsilon_T(\\lambda)=\\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda)$, the estimates\n$\\hat{r}_{k,T}^u$ being unbiased. \n\nIn order to deal with the right-hand side of this expression, we need the \nfollowing analogue of Lemma~\\ref{lemma_d_quad}, borrowed from \n\\cite{val-loub-icassp14} and proven here in Appendix~\\ref{anx-lm-qf2}.\n\\begin{lemma}\n\\label{lemma_d_quad2}\nThe following fact holds:\n\\begin{align*}\n\\widehat \\Upsilon_T^u(\\lambda) &= d_T(\\lambda)^{\\sf H} \n\\left( \\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right) d_T(\\lambda)\n\\end{align*} \nwhere $\\odot$ is the Hadamard product of matrices and where \n\\[\n\tB_T \\triangleq \\left[ \\frac{T}{T-|i-j|} \\right]_{0\\leq i,j\\leq T-1}.\n\\] \n\\end{lemma}\n\nIn order to make $\\widehat \\Upsilon_T^u(\\lambda)$ more tractable, we rely on the \nfollowing lemma which can be proven by direct calculation.\n\\begin{lemma}\n\\label{lemma_hadamard}\nLet $x$, $y \\in \\mathbb{C}^{m}$ and $A, B \\in C^{m \\times m}$. Then \n\\begin{equation*}\nx^{\\sf H}( A \\odot B ) y = \\tr (D_x^{\\sf H} A D_y B^{\\sf T}) \n\\end{equation*}\nwhere we recall $D_x = \\diag(x)$ and $D_y = \\diag(y)$.\n\\end{lemma}\n\nDenoting\n\\begin{align*}\nD_T(\\lambda) &\\triangleq \\diag (d_T(\\lambda)) = \\frac{1}{\\sqrt{T}} \\diag(1, e^{i\\lambda}, \\ldots, e^{i(T-1)\\lambda} ) \\\\\nQ_T(\\lambda) &\\triangleq R_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H}\n\\end{align*}\nwe get from Lemmas~\\ref{lemma_d_quad2} and \\ref{lemma_hadamard} \n\\begin{align}\n\\label{Upsilon_sum} \n\\widehat \\Upsilon_T^u(\\lambda) &= \\frac1N\n\\tr(D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} W_T^{\\sf H} W_T R_T^{1\/2} D_T(\\lambda) B_T)\n\\nonumber\\\\ &= \\frac1N \\tr (W_T Q_T(\\lambda) W_T^{\\sf H}) \\nonumber \\\\\n&= \\frac1N\n\\sum_{n=0}^{N-1} w_n^{\\sf H} Q_T(\\lambda) w_n \n\\end{align} \nwhere $w_i^{\\sf H}$ is such that $W_T=[w_0^{\\sf H},\\ldots,w_{N-1}^{\\sf H}]$. \n\nCompared to the biased case, the main difficulty lies here in the fact \nthat the matrices $B_T\/T$ and $Q_T(\\lambda)$ have unbounded spectral norm as $T\\to\\infty$. \nThe following lemma, proven in Appendix~\\ref{prf-lm-B-Q}, provides some information on the spectral behavior of these\nmatrices that will be used subsequently. \n\\begin{lemma}\n\\label{lm-B-Q} \nThe matrix $B_T$ satisfies \n\\begin{equation}\n\\label{norme_B} \n\\norme{B_T} \\leq \\sqrt{2} T( \\sqrt{\\log T} + C). \n\\end{equation} \nFor any $\\lambda \\in[0, 2\\pi)$, the eigenvalues \n$\\sigma_0, \\ldots, \\sigma_{T-1}$ of the matrix $Q(\\lambda)$ satisfy the \nfollowing inequalities: \n\\begin{eqnarray}\n\\sum_{t=0}^{T-1} \\sigma_t^2 &\\leq& 2 \\norme{\\boldsymbol\\Upsilon}_{\\infty}^2 \\log T + C \\label{sum_sigma2}\\\\ \n\\underset{t}{\\max} |\\sigma_t| &\\leq& \\sqrt{2} \\| \\boldsymbol \\Upsilon \\|_\\infty \n( \\log T )^{1\/2} + C \\label{sig_max} \\\\\n\\sum_{t=0}^{T-1} |\\sigma_t|^3 &\\leq& C ((\\log T)^{3\/2} +1)\\label{sum_sigma3}\n\\end{eqnarray}\nwhere the constant $C$ is independent of $\\lambda$.\n\\end{lemma} \n\nWe shall also need the following easily shown Lipschitz property of the \nfunction $\\norme{D_T(\\lambda) - D_T(\\lambda')}$: \n\\begin{equation} \n\\label{lipschitz_D} \n\\| D_T(\\lambda) - D_T(\\lambda') \\| \\leq \\sqrt{T}|\\lambda - \\lambda'|. \n\\end{equation}\n\nWe now enter the core of the proof of Theorem~\\ref{th-unbiased}. 
\nChoosing $\\beta>2$, let $\\lambda_i=2 \\pi \\frac{i}{\\lfloor T^{\\beta}\\rfloor}$, \n$i \\in {\\cal I}$, be a regular discretization of the interval $[0, 2\\pi]$ with \n${\\cal I}=\\left\\{0, \\ldots, \\lfloor T^{\\beta} \\rfloor - 1 \\right\\}$. We write \n\\begin{align*}\n\\underset{\\lambda \\in [0, 2\\pi)}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \n&\\leq \\underset{i \\in {\\cal I}}{\\text{max}} \n\\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\widehat \\Upsilon_T^u(\\lambda)-\\widehat \\Upsilon_T^u(\\lambda_i) \\right| + \\underset{i \\in {\\cal I}}{\\text{max}} \\left| \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right|\\\\& + \\underset{i \\in {\\cal I}}{\\text{max}} \\underset{\\lambda \\in [\\lambda_i,\\lambda_{i+1}]}{\\text{sup}} \n\\left| \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \\\\ \n&\\triangleq \\chi_1 + \\chi_2 + \\chi_3 . \n\\end{align*} \n\nOur task is now to provide concentration inequalities on the random terms \n$\\chi_1$ and $\\chi_2$ and a bound on the deterministic term $\\chi_3$.\n\n\\begin{lemma}\n\\label{chi1-u} \nThere exists a constant $C > 0$ such that, if $T$ is large enough, the following \ninequality holds:\n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_1 > x \\right]\n\\leq \\exp \\left( - cT^2 \\left( \\frac{x T^{\\beta-2}}{C \\sqrt{\\log T}} \n- \\log \\frac{x T^{\\beta-2}}{C \\sqrt{\\log T}} - 1 \\right) \\right). \n\\end{equation*} \n\\end{lemma} \n\n\\begin{proof}\nFrom Equation~\\eqref{Upsilon_sum}, we have\n\\begin{align} \n\\left| \\widehat \\Upsilon_T^u(\\lambda) - \\widehat \\Upsilon_T^u(\\lambda_i) \\right|\n& = \\frac1N \\left| \\sum_{n=0}^{N-1} w_n^{\\sf H}\\left( Q_T(\\lambda) - Q_T(\\lambda_i) \\right) w_n \\right| \\nonumber \\\\\n& \\leq \\frac1N \\sum_{n=0}^{N-1} \\left| w_n^{\\sf H} \\left( Q_T(\\lambda) - Q_T(\\lambda_i) \\right) w_n \\right| \\nonumber\\\\\n& \\leq \\frac1N \\norme{Q_T(\\lambda) - Q_T(\\lambda_i)} \\sum_{n=0}^{N-1} \\norme{w_n}^2 \\nonumber.\n\\end{align}\nThe norm above further develops as\n\\begin{align*} \n&\\norme{Q_T(\\lambda) - Q_T(\\lambda_i)}\\\\\n& \\leq \\norme{R_T} \\Vert D_T(\\lambda)B_TD_T(\\lambda)^{\\sf H}- \nD_T(\\lambda_i)B_TD_T(\\lambda)^{\\sf H} + D_T(\\lambda_i)B_TD_T(\\lambda)^{\\sf H} - \nD_T(\\lambda_i)B_TD_T(\\lambda_i)^{\\sf H} \\Vert \\\\\n&\\leq 2 \\norme{D_T(\\lambda)} \\norme{R_T} \\norme{B_T} \n\\norme{D_T(\\lambda) - D_T(\\lambda_i)} \\leq C T ( \\sqrt{\\log T} + 1) \\left| \\lambda - \\lambda_i \\right| \n\\end{align*}\nwhere we used \\eqref{norme_B}, \\eqref{lipschitz_D}, and $\\norme{D_T(\\lambda)}=1\/\\sqrt{T}$.\nUp to a change in $C$, we can finally write $\\norme{Q_T(\\lambda) - Q_T(\\lambda_i)} \\leq \nC T^{1-\\beta} \\sqrt{\\log T}$. Assume that $f(x,T) \\triangleq xT^{\\beta-2}\/\n\\left( C \\sqrt{\\log T} \\right)$ \nsatisfies $f(x,T) > 1$ (always possible for every fixed $x$ by taking $T$ large). 
Then we get by Lemma~\\ref{chernoff} \n\\begin{align*} \n\\mathbb{P} \\left[ \\chi_1 > x \\right] & \\leq \\mathbb{P} \\left( C N^{-1} T^{1-\\beta} \n\\sqrt{\\log T} \\sum_{n,t} \\left|w_{n,t}\\right|^2 > x \\right) \\\\ \n&= \\mathbb{P} \\left( \\frac{1}{NT} \\sum_{n,t} (\\left|w_{n,t}\\right|^2 - 1) \n> f(x,T) - 1 \\right) \\\\\n&\\leq\n\\exp \\left( - NT \\left( f(x,T) - \\log \\left(f(x,T)\\right) - 1 \\right) \\right).\n\\end{align*}\n\\end{proof}\n\nThe most technical part of the proof is to control the term $\\chi_2$, which we handle hereafter.\n\n\\begin{lemma}\n\\label{chi2-u} \nThe following inequality holds:\n\\begin{equation*}\n\\mathbb{P} \\left[ \\chi_2 > x \\right]\n\\leq \\exp \\left(- \\frac{cx^2T}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \\right).\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\nFrom the union bound we obtain:\n\\begin{equation} \\label{union_bound}\n\\mathbb{P} \\left[ \\chi_2 > x \\right]\n\\leq \\sum_{i=0}^{\\lfloor T^{\\beta} \\rfloor - 1} \n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^u(\\lambda_i) \n - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right| > x \\right].\n\\end{equation} \nEach term of the sum can be written\n\\begin{equation*}\n\\mathbb{P} \\left[ \\left| \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right| > x \\right] = \\mathbb{P} \\left[\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) > x \\right] + \\mathbb{P} \\left[- \\left(\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right) > x \\right].\n\\end{equation*}\nWe will deal with the term $\\psi_i = \\mathbb{P} \\left[\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) > x \\right]$, the term $\\mathbb{P} \\left[- \\left(\\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) \\right) > x \\right]$ being treated similarly.\nLet $Q_T(\\lambda_i) = U_T \\Sigma_T U_T^{\\sf H}$ be a spectral \nfactorization of the Hermitian matrix $Q_T(\\lambda_i)$ with \n$\\Sigma_T =\\diag (\\sigma_{0},\\ldots,\\sigma_{T-1})$. \nSince $U_T$ is unitary and $W_T$ has independent ${\\cal CN}(0,1)$ elements,\nwe get from Equation \\eqref{Upsilon_sum} \n\\begin{equation}\n\\label{Ups_u}\n\\widehat \\Upsilon_T^u(\\lambda_i) \\stackrel{\\cal L}{=} \n\\frac{1}{N}\\sum_{n=0}^{N-1} w_n^{\\sf H} \\Sigma_T(\\lambda_i) w_n \n= \\frac{1}{N}\\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} |w_{n,t}|^2 \\sigma_t\n\\end{equation}\nwhere $\\stackrel{\\cal L}{=}$ denotes equality in law. 
\nSince $\\mathbb{E} \\left[ e^{a|X|^2} \\right]= 1\/(1-a)$ when $X \\sim {\\cal CN}(0, 1)$ \nand $0 x \\right) \\nonumber \\\\ \n&\\leq \\mathbb{E}\\left[\\text{exp}\\Bigl( \n\\frac{\\tau}{N}\\sum_{n,t} |w_{n,t}|^2\\sigma_t \\Bigr) \\right]\n\\exp \\Bigl(-\\tau\\Bigl(x + \\sum_{t=0}^{T-1}\\sigma_t \\Bigr)\\Bigr) \\nonumber \\\\ \n&= \n\\exp \\Bigl( -\\tau \\Bigl(x+ \\sum_{t=0}^{T-1} \\sigma_t \\Bigr) \\Bigr) \n\\prod_{t=0}^{T-1} \\Bigl( 1- \\frac{\\sigma_t\\tau}{N} \\Bigr)^{-N} \n\\label{pro} \\\\\n&= \\exp\\Bigl(-\\tau \\Bigl(x+ \\sum_{t=0}^{T-1} \\sigma_t \\Bigr) \n - N \\sum_{t=0}^{T-1} \\log \\Bigl(1-\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr) \n\\nonumber\n\\end{align}\nfor any $\\tau$ such that $0 \\leq \\tau < \\underset{0\\leq t\\leq T-1}{\\text{min}}\\frac{N}{\\sigma_t}$.\nWriting $\\log(1-x) = - x - \\frac{x^2}{2} + R_3(x)$ with $\\left| R_3(x) \\right| \\leq \\frac{|x|^3}{3(1-\\epsilon)^3}$ when $|x|<\\epsilon<1$, we get\n\\begin{align}\n\\psi_i &\\leq\n\\exp \\Bigl(-\\tau x + N \\sum_{t=0}^{T-1}\\Bigl( \\frac{\\sigma_t^2\\tau^2}{2N^2} \n+ R_3 \\Bigl(\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr) \\Bigr) \\nonumber \\\\\n&\\leq \\exp \\Bigl(-N \\Bigl( \\frac{\\tau x}{N} - \\frac{\\tau^2}{2N^2} \\sum_{t=0}^{T-1} \\sigma_t^2 \\Bigr) \\Bigr) \\exp \\Bigl( N \\sum_{t=0}^{T-1} \\Bigl| R_3 \\Bigl(\\frac{\\sigma_t \\tau}{N} \\Bigr) \\Bigr| \\Bigr) \\label{expr}.\n\\end{align} \nWe shall manage this expression by using Lemma~\\ref{lm-B-Q}. In order to \ncontrol the term $\\exp(N \\sum |R_3(\\cdot)|)$, we make the choice\n\\begin{equation*}\n\\tau = \\frac{axT}{\\log T}\n\\end{equation*}\nwhere $a$ is a parameter of order one to be optimized later.\nFrom \\eqref{sig_max} we get \n$\\max_t\\frac{\\sigma_t \\tau}{N} = O \\left( (\\log T)^{-1\/2} \\right)$. Hence, \nfor all $T$ large, $\\tau < \\min_t \\frac{N}{\\sigma_t}$. Therefore, \\eqref{pro} \nis valid for this choice of $\\tau$ and for $T$ large. Moreover, for $\\epsilon$ \nfixed and $T$ large, $\\frac{\\sigma_t \\tau}{N} < \\epsilon <1$ so that for \nthese $T$\n\\begin{equation*}\nN \\sum_{t=0}^{T-1} \\Bigl| R_3 \\Bigl(\\frac{\\sigma_t\\tau}{N} \\Bigr) \\Bigr| \n\\leq \\frac{a^3T^3x^3}{3N^2(1-\\epsilon)^3 (\\log T)^3} \\sum_{t=0}^{T-1} |\\sigma_t|^3 \n= O \\left( {T}{(\\log T)^{-3\/2}}\\right) \n\\end{equation*}\nfrom (\\ref{sum_sigma3}).\nPlugging the expression of $\\tau$ in (\\ref{expr}), we get \n\\begin{equation*}\n\\psi_i \\leq \\exp \\Bigl(-N \\Bigl( \\frac{a T x^2}{(\\log T)N} - \\frac{a^2T^2x^2}{2N^2(\\log T)^2} \\sum_{t=0}^{T-1} \\sigma_t^2 \\Bigr) \\Bigr) \\exp \\Bigl( C \\Bigl( {T}{(\\log T)^{-3\/2}} \\Bigr) \\Bigr) . \n\\end{equation*}\nUsing (\\ref{sum_sigma2}), we have\n\\begin{equation*}\n\\psi_i \\leq \\exp \\Bigl(-\\frac{x^2T}{\\log T} \n\\Bigl(a - \\frac{\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2a^2T}{N} \\Bigr) \\Bigr) \n\\exp\\Bigl( \\frac{CT}{(\\log T)^{3\/2}} \\Bigr). \n\\end{equation*}\nThe right hand side term is minimized for $a=\\frac{N}{2T\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2}$ which finally gives\n\\begin{equation*}\n\\psi_i \n\\leq \\exp \\Bigl(- \\frac{Nx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \\Bigr).\n\\end{equation*}\nCombining the above inequality with (\\ref{union_bound}) (which induces additional $o(1)$ terms in the argument of the exponential) concludes the lemma.\n\\end{proof}\n\n\\begin{lemma}\n\\label{chi3-u} \n$\\displaystyle{ \n\\chi_3 \\leq C T^{-\\beta+2} \\sqrt{\\log T}\n}$. 
\n\\end{lemma}\n\n\\begin{proof}\nFrom Lemma~\\ref{lemma_d_quad2}, $\\norme{R_T \\odot B_T} \\leq \\norme{R_T}\\norme{B_T}$ (see \\cite[Theorem 5.5.1]{HornJoh'91}), and \\eqref{lipschitz}, we get:\n\\begin{equation*}\n\\left| \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda_i) - \\mathbb{E} \\widehat \\Upsilon_T^u(\\lambda) \\right| \n\\leq 2 \\norme{d_T(\\lambda) - d_T(\\lambda_i)} \\norme{R_T} \\norme{B_T}\n\\leq C T^2 \\left| \\lambda - \\lambda_{i} \\right| \\| \\boldsymbol\\Upsilon\\|_\\infty \\sqrt{\\log T}.\n\\end{equation*}\n\\end{proof}\nLemmas \\ref{chi1-u}--\\ref{chi3-u} show that $\\mathbb{P}[\\chi_2 > x]$ dominates the\nterm $\\mathbb{P}[\\chi_1 > x]$ and that the term $\\chi_3$ is vanishing. Mimicking the\nend of the proof of Theorem \\ref{th-biased}, we obtain Theorem \n\\ref{th-unbiased}. \n\nWe conclude this section by an empirical evaluation by Monte Carlo simulations of $\\mathbb{P}[\\Vert {\\widehat R_T - R_T}\\Vert > x]$ (curves labeled Biased and Unbiased), with $\\widehat R_T\\in\\{\\widehat R_T^b,\\widehat R_T^u\\}$, $T=2N$, $x=2$. This is shown in Figure~\\ref{det} against the theoretical exponential bounds of Theorems~\\ref{th-biased} and \\ref{th-unbiased} (curves labeled Biased theory and Unbiased theory). We observe that the rates obtained in Theorems~\\ref{th-biased} and \\ref{th-unbiased} are asymptotically close to optimal.\n\n\\begin{figure}[H]\n\\center\n\t\\begin{tikzpicture}[font=\\footnotesize]\n\t\t\\renewcommand{\\axisdefaulttryminticks}{2} \n\t\t\\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n\t\t\\tikzstyle{every major grid}+=[style=densely dashed] \n\t\t\t \n\t\t\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.99,0.02)},anchor=south east}} \n\t\t\\tikzstyle{every axis y label}+=[yshift=-10pt] \n\t\t\\tikzstyle{every axis x label}+=[yshift=5pt]\n\t\t\\begin{axis}[\n\t\t\t\tgrid=major,\n\t\t\t\n\t\t\t\txlabel={$N$},\n\t\t\t\tylabel={$T^{-1} \\log \\left( \\mathbb P \\left[ \\norme {\\widehat R_T - R_T} > x \\right] \\right)$},\n\t\t\t ytick={-0.2,-0.15,-0.1,-0.05,0},\n yticklabels = {$-0.2$,$-0.15$,$-0.1$,$-0.05$,$0$},\n\t\t\t\n \n\t\t\t\n\t\t\t\txmin=10,\n\t\t\t\txmax=40, \n\t\t\t\tymin=-0.2, \n\t\t\t\tymax=0,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n\t\t\t]\n\n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,-0.052766)(12.000000,-0.051276)(14.000000,-0.050322)(16.000000,-0.049673)(18.000000,-0.049212)(20.000000,-0.048872)(22.000000,-0.048614)(24.000000,-0.048414)(26.000000,-0.048255)(28.000000,-0.048127)(30.000000,-0.048023)(32.000000,-0.047936)(34.000000,-0.047864)(36.000000,-0.047803)(38.000000,-0.047750)(40.000000,-0.047705)\n\n\n\n };\n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,-0.178687)(12.000000,-0.154996)(14.000000,-0.137000)(16.000000,-0.122875)(18.000000,-0.112712)(20.000000,-0.1042589)(22.000000,-0.096377)(24.000000,-0.091108)(26.000000,-0.085960)(28.000000,-0.082679)(30.000000,-0.080274)(32.000000,-0.077053)(34.000000,-0.074633)(36.000000,-0.071135)(38.000000,-0.069311)(40.000000,-0.067182)\n\n\n\n\n };\n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=o] plot 
coordinates{\n(10.000000,-0.011823)(12.000000,-0.010787)(14.000000,-0.010070)(16.000000,-0.009540)(18.000000,-0.009129)(20.000000,-0.008799)(22.000000,-0.008526)(24.000000,-0.008295)(26.000000,-0.008097)(28.000000,-0.007924)(30.000000,-0.007771)(32.000000,-0.007635)(34.000000,-0.007512)(36.000000,-0.007401)(38.000000,-0.007300)(40.000000,-0.007206)\n\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(10.000000,-0.063852)(12.000000,-0.050732)(14.000000,-0.041205)(16.000000,-0.034622)(18.000000,-0.029335)(20.000000,-0.025196)(22.000000,-0.021541)(24.000000,-0.018643)(26.000000,-0.015748)(28.000000,-0.013602)(30.000000,-0.012069)(32.000000,-0.010916)(34.000000,-0.010038)(36.000000,-0.007765)(38.000000,-0.007908)(40.000000,-0.007151)\n\n\n\n };\n\n \\legend{ {Biased theory},{Biased},{Unbiased theory},{Unbiased}}\n \\end{axis}\n\t\\end{tikzpicture}\n\\caption{Error probability of the spectral norm for $x=2$, $c=0.5$, $[R_T]_{k,l}=a^{|k-l|}$ with $a=0.6$.}\n\\label{det}\n\\end{figure}\n\n\\section{Covariance matrix estimators for the \\\\ ``Signal plus Noise'' model} \n\\label{sig+noise} \n\n\\subsection{Model, assumptions, and results}\n\\label{subsec-model-perturbed} \n\nConsider now the following model:\n\\begin{equation}\\label{model2}\n\tY_T = [y_{n,t}]_{\\substack{0\\leq n\\leq N-1 \\\\0 \\leq t\\leq T-1}} =P_T+V_T\n\\end{equation}\nwhere the $N\\times T$ matrix $V_T$ is defined in \\eqref{model1} and where $P_T$ satisfies the \nfollowing assumption: \n\\begin{assumption} \n\\label{ass-signal} \n$P_T \\triangleq \\boldsymbol h_T \\boldsymbol s_T^{\\sf H} \\Gamma_T^{1\/2}$ where $\\boldsymbol h_T\\in \\mathbb{C}^N$ is a \ndeterministic vector such that\n$\\sup_T \\| \\boldsymbol h_T \\| < \\infty$, the vector \n$\\boldsymbol s_T = (s_0, \\ldots, s_{T-1})^{\\sf T} \\in \\mathbb{C}^T$ is a random vector\nindependent of $W_T$ with the distribution ${\\cal CN}(0, I_T)$, and \n$\\Gamma_T = [\\gamma_{ij} ]_{i,j=0}^{T-1}$ is Hermitian nonnegative such that $\\sup_T \\| \\Gamma_T \\| < \\infty$. \n\\end{assumption}\n\nWe have here a model for a rank-one signal corrupted with a Gaussian \nspatially white and temporally correlated noise with stationary temporal\ncorrelations. Observe that the signal can also be temporally correlated. \nOur purpose is still to estimate the noise correlation matrix\n$R_T$. To that end, we use one of the estimators \\eqref{est-b} or \\eqref{est-u} \nwith the difference that the samples $v_{n,t}$ are simply replaced with the \nsamples $y_{n,t}$. It turns out that these estimators are still consistent in \nspectral norm. Intuitively, $P_T$ does not break the \nconsistence of these estimators as it can be seen as a rank-one \nperturbation of the noise term $V_T$ in which the subspace spanned by \n$(\\Gamma^{1\/2})^{\\sf H} \\boldsymbol s_T$ is ``delocalized'' enough so as not to perturb \nmuch the estimators of $R_T$. In fact, we even have the following strong result.\n\\begin{theorem}\n\\label{th-perturb} \nLet $Y_T$ be defined as in \\eqref{model2} and let Assumptions~\\ref{ass-rk}--\\ref{ass-signal} hold. 
Define the\nestimates \n\\begin{align*}\n\\hat{r}_{k,T}^{bp}&= \n\\frac{1}{NT} \\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}\ny_{n,t+k} y_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n\\hat{r}_{k,T}^{up}&= \n\\frac{1}{N(T-|k|)} \\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}\ny_{n,t+k} y_{n,t}^* \\mathbbm{1}_{0 \\leq t+k \\leq T-1}\n\\end{align*}\nand let \n\\begin{align*}\n\\widehat R_T^{bp} &= {\\cal T}(\\hat{r}_{-(T-1),T}^{bp},\\ldots,\\hat{r}_{(T-1),T}^{bp}) \\\\\n\\widehat R_T^{up} &= {\\cal T}(\\hat{r}_{-(T-1),T}^{up},\\ldots,\\hat{r}_{(T-1),T}^{up}).\n\\end{align*}\nThen for any $x > 0$, \n\\begin{equation*} \n\\mathbb{P} \\left[\\norme{\\widehat R_T^{bp} - R_T} > x \\right]\n\\leq \n\\exp \\Bigl( -cT \\Bigl( \\frac{x}{\\| \\boldsymbol\\Upsilon \\|_\\infty} - \n \\log \\Bigl( 1 + \\frac{x}{\\| \\boldsymbol\\Upsilon \\|_\\infty} \\Bigr) + o(1) \\Bigr) \n\\Bigr) \n\\end{equation*} \nand \n\\begin{equation*} \n\\mathbb{P} \\left[\\norme{\\widehat R_T^{up} - R_T} > x \\right]\n\\leq \n\\exp \\Bigl(- \\frac{cTx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \n\\Bigr). \n\\end{equation*} \n\\end{theorem}\n\nBefore proving this theorem, some remarks are in order.\n\\begin{remark}\nTheorem \\ref{th-perturb} generalizes without difficulty to the case where $P_T$ has a fixed rank $K>1$. This captures the situation of $K\\ll \\min(N,T)$ sources.\n\\end{remark} \n\\begin{remark}\n\tSimilar to the proofs of Theorems \\ref{th-biased} and \\ref{th-unbiased}, the proof of Theorem~\\ref{th-perturb} uses concentration inequalities for functionals of Gaussian random variables based on the moment generating function and the Chernoff bound. Exploiting instead McDiarmid's concentration inequality \\cite{ledoux}, it is possible to adapt Theorem~\\ref{th-perturb} to $\\boldsymbol s_T$ with bounded (instead of Gaussian) entries. This adaptation may account for discrete sources met in digital communication signals. \n\\end{remark} \n\n\\subsection{Main elements of the proof of Theorem \\ref{th-perturb}} \n\nWe restrict the proof to the more technical part that concerns $\\widehat R^{up}_T$. \nDefining\n\\begin{eqnarray*}\n\\widehat \\Upsilon_T^{up}(\\lambda) \\triangleq \\sum_{k=-(T-1)}^{T-1} \\hat{r}_{k,T}^{up} e^{ik \\lambda}\n\\end{eqnarray*}\nand recalling that $\\Upsilon_T(\\lambda) = \\sum_{k=-(T-1)}^{T-1} r_{k} e^{ik \\lambda}$, \nwe need to establish a concentration inequality on \\linebreak\n$\\mathbb{P} \\left[ \\sup_{\\lambda\\in[0,2\\pi)} | \\widehat \\Upsilon_T^{up}(\\lambda) - \n\\Upsilon_T(\\lambda) | > x \\right]$. 
For any $\\lambda\\in[0,2\\pi)$, the term \n$\\widehat \\Upsilon_T^{up}(\\lambda)$ can be written as \n(see Lemma~\\ref{lemma_d_quad2}) \n\\begin{align*}\n\\widehat \\Upsilon_T^{up}(\\lambda)&= \nd_T(\\lambda)^{\\sf H} \\left(\\frac{Y_T^{\\sf H}Y_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&= \nd_T(\\lambda)^{\\sf H} \\left(\\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&+ \nd_T(\\lambda)^{\\sf H} \\left(\\frac{P_T^{\\sf H}V_T+V_T^{\\sf H}P_T}{N} \\odot B_T \n\\right) d_T(\\lambda) \\\\\n&+ d_T(\\lambda)^{\\sf H} \\left(\\frac{P_T^{\\sf H}P_T}{N} \\odot B_T \\right)\nd_T(\\lambda) \\\\\n&\\triangleq \\widehat \\Upsilon_T^{u}(\\lambda) + \n\\widehat \\Upsilon_T^{cross}(\\lambda) + \\widehat \\Upsilon_T^{sig}(\\lambda) \n\\end{align*} \nwhere $B_T$ is the matrix defined in the statement of \nLemma~\\ref{lemma_d_quad2}.\nWe know from the proof of Theorem~\\ref{th-unbiased} that \n\\begin{equation} \n\\label{noise-term} \n\\mathbb{P} \\left[\\sup_{\\lambda\\in[0,2\\pi)} \n| \\widehat \\Upsilon_T^{u}(\\lambda) - \\Upsilon_T(\\lambda) | > x \\right]\n\\leq \n\\exp \\left(- \\frac{cTx^2}{4\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2\\log T} (1 + o(1)) \n\\right) . \n\\end{equation} \nWe then need only handle the terms $\\widehat \\Upsilon_T^{cross}(\\lambda)$ and \n$\\widehat \\Upsilon_T^{sig}(\\lambda)$. \n\nWe start with a simple lemma. \n\\begin{lemma}\n\\label{lm-prod-gauss}\nLet $X$ and $Y$ be two independent ${\\cal N}(0,1)$ random variables. Then for \nany $\\tau \\in(-1,1)$, \n\\[\n\\mathbb{E}[\\exp(\\tau XY)] = (1 - \\tau^2)^{-1\/2}.\n\\]\n\\end{lemma}\n\\begin{proof}\n\\begin{align*}\n\\mathbb{E}[\\exp(\\tau XY)] &= \\frac{1}{2\\pi} \\int_{\\mathbb R^2} \ne^{\\tau xy} e^{-x^2\/2} e^{-y^2\/2} \\, dx\\, dy \\\\\n&= \\frac{1}{2\\pi} \\int_{\\mathbb R^2} \ne^{-(x-\\tau y)^2\/2} e^{-(1 - \\tau^2)y^2\/2} \\, dx\\, dy \\\\\n&= (1 - \\tau^2)^{-1\/2} .\n\\end{align*}\n\\end{proof} \nWith this result, we now have\n\\begin{lemma} \n\\label{cross-term}\nThere exists a constant $a > 0$ such that \n\\[\n\\mathbb{P}\\left[ \\sup_{\\lambda \\in[0,2\\pi)} | \\widehat \\Upsilon_T^{cross}(\\lambda) | \n> x \\right] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}}(1 + o(1)) \\Bigr) . \n\\]\n\\end{lemma}\n\\begin{proof}\nWe only sketch the proof of this lemma. We show that for any \n$\\lambda \\in [0, 2\\pi]$, \n\\[\n\\mathbb{P}[ | \\widehat \\Upsilon_T^{cross}(\\lambda) | \n> x ] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}} + C \\Bigr) \n\\]\nwhere $C$ does not depend on $\\lambda \\in [0,2\\pi]$. The lemma is then proven by a discretization\nargument of the interval $[0, 2\\pi]$ analogous to what was done in the \nproofs of Section~\\ref{unperturbed}.\nWe shall bound $\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x ]$, the term\n$\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) < - x ]$ being bounded similarly. 
\nFrom Lemma~\\ref{lemma_hadamard}, we get \n\\begin{align*}\n\\widehat \\Upsilon_T^{cross}(\\lambda) &= \n\\tr \\Bigl( D_T(\\lambda)^{\\sf H} \\frac{P^{\\sf H}_T V_T + V_T^{\\sf H} P_T}{N} \nD_T(\\lambda) B_T \\Bigr) \\\\\n&= \\tr \\frac{D_T(\\lambda)^{\\sf H} (\\Gamma_T^{1\/2})^{\\sf H} {\\boldsymbol s}_T \n{\\boldsymbol h}_T^{\\sf H} W_T R_T^{1\/2} D_T(\\lambda) B_T}{N} \\\\\n&\\phantom{=} + \n\\tr \\frac{D_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} W_T^{\\sf H} {\\boldsymbol h}_T \n{\\boldsymbol s}_T^{\\sf H} \\Gamma_T^{1\/2} D_T(\\lambda) B_T}{N} \\\\\n&= \\frac 2N \\Re ( \\boldsymbol h_T^{\\sf H} W_T G_T(\\lambda) \\boldsymbol s_T ) \n\\end{align*} \nwhere $G_T(\\lambda) = R_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} \n(\\Gamma_T^{1\/2})^{\\sf H}$. Let $G_T(\\lambda) = U_T \\Omega_T \n\\widetilde U_T^{\\sf H}$ be a singular value decomposition of $G_T(\\lambda)$ where \n$\\Omega = \\diag(\\omega_0, \\ldots, \\omega_{T-1})$. Observe that the vector \n$\\boldsymbol x_T \\triangleq W_T^{\\sf H} \\boldsymbol h_T = (x_0,\\ldots, x_{T-1})^{\\sf T}$ has \nthe distribution ${\\cal CN}(0, \\| \\boldsymbol h_T \\|^2 I_T)$. We can then write \n\\[\n\\widehat \\Upsilon_T^{cross}(\\lambda) \\stackrel{{\\cal L}}{=} \n\\frac 2N \\Re\\left( \\boldsymbol x_T^{\\sf H} \\Omega_T \\boldsymbol s_T \\right)\n= \\frac 2N \\sum_{t=0}^{T-1} \\omega_t ( \\Re x_t \\Re s_t + \\Im x_t \\Im s_t). \n\\]\nNotice that $\\{ \\Re x_t, \\Im x_t, \\Re s_t, \\Im s_t \\}_{t=0}^{T-1}$ are\nindependent with $\\Re x_t, \\Im x_t \\sim {\\cal N}(0, \\| \\boldsymbol h_T \\|^2\/2)$ and \n$\\Re s_t, \\Im s_t \\sim {\\cal N}(0, 1\/2)$. Letting $0 < \\tau < (\\sup_T \\| \\boldsymbol h_T \\|)^{-1}(\\sup_{\\lambda} \\| G_T(\\lambda) \\|)^{-1}$ and using Markov's inequality and Lemma~\\ref{lm-prod-gauss}, we get \n\\begin{align*} \n\\mathbb{P} \\left[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x \\right] &= \n \\mathbb{P} \\left[ e^{N \\tau \\widehat \\Upsilon_T^{cross}(\\lambda)} > e^{N \\tau x} \\right] \n\\leq e^{-N \\tau x} \\mathbb{E} \\left[ e^{2\\tau \\sum_t \n\\omega_t ( \\Re x_t \\Re s_t + \\Im x_t \\Im s_t)} \\right] \\\\\n&= e^{-N\\tau x} \\prod_{t=0}^{T-1} \\left( 1 - \\tau^2 \\omega_t^2 \\| \\boldsymbol h_T \\|^2 \\right)^{-1} \n= \\exp \\left( -N \\tau x - \\sum_{t=0}^{T-1} \\log( 1 - \\tau^2 \\omega_t^2 \\| \\boldsymbol h_T\\|^2 ) \\right). \n\\end{align*}\nMimicking the proof of Lemma~\\ref{lm-B-Q}, we can establish that \n$\\sum_t \\omega_t^2 = O(\\log T)$ and $\\max_t \\omega_t = O(\\sqrt{\\log T})$ \nuniformly in $\\lambda \\in [0, 2\\pi]$. Set $\\tau = b \/ \\sqrt{\\log T}$ where\n$b > 0$ is small enough so that \n$\\sup_{T,\\lambda} (\\tau \\| \\boldsymbol h_T \\| \\, \\| G_T(\\lambda) \\|) < 1$. \nObserving that $\\log(1-x) = O(x)$ for $x$ small enough, we get \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{cross}(\\lambda) > x ] \\leq \n\\exp \\bigl( -N bx\/\\sqrt{\\log T} + {\\cal E}(\\lambda, T) \\bigr) \n\\]\nwhere $| {\\cal E}(\\lambda, T) | \\leq (C \/ \\log T) \\sum_t \\omega_t^2 \\leq C$. \nThis establishes Lemma~\\ref{cross-term}. \n\\end{proof}\n\n\\begin{lemma} \n\\label{signal-term}\nThere exists a constant $a > 0$ such that \n\\[\n\\mathbb{P}\\left[ \\sup_{\\lambda \\in[0,2\\pi)} | \\widehat \\Upsilon_T^{sig}(\\lambda) | \n> x \\right] \\leq \\exp\\Bigl( - \\frac{axT}{\\sqrt{\\log T}}(1 + o(1)) \\Bigr) . 
\n\\]\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{lemma_hadamard},\n\\begin{align*} \n\\widehat \\Upsilon_T^{sig}(\\lambda) &= \nN^{-1} \\tr ( D_T^{\\sf H} P_T^{\\sf H} P_T D_T B_T ) \\\\\n&= \\frac{\\| \\boldsymbol h_T \\|^2}{N} \\boldsymbol s_T^{\\sf H} G_T(\\lambda) \\boldsymbol s_T \n\\end{align*} \nwhere $G_T(\\lambda) = \\Gamma_T^{1\/2} D_T(\\lambda) B_T D_T(\\lambda)^{\\sf H} \n(\\Gamma_T^{1\/2})^{\\sf H}$. By the spectral factorization \n$G_T(\\lambda) = U_T \\Sigma_T U_T^{\\sf H}$ with \n$\\Sigma_T = \\diag(\\sigma_0, \\ldots, \\sigma_{T-1})$, we get\n\\[\n\\widehat \\Upsilon_T^{sig}(\\lambda) \\stackrel{{\\cal L}}{=} \n\\frac{\\| \\boldsymbol h_T \\|^2}{N} \\sum_{t=0}^{T-1} \\sigma_t | s_t |^2 \n\\] \nand\n\\begin{align*} \n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] &\\leq \ne^{-N\\tau x} \\mathbb{E} \\Bigl[ e^{\\tau \\|\\boldsymbol h_T\\|^2 \\sum_t \\sigma_t | s_t|^2}\\Bigr] \\\\\n&= \\exp\\Bigl( -N \\tau x - \\sum_{t=0}^{T-1} \n\\log(1 - \\sigma_t \\tau \\|\\boldsymbol h_T\\|^2) \\Bigr) \n\\end{align*} \nfor any $\\tau \\in (0, 1 \/ (\\|\\boldsymbol h_T\\|^2 \\sup_\\lambda \\| G_T(\\lambda)\\|))$. \nLet us show that \n\\[\n| \\tr G_T(\\lambda) | \\leq C \\sqrt{\\frac{\\log T + 1}{T}} . \n\\] \nIndeed, we have \n\\begin{align*}\n| \\tr G_T(\\lambda) | &= N^{-1} | \\tr D_T B_T D_T^{\\sf H} \\Gamma_T | = \\frac 1N \\left| \\sum_{k,\\ell=0}^{T-1} \\frac{e^{-\\imath (k-\\ell)\\lambda} \n\\gamma_{\\ell,k}}{T-|k-\\ell|} \\right| \\\\ \n&\\leq \\Bigl( \\frac 1N \\sum_{k,\\ell=0}^{T-1} |\\gamma_{k,\\ell}|^2 \\Bigr)^{1\/2} \n\\Bigl( \\frac 1N \\sum_{k,\\ell=0}^{T-1} \\frac{1}{(T-|k-\\ell|)^2} \\Bigr)^{1\/2} \\\\\n&= \\Bigl(\\frac{\\tr \\Gamma_T\\Gamma_T^{\\sf H}}{N} \\Bigr)^{1\/2} \n\\Bigl(\\frac 2N (\\log T + C) \\Bigr)^{1\/2} \\leq C \\sqrt{\\frac{\\log T + 1}{T}} .\n\\end{align*} \nMoreover, similar to the proof of Lemma~\\ref{lm-B-Q}, we can show that \n$\\sum_t\\sigma_t^2 = O(\\log T)$ and $\\max_t |\\sigma_t| = O(\\sqrt{\\log T})$ \nuniformly in $\\lambda$. Taking $\\tau = b \/ \\sqrt{\\log T}$ for \n$b > 0$ small enough, and recalling that $\\log(1-x) = 1 - x + O(x^2)$\nfor $x$ small enough, we get that \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] \\leq \n\\exp\\Bigl( - \\frac{N bx}{\\sqrt{\\log T}} + \n\\frac{b \\| \\boldsymbol h_T \\|^2}{\\sqrt{\\log T}} \\tr G_T(\\lambda) + \n{\\cal E}(T,\\lambda) \\Bigr) \n\\]\nwhere $| {\\cal E}(T,\\lambda) | \\leq (C \/ \\log T) \\sum_t \\sigma_t^2 \\leq C$. \nWe therefore get \n\\[\n\\mathbb{P}[ \\widehat \\Upsilon_T^{sig}(\\lambda) > x ] \\leq \n\\exp\\Bigl( - \\frac{N bx}{\\sqrt{\\log T}} + C \\Bigr) \n\\]\nwhere $C$ is independent of $\\lambda$. Lemma~\\ref{signal-term} is then obtained\nby the discretization argument of the interval $[0,2\\pi]$.\n\\end{proof}\n\nGathering Inequality~\\eqref{noise-term} with Lemmas~\\ref{cross-term} and \n\\ref{signal-term}, we get the second inequality of the statement of \nTheorem~\\ref{th-perturb}. \n\n\n\\section{Application to source detection} \\label{detect}\n\nConsider a sensor network composed of $N$ sensors impinged by zero (hypothesis $H_0$) or one (hypothesis $H_1$) source signal. 
The stacked signal matrix $Y_T=[y_0,\\ldots,y_{T-1}]\\in\\mathbb{C}^{N\\times T}$ from time $t=0$ to $t=T-1$ is modeled as\n\\begin{eqnarray}\\label{model_det}\nY_T = \\left\\{\n  \\begin{array}{ll}\n    V_T & \\mbox{, $H_0$} \\\\\n    \\boldsymbol h_T \\boldsymbol s_T^{\\sf H} + V_T& \\mbox{, $H_1$}\n  \\end{array}\n\\right.\n\\end{eqnarray}\nwhere $\\boldsymbol s_T^{\\sf H}=[s_0^*,\\ldots,s_{T-1}^*]$ collects (hypothetical) independent $\\mathcal{CN}(0,1)$ signals transmitted through the constant channel $\\boldsymbol h_T \\in \\mathbb{C}^{N}$, and $V_T=W_T R_T^{1\/2}\\in\\mathbb{C}^{N\\times T}$ models a stationary noise matrix as in \\eqref{model1}.\n\nAs opposed to standard procedures where preliminary pure noise data are available, we shall proceed here to an online signal detection test solely based on $Y_T$, by exploiting the consistency established in Theorem~\\ref{th-perturb}.\nMore precisely, the approach consists in estimating $R_T$ by $\\widehat{R}_T\\in\\{\\widehat{R}_T^{bp},\\widehat{R}_T^{up}\\}$, which is then used as a whitening matrix for $Y_T$. The binary hypothesis \\eqref{model_det} can then be equivalently written\n\\begin{eqnarray}\\label{model_w}\nY_T \\widehat{R}_T^{-1\/2}= \\left\\{\n  \\begin{array}{ll}\n    W_T {R}_T^{1\/2} \\widehat{R}_T^{-1\/2}& \\mbox{, $H_0$} \\\\\n    \\boldsymbol h_T \\boldsymbol s_T^{\\sf H} \\widehat{R}_T^{-1\/2} + W_T {R}_T^{1\/2} \\widehat{R}_T^{-1\/2}& \\mbox{, $H_1$}.\n  \\end{array}\n\\right.\n\\end{eqnarray}\nSince $\\Vert R_T\\widehat{R}_T^{-1}-I_T\\Vert\\to 0$ almost surely (by Theorem~\\ref{th-perturb} as long as $\\inf_{\\lambda\\in[0,2\\pi)} {\\boldsymbol \\Upsilon}(\\lambda)>0$), for $T$ large, the decision on the hypotheses \\eqref{model_w} can be handled by the generalized likelihood ratio test (GLRT) \\cite{BianDebMai'11} by approximating $W_T {R}_T^{1\/2} \\widehat{R}_T^{-1\/2}$ as purely white noise. We then have the following result.\n\\begin{theorem}\\label{th_detection}\n\tLet $\\widehat{R}_T$ be any of $\\widehat{R}_T^{bp}$ or $\\widehat{R}_T^{up}$ as defined in Theorem~\\ref{th-perturb} for $Y_T$ now following model \\eqref{model_det}. Further assume $\\inf_{\\lambda\\in[0,2\\pi)} {\\boldsymbol \\Upsilon}(\\lambda)>0$ and define the test\n\\begin{equation}\\label{glrt_est}\n\\alpha = \\frac{N\\norme{Y_T \\widehat{R}_T^{-1}Y_T^{\\sf H}}}{\\tr \\left( Y_T \\widehat{R}_T^{-1} Y_T^{\\sf H} \\right)} ~ \\overset{H_0}{\\underset{H_1}{\\lessgtr}} ~ \\gamma\n\\end{equation}\nwhere $\\gamma\\in\\mathbb R^+$ satisfies $\\gamma>(1+\\sqrt c)^2$. Then, as $T \\to \\infty$,\n\\begin{align*}\n\t\\mathbb{P} \\left[ \\alpha \\geq \\gamma \\right] \\to \\left\\{ \\begin{array}{ll} 0 &,~H_0 \\\\ 1 &,~H_1. \\end{array}\\right.\n\\end{align*}\n\\end{theorem}\n\nRecall from \\cite{BianDebMai'11} that the decision threshold $(1+\\sqrt{c})^2$ corresponds to the almost sure limiting largest eigenvalue of $\\frac1T W_T W_T^{\\sf H}$, that is, the right edge of the support of the Mar\\v{c}enko--Pastur law. \n\nSimulations are performed hereafter to assess the performance of the test \\eqref{glrt_est} under several system settings. We take here ${\\boldsymbol h}_T$ to be the steering vector ${\\boldsymbol h}_T=\\sqrt{p\/T}[1, \\ldots , e^{2i\\pi \\theta (T-1)}]$ with $\\theta = 10^\\circ$ and $p$ a power parameter. The matrix $R_T$ models an autoregressive process of order 1 with parameter $a$, {\\it i.e.}, $[R_T]_{k,l}=a^{|k-l|}$.
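For concreteness, the test statistic in \\eqref{glrt_est} can be evaluated with a few lines of code. The sketch below is only illustrative and is not the simulation code used for the figures: it assumes NumPy is available, uses the Oracle choice $\\widehat{R}_T=R_T$ for simplicity, and all variable names are ours.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, T, a, p = 20, 40, 0.6, 1.0      # c = N/T = 0.5, AR(1) parameter a, power p
c = N / T
k = np.arange(T)
R = a ** np.abs(np.subtract.outer(k, k))   # [R_T]_{k,l} = a^{|k-l|}
Rhalf = np.linalg.cholesky(R).T            # square root with Rhalf^H Rhalf = R

theta = np.deg2rad(10.0)                   # illustrative steering vector
h = np.sqrt(p / T) * np.exp(2j * np.pi * theta * np.arange(N))
s = (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)
W = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)

def glrt_statistic(Y, R_hat):
    # alpha = N ||Y R^{-1} Y^H|| / tr(Y R^{-1} Y^H), spectral norm in the numerator
    M = Y @ np.linalg.solve(R_hat, Y.conj().T)
    return N * np.linalg.eigvalsh(M)[-1] / np.trace(M).real

Y0 = W @ Rhalf                             # H0: V_T = W_T R_T^{1/2}
Y1 = np.outer(h, s.conj()) + Y0            # H1: h_T s_T^H + V_T
gamma = (1 + np.sqrt(c)) ** 2              # Marchenko-Pastur right edge
print(glrt_statistic(Y0, R), glrt_statistic(Y1, R), gamma)
\\end{verbatim}
In practice $R_T$ would be replaced by $\\widehat{R}_T^{bp}$ or $\\widehat{R}_T^{up}$, and each statistic is compared with the threshold $\\gamma$.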
\n\nIn Figure~\\ref{det1}, the detection error $1-\\mathbb{P} [ \\alpha \\geq \\gamma|H_1]$ of the test \\eqref{glrt_est} for a false alarm rate (FAR) $\\mathbb{P} [ \\alpha \\geq \\gamma|H_0 ]=0.05$ under $\\widehat{R}_T=\\widehat{R}_T^{up}$ (Unbiased) or $\\widehat{R}_T=\\widehat{R}_T^{bp}$ (Biased) is compared against the estimator that assumes $R_T$ perfectly known (Oracle), {\\it i.e.} that sets $\\widehat{R}_T=R_T$ in \\eqref{glrt_est}, and against the GLRT test that wrongly assumes temporally white noise (White), {\\it i.e.} that sets $\\widehat{R}_T=I_T$ in \\eqref{glrt_est}. The source signal power is set to $p=1$, that is a signal-to-noise ratio (SNR) of $0$ dB, $N$ is varied from $10$ to $50$ and $T=N\/c$ for $c=0.5$ fixed. In the same setting as Figure~\\ref{det1}, the number of sensors is now fixed to $N=20$, $T=N\/c=40$ and the SNR (hence $p$) is varied from $-10$~dB to $4$~dB. The powers of the various tests are displayed in Figure~\\ref{det2} and compared to the detection methods which estimate $R_T$ from a pure noise sequence called Biased PN (pure noise) and Unbiased PN. The results of the proposed online method are close to that of Biase\/Unbiased PN, this last presenting the disadvantage to have at its disposal a pure noise sequence at the receiver. \n\nBoth figures suggest a close match in performance between Oracle and Biased, while Unbiased shows weaker performance. The gap evidenced between Biased and Unbiased confirms the theoretical conclusions. \n\n\\begin{figure}[H]\n\\center\n \\begin{tikzpicture}[font=\\footnotesize]\n \\renewcommand{\\axisdefaulttryminticks}{2} \n \\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n \\tikzstyle{every major grid}+=[style=densely dashed] \n\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.01,0.01)},anchor=south west}} \n \\tikzstyle{every axis y label}+=[yshift=-10pt] \n \\tikzstyle{every axis x label}+=[yshift=5pt]\n \\begin{semilogyaxis}[\n grid=major,\n \n xlabel={$N$},\n ylabel={$1-\\mathbb{P}[\\alpha>\\gamma|H_1]$},\n \n \n \n xmin=10,\n xmax=50, \n ymin=1e-4, \n ymax=1,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n ]\n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(10.000000,0.318300)(15.000000,0.164000)(20.000000,0.069100)(25.000000,0.028400)(30.000000,0.013300)(35.000000,0.006100)(40.000000,0.002900)(45.000000,0.001200)(50.000000,0.0005000)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(10.000000,0.848100)(15.000000,0.385600)(20.000000,0.168900)(25.000000,0.063900)(30.000000,0.028900)(35.000000,0.012800)(40.000000,0.005200)(45.000000,0.002100)(50.000000,0.000900)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=triangle] plot coordinates{\n(10.000000,0.973000)(15.000000,0.953000)(20.000000,0.954000)(25.000000,0.957000)(30.000000,0.958000)(35.000000,0.958000)(40.000000,0.940000)(45.000000,0.970000)(50.000000,0.949000)\n\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=x] plot coordinates{\n(10.000000,0.200200)(15.000000,0.104000)(20.000000,0.043600)(25.000000,0.018700)(30.000000,0.008500)(35.000000,0.003300)(40.000000,0.0014500)(45.000000,0.000600)(50.000000,0.000200)\n\n\n };\n\n\t\\legend{{Biased},{Unbiased},{White},{Oracle}}\n \\end{semilogyaxis}\n \\end{tikzpicture}\n \\caption{Detection error versus $N$ with FAR$=0.05$, $p=1$, SNR$=0$ dB, $c=0.5$, and $a=0.6$.}\n\\label{det1}\n\\end{figure}\n\n\\begin{figure}[H]\n\\center\n \\begin{tikzpicture}[font=\\footnotesize]\n 
\\renewcommand{\\axisdefaulttryminticks}{2} \n \\pgfplotsset{every axis\/.append style={mark options=solid, mark size=2pt}}\n \\tikzstyle{every major grid}+=[style=densely dashed] \n\\pgfplotsset{every axis legend\/.append style={fill=white,cells={anchor=west},at={(0.01,0.99)},anchor=north west}} \n \\tikzstyle{every axis y label}+=[yshift=-10pt] \n \\tikzstyle{every axis x label}+=[yshift=5pt]\n \\begin{axis}[\n grid=major,\n \n xlabel={SNR (dB)},\n ylabel={$\\mathbb{P}[\\alpha>\\gamma|H_1]$},\n \n \n \t \n xmin=-10,\n xmax=4, \n ymin=0, \n ymax=1,\n width=0.7\\columnwidth,\n height=0.5\\columnwidth\n ]\n \n \\addplot[smooth,black,line width=0.5pt,mark=star] plot coordinates{\n(-15.000000,0.051200)(-14.000000,0.048100)(-13.000000,0.049100)(-12.000000,0.053100)(-11.000000,0.049800)(-10.000000,0.053200)(-9.000000,0.056100)(-8.000000,0.0593000)(-7.000000,0.067500)(-6.000000,0.073900)(-5.000000,0.110200)(-4.000000,0.190800)(-3.000000,0.330600)(-2.000000,0.538500)(-1.000000,0.766600)(0.000000,0.922000)(1.000000,0.985800)(2.000000,0.998600)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n\n };\n \n \\addplot[smooth,black,line width=0.5pt,mark=o] plot coordinates{\n(-15.000000,0.053900)(-14.000000,0.049300)(-13.000000,0.049500)(-12.000000,0.055100)(-11.000000,0.050800)(-10.000000,0.054000)(-9.000000,0.057800)(-8.000000,0.060100)(-7.000000,0.068400)(-6.000000,0.076300)(-5.000000,0.100400)(-4.000000,0.159500)(-3.000000,0.248400)(-2.000000,0.427000)(-1.000000,0.652600)(0.000000,0.854200)(1.000000,0.964900)(2.000000,0.995100)(3.000000,0.999500)(4.000000,1.000000)(5.000000,1.000000)\n\n\n };\n \n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=star] plot coordinates{\n(-15.000000,0.047700)(-14.000000,0.047400)(-13.000000,0.044600)(-12.000000,0.049700)(-11.000000,0.049700)(-10.000000,0.051900)(-9.000000,0.059400)(-8.000000,0.063100)(-7.000000,0.073300)(-6.000000,0.099100)(-5.000000,0.149500)(-4.000000,0.240100)(-3.000000,0.399900)(-2.000000,0.613300)(-1.000000,0.816900)(0.000000,0.944300)(1.000000,0.991500)(2.000000,0.999200)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n };\n \n \\addplot[smooth,black,densely dashed,line width=0.5pt,mark=o] plot coordinates{\n(-15.000000,0.048300)(-14.000000,0.051100)(-13.000000,0.047100)(-12.000000,0.050300)(-11.000000,0.051800)(-10.000000,0.052900)(-9.000000,0.056300)(-8.000000,0.062600)(-7.000000,0.068100)(-6.000000,0.086500)(-5.000000,0.116600)(-4.000000,0.178600)(-3.000000,0.299600)(-2.000000,0.505700)(-1.000000,0.721500)(0.000000,0.897600)(1.000000,0.976000)(2.000000,0.996900)(3.000000,0.999900)(4.000000,0.999900)(5.000000,1.000000)\n\n };\n \n \n \\addplot[smooth,black,line width=0.5pt,mark=x] plot coordinates{\n(-15.000000,0.048800)(-14.000000,0.050800)(-13.000000,0.048300)(-12.000000,0.048900)(-11.000000,0.055600)(-10.000000,0.058300)(-9.000000,0.063100)(-8.000000,0.066000)(-7.000000,0.084300)(-6.000000,0.110800)(-5.000000,0.169400)(-4.000000,0.280300)(-3.000000,0.446500)(-2.000000,0.669800)(-1.000000,0.858100)(0.000000,0.963000)(1.000000,0.994300)(2.000000,0.999400)(3.000000,1.000000)(4.000000,1.000000)(5.000000,1.000000)\n\n\n\n };\n\n\t\\legend{{Biased},{Unbiased},{Biased PN},{Unbiased PN},{Oracle}}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Power of detection tests versus SNR (dB) with FAR$=0.05$, $N=20$, $c=0.5$, and $a=0.6$.}\n\\label{det2}\n\\end{figure}\n\n\n\\begin{appendix}\n\\subsection{Proofs for Theorem~\\ref{th-biased}} \n\\subsubsection{Proof of Lemma \\ref{lemma_d_quad}} \n\\label{anx-lm-qf} 
\n\nDeveloping the quadratic forms given in the statement of the lemma, we get \n\\begin{align*} \nd_T(\\lambda)^{\\sf H}\\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) &= \n\\frac{1}{NT}\\sum_{l,l'=0}^{T-1} e^{-\\imath(l'-l)\\lambda} [V_T^{\\sf H}V_T]_{l,l'} \\\\\n&= \\frac{1}{NT}\\sum_{l,l'=0}^{T-1} e^{-\\imath(l'-l)\\lambda} \n\\sum_{n=0}^{N-1} v^*_{n,l} v_{n,l'} \\\\ \n&= \\sum_{k=-(T-1)}^{T-1} e^{-\\imath k \\lambda} \n\\frac{1}{NT} \\sum_{n=0}^{N-1} \\sum_{t=0}^{T-1} v_{n,t}^* v_{n,t+k} \n\\mathbbm{1}_{0 \\leq t+k \\leq T-1}\\\\ \n&= \\sum_{k=-(T-1)}^{T-1} \\hat{r}_k^b e^{-\\imath k \\lambda}=\n\\widehat\\Upsilon_T^b(\\lambda), \n\\end{align*} \nand \n\\begin{align*} \n\\mathbb{E} \\left[ d_T(\\lambda)^{\\sf H} \\frac{V_T^{\\sf H}V_T}{N}d_T(\\lambda) \\right] \n&= d_T(\\lambda)^{\\sf H} (R_T^{1\/2})^{\\sf H} \n\\frac{\\mathbb{E}[ W_T^{\\sf H} W_T]}{N} R_T^{1\/2} d_T(\\lambda) \\\\ \n&= d_T(\\lambda)^{\\sf H} R_T d_T(\\lambda) .\n\\end{align*} \n\n\\subsection{Proofs for Theorem~\\ref{th-unbiased}}\n\\subsubsection{Proof of Lemma \\ref{lemma_d_quad2}} \n\\label{anx-lm-qf2} \nWe have \n\\begin{align*}\nd_T(\\lambda)^{\\sf H} \\left( \\frac{V_T^{\\sf H}V_T}{N} \\odot B_T \\right) d_T(\\lambda) &=\n\\frac{1}{NT}\\sum_{l,l'=-(T-1)}^{T-1}e^{i(l-l') \\lambda} [V_T^{\\sf H}V_T]_{l,l'}\\frac{T}{T-|l-l'|} \\nonumber \\\\\n&= \\sum_{k=-(T-1)}^{T-1}e^{ik\\lambda}\\frac{1}{N(T-|k|)}\\sum_{n=0}^{N-1}\\sum_{t=0}^{T-1}v_{n,t}^*v_{n,t+k}\\mathbbm{1}_{0 \\leq t+k \\leq T-1} \\\\\n&=\\sum_{k=-(T-1)}^{T-1} \\hat{r}_k^u e^{ik \\lambda} =\\widehat \\Upsilon_T^u(\\lambda).\n\\end{align*}\n\n\\subsubsection{Proof of Lemma \\ref{lm-B-Q}} \n\\label{prf-lm-B-Q} \nWe start by observing that \n\\begin{align*}\n\\tr B_T^2 &= \\sum_{i,j=0}^{T-1} \\left[ B_T \\right]_{i,j}^2 \n= \\sum_{i,j=0}^{T-1} \\left( \\frac{T}{T-|i-j|} \\right)^2 \n= 2\\sum_{i>j}^{T-1} \\left( \\frac{T}{T-|i-j|} \\right)^2 + T \\\\\n&= 2 \\sum_{k=1}^{T-1} \\left( \\frac{T}{T-k} \\right)^2 \\left( T - k \\right) + T \n= 2 T^2 \\sum_{k=1}^{T-1} \\frac{1}{T-k} + T \n= 2 T^2 \\left(\\log T + C \\right).\n\\end{align*}\nInequality \\eqref{norme_B} is then obtained upon noticing that \n$\\norme{B_T} \\leq \\sqrt{\\tr B_T^2}$. \n\nWe now show (\\ref{sum_sigma2}). \nUsing twice the inequality $\\tr (FG) \\leq \\norme{F} \\tr(G)$ when \n$F,G \\in \\mathbb{C}^{m \\times m}$ and $G$ is nonnegative definite \\cite{HornJoh'91}, \nwe get\n\\begin{align*}\n\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) &= \\tr Q_T(\\lambda_i)^2 \n= \\tr R_T D_T(\\lambda_i)B_T D_T(\\lambda_i)^{\\sf H} R_T D_T(\\lambda_i) \nB_T D_T(\\lambda_i)^{\\sf H} \\\\\n&\\leq \\norme{R_T} \\tr R_T (D_T(\\lambda_i) B_T D_T(\\lambda_i)^{\\sf H})^2 \\\\\n&\\leq T^{-2} \\norme{R_T}^2 \\tr (B_T^2) \\leq 2\\norme{\\boldsymbol\\Upsilon}_{\\infty}^2 \\log T + C. 
\n\\end{align*}\n\nInequality \\eqref{sig_max} is immediate since $\\norme{Q_T}^2 \\leq \\tr Q_T^2$.\n\nAs regards \\eqref{sum_sigma3}, by the Cauchy--Schwarz inequality,\n\\begin{align*}\n\\sum_{t=0}^{T-1} |\\sigma_t^3(\\lambda_i)| &= \n\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) |\\sigma_t(\\lambda_i)| \n\\leq \\sqrt{\\sum_{t=0}^{T-1} \\sigma_t^4(\\lambda_i) \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i)} \\\\ \n&\\leq \\sqrt{\\left(\\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) \\right)^2 \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i)} = \\left( \\sum_{t=0}^{T-1} \\sigma_t^2(\\lambda_i) \\right)^{3\/2}\\\\\n&= C ( (\\log T)^{3\/2} +1 ).\n\\end{align*}\n\n\n\n\\end{appendix}\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{Intro:level1}Introduction}\nCoherent radiation can polarize the angular momentum distribution of an ensemble of atoms in various ways, creating different \npolarization moments, which modifies the way these atoms will interact with radiation. Carefully prepared spin polarized \natoms can make the absorption highly dependent on frequency \n(electromagnetically induced transparency~\\cite{Harris:1990}), causing large values of the dispersion, which, in turn, \nare useful for such interesting effects as slow light~\\cite{Hau:1999} and optical information storage~\\cite{Liu:2001}. \nElectric and magnetic fields, external or inherent in the radiation fields, may also influence the time evolution of \nthe spin polarization and cause measurable changes in absorption or fluorescence intensity and\/or polarization. \nThese effects are the basis of many magnetometry schemes~\\cite{Scully:1992,Budker:2007}, \nand must be taken into account in atomic clocks~\\cite{Knappe:2005} and when searching for \nfundamental symmetry violations~\\cite{Budker:2002} or exotic physics such as \nan electric dipole moment of the electron~\\cite{Regan:2002}. \nSufficiently strong laser radiation creates atomic polarization in excited as well as in the ground state~\\cite{Auzinsh:OptPolAt}\nThe polarization is destroyed when the Zeeman sublevel degeneracy is removed by a magnetic field. \nSince the ground state has a much longer lifetime, very narrow magneto-optical resonances can be created, which \nare related to the ground-state Hanle effect (see ~\\cite{Arimondo:1996} for a review). Such resonances were first \nobserved in cadmium in 1964~\\cite{Lehmann:1964}. \n\nThe theory of magneto-optical resonances has been understood for some time(see \\cite{Budker:2002,Alexandrov:2005,Auzinsh:OptPolAt} for a review), \nand bright (opposite sign) resonances have also been observed and explained~\\cite{Kazantsev:1984,Renzoni:2001a,Alnis:2001}; the challenge in describing experiments \nlies in choosing the effects to be included in the numerical calculations so as to find a balance between computation time and accuracy.\nThe optical Bloch equations (OBEs) for the density matrix have been used as early as 1978 to model magneto-optical \nresonances~\\cite{Picque:1978}. In order to achieve greater accuracy, later efforts to model signals took into account \neffects such as Doppler broadening, the coherent properties of the laser radiation, \nand the mixing of magnetic sublevels in an external magnetic field to produce more and more \naccurate descriptions of experimental signals~\\cite{Auzinsh:2008}. 
Analytical models can also achieve excellent descriptions of \nexperimental signals at low laser powers in the regime of linear excitation where optical pumping plays a negligible \nrole~\\cite{Castagna:2011,Breschi:2012}. In recent \nyears, excellent agreement has been achieved by numerical calculations even when optical pumping plays a role. \nHowever, as soon as the laser radiation begins to saturate the \nabsorption transition, the model's accuracy suffers. The explanation has been that at high radiation intensities, \nit is no longer possible to model the \nrelaxation of atoms moving in and out of the beam with a single rate constant~\\cite{Auzinsh:1983, Auzinsh:2008}. \nNevertheless, accurate numerical models of situations in an intense laser field are very desirable, because such conditions \narise in a number of experimental situations. Therefore, we have set out to model magneto-optical effects in the presence of \nintense laser radiation by taking better account of the fact that an atom experiences different laser intensity values as it \npasses through a beam. In practice, we solve the rate \nequations for the Zeeman coherences for different regions of the laser beam with a value of the Rabi frequency that more \nclosely approximates the real situation in that part of the beam.\nTo save computing time, stationary solutions to the rate equations for Zeeman sublevels and coherences are sought for each region~\\cite{Blushs:2004}. \nWith this simplification to take into account the motion of atoms through the beam, \nwe could now obtain accurate descriptions of experimental signals up to much higher intensities for reasonable computing times. \nMoreover, the model can be used to study the spatial distribution of the laser induced \nfluorescence within the laser beam. We performed such a study theoretically and experimentally using two overlapping lasers: \none spatially broad, intense pump laser, and a weaker, tightly focused, spatially narrow probe laser. \nThe qualitative agreement between experimental and theoretical fluorescence intensity profiles indicates \nthat the model is a useful tool for studying fluorescence dynamics\nas well as for modelling magneto-optical signals at high laser intensities. \n\n\n\\section{\\label{Theory:level1}Theory}\nThe theoretical model used here is a further development of previous efforts~\\cite{Auzinsh_crossing:2013}, which has been subjected \nto some initial testing in the specialized context of an extremely thin cell~\\cite{Auzinsh:2015}.\nThe description of coherent processes starts with the optical Bloch equation (OBE):\n\\begin{equation}\ni \\hbar \\frac{\\partial \\rho}{\\partial t} = \\left[\\hat{H},\\rho \\right]+ i \\hbar \\hat{R}\\rho,\n\\end{equation}\nwhere $\\rho$ is the density matrix describing the atomic state, $\\hat{H}$ is the Hamiltonian of the system, \nand $\\hat{R}$ is an operator that describes relaxation. These equations are transformed into rate equations \nthat are solved under stationary conditions in order to obtain the Zeeman coherences in the \nground ($\\rho_{g_ig_j}$) and excited ($\\rho_{e_ie_j}$) states~\\cite{Blushs:2004}. However, when the intensity distribution in the beam is not \nhomogeneous, more accurate results can be achieved by dividing the laser beam into concentric regions and solving the \nOBEs for each region separately while accounting for atoms that move into and out of each region as they fly through the \nbeam. Figure~\\ref{fig:dal1} illustrates the idea.
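As a minimal illustration of this discretization (not part of the actual model code; NumPy is assumed and all names and numbers are ours), the following sketch splits a Gaussian beam profile into concentric regions and returns, for each region, a representative radius, its relative area, and its relative intensity, which set the weighting and intensity scaling used below.
\\begin{verbatim}
import numpy as np

def beam_regions(fwhm, n_regions, r_max_factor=1.5):
    # Divide a Gaussian beam of the given FWHM into concentric rings.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    edges = np.linspace(0.0, r_max_factor * fwhm, n_regions + 1)   # ring boundaries
    r_mid = 0.5 * (edges[:-1] + edges[1:])                         # representative radii
    areas = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)             # ring areas A_n
    rel_area = areas / areas.sum()                                 # A_n / A
    rel_intensity = np.exp(-r_mid ** 2 / (2.0 * sigma ** 2))       # I(r) / I(0)
    return r_mid, rel_area, rel_intensity

# e.g., twenty regions, as used for the calculations reported below
r_mid, rel_area, rel_intensity = beam_regions(fwhm=1.0, n_regions=20)
\\end{verbatim}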
\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{beam_division.pdf}\n\t\\caption{\\label{fig:dal1} Laser beam profile split into a number of concentric regions.}\n\\end{figure}\nThe top part of the figure shows the intensity profile of the laser beam, while \nthe bottom part of the figure shows a cross-section of the laser beam indicating the concentric regions. \n\nIn order to account for particles that leave one region and enter the next, an extra term must be added to the OBE:\n\\begin{equation}\n\\label{eq:transit}\n-i \\hbar\\hat{\\gamma_t} \\rho+i \\hbar\\hat{\\gamma_t} \\rho'.\n\\end{equation}\nIn this term, $\\rho'$ is the density matrix of the particles entering the region (identical to the density matrix of the \nprevious region), and $\\hat{\\gamma_t}$ is an operator that accounts for transit relaxation. This operator is \nessentially a diagonal matrix with elements $\\hat{\\gamma}_{t_{ij}}=(\\nicefrac{v_{yz}}{s_n})\\delta_{ij}$, where $v_{yz}$ \ncharacterizes the particle speed in the plane perpendicular to the beam and $s_n$ is the linear dimension of the region. \nTo simplify matters, we treat particle motion in only one direction and later average with particles that move in the other \ndirection. In that case, $\\rho'=\\rho^{n-1}$. \nThus, the rate equations for the density matrix $\\rho^n$ of the $n$\\textsuperscript{th} region become \n\\begin{align}\n\\label{eq:rate_region}\ni~\\hbar \\frac{\\partial\\rho^n}{\\partial t} &= \\left[ \\hat{H},\\rho^n\\right]+i ~\\hbar \\hat{R} \\rho^n -i~\\hbar\\hat{\\gamma_t}^n\\rho^n \\notag \\\\ \n& +i~\\hbar\\hat{\\gamma_t}^n\\rho^{n-1}-i~\\hbar\\hat{\\gamma_c}\\rho^n+i~\\hbar\\hat{\\gamma_c}\\rho^0.\n\\end{align}\nIn this equation the relaxation operator $\\hat{R}$ describes spontaneous relaxation only and $\\hat{\\gamma_c}$ is the collisional relaxation \nrate, which, however, becomes significant only at higher gas densities. \n\nNext, the rotating wave approximation~\\cite{Allen:1975} is applied to the OBEs, yielding \nstochastic differential equations that can be simplified by means of the decorrelation \napproach~\\cite{Kampen:1976}. Since the measurable quantity is merely light intensity, \na formal statistical average is performed over the fluctuating phases of these stochastic equations, \nmaking use of the decorrelation approximation~\\cite{Blushs:2004}.
As a result, the density matrix \nelements that correspond to optical coherences are eliminated and one is left with rate equations for the \nZeeman coherences:\n\\begin{align}\n\\label{eq:ground}\n\\frac{\\partial \\rho_{g_i,g_j}^n}{\\partial t} =& \\sum_{e_k,e_m}\\left(\\Xi_{g_ie_m}^n + (\\Xi_{e_kg_j}^n)^*\\right) d_{g_ie_k}^*d_{e_mg_j}\\rho_{e_ke_m}^n \\notag \\\\\n& - \\sum_{e_k,g_m}(\\Xi_{e_kg_j}^n)^*d_{g_ie_k}^*d_{e_kg_m}\\rho_{g_mg_j}^n \\notag \\\\ \n& - \\sum_{e_k,g_m}\\Xi_{g_ie_k}^n d_{g_me_k}^*d_{e_kg_j}\\rho_{g_ig_m}^n \\\\\n& - i\\omega_{g_ig_j}\\rho_{g_ig_j}^n+\\sum_{e_ke_l}\\Gamma_{g_ig_j}^{e_ke_l}\\rho_{e_ke_l}^n-\\gamma_{t}\\rho_{g_ig_j}^n \\notag \\\\ \n& + \\gamma^{n}_{t}\\rho_{g_ig_j}^{n-1}-\\gamma^{n}_{c}\\rho_{g_ig_j}^n+\\gamma_c\\rho_{g_ig_j}^0\\notag\\\\\n\\label{eq:excited}\n\\frac{\\partial \\rho_{e_i,e_j}^n}{\\partial t} =& \\sum_{g_k,g_m}\\left((\\Xi_{e_ig_m}^n)^* + \\Xi_{g_ke_j}^n\\right) d_{e_ig_k}^*d_{g_me_j}\\rho_{g_kg_m}^n \\notag\\\\\n& - \\sum_{g_k,e_m}\\Xi_{g_ke_j}^nd_{e_ig_k}d_{g_ke_m}^*\\rho_{e_me_j}^n \\notag \\\\ \n& - \\sum_{g_k,e_m}(\\Xi_{e_ig_k}^n)^*d_{e_mg_k}d_{g_ke_j}^*\\rho_{e_ie_m}^n \\\\\n& - i\\omega_{e_ie_j}\\rho_{e_ie_j}^n-\\Gamma\\rho_{e_ie_j}^n-\\gamma^{n}_{t}\\rho_{e_ie_j}^n \\notag \\\\\n& +\\gamma^{n}_{t}\\rho_{e_ie_j}^{n-1}-\\gamma_{c}\\rho_{e_ie_j}^n \\notag.\n\\end{align}\nIn both equations, the first term describes population increase and creation of coherence due to induced \ntransitions, the second and third terms describe population loss due to induced transitions, the fourth \nterm describes the destruction of Zeeman coherences due to the splitting $\\omega{g_ig_j}$, \nrespectively, $\\omega_{e_ie_j}$ of the Zeeman sublevels in an external magnetic field, \nand the fifth term in Eq.~\\ref{eq:excited} describes spontaneous decay with $\\Gamma\\rho_{e_ie_j}^n$ giving the \nspontaneous rate of decay for the excited state. At the same time the fifth term in Eq.~\\ref{eq:ground} \ndescribes the transfer of population and coherences from the excited state matrix element $\\rho_{e_k e_l}$ to \nthe ground state density matrix element $\\rho_{g_i g_j}$ with rate $\\Gamma^{e_k e_l}_{g_i g_j}$. \nThese transfer rates are related to the rate of spontaneous decay $\\Gamma$ for the excited state. \nExplicit expressions for these $\\Gamma^{e_k e_i}_{g_i g_j}$ can be calculated from quantum angular \nmomentum theory and are given in~\\cite{Auzinsh:OptPolAt}. \nThe remaining terms have been described previously in the context of \nEqns.~\\ref{eq:transit} and~\\ref{eq:rate_region}. \nThe laser beam interaction is represented by the term\n\\begin{align}\n\\Xi_{g_ie_j}= \\frac{|\\bm\\varepsilon^n|^2}{\\frac{\\Gamma+\\Delta\\omega}{2}+i \\left(\\bar{\\omega}-\\mathbf{k}\\cdot \\mathbf{v}+\\omega_{g_ie_j}\\right)},\n\\end{align} \nwhere $|\\bm\\varepsilon^n|^2$ is the laser field's electric field strength in the $n$th region, $\\Gamma$ is the spontaneous \ndecay rate, $\\Delta\\omega$ is the laser beam's spectral width, $\\bar{\\omega}$ is the laser frequency, \n$k\\cdot v$ gives the Doppler shift, and $\\omega_{g_ie_j}$ is the difference in energy between levels \n$g_i$ and $e_j$. The system of linear equations can be solved for stationary conditions to \nobtain the density matrix $\\rho$. 
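Schematically, and purely as an illustrative sketch (this is not the actual implementation; the per-region rate matrices are left as abstract placeholders and the collisional terms are omitted), the stationary solution can be chained from region to region, with the previous region's density matrix entering as a source term through the transit rate:
\\begin{verbatim}
import numpy as np

def chained_stationary_solution(rate_matrices, transit_rates, rho_entering):
    # rate_matrices[n] collects all linear terms acting on vec(rho^n);
    # the transit source gamma_t^n * vec(rho^(n-1)) is moved to the right-hand side.
    rho_prev = rho_entering            # atoms enter the first region unpolarized
    rho_regions = []
    for A_n, g_n in zip(rate_matrices, transit_rates):
        rho_n = np.linalg.solve(A_n, -g_n * rho_prev)
        rho_regions.append(rho_n)
        rho_prev = rho_n               # region n feeds region n + 1
    return rho_regions
\\end{verbatim}
The solutions obtained for atoms traversing the beam in one direction are then averaged with those for the opposite direction, as described above.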
\n\nFrom the density matrix one can obtain the fluorescence intensity from each region for each velocity group $v$ and given \npolarization $\\bm\\varepsilon_f$ up to a constant factor of $\\tilde{I}_0$~\\cite{AuzFerb:OptPolMol, Barrat:1961, Dyakonov:1965}:\n\\begin{equation}\\label{eq:fluorescence}\n\tI_{n}(v,\\bm\\varepsilon_f) = \\tilde{I}_0\\sum\\limits_{g_i,e_j,e_k} d_{g_ie_j}^{\\ast(ob)}d_{e_kg_i}^{(ob)}\\rho_{e_je_k}.\n\\end{equation}\nFrom these quantities one can calculate the total fluorescence intensity for a given polarization $\\bm\\varepsilon_f$: \n\\begin{align}\n\tI(\\bm\\varepsilon_f) = \\sum_{n} \\sum_{v} f(v)\\Delta v \\frac{A_{n}}{A} I_{n}(v,\\bm\\varepsilon_f).\n\\end{align}\nHere the sum over $n$ represents the sum over the different beam regions of relative area \n$\\nicefrac{A_{n}}{A}$ as they are traversed by the \nparticle, $v$ is the particle velocity along the laser beam, and $f(v)\\Delta v$ gives the number of \natoms with velocity $v\\pm \\nicefrac{\\Delta v}{2}$. \n\nIn practice, we do not measure the electric field strength of the laser field, but the intensity $I=P\/A$, where $P$ is the \nlaser power and $A$ is the cross-sectional area of the beam. In the theoretical model it is more convenient to use the \nRabi frequency $\\Omega_R$, here defined as follows:\n\\begin{align}\n\\label{eq:Rabi}\n\\Omega_R = k_R \\frac{\\vert\\vert d \\vert\\vert \\cdot \\vert\\vert \\epsilon \\vert\\vert}{\\hbar} \\\n\t = k_R \\frac{\\vert\\vert d \\vert\\vert}{\\hbar} \\sqrt{\\frac{2 I}{\\epsilon_0 n c}},\n\\end{align}\nwhere $\\vert\\vert d \\vert\\vert$ is the reduced dipole matrix element for the transition in question, $\\epsilon_0$ is the \nvacuum permittivity, $n$ is the index of refraction of the medium, $c$ is the speed of light, and $k_R$ is a factor that \nwould be unity in an ideal case, but is adjusted to achieve the best fit between theory and experiment since the \nexperimental situation will always deviate from the ideal case in some way. \nWe assume that the laser beam's intensity distribution follows a Gaussian distribution. We define the average value of \n$\\Omega_R$ for the whole beam by taking the weighted average of a Gaussian distribution on the range [0,FWHM\/2], where \nFWHM is the full width at half maximum. Thus it follows that the Rabi frequency at the peak of the intensity distribution \n(see Fig.~\\ref{fig:dal1}) is $\\Omega_R=0.721\\Omega_{peak}$. From there the Rabi frequency of each region can be obtained \nby scaling by the value of the Gaussian distribution function. \n\n\n\n\n\n\n\\section{\\label{exp:level1}Experimental setup}\nThe theoretical model was tested with two experiments. The first experiment measured magneto-optical resonances on the $D_1$ line of \n$^{87}$Rb and is shown schematically in Fig.~\\ref{fig:exp_Rb87}. The experiment has been described elsewhere along with \ncomparison to an earlier version of the theoretical model that did not divide the laser beam into separate regions~\\cite{Auzinsh:2009}.\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{exp_rb87.pdf}\n\t\\caption{\\label{fig:exp_Rb87} (Color online) Basic experimental setup for measuring magneto-optical resonances. The inset \n\ton the left shows the level diagram of $^{87}$Rb~\\cite{Steck:rubidium87}. The other inset shows the geometrical orientation of the electric \n\tfield vector \\boldmath{E}, the magnetic field vector \\boldmath{B}, and laser propagation direction (Exc.) 
and \n\tobservation direction (Obs.).}\n\\end{figure}\nThe laser was an extended cavity diode laser, whose frequency could be scanned by applying a voltage to a piezo crystal attached to the grating. \nNeutral density (ND) filters were used to regulate the laser intensity, and linear polarization was obtained using a \nGlan-Thomson polarizer. A set of three orthogonal Helmholtz coils scanned the magnetic field along the $z$ axis \nwhile compensating the ambient field in the other directions. A pyrex cell with a natural isotopic mixture \nof rubidium at room temperature was located at the center of the coils. The total laser induced fluorescence (LIF) \nin a selected direction (without frequency or polarization selection) was detected with \na photodiode (Thorlabs FDS-100) and data were acquired with a data acquisition card (National Instruments 6024E)\nor a digital oscilloscope (Agilent DSO5014). To generate the magnetic field scan with a rate of about 1~Hz, \na computer-controlled analog signal was applied to a bipolar power supply (Kepco BOP-50-8M). The laser \nfrequency was simultaneously scanned at a rate of about 10-20 MHz\/s, and it was measured by \na wavemeter (HighFinnesse WS-7). \nThe laser beam was characterized using a beam profiler (Thorlabs BP104-VIS). \n\nA second experimental setup was used to study the spatial profile of the fluorescence generated by atoms in a \nlaser beam at resonance. It is shown in Fig.~\\ref{fig:exp_setup}. Here two lasers were used to excite the \n$D_1$ and $D_2$ transitions of cesium. Both lasers were based on distributed feedback diodes from toptica \n(DL100-DFB). One of the lasers (Cs $D_2$) served as a pump laser with a spatially broad and intense beam, \nwhile the other (Cs $D_1$), \nspatially narrower beam probed the fluorescence dynamics within the pump beam. \nFigure~\\ref{fig:levels} shows the level scheme of the excited transitions. \nBoth lasers were stabilized with saturation absorption signals from cells shielded by three layers of mu-metal. \nMu-metal shields were used to avoid frequency drifts due to the magnetic field scan performed in the experiment and other \nmagnetic field fluctuations in the laboratory.\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{divu_staru_exp_sh_v3.pdf}\n\t\\caption{\\label{fig:exp_setup} (Color online) Experimental setup for the two-laser experiment. The lasers were stabilized by two Toptica Digilok modules \n\tlocked to error signals generated from saturated absorption spectroscopy measurements made in a separate, magnetically shielded cell.}\n\\end{figure}\n\\begin{figure}\n\t\\includegraphics[width=0.45\\textwidth]{Cs_limenu_shema_DiviStari.pdf}\n\t\\caption{\\label{fig:levels} Level scheme for the two-laser experiment. The bold, solid arrow represents the pump laser transition, \n\twhereas the arrows with dashed lines represent the scanning laser transitions. Other transitions are given as thin, solid lines.}\n\\end{figure}\n\nA bandpass filter (890~nm $\\pm$ 10~nm) was placed before the photodiode. \nTo reduce noise from the intense pump beam, the probe beam was modulated by placing a mechanical \nchopper near its focus, and the fluorescence signal was passed through a lock-in amplifier and recorded \non a digital oscilloscope (Yokogawa DL-6154). \nThe probe laser was scanned through the pump laser beam profile using \na mirror mounted on a moving platform (Nanomax MAX301 from Thorlabs) with a scan range of 8 mm in one dimension. 
\nThe probing beam itself had a full width at half maximum (FWHM) \ndiameter of~\\SI{200}{\\micro\\metre} with typical laser power of~\\SI{100}{\\micro\\watt}. \nThe pump beam width was~\\SI{1.3}{\\milli\\metre} (FWHM) and its power was~\\SI{40}{\\milli\\watt}. This laser beam diameter was achieved by letting the \nlaser beam slowly diverge after passing the \nfocal point of a lens with focal length of~\\SI{1}{\\metre}. The pump laser beam diverged slowly enough to be effectively \nconstant within the vapor cell.\nThe probe beam was also focussed by the same lens to reach its focus point inside the cell. \n\n\n\n\n\n\n\\section{\\label{1laser:level1}Application of the model to magneto-optical signals obtained for high laser power densities}\nAs a first test for the numerical model with multiple regions inside the laser beam, we used the model to calculate the \nshapes of magneto-optical resonances for $^{87}$Rb in an optical cell. The experimental setup was described earlier \n(see Fig.~\\ref{fig:exp_Rb87}). Figure~\\ref{fig:one_laser_exp}(a)--(c) \nshow experimental signals (markers) and theoretical calculations (curves) of magneto-optical signals in the \n$F_g=2\\longrightarrow F_e=1$ transition of the $D_1$ line of $^{87}$Rb. Three theoretical curves are shown: \ncurve N1 was calculated assuming a laser beam with a single average \nintensity; curve N20 was calculated using a laser beam divided into 20 concentric regions; curve N20MT was calculated \nin the same way as curve N20, but furthermore the results were averaged also over trajectories that did not pass through \nthe center. At the relatively low Rabi frequency of $\\Omega_R = 2.5$~MHz [Fig.~\\ref{fig:one_laser_exp}(a)] \nall calculated curves practically coincided and described well the experimental signals. The single region model \ntreats the beam as a cylindrical beam with an intensity of 2~mW\/cm$^2$, which is below the saturation intensity for \nthat transition of 4.5~mW\/cm$^2$~\\cite{Steck:rubidium87}. When the laser intensity was 20~mW\/cm$^2$ ($\\Omega_R = 8.0$~MHz), \nwell above the saturation intensity, model N1 is no longer adequate for describing the experimental signals \nand model N20MT works slightly better [Fig.~\\ref{fig:one_laser_exp}(b)]. \nIn particular, the resonance becomes sharper and sharper as the intensity increases, and models \nN20 and N20MT reproduce this sharpness. Even at an intensity around 200~mW\/cm$^2$ ($\\Omega_R = 25$~MHz), \nthe models with 20 regions describe the shape of the experimental curve quite well, \nwhile model N1 describes the experimental results poorly in terms of width and overall shape [Fig.~\\ref{fig:one_laser_exp}(c)]. \n\n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{rb87_a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{rb87_b.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{rb87_c.pdf}\n\t\\caption{(Color online) Magneto-optical resonances for the $F_g=2\\longrightarrow F_e=1$ transition of the $D_1$ line of $^{87}$Rb. \n\tFilled circles represent experimental measurements for (a) 28 $\\mu$W ($\\Omega_R$=2.5 MHz) \n\t(b) 280 $\\mu$W ($\\Omega_R$=8.0 MHz), and \n\t(c) 2800 $\\mu$W ($\\Omega_R$=25 MHz). \n\tCurve N1 (dashed) shows the results of a theoretical model that uses one Rabi frequency to model the entire beam profile. \n\tCurve N20 (dash-dotted) shows the result of the calculation when the laser beam profile is divided into 20 concentric circles, \n\tand the optical Bloch equations are solved separately for each circle. 
\n\tCurve N20MT (solid) shows the results for a calculation with 20 concentric regions when trajectories are taken into account \n\tthat do not pass through the center of the beam. \n\t}\n\t\\label{fig:one_laser_exp}\n\\end{figure}\n\n\n\\section{\\label{distribution:level1}Investigation of the spatial distribution of fluorescence in an intense laser beam}\n\\subsection{Theoretical investigation of the spatial dynamics of fluorescence in an extended beam}\nIn order to describe the magneto-optical signals in the previous sections, the fluorescence from all concentric beam regions \nin models N20 and N20MT was summed, since usually experiments measure only total fluorescence (or absorption), especially if \nthe beams are narrow. \nHowever, solving the optical Bloch equations separately for different concentric regions of the laser beam, it is possible \nto calculate the strength of the fluorescence as a function of distance from the center of the beam. With an appropriate \nexperimental technique, the distribution of fluorescence within a laser beam could also be measured. \n\nFigure~\\ref{fig:dynamics} shows the calculated fluorescence distribution as a function of position in the laser beam. \nAs atoms move through the beam in one direction, the intense laser radiation optically pumps the ground state. In a very \nintense beam, the ground state levels that can absorb light have emptied even before the atoms reach the center \n(solid, green curve). Since atoms are actually traversing the beam from all directions, the result is a fluorescence profile with a \nreduction in intensity near the center of the beam (dashed, red curve). \n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{f3.pdf}\n\t\\caption{(Color online) Fluorescence distribution as a function of position in the laser beam.\n\tDotted (blue) line---laser beam profile, solid (green) line---fluorescence from atoms moving in one direction;\n\tdash-dotted (red) line---the overall fluorescence as a function of position that results \n\tfrom averaging all beam trajectories. Results from theoretical calculations.}\n\t\\label{fig:dynamics}\n\\end{figure}\nThe effect of increasing the laser beam intensity (or Rabi frequency) can be seen in Fig.~\\ref{fig:dynamics_rabi}.\nAt a Rabi frequency of $\\Omega_R=0.6$ MHz, the fluorescence profile tracks the intensity profile of the laser beam \nexactly. When the Rabi frequency is increased ten times ($\\Omega_R=6.0$ MHz), which corresponds to an intensity \nincrease of 100, the fluorescence profile already appears somewhat deformed and wider than the actual laser beam profile. \nAt Rabi frequencies of $\\Omega_R=48.0$ MHz and greater, the fluorescence intensity at the center of the intense laser beam \nis weaker than towards the edges as a result of the ground state being depleted by the intense radiation \nbefore the atoms reach the center of the laser beam.\n\\begin{figure}[htpb]\n\t\\includegraphics[width=0.45\\textwidth]{f4.pdf}\n\t\\caption{(Color online) Fluorescence distribution as a function of position in the laser beam for various values of the Rabi frequency. \n\tResults from theoretical calculations. 
As the Rabi frequency increases, the distribution becomes broader.}\n\t\\label{fig:dynamics_rabi}\n\\end{figure}\n \n\\subsection{Experimental study of the spatial dynamics of excitation and fluorescence in an intense, extended beam}\n\nIn order to test our theoretical model of the spatial distribution of fluorescence from atoms in an intense, extended pumping beam, \nwe decided to record magneto-optical resonances \nfrom various positions in the pumping beam. The experimental setup is shown in \nFig.~\\ref{fig:exp_setup}. To visualize these data, surface plots were generated where one horizontal \naxis represented the magnetic field and the other, the position of the probe beam relative to the pump beam axis. The \nheight of the surface represented the fluorescence intensity. In essense, the surface consists of a series of \nmagneto-optical resonances recorded for a series of positions of the probe beam axis relative the the pump beam axis. \nFig.~\\ref{fig:p44_s43} shows the results for experiments [(a)] and calculations [(b)] for which the pump beam was tuned to the \n$F_g=4\\longrightarrow F_e=4$ transition of the Cs $D_2$ line and the probe beam was tuned to the\n$F_g=4\\longrightarrow F_e=3$ transition of the Cs $D_1$ line. \n\\begin{figure*}\n\t\\includegraphics[width=0.45\\textwidth]{f7a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{f7b.pdf}\n\t\\caption{\\label{fig:p44_s43} (Color online) Magneto-optical resonances produced for various positions of the probing laser beam \n\t($F_g=4\\longrightarrow F_e=3$ transition of the $D_1$ line of cesium) with respect to the pump laser beam \n\t($F_g=4\\longrightarrow F_e=4$ transition of the $D_2$ line of cesium): \n\t(a) experimental results and (b) theoretical calculations. }\n\\end{figure*}\nOne can see that the theoretical plot reproduces qualitatively all the features of the experimental measurement. \nSimilar agreement can be observed when the probe beam was tuned to the $F_g=3\\longrightarrow F_e=4$ transition \nof the Cs $D_1$ line, as \nshown in Fig.~\\ref{fig:p44_s34}. \n\\vfill\n\\break\n\\begin{figure*}\n\t\\includegraphics[width=0.45\\textwidth]{f8a.pdf}\n\t\\includegraphics[width=0.45\\textwidth]{f8b.pdf}\n\t\\caption{\\label{fig:p44_s34} (Color online) Magneto-optical resonances produced for various positions of the probe laser beam \n\t($F_g=3\\longrightarrow F_e=4$ transition of the $D_1$ line of cesium) with respect to the pump laser beam \n\t($F_g=4\\longrightarrow F_e=4$ transition of the $D_2$ line of cesium): \n\t(a) experimental results and (b) theoretical calculations. }\n\\end{figure*}\n\n\n\n\\section{\\label{Conclusions:level1}Conclusions}\nWe have set out to model magneto-optical signals more accurately at laser intensities significantly higher than the saturation \nintensity by dividing the laser beam into concentric circular regions and solving the rate equations for Zeeman coherences in each region while \ntaking into account the actual laser intensity in that region and the transport of atoms between regions. This approach was used to \nmodel magneto-optical resonances for the $F_g=2 \\longrightarrow F_e=1$ transitions of the $D_1$ line of $^{87}$Rb, comparing the \ncalculated curves to measured signals. 
\nWe have demonstrated that good agreement between theory and experiment can be achieved up to Rabi frequencies of at least 25~MHz, \nwhich corresponds to a laser intensity of 200 mW\/cm$^2$, or more than 40 times the saturation intensity of the transition.\nAs an additional check on the model, we have studied the spatial distribution of the fluorescence intensity within a laser beam theoretically \nand experimentally. The results indicated that at high laser power densities, the maximum fluorescence intensity is not produced \nin the center of the beam, because the atoms have been pumped free of absorbing levels prior to reaching the center. We compared\nexperimental and theoretical signals of magneto-optical resonance signals obtained by exciting cesium atoms with a \nnarrow, weak probe beam tuned to the $D_1$ transition at various locations inside a region illuminated by an intense pump beam \ntuned to the $D_2$ transition and obtained good qualitative agreement. \n\n \n\\begin{acknowledgments}\nWe gratefully acknowledge support from the Latvian Science Council Grant Nr. 119\/2012 and \nfrom the University of Latvia Academic Development Project Nr. AAP2015\/B013.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\nPrincipal component analysis (PCA) is \nthe ``workhorse'' method\n for dimensionality reduction and feature extraction. It finds well-documented applications, including statistics, bioinformatics, genomics, quantitative finance, and engineering, to name just a few. The goal of PCA is to obtain low-dimensional representations for high-dimensional data, while preserving most of the high-dimensional data variance \\cite{1901pca}. \n\n\nYet, various practical scenarios involve \\emph{multiple} datasets, in which one is tasked with extracting the most discriminative information of one target dataset relative to others. \nFor instance, consider two gene-expression measurement datasets of volunteers from across different geographical areas and genders: the first dataset collects gene-expression levels of cancer patients, considered here as the \n \\emph{target data}, \nwhile the second contains levels from healthy individuals corresponding here to our \n \\emph{background data}. The goal is to identify molecular subtypes of cancer within cancer patients.\nPerforming PCA on either the target data or the target together with background data is likely to yield principal components (PCs) that correspond to\nthe background information common to both datasets (e.g., the demographic patterns and genders) \\cite{1998background}, rather than the PCs uniquely describing the subtypes of cancer.\nAlbeit simple to comprehend and practically relevant, such discriminative data analytics has not been thoroughly addressed. \n\n\nGeneralizations of PCA include kernel (K) PCA \\cite{kpca,2017kpca}, graph PCA \\cite{proc2018gsk}, \n$\\ell_1$-PCA \\cite{2018l1pca}, \n robust PCA \\cite{jstsp2016shahid},\nmulti-dimensional scaling \\cite{mds}, \nlocally linear embedding \\cite{lle}, \nIsomap \\cite{2000isomap}, and Laplacian eigenmaps \\cite{2003eigenmap}. Linear discriminant analysis (LDA) is a\n\\emph{supervised} classifier of linearly projected reduced dimensionality data vectors. It is designed so that linearly projected training vectors (meaning \\emph{labeled} data) of the same class stay as close as possible, while projected data of different classes are positioned as far as possible \\cite{1933lda}. 
\nOther discriminative methods include \n\tre-constructive and discriminative subspaces \\cite{fidler2006combining},\n\t discriminative vanishing component analysis \\cite{hou2016discriminative}, \n\t and kernel LDA \\cite{mika1999fisher},\n\t which similar to LDA rely on labeled data.\nSupervised PCA looks for orthogonal projection vectors so that the dependence of projected vectors from \n\tone dataset on the other dataset is maximized \\cite{barshan2011supervised}.\n\nMultiple-factor analysis, an extension of PCA to deal with multiple datasets, is implemented in two steps: S1) normalize each dataset by the largest eigenvalue of \n\tits sample covariance matrix; and, S2) perform PCA on the combined dataset of all normalized ones \\cite{abdi2013multiple}.\nOn the other hand, canonical correlation analysis is widely employed for analyzing multiple datasets \\cite{1936cca,2018cwsggcca,2018gmcca}, but its goal is to extract the shared low-dimensional structure. \nThe recent proposal called contrastive (c) PCA aims at extracting contrastive information between two datasets \\cite{2017cpca}, by searching for directions along which the target data variance is large while that of \n the background data is small. Carried out using the singular value decomposition (SVD), cPCA can reveal dataset-specific information often missed by standard PCA if the involved hyper-parameter is properly selected. \nThough possible \n to automatically choose\nthe best hyper-parameter from a list of candidate values, performing SVD multiple\ntimes can be computationally cumbersome in large-scale feature extraction settings.\n\n\n \n\n\nBuilding on but going beyond cPCA, this paper starts by developing a novel approach, termed discriminative (d) PCA, for discriminative analytics of \\emph{two} datasets. dPCA looks for linear projections (as in LDA) but of \\emph{unlabeled}\n data vectors, by \\textcolor{black}{maximizing the variance of projected target data while minimizing that of background data. This leads to a \\emph{ratio trace} maximization formulation,}\n and also justifies our chosen term \\emph{discriminative PCA}. Under certain conditions, dPCA is proved to be least-squares (LS) optimal in the sense that it reveals PCs specific to the target data relative to background data. Different from \n cPCA, dPCA is parameter free, and it requires a single generalized eigendecomposition, lending itself favorably to large-scale discriminative data analytics. \n However, real data vectors \n often exhibit nonlinear correlations, rendering dPCA inadequate for complex practical setups. To this end, nonlinear dPCA is developed via kernel-based learning. Similarly, the solution of KdPCA can be provided analytically in terms of generalized eigenvalue decompositions. As the complexity of KdPCA grows only linearly with the dimensionality of data vectors, KdPCA is preferable over dPCA for discriminative analytics of high-dimensional data.\n\n\n\n\n\ndPCA is further extended to handle multiple (more than two) background datasets. Multi-background (M) dPCA is developed to extract low-dimensional discriminative structure unique to the target data but not to \\emph{multiple} sets of background data. 
This becomes possible by maximizing \\textcolor{black}{the variance of projected \n\t target data while minimizing the sum of variances of all projected \n\t background data.}\nAt last, kernel (K) MdPCA is put forth to account for nonlinear data correlations.\n\n\n\\emph{Notation}: Bold uppercase (lowercase) letters denote matrices (column vectors).\nOperators $(\\cdot)^{\\top}$,\n$(\\cdot)^{-1}$, and ${\\rm Tr}(\\cdot)$ denote matrix transposition, \ninverse, and trace, respectively; \n$\\|\\mathbf{a}\\|_2 $ is the $\\ell_2$-norm of vector $\\mathbf{a}$;\n ${\\rm diag}(\\{a_i\\}_{i=1}^m)$ is a diagonal matrix holding elements $\\{a_i\\}_{i=1}^m$ on its main diagonal; \n $\\mathbf{0}$ denotes all-zero vectors or matrices;\n and $\\mathbf{I}$ represents identity matrices of suitable dimensions.\n\n\n\\section{Preliminaries and Prior Art}\\label{sec:preli}\nConsider two datasets, namely a target dataset $\\{\\mathbf{x}_i\\in\\mathbb{R}^D\\}_{i=1}^m$ that we are interested in analyzing, and a background dataset $\\{\\mathbf{y}_j\\in\\mathbb{R}^D\\}_{j=1}^n$ that contains latent background-related vectors also present in the target data. Generalization to multiple background datasets will be presented in Sec. \\ref{sec:mdpca}. \nAssume without loss of generality that both datasets are centered; in other words, \n\\textcolor{black}{the sample mean $m^{-1}\\sum_{i=1}^m \\mathbf{x}_i$ $(n^{-1}\\sum_{j=1}^n \\mathbf{y}_j)$ has been subtracted from each $\\mathbf{x}_i$ $(\\mathbf{y}_j)$.}\nTo motivate our novel approaches \nin subsequent sections, some basics of PCA and cPCA are outlined next.\n\n\nStandard PCA handles a single dataset at a time. \nIt looks for low-dimensional representations $\\{\\boldsymbol{\\chi}_i\\in\\mathbb{R}^d \\}_{i=1}^m$ of $\\{\\mathbf{x}_i\\}_{i=1}^m$ with $d<D$, while preserving most of the data variance. For $d=1$, the first PC is obtained by projecting the data onto the direction $\\hat{\\mathbf{u}}:=\\arg\\max_{\\mathbf{u}^\\top\\mathbf{u}=1}\\,\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}$, where $\\mathbf{C}_{xx}:=(1\/m)\\sum_{i=1}^m\\mathbf{x}_i\\mathbf{x}_i^\\top$ denotes the sample covariance matrix of the target data; that is, $\\hat{\\mathbf{u}}$ is the leading eigenvector of $\\mathbf{C}_{xx}$. When $d>1$, PCA looks for $\\{\\mathbf{u}_i\\in\\mathbb{R}^D \\}_{i=1}^d$, \nobtained from the \\textcolor{black}{$d$ eigenvectors of $\\mathbf{C}_{xx}$ associated with the first $d$ largest eigenvalues sorted in a decreasing order}. As alluded to in Sec. \\ref{sec:intro}, PCA applied on $\\{\\mathbf{x}_i \\}_{i=1}^m$ only, or on the combined datasets\n $\\{\\{\\mathbf{x}_i\\}_{i=1}^m,\\,\\{\\mathbf{y}_j\\}_{j=1}^n \\}$ can generally not uncover the discriminative patterns or features of the target data relative to the background data.\n\n\n\n\nOn the other hand, the recent cPCA seeks a vector $\\mathbf{u}\\in\\mathbb{R}^D$ along which the target data exhibit large variations while the background\n data exhibit small variations, via solving \\cite{2017cpca}\n\t\\begin{subequations}\n\t\t\\label{eq:cpca}\n\t\t\\begin{align}\n\t\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^D}{\\max}\n\t\t\t\\quad&\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}-\\alpha \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}\\label{eq:cpcaobj}\\\\\n\t\t\t{\\rm s.\\,to}\\quad &\\mathbf{u}^\\top\\mathbf{u}=1\n\t\t\\end{align}\t\t\n\t\\end{subequations}\nwhere $\\mathbf{C}_{yy}:=(1\/n)\\sum_{j=1}^n\\mathbf{y}_j\\mathbf{y}_j^\\top\\in\\mathbb{R}^{D\\times D}$ denotes the sample covariance matrix of $\\{\\mathbf{y}_j \\}_{j=1}^n$, and the hyper-parameter $\\alpha\\ge 0$ trades off maximizing the target data variance (the first term in \\eqref{eq:cpcaobj}) for minimizing the background data variance (the second term). 
For a given $\\alpha$, the solution of \\eqref{eq:cpca} is given by the eigenvector of $\\mathbf{C}_{xx}-\\alpha\\mathbf{C}_{yy}$ associated with its largest eigenvalue, along which the obtained data projections constitute the first contrastive (c) \\textcolor{black}{PC}. Nonetheless, there is no rule of thumb for choosing $\\alpha$.\n\\textcolor{black}{A spectral-clustering based algorithm was devised to automatically select $\\alpha$ from a list of candidate values \\cite{2017cpca}, but its brute-force search is computationally expensive to use in large-scale datasets.}\n\n\n\n\n\n\\section{Discriminative Principal Component Analysis} \\label{sec:dpca}\nUnlike PCA, LDA is a supervised classification method of linearly projected data at reduced dimensionality. \nIt finds those linear projections that reduce the variation in the same class and increase the separation between classes \\cite{1933lda}. This is accomplished by maximizing the ratio of the labeled data variance between classes to that within the classes. \n\nIn a related but unsupervised setup, \nconsider we are given a target dataset and a background dataset, and we are tasked with \\textcolor{black}{extracting vectors that are meaningful in representing\n\t $\\{\\mathbf{x}_i\\}_{i=1}^m$, but not $\\{\\mathbf{y}_j\\}_{j=1}^n$.}\n A meaningful \n approach would then be to maximize the ratio of the projected target data variance over that of the background data. Our \\emph{discriminative (d) PCA} approach finds\n\t\t\\begin{equation}\t\\label{eq:dpca}\n\t\t\\textcolor{black}{\n\t\t\t\\hat{\\mathbf{u}}:=\\arg\n\t\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^{D}}{\\max}\n\t\t\t\\quad\\frac{\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}}{ \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}}}\n\t\t\\end{equation}\nWe will term the solution in \\eqref{eq:dpca} the discriminant subspace vector,\nand the projections $\\{\\hat{\\mathbf{u}}^\\top\\mathbf{x}_i\\}_{i=1}^m$ the first discriminative (d) \\textcolor{black}{PC}. \nNext, we discuss the solution in \\eqref{eq:dpca}.\n\n\nUsing Lagrangian duality theory, the solution in \\eqref{eq:dpca} corresponds to the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ associated with the largest eigenvalue. \nTo establish this, note that \\eqref{eq:dpca} can be equivalently rewritten as\n\t\\begin{subequations}\n\\label{eq:dpcafm2}\n\\begin{align}\n\t\\hat{\\mathbf{u}}:=\\arg\n\t\t\\underset{\\mathbf{u}\\in\\mathbb{R}^{D}}{\\max}\\quad& \\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}\\label{eq:dpcafm2cos}\\\\\n\t{\\rm s.\\,to}\\quad& \\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}=1.\\label{eq:dpcafm2con}\n\\end{align}\n\t\\end{subequations}\n\tLetting $\\lambda$ denote the dual variable associated with the constraint \\eqref{eq:dpcafm2con}, the Lagrangian of \\eqref{eq:dpcafm2} becomes\n\\begin{equation}\\label{eq:lag}\n\\mathcal{L}(\\mathbf{u};\\,\\lambda)=\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}+\\lambda\\left(1-\\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}\\right).\n\\end{equation}\nAt the optimum $(\\hat{\\mathbf{u}};\\,\\hat{\\lambda})$, \nthe KKT conditions confirm that\n\\begin{equation}\\label{eq:gep}\n\t\\mathbf{C}_{xx}\\hat{{\\mathbf{u}}}=\\hat{\\lambda}\\mathbf{C}_{yy}\\hat{\\mathbf{u}}.\n\\end{equation}\nThis is a generalized eigen-equation, whose solution $\\hat{\\mathbf{u}}$ is the generalized eigenvector of $(\\mathbf{C}_{xx},\\,\\mathbf{C}_{yy})$ corresponding to the generalized eigenvalue $\\hat{\\lambda}$. 
\nLeft-multiplying\n \\eqref{eq:gep} by $\\hat{\\mathbf{u}}^\\top$ yields\n$\t\\hat{\\mathbf{u}}^\\top\\mathbf{C}_{xx}\\hat{\\mathbf{u}}=\\hat{\\lambda} \\hat{\\mathbf{u}}^\\top\\mathbf{C}_{yy}\\hat{\\mathbf{u}}\n$, corroborating that the optimal objective value of \\eqref{eq:dpcafm2cos} is attained when $\\hat{\\lambda}:=\\lambda_1$ is the largest generalized eigenvalue. Furthermore, \\eqref{eq:gep} can be solved efficiently using well-documented solvers that rely on, e.g., Cholesky factorization \\cite{saad1}. \n \n \n Supposing further that $\\mathbf{C}_{yy}$ is nonsingular,\n \\eqref{eq:gep} yields \n \\begin{equation}\n \\label{eq:dpcasol}\n \\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}\\hat{\\mathbf{u}}=\\hat{\\lambda}\\hat{\\mathbf{u}}\n \\end{equation}\nimplying that $\\hat{\\mathbf{u}}$ in \\eqref{eq:dpcafm2} is the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ corresponding to the largest eigenvalue $\\hat{\\lambda}=\\lambda_1$.\n\n\n\nTo find multiple ($d\\ge 2$) subspace vectors, namely $\\{\\mathbf{u}_i\\in\\mathbb{R}^D\\}_{i=1}^d$ that form $\\mathbf{U}:=[\\mathbf{u}_1 \\, \\cdots \\, \\mathbf{u}_d]\\in\\mathbb{R}^{D\\times d}$, \\textcolor{black}{the formulation in \\eqref{eq:dpca} with $\\mathbf{C}_{yy}$ being nonsingular} can be generalized as follows\n\\vspace{4pt}\n\\textcolor{black}{\n\t\\begin{equation}\n\t\t\\label{eq:dpcam}\t\n\t\t\\hat{\\mathbf{U}}:=\\arg\t\\underset{\\mathbf{U}\\in\\mathbb{R}^{D\\times d}}{\\max}~\n\t\t{\\rm Tr}\\left[\\left(\\mathbf{U}^\\top\\mathbf{C}_{yy}\\mathbf{U}\\right )^{-1}\\mathbf{U}^\\top\\mathbf{C}_{xx}\\mathbf{U}\\right].\n\t\\end{equation}\n\t}\n\n\nClearly, \\eqref{eq:dpcam} is a \\emph{ratio trace} maximization problem (see e.g., \\cite{2014mati}), whose solution\nis given in Thm. \\ref{the:dpca} (see a proof in \\cite[p. 448]{2013fukunaga}).\n\n\n\n\\begin{theorem}\n\t\\label{the:dpca}\n\tGiven centered data $\\{{\\mathbf{x}}_i\\in\\mathbb{R}^{D}\\}_{i=1}^m$ and $\\{{\\mathbf{y}}_j\\in\\mathbb{R}^{D}\\}_{j=1}^n$ with sample covariance matrices $\\mathbf{C}_{xx}:=(1\/m)\\sum_{i=1}^m\\mathbf{x}_i\\mathbf{x}_i^\\top$ and $\\mathbf{C}_{yy}:=(1\/n)\\sum_{j=1}^n\\mathbf{y}_j\\mathbf{y}_j^\\top\\succ\\mathbf{0}$, the $i$-th column of the dPCA optimal solution $\\hat{\\mathbf{U}}\\in\\mathbb{R}^{D\\times d}$ in \\eqref{eq:dpcam} is given by the right eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ associated with the $i$-th largest eigenvalue, where $i=1,\\,\\ldots,\\,d$.\n\\end{theorem}\n\n\n\nOur dPCA for \ndiscriminative analytics of two datasets is summarized in Alg. \\ref{alg:dpca}.\n\\textcolor{black}{Four} remarks are now in order.\n\n\\begin{remark}\nWithout background data, we have $\\mathbf{C}_{yy}=\\mathbf{I}$, and dPCA boils down to the standard PCA. 
\n\\end{remark}\n\n\n\\begin{remark}\n\tSeveral possible combinations of target and background datasets include:\n\t i) measurements from a healthy group $\\{\\mathbf{y}_j\\}$ and a diseased group $\\{\\mathbf{x}_i\\}$, where the former has similar population-level variation \n\t with the latter, but distinct variation \n\t due to subtypes of diseases; ii) before-treatment $\\{\\mathbf{y}_j\\}$ and after-treatment $\\{\\mathbf{x}_i\\}$ datasets, in which the former contains additive \n\t measurement noise rather than the variation caused by treatment; and iii) signal-free $\\{\\mathbf{y}_j\\}$ and signal recordings $\\{\\mathbf{x}_i\\}$, where the former consists of only noise.\n\\end{remark}\n\n\\begin{remark}\\label{re:twoeq}\nConsider the eigenvalue decomposition \n$\\mathbf{C}_{yy}=\\mathbf{U}_y\\mathbf{\\Sigma}_{yy}\\mathbf{U}_y^\\top$. \nWith $\\mathbf{C}_{yy}^{1\/2}:=\\mathbf{\\Sigma}_{yy}^{1\/2}\\mathbf{U}_{y}^\\top$, and the definition \n $\\mathbf{v}:=\\mathbf{C}_{yy}^{\\top\/2}\\mathbf{u}\\in\\mathbb{R}^D$, \\eqref{eq:dpcafm2} can be expressed as\t\n\t\\begin{subequations}\n\t\t\\label{eq:v}\n\t\\begin{align}\n\t\t\\hat{\\mathbf{v}}:=\\arg\n\t\\max_{\\mathbf{v}\\in\\mathbb{R}^D}\\quad&\n\t\t\\mathbf{v}^\\top\\mathbf{C}^{-1\/2}_{yy}\\mathbf{C}_{xx}\\mathbf{C}^{-\\top\/2}_{yy}\\mathbf{v}\\\\\n\t\t{\\rm s.\\,to}\\quad &\\mathbf{v}^\\top\\mathbf{v}=1\t\n\t\t\\end{align}\n\t\t\\end{subequations}\nwhere $\\hat{\\mathbf{v}}$ corresponds to the leading eigenvector of $\\mathbf{C}_{yy}^{-1\/2}\\mathbf{C}_{xx} \\mathbf{C}_{yy}^{-\\top\/2}$. Subsequently, $\\hat{\\mathbf{u}}$ in \\eqref{eq:dpcafm2} is recovered as $\\hat{\\mathbf{u}}=\\mathbf{C}_{yy}^{-\\top\/2}\\hat{\\mathbf{v}}$.\nThis indeed suggests that discriminative analytics of $\\{\\mathbf{x}_i\\}_{i=1}^m$ and $\\{\\mathbf{y}_j\\}_{j=1}^n$ using dPCA \ncan be viewed as PCA of the `denoised' or `background-removed' data $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$,\nfollowed by an `inverse' transformation to \nmap the obtained subspace vector of the $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$ data to $\\{\\mathbf{x}_i\\}$ that of the target \ndata. \nIn this sense, $\\{\\mathbf{C}^{-1\/2}_{yy}\\mathbf{x}_i \\}$ can be seen as the data obtained after removing the dominant `background' subspace vectors from the target data. \n\n\n\n\t\n\n\n\n\\end{remark}\n\\begin{remark}\nInexpensive power or Lanczos iterations \\cite{saad1} can be employed to compute the principal eigenvectors in \\eqref{eq:dpcasol}.\n\\end{remark}\n\n\n\n\\begin{algorithm}[t]\n\t\\caption{Discriminative PCA.}\n\t\\label{alg:dpca}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE {\\bfseries Input:}\n\t\tNonzero-mean target and background data $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^m$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}_{j=1}^n$; number of dPCs $d$.\n\t\t\\STATE {\\bfseries Exclude} the means from $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}$ to obtain centered data $\\{\\mathbf{x}_i\\}$, and $\\{\\mathbf{y}_j\\}$. 
Construct $\\mathbf{C}_{xx}$ and $\\mathbf{C}_{yy}$.\n\t\t\\STATE {\\bfseries Perform} \\label{step:4} eigendecomposition\n\t\ton $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ to obtain the $d$ right eigenvectors $\\{\\hat{\\mathbf{u}}_i\\}_{i=1}^d$ associated with the $d$ largest eigenvalues.\n\t\t\\STATE {\\bfseries Output} $\\hat{\\mathbf{U}}=[\\hat{\\mathbf{u}}_1\\,\\cdots \\, \\hat{\\mathbf{u}}_d]$.\n\t\t\\vspace{-0pt}\n\t\\end{algorithmic}\n\\end{algorithm} \n\n\n\n\n\\textcolor{black}{Consider again \\eqref{eq:dpcafm2}. Based on Lagrange duality, when selecting $\\alpha=\\hat{\\lambda}$ in \\eqref{eq:cpca}, where $\\hat{\\lambda}$ is the largest eigenvalue of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$, cPCA maximizing $\\textcolor{black}{{\\mathbf{u}}^\\top}(\\mathbf{C}_{xx}-\\hat{\\lambda}\\mathbf{C}_{yy}){\\mathbf{u}}$ is equivalent to $\\max_{\\mathbf{u}\\in\\mathbb{R}^{D}}\\mathcal{L}(\\mathbf{u};\\hat{\\lambda})=\\mathbf{u}^\\top(\\mathbf{C}_{xx}-\\hat{\\lambda}\\mathbf{C}_{yy})\\mathbf{u}+\\hat{\\lambda}$, which coincides with \\eqref{eq:lag} when $\\lambda=\\hat{\\lambda}$ at the optimum. \nThis suggests that the optimizers of cPCA and dPCA share the same direction when $\\alpha$ in cPCA is chosen to be the optimal dual variable $\\hat{\\lambda}$ of our dPCA in \\eqref{eq:dpcafm2}.\n\tThis equivalence between dPCA and cPCA with a proper $\\alpha$ can also be seen from \n\tthe following.\n\t\\begin{theorem}{\\cite[Theorem 2]{guo2003generalized}}\n\t\t\\label{the:cvsd}\n\t\tFor real symmetric matrices $\\mathbf{C}_{xx}\\succeq \\mathbf{0}$ and $\\mathbf{C}_{yy}\\succ\\mathbf{0}$, the following holds\n\t\t\t\\begin{equation*}\n\t\t\t\\check{\\lambda}=\\frac{\\check{\\mathbf{u}}^\\top\\mathbf{C}_{xx}\\check{\\mathbf{u}}}{\\check{\\mathbf{u}}^\\top\\mathbf{C}_{yy}\\check{\\mathbf{u}}}=\\underset{\\|\\mathbf{u}\\|_2=1}{\\max}\\frac{\\mathbf{u}^\\top\\mathbf{C}_{xx}\\mathbf{u}}{\\mathbf{u}^\\top\\mathbf{C}_{yy}\\mathbf{u}} \n\t\t\t\\end{equation*}\n\t\t\tif and only if\n\t\t\t\\begin{equation*}\n\t\t\t\\check{\\mathbf{u}}^\\top(\\mathbf{C}_{xx}-\\check{\\lambda}\\mathbf{C}_{yy})\\check{\\mathbf{u}}=\\underset{\\|\\mathbf{u}\\|_2=1}{\\max}\\mathbf{u}^\\top(\\mathbf{C}_{xx}-\\check{\\lambda}\\mathbf{C}_{yy})\\mathbf{u}.\n\t\t\t\\end{equation*}\n\t\\end{theorem}\n\t}\n\n\nTo gain further insight into the relationship between dPCA and cPCA,\nsuppose that $\\mathbf{C}_{xx}$ and $\\mathbf{C}_{yy}$ are simultaneously diagonalizable; that is, there exists an unitary matrix $\\mathbf{U}\\in\\mathbb{R}^{D\\times D}$ such that\n\\begin{equation*}\n\\mathbf{C}_{xx}:=\\mathbf{U}\\mathbf{\\Sigma}_{xx}\\mathbf{U}^\\top,\\quad {\\rm and}\\quad \\mathbf{C}_{yy}:=\\mathbf{U}\\mathbf{\\Sigma}_{yy}\\mathbf{U}^\\top\n\\end{equation*} \nwhere diagonal matrices $\\mathbf{\\Sigma}_{xx},\\,\\mathbf{\\Sigma}_{yy}\\succ \\mathbf{0}$ hold accordingly eigenvalues $\\{\\lambda_x^i\\}_{i=1}^D$ of $\\mathbf{C}_{xx}$ and $\\{\\lambda_y^i\\}_{i=1}^D$ of $\\mathbf{C}_{yy}$ on their main diagonals. \n\\textcolor{black}{Even if the two datasets may share some subspace vectors, $\\{\\lambda_x^i\\}_{i=1}^D$ and $\\{\\lambda_y^i\\}_{i=1}^D$ are in general not the same.}\nIt is easy to check that $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}=\\mathbf{U}\\mathbf{\\Sigma}_{yy}^{-1}\\mathbf{\\Sigma}_{xx}\\mathbf{U}^\\top=\\mathbf{U} {\\rm diag}\\big(\\{\\frac{\\lambda_x^i}{\\lambda_y^i}\\}_{i=1}^D\\big)\\mathbf{U}^\\top$. 
Seeking the first $d$ latent subspace vectors is tantamount to taking the $d$ columns of $\\mathbf{U}$ that correspond to the $d$ largest values among $\\{\\frac{\\lambda_x^i}{\\lambda_y^i}\\}_{i=1}^D$.\nOn the other hand, cPCA, for a fixed $\\alpha$, looks for the first $d$ latent subspace vectors of $\\mathbf{C}_{xx}-\\alpha\\mathbf{C}_{yy}=\\mathbf{U}(\\mathbf{\\Sigma}_{xx}-\\alpha{\\bm\\Sigma}_{yy})\\mathbf{U}^\\top=\\mathbf{U}{\\rm diag}\\big(\\{\\lambda_x^i-\\alpha\\lambda_y^i\\}_{i=1}^D\\big)\\mathbf{U}^\\top$, which amounts to taking the $d$ columns of $\\mathbf{U}$ associated with the $d$ largest values in $\\{\\lambda_x^i-\\alpha\\lambda_y^i\\}_{i=1}^D$. \n\\textcolor{black}{This further confirms that when $\\alpha$ is sufficiently large (small), cPCA returns the $d$ columns of $\\mathbf{U}$ associated with the $d$ smallest $\\lambda_{y}^{i}$'s (largest $\\lambda_x^{i}$'s). When $\\alpha$ is not properly chosen, cPCA may fail to extract the most contrastive information from target data relative to background data. In contrast, this issue is not present in dPCA, simply because dPCA has no tunable parameter.}\n\n\n\n\n\\section{Optimality of {d}PCA}\n\\label{sec:optim}\n\nIn this section, we show that dPCA is optimal when data obey a certain affine model. In a similar vein, PCA adopts a factor analysis model to express the non-centered background data $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\in\\mathbb{R}^D\\}_{j=1}^n$ as\n\\begin{equation}\n\\accentset{\\circ}{\\mathbf{y}}_j=\\mathbf{m}_y+\n\\mathbf{U}_b\\bm{\\psi}_j+\\mathbf{e}_{y,j},\t\\quad j=1,\\,\\ldots,\\,n\\label{eq:y}\n\\end{equation}\nwhere $\\mathbf{m}_y\\in\\mathbb{R}^D$ denotes the unknown location (mean) vector; $\\mathbf{U}_b\\in\\mathbb{R}^{D\\times k}$ has orthonormal columns with $k{\\lambda_{x,i}}\/{\\lambda_{y,i}}$.\n\\end{assumption}\nAssumption \\ref{asmp:unique} essentially requires that $\\mathbf{u}_s$ is discriminative enough in the target data relative to the background data. \n\\textcolor{black}{Combining Assumption \\ref{asmp:unique} with the fact that $\\mathbf{u}_s$ is an eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$, it follows readily that the eigenvector of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ associated with the largest eigenvalue is $\\mathbf{u}_s$.}\nUnder these two assumptions, we establish the optimality of dPCA next. \n\\begin{theorem}\n\t\\label{the:optm}\n\tUnder Assumptions \\ref{asmp:model} and \\ref{asmp:unique} with $d=1$, as $m,\\,n\\to\\infty$, the solution of \\eqref{eq:dpca} recovers the subspace vector specific to target data relative to background data, namely $\\mathbf{u}_s$. \n\\end{theorem}\n\n \n\\section{Kernel dPCA} \\label{sec:kdpca}\n\nWith advances in data acquisition and data storage technologies, a sheer volume of possibly high-dimensional data is collected daily, and such data generally lie on a nonlinear manifold. This goes beyond the ability of the (linear) dPCA in Sec. \\ref{sec:dpca}, mainly for two reasons: i) dPCA presumes a linear low-dimensional hyperplane to project the target data vectors; and ii) dPCA incurs computational complexity $\\mathcal{O}(\\max (m,\\,n) D^2)$ that grows quadratically with the dimensionality of data vectors.\n To address these challenges, this section generalizes dPCA to account for nonlinear data relationships via kernel-based learning, and puts forth kernel (K) dPCA for nonlinear discriminative analytics. 
Specifically, KdPCA starts by \\textcolor{black}{mapping} both the target and background data vectors from the original data space to a higher-dimensional (possibly infinite-dimensional) feature space using a common nonlinear function, which is followed by performing linear dPCA on the \\textcolor{black}{transformed} data.\n\n\nConsider first the dual version of dPCA, starting with the $N:=m+n$ augmented data $\\{\\mathbf{z}_i\\in\\mathbb{R}^D\\}_{i=1}^N$ as \n\\begin{equation*}\n\\label{eq:z}\n\\mathbf{z}_i:=\\left\\{\n\\begin{array}{ll}\n\\mathbf{x}_i, & 1\\le i \\le m\\\\\n\\mathbf{y}_{i-m}, & m< i \\le N\n\\end{array}\n\\right.\n\\end{equation*}\nand express the wanted\nsubspace vector $\\mathbf{u}\\in\\mathbb{R}^D$ in terms of\n $\\mathbf{Z}:=[\\mathbf{z}_1\\,\\cdots \\, \\mathbf{z}_N]\\in\\mathbb{R}^{D\\times N}$, yielding $\\mathbf{u}:=\\mathbf{Z}\\mathbf{a}$, where $\\mathbf{a}\\in\\mathbb{R}^N$ denotes the dual vector. \\textcolor{black}{When ${\\rm min}(m\\,,n)\\gg D$, matrix $\\mathbf{Z}$ has full row rank in general. Thus, there always exists a vector $\\mathbf{a}$ so that $\\mathbf{u}=\\mathbf{Z}\\mathbf{a}$. Similar steps have also been used in obtaining dual versions of PCA and CCA \\cite{kpca,2004kernel}.} \nSubstituting $\\mathbf{u}=\\mathbf{Z}\\mathbf{a}$ into \\eqref{eq:dpca} leads to our dual dPCA \n\\begin{equation}\n\\textcolor{black}{\t\\label{eq:ddpca}\n\t\\underset{\\mathbf{a}\\in\\mathbb{R}^N}{\\max}\n\t\\quad \\frac{\\mathbf{a}^\\top \\mathbf{Z}^\\top \\mathbf{C}_{xx}\\mathbf{Z}\\mathbf{a}}{ \\mathbf{a}^\\top \\mathbf{Z}^\\top\\mathbf{C}_{yy}\\mathbf{Z}\\mathbf{a}}\n}\n\\end{equation}\nbased on which we will develop our KdPCA in the sequel.\n\n\n\n\nSimilar to deriving KPCA from dual PCA \\cite{kpca}, our approach is first to transform $\\{\\mathbf{z}_i\\}_{i=1}^N$ from $\\mathbb{R}^D$ to a high-dimensional space $\\mathbb{R}^L$ (possibly with $L=\\infty$) by some nonlinear mapping function $\\bm{\\phi}(\\cdot)$, followed by removing the sample means of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)\\}$ from the corresponding transformed data; and subsequently, implementing dPCA on the centered transformed datasets to obtain the low-dimensional \\textcolor{black}{kernel} dPCs. 
Specifically, the sample covariance matrices of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}_{i=1}^m$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)\\}_{j=1}^n$ can be expressed as \n\\begin{align*}\t\n\t\\mathbf{C}_{xx}^{\\phi }&:=\\frac{1}{m}\\sum_{i=1}^m \\left(\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_{ x}\\right)\\left(\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_{ x}\\right)^\\top\\in\\mathbb{R}^{L\\times L}\\\\\n\t\\mathbf{C}_{yy}^{\\phi }&:=\\frac{1}{n}\\sum_{j=1}^n \\left(\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_{ y}\\right)\\left(\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_{ y}\\right)^\\top\\in\\mathbb{R}^{L\\times L}\n\\end{align*}\nwhere the $L$-dimensional vectors $\\bm{\\mu}_{x}:=(1\/m)\\sum_{i=1}^m\\bm{\\phi}(\\mathbf{x}_i)\n$ and $\\bm{\\mu}_{y}:=(1\/n)\\sum_{j=1}^n\\bm{\\phi}(\\mathbf{y}_j)\n$ are accordingly the sample means of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)\\}$.\nFor convenience, let\n $\\bm{\\Phi}(\\mathbf{Z}):=[\\bm{\\phi}(\\mathbf{x}_1)-\\bm{\\mu}_x,\\,\\cdots,\\,\\bm{\\phi}(\\mathbf{x}_m)-\\bm{\\mu}_x,\\,\\bm{\\phi}(\\mathbf{y}_1)-\\bm{\\mu}_y,\\,\\cdots,\\,\\bm{\\phi}(\\textcolor{black}{\\mathbf{y}_n})-\\bm{\\mu}_y]\\in\\mathbb{R}^{L\\times N}$.\n Upon replacing $\\{\\mathbf{x}_i\\}$ and $\\{\\mathbf{y}_j\\}$ in \\eqref{eq:ddpca} with $\\{\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_x\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_j)-\\bm{\\mu}_y\\}$, respectively, \\textcolor{black}{the kernel version of \\eqref{eq:ddpca} boils down to}\n\\textcolor{black}{\n\\begin{equation}\t\n\t\\label{eq:kdpca}\n\t\\underset{\\mathbf{a}\\in\\mathbb{R}^N}{\\max}\n\t\\quad\\frac{\\mathbf{a}^\\top \\bm{\\Phi}^\\top(\\mathbf{Z}) \\mathbf{C}_{xx}^{\\phi}\\bm{\\Phi}(\\mathbf{Z})\\mathbf{a}}{ \\mathbf{a}^\\top \\bm{\\Phi}^\\top(\\mathbf{Z})\\mathbf{C}_{y y}^{\\phi}\\bm{\\Phi}(\\mathbf{Z})\\mathbf{a}}.\n\\end{equation}}\n\n\nIn the sequel, \\eqref{eq:kdpca} will be further simplified by leveraging the so-termed `kernel trick' \\cite{RKHS}. \n\nTo start, define a kernel matrix $\\mathbf{K}_{xx}\\in\\mathbb{R}^{m\\times m}$ of $\\{\\mathbf{x}_i\\}$ whose $(i,\\,j)$-th entry is $\\kappa(\\mathbf{x}_i,\\,\\mathbf{x}_j):=\\left<\\bm{\\phi}(\\mathbf{x}_i),\\,\\bm{\\phi}(\\mathbf{x}_j)\\right>$ for $i,\\,j=1,\\,\\ldots,\\,m$, where $\\kappa(\\cdot)$ represents some kernel function. Matrix $\\mathbf{K}_{yy}\\in\\mathbb{R}^{n\\times n}$ of $\\{\\mathbf{y}_j\\}$ is defined likewise. Further, the $(i,\\,j)$-th entry of matrix $\\mathbf{K}_{xy}\\in\\mathbb{R}^{m\\times n}$ is $\\kappa(\\mathbf{x}_i,\\,\\mathbf{y}_j):=\\left<\\bm{\\phi}(\\mathbf{x}_i),\\,\\bm{\\phi}(\\mathbf{y}_j)\\right>$. Centering $\\mathbf{K}_{xx}$, $\\mathbf{K}_{yy}$, and $\\mathbf{K}_{xy}$ produces \n\\begin{align*}\n\\mathbf{K}_{xx}^c&:=\\mathbf{K}_{xx}-\\tfrac{1}{m}\\mathbf{1}_{m }\\mathbf{K}_{xx}-\\tfrac{1}{m}\\mathbf{K}_{xx}\\mathbf{1}_{m}+\\tfrac{1}{m^2}\\mathbf{1}_{m}\\mathbf{K}_{xx}\\mathbf{1}_{m}\\\\\n\\mathbf{K}_{yy}^c&:=\\mathbf{K}_{yy}-\\!\\tfrac{1}{n}\\mathbf{1}_{ n}\\mathbf{K}_{yy}-\\!\\tfrac{1}{n}\\mathbf{K}_{yy}\\mathbf{1}_{n}+\\!\\tfrac{1}{n^2}\\mathbf{1}_{n}\\mathbf{K}_{yy}\\mathbf{1}_{n}\\\\\n\\mathbf{K}_{xy}^c&:=\\mathbf{K}_{xy}-\\tfrac{1}{m}\\mathbf{1}_{ m}\\mathbf{K}_{xy}-\\tfrac{1}{n}\\mathbf{K}_{xy}\\mathbf{1}_{ n}+\\tfrac{1}{mn}\\mathbf{1}_{m}\\mathbf{K}_{xy}\\mathbf{1}_{n}\n\\end{align*}\nwith matrices $\\mathbf{1}_{m}\\in\\mathbb{R}^{m\\times m}$ and $\\mathbf{1}_n\\in\\mathbb{R}^{n\\times n}$ having all entries $1$. 
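\n\nAs a small implementation note (ours, not part of the original text; the helper name and the use of NumPy are our own choices), the three centering operations displayed above amount to a few lines of code:\n\\begin{verbatim}\nimport numpy as np\n\ndef center_kernel(K):\n    # centers an m x n kernel block as in the displayed formulas:\n    # K^c = K - (1\/m) 1_m K - (1\/n) K 1_n + (1\/(mn)) 1_m K 1_n\n    m, n = K.shape\n    Jm = np.ones((m, m)) \/ m\n    Jn = np.ones((n, n)) \/ n\n    return K - Jm @ K - K @ Jn + Jm @ K @ Jn\n\n# example with the degree-2 polynomial kernel k(z,z') = (z^T z')^2\nrng = np.random.default_rng(0)\nX, Y = rng.standard_normal((30, 4)), rng.standard_normal((20, 4))\nKxx_c = center_kernel((X @ X.T) ** 2)\nKyy_c = center_kernel((Y @ Y.T) ** 2)\nKxy_c = center_kernel((X @ Y.T) ** 2)\n\\end{verbatim}\nThese centered blocks are exactly the ingredients assembled into the matrix $\\mathbf{K}$ below.\n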
Based on those centered matrices, let\n\\begin{equation}\n\\label{eq:k}\n\\mathbf{K}:=\\left[\\begin{array}\n{cc}\n\\mathbf{K}_{xx}^c &\\mathbf{K}_{xy}^c\\\\\n(\\mathbf{K}_{xy}^c)^\\top &\\mathbf{K}_{yy}^c\n\\end{array}\n\\right]\\in\\mathbb{R}^{N\\times N}.\n\\end{equation}\nDefine further \n$\\mathbf{K}^x\\in\\mathbb{R}^{N\\times N}$ and $\\mathbf{K}^y\\in\\mathbb{R}^{N\\times N}$ with $(i,\\,j)$-th entries\n\\begin{subequations}\n\t\\label{eq:kxky}\n\t\\begin{align}\n\tK^x_{i,j}\\,:=\\left\\{\n\t\\begin{array}{ll}\n\tK_{i,j}\/m \\!&1\\le i \\le m\\\\\n\t~~~0 &m0$ is added to the diagonal entries of $\\mathbf{K}\\mathbf{K}^y$. Hence, our KdPCA formulation for $d=1$ is given by}\n\t\\begin{equation}\t\\label{eq:kdpcafm2}\n\\textcolor{black}{\t\t\\hat{\\mathbf{a}}:=\\arg\t\n\t\\underset{\\mathbf{a}\\in\\mathbb{R}^N}{\\max}\n\t\\quad\\frac{\\mathbf{a}^\\top \\mathbf{K}\\mathbf{K}^x\\mathbf{a}}{ \\mathbf{a}^\\top \\left(\\mathbf{K}\\mathbf{K}^y+\\epsilon \\mathbf{I}\\right)\\mathbf{a}}.}\n\t\\end{equation}\nAlong the lines of dPCA, the solution of KdPCA in \\eqref{eq:kdpcafm2} can be provided by \n\\begin{equation}\n\\label{eq:kdpcasol}\n\\textcolor{black}{\n\\left(\\mathbf{K}\\mathbf{K}^y+\\epsilon\\mathbf{I}\\right)^{-1}\\mathbf{K}\\mathbf{K}^x\\hat{\\mathbf{a}}=\\hat{\\lambda}\n\\hat{\\mathbf{a}}.}\n\\end{equation}\nThe optimizer $\\hat{\\mathbf{a}}$ coincides with the right eigenvector of $\\left(\\mathbf{K}\\mathbf{K}^y+\\epsilon\\mathbf{I}\\right)^{-1}\\mathbf{K}\\mathbf{K}^x$ corresponding to the largest eigenvalue $\\hat{\\lambda}=\\lambda_1$.\n\n When looking for $d$ dPCs, with $\\{\\mathbf{a}_i\\}_{i=1}^d$ collected as columns in $\\mathbf{A}:=[\\mathbf{a}_1\\,\\cdots \\,\\mathbf{a}_d]\\in\\mathbb{R}^{N\\times d}$, the KdPCA in \\eqref{eq:kdpcafm2} can be generalized to $d\\ge 2$ as \n\t\\begin{equation*}\n\t\t\t\\hat{\\mathbf{A}}:=\\arg\n\t\t\\underset{\\mathbf{A}\\in\\mathbb{R}^{N\\times d}}{\\max}\n\t\t\\quad {\\rm Tr} \\left[ \\left(\\mathbf{A}^\\top (\\mathbf{K}\\mathbf{K}^y+\\epsilon \\mathbf{I}\\right)\\mathbf{A})^{-1} \\mathbf{A}^\\top \\mathbf{K}\\mathbf{K}^x\\mathbf{A}\\right]\n\t\\end{equation*}\nwhose columns correspond to the $d$ right eigenvectors of $\\left(\\mathbf{K}\\mathbf{K}^y+\\epsilon\\mathbf{I}\\right)^{-1}\\mathbf{K}\\mathbf{K}^x$\n associated with the $d$ largest eigenvalues.\n Having found $\\hat{\\mathbf{A}}$, one can project the data $\\bm{\\Phi}(\\mathbf{Z})$ onto the obtained $d$ subspace vectors by $\\mathbf{K}\\hat{\\mathbf{A}}$. \n It is worth remarking that KdPCA can be performed in the high-dimensional feature space without explicitly forming and evaluating the nonlinear transformations. Indeed, this becomes possible by the `kernel trick' \\cite{RKHS}. \n The main steps of KdPCA are given in Alg. \\ref{alg:kdpca}. \n \nTwo remarks are worth making at this point.\n \t\\begin{remark}\n \t\tWhen the kernel function required to form $\\mathbf{K}_{xx}$, $\\mathbf{K}_{yy}$, and $\\mathbf{K}_{xy}$ is not given, one may use the multi-kernel learning method to automatically choose the right kernel function(s); see for example, \\cite{mkl2004,zhang2017going,tsp2017zwrg}. 
Specifically, one can presume $\\mathbf{K}_{xx}:=\\sum_{i=1}^P\\delta_i\\mathbf{K}_{xx}^i$, $\\mathbf{K}_{yy}:=\\sum_{i=1}^P\\delta_i\\mathbf{K}_{yy}^i$, and $\\mathbf{K}_{xy}:=\\sum_{i=1}^P\\delta_i\\mathbf{K}_{xy}^i$ in \\eqref{eq:kdpcafm2}, where $\\mathbf{K}_{xx}^i\\in\\mathbb{R}^{m\\times m}$, $\\mathbf{K}_{yy}^i\\in\\mathbb{R}^{n\\times n}$, and $\\mathbf{K}_{xy}^i\\in\\mathbb{R}^{m\\times n}$ are formed using the kernel function $\\kappa_i(\\cdot)$; and $\\{\\kappa_i(\\cdot)\\}_{i=1}^P$ are a preselected dictionary of known kernels, but $\\{\\delta_i\\}_{i=1}^P$ will be treated as unknowns to be learned along with $\\mathbf{A}$ in \\eqref{eq:kdpcafm2}.\n \t\\end{remark}\n \t\\begin{remark}\n \t\tIn the absence of background data, upon setting $\\{\\bm{\\phi}(\\mathbf{y}_j)=\\mathbf{0}\\}$, and $\\epsilon=1$ in \\eqref{eq:kdpcafm2}, matrix $\\left(\\mathbf{K}\\mathbf{K}^y+\\epsilon\\mathbf{I}\\right)^{-1}\\mathbf{K}\\mathbf{K}^x$ reduces to\n \t\t\\begin{equation*}\n \t\t\\mathbf{M}:=\\left[\n \t\t\\begin{array}\n \t\t{cc}\n \t\t(\\mathbf{K}_{xx}^c)^2 &\\mathbf{0}\\\\\n \t\t\\mathbf{0} &\\mathbf{0}\n \t\t\\end{array}\n \t\t\\right].\n \t\t\\end{equation*} After collecting the first $m$ entries of $\\hat{\\mathbf{a}}_i$ into $\\mathbf{w}_i\\in\\mathbb{R}^{m}$, \\eqref{eq:kdpcasol} suggests that $(\\mathbf{K}_{xx}^c)^2\\mathbf{w}_i=\\lambda_i\\mathbf{w}_i$, where $\\lambda_i$ denotes the $i$-th largest eigenvalue of $\\mathbf{M}$. \n \t\tClearly, $\\{\\mathbf{w}_i\\}_{i=1}^d$ can be viewed as the $d$ eigenvectors of $(\\mathbf{K}_{xx}^c)^2$ associated with their $d$ largest eigenvalues. Recall that KPCA finds the first $d$ principal eigenvectors of $\\mathbf{K}_{xx}^c$ \\cite{kpca}. Thus, KPCA is a special case of KdPCA, when no background data are employed.\n \t\\end{remark}\n \n\n\n\n\\section{Discriminative Analytics with\\\\ Multiple Background Datasets} \\label{sec:mdpca}\n\nSo far, we have presented discriminative analytics methods for two datasets. \nThis section presents their generalizations\nto cope with multiple (specifically, one target plus more than one background) datasets. \nSuppose that, in addition to the zero-mean target dataset $\\{\\mathbf{x}_i\\in\\mathbb{R}^D\\}_{i=1}^m$, we are also given $M\\ge 2$ centered background datasets $\\{\\mathbf{y}_j^k\\}_{j=1}^{n_k}$ for $k=1,\\,\\ldots,\\,M$. \nThe $M$ sets of background data $\\{\\mathbf{y}_j^k\\}_{k=1}^M$ contain latent background subspace vectors that are also present in $\\{\\mathbf{x}_i\\}$.\n\nLet $\\mathbf{C}_{xx}:=m^{-1}\\sum_{i=1}^{m}\\mathbf{x}_i\\mathbf{x}_i^\\top$ and $\\mathbf{C}_{yy}^k:=n_k^{-1}\\times $ $\n\\sum_{j=1}^{n_k}\\mathbf{y}_{j}^k(\\mathbf{y}^k_{j})^\\top$ be the corresponding sample covariance matrices. The goal here is to unveil the latent subspace vectors \n\\textcolor{black}{that are significant in representing the target data, but not any of the background data.}\nBuilding on the dPCA in \\eqref{eq:dpcam} for a single background dataset, it is meaningful to seek directions that maximize the variance of target data, while minimizing those of all background data. 
Formally, \nwe pursue the following optimization, that we term multi-background (M) dPCA here, for discriminative analytics of multiple datasets\n\\begin{equation}\\label{eq:gdpca}\n\\underset{\\mathbf{U}\\in\\mathbb{R}^{D\\times d}}{\\max}\t\\quad {\\rm Tr}\\bigg[\\bigg(\\sum_{k=1}^M\\omega_k\\mathbf{U}^\\top\\mathbf{C}_{yy}^{k}\\mathbf{U}\\bigg)^{-1}\\mathbf{U}^\\top\\mathbf{C}_{xx}\\mathbf{U}\\bigg]\n\\end{equation}\nwhere $\\{\\omega_k\\ge 0\\}_{k=1}^M$ with $\\sum_{k=1}^M \\omega_k=1$\nweight the variances of the $M$ projected background datasets. \n\n\nUpon defining $\\mathbf{C}_{yy}:=\\sum_{k=1}^{M}\\omega_k\\mathbf{C}_{yy}^k\n$, it is straightforward to see that \\eqref{eq:gdpca} reduces to \\eqref{eq:dpcam}. Therefore, one readily deduces \n that the optimal ${\\mathbf{U}}$ in \\eqref{eq:gdpca} can be obtained by taking the $d$ right eigenvectors of $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ that are associated with the $d$ largest eigenvalues. For implementation, the steps of MdPCA are presented in Alg. \\ref{alg:mdpca}.\n\n\\begin{algorithm}[t]\n\t\\caption{Multi-background dPCA.}\n\t\\label{alg:mdpca}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE {\\bfseries Input:}\n\t\tTarget data $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^m$ and background data $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^k\\}_{j=1}^{n_k}$ for $k=1,\\,\\ldots,\\,M$; weight hyper-parameters $\\{\\omega_k\\}_{k=1}^M$; number of dPCs $d$.\n\t\t\\STATE {\\bfseries Remove} the means from $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^k\\}_{k=1}^{M}$ to obtain $\\{\\mathbf{x}_i\\}$ and $\\{\\mathbf{y}_{j}^k\\}_{k=1}^{M}$. Form $\\mathbf{C}_{xx}$, $\\{\\mathbf{C}_{yy}^k\\}_{k=1}^M$, and $\\mathbf{C}_{yy}:=\\sum_{k=1}^{M}\\omega_k\\mathbf{C}^k_{yy}$.\n\t\t\\STATE {\\bfseries Perform} eigendecomposition\n\t\ton $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ to obtain the first $d$ right eigenvectors $\\{\\hat{\\mathbf{u}}_i\\}_{i=1}^d$.\n\t\t\\STATE {\\bfseries Output} $\\hat{\\mathbf{U}}:=[\\hat{\\mathbf{u}}_1\\,\\cdots\\,\\hat{\\mathbf{u}}_d]$.\n\t\t\\vspace{-0pt}\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n\\begin{remark}\n\t\\label{rmk:alpha}\t\n\tThe parameters $\\{\\omega_k\\}_{k=1}^M$ can be decided using two possible methods:\n\t i) spectral-clustering \\cite{ng2002spectral} to select a few sets of $\\{\\omega_k\\}$ yielding \\textcolor{black}{the most representative subspaces for projecting the target data across $\\{\\omega_k\\}$;}\n\t or ii) optimizing $\\{\\omega_k\\}_{k=1}^M$ jointly with $\\mathbf{U}$ in \\eqref{eq:gdpca}.\n\\end{remark}\n\n\n\n\nFor data belonging to nonlinear manifolds, kernel (K) MdPCA will be developed next.\n With some nonlinear function $\\phi(\\cdot)$, we obtain the transformed target data $\\{\\bm{\\phi}(\\mathbf{x}_i)\\in\\mathbb{R}^L\\}$ as well as background data $\\{\\bm{\\phi}(\\mathbf{y}_{j}^k)\\in\\mathbb{R}^L\\}$. 
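\n\nBefore kernelizing, it is worth noting that the linear MdPCA steps in Alg. \\ref{alg:mdpca} admit the following compact sketch (ours, not part of the original text; it assumes centered data matrices with one sample per row):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigh\n\ndef mdpca(X, Ys, weights, d=2):\n    # X: m x D centered target data; Ys: list of centered background datasets\n    Cxx = (X.T @ X) \/ len(X)\n    # weighted sum of background covariances, cf. C_yy := sum_k w_k C_yy^k\n    Cyy = sum(w * (Y.T @ Y) \/ len(Y) for w, Y in zip(weights, Ys))\n    evals, evecs = eigh(Cxx, Cyy)   # ascending generalized eigenvalues\n    return evecs[:, ::-1][:, :d]    # eigenvectors of the d largest ones\n\\end{verbatim}\nWith a single background dataset and unit weight, this reduces to the dPCA sketch given earlier.\n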
Letting $\\bm{\\mu}_x\\in\\mathbb{R}^L$ and $\\bm{\\mu}_{y}^k:=(1\/n_k)\\sum_{j=1}^{n_k}\\bm{\\phi}(\\mathbf{y}_{j}^k)\\in\\mathbb{R}^L$ denote the means of $\\{\\bm{\\phi}(\\mathbf{x}_i)\\}$ and $\\{\\bm{\\phi}(\\mathbf{y}_{j}^k)\\}$, respectively, one can form the corresponding covariance matrices $\\mathbf{C}_{xx}^{\\phi}\\in\\mathbb{R}^{L\\times L}$, and \n\\begin{equation*}\n\t\\mathbf{C}_{yy}^{\\phi,k}:=\\frac{1}{n_k}\\sum_{j=1}^{n_k}\\left(\\bm{\\phi}(\\mathbf{y}_{j}^k)-\\bm{\\mu}_{y}^k\\right)\\left(\\bm{\\phi}(\\mathbf{y}_{j}^k)-\\bm{\\mu}_{y}^k\\right)^\\top\\in\\mathbb{R}^{L\\times L}\n\t\\end{equation*}\nfor $k=1,\\,\\ldots,\\, M$.\nDefine the aggregate vector $\\mathbf{b}_i\\in\\mathbb{R}^L$ \n\\begin{equation*}\n\\label{eq:b}\n\\mathbf{b}_i:=\\left\\{\n\\begin{array}{ll}\n\\bm{\\phi}(\\mathbf{x}_i)-\\bm{\\mu}_x, & 1\\le i \\le m\\\\\n\\bm{\\phi}(\\mathbf{y}_{i-m}^1)-\\bm{\\mu}_{y}^1, & m< i \\le m+n_1\\\\\n~~~\\vdots & \\\\\n\\bm{\\phi}(\\mathbf{y}_{i-\\textcolor{black}{(N-n_M)}}^M)-\\bm{\\mu}_y^M,& N-n_{M}< i\\le N\n\\end{array}\n\\right.\n\\end{equation*}\nwhere $N:=m+\\sum_{k=1}^M n_k$, for $i=1,\\,\\ldots,\\,N$, and collect vectors $\\{\\mathbf{b}_i\\}_{i=1}^N$ as columns to form $\\mathbf{B}:=[\\mathbf{b}_1\\,\\cdots\\,\\mathbf{b}_N]\\in\\mathbb{R}^{L\\times N}$.\nUpon assembling dual vectors $\\{\\mathbf{a}_i\\in\\mathbb{R}^N\\}_{i=1}^d$ to form $\\mathbf{A}:=[\\mathbf{a}_1\\,\\cdots \\, \\mathbf{a}_d]\\in\\mathbb{R}^{N\\times d}$,\nthe kernel version of \\eqref{eq:gdpca} can be obtained as\n\t\\begin{equation*}\n\n\\underset{\\mathbf{A}\\in\\mathbb{R}^{N\\times d}}{\\max} {\\rm Tr}\\bigg[\\bigg(\\mathbf{A}^\\top\\mathbf{B}^\\top\\sum_{k=1}^M\\omega_k\\mathbf{C}^{\\phi,k}_{yy}\\mathbf{B}\\mathbf{A}\\hspace{-0.1cm}\\bigg)^{-1}\\mathbf{A}^\\top\\mathbf{B}^\\top\\mathbf{C}^{\\phi} _{xx}\\mathbf{B}\\mathbf{A}\\bigg].\n\t\\end{equation*}\n\n\n\n\n\nConsider now kernel matrices $\\mathbf{K}_{xx}\\in\\mathbb{R}^{m\\times m}$ and $\\mathbf{K}_{kk}\\in\\mathbb{R}^{n_k\\times n_k}$, whose $(i,\\,j)$-th entries are $\\kappa(\\mathbf{x}_i,\\,\\mathbf{x}_j)$ and $\\kappa(\\mathbf{y}_{i}^k,\\,\\mathbf{y}_{j}^k)$, respectively, for $k=1,\\,\\ldots,\\,M$. \n Furthermore, matrices $\\mathbf{K}_{xk}\\in\\mathbb{R}^{m\\times n_k}$, and $\\mathbf{K}_{lk}\\in\\mathbb{R}^{n_l\\times n_k}$ are defined with their corresponding $(i,\\,j)$-th elements $\\kappa(\\mathbf{x}_{i},\\,\\mathbf{y}_{j}^k)$ and $\\kappa(\\mathbf{y}_{i}^{l},\\,\\mathbf{y}_{j}^k)$, for $l=1,\\,\\ldots,\\, k-1$ and $k=1,\\,\\ldots,\\,M$.\nWe subsequently center those matrices to obtain $\\mathbf{K}_{xx}^c$ and \n\\begin{align*}\n\\mathbf{K}_{kk}^c&:=\\mathbf{K}_{kk}-\\tfrac{1}{n_k}\\mathbf{1}_{n_k}\\mathbf{K}_{kk}-\\tfrac{1}{n_k}\\mathbf{K}_{kk}\\mathbf{1}_{n_k}+\\tfrac{1}{n_k^2}\\mathbf{1}_{n_k}\\mathbf{K}_{kk}\\mathbf{1}_{n_k}\\\\\n\\mathbf{K}_{xk}^c&:=\\mathbf{K}_{xk}-\\tfrac{1}{m}\\mathbf{1}_{m }\\mathbf{K}_{xk}-\\tfrac{1}{n_k}\\mathbf{K}_{xk}\\mathbf{1}_{n_k}+\\tfrac{1}{mn_k}\\mathbf{1}_{m}\\mathbf{K}_{xk}\\mathbf{1}_{n_k}\\\\\n\\mathbf{K}_{lk}^c&:=\\mathbf{K}_{lk}-\\!\\tfrac{1}{n_l}\\mathbf{1}_{n_l}\\mathbf{K}_{lk}-\\!\\tfrac{1}{n_{k}}\\mathbf{K}_{lk}\\mathbf{1}_{n_k}+\\!\\tfrac{1}{n_ln_k}\\mathbf{1}_{n_l}\\mathbf{K}_{lk}\\mathbf{1}_{n_k}\n\\end{align*}\nwhere $\\mathbf{1}_{n_k}\\in\\mathbb{R}^{n_k\\times n_k}$ and $\\mathbf{1}_{n_{l}}\\in\\mathbb{R}^{n_{l}\\times n_{l}}$ are all-one matrices. 
With $\\mathbf{K}^x$ as in \\eqref{eq:kx}, consider the $N\\times N$ matrix\n\\begin{equation}\n\\label{eq:km}\n\\mathbf{K}:=\\left[\\begin{array}\n{llll}\n\\mathbf{K}_{xx}^c & \\mathbf{K}_{x1}^c & \\cdots & \\mathbf{K}_{xM}^c \\\\\n(\\mathbf{K}_{x1}^c)^\\top & \\mathbf{K}_{11}^c & \\cdots & \\mathbf{K}_{1M}^c \\\\\n\\quad\\vdots & \\quad\\vdots & \\ddots & \\quad\\vdots \\\\\n(\\mathbf{K}_{xM}^c)^\\top & (\\mathbf{K}_{1M}^c)^\\top & \\cdots & \\mathbf{K}_{MM}^c\n\\end{array}\n\\right]\n\\end{equation}\nand $\\mathbf{K}^k\\in\\mathbb{R}^{N\\times N}$ with $(i,\\,j)$-th entry\n\\begin{equation}\nK^k_{i,j}:=\\left\\{\\!\\!\\begin{array}{cl}\nK_{i,j}\/n_k, \\!&\\text{if}~ m+\\sum_{\\ell=1}^{n_{k-1}}n_{\\ell}< i \\le m+\\sum_{\\ell=1}^{n_{k}}n_{\\ell}\\\\\n0, &\\mbox{otherwise}\n\\end{array}\n\\right.\\label{eq:kk}\n\\end{equation}\nfor $k=1,\\,\\ldots,\\,M$. Adopting the regularization in \\eqref{eq:kdpcafm2}, our KMdPCA finds\n\t\\begin{equation*}\n\\hat{\\mathbf{A}}:=\\arg\\underset{\\mathbf{A}\\in\\mathbb{R}^{N\\times d}}{\\max}{\\rm Tr}\\bigg[\\bigg(\\mathbf{A}^\\top\\Big(\\mathbf{K}\\sum_{k=1}^M\\mathbf{K}^k+\\epsilon\\mathbf{I}\\Big)\\mathbf{A}\\bigg)^{-1}\\!\\!\\mathbf{A}^\\top\\mathbf{K}\\mathbf{K}^x\\mathbf{A}\\bigg]\n\t\\end{equation*}\nsimilar to (K)dPCA, whose solution comprises the right eigenvectors associated with the \\textcolor{black}{first $d$} largest eigenvalues in\n\t\\begin{equation}\n\t\t\\label{eq:kmdpcasol}\n\t\t\\bigg(\\mathbf{K}\\sum_{k=1}^M\\mathbf{K}^k+\\epsilon\\mathbf{I}\\bigg)^{-1}\\mathbf{K}\\mathbf{K}^x\\hat{\\mathbf{a}}_i=\\hat{\\lambda}_i\\hat{\\mathbf{a}}_i.\n\t\\end{equation}\n\t\n\n\n\nFor implementation, KMdPCA is presented in Alg. \\ref{alg:mkdpca}.\n\\begin{algorithm}[t]\n\t\\caption{Kernel multi-background dPCA.}\n\t\\label{alg:mkdpca}\n\t\\begin{algorithmic}[1]\n\t\t\\STATE {\\bfseries Input:}\n\t\tTarget data $\\{\\mathbf{x}_i\\}_{i=1}^m$ and background data $\\{\\mathbf{y}_{j}^k\\}_{j=1}^{n_k}$ for $k=1,\\,\\ldots,\\,M$; number of dPCs $d$; kernel function $\\kappa(\\cdot)$; weight coefficients $\\{\\omega_k\\}_{k=1}^M$; constant $\\epsilon$.\n\t\t\\STATE {\\bfseries Construct} $\\mathbf{K}$ using \\eqref{eq:km}. Build $\\mathbf{K}^x$ and $\\{\\mathbf{K}^k\\}_{k=1}^M$ via \\eqref{eq:kx} and \\eqref{eq:kk}.\n\t\t\\STATE {\\bfseries Solve} \\eqref{eq:kmdpcasol} to obtain the first $d$ eigenvectors $\\{\\hat{\\mathbf{a}}_i\\}_{i=1}^d$.\n\t\t\\STATE {\\bfseries Output} $\\hat{\\mathbf{A}}:=[\\hat{\\mathbf{a}}_1\\,\\cdots\\,\\hat{\\mathbf{a}}_d]$. \n\t\t\\vspace{-0pt}\n\t\\end{algorithmic}\n\\end{algorithm} \n\n\n\n\\begin{remark}\nWe can verify that PCA, KPCA, dPCA, KdPCA, MdPCA, and KMdPCA incur computational complexities $\\mathcal{O}(mD^2)$, $\\mathcal{O}(m^2D)$, $\\mathcal{O}(\\max(m,n)D^2)$, $\\mathcal{O}(\\max(m^2,n^2)D)$, $\\mathcal{O}(\\max(m,\\bar{n} )D^2)$, and $\\mathcal{O}(\\max(m^2,\\bar{n}^2 )D)$, respectively, where $\\bar{n}:=\\max_k~\\{n_k\\}_{k=1}^M$. \nIt is also not difficult to check that the computational complexity of \n\tforming $\\mathbf{C}_{xx}$, $\\mathbf{C}_{yy}$, $\\mathbf{C}_{yy}^{-1}$, and performing the eigendecomposition on $\\mathbf{C}_{yy}^{-1}\\mathbf{C}_{xx}$ is $\\mathcal{O}(mD^2)$, $\\mathcal{O}(nD^2)$, $\\mathcal{O}(D^3)$, and $\\mathcal{O}(D^3)$, respectively. As the number of data vectors ($m,\\,n$) is much larger than their dimensionality $D$, when performing dPCA in the primal domain, it follows readily that dPCA incurs complexity $\\mathcal{O}(\\max(m,n)D^2)$. 
Similarly, the computational complexities of the other algorithms can be checked\nEvidently, when $\\min(m,n)\\gg D $ or $\\min(m,\\underline{n})\\gg D$ with $\\underline{n}:=\\min_k\\,\\{n_k\\}_{k=1}^M$,\n\tdPCA and MdPCA are computationally more attractive than KdPCA and KMdPCA. On the other hand, KdPCA and KMdPCA become more appealing, when $D\\gg \\max(m,n)$ or $D\\gg \\max(m,\\bar{n})$.\n\tMoreover, the computational complexity of cPCA is $\\mathcal{O}(\\max (m,n)D^2L)$, where $L$ denotes the number of $\\alpha$'s candidates. Clearly, relative to dPCA, cPCA is computationally more expensive when $DL> \\max(m,n)$.\n\\end{remark}\n\n\n\n\n\\section{Numerical Tests}\\label{sec:simul}\n\nTo evaluate the performance of our proposed approaches for discriminative analytics, we carried out a number of numerical tests using several synthetic and real-world datasets, a sample of which are reported in this section. \n\n\\subsection{dPCA tests}\n\n\n\n Semi-synthetic target \\textcolor{black}{$\\{\\accentset{\\circ}{\\mathbf{x}}_i\\in\\mathbb{R}^{784}\\}_{i=1}^{2,000}$} and background images \\textcolor{black}{$\\{\\accentset{\\circ}{\\mathbf{y}}_j\\in\\mathbb{R}^{784}\\}_{j=1}^{3,000}$} were obtained by superimposing images from the MNIST \\footnote{Downloaded from http:\/\/yann.lecun.com\/exdb\/mnist\/.} and CIFAR-10 \\cite{cifar10} datasets.\n Specifically, the target data $\\{\\mathbf{x}_i\\in\\mathbb{R}^{784}\\}_{i=1}^{2,000}$ were generated using $2,000$ handwritten digits 6 and 9 (1,000 for each) of size $28\\times 28$,\n superimposed with $2,000$ frog images from the CIFAR-10 database \\cite{cifar10} followed by removing the sample mean from each data point; see Fig. \\ref{fig:targ}. The raw $32\\times 32$ frog images were converted into grayscale, and randomly cropped to $28\\times 28$. The zero-mean background data $\\{\\mathbf{y}_j\\in\\mathbb{R}^{784}\\}_{j=1}^{3,000}$ were constructed using $3,000$ cropped frog images, which were randomly chosen from the remaining frog images in the CIFAR-10 database.\n\n \\begin{figure}[t]\n \t\\centering \n \t\\includegraphics[scale=0.64]{targ.pdf} \n \t\\caption{\\small{Superimposed images.}}\n \t\\label{fig:targ}\n \\end{figure}\n \nThe dPCA Alg. \\ref{alg:dpca} was performed on $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}$ with $d=2$. PCA was implemented on $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ only. The first two PCs and dPCs are presented in the left and right panels of Fig. \\ref{fig:digits}, respectively. Clearly, dPCA reveals the discriminative information of the target data describing digits $6$ and $9$ relative to the background data, enabling successful discovery of the digit $6$ and $9$ subgroups. On the contrary, PCA captures only the patterns that correspond to the generic background rather than those associated with the digits $6$ and $9$. \n\\textcolor{black}{To further assess the performance of dPCA and PCA, K-means is carried out using the resulting low-dimensional representations of the target data. The clustering performance is evaluated in terms of two metrics: clustering error and scatter ratio. 
The clustering error is defined as the ratio of the number of incorrectly clustered data vectors over $m$.\n\tScatter ratio verifying cluster separation is defined as $S_t\/\\sum_{i=1}^2S_i$, where $S_t$ and $\\{S_i\\}_{i=1}^2$ denote the total scatter value and the within cluster scatter values, given by $S_t:=\\sum_{j=1}^{2,000}\\|\\hat{\\mathbf{U}}^\\top\\mathbf{x}_j\\|_2^2$ and $\\{S_i:=\\sum_{j\\in \\mathcal{C}_i}\\|\\hat{\\mathbf{U}}^\\top\\mathbf{x}_j-\\hat{\\mathbf{U}}^\\top\\sum_{k\\in{\\mathcal{C}}_i}\\mathbf{x}_k\\|_2^2\\}_{i=1}^2$, respectively, with $\\mathcal{C}_i$ representing the set of data vectors belonging to cluster $i$. Table \\ref{tab:cluster} reports the clustering errors and scatter ratios of dPCA and PCA under different $d$ values. Clearly, dPCA exhibits lower clustering error and higher scatter ratio.}\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.53]{digits.pdf} \t\n\t\\caption{\\small{dPCA versus PCA on semi-synthetic images.}}\n\t\\label{fig:digits}\n\\end{figure}\n\n\n\n\n\\renewcommand{\\arraystretch}{1.5} \n\\begin{table}[tp]\t\n\\centering\n\t\\fontsize{8.5}{8}\\selectfont\n\t\t\\textcolor{black}{\\caption{\\textcolor{black}{Performance comparison between dPCA and PCA.}}\n\t\\label{tab:cluster}\n\t\\vspace{.8em}\n\t\\begin{tabular}{|c|c|c|c|c|}\n\t\t\\hline\n\t\t\\multirow{2}{*}{$d$}&\n\t\t\\multicolumn{2}{c|}{Clustering error}&\\multicolumn{2}{c|}{ Scatter ratio}\\cr\\cline{2-5}\n\t\t&dPCA&PCA&dPCA&PCA\\cr\n\t\t\\hline\n\t\t\\hline\n\t\t1&0.1660&0.4900&2.0368&1.0247\\cr\\hline\n\t\t2&0.1650&0.4905&1.8233&1.0209\\cr\\hline\n\t\t3&0.1660&0.4895&1.6719&1.1327\\cr\\hline\n\t\t4&0.1685&0.4885&1.4557&1.1190\\cr\\hline\n\t\t5&0.1660&0.4890&1.4182&1.1085\\cr\\hline\n\t\t10&0.1680&0.4885&1.2696&1.0865\\cr\\hline\n\t\t50&0.1700&0.4880&1.0730&1.0568\\cr\\hline\n\t 100&0.1655&0.4905&1.0411&1.0508\\cr\n\t\t\\hline\n\t\\end{tabular}}\n\\end{table}\n \n\n\nReal protein expression data \\cite{mice}\n were also used to evaluate the ability of dPCA to discover subgroups in real-world conditions. Target data $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\in\\mathbb{R}^{77}\\}_{i=1}^{267}$ contained $267$ data vectors, each collecting $77$ protein expression measurements of a mouse having Down Syndrome disease \\cite{mice}. \nIn particular, the first $135$ data points $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^{135}$ recorded protein expression measurements of $135$ mice with drug-memantine treatment, while the remaining $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=136}^{267}$ collected measurements of $134$ mice without such treatment. Background data $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\in\\mathbb{R}^{77}\\}_{j=1}^{135}$ on the other hand, comprised such measurements from $135$ healthy mice, which likely exhibited similar natural variations (due to e.g., age and sex) as the target mice, but without the differences that result from the Down Syndrome disease. \n\n\n\nWhen performing cPCA on $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}$ and $\\{\\accentset{\\circ}{\\mathbf{y}}_j\\}$, four $\\alpha$'s were selected from $15$ logarithmically-spaced values between $10^{-3}$ and $10^{3}$ via the spectral clustering method presented in \\cite{2017cpca}. \n\n\nExperimental results are reported in Fig. \\ref{fig:mice} with red circles and black diamonds representing sick mice with and without treatment, respectively. 
Evidently, when PCA is applied, the low-dimensional representations of the protein measurements from mice with and without treatment are distributed similarly. In contrast, the low-dimensional representations cluster two groups of mice successfully when dPCA is employed. At the price of runtime (about $15$ times more than dPCA), cPCA with well {tuned} parameters ($\\alpha=3.5938$ and $27.8256$) can also separate the two groups.\n\n\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.735]{mice.pdf} \n\t\\vspace{-4pt}\n\t\\caption{\\small{Discovering subgroups in mice protein expression data.}}\n\t\\label{fig:mice}\n\t\\vspace{-4pt}\n\\end{figure}\n\n\\subsection{KdPCA tests}\\label{sec:simu2}\n\n\n\n \\begin{figure}[t]\n \t\\centering \n \t\\includegraphics[scale=0.61]{kdpca_target.pdf} \n \t\\caption{\\small{Target data dimension distributions with $x_{i,j}$ representing the $j$-th entry of $\\mathbf{x}_i$ for $j=1,\\, \\ldots,\\,4$ and $i=1,\\, \\ldots,\\,300$.}}\n \t\\label{fig:kdpca_target}\n \\end{figure}\n \n In this subsection, our KdPCA is evaluated \n using synthetic and real data.\nBy adopting the procedure described in \\cite[p. 546]{hastie2009elements}, we generated target data $\\{{\\mathbf{x}}_i:=[x_{i,1}\\, x_{i,2}\\,x_{i,3}\\,x_{i,4}]^\\top\\}_{i=1}^{300}$ and background data $\\{{\\mathbf{y}}_j\\in\\mathbb{R}^4\\}_{j=1}^{150}$. In detail, $\\{[x_{i,1}\\,x_{i,2}]^{\\top}\\}_{i=1}^{300}$ were sampled uniformly from two circular concentric clusters with corresponding radii $1$ and $6$ shown in the left panel of Fig. \\ref{fig:kdpca_target}; and $\\{[x_{i,3}\\,x_{i,4}]^{\\top}\\}_{i=1}^{300}$ were uniformly drawn from a circle with radius $10$; see Fig. \\ref{fig:kdpca_target} (right panel) for illustration.\n The first and second two dimensions of $\\{{\\mathbf{y}}_j\\}_{j=1}^{150}$ were uniformly sampled from two concentric circles with corresponding radii of $4$ and $10$. All data points in $\\{{\\mathbf{x}}_i\\}$ and $\\{{\\mathbf{y}}_j\\}$ were corrupted with additive noise sampled independently from $\\mathcal{N}(\\mathbf{0},\\,0.1\\mathbf{I})$. \n To unveil the specific cluster structure of the target data relative to the background data, \n Alg. \\ref{alg:kdpca} was run with $\\epsilon=10^{-3}$ and using the degree-$2$ polynomial kernel $\\kappa(\\mathbf{z}_i,\\mathbf{z}_j)=(\\mathbf{z}_i^\\top\\mathbf{z}_j )^2$. Competing alternatives including PCA, KPCA, cPCA, kernel (K) cPCA \\cite{2017cpca}, and dPCA were also implemented. \n Further, KPCA and KcPCA shared the kernel function with KdPCA. Three different values of $\\alpha$ were automatically chosen for cPCA \\cite{2017cpca}.\n The parameter $\\alpha$ of KcPCA was set as $1$, $10$, and $100$.\n \n\nFigure \\ref{fig:kdpca_syn} depicts the first two dPCs, cPCs, and PCs of the aforementioned dimensionality reduction algorithms. Clearly, only KdPCA\nsuccessfully reveals the two unique clusters of $\\{{\\mathbf{x}}_i\\}$ relative to $\\{{\\mathbf{y}}_j\\}$. 
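\n\nFor concreteness, the synthetic target and background data just described can be generated along the following lines (a sketch reflecting our reading of the description above; the precise sampling conventions are our assumptions):\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(1)\n\ndef ring(num, radius, var=0.1):\n    # points on a circle of the given radius, plus N(0, var I) noise\n    theta = rng.uniform(0.0, 2.0 * np.pi, num)\n    pts = radius * np.c_[np.cos(theta), np.sin(theta)]\n    return pts + np.sqrt(var) * rng.standard_normal((num, 2))\n\n# target: dims 1-2 from concentric clusters of radii 1 and 6,\n#         dims 3-4 from a single circle of radius 10\nX = np.hstack([np.vstack([ring(150, 1.0), ring(150, 6.0)]), ring(300, 10.0)])\n# background: dims 1-2 from a circle of radius 4, dims 3-4 from a circle of radius 10\nY = np.hstack([ring(150, 4.0), ring(150, 10.0)])\n\\end{verbatim}\n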
\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.78]{kdpca_syn.pdf} \n\t\t\t\\vspace{5pt}\n\t\\caption{\\small{Discovering subgroups in nonlinear synthetic data.}}\n\t\\label{fig:kdpca_syn}\n\\end{figure}\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.74]{kdpca_real.pdf} \n\t\\caption{\\small{Discovering subgroups in MHealth data.}}\n\t\\label{fig:kdpca_real}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.71]{kdpca_real_test2.pdf} \n\t\\caption{\\small{Distinguishing between waist bends forward and cycling.}}\n\t\\label{fig:kdpca_real_test2} \n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.7]{kdpca_real_add.pdf} \n\t\\caption{\\small{Face recognization by performing KdPCA.}}\n\t\\label{fig:kdpca_real_add} \n\\end{figure}\n\n \nKdPCA was tested \nin realistic settings using the real Mobile (M) Health data \\cite{mhealth}. This dataset consists of sensor (e.g., gyroscopes, accelerometers, and EKG) measurements from volunteers conducting a series of physical activities. In the first experiment, $200$ target data $\\{{\\mathbf{x}}_i\\in\\mathbb{R}^{23}\\}_{i=1}^{200}$ were used, each of which recorded $23$ sensor measurements from one volunteer performing two different physical activities, namely laying down and having frontal elevation of arms ($100$ data points correspond to each activity). Sensor measurements from the same volunteer standing still were utilized for the $100$ background data points $\\{{\\mathbf{y}}_j\\in\\mathbb{R}^{23}\\}_{j=1}^{100}$. \nFor KdPCA, KPCA, and KcPCA algorithms, the Gaussian kernel \n with bandwidth $5$\nwas used. Three different values for the parameter $\\alpha$ in cPCA were automatically selected from a list of $40$ logarithmically-spaced values between $10^{-3}$ and $10^{3}$, whereas $\\alpha$ in KcPCA was set to $1$ \\cite{2017cpca}.\n\n\n\nThe first two dPCs, cPCs, and PCs of KdPCA, dPCA, KcPCA, cPCA, KPCA, and PCA are reported in Fig. \\ref{fig:kdpca_real}. It is self-evident that the two activities evolve into two separate clusters in the plots of KdPCA and KcPCA. On the contrary, due to the nonlinear data correlations, the other alternatives fail to distinguish the two activities.\n\n \nIn the second experiment, the target data were formed with sensor measurements of one volunteer executing waist bends forward and cycling. The background data were collected from the same volunteer standing still. The Gaussian kernel with bandwidth $40$ was used for KdPCA and KPCA, while the second-order polynomial kernel $\\kappa(\\mathbf{z}_i,\\,\\mathbf{z}_j)=(\\mathbf{z}_i^\\top\\mathbf{z}_j+3)^2$ was employed for KcPCA. The first two dPCs, cPCs, and PCs of simulated schemes are depicted\n in Fig. \\ref{fig:kdpca_real_test2}. \nEvidently, KdPCA outperforms its competing alternatives in discovering the two physical activities of the target data.\n\n\\textcolor{black}{To test the scalability of our developed schemes, the Extended Yale-B (EYB) face image dataset \\cite{yaleb} was adopted to test \n\t the clustering performance of KdPCA, KcPCA, and KPCA. EYB database contains frontal face images of $38$ individuals, each having about around $65$ color images of $192\\times 168$ ($32,256$) pixels. The color images of three individuals ($60$ images per individual) were converted into grayscale images and vectorized to obtain $180$ vectors of size $32,256\\times 1$. 
The $120$ vectors from two individuals (clusters) comprised the target data, and the remaining $60$ vectors formed the background data. A Gaussian kernel with bandwidth $150$ was used for KdPCA, KcPCA, and KPCA. Figure \\ref{fig:kdpca_real_add} reports the first two dPCs, cPCs, and PCs of KdPCA, KcPCA (with 4 different values of $\\alpha$), and KPCA, with black circles and red stars representing the two different individuals from the target data. K-means is carried out using the resulting $2$-dimensional representations of the target data. The clustering errors of KdPCA, KcPCA with $\\alpha=1$, KcPCA with $\\alpha=10$, KcPCA with $\\alpha=50$, KcPCA with $\\alpha=100$, and KPCA are $0.1417$, $0.7$, $0.525$, $0.275$, $0.2833$, and $0.4167$, respectively. Evidently, the face images of the two individuals can be better recognized with KdPCA than with other methods.}\n\n\\subsection{MdPCA tests}\n \n\t\nThe ability of the MdPCA Alg. \\ref{alg:mdpca} to perform discriminative dimensionality reduction \nis examined here with two background datasets. \n For simplicity, the involved weights were set to $\\omega_1=\\omega_2=0.5$.\t\n \n In the first experiment,\n\ttwo clusters of $15$-dimensional data points were generated for the target data $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\in\\mathbb{R}^{15}\\}_{i=1}^{300}$ ($150$ for each). \nSpecifically, the first $5$ dimensions of $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^{150}$ and $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=151}^{300}$ were sampled from $\\mathcal{N}(\\mathbf{0},\\,\\mathbf{I})$ and $\\mathcal{N}(8\\mathbf{1},\\,2\\mathbf{I})$, respectively. The second and last $5$ dimensions of $\\{\\accentset{\\circ}{\\mathbf{x}}_i\\}_{i=1}^{300}$ were drawn from the normal distributions $\\mathcal{N}(\\mathbf{1},\\,10\\mathbf{I})$ and $\\mathcal{N}(\\mathbf{1},\\,20\\mathbf{I})$, respectively. The top right plot of Fig. \\ref{fig:mdpca_syn} shows that performing PCA cannot resolve the two clusters.\nThe first, second, and last $5$ dimensions of the first background dataset $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^1\\in\\mathbb{R}^{15}\\}_{j=1}^{150}$ were sampled from $\\mathcal{N}(\\mathbf{1},\\,2\\mathbf{I})$, $\\mathcal{N}(\\mathbf{1},\\,10\\mathbf{I})$, and $\\mathcal{N}(\\mathbf{1},\\,2\\mathbf{I})$, respectively, while those of the second background dataset $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^2\\in\\mathbb{R}^{15}\\}_{j=1}^{150}$ were drawn from $\\mathcal{N}(\\mathbf{1},\\,2\\mathbf{I})$, $\\mathcal{N}(\\mathbf{1},\\,2\\mathbf{I})$, and $\\mathcal{N}(\\mathbf{1},\\,20\\mathbf{I})$. \nThe two plots at the bottom of Fig. \\ref{fig:mdpca_syn} depict the first two dPCs of dPCA implemented with a single background dataset. \nEvidently, MdPCA can discover the two clusters in the target data by leveraging \n the two background datasets. 
\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.66]{mdpca_syn.pdf} \n\t\\vspace{-5pt}\n\t\\caption{\\small{Clustering structure by MdPCA using synthetic data.}}\n\t\\label{fig:mdpca_syn}\n\t\\vspace{-5pt}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.69]{mdpca_semisyn.pdf} \n\t\\vspace{-8pt}\n\t\\caption{\\small{Clustering structure by MdPCA using semi-synthetic data.}}\n\t\\label{fig:mdpca_semisyn}\n\t\\vspace{-10pt}\n\\end{figure}\n\n \nIn the second experiment, the\n target data $\\{\\accentset{\\circ}{\\mathbf{x}}_{i}\\in\\mathbb{R}^{784}\\}_{i=1}^{400}$ were obtained using $400$ handwritten digits $6$ and $9$ ($200$ for each) of size $28\\times 28$ from the MNIST dataset superimposed with $400$ resized `girl' images from the CIFAR-100 dataset \\cite{cifar10}. \nThe first $392$ dimensions of the first background dataset $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^1\\in\\mathbb{R}^{784}\\}_{j=1}^{200}$ and the last $392$ dimensions of the other background dataset $\\{\\accentset{\\circ}{\\mathbf{y}}_{j}^2\\in\\mathbb{R}^{784}\\}_{j=1}^{200}$ correspond to the first and last $392$ features of $200$ cropped girl images, respectively. The remaining dimensions of both background datasets were set zero.\n Figure \\ref{fig:mdpca_semisyn} presents the obtained (d)PCs of MdPCA, dPCA, and PCA, with red stars and black diamonds depicting digits $6$ and $9$, respectively. \n PCA and dPCA based on a single background dataset (the bottom two plots in Fig. \\ref{fig:mdpca_semisyn}) reveal that the two clusters of data follow a similar distribution in the space spanned by the first two PCs. The separation between the two clusters becomes clear when the MdPCA is employed.\n \n\n \n \n\n\\subsection{KMdPCA tests}\n\\begin{figure}[t]\n\t\\centering \n\t\\includegraphics[scale=0.7]{kmdpca_syn.pdf} \n\t\\vspace{-10pt}\n\t\\caption{\\small{The first two dPCs obtained by Alg. \\ref{alg:mkdpca}.}}\n\t\\label{fig:kmdpca}\n\t\\vspace{-13pt}\n\\end{figure}\n\n\nAlgorithm \\ref{alg:mkdpca} \\textcolor{black}{with $\\epsilon=10^{-4}$} is examined for dimensionality reduction using simulated data and compared against MdPCA, KdPCA, dPCA, and PCA.\nThe first two dimensions of the target data $\\{\\mathbf{x}_i\\in \\mathbb{R}^6\\}_{i=1}^{150}$ and $\\{\\mathbf{x}_i\\}_{i=151}^{300}$ were generated from two circular concentric clusters with respective radii of $1$ and $6$. The remaining four dimensions of the target data $\\{\\mathbf{x}_i\\}_{i=1}^{300}$ were sampled from two concentric circles with radii of $20$ and $12$, respectively. Data $\\{\\mathbf{x}_i\\}_{i=1}^{150}$ and $\\{\\mathbf{x}_i\\}_{i=151}^{300}$ corresponded to two different clusters. \nThe first, second, and last two dimensions of one background dataset $\\{\\mathbf{y}_{j}^1\\in\\mathbb{R}^6\\}_{j=1}^{150}$ were sampled from three concentric circles with corresponding radii of $3$, $3$, and $12$. \nSimilarly, three concentric circles with radii $3$, $20$, and $3$ were used for generating the other background dataset $\\{\\mathbf{y}_{j}^2\\in\\mathbb{R}^6\\}_{j=1}^{150}$. Each datum in $\\{\\mathbf{x}_i\\}$, $\\{\\mathbf{y}_{j}^1\\}$, and $\\{\\mathbf{y}_{j}^2\\}$ was corrupted by additive noise $ \\mathcal{N}(\\mathbf{0}, \\,0.1\\mathbf{I})$. When running KMdPCA, the degree-$2$ polynomial kernel used in Sec. 
\\ref{sec:simu2} was adopted, and weights were set as $\\omega_1=\\omega_2=0.5$.\n\n\nFigure \\ref{fig:kmdpca} depicts the first two dPCs of KMdPCA, MdPCA, KdPCA and dPCA, as well as the first two PCs of (K)PCA.\nIt is evident that only KMdPCA is able to discover the two clusters in the target data.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Concluding Summary}\\label{sec:concl}\nIn diverse practical setups, one is interested in extracting, visualizing, and leveraging the unique low-dimensional features of one dataset relative to a few others. This paper put forward a novel framework, that is termed discriminative (d) PCA, for performing discriminative analytics of multiple datasets. Both linear, kernel, and multi-background models were pursued.\n In contrast with existing alternatives, dPCA is demonstrated to be optimal under certain assumptions. \nFurthermore, dPCA is \\textcolor{black}{parameter free}, and requires only one generalized eigenvalue decomposition. \nExtensive tests using both synthetic and real data corroborated the efficacy of our proposed approaches relative to relevant prior works. \n\nSeveral directions open up for future research: i) distributed and privacy-aware (MK)dPCA implementations to cope with large amounts of high-dimensional data; ii) robustifying (MK)dPCA to outliers; \n and iii) \n graph-aware (MK)dPCA generalizations exploiting additional priors of the data.\n\n\\section*{Acknowledgements}\nThe authors would like to thank Professor Mati Wax for pointing out an error in an early draft of this paper.\n\n\n\\bibliographystyle{IEEEtranS}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMost of our experience (outside perturbation theory) with quantum mechanics concerns nonrelativistic quantum systems. This may be due to the fact that, as yet, no results on specific models of relativistic quantum field theory (rqft) in (physical) four space-time dimensions exist \\cite{GlJa}. In spite of that, it is still widely believed, and there are good reasons for that \\cite{Haag}, that rqft is the most fundamental physical theory. One of its most basic principles is \\textbf{microcausality} (\\cite{Haag}, \\cite{StrWight}), which is the local (i.e., in terms of local quantum fields) formulation of Einstein causality, the limitation of the velocity of propagation of signals by $c$, the velocity of light in the vacuum:\n$$\n[\\Phi(f),\\Phi(g)] = 0\n\\eqno{(1)}\n$$\nwhere the fields $\\Phi$ are regarded as (space-time) operator-valued distributions, when the supports of $f$ and $g$ are space-like to one another, see section 3.\n\nUnfortunately, in spite of its enormous success, nonrelativistic quantum mechanics (nrqm) is well-known to violate Einstein causality, which is not surprising, since nrqm is supposed to derive from the the non-relativistic limit $c \\to \\infty$ of rqft, mathematically speaking a rather singular limit, called a group contraction, from the Poincar\\'{e} to the Galilei group, first analysed by In\\\"{o}n\\\"{u} and Wigner \\cite{InWi}. For some systems, such as quantum spin systems, finite group velocity follows from the Lieb-Robinson bounds \\cite{LiR}, \\cite{NS}: these systems are, however, approximations to nonrelativistic many-body systems. 
Due to the crucial importance of Einstein causality for the foundations of physics (see \\cite{HMN}), it is important to understand in which precise sense nrqm is an \\textbf{approximation} of a causal theory, viz., rqft.\n\nIn classical physics, acausal behavior is well-known, e.g., in connection to the diffusion equation or heat conduction problems. In both cases, these equations may be viewed as an approximation of the telegraphy equation (see \\cite{Bar}, pg. 185 or \\cite{MorFes}, section 7.4), and the approximations are under mathematical control. What happens in quantum theory?\n\nFor imaginary times, the heat diffusion equation becomes the Schr\\\"{o}dinger equation, for a free particle of mass $m$ in infinite space ($\\hbar = 1$):\n$$\ni\\frac{\\partial}{\\partial t} \\Psi_{t} = - \\frac{1}{2m} \\triangle \\Psi_{t} \\mbox{ with } \\Psi \\in {\\cal H}=L^{2}(\\mathbf{R}^{3})\n\\eqno{(2.1)}\n$$\nThe Laplacean $\\triangle$ is a multiplication operator in momentum space, and the solution of (2.1) is (with $\\tilde{f}$ denoting Fourier transform of $f$),\n$$\n\\tilde{\\Psi_{t}}(\\vec{p}) = \\exp(-it\\frac{\\vec{p}^{2}}{2m}) \\tilde{\\Psi_{0}}(\\vec{p})\n\\eqno{(2.2)}\n$$\nAssuming that $\\Psi_{0}$ is a $C_{0}^{\\infty}(\\mathbf{R}^{3})$ function, i.e., smooth with compact support, it follows from (2.2) and the ''only if'' part of the Paley-Wiener theorem (see,e.g., \\cite{RSII}, Theorem IX-11) that, for any $t \\ne 0$, $\\Psi_{t}$ cannot be of compact support and is thus infinitely extended: one speaks of ''instantaneous spreading'' \\cite{GCH}. Of course, spreading is a general phenomenon in quantum physics, and this feature is demonstrated in a varied number of situations and in several possible ways, including an exact formula for the free propagator (see, e.g., \\cite{MWB}, chapter 2). The fact that the violation of Einstein causality is ''maximal'' was sharpened and made precise by Requardt \\cite{Req}, who showed that, for a class of one and n body non-relativistic systems, states localized at time zero in an arbitrarily small open set of $\\mathbf{R}^{n}$ are total after an arbitrarily small time.\n\nA very nice recent review of related questions was given by Yngvason in \\cite{Yng}. His theorem 1 transcribes a result proved by Perez and Wilde \\cite{PeWil} (see also \\cite{Yng} for additional related references), which shows that localization in terms of position operators is incompatible with causality in relativistic quantum physics. \n\nA different approach, which generalizes the argument after (2.2) by making a different use of analyticity, and also introduces the dichotomy mentioned in the abstract, was proposed by Hegerfeldt \\cite{GCH1}:\n\n\\textbf{Theorem 1} Let $H$ be a self-adjoint operator, bounded below, on a Hilbert space ${\\cal H}$. 
for given $\\Psi_{0} \\in {\\cal H}$, let $\\Psi_{t}, t \\in \\mathbf{R}$, be defined as\n$$\n\\Psi_{t} = \\exp(-iHt) \\Psi_{0}\n\\eqno{(3)}\n$$\nLet $A$ be a positive operator on ${\\cal H}$, $A \\ge 0$, and $p_{A}$ be defined by\n$$\np_{A}(t) \\equiv (\\Psi_{t}, A \\Psi_{t})\n\\eqno{(4)}\n$$\nThen, either\n$$\np_{A}(t) \\ne 0 \n\\eqno{(5)}\n$$\nfor almost all $t$ and the set of such $t$ is dense and open, or \n$$\np_{A}(t) \\equiv 0 \\mbox{ for all } t\n\\eqno{(6)}\n$$\n\nIf, now, the probability to find a particle inside a bounded region $V$ is given by the expectation value of an operator $N(V)$, such that\n$$\n0 \\le N(V) \\le \\mathbf{1}\n\\eqno{(7)}\n$$\n(e.g., $$N(V) = |\\chi_{V})(\\chi_{V}| \\eqno{(8)}$$ , where $\\chi_{V}$ is the characteristic function of $V$), and $\\mathbf{1}$ the identity operator, it follows from theorem 1, with the choice\n$$\nA = \\mathbf{1} - N(V)\n\\eqno{(9)}\n$$\nthat, if at $t=0$ a particle is strictly localized in a bounded region $V_{0}$, then, unless it remains in $V_{0}$ for all times, it cannot be strictly localized in a bounded region $V$, however large, for any finite time interval thereafter, implying a violation of Einstein causality (see also \\cite{GCH1}, pg. 24, for further comments).\n\nOur main purpose in this review is to analyse the dichotomy (5)-(6) for nonrelativistic quantum systems (in case (6) we include some relativistic systems in the final discussion in the conclusion). We start with the option given by equation (6).\n\n\\section{Systems confined to a bounded region of space in quantum theory}\n\nOption (6) is found - with $A$ defined by (7)-(9) - in all systems restricted to lie in a finite region $V$ by \\textbf{boundaries}, with a Hamiltonian $H_{V}$ in theorem 1 self-adjoint and bounded below. This includes the electromagnetic field (Casimir effect, see the conclusion), but we now concentrate on nonrelativistic quantum systems. The simplest prototype of such is the free Hamiltonian $H_{V} = -\\frac{d^{2}}{dx^{2}}$, with $V=[0,L]$. The forthcoming theorem summarizes (and slightly extends) the rather detailed analysis in \\cite{GarKar}, using the results in \\cite{Robinson} (see the appendix of \\cite{GarKar} and references given there for the standard concepts used below). Our forthcoming conclusions differ, however, from \\cite{GarKar}. \n\n\\textbf{Theorem 2.1} In the following three cases, $H_{V}$ is self-adjoint and semi-bounded:\n\na1) $H_{V}^{\\sigma}$ on the domain \n\\begin{eqnarray*}\nD(H_{V}^{\\sigma}) = \\{\\mbox{ set of absolutely continuous (a.c.) functions } \\Psi \\mbox{ over } [0,L]\\\\\n\\mbox{ with a.c. first derivative } \\Psi^{'} \\mbox{ such that } \\Psi^{''} \\in L^{2}(0,L)\\}\n\\end{eqnarray*}\nand satisfying the boundary condition (b.c.)\n$$\n\\Psi^{'}(0)= \\sigma_{0} \\Psi(0) \\mbox{ and } \\Psi^{'}(L)= -\\sigma_{L} \\Psi(L)\n\\eqno{(10)}\n$$\nwhere $(\\sigma_{0},\\sigma_{L}) \\in (\\mathbf{R} \\times \\mathbf{R})$;\n\na2) $H_{V}^{\\infty}$ on $D(H_{V}^{\\infty})$, same as inside the brackets in a1), but with (10) replaced by\n$$\n\\Psi(0) = \\Psi(L) = 0\n\\eqno{(11)}\n$$\n\na3) $H_{V}^{\\theta}$, on $D(H_{V}^{\\theta})$, same as inside the brackets in a1), but with (10) replaced by \n$$\n\\Psi(0)= \\exp(i\\theta) \\Psi(L)\n\\eqno{(12)}\n$$\nwith $\\theta \\in \\mathbf{R}$. The case $\\sigma_{0}=0$ in a1) corresponds to Neumann b.c., $\\sigma_{0}>0$ to repulsive, $\\sigma_{0}<0$ to attractive boundaries (see \\cite{Robinson}, pg.17), with analogous statements for $\\sigma_{L}$. 
The case a2) corresponds to setting $\\sigma_{0} = -\\sigma_{L} = \\infty$ in (10), and is the case of impenetrable boundaries (Dirichlet b.c.). a3) is a generalization of periodic b.c.. We also have:\n\n\\textbf{Theorem 2.2} \n\na) In case a1), the momentum $p=-i\\frac{d}{dx}$ is not a symmetric operator;\n\nb) In case a2), $p$ defines a closed symmetric operator $p_{\\infty}$, and in case a3) it is a self-adjoint operator $p_{\\theta}$, which is a self-adjoint extension of $p_{\\infty}$, but for no $\\theta \\in \\mathbf{R}$ there are functions satisfying the Dirichlet b.c. (11) in the domain $D(p_{\\theta})$ of $p_{\\theta}$. Furthermore, in case a3),\n$$\nH_{V}^{\\theta} = p_{\\theta}^{2} = p_{\\theta}^{*} p_{\\theta}\n\\eqno{(13)}\n$$\n\nAn explicit proof of b.) may be found in \\cite{GarKar}, and a.) is straightforward. We see that in cases a1) and a2) the momentum is not well-defined (as a self-adjoint operator), while it is so in case a3), in which case the expected property (13) holds.\n\nWhat do we conclude fom theorems 2.1 and 2.2 (and their natural extensions to partial differential operators in higher dimensions, see \\cite{Robinson}, pg. 34)? As remarked by Robinson (\\cite{Robinson}, page 22), defining the probability current density $j(x)$ associated to the particle,\n$$\nj(x) = i(\\frac{d\\bar{\\Psi}}{dx} \\Psi(x)-\\bar{\\Psi}(x)\\frac{d\\Psi(x)}{dx})\n\\eqno{(14)}\n$$\nwe see that for a1),a2), $j(0)=0=j(L)$, while, in case a3), only $j(0)=j(L)$ holds. Thus, only in cases a1),a2) the particle flux both into and out of the system is zero, corresponding to an \\textbf{isolated} system, while a3) only means that all that flows in at $x=0$ flows out at $x=L$. This is the case with \\textbf{periodic} b.c. (a restriction of a3)), which requires for each $\\Psi$ that $\\Psi(x+L)=\\Psi(x)$ for all $x \\in [0,L]$. we thus call a3) generalised periodic b.c.: they allow a finite system to have a momentum operator \\cite{MaRo}, because, at the same time, they render the system ''infinite'' in a peculiar way, making it into a torus. \n \nWe see, therefore, that the attempt to confine a quantum system in a bounded region of space by imposing on it b.c. originating from classical physics (a1),a2)) leads to physical inconsistencies, since the momentum is expected to exist as a local generator of space-translations (theorem 2.2). Generalized periodic b.c. (a3)) sometimes save the situation, for instance regarding thermodynamic quantities in statistical mechanics, which are expected (and often proven) not to depend on the boundary conditions \\cite{Ru}. For expectation values and correlation functions this need not be the case. In addition, there are situations in rqft, such as the Casimir effect, for which periodic b.c. are definitely not adequate, as we shall comment in the conclusion.\n\n\\section{The problem of instantaneous spreading}\n\nWe now come to the option given by equation (5). Using theorem 1, Hegerfeldt (see \\cite{GCH} and references given there) proposed to analyse a two-atom model suggested by Fermi \\cite{Fermi} to check finite propagation speed in quantum electrodynamics, with $H$ the Hamiltonian, $A=A_{e_{B}}$, the probability that atom $B$, initially in the ground state, is excited by a photon resulting from the decay of atom $A$, initially in an excited state, and $\\Psi_{0}$ denoting the initial physical state of the system $A-B$. The conclusion is that $B$ is either immediately excited with nonzero probability or never. 
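Before turning to the criticism of this conclusion, it may help to see the instantaneous spreading discussed in the introduction numerically. The following minimal sketch (not part of the original analysis; units with $\\hbar=m=1$ and a periodic box as a stand-in for infinite space are assumptions) evolves a $C_{0}^{\\infty}$ bump function supported in $[-1,1]$ with the free propagator (2.2) and prints the probability of finding the particle outside the initial support:
\\begin{verbatim}
import numpy as np

# Illustrative sketch: free Schroedinger evolution (hbar = m = 1) of a
# compactly supported wave packet on a periodic grid; the probability
# outside the initial support is nonzero for every t > 0.
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

inside = np.abs(x) < 1.0                    # support of the initial state
psi0 = np.zeros(N)
psi0[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))   # C_0^infinity bump
psi0 /= np.sqrt(np.sum(psi0 ** 2) * dx)     # normalize to a probability density

for t in (1e-3, 1e-2, 1e-1):
    psi_t = np.fft.ifft(np.exp(-1j * t * k ** 2 / 2.0) * np.fft.fft(psi0))
    p_out = np.sum(np.abs(psi_t[~inside]) ** 2) * dx
    print('t =', t, ' P(outside [-1,1]) =', p_out)
\\end{verbatim}
The printed probability is strictly positive for every $t>0$, the numerical counterpart of the Paley-Wiener argument following (2.2). We now return to Hegerfeldt's conclusion for the Fermi two-atom model.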
This conclusion was challenged by Buchholz and Yngvason \\cite{BYng} in a beautiful and subtle analysis, in which they concluded that there are no causality problems for the Fermi system in a full description of the system by rqft. One important point raised in \\cite{BYng} is that (4),(5) with $A$ positive is not an adequate criterion to investigate causality in rqft, as shown by the simple counterexample of the state $(\\Psi_{0}, \\cdot \\Psi_{0})$ equal to the vacuum state, for which $p_{A}(t)$ is \\textbf{always} nonzero for $A$ positive space-localized, by the Reeh-Schlieder theorem (see, e.g., \\cite{Ar}, pg. 101). It is perhaps worth noting that a non-perturbative rqft description of the two-atom system is not known, but the authors \\cite{BYng} relied on the general principles of rqft (\\cite{Haag}, \\cite{Ar}).\n\nIt follows from the above that instantaneous spreading would cease to be an obstacle to the physical consistency of nonrelativistic quantum mechanics if it could be shown that the latter is an approximation, in a suitable precise sense, to rqft. In this review we expand on a discussion by C. J\\\"{a}kel and one of us \\cite{JaWre} on this matter. In a not yet precise fashion (but see Lemma 2), one might propose as approximation criterion\n\n\\textbf{Proposal C} Nonrelativistic ground state expectation values are ''close'' to the corresponding relativistic vacuum expectation values when certain physical parameters are ''small''.\n\nFor an atom, e.g. hydrogen, in interaction with the electromagnetic field in its ground state, one such parameter is the ratio between the mean velocity of the electron in the ground state and $c$, which is of order of the fine structure constant. It is clear that the Dirac atom, with a potential, is not a fully relativistic system, and therefore not a candidate to solve the Einstein causality problems in the manner proposed in \\cite{BYng}: thus, the well-known relativistic corrections \\cite{JJS} do not solve the causality issue as sketched above. Perturbative quantum electrodynamics, in spite of its great success, does not offer a solution either: for instance, the relativistic Lamb shift relies strongly on Bethe's nonrelativistic treatment (see, e.g., \\cite{JJS}, pg. 292). \n\nWe now attempt to make proposal C precise, and, at the same time, show some results relating relativistic and nonrelativistic systems which are not found in this form in the textbook literature. We take as the nonrelativistic systems, formulated in Fock space, the symmetric Fock space for Bosons, ${\\cal F}_{s}({\\cal H})$ which we simply denote by ${\\cal F}$ (see \\cite{MaRo} for a nice textbook presentation) and there the state $\\omega_{\\Psi_{0}}=(\\Psi_{0}, \\cdot \\Psi_{0})$. 
The observables will be functionals of the nonrelativistic free quantum fields at time zero:\n$$\n\\Phi(\\vec{x}) = \\phi(\\vec{x}) + \\phi^{*}(\\vec{x})\n\\eqno{(15.1)}\n$$\nwhere $*$ denotes hermitian conjugate and\n$$\n\\phi(\\vec{x}) = \\frac{1}{(2\\pi)^{3\/2}(2m_{0})^{1\/2}}\\int d\\vec{k} a(\\vec{k})\\exp(-i\\vec{k}\\cdot \\vec{x})\n\\eqno{(15.2)}\n$$\nand the canonically conjugate momenta\n$$\n\\Pi(\\vec{x}) = \\pi(\\vec{x}) + \\pi^{*}(\\vec{x})\n\\eqno{(16.1)}\n$$\nwhere\n$$\n\\pi(\\vec{x}) = -\\frac{i(2m_{0})^{1\/2}}{(2\\pi)^{3\/2}}\\int d\\vec{k} a(\\vec{k})\\exp(i\\vec{k}\\cdot \\vec{x}) \n\\eqno{(16.2)}\n$$\nAbove, $a, a^{*}$ are annihilation-creation operators satisfying\n$$\n[a(\\vec{k}),a^{*}(\\vec{l})] = \\delta(\\vec{k}-\\vec{l})\n\\eqno{(17)}\n$$\nIt is more adequate, both mathematically and physically (\\cite{MaRo},\\cite{RSII}) to use the smeared fields\n$$\n\\Phi(f) = \\int d\\vec{x} f(\\vec{x}) \\Phi(\\vec{x})\n\\eqno{(18)}\n$$\nand\n$$\n\\Pi(g) = \\int d\\vec{x} g(\\vec{x}) Pi(\\vec{x})\n\\eqno{(19)}\n$$\ni.e., to consider $\\Phi,\\Pi$ as operator-valued distributions, satisfyind the canonical commutation relations (CCR)\n$$\n[\\Phi(f),\\Pi(g)] = i(f,g)\n\\eqno{(20)}\n$$\non a suitable dense set (\\cite{RSII}, pg. 232), with\n$$\n(f,g) = \\int d\\vec{x} \\bar{f}(\\vec{x}) g(\\vec{x})\n\\eqno{(21)}\n$$\nfor $f,g \\in {\\cal S}(\\mathbf{R}^{3})$, the Schwarz space \\cite{RSII}. For the free relativistic quantum system, the corresponding state is again the no-particle state $\\omega_{\\Psi_{0}}$, the observables (functionals of) the relativistic free quantum fields\n$$\n\\Phi_{r}(\\vec{x}) = \\phi_{r}(\\vec{x}) + \\phi_{r}^{*}(\\vec{x})\n\\eqno{(22.1)}\n$$\nwhere\n$$ \n\\phi_{r}(\\vec{x}) = \\frac{c}{(2\\pi)^{3\/2}}\\int d\\vec{k}\\frac{1}{(2\\omega_{\\vec{k}}^{c})^{1\/2}} a(\\vec{k})\\exp(-i\\vec{k}\\cdot \\vec{x})\n\\eqno{(22.2)}\n$$\nand the canonically conjugate momentum\n$$\n\\Pi_{r}(\\vec{x}) = \\pi_{r}(\\vec{x}) + \\pi_{r}^{*}(\\vec{x})\n\\eqno{(23.1)}\n$$\nwith\n$$\n\\pi_{r}(\\vec{x}) = -\\frac{i}{(2\\pi)^{3\/2}c}\\int d\\vec{k}(2\\omega_{\\vec{k}}^{c})^{1\/2} a(\\vec{k})\\exp(i\\vec{k}\\cdot \\vec{x})\n\\eqno{(23.2)}\n$$\nIt is convenient to consider the CCR in the Weyl form\n$$\n\\exp(i\\Pi(f))\\exp(i\\Phi(g))=\\exp(i\\Phi(g))\\exp(i\\Pi(f))\\exp(-i(f,g))\n\\eqno{(24)}\n$$\nfor $f,g \\in {\\cal S}_{\\mathbf{R}}(\\mathbf{R}^{3})$, the Schwarz space of real-valued functions on $\\mathbf{R}^{3}$. Above,\n$$\n\\omega_{\\vec{k}}^{c} \\equiv (c^{2}\\vec{k}^{2}+m_{0}^{2}c^{4})^{1\/2}\n\\eqno{(25)}\n$$\nwith $m_{0}$ the ''bare mass'' of the particles. We write\n$$\na(f) = (2m_{0})^{1\/2}[\\Phi(f)+i \\Pi(f)]\n\\eqno{(26)}\n$$\nand similarly for $a^{*}(f),a_{r}(f),a_{r}^{*}(f)$. The zero-particle vector $\\Psi_{0} \\in {\\cal F}$ is such that\n$$\na(f)\\Psi_{0} = 0 \\mbox{ for all } f \\in {\\cal S}(\\mathbf{R}^{3})\n\\eqno{(27)}\n$$\nand similarly for $a_{r}(f)$. 
We assume that there exists a continuous unitary representation $U(\\vec{a},R)$ of the Euclidean group $\\vec{x} \\to R\\vec{x}+\\vec{a}$ on ${\\cal F}$ with $R$ a rotation and $\\vec{a}$ a translation, s.t.\n$$\nU(\\vec{a},R) a(f) U(\\vec{a},R)^{-1} = a(f_{\\vec{a},R})\n\\eqno{(28)}\n$$\nwith\n$$\nf_{\\vec{a},R}(\\vec{x})= f(R^{-1}(\\vec{x}-\\vec{a}) \n\\eqno{(29)}\n$$\nThe following lemma is fundamental:\n\n\\textbf{Lemma 1} The no-particle state $\\Psi_{0}$ is the unique state invariant under $U(\\vec{a},R)$.\n\n\\textbf{Proof} By (27),(28),\n$$\n0 = U(\\vec{a},R) a(f) \\Psi_{0} = a(f_{\\vec{a},R})U(\\vec{a},R) \\Psi_{0}\n\\eqno{(30)}\n$$\nSince, for all $f \\in {\\cal S}(\\mathbf{R}^{3})$, $U(\\vec{a},R) \\Psi_{0}$ is also a no-particle state by (30), it follows that\n$$\nU(\\vec{a},R) \\Psi_{0} = \\lambda(\\vec{a},R) \\Psi_{0}\n\\eqno{(31)}\n$$\nwith $|\\lambda|=1$, and the $\\lambda$ form a one-dimensional representation of the Euclidean group. Since the Euclidean group posesses only the trivial one-dimensional representation, we conclude that\n$$\nU(\\vec{a},R) \\Psi_{0} = \\Psi_{0}\n$$\ni.e., $\\Psi_{0}$ is necessarily a Euclidean invariant state (As Wightman observes \\cite{Wight}, this is \\textbf{not} assumed when one writes (28)!). In the case of the free (relativistic or nonrelativistic) field, the cluster property of the two-point function (a corollary of the Riemann-Lebesgue lemma, see, e.g., \\cite{MWB}, Lemma 3.8) implies, together with von Neumann's ergodic theorem (see, again, e.g., \\cite{MWB}, Theorem A.2), that $\\Psi_{0}$ is the unique state invariant under all space translations and, thus, the unique state invariant under all $U(\\vec{a},R)$. q.e.d.\n\nLemma 1 is the main ingredient of\n\n\\textbf{Theorem 3} The representations of the Weyl CCR (24) $(\\Phi,\\Pi)$ and $(\\Phi_{r},\\Pi_{r})$ are unitarily inequivalent.\n\n\\textbf{Proof} The proof of this theorem follows from (\\cite{RSII}, Theorem X.46), the inequivalence of the Weyl CCR for different masses $m_{1}$ and $m_{2}$, by identifying $\\Phi_{m_{1}}$ with $\\Phi$ and $\\Phi_{m_{2}}$ with $\\Phi_{r}$ (and similarly for the $\\Pi$). Let $G(R,\\vec{a})$ (resp. $G_{r}(R,\\vec{a})$) be the representatives of the Euclidean group leaving $(\\Phi,\\Pi)$ (resp.$(\\Phi_{r},\\Pi_{r})$) invariant. We assume that there exists a unitary map $T$ on ${\\cal F}$ which satisfies $$T \\exp(i\\Phi(f))T^{-1} = \\exp(i\\Phi_{r}(f)) \\eqno{(32)}$$ and $$T \\exp(i\\Pi(f))T^{-1} = \\exp(i\\Pi_{r}(f) \\eqno{(33)}$$. Exactly as in \\cite{RSII}, Theorem X.46, pg.234, this leads to \n$$\nTG(R,\\vec{a})T^{-1} = G_{r}(R,\\vec{a})\n\\eqno{(34)}\n$$\nfor all $(R,\\vec{a})$ in the Euclidean group. Applying (34) to $\\Psi_{0}$, we find\n$$\nT \\Psi_{0} = G_{r}(R,\\vec{a}) T \\Psi_{0}\n\\eqno{(35)}\n$$\nand, since, by lemma 1, $\\Psi_{0}$ is the unique vector in ${\\cal F}$ invariant under both $G(R,\\vec{a})$ and $G_{r}(R,\\vec{a})$, (35) yields\n$$\nT \\Psi_{0} = \\alpha \\Psi_{0}\n\\eqno{(36)}\n$$\nwhere $\\alpha$ is a phase. From (32), (33), and (36),\n\\begin{eqnarray*}\n(\\Psi_{0}, \\Phi(f) \\Phi(g) \\Psi_{0}) = (\\Psi_{0}, T \\exp(i\\Phi(f))T^{-1} T \\exp(i\\Phi(g))T^{-1}\\Psi_{0})\\\\\n= (\\Psi_{0},\\exp(i\\Phi_{r}(f))\\exp(i\\Phi_{r}(g))\\Psi_{0})\n\\end{eqnarray*}$$\\eqno{(37)}$$\nwhich implies that $\\Psi_{0}, \\Phi(f) \\Phi(g) \\Psi_{0})$ and $(\\Psi_{0},\\exp(i\\Phi_{r}(f))\\exp(i\\Phi_{r}(g))\\Psi_{0})$ are equal as tempered distributions on\n${\\cal S}(\\mathbf{R}^{3}) \\times {\\cal S}(\\mathbf{R}^{3})$. 
We have, from (15), (22),\n\\begin{eqnarray*}\n(\\Psi_{0}, \\Phi(\\vec{x})\\Phi(\\vec{y})\\Psi_{0}) = \\frac{1}{2m_{0}} \\delta(\\vec{x}-\\vec{y})=\\\\\n= \\frac{1}{(2m_{0})(2\\pi)^{3}} \\int d\\vec{k} \\exp(i\\vec{k} \\cdot (\\vec{x}-\\vec{y}))\n\\end{eqnarray*} $$\\eqno{(38)}$$\nwhile\n\\begin{eqnarray*}\n(\\Psi_{0}, \\Phi_{r}(\\vec{x}) \\Phi_{r}(\\vec{y}) \\Psi_{0}) = \\frac{1}{i}\\Delta_{+}(\\vec{x}-\\vec{y},m_{0}^{2})=\\\\\n=\\frac{1}{2(2\\pi)^{3}} \\int d\\vec{k} \\exp(i\\vec{k}\\cdot(\\vec{x}-\\vec{y}))\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}}\n\\end{eqnarray*}$$\\eqno{(39)}$$\nand, by (25),\n$$\n\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}}= \\frac{1}{m_{0}(1+\\frac{\\vec{k}^{2}}{m_{0}^{2}c^{2}})^{1\/2}}\n\\eqno{(40)}\n$$\nfrom which\n$$ \n\\frac{c^{2}}{\\omega_{\\vec{k}}^{c}} \\to \\frac{1}{m_{0}} \\mbox{ as } c \\to \\infty\n\\eqno{(41)}\n$$\nFor finite $c$, (38) and (39) do not, however, satisfy (37), leading to a contradiction. q.e.d.\n\nIn spite of the fact that the relativistic and nonrelativistic zero-time fields lead to inequivalent representations of the CCR due to the fact that the corresponding two-point functions are different for finite $c$, (41) shows that (39) tends to (38) as $c \\to \\infty$ and suggests that proposal C might be correct This is the content of the forthcoming lemma 2. We assume that we are given two ''wave-functions'' $f_{1},f_{2}$ such that\n$$\nf_{1}, f_{2} \\in {\\cal S}(\\mathbf{R}^{3})\n\\eqno{(42)}\n$$\n\\textbf{Lemma 2} Let (42) hold and $\\epsilon $, $\\delta $ be chosen such that for $i=1,2$,\n\\begin{eqnarray*}\n\\int_{\\frac{|\\vec{k}|}{m_{0}c}>\\delta} d\\vec{k}|\\tilde{f_{i}}(\\vec{k})|^{2} < \\epsilon\\\\\n\\end{eqnarray*}\n$$\\eqno{(43)}$$\nThen\n\\begin{eqnarray*}\n2m_{0} \\Delta C \\equiv 2m_{0} |(\\Psi_{0}, \\Phi(f_{1})\\exp(-i(t_{1}-t_{2})H) \\Phi(f_{2})\\Psi_{0})-\\\\\n- (\\Psi_{0}, \\Phi_{r}(f_{1}) \\exp(-i(t_{1}-t_{2})H_{r}) \\Phi_{r}(f_{2})\\Psi_{0})| \\le\\\\\n\\le (2\\epsilon + \\delta^{2}\/2 + |t_{1}-t_{2}|\\frac{m_{0}c^{2}}{\\hbar} \\frac{\\delta^{4}}{8})\n\\end{eqnarray*} $$\\eqno{(44)}$$\nAbove,\n$$\nH \\equiv \\int d\\vec{k} \\frac{\\vec{k}^{2}}{2m_{0}} a^{*}(\\vec{k})a(\\vec{k})\n\\eqno{(45)}\n$$\n$$\nH_{r} \\equiv \\int d\\vec{k} (\\omega_{\\vec{k}}^{c}-m_{0}c^{2}) a^{*}(\\vec{k})a(\\vec{k}) \n\\eqno{(46)}\n$$\nWe also define the number operator\n$$\nN \\equiv \\int d\\vec{k} a^{*}(\\vec{k})a(\\vec{k})\n\\eqno{(47)}\n$$\n\n\\textbf{Remark} It is supposed that $\\delta$ is sufficiently small and is coupled to $\\epsilon$, so that both are small: a fine tuning is required in (43) and depends on the specific problem, but the requirement (43) is very natural and corresponds to the previously mentioned condition that the wavefunctions are ''small' beyond a certain critical momentum (in the ''relativistic'' region of momenta). In addition the time interval $|t_{1}-t_{2}|$ should be small in comparison with characteristic times related to the rest energy $\\frac{\\hbar}{m_{0}c^{2}}$ (here we reinserted $\\hbar$ for clarity). (45)-(47) may be understood as quadratic forms (see \\cite{RSII}, pg. 220). The quantity subtracted in (46) is the ''Zitterbewegungsterm'' \\cite{JJS}. 
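We also record, for orientation, the elementary estimates behind (41) and behind the bounds used in the proof of Lemma 2 below (with $\\hbar=1$). For $|\\vec{k}| \\le m_{0}c\\delta$, set $x \\equiv \\vec{k}^{2}\/(m_{0}^{2}c^{2}) \\le \\delta^{2}$. By (25), $\\omega_{\\vec{k}}^{c}=m_{0}c^{2}\\sqrt{1+x}$, so that, by (40),
$$
\\frac{1}{2m_{0}}-\\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} = \\frac{1}{2m_{0}}\\left(1-\\frac{1}{\\sqrt{1+x}}\\right) \\le \\frac{x}{4m_{0}} \\le \\frac{\\delta^{2}}{4m_{0}}
$$
since $1-(1+x)^{-1\/2} \\le x\/2$ for $x \\ge 0$; moreover, since $\\vec{k}^{2}\/(2m_{0})=m_{0}c^{2}x\/2$,
$$
0 \\le \\frac{\\vec{k}^{2}}{2m_{0}}-(\\omega_{\\vec{k}}^{c}-m_{0}c^{2}) = m_{0}c^{2}\\left(1+\\frac{x}{2}-\\sqrt{1+x}\\right) \\le \\frac{m_{0}c^{2}x^{2}}{8} \\le \\frac{m_{0}c^{2}\\delta^{4}}{8}
$$
using $1+x\/2-x^{2}\/8 \\le \\sqrt{1+x} \\le 1+x\/2$. Together with $|\\exp(-i\\tau a)-\\exp(-i\\tau b)| \\le |\\tau||a-b|$ and $c^{2}\/(2\\omega_{\\vec{k}}^{c}) \\le 1\/(2m_{0})$, these give the three inequalities inserted in the proof, and show that (41) holds with an error of order $c^{-2}$.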
Notice that the $2m_{0}$ factor in (44) cancels the product of two $(2m_{0})^{-1\/2}$ in each $\\Phi(f)$ in (15), or the corresponding relativistic term in the limit $c \\to \\infty$ by (41).\n\n\\textbf{Proof} We write, by (45), (46), (15) and (22), and setting $\\tau \\equiv t_{1}-t_{2}$,\n\\begin{eqnarray*}\n\\Delta C = | \\int d\\vec{k} (\\tilde{f}_{1}(\\vec{k})-\\tilde{f}_{2}(\\vec{k})) \\beta(\\vec{k},\\tau,c)|\\\\\n\\beta(\\vec{k},\\tau,c) \\equiv \\frac{1}{2m_{0}} \\exp(-i\\tau \\frac{\\vec{k}^{2}}{2m_{0}}-\\\\\n- \\frac{c^{2}}{\\omega_{\\vec{k}}^{c}} \\exp(-i\\tau(\\omega_{\\vec{k}}^{c}-m_{0}c^{2}))\n\\end{eqnarray*} $$\\eqno{(48)}$$\nWe split the integral defining $\\Delta C$ in (48) into one over $ I_{\\delta} \\equiv \\{\\vec{k} ;\\frac{|\\vec{k}|}{m_{0}c}>\\delta \\}$, and the other over the complementary region. We now insert the elementary inequalities valid inside $I_{\\delta}$: \n\\begin{eqnarray*}\n\\frac{1}{2m_{0}} - \\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} \\le \\frac{\\delta^{2}}{4m_{0}}\\\\\n|\\exp(-i\\tau \\frac{\\vec{k}^{2}}{2m_{0}}) -\\exp(-i\\tau(\\omega_{\\vec{k}}^{c}-m_{0}c^{2}))|\\\\\n\\le \\frac{m_{0}c^{2}|\\tau|\\delta^{4}}{8\\hbar}\\\\\n\\frac{c^{2}}{2\\omega_{\\vec{k}}^{c}} \\le \\frac{1}{2m_{0}}\n\\end{eqnarray*}\nas well as assumption (43) in the complement of $I_{\\delta}$, into (48), to obtain (44). q.e.d.\n\nLemma 2 shows that in the free field case, in spite of the nonequivalence of the relativistic and nonrelativistic representations shown in theorem 3, Einstein causality is saved, at least in an approximative sense. The real trouble starts with interactions. In that case, (37) implies, taking now for $\\Phi$ the interacting field, and $\\Psi_{0}$ the interacting vacuum $\\Omega_{0}$, that the two-point function of the interacting field must equal that of the free field of mass $m_{0}$ in the case of equivalence of representations. For a hermitian local scalar field for which the vacuum is cyclic, (37) (with $m_{0} > 0$) implies that $\\Phi$ is a free field of mass $m_{0}$ (Theorem 4.15 of \\cite{StrWight}). We know, however, at least for space dimensions less or equal to 2, interacting fields exist, the first one historically having been in one dimension \\cite{GlJa} the free scalar Boson field of mass $m_{0} >0$. Its Hamiltonian is\n$$\nH(g) = H_{0}+H_{I}(g)= \\int_{\\mathbf{R}}dk \\omega_{k}a^{*}(k)a(k)+\\int_{\\mathbf{R}}dx g(x):\\Phi_{r}(x)^{4}:\n\\eqno{(49)}\n$$\nwith $g \\in L^{2}(\\mathbf{R})$ a real valued function. $H(g)$ is a well-defined symmetric operator on a dense set in Fock space (see proposition pg. 227 of \\cite{RSII}; for self-adjointness see further in the same reference). The dots in (49) denote the so-called Wick product, which means that all creation operators in $\\Phi_{r}(x)^{4}$ are to be placed to the left of all annihilation operators (for further elementary discussion see \\cite{MaRo}, and a complete treatment \\cite{RSII}). In (49), the limit $g \\to \\lambda$ (with $\\lambda > 0$ a constant, interpreted as the coupling constant) exists in a well-defined sense \\cite{GliJa}). In the present case, the vacua $\\Omega_{0}$ and the no-particle state, which also belong to inequivalent representations, differ greatly. 
This may already be expected on the level of (49), because the ground state $\\Omega_{g}$ of $H(g)$ (whose existence was proved in \\cite{GliJa}) cannot be $\\Psi_{0}$ because of the vacuum polarizing term $H_{I}^{P}(g)$ in (49):\n$$\nH_{I}^{P}(g) = \\int dx g(x) \\phi_{r}^{*}(x)^{4}\n\\eqno{(50)}\n$$\nThose terms in (49) which commute with the number operator $N$ given by (47) are all equal to\n$$\nH_{I}^{C}(g) = 6 \\int dx g(x) \\phi_{r}^{*}(x)^{2} \\phi_{r}(x)^{2} \n\\eqno{(51)}\n$$\nThe formal limit as $c \\to \\infty$ of the operator $H(g)-H_{I}^{C}(g)$ is not ''small'', for instance for (50) we get from (41)\n$$\nH_{I,\\infty}^{P}(g) = \\int dk_{1} \\cdots dk_{4} \\tilde{g}(k_{1}+\\cdots+k_{4})a^{*}(k_{1}) \\cdots a^{*}(k_{4})\n\\eqno{(52)}\n$$\nin the sense of quadratic forms. In the formal limit $c \\to \\infty$, $g \\to \\lambda$, (51) yields\n$$\nH_{I} = \\frac{3\\lambda}{2m_{0}^{2}} \\int dxdy a^{*}(x)a^{*}(y)\\delta(x-y)a(x)a(y)\n\\eqno{(53)}\n$$\nwith $a^{*}(x),a(x)$ defined by (26): together with $H_{0}$ in (49), this defines the Hamiltonian of a nonrelativistic system of Bosons with delta-function interactions (see \\cite{Do} for the precise definition in a segment with periodic b.c.).\n\nThe limit $g \\to \\lambda$, followed by $c \\to \\infty$, was controlled by Dimock \\cite{Dim} in a remarkable tour-de-force. He showed that the two-particle scattering amplitude of model (49) converges to that of model (53) (with the free Hamiltonian (45)). The proof in \\cite{Dim} does not, however, offer any hint as to how the contribution of all the terms in $H(g)-H_{I}^{C}(g)$ becomes irrelevant in that limit (W.F.W. thanks Prof. Dimock for a discussion about this topic). \n\nThe above-mentioned point is crucial, for the following reason. For quantum systems in general, it is essential to arrive at many-body systems with \\textbf{nonzero} density $\\rho$ in the thermodynamic limit, i.e., $N \\to \\infty$, $V \\to \\infty$, with $\\frac{N}{V}=\\rho >0$ (see \\cite{MaRo} for an overview of applications). The corresponding non-relativistic system has, in contrast to the situation considered in \\cite{Dim}, also an infinite number of degrees of freedom ($N \\to \\infty$). The situation has an analogy to the classical limit of quantum mechanical correlation functions considered by Hepp \\cite{HepC}, where two possible limits may be envisaged, one of them yielding quantum mechanical N-particle systems, the other one classical field theory. For free systems with nonzero density, non-Fock representations arise, both in the non-relativistic and in the relativistic cases \\cite{AW}, but it may be checked that lemma 2 continues to hold (for zero temperature). For interacting systems, however, $N$ is not a good quantum number, and, upon fixing it (at a large value proportional to the volume $V$), the relativistic system can only be close to the nonrelativistic one if the contribution of the terms $H(g)-H_{I}^{C}(g$ becomes indeed irrelevant in the joint limit $g \\to \\lambda$ followed by $c \\to \\infty$. \n\nAs an example, we expect that the ground state energy per unit volume of the relativistic system (with a Hamiltonian for volume $V$ defined as in \\cite{GliJa}) tends, as $V \\to \\infty$ and $c \\to \\infty$, to the thermodynamic limit $e$ of the same quantity in the Lieb-Liniger model \\cite{LLi}, which is explicitly known to be $e(\\rho)= \\rho^{3}f(\\frac{\\lambda}{\\rho})$, with $f$ known explicitly as the unique solution of a Fredholm integral equation. 
Since $\\rho$ is not a parameter in the relativistic system, it is only when the above mentioned terms do not contribute (in the limit $g \\to \\lambda$, $c \\to \\infty$) that a similar fixing of the density becomes possible also for the relativistic system. This seems to be a deep mystery, whatever the way the problem is regarded.\n\nIn order to explain the last issue more completely, consider the l.h.s. of (44) in lemma 2. In the first term thereof, $\\Psi_{0}$ should be replaced by the ground state of the Hamiltonian $H+H_{I}$, where $H$ is given by (45) and $H_{I}$ by (53), and $\\Phi(f)$ replaced by a bounded function of the zero time nonrelativistic fields as in (24). Properly speaking, instead of smearing with a function $g$ one should consider the Hamiltonian restricted to a bounded region, e.g. a segment with, say, periodic b.c., and the thermodynamic limit taken, but we shall continue with the previous description for brevity. The second term on the left hand side of (44) should be replaced by\n$$\n\\lim_{g \\to \\lambda} (\\Omega_{g}, A_{r}(f_{1}) \\exp(-i(t_{1}-t_{2})H(g)) B_{r}(f_{2})\\Omega_{g})\n\\eqno{(54)}\n$$\nwhere $\\Omega_{g}$ is the ground state of $H(g)$ (shown to exist in \\cite{GliJa}), $A_{r}(f_{1}),B_{r}(f_{2})$ are bounded local functions of the fields (22), (23), i.e., with $f_{1},f_{2}$ with \\textbf{compact} support in the space variable. It was shown in \\cite{GliJa} that \n$$\n\\exists\\lim_{g \\to \\lambda} \\exp(i \\tau H(g)) A(f) \\exp(-i \\tau H(g))\n\\eqno{(55)}\n$$\nin the sense of the norm (in the C*-algebraic sense) for bounded local $A(f)$ (for these concepts, see \\cite{BRo2} or \\cite{Hug}). This limit, for both operators $A(f_{1}),B(f_{2})$ in (54), determines the observable content of (54), but it is clear that the whole of $H(g)$ will contribute to (55), in particular terms such as\n\\begin{eqnarray*}\nH_{I,c}^{P}(g) = \\int dk_{1} \\cdots dk_{4} \\prod_{i=1}^{4} \\frac{c}{(2\\omega_{k_{i}}^{c})^{1\/2}}\\\\\n\\tilde{g}(k_{1}+\\cdots+k_{4}) a^{*}(k_{1}) \\cdots a^{*}(k_{4})\n\\end{eqnarray*} $$\\eqno{(56)}$$ \nin $H(g)-H_{I}^{C}(g)$ will contribute, for a certain choice of observables in (55). Given that their formal limit as $c \\to \\infty$ does not vanish ((52)), it seems very unlikely that the limit, as $c \\to \\infty$, of (55) is independent of $H(g)-H_{I}^{C}$. In this connection, one may recall that the $S$-matrix, considered in \\cite{Dim} as an observable, is of a special kind, because it commutes with the free Hamiltonian $H_{0}$ in (49).\n\nThe basic ingredient of the proof of (55) \\cite{GliJa} is the fundamental property of microcausality (1) \\cite{Haag}\\cite{StrWight}. On the other hand, the form (49) of the Hamiltonian is dictated by the property of Lorentz covariance \\cite{GlJa}, proved for this model in \\cite{HeOs}.\n\n\\section{Conclusion}\n\nIn this review we discussed two aspects of the dynamics of non-relativistic quantum systems, unified by a dichotomy in Hegerfeldt's theorem 1. According to this theorem, there are exactly two options (5) and (6) for such systems.\n\nThe first aspect was related to option (6) in that theorem, viz. the attempt to isolate a quantum system from its surroundings by a set of boundary conditions, including those of Dirichlet and Neumann type (a1,a2). In general, this leads to physical inconsistencies, as reviewed in theorem 2.2 for non-relativistic systems, see also \\cite{GarKar}. 
We view these inconsistencies as consequences of trying to impose conditions deriving from classical physics to quantum systems, be they non-relativistic or relativistic. The latter case is well illustrated in Milonni's famous paper \\cite{Mil}, where he proved that, near a perfectly reflecting slab, the transverse vector potential and the electric field satisfy a set of equal-time CCR different from those holding for free fields. In \\cite{BohrRos}, Bohr and Rosenfeld showed, under the natural assumption that the fields are measured by observing the motion of quantum massive objects with which the fields interact, that the above-mentioned equal-time CCR follow. They are, therefore, very fundamental. \n\nThis suggests that such idealized b.c. are unphysical: this fact was explicitly shown for Dirichlet or Neumann b.c. in the case of the Casimir effect \\cite{KNW}. The reason is that there are wild fluctuations of quantum fields over sharp surfaces \\cite{DeCa}. One promising direction to study this (as yet open) problem is to look at the electromagnetic field in the presence of dielectrics, instead of the ''infinitely thin'' conductor plates (see \\cite{Bim} and references given there). The Casimir problem (for conductors or dielectrics) is an example for which adoption of generalized periodic b.c. (a3) in theorems 2.1,2.2 is not a physically reasonable option: it is not compatible with the theory's classical limit.\n\nOur second topic concerned option (5) in theorem 1, the issue of instantaneous spreading for non-relativistic quantum systems. We concluded that there are serious obstacles on the way to rescue Einstein causality in the (natural) approximative sense of lemma 2, due to terms such as the vacuum polarizing term (56) in the interaction Hamiltonian (49), which are not ''small'' in the formal limit $c \\to \\infty$ by (52). Since the presence of such terms is dictated by such fundamental principles as Lorentz covariance and microcausality, the solution may not be simple.\n\nAlthough we used a special model for the sake of argument, any physical theory with vacuum polarization, such as quantum electrodynamics, is expected to be subject to analogous considerations. Notice that the remarks on the use of approximate theories in \\cite{Yng} do not apply here, because the problems we pose are not due to the approximative character of the theories, such as, e.g., various cutoffs in quantum-electrodynamics, but, as remarked in the previous paragraph, are due to an intrinsic property of relativistic quantum field theory, viz., vacuum polarization (or, more precisely, having a non-persistent vacuum).\n\nThe arguments we presented, however, are clearly no mathematical proof of a no-go theorem. One reason is that the limits $g \\to \\lambda$ and $c \\to \\infty$ do not necessarily commute: they may not. The problem is therefore open. It is hoped that a complete change of point of view may clarify the problem, but we conjecture that $H(g)-H_{C}^{I}(g)$ will play a central role in the final solution. \n\nProgress in both topics above would obviously be of great relevance for the foundations of quantum theory. \n\n\\textbf{Acknowledgement} The idea of a part of this review arose at the meeting of operator algebras and quantum physics, satellite conference to the XVIII international congress of mathematical physics. We thank the organizers for making the participation of one of us (W. F. W.) possible, and Prof. J. Dimock for discussions there on matters related to section 3. 
We also thank Christian J\\\"{a}kel for critical remarks concerning possible changes of viewpoint, and for recalling some relevant references. W.F.W. also thanks J. Froehlich for calling his attention to the reference \\cite{Yng}.\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nBiological systems, such as humans, use electrical signals as the medium of communication between their control centers (brains) and motor organs (arms, legs). While this is taken for granted by most people, those with severe physical impairments, such as quadriplegia, experience the breakdown of this communication system rendering them unable to perform the most basic physical movements. Modern technologies, such as BCIs, have attempted to ameliorate this through the use of brain signals as commands for assistive systems \\cite{perdikis2018cybathlon}. MI, a common paradigm for BCI control, requires the subject to simulate or imagine movement of the limbs on account of there being discernible differences in brain signals when moving different limbs \\cite{abiri2019comprehensive}. Due to it being non-invasive and cost-effective, EEG is the method of choice for collecting data for such systems \\cite{abiri2019comprehensive}.\n\nOne of the many recent developments in the application of EEG-driven BCIs is the Cybathlon competition held every four years under the auspices of Eidgen\u00f6ssische Technische Hochschule Z\u00fcrich (ETH Zurich) \\cite{riener2014cybathlon}. The competition involves physically challenged individuals completing routine tasks via assistive systems. One such task -- the BCI race -- has the participants (called pilots) control a virtual game character via brain signals only. Competing teams, who may hail from either academia or industry, are responsible for creating BCI systems and training their respective pilots. The goal of the Cybathlon is to push the state-of-the-art in BCI assistive systems, and accelerate its adoption in everyday lives of those who need it most.\n\nFor the 2020 edition of Cybathlon, a team from the Technische Universit\u00e4t M\u00fcnchen (TUM) called \"CyberTUM\" is amongst the competitors in the BCI race challenge. In order to achieve high scores in the competition, a major part of BCI development is the focus on robustness of the system i.e. minimizing the variability of the system for different sessions and environments. Lack of robustness, in fact, is an established concern in almost all BCI systems. Possible causes of the problem include nonstationarity of EEG signals (variance for the same subject) \\cite{wolpaw2002brain} \\cite{vidaurre2010towards}. An additional cause, as noted by participating teams in Cybathlon 2016, is the change in the subject's emotional state. During the race, As expected, a public event such as the BCI race, the pilots' stress stress levels increased. This is to be expected as a public event such as the BCI race can heighten stress. This change in the pilots' emotional state caused their respective BCI systems to perform sub-optimally. \n\nThe objective of this work is to mitigate this concern and develop MI systems that are robust to perturbations in the subject's emotional state, specifically to emotional arousal. In order to achieve this, we develop VR environments to induce high and low arousal in the subject before recording MI data. VR environments have been previously used along with EEG to prompt changes in emotional arousal \\cite{baumgartner2006neural}. 
Additionally, they have been used together with MI for treating Parkinson's disease \\cite{mirelman2013virtual}. To our knowledge, this is the first work where VR environments are used to increase robustness of MI-BCI systems. Subsequently, learning algorithms are trained, not only for MI but also for different arousal states. The idea is that during the BCI race, we first detect the pilot's emotional state of arousal, and choose the appropriate MI classifier. Due to COVID-19, many steps in the above mentioned outline had to be modified, the details of which are present as follows. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Related Work}\n\n\\subsection{Cybathlon 2016}\nThe inaugural Cybathlon competition was held in 2016. After the competition, the competing teams published their methods for training the participants, amongst which were Brain Tweakers (EPFL) \\cite{perdikis2018cybathlon} and Mirage91 (Graz University of Technology). One of the pilots of the former performed well in the qualifiers but poorly in the final, prompting the authors to cite psychological factors such as stress as the possible cause for the drop. A similar course of events was observed for the pilot of Mirage91, who after achieving an average runtime of 120 s in the days leading up to the Cybathlon, dropped to 196 s during the competition. The authors indicated that the pilot was showing signs of nervousness on competition day, with a heart beat of 132 beats per minute (bpm) prior to the race \\cite{statthaler2017cybathlon}. \n\nThe authors' hypothesis regarding the drop in their pilots' performances is supported by existing BCI literature \\cite{chaudhary2016brain} \\cite{lotte2013flaws} \\cite{hammer2012psychological} \\cite{jeunet2016standard}. Further support comes from evidence in affective science: It has been theorized that any event that causes an increase in emotional arousal can affect perception and memory in a manner which causes the retention of high-priority information and disregard of low-priority information \\cite{mather2012selective}.\n\n\n\\subsection{Emotional Valence and Arousal}\nEmotions are defined as complex psychological states, with three constituents: subjective experience, physiological and behavioral response \\cite{hockenbury2000discovering}. Following early attempts \\cite{wundt1897outlines}, more rigorous descriptions of emotions were made, the most widely accepted of which being the 'circumplex model' \\cite{russell1980circumplex}. It proposes that all emotions can be described as a combination of two properties: valence and arousal. These can be thought as orthogonal axes in two-dimensions. Neurologically, it entails that any emotional state is a result of two distinct and independent neural sub-systems \\cite{posner2005circumplex}. Figure \\ref{fig:circumplex} provides a visual representation of the circumplex model. As can be seen, emotions such as 'excited' are high on both the arousal and valence axes, while 'gloomy' is low in both arousal and valence.\n\n\\begin{figure}[tpb]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/valence-arousal.png}\n \\caption{The circumplex model of emotional classification. Figure courtesy of \\cite{Kim2016ImageRW}.} \\label{fig:circumplex}\n\\end{figure}\n\nAlternative descriptions, such as the 'vector model' \\cite{Bradley1992RememberingPP}, do not veer off sharply from the circumplex model; they too base emotional classification on both valence and arousal. 
Hence the circumplex model was used as the paradigm of emotional analysis for the duration of the project.\n\n\n\\subsection{Arousal, EEG and Motor Imagery}\\label{arousal-eeg-background}\nStates of high and low arousal can be inferred from EEG signals \\cite{pizzagalli2007electroencephalography}. This has been previously used to train learning systems for distinguishing between various arousal states \\cite{nagy2014predicting}. EEG bands pertinent to different states of arousal are alpha (8-14 Hz) -- related to a relaxed yet awakened state -- and gamma (36-44 Hz) -- a pattern associated with increased arousal and attention. The theta pattern (4-8 Hz), correlated with lethargy and sleepiness, is also useful for differentiating arousal. \n\nWith regards to motor imagery (MI), the most relevant EEG bands have been shown to be alpha (8-14 Hz) and beta (14-30 Hz) \\cite{graimann2010brain}, the latter of which is associated with high degrees of cognitive activity \\cite{pizzagalli2007electroencephalography}. \n\nMotor imagery data refers to data produced when the subject simulates limb movement. As movement of different limbs is sufficiently distinguishable, this can be used to perform control for various other tasks \\cite{padfield2019eeg}. To record EEG data for motor imagery, the 10-20 international system of electrode placement is used \\ref{fig:eeg-map}. Due to the cross-lateral nature of limb control in the human brain, movement of the right arm is recorded most faithfully by C3 and that of the left arm by C4 \\cite{graimann2010brain}.\n\n\\begin{figure}[htpb]\n \\centering\n \\includegraphics[width=0.3\\textwidth]{figures\/eeg-map.jpg}\n \\caption{10-20 International system of EEG electrode placement. Electrodes C3 and C4 are most relevant for MI activity. Figure courtesy of \\cite{rojas2018study}.} \n \\label{fig:eeg-map}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Methodology}\n\n\\subsection{Virtual Reality Environments}\nTraditional methods of inducing stress include the Sing-a-song stress test (SSST) \\cite{brouwer2014new} and the Trier social stress test (TSST) \\cite{kirschbaum1993trier}, while meditation has been shown to induce relaxation \\cite{sedlmeier2012psychological}\\cite{lumma2015meditation}. Emulating such environments faithfully in VR is sufficiently challenging, and may not be the most productive way to use VR to induce high\/low emotional arousal.\n\nPreviously, VR exposure therapy has been explored to alleviate various psychological disorders \\cite{krijn2004virtual}. One such example is using a VR height challenge -- placing the subject on higher ground in a virtual environment \\cite{diemer2016fear}. Not only does the challenge induce high emotional arousal in test subjects, but the control subjects -- the ones who are not acrophobic -- also exhibit the same physiological responses as the test group i.e. increased heart rate and skin conductance level \\cite{diemer2016fear}. Similarly, VR environments, particularly those with natural scenery e.g. a forest, have shown efficacy in reducing stress \\cite{anderson2017relaxation}\\cite{annerstedt2013inducing}. We thus developed two VR environments: one where the subject was placed on top of a skyscraper, called 'Height' while the second in a relaxing forest called 'Relaxation.' 
The environments were created using Unity 3D\\footnote{\\href{https:\/\/unity.com\/}{https:\/\/unity.com\/}}.\n\n\\begin{figure}[tbp]\n\\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/aroused.png}\n \\label{fig:f1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/relaxation.jpg}\n \\label{fig:f2}\n \\end{subfigure}\n \\caption{Virtual reality environments for inducing arousal in subjects. On top is 'Height' designed to induce high arousal by placing the subject on the edge of a skyscraper. Below is 'Relaxation' intended to lower arousal via a natural, calming setting.}\n \\label{fig:vr}\n\\end{figure}\n\n\n\\subsection{Dataset}\nAs this project was part of the CyberTUM team's participation in Cybathlon 2020, the original idea was to collect real data with the actual pilots who will be competing in the even proper. At the beginning of this work, however, no ethics approval had been acquired to run any experiments on the pilots. This was not detrimental to the project as a proof-of-concept could still be arrived at by collecting EEG data from volunteers within the CyberTUM team. The COVID-19 pandemic obstructed our means of collecting such data.\n\nIn the absence of our own motor-imagery and arousal data, we opted for the Graz 2b data set \\cite{leeb2008bci}. It belongs to a family of BCI datasets collected by the BCI Lab at Graz University of Technology. The dataset has been used previously in the BCI Competition IV \\cite{tangermann2012review}. EEG data is collected for 9 subjects doing a binary motor-imagery task (moving right and left hand on cue). The data is sampled at a frequency of 250 Hz with 3 EEG and 3 EOG channels. For our experiments, we use data from two subjects, B05 and B04, whom we refer to as subject 1 and 2 respectively henceforth.\n\n\n\\subsection{Subject Classification as Proxy for Arousal Classification}\\label{subject-classification}\nAs mentioned, we were unable to obtain our own EEG arousal data. To train the classifiers, we alternatively modified the experiment. Instead of using data with high\/low arousal emotional states as labels, we used different subjects as proxies for such states, making it a cross-subject classification task \\cite{del2014electroencephalogram} \\cite{riyad2019cross}. As EEG signals demonstrate significant variance between subjects, we can consider the data coming from subject A as that belonging to the emotional state of high arousal, and data from subject B as belonging to low arousal. With this approach, we can continue to train a classifier that would approximate the performance of one that is trained on actual arousal data, assuming the emotional states in this actual data are informative.\n\n\n\n\\subsection{Experimental Design}\nThe original scheme was to:\n\\begin{enumerate}\n \\item Develop VR environments in line with existing literature that are known to induce stress (high arousal) and relaxation (low arousal) in subjects.\n \\item Use electrodermal activity (EDA) activity to validate the efficacy of VR environments. EDA is a wide-used measure for emotional arousal, as skin conductance rises with rise in arousal \\cite{critchley2002electrodermal}.\n \\item Record MI data alternating between states of low and high arousal for each session. Start with 60s of inducing high arousal via the \"Height\" environment, then record MI data for 45s. Repeat the same with \"Forest\" environment for relaxed state. 
Repeat this process for each trial. The MI data was to be recorded by using the common paradigm of showing the participant a cue on screen (typically left or right arrow) which would prompt them to imagine as if they were moving their left or right hand \\cite{ramoser2000optimal} \\cite{pfurtscheller1997eeg} \\cite{liu2017motor}.\n \\item Train an arousal classifier. The aim of this classifier is to indicate the emotional state (high or low arousal) of the subject.\n \\item Train separate MI classifiers for each emotional state. The goal is to optimize for accuracy, even if different types of pre-processing and classifier types were required for each state, unlike the arousal classifier which necessitates the same pre-processing steps.\n \\item During deployment, first classify the emotional state using the arousal classifier, and based on its result, choose the appropriate MI classifier.\n\n\\end{enumerate}\nAs mentioned previously, due to numerous factors, many steps in the above formulation had to be either abandoned (2 and 3) or modified (4 and 5). The revised scheme, replaced steps 4-6 with the following:\n\n\\begin{enumerate}\n \\item Train a cross-subject classifier replacing the arousal classifier. The task of this classifier is to take EEG as input from any of the two subjects, and classify the input as belonging to either subject 1 or 2. As the classifier is agnostic to the subject, the same pre-processing had to be done for each subject's data.\n \\item Train separate MI classifiers for each subject instead of training for each emotional state.\n \\item At test time, sample a run of a few data points (5 in our experiments), feeding them to the cross-subject classifier. Based on its mode (most frequent classification), select the appropriate MI classifier.\n\\end{enumerate}\n\n\n\\subsection{Learning algorithms}\\label{algos}\nWe experimented with a multitude of machine learning algorithms which are briefly described as follows.\n\n\\paragraph{Logistic regression}\nLogistic regression is a modification of linear regression for a binary classification task \\cite{kleinbaum2002logistic}. It predicts the probability of a class given the input, by first learning a weighted linear combination of input features and applying a logistic function to the result.\n\\begin{equation}\n y = \\frac{1}{1+e^{-a}} \\quad where \\quad a = \\theta_0 + \\theta_1.x_1 + \\theta_2.x_2\n\\end{equation}\n\n\\paragraph{Linear discriminant analysis}\nLDA attempts to maximize inter-class variance while minimizing intra-class variance \\cite{balakrishnama1998linear} in the data. This results in a clustering of the data where it is easily separable. It is widely used in MI BCI \\cite{wang2006common} \\cite{wang2006common}.\n\n\\paragraph{Naive Bayes}\nA probabilistic classifier, naive bayes uses bayes' law to calculate the posterior probability of an event (class) given the prior and likelihood \\cite{murphy2006naive}. The posterior can then be updated with new evidence. It assumes that the features are independent, hence the term naive in its name.\n\\begin{equation}\n P(y|x) = \\frac{P(y).P(x|y)}{P(x)}\n\\end{equation}\n\n\\paragraph{Ensemble model}\nThis is implemented as a voting classifier in gumpy. It uses a mix of classifiers such as nearest-neighbor, LDA and support vector machines (SVM) and uses the majority vote as the classification output. 
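The voting idea can be sketched as follows (illustrative only; the particular members and settings below are assumptions, and gumpy's voting classifier may differ):
\\begin{verbatim}
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Hard-voting ensemble over the classifier families named above
# (nearest-neighbour, LDA, SVM), applied to standardized features.
ensemble = make_pipeline(
    StandardScaler(),
    VotingClassifier(
        estimators=[
            ('knn', KNeighborsClassifier(n_neighbors=5)),
            ('lda', LinearDiscriminantAnalysis()),
            ('svm', SVC(kernel='rbf')),
        ],
        voting='hard',  # output the majority vote of the members
    ),
)
# ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
\\end{verbatim}
With hard voting, the ensemble outputs whichever label is predicted by the majority of its members.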
As such, it necessarily either equals or outperforms both Naive Bayes and LDA as it uses them in the ensemble.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Results}\n\n\\subsection{Artifact Removal}\n\\begin{figure}[!t]\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/subject-5-1.png}\n \\caption{Plotting ICA with EOG channels. A visual depiction of the first component (in red) of ICA being correlated with EOG.}\n \\label{fig:f1}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/subject-5-2.png}\n \\caption{Plotting ICA components against each other. The peak in the first component (blue) evidently due to an eye-blink. }\n \\label{fig:f2}\n \\end{subfigure}\n \\caption{Artifact analysis using ICA for subject 1.}\n \\label{ica}\n\\end{figure}\nThe data for subject 1 and 2 contained 324 and 399 trials (attempts at moving right or left hand) respectively. The standard approach to train MI classifiers is to analyze data and remove existing artifacts before extracting features from the data \\cite{uriguen2015eeg}. We first applied a Butterworth bandpass filter \\cite{daud2015butterworth} to extract frequencies within the range 2-60 Hz. We then analyze the data for artifacts. A common source of artifacts in MI data is noise from electrodes located in the forehead's proximity. This is in fact data collected from the Electrooculography (EOG) channels which detect movements such as eye blinks, which may show up in the MI data. Such noise can be detected by first performing independent component analysis (ICA) -- widely used in EEG preprocessing \\cite{ica} -- which tries to decompose a signal into constituent component under the assumption of statistical independence. We then see which of the resultant components correlates most with EOG channels, and filter it out \\cite{ica-eog}. An example of ICA on subject 1 can be seen in figure \\ref{ica}. We filter out the first component which seems to be picking up an eye blink. ICA on subject 2 did not improve the results.\n\n\n\\subsection{Feature Extraction}\n\\begin{figure*}[t]\n\\centering\n\\begin{subfigure}[b]{0.49\\linewidth}\n\\centering\n\\includegraphics[height=3cm]{figures\/pca-1.png}\n\\caption{PCA visualization of subject 1's feature vector.}\n\\end{subfigure} \n\\begin{subfigure}[b]{0.49\\linewidth}\n\\centering\n\\includegraphics[height=3cm]{figures\/pca-2.png}\n\\caption{PCA visualization of subject 2's feature vector.} \n\\end{subfigure}\n\\caption{Dimensionality reduction using PCA for feature space visualization of both subjects. Subject 2's features are more informative for the motor-imagery task compared to subject 1 which is also reflected in the training accuracy. Right hand movements are labeled red while left hand movements are blue.}\n\\label{fig:pca}\n\\end{figure*}\nSeveral methods were attempted to extract features. In principle, feature extraction in BCI takes two forms: frequency band selection and channel selection (also known as spatial filtering). In regards to the former, we've previously mentioned in \\ref{arousal-eeg-background} that alpha and beta bands have been shown to be most related to MI activity. Accordingly, we use these frequency bands as our features. In the same section we observed that channels C3 and C4 are the most relevant for MI, which we can use directly without any spatial filtering. 
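As detailed next, the features are logarithmic band powers of these channels; a minimal sketch of such a computation (the sub-band edges and the trial array layout are assumptions, and the actual pipeline used gumpy) is:
\\begin{verbatim}
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate of the Graz 2b recordings (Hz)

def log_band_power(trials, band, fs=FS, order=4):
    # trials: (n_trials, n_channels, n_samples); returns log power per channel
    sos = butter(order, band, btype='bandpass', fs=fs, output='sos')
    filtered = sosfiltfilt(sos, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

def extract_features(trials):
    # Four sub-bands each for alpha (8-14 Hz) and beta (14-30 Hz);
    # the exact edges below are illustrative, not the gumpy defaults.
    sub_bands = [(8, 9.5), (9.5, 11), (11, 12.5), (12.5, 14),
                 (14, 18), (18, 22), (22, 26), (26, 30)]
    feats = [log_band_power(trials, b) for b in sub_bands]
    return np.concatenate(feats, axis=1)   # (n_trials, n_channels * n_bands)
\\end{verbatim}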
For this, instead of using raw alpha and beta patterns, we opt for logarithmic sub-band powers of said patterns (see gumpy documentation\\footnote{\\href{http:\/\/gumpy.org\/}{http:\/\/gumpy.org\/}}). Each spectrum is divided into four sub-bands. An alternative approach for feature extraction in MI classification has been the use of the \"common spatial pattern (CSP)\" algorithm \\cite{Koles2005SpatialPU}. It tries to find optimal variances of subcomponents of a signal \\cite{csp} with respect to a given task. In our experiments, however, CSP performed poorly compared to logarithmic sub-band power of alpha and beta bands. The results when CSP was applied have thus been omitted from the report, but could be reproduced in the notebook (see section \\ref{documentation}). A visualization of the features using PCA for both subjects can be seen in figure \\ref{fig:pca}. As can be observed, the features for subject 2 are more conducive to discrimination of MI. This is also verified in the training results, where every classification algorithm achieved higher accuracy for subject 2 compared to subject 1.\n\n\n\\subsection{Training}\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/training.pdf}\n \\caption{Training scheme for both classifiers. MI classifiers are trained separately for each subject (labels corresponding to right and left hand) while Cross-subject classifier trained on features of both subjects.} \\label{fig:training}\n\\end{figure}\nAs mentioned previously, we train two types of classifiers: MI per subject classifier and cross-subject classifier. The entire training procedure is visually depicted in figure \\ref{fig:training}. After doing feature extraction, we first train an MI classifier for each subject with labels 0 and 1 (left and right hand movement respectively). Subsequently, we combine data of both subjects, labelling it 0 and 1 (subject 1 and subject 2 respectively) and train the cross-subject classifier. All classifiers described in \\ref{algos} are trained in each case, the results of which can be seen in table \\ref{results}.\n\n\\paragraph{MI classification}\nThe data for each subject was divided into an 80-20 split (training-test). The features were also standardized by rescaling to zero mean and unit standard deviation. Results for both subjects were satisfactory, although subject 1's data was harder to train on compared to subject 2. This can be observed by looking at the ranges of training accuracy for both subjects [55.84-70.12 vs. 91.25-95]\\%. Subject 2's classifiers achieved both a higher average accuracy as well as lower variance. LDA performed best for subject 1, while logistic regression achieved best results for subject 2. \n\n\n\\paragraph{Cross-subject classification}\nTraining for cross-subject classifiers followed the same procedure of feature extraction with the only difference being a re-labeling of the samples from limb movements to source subject. Once again, we split the data into 80-20 (train-test) portions, though this time the data is the combined samples from both subjects. For testing the classifiers, we split the test set further into sections containing five samples (trials) each. For each section, we take the mode (most frequent prediction) of the classifier which is considered the final result. For example, if our test data has 50 samples from each subject, we portion it into 20 sections (each subject with 10 sections). 
We then feed each section to the classifier and take the majority score for that section as the classifier's prediction. As can be seen, the ensemble model outperforms the rest of the algorithms by a considerable margin. In addition to this, we also created t-SNE embeddings of the features with 2 and 3 dimensions \\cite{tsne}. The results were not up to par and have thus been left out here (they can be reproduced via the notebook discussed in \\ref{documentation}). More details can be found in \\ref{discussion}.\n\n\\begin{table}[h]\n\\caption{Summary of results. Accuracy scores for MI (both subjects) as well as cross-subject (X-sub) classification using various classifiers. Best results are shown in bold.}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\textbf{Task} & \\multicolumn{4}{|c|}{\\textbf{Classifier}} \\\\\n\\cline{2-5} \n\\textbf{} & \\textbf{Logistic Regression} & \\textbf{LDA} & \\textbf{Naive Bayes} & \\textbf{Ensemble}\\\\\n\\hline\nMI-sub 1 & 67.53\\% & \\textbf{70.12\\%} & 55.84\\% & 70.12\\% \\\\\n\\hline\nMI-sub 2 & \\textbf{95\\%} & 91.25\\% & 91.25\\% & 93.75\\% \\\\\n\\hline\nX-sub & 58.65\\% & 59.38\\% & 59.38\\% & \\textbf{68.75\\%} \\\\\n\\hline\n\\end{tabular}\n\\label{results}\n\\end{center}\n\\end{table}\n\n\\section{Discussion}\\label{discussion}\n\nThe results indicate that, assuming different emotional states impart sufficient differences in the EEG data, we can train classifiers that perform well above chance. Significant differences in the EEG signals between the two subjects were observed during feature extraction and classification. This is not an uncommon phenomenon and has been documented in the literature \\cite{pizzagalli2007electroencephalography}. Blankertz et al. show that after testing on 80 subjects, the average classifier accuracy of a binary task was 74.4\\% with a spread of 16.5\\% \\cite{BLANKERTZ20101303}. Our findings are consistent with this, as the best models for subjects 1 and 2 achieved 75.38\\% and 95\\% accuracy, respectively. This variability is generally attributed to differences in the subjects' abilities for implicit learning \\cite{kotchoubey2000learning}, performance in early neurofeedback sessions \\cite{Neumann1117} and attention spans \\cite{Daum94}. \n\nAccording to Tangermann et al., the best results on data set 2b during Competition IV were achieved using filter-bank CSP as a pre-processing step followed by a Naive Bayes classifier \\cite{tangermann2012review}. In our testing, however, vanilla CSP for feature extraction was sub-optimal. Naive Bayes was also found to trail behind the other classifiers, as seen in table \\ref{results}. We thus observe that vanilla CSP is not as performant as log band-power in our experiments, while we did not perform any experiments with filter-bank CSP.\n\nWith regard to cross-subject classification, appreciable results have been achieved by using ICA for feature extraction \\cite{tangkraingkij2009selecting} combined with a nearest-neighbor (NN) classifier. We verify the efficacy of ICA as a pre-processing step for feature extraction. Other approaches have shown PCA to be an effective step for dimensionality reduction \\cite{palaniappan2005energy}. We could not confirm this with PCA, and the more modern dimensionality reduction technique t-SNE also performed poorly in our experiments (tested with target dimensions 2 and 3). 
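\n\nFor reference, a t-SNE embedding of the kind referred to above can be computed along the following lines; scikit-learn is shown here as one possible implementation, and the perplexity value and variable names are illustrative assumptions rather than the exact settings used in our experiments.\n\\begin{verbatim}\nfrom sklearn.manifold import TSNE\n\ndef tsne_embedding(features, n_components=2, perplexity=30.0):\n    # features: (n_trials, n_features) standardized feature matrix.\n    # Returns a 2-D or 3-D embedding intended as input features\n    # for the cross-subject classifier.\n    tsne = TSNE(n_components=n_components, perplexity=perplexity,\n                init='pca', random_state=0)\n    return tsne.fit_transform(features)\n\\end{verbatim}\n\n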
There is, however, recent evidence that using t-SNE in tandem with common dictionary learning may yield good results \\cite{nishimoto2020eeg}.\n\n\\subsection{Limitations and Future Outlook}\n\nA primary limitation of this work is the lack of testing on actual subjects. While the system achieves acceptable performance on an existing dataset, we cannot conclude much about its usefulness in the real world. To make such assertions with a certain degree of confidence, we need to evaluate how quickly we can switch between the various MI classifiers based on the predictions of the emotion (cross-subject) classifier. This is also true for the calibration time at the start of each session; while we use five trials during testing and get well above-chance results, comprehensive and systematic verification of the system is in order if it is to be of any practical use.\n\nIn addition to alpha patterns, gamma bands are correlated with increased arousal \\cite{pizzagalli2007electroencephalography}, which may have carried a strong supervision signal for the classifier. Had we acquired EEG data for aroused and relaxed states of a subject, an emphasis on gamma bands would have been warranted. Since we did not have data corresponding to high and low arousal in the present case, gamma patterns were assumed not to be informative.\n\nFuture work may also look at training classifiers for more than two subjects. While two subjects suffice for the purposes of this study, as the original task was the discrimination between two emotional states of arousal, it may be worth exploring how the cross-subject classifier would scale to additional classes. This may be interpreted as having to classify not only emotional arousal but also valence (positive or negative), which may have important ethical implications.\n\nMost of the classifiers used in this project are classic algorithms and were chosen for their still prevailing use in MI BCI. However, future work may also incorporate modern approaches such as deep neural networks for MI classification \\cite{Tabar_2016}. Deep learning could also be used to formulate our problem as one of multi-task learning for both arousal and MI classification \\cite{multitask}. In this manner, the multiple classifiers could be replaced with a single one that classifies both emotional arousal and motor imagery. \n\n\\section{Interdisciplinary Work}\nThe nature of this project necessitated a multi-disciplinary approach, from understanding and systematizing human emotional arousal to developing algorithms for distinguishing both emotional states and motor function via EEG. Thus, this work borrows, incorporates and synthesizes elements from a number of disciplines, including psychology (emotional arousal), neuroscience (EEG and motor-imagery), computer graphics (virtual reality environments) and artificial intelligence (machine learning for classification). Broadly, we can categorize psychology and neuroscience as brain sciences, and computer graphics and artificial intelligence under the umbrella of informatics. Each of these two fields contributed unique methods and insights without which the project may not have come to fruition. The most valuable insight was the difficulty of training accurate machine learning algorithms for EEG. 
Although machine learning has become the dominant paradigm for classification tasks, this project demonstrates that pre-processing of the data (via techniques such as ICA and log band-power) is at least as important to the success of the system as the classifier (the results for other feature extractors can be reproduced in the provided notebook), and that even after pre-processing we have no guarantees of robust performance. Another key insight was the extent to which EEG patterns vary between different people, pointing to the difficulty of transfer learning in this domain. \n\n\\section{Conclusion}\nA major hurdle in the widespread and practical use of assistive systems based on MI-BCI is a lack of reliability. While this can have many origins, an important source, as identified by two Cybathlon teams in 2016, was related to shifts in the subject's state of emotional arousal. In this work, we present an end-to-end framework for inducing high\/low arousal in subjects, collecting EEG data, and training learning algorithms for robust MI classification. While COVID-19 enforced certain constraints on data acquisition, we were still able to develop a proof-of-concept for how emotion-robust MI-BCI systems could be trained. Our results indicate that, if the training signal contains sufficient information, i.e. each emotional state has a sufficiently distinct EEG signature, we can successfully train systems that are robust to variance in emotional arousal. A thorough study, however, needs to be conducted to determine the practicality of such a system with respect to variables such as classifier switching times and calibration periods.\n\n\\section{Acknowledgments}\nThis project would not have been possible without the aid of Nicholas Berberich, who provided constant and high-quality guidance on overall methodology, feature extraction and algorithms. We also thank Matthijs Pals for his support with MI data preprocessing and Svea Meyer for helping in the initial phase of the project as well as with explaining EEG terminology.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfkgz b/data_all_eng_slimpj/shuffled/split2/finalzzfkgz new file mode 100644 index 0000000000000000000000000000000000000000..adea2b07d87aa5140752ae8877fe51437621b12e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfkgz @@ -0,0 +1,5 @@ +{"text":"\\section{FORWARD PHYSICS AND FORWARD INSTRUMENTATION AT THE CMS INTERACTION POINT}\n\nForward physics at the LHC covers a wide range of diverse physics subjects which have in common that particles produced at small polar angles $\\theta$, and hence large values of rapidity, provide a defining characteristic. This article concentrates on their physics interest in $pp$ collisions.\n\nAt the Large Hadron Collider (LHC), where proton-proton collisions occur at center-of-mass energies of 14 TeV, the maximal possible rapidity is $y_{max} = \\ln{\\frac{\\sqrt{s}}{m_{\\pi}}}\\sim 11.5$. The two multi-purpose detectors ATLAS and CMS at the LHC are designed primarily for efficient detection of processes with large polar angles and hence high transverse momentum $p_T$. 
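\n\nAs a quick numerical cross-check of the kinematic limits quoted here, the maximal rapidity at $\\sqrt{s} = 14$~TeV and the pseudorapidity of a track at a polar angle of $1^\\circ$ (using the definition of $\\eta$ recalled in the next paragraph) can be evaluated directly; the charged pion mass is used for $m_{\\pi}$ in this illustrative sketch.\n\\begin{verbatim}\nimport math\n\nsqrt_s = 14000.0   # proton-proton centre-of-mass energy in GeV\nm_pi   = 0.1396    # charged pion mass in GeV\ntheta  = math.radians(1.0)\n\n# maximal rapidity y_max = ln(sqrt(s)) - ln(m_pi), approx. 11.5\ny_max = math.log(sqrt_s) - math.log(m_pi)\n\n# pseudorapidity of a 1 degree track, eta = -ln(tan(theta * 0.5))\neta_1deg = -math.log(math.tan(0.5 * theta))\n\nprint(round(y_max, 1), round(eta_1deg, 1))   # 11.5 4.7\n\\end{verbatim}\nThe value $\\eta \\approx 4.7$ for a $1^\\circ$ track is consistent with the coverage of about $|\\eta| = 5$ quoted below for the main ATLAS and CMS components.\n\n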
The coverage in pseudorapidity $\\eta = - \\ln{[\\tan{( \\theta \/ 2 )} ] }$ of the main components of ATLAS and CMS extends down to about $|\\theta| = 1^\\circ$ from the beam axis, or $|\\eta| = 5$.\n\nFor the CMS detector, several subdetectors with coverage beyond $|\\eta| = 5$ are currently under construction (CASTOR and ZDC sampling calorimeters) or in the proposal stage (FP420 proton taggers and fast timing detectors). \n\nFurthermore, a salient feature of the forward instrumentation around the interaction point (IP) of CMS is the presence of TOTEM~\\cite{TOTEM}. TOTEM is an approved experiment at the LHC for measuring the $pp$ elastic cross section as a function of the four-momentum transfer squared, $t$, and for measuring the total cross section with a precision of approximately 1\\%. The TOTEM experiment uses the same IP as CMS and supplements the CMS IP with several tracking devices, located inside the volume of the main CMS detector, plus near-beam proton taggers at distances of up to $\\pm 220$~m. The CMS and TOTEM collaborations have described the considerable physics potential of joint data taking in a report to the LHCC \\cite{opus}. \n\nThe kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a hadron collider. It would be even further enhanced by complementing CMS with the detectors of the FP420 proposal, which would add forward physics to the portfolio of possible discovery processes at the LHC~\\cite{fp420}.\n\nAn overview of the forward instrumentation up to $\\pm 220$~m from the CMS IP is given in Fig.~\\ref{fig:overview}. There are two suites of calorimeters with tracking detectors in front. The CMS Hadron Forward (HF) calorimeter with the TOTEM telescope T1 in front covers the region $3 < |\\eta | < 5$, while the CMS CASTOR calorimeter with the TOTEM telescope T2 in front covers $5.2 < |\\eta| < 6.6$. The CMS ZDC calorimeters will be installed at the end of the straight LHC beam-line section, at a distance of $\\pm 140$~m from the IP. Near-beam proton taggers will be installed by TOTEM at $\\pm 147$~m and $\\pm 220$~m from the IP. Further near-beam proton taggers in combination with very fast timing detectors to be installed at $\\pm 420$~m from the IP are part of the FP420 proposal.\n\n\\begin{figure}\n\\hspace*{-0.5cm}\n\\includegraphics[scale=0.32, angle = -90]{cms_totem_detectors_new.eps}\n\\caption{Layout of the forward detectors around the CMS interaction point.}\n\\label{fig:overview}\n\\end{figure}\n\n\\section{PHYSICS WITH FORWARD DETECTORS}\n\nIn the following, we describe the physics interest of the CMS CASTOR and ZDC calorimeters~\\cite{PTDR1} and the TOTEM T1 and T2 telescopes~\\cite{TOTEM}. Of particular interest are QCD measurements at values of Bjorken-$x$ as low as $x \\sim 10^{-6}$ and the resulting sensitivity to non-DGLAP dynamics, as well as forward particle and energy flow measurements. These can play an important role in tuning the Monte Carlo description of the underlying event and multiple interactions at the LHC and in constraining Monte Carlo generators used for cosmic ray studies.\n\n\\subsection{CMS CASTOR \\& ZDC calorimeters}\n\nThe two calorimeters are of interest for $pp$, $pA$ and $AA$ running at the LHC, where $A$ denotes a heavy ion. 
They are Cherenkov-light devices with electromagnetic and hadronic sections and will be present in the first LHC $pp$ runs at luminosities where event pile-up should be low.\n\nThe CASTOR calorimeters are octagonal cylinders located at $\\sim 14$~m from the IP. They are sampling calorimeters with tungsten plates as absorbers and fused silica quartz plates as active medium. The plates are inclined by $45^\\circ$ with respect to the beam axis. Particles passing through the quartz emit Cherenkov photons which are transmitted to photomultiplier tubes through aircore lightguides. The electromagnetic section is 22 radiation lengths $X_0$ deep with 2 tungsten-quartz sandwiches, while the hadronic section consists of 12 tungsten-quartz sandwiches. The total depth is 10.3 interaction lengths $\\lambda_l$. The calorimeters are read out in 16 azimuthal and 14 longitudinal segments. They do not have any segmentation in $\\eta$. The CASTOR coverage of $5.2 < |\\eta| < 6.6$ hermetically closes the total CMS calorimetric pseudorapidity range of 13 units. \n\nCurrently, funding is available only for a CASTOR calorimeter on one side of the IP. Construction is advanced, with concluding beam tests foreseen for this summer and installation in time for the 2009 LHC data taking. \n\nThe CMS Zero Degree Calorimeters, ZDC, are located inside the TAN absorbers at the ends of the straight section of the LHC beamline, between the LHC beampipes, at $\\pm 140$~m distance on each side of the IP. They are very radiation-hard sampling calorimeters with tungsten plates as absorbers and quartz fibers as active medium, read out via aircore light guides and photomultiplier tubes. The electromagnetic part, $19~X_0$ deep, is segmented into 5 units horizontally, while the hadronic part is segmented into 4 units in depth. The total depth is 6.5 $\\lambda_l$. The ZDC calorimeters have 100\\% acceptance for neutral particles with $|\\eta|>8.4$ and can measure 50~GeV photons with an energy resolution of about 10\\%. \n\nThe ZDC calorimeters are already installed and will be operational in 2008.\n\n\\subsection{TOTEM T1 \\& T2 telescopes}\n\nThe TOTEM T1 telescope consists of two arms symmetrically installed around the CMS IP in the endcaps of the CMS magnet, right in front of the CMS HF calorimeters and with $\\eta$ coverage similar to HF. Each arm consists of 5 planes of Cathode Strip Chambers (CSC) which measure 3 projections per plane, resulting in a spatial resolution of 0.36~mm in the radial and 0.62~mm in the azimuthal coordinate in test beam measurements.\n\nThe two arms of the TOTEM T2 telescope are mounted right in front of the CASTOR calorimeters, with similar $\\eta$ coverage. Each arm consists of 10 planes of 20 semi-circular modules of Gas Electron Multipliers (GEMs). The detector read-out is organized in strips and pads; resolutions of $115~\\mu$m in the radial coordinate and $16~\\mu$rad in azimuthal angle were reached in prototype test beam measurements.\n\n\\subsection{Proton-proton collisions at low $x_{Bj}$}\n\nIn order to arrive at parton-parton interactions at very low $x_{Bj}$ values, several steps in the QCD cascade initiated by the partons from the proton may occur before the final hard interaction takes place. Low-$x_{Bj}$ QCD hence offers ideal conditions for studying the QCD parton evolution dynamics. 
Measurements at the HERA $ep$ collider have explored low-$x_{Bj}$ dynamics down to values of a few times $10^{-5}$. At the LHC the minimum accessible $x$ decreases by a factor $\\sim 10$ for each 2 units of rapidity. A process with a hard scale of $Q \\sim 10$~GeV and within the acceptance of T2\/CASTOR ($\\eta = 6$) can occur at $x$ values as low as $10^{-6}$.\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.4]{pierre_m2vsx2_bw.eps}\n\\caption{Acceptance of the T2\/CASTOR detectors for Drell-Yan electrons; see text.}\n\\label{fig:DYcoverage}\n\\end{figure}\n\nForward particles at the LHC can be produced in collisions between two partons with $x_1 \\gg x_2$, in which case the hard interaction system is boosted forward. An example is Drell-Yan production of $e^+ e^-$ pairs, $q \\bar{q} \\rightarrow \\gamma^\\star \\rightarrow e^+ e^-$, a process that probes primarily the quark content of the proton. Figure~\\ref{fig:DYcoverage} shows the distribution of the invariant mass $M$ of the $e^+ e^-$ system versus the $x_{Bj}$ of one of the quarks, where $x_2$ is chosen such that $x_1 \\gg x_2$. The solid curve shows the kinematic limit $M^{max} = \\sqrt{x_2 s}$. The dotted lines indicate the acceptance window for both electrons to be detectable in T2\/CASTOR. The black points correspond to all Drell-Yan events generated with Pythia, while the green\/light grey (blue\/dark grey) ones refer to those events in which at least one electron (both electrons) lies within the T2\/CASTOR detector acceptance. For invariant masses of the $e^+ e^-$ system of $M > 10$~GeV, $x_{Bj}$ values down to $10^{-6}$ are accessible.\n\nThe rapid rise of the gluon density in the proton with decreasing values of $x_{Bj}$ observed by HERA in deep inelastic scattering cannot continue indefinitely without violating unitarity at some point. Hence, parton recombination within the proton must set in at low enough values of $x_{Bj}$, leading to non-linear terms in the QCD gluon evolution. Figure~\\ref{fig:saturation} compares, for Drell-Yan processes with both electrons within the T2\/CASTOR detector acceptance, the cross sections predicted by a PDF model without (CTEQ5L~\\cite{CTEQ}) and with (EHKQS~\\cite{EHKQS}) saturation effects. A difference of a factor of 2 is visible in the predictions. Further details can be found in~\\cite{opus}.\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.3]{pierre_dsigmadx2dy_bw.eps}\n\\caption{Comparison of the cross section prediction of a model without (CTEQ5L) and with (EHKQS) saturation for Drell-Yan events in which both electrons are detected in T2\/CASTOR.}\n\\label{fig:saturation}\n\\end{figure}\n\nComplementary information on the QCD evolution at low $x_{Bj}$ can be gained from forward jets. The DGLAP evolution~\\cite{DGLAP} assumes that parton emission in the cascade is strongly ordered in transverse momentum, while in the BFKL evolution~\\cite{BFKL} no ordering in $k_t$ is assumed, but strong ordering in $x$. At small $x_{Bj}$, the difference between the two approaches is expected to be most pronounced for hard partons created at the beginning of the cascade, at pseudorapidities close to the proton, i.e. in the forward direction. Monte Carlo generator studies indicate that the resulting excess of forward jets with high $p_T$, observed at HERA, might be measurable with T2\/CASTOR. 
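\n\nThe $x_{Bj}$ reach quoted above can be cross-checked with the leading-order Drell-Yan kinematics $x_{1,2} = (M\/\\sqrt{s})\\, e^{\\pm y}$, where $y$ is the rapidity of the lepton pair; the short sketch below is only meant to reproduce the orders of magnitude discussed in this subsection.\n\\begin{verbatim}\nimport math\n\nsqrt_s = 14000.0   # GeV\nmass   = 10.0      # dilepton invariant mass in GeV\ny      = 6.0       # pair rapidity at the edge of the T2-CASTOR acceptance\n\n# leading-order kinematics: x1 * x2 * s = M^2 and y = 0.5 * ln(x1 \/ x2)\nx1 = (mass \/ sqrt_s) * math.exp(+y)\nx2 = (mass \/ sqrt_s) * math.exp(-y)\n\nprint('x1 = %.1e, x2 = %.1e' % (x1, x2))   # x1 = 2.9e-01, x2 = 1.8e-06\n\\end{verbatim}\nChanging $y$ by two units changes $x_2$ by a factor $e^{2} \\approx 7$, consistent with the factor $\\sim 10$ per two units of rapidity quoted at the beginning of this subsection.\n\n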
Another observable sensitive to BFKL-like QCD evolution dynamics is dijet production with a large rapidity separation, which enhances the available phase space for BFKL-like parton radiation between the jets. Likewise, dijets separated by a large rapidity gap are of interest since they indicate a process in which no color flow occurs in the hard scatter but where, contrary to the traditional picture of soft Pomeron exchange, a high transverse momentum transfer also occurs across the gap. \n\n\\subsection{Multiplicity \\& energy flow}\n\nThe forward detectors can be valuable tools for Monte Carlo tuning.\n\nThe hard scatter in hadron-hadron collisions takes place in a dynamic environment, referred to as the ``underlying event'' (UE), where additional soft or hard interactions between the partons and initial and final state radiation occur. The effect of the UE cannot be disentangled on an event-by-event basis and needs to be included by means of tuning Monte Carlo multiplicity and energy flow predictions to data. The predictive power of such tunes obtained from Tevatron data is very limited, and ways need to be found to constrain the UE at LHC energies with LHC data. As shown in~\\cite{Borras}, the forward detectors are sensitive to features of the UE which central detector information alone cannot constrain.\n\n\\begin{figure}[!b]\n\\includegraphics[scale =0.55]{cosmics_Eflow.epsi}\n\\caption{Energy flow as predicted by Monte Carlo generators used for the description of cosmic ray induced air showers~\\cite{opus}.}\n\\label{fig:cosmics}\n\\end{figure}\n\nAnother area with high uncertainties is modelling the interaction of primary cosmic rays in the PeV energy range with the atmosphere. Their rate of occurrence per year is too low for reliable quantitative analysis. The center-of-mass energy in $pp$ collisions at the LHC corresponds to an energy of 100 PeV in a fixed-target collision. Figure~\\ref{fig:cosmics} shows the energy flow as a function of pseudorapidity as predicted by different Monte Carlos in use in the cosmic ray community. Clear differences in the predictions are visible in the acceptance region of T2\/CASTOR and the ZDC.\n\n\\section{PHYSICS WITH A VETO ON FORWARD DETECTORS}\n\nEvents of the type $pp \\rightarrow pXp$ or $pp \\rightarrow Xp$, where no color exchange takes place between the proton(s) and the system $X$, can be caused by $\\gamma$ exchange or by diffractive interactions. In both cases, the absence of color flow between the proton(s) and the system $X$ results in a large gap in the rapidity distribution of the hadronic final state. Such a gap can be detected by requiring the absence of a signal in the forward detectors. In the following, we discuss three exemplary processes which are characterized by a large rapidity gap in their hadronic final state.\n\n\\subsection{Diffraction with a hard scale}\n\nDiffraction, traditionally thought of as a soft process and described in Regge theory, can also occur with a hard scale ($W$, dijets, heavy flavors), as has been experimentally observed at UA8, HERA and the Tevatron. In the presence of a hard scale, diffractive processes can be described in perturbative QCD (pQCD) and their cross sections can be factorized into the cross section of the hard scatter and a diffractive parton distribution function (dPDF). In diffractive hadron-hadron scattering, rescattering between spectator particles breaks the factorization. The so-called rapidity gap survival probability quantifies this effect~\\cite{survival}. 
A measure of it can be obtained from the ratio of diffractive to inclusive processes with the same hard scale. At the Tevatron, this ratio is found to be ${\\cal O}(1 \\%)$~\\cite{tevatron}. Theoretical expectations for the LHC vary from a fraction of a percent to as much as 30\\%~\\cite{predLHC}. \n\nSingle diffractive $W$ production, $pp \\rightarrow pX$, where $X$ includes a $W$, is an example of diffraction with a hard scale at the LHC and is particularly sensitive to the quark component of the proton dPDF in an as yet unmeasured region. In the absence of event pile-up, a selection is possible based on the requirement that there be no activity above noise level in the CMS forward calorimeters HF and CASTOR.\n\n\\begin{figure}\n\\includegraphics[scale=0.4]{nHFvsnCASTOROnlyMinus_nTrkMax_1_100pb_new.eps}\n\\caption{Number of towers with activity above noise level in HF versus in CASTOR for single diffractive $W$ production and for an integrated luminosity of 100~${\\rm pb}^{-1}$~\\cite{SDW}.}\n\\label{fig:SDW}\n\\end{figure}\n\nFigure~\\ref{fig:SDW} shows the number of towers with activity above noise level in HF versus in CASTOR. The decay channel is $W \\rightarrow \\mu \\nu$ and a rapidity gap survival factor of 5\\% is assumed in the diffractive Monte Carlo sample (Pomwig). The number of events is normalized to an integrated luminosity of 100~$\\rm pb^{-1}$ of single interactions (i.e. no event pile-up). In the combined Pomwig + Pythia Monte Carlo sample, a clear excess of ${\\cal O}(100)$ events is visible in the bin [n(CASTOR), n(HF)] = [0,0]. The ratio of diffractive to non-diffractive events in the [0,0] bin of approximately 20 demonstrates the feasibility of observing single diffractive $W$ production at the LHC.\n\nThe study assumes that CASTOR will be available only on one side. A second CASTOR in the opposite hemisphere and the use of T1 and T2 will improve the observable excess further. \n\n\\subsection{Exclusive dilepton production}\n\nExclusive dimuon and dielectron production with no significant additional activity in the CMS detector occurs with a high cross section in gamma-mediated processes at the LHC, either as the pure QED process $\\gamma \\gamma \\rightarrow ll$ or in $\\Upsilon$ photoproduction\\footnote{Photoproduction of $J\/\\psi$ mesons is also possible, but difficult to observe because of the trigger thresholds for leptons in CMS.}. A feasibility study to detect them with CMS was presented in this workshop~\\cite{Hollar}. \n\nThe event selection is based on requiring that, outside of the two leptons, no other significant activity is visible within the central CMS detector, neither in the calorimeter nor in the tracking system. In 100~$\\rm pb^{-1}$ of single interaction data, ${\\cal O} (700)$ events in the dimuon channel and ${\\cal O} (70)$ in the dielectron channel can be selected. Events in which one of the protons in the process does not stay intact but dissociates are the dominant source of background and are comparable in statistics to the signal. This background can be reduced significantly, by about 2\/3 in a configuration with a ZDC on each side and a CASTOR on only one side of the IP, by means of a veto condition on activity in CASTOR and the ZDC.\n\nThe theoretically very precisely known cross section of the almost pure QED process $pp \\rightarrow pllp$ via $\\gamma$ exchange is an ideal calibration channel. 
With $100~\\rm pb^{-1}$ of data, an absolute luminosity calibration with 4\\% precision is feasible. Furthermore, exclusive dimuon production is an ideal alignment channel with high statistics for the proposed proton taggers at 420~m from the IP. Upsilon photoproduction can constrain QCD models of diffraction, as discussed in the next section. The $\\gamma \\gamma \\rightarrow e^+ e^-$ process has recently been observed at the Tevatron~\\cite{exclTevatron}.\n\n\\subsection{Upsilon photoproduction} \n\nAssuming the STARLIGHT~\\cite{starlight} Monte Carlo cross section prediction, the 1S, 2S and 3S resonances will be clearly visible in $100~\\rm pb^{-1}$ of single interaction data. With their average $\\gamma p$ center-of-mass energy of $\\simeq 2400~\\rm GeV^2$, they will extend the accessible range of the HERA measurement of the $W_{\\gamma p}$ dependence of $\\sigma (\\gamma p \\rightarrow \\Upsilon(1 S) p)$ by one order of magnitude. \n\n\\begin{figure}[htb]\n\\includegraphics[scale=0.4]{UpsilonSignalPAS.eps}\n\\caption{Invariant mass of exclusive dimuon production in the Upsilon mass region~\\cite{ExclDileptons}.}\n\\label{fig:Upsilon}\n\\end{figure}\n\nBy means of the $p_T^2$ value of the $\\Upsilon$ as an estimator of the transferred four-momentum squared, $t$, at the proton vertex, it might be possible to measure the $t$ dependence of the cross section. This dependence is sensitive to the two-dimensional gluon distribution of the proton and would give access to the generalized parton distribution function (GPD) of the proton.\n\n\\section{PHYSICS WITH NEAR-BEAM PROTON TAGGERS}\n\nFor slightly off-momentum protons, the LHC beamline with its magnets is essentially a spectrometer. If a scattered proton is bent sufficiently, but little enough to remain within the beam-pipe, it can be detected by means of detectors inserted into the beam-pipe that approach the beam envelope as closely as possible. At high luminosity at the LHC, the large rapidity gaps typical of diffractive events or events with $\\gamma$ exchange tend to be filled in by particles from overlaid pile-up events. Hence tagging the outgoing scattered proton(s) becomes the only means of detection at high luminosities.\n\n\\subsection{TOTEM and FP420 proton taggers}\n\nThe TOTEM proton taggers, located at $\\pm 147$~m and $\\pm 220$~m from the IP, each consist of silicon strip detectors housed in movable Roman Pots~\\cite{TOTEM}. The detector design is such that the beam can be approached to a minimum distance of $10 \\sigma + 0.5$~mm. With nominal LHC beam optics, scattered protons from the IP are within the acceptance of the taggers at 220~m when their fractional momentum loss $\\xi$ satisfies $0.02 < \\xi < 0.2$. \n\n\\begin{figure}\n\\includegraphics[scale=0.35, angle =-90]{fp420coverage.eps}\n\\caption{Acceptance in $x_L = 1 - \\xi$, where $\\xi$ is the fractional momentum loss of the scattered proton, of the TOTEM and FP420 proton taggers. The data points shown are from ZEUS~\\cite{zeus}.}\n\\label{fig:xiCoverage}\n\\end{figure}\n\nIn order to achieve acceptance at smaller values of $\\xi$ with nominal LHC beam optics, detectors have to be located further away from the IP. Proton taggers at $\\pm 420$~m from the IP have an acceptance of $0.002 < \\xi < 0.02$, complementing the taggers at 220~m, as shown in Figure~\\ref{fig:xiCoverage}. 
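\n\nThese acceptance windows can be translated into the mass range of a centrally produced system through the approximate leading-order relation $M \\simeq \\sqrt{\\xi_1 \\xi_2 s}$ between the proton fractional momentum losses and the central mass $M$; the sketch below simply combines this relation with the $\\xi$ ranges quoted above and ignores optics and resolution effects.\n\\begin{verbatim}\nimport math\n\nsqrt_s  = 14000.0             # GeV\nacc_220 = (0.02, 0.2)         # xi acceptance of the taggers at 220 m\nacc_420 = (0.002, 0.02)       # xi acceptance of the taggers at 420 m\n\ndef mass_window(acc_a, acc_b, sqrt_s=sqrt_s):\n    # central mass M = sqrt(xi1 * xi2 * s) when one proton is tagged in\n    # station a and the other in station b; returns (M_min, M_max) in GeV\n    m_min = sqrt_s * math.sqrt(acc_a[0] * acc_b[0])\n    m_max = sqrt_s * math.sqrt(acc_a[1] * acc_b[1])\n    return m_min, m_max\n\nprint(mass_window(acc_220, acc_220))   # approx. (280, 2800) GeV\nprint(mass_window(acc_420, acc_220))   # approx. ( 89,  885) GeV\nprint(mass_window(acc_420, acc_420))   # approx. ( 28,  280) GeV\n\\end{verbatim}\nIn this simple picture, a central system with a mass of order 100~GeV, such as a light Higgs boson, is only accessible when at least one proton is tagged at 420~m, which motivates the FP420 proposal described next.\n\n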
The proposal~\\cite{fp420} of the FP420 R\\&D collaboration foresees employing 3-D silicon, a novel and extremely radiation-hard silicon technology, for the proton taggers. Additional fast timing Cherenkov detectors will be capable of determining, within a resolution of a few millimeters, whether the tagged proton came from the same vertex as the hard scatter visible in the central CMS detector. In order to comply with the space constraints of the location within the cryogenic region of the LHC, these detectors will be attached to a movable beam-pipe, with the help of which they can approach the beam to within 3~mm.\n\nThe FP420 proposal is currently under scrutiny in CMS and ATLAS. If approved, installation could proceed in 2010, after the LHC start-up.\n\n\\subsection{Physics potential}\n\nForward proton tagging capabilities enhance the physics potential of CMS. They would render possible a precise measurement of the mass and quantum numbers of the Higgs boson, should it be discovered by traditional searches. They also augment the CMS discovery reach for Higgs production in the minimal supersymmetric extension (MSSM) of the Standard Model (SM) and for physics beyond the SM in $\\gamma p$ and $\\gamma \\gamma$ interactions.\n\nA case in point is the central exclusive production (CEP) process~\\cite{CEP}, $pp \\rightarrow p + \\phi + p$, where the plus sign denotes the absence of hadronic activity between the outgoing protons, which survive the interaction intact, and the state $\\phi$. The final state consists solely of the scattered protons, which may be detected in the forward proton taggers, and the decay products of $\\phi$, which can be detected in the central CMS detector. Selection rules force the produced state $\\phi$ to have $J^{PC} = n^{++}$ with $n = 0, 2, \\ldots$. This process hence offers an experimentally very clean laboratory for the discovery of any particle with these quantum numbers that couples strongly to gluons. Additional advantages are the possibility to determine the mass of the state $\\phi$ with excellent resolution from the scattered protons alone, independently of its decay products, and the possibility, unique at the LHC, to determine the quantum numbers of $\\phi$ directly from the azimuthal asymmetry between the scattered protons.\n\n\\begin{figure}[htb]\n\\includegraphics[angle=-90]{marek.eps}\n\\caption{Five $\\sigma$ discovery contours for central exclusive production of the heavier CP-even Higgs boson $H$~\\cite{Tasevsky}. See text for details.}\n\\label{fig:higgs}\n\\end{figure}\n\nIn the case of a SM Higgs boson with a mass close to the current exclusion limit, which decays preferentially into $b \\bar{b}$, CEP improves the achievable signal-to-background ratio dramatically, to ${\\cal O}(1)$~\\cite{fp420,lightHiggs}. In certain regions of the MSSM, generally known as the ``LHC wedge region'', the heavy MSSM Higgs bosons would escape detection at the LHC. There, the preferred search channels at the LHC are not available because the heavy Higgs bosons decouple from gauge bosons while their couplings to $b \\bar{b}$ and $\\tau \\bar{\\tau}$ are enhanced at high $\\tan{\\beta}$. Figure~\\ref{fig:higgs} depicts the 5~$\\sigma$ discovery contour for the $H \\rightarrow b \\bar{b}$ channel in CEP in the $M_A - \\tan{\\beta}$ plane of the MSSM within the $M_h^{max}$ benchmark scenario with $\\mu = +200$~GeV and for different integrated luminosities. 
The values of the mass of the heavier CP-even Higgs boson, $M_H$, are indicated by contour lines. The dark region corresponds to the parameter region excluded by LEP. \n\nForward proton tagging will also give access to a rich QCD program on hard diffraction at high luminosities, where event pile-up is significant and renders undetectable the gaps in the hadronic final state otherwise typical of diffraction. Detailed studies with high statistical precision will be possible on skewed, unintegrated gluon densities; on Generalized Parton Distributions, which contain information on the correlations between partons in the proton; and on the rapidity gap survival probability, a quantity closely linked to soft rescattering effects and the features of the underlying event at the LHC.\n\nForward proton tagging also provides the possibility for precision studies of $\\gamma p$ and $\\gamma \\gamma$ interactions at center-of-mass energies never reached before. Anomalous top production, anomalous gauge boson couplings, exclusive dilepton production and quarkonia production are possible topics, as was discussed in detail at this workshop.\n\n\\section{SUMMARY}\n\nForward physics in $pp$ collisions at the LHC covers a wide range of diverse physics subjects (low-$x_{Bj}$ QCD, hard diffraction, $\\gamma \\gamma$ and $\\gamma p$ interactions) which have in common that particles produced at large values of rapidity provide a defining characteristic. For the CMS detector, several subdetectors with forward $\\eta$ coverage are currently under construction (CASTOR, ZDC) or in the proposal stage (FP420). The TOTEM experiment supplements the CMS IP with several tracking devices and near-beam proton taggers at distances of up to $\\pm 220$~m. The kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a hadron collider. It would be even further enhanced by complementing CMS with the detectors of the FP420 proposal, which would add forward physics to the portfolio of possible discovery processes at the LHC.\n\n
TOTEM is an approved experiment at \nthe LHC for measuring the $pp$ elastic cross section as a function of the four-momentum\ntransfer squared, $t$, and for measuring the total cross section with a precision of\napproximately 1\\%. The TOTEM experiment uses the same IP as CMS and supplements around \nthe CMS IP several tracking devices, located inside of the volume \nof the main CMS detector, plus near-beam proton taggers a distance up to $\\pm 220$~m. \nThe CMS and TOTEM collaborations have described the considerable physics potential of\njoint data taking in a report to the LHCC \\cite{opus}. \n\nThe kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a\nhadron collider. It would be even further enhanced by complementing CMS with the\ndetectors of the FP420 proposal which would induce forward physics into the portfolio of\npossible discovery processes at the LHC~\\cite{fp420}.\n\nAn overview of the forward instrumentation up to $\\pm 220$~m from the CMS IP is given in \nFig.~\\ref{fig:overview}. There are two suites of calorimeters with tracking detectors in front.\nThe CMS Hadron Forward (HF) calorimeter with the TOTEM telescope T1 in front \ncovers the region $3 < |\\eta | < 5$, the CMS CASTOR calorimeter with the TOTEM telescope \nT2 in front covers $5.2 < |\\eta| < 6.6$. The CMS ZDC calorimeters will be \ninstalled at the end of the straight LHC beam-line section, at a distance of \n$\\pm 140$~m from the IP. Near-beam proton taggers will be installed by TOTEM at \n$\\pm 147$~m and $\\pm 220$~m from the IP.\nFurther near-beam proton taggers in combination with very fast timing detectors to be\ninstalled at $\\pm 420$~m from the IP are part of the FP420 proposal.\n\n\\begin{figure}\n\\hspace*{-0.5cm}\n\\includegraphics[scale=0.32, angle = -90]{cms_totem_detectors_new.eps}\n\\caption{Layout of the forward detectors around the CMS interaction point.}\n\\label{fig:overview}\n\\end{figure}\n\n\n\n\n\\section{PHYSICS WITH FORWARD DETECTORS}\n\nIn the following, we describe the physics interest of the CMS CASTOR and ZDC \ncalorimeters~\\cite{PTDR1} \nand the TOTEM T1 and T2 telescopes~\\cite{TOTEM}. Of particular interest are \nQCD measurements at values of Bjorken-$x$ as low as $x \\sim 10^{-6}$ and the resulting \nsensitivity to non-DGLAP dynamics, \nas well as forward particle and energy flow measurements. These can play an important \nrole in tuning the Monte Carlos description of underlying event and multiple interactions\nat the LHC and in constraining Monte Carlo generators used for cosmic ray studies.\n\n\\subsection{CMS CASTOR \\& ZDC calorimeters}\n\nThe two calorimeters are of interest for $pp$, $pA$ and $AA$ running at the LHC, \nwhere $A$ denotes a heavy ion. They are Cherenkov-light devices with electromagnetic \nand hadronic sections and will be present in \nthe first LHC $pp$ runs at luminosities where event pile-up should be low.\n\nThe CASTOR calorimeters are octagonal cylinders located at $\\sim 14$~m from the IP.\nThey are sampling calorimeters with tungsten plates as absorbers and fused silica quartz \nplates as active medium. The plates are inclined by $45^\\circ$ with respect to the \nbeam axis. Particles passing through the quartz emit Cherenkov\nphotons which are transmitted to photomultiplier tubes through aircore lightguides.\nThe electromagnetic section is 22 radiation lengths $X_0$ deep\nwith 2 tungsten-quartz sandwiches, the hadronic section consists of 12 tungsten-quartz\nsandwiches. 
The total depth is 10.3 interaction lengths $\\lambda_l$. The calorimeters\nare read out segmented azimuthally in 16 segments and logitudinally in 14 segments. \nThey do not have any segmentation in $\\eta$. The CASTOR coverage of \n$5.2 < |\\eta| < 6.6$ closes hermetically the total CMS calorimetric pseudorapidity range over\n13 units. \n\nCurrently, funding is available only for a CASTOR calorimeter on one side of the IP.\nConstruction is advanced, with concluding beamtests foreseen for this summer and \ninstallation in time for the 2009 LHC data taking. \n\nThe CMS Zero Degree Calorimeters, ZDC, are located inside the TAN absorbers \nat the ends of the straight section of \nthe LHC beamline, between the LHC beampipes, at $\\pm 140$~m distance on each side of the \nIP. They are very radiation-hard sampling calorimeters \nwith tungsten plates as absobers and as active medium quartz fibers read out via\naircore light guides and photomultiplier tubes.\nThe electromagnetic part, $19 X_0$ deep, is segmented into 5 units horizontally, the \nhadronic part into 4 units in depth. The total depth is 6.5 $\\lambda_l$. The ZDC \ncalorimeters have 100\\% acceptance for neutral particles with $|\\eta|>8.4$ and can measure\n50~GeV photons with an energy resolution of about 10\\%. \n\nThe ZDC calorimeters are already installed and will be operational already in 2008.\n\n\n\\subsection{TOTEM T1 \\& T2 telescopes}\n\nThe TOTEM T1 telescope consists of two arms symmetrically installed around the CMS IP \nin the endcaps of the\nCMS magnet, right in front of the CMS HF calorimeters and with $\\eta$ coverage similar to\nHF.\nEach arm consists of 5 planes of Cathod Strip Chambers (CSC) which measure\n3 projections per plane, resulting in a spatial resolution of 0.36~mm in the radial and\n0.62~mm in the azimuthal coordinate in test beam measurements.\n\nThe two arms of the TOTEM T2 telescope are mounted right in front of the CASTOR \ncalorimeters, with similar $\\eta$ coverage. Each arm consists of 10 planes of 20\nsemi-circular modules of Gas Electron Multipliers (GEMs). The detector read-out is\norganized in strips and pads, a resolution of $115~\\mu $m for the radial coordinate and\nof $16~\\mu$rad in azimuthal angle were reached in prototype test beam measurements.\n\n\n\\subsection{Proton-proton collisions at low $x_{Bj}$}\n\nIn order to arrive at parton-parton interactions at very low $x_{Bj}$ values, several \nsteps in the QCD cascade initiated by the partons from the \nproton may occur before the final hard interaction takes place. Low-$x_{Bj}$ QCD hence offers\nideal conditions for studying the QCD parton evolution dynamics. Measurements at the\nHERA $ep$ collider have explored low-$x_{Bj}$ dynamics down to values of a few $10^{-5}$.\nAt the LHC the minimum accessible $x$ decreases by a factor $\\sim 10$ for each\n2 units of rapidity. 
A process with a hard scale of $Q ~ 10$~GeV and within the \nacceptance of T2\/CASTOR ($\\eta = 6$) can occur at $x$ values as low as \n$10^{-6}$.\n\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.4]{pierre_m2vsx2_bw.eps}\n\\caption{Acceptance of the T2\/CASTOR detectors for Drell-Yan electrons, see text.}\n\\label{fig:DYcoverage}\n\\end{figure}\n\n\nForward particles at the LHC can be produced in collisions between two partons with\n$x_1 >> x_2$, in which case the hard interaction system is boosted forward.\nAn example is Drell-Yan production of $e^+ e^-$ pairs, \n$ q q \\rightarrow \\gamma^\\star \\rightarrow e^+ e^-$, a process that probes primarily \nthe quark content of the proton. Figure~\\ref{fig:DYcoverage} shows the distribution of the invariant\nmass $M$ of the $e^+ e^-$ system versus the $x_{Bj}$ of one of the quarks, where\n$x_2$ is chosen such that $x_1 >> x_2$. The solid curve shows the kinematic limit\n$M^{max} = \\sqrt{x_2 s}$. The dotted lines indicate the acceptance window for both\nelectrons to be detectable in T2\/CASTOR.\nThe black points correspond to any of the Drell-Yan events generated \nwith Pythia, the green\/light grey (blue\/dark grey ) ones refer to those events in which at least one (both)\nelectron lies within the T2\/CASTOR detector acceptance. For invariant masses of the $e^+ e^-$ \nsystem of $M> 10$~GeV, $x_{Bj}$ values down to $10^{-6}$ are accessible.\n\nThe rapid rise of the gluon density in the proton with decreasing values of $x_{Bj}$\nobserved by HERA in deep inelastic scattering cannot continue indefinitely without violating\nunitarity at some point. Hence, parton recombination within the proton must set in at low\nenough values of $x_{Bj}$ and leads to non-linear terms in the QCD gluon evolution. \nFigure~\\ref{fig:saturation} compares for \nDrell Yan processes with both electrons within the T2\/CASTOR detector acceptance the cross\nsection predicted by a PDF model without (CTEQ5L~\\cite{CTEQ}) and with (EHKQS~\\cite{EHKQS}) \nsaturation effects. A difference of a factor 2 is visible in the predictions. Further details \ncan be found in~\\cite{opus}.\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[scale =0.3]{pierre_dsigmadx2dy_bw.eps}\n\\caption{Comparison of the cross section prediction of a model without (CTEQ5L) and \nwith (EHKQS) saturation for Drell-Yan events in which both electrons are detected in T2\/CASTOR.}\n\\label{fig:saturation}\n\\end{figure}\n\n\nComplementary information on the QCD evolution at low $x_{Bj}$ can be gained from\nforward jets. The DGLAP evolution~\\cite{DGLAP} \nassumes that parton emission in the cascade is strongly \nordered in transverse momentum while in the BFKL evolution~\\cite{BFKL}, \nno ordering in $k_t$ is assumed,\nbut strong ordering in $x$. At small $x_{Bj}$, the difference between the two approaches is\nexpected to be most pronounced for hard partons created at the beginning of the cascade, \nat pseudorapidities close to the proton, i.e. in the forward direction. Monte Carlo generator\nstudies indicate that the resulting excess of forward jets with high $p_T$, observed\nat HERA, might be measurable with T2\/CASTOR. 
Another observable sensitive to\nBFKL-like QCD evolution dynamics are dijets with large rapidity separation, which \nenhances the available phase space for BFKL-like parton radiation between the jets.\nLikewise dijets separated by a large rapidity gap are of interest since they indicate\na process in which no color flow occurs in the hard scatter but where, contrary to the \ntraditional picture of soft Pomeron exchange, also a high transverse momentum transfer \noccurs across the gap. \n\n\\subsection{Multiplicity \\& energy flow}\n\nThe forward detectors can be valuable tools for Monte Carlo tuning.\n\n\nThe hard scatter in hadron-hadron collisions takes place in a dynamic environment,\nrefered to as the ``underlying event'' (UE), where\nadditional soft or hard interactions between the partons and \ninitial and final state radiation occur. The effect of the UE can not be disentangled on an\nevent-by-event basis and needs to be included by means of tuning Monte Carlo multiplicities \nand energy flow predictions to data. The predictive power of these tunes obtained \nfrom Tevatron data is very limited, and ways need to be found to constrain the UE at LHC \nenergies with LHC data. As shown in~\\cite{Borras}, the forward detectors are sensitive\nto features of the UE which central detector information alone cannot constrain.\n\n\\begin{figure}[!b]\n\\includegraphics[scale =0.55]{cosmics_Eflow.epsi}\n\\caption{Energy flow as predicted by Monte Carlo generators used for the description of \ncosmic ray induced air showers~\\cite{opus}.}\n\\label{fig:cosmics}\n\\end{figure}\n\nAnother area with high uncertainties is modelling the interaction of primary cosmic rays in \nthe PeV energy range with the atmosphere. Their rate of occurance per year is too low for\nreliable quantitative analysis. The center-of-mass energy in $pp$ collisions at the LHC \ncorresponds to 100 PeV energy in a fixed target collision. Figure~\\ref{fig:cosmics} shows the \nenergy flow\nas function of pseudorapidity as predicted by different Monte Carlos in use in the cosmic ray\ncommunity. Clear differences in the predictions are visible in the acceptance region of\nT2\/CASTOR and ZDC.\n\n\n\n\\section{PHYSICS WITH A VETO ON FORWARD DETECTORS}\n\nEvents of the type $pp \\rightarrow pXp$ or $pp \\rightarrow Xp$, where no color exchange\ntakes place between the proton(s) and the system $X$, can be caused by $\\gamma$ exchange,\nor by diffractive interactions. In both cases, the absence of color flow between the\nproton(s) and the system $X$ results in a large gap in the rapidity distribution of the\nhadronic final state. Such a gap can be detected by requiring the absence of a signal in \nthe forward detectors. In the following, we discuss three exemplary processes which are \ncharacterized by a large rapidity gap in their hadronic final state.\n\n\\subsection{Diffraction with a hard scale}\n\nDiffraction, traditionally thought of as soft process and described in Regge theory, can also\noccur with a hard scale ($W$, dijets, heavy flavors) as\nhas been experimentally observed at UA8, HERA and Tevatron. In the presence of a hard scale,\ndiffractive processes can be described in perturbative QCD (pQCD) and their cross sections\ncan be factorized into that one of the hard scatter and a diffractive particle \ndistribution function (dPDF). In diffractive hadron-hadron scattering, rescattering between\nspectator particles breaks the factorization. The so-called rapidity gap survival \nprobability quantifies this effect~\\cite{survival}. 
A measure for it can be obtained by the ratio of\ndiffractive to inclusive processes with the same hard scale. At the Tevatron, the ratio \nis found to be ${\\cal O}(1 \\%)$~\\cite{tevatron}.\nTheoretical expectations for the LHC vary from a fraction of a percent to as much as \n30\\%~\\cite{predLHC}. \n\nSingle diffractive $W$ production, $pp \\rightarrow pX$, where $X$ includes a $W$, \nis an example for diffraction with a hard scale at the LHC and is in \nparticular sensitive to the quark component of the proton dPDF in an as-of-yet unmeasured \nregion. In the absence of event pile-up, a selection is possible based on the requirement\nthat there be no activity above noise level in the CMS forward calorimeters HF and CASTOR.\n\n\\begin{figure}\n\\includegraphics[scale=0.4]{nHFvsnCASTOROnlyMinus_nTrkMax_1_100pb_new.eps}\n\\caption{Number of towers with activity above noise level in HF versus in CASTOR for\nsingle diffractive $W$ production and for an integrated luminosity of 100~${\\rm pb}^{-1}$~\\cite{SDW}}\n\\label{fig:SDW}\n\\end{figure}\n\nFigure~\\ref{fig:SDW} shows the number of towers with activity above noise level in HF versus \nin CASTOR. The decay channel is $W \\rightarrow \\mu \\nu$ and a rapidity gap survical factor \nof 5\\% is assumed in the diffractive Monte Carlo sample (Pomwig). The number of events is\nnormalized to an integrated luminosity of 100~$\\rm pb^{-1}$ of single interactions (i.e. no\nevent pile-up). In the combined Pomwig + Pythia Monte Carlo sample, a clear excess in the \nbin [n(Castor), n(HF)] = [0,0] is visible, of ${\\cal O}(100)$ events. The ratio of diffraction \nto non-diffraction in the [0,0] bin of approximately 20 demonstrate the feasibility of \nobserving single diffractive $W$ production at the LHC.\n\nThe study assumes that CASTOR will be available only on one side. A second CASTOR in the \nopposite hemisphere and the use of T1, T2 will improve the observable excess \nfurther. \n\n\\subsection{Exclusive dilepton production}\n\nExclusive dimuon and dielectron production with no significant additional\nactivity in the CMS detector occurs with high cross section in\ngamma-mediated processes at the LHC, either as the pure QED process \n$\\gamma \\gamma \\rightarrow ll$ \nor in $\\Upsilon$ photoproduction~\\footnote{Photoproduction of $J\/psi$ mesons is also \npossible, but difficult to observe because of the trigger thresholds for leptons in CMS.} \nA feasibility study to detect them with CMS was presented in this\nworkshop~\\cite{Hollar}. \n\nThe event selection is based on requiring that outside of the two leptons, no other \nsignificant activity is visible within the central CMS detector, neither in the calorimeter\nnor in the tracking system. In 100 $\\rm pb^{-1}$ of single interaction data, ${\\cal O} (700)$\nevents in the dimuon channels and ${\\cal O} (70)$ in the dielectron channel can be selected.\nEvents in which one of the protons in the process does not stay intact but dissociates \nare the dominant source of background and are comparable in statistics to the signal. \nThis background can be significantly reduced by means of a veto condition on activity in CASTOR\nand ZDC, in a configuration with a ZDC on each side and a CASTOR on only one side of the IP\nby 2\/3.\n\nThe theoretically very precisely known cross section of the almost pure QED process \n$pp \\rightarrow pllp$ via $\\gamma$ exchange is an ideal calibration channel. 
With\n$100 \\rm pb^{-1}$ of data, an absolute luminosity calibration with 4\\% precision is feasible.\nFuthermore, exclusive dimuon production is an ideal alignment channel with high statistics \nfor the proposed proton taggers at 420~m from the IP. Upsilon photoproduction can constrain \nQCD models of diffraction, as discussed in the next section. \nThe $\\gamma \\gamma \\rightarrow e^+ e^-$ process has recently been observed at the Tevatron~\\cite{exclTevatron}.\n\n\n\\subsection{Upsilon photoproduction} \n\nAssuming the STARLIGHT~\\cite{starlight} \nMonte Carlo cross section prediction, the 1S, 2S and 3S resonances\nwill be clearly visible in $100 \\rm pb^{-1}$ of single interaction data. With their average\n$\\gamma p$ center-of-mass energy of $ \\simeq 2400 \\rm GeV^2$ they will extend\nthe accessible range of the HERA measurement of the $W_{\\gamma p}$ dependence of \n$\\sigma (\\gamma p \\rightarrow \\Upsilon(1 S) p)$ by one order of magnitude. \n\n\n\\begin{figure}[htb]\n\\includegraphics[scale=0.4]{UpsilonSignalPAS.eps}\n\\caption{Invariant mass of exclusive dimuon production in the Upsilon mass region~\\cite{ExclDileptons}}\n\\label{fig:Upsilon}\n\\end{figure}\n\nBy means of the $p_T^2$ value of the $\\Upsilon$ as estimator of the transfered four-momentum \nsquared, $t$, at the proton vertex, it might be possible to measure the $t$ dependence of the \ncross section. This dependence is sensitive to the two-dimensional gluon distribution of the \nproton and would give access to the generalized parton distribution function (GPD) of the \nproton.\n\n\\section{PHYSICS WITH NEAR-BEAM PROTON TAGGERS}\n\nFor slightly off-momentum protons, the LHC beamline with its magnets is essentially a\nspectrometer. If a scattered proton is bent sufficiently, but little enough to remain within \nthe beam-pipe, they can be detected by means of detectors inserted into the beam-pipe and\napproaching the beam envelope as closely as possible. At ligh luminosity at the LHC,\nlarge rapidity gaps typical for diffractive events or events with $\\gamma$ exchange tend to be \nfilled in by particles from overlaid pile-up events. Hence tagging the outgoing scattered \nproton(s) becomes the only mean of detection at high luminosities.\n\n\\subsection{TOTEM and FP420 proton taggers}\n\nThe TOTEM proton taggers, located at $\\pm 147$~m and $\\pm 220$~m from the IP, each consist\nof Silicon strip detectors housed in movable Roman Pots~\\cite{TOTEM}. \nThe detector\ndesign is such that the beam can be approached up to a minimal distance of $10 \\sigma +$~0.5~mm.\nWith nominal LHC beam optics, scattered protons from the IP are within the acceptance of the \ntaggers at 220~m when for their fractional momentum loss $\\xi$ holds: $0.02 < \\xi < 0.2$. \n\n\n\\begin{figure}\n\\includegraphics[scale=0.35, angle =-90]{fp420coverage.eps}\n\\caption{Acceptance in $x_L = 1 - \\xi$, where $\\xi$ is the fractional momentum loss of the \nscattered proton, of the TOTEM and FP420 proton taggers. The data points shown are from \nZEUS~\\cite{zeus}.}\n\\label{fig:xiCoverage}\n\\end{figure}\n\nIn order to achieve acceptance at smaller values of $\\xi$ with nominal LHC beam optics, \ndetectors have to be located further away from the IP. Proton taggers at $\\pm 420$~m from the\nIP have an acceptance of $0.002 < \\xi < 0.02$, complementing taggers at 220~m, as shown\nin Figure~\\ref{fig:xiCoverage}. 
\nThe proposal~\\cite{fp420} of the FP420 R\\&D collaboration foresees employing 3-D Silicon, a\nnovel, extremely radiation-hard silicon technology, for the proton taggers. Additional \nfast-timing Cherenkov detectors will be capable of determining, within a resolution of a \nfew millimeters, whether the tagged proton came from the same vertex as the hard scatter visible\nin the central CMS detector. In order to comply with the space constraints of the location \nwithin the cryogenic region of the LHC, these detectors will be attached to a movable beam-pipe\nwith the help of which the detectors can approach the beam to within 3~mm.\n\nThe FP420 proposal is currently under scrutiny in CMS and ATLAS. If approved, installation could\nproceed in 2010, after the LHC start-up.\n\n\n\\subsection{Physics potential}\n\nForward proton tagging capabilities enhance the physics potential of CMS. They would\nrender possible a precise measurement of the mass and quantum numbers of the Higgs boson\nshould it be discovered by traditional searches. They also augment the CMS discovery reach\nfor Higgs production in the minimal supersymmetric extension (MSSM) of the Standard Model (SM)\nand for physics beyond the SM in $\\gamma p$ and $\\gamma \\gamma$ interactions.\n\nA case in point is the central exclusive production (CEP) process~\\cite{CEP}, \n$pp \\rightarrow p + \\phi + p$, where the plus sign denotes the absence of hadronic \nactivity between the outgoing protons, which survive the interaction intact, and the \nstate $\\phi$. The final state consists solely of the\nscattered protons, which may be detected in the forward proton taggers, and the decay \nproducts of $\\phi$, which can be detected in the central CMS detector. \nSelection rules force the produced state $\\phi$ to have $J^{PC} = n^{++}$ with $n = 0, 2, \\ldots$. \nThis process hence offers an experimentally very clean \nlaboratory for the discovery of any particle with these quantum numbers that couples \nstrongly to gluons. Additional advantages are the possibility to determine the mass of the state\n$\\phi$ with excellent resolution from the scattered protons alone, independent of its\ndecay products, and the possibility, unique at the LHC, to determine the quantum numbers of \n$\\phi$ directly from the azimuthal asymmetry between the scattered protons.\n\n\n\\begin{figure}[!b]\n\\includegraphics[angle=-90]{marek.eps}\n\\caption{Five $\\sigma$ discovery contours for central exclusive production of the \nheavier CP-even Higgs boson $H$~\\cite{Tasevsky}. See text for details.}\n\\label{fig:higgs}\n\\end{figure}\n\nIn the case of a SM Higgs boson with mass close to the current exclusion limit, which decays\npreferentially into $b \\bar{b}$, CEP\nimproves the achievable signal-to-background ratio dramatically, to \n$\\cal{O}$(1)~\\cite{fp420,lightHiggs}. \nIn certain\nregions of the MSSM, generally known as the ``LHC wedge region'', the heavy MSSM Higgs bosons would \nescape detection at the LHC. \nThere, the preferred search channels at the LHC are not available \nbecause the \nheavy Higgs bosons decouple from gauge bosons while their couplings to $b \\bar{b}$ and \n$\\tau \\bar{\\tau}$ are enhanced at high $\\tan{\\beta}$. Figure~\\ref{fig:higgs} depicts\nthe 5~$\\sigma$ discovery contour for the $H \\rightarrow b \\bar{b}$ channel in CEP in \nthe $M_A - \\tan{\\beta}$ plane of the MSSM within the $M_h^{max}$ benchmark scenario\nwith $\\mu = +200$~GeV and for different integrated luminosities. 
\nThe values of the mass of the heavier CP-even Higgs boson, $M_H$, are indicated by \ncontour lines. The dark region corresponds to the parameter region excluded by LEP. \n\nForward proton tagging will also give access to a rich QCD program on hard diffraction\nat high luminosities, where event pile-up is significant and makes undetectable the gaps \nin the hadronic final state otherwise typical of diffraction. Detailed studies with high\nstatistical precision will be possible on skewed, unintegrated gluon \ndensities; Generalized Parton Distributions, which contain information on the correlations \nbetween partons in the proton; and the rapidity gap survival probability, a quantity closely \nlinked to soft rescattering effects and the features of the underlying event at the LHC.\n\nForward proton tagging also provides the possibility for precision studies of $\\gamma p$\nand $\\gamma \\gamma$ interactions at center-of-mass energies never reached before. Anomalous top\nproduction, anomalous gauge boson couplings, exclusive dilepton production and quarkonia \nproduction are possible topics, as was discussed in detail at this workshop.\n\n\n\\section{SUMMARY}\n\nForward physics in $pp$ collisions at the LHC covers a wide range of diverse physics subjects (low-$x_{Bj}$ QCD,\nhard diffraction, $\\gamma \\gamma$ and $\\gamma p$ interactions)\n which have in\ncommon that particles produced at large\nvalues of rapidity provide a defining characteristic. \nFor the CMS detector, several subdetectors with forward $\\eta$ coverage \nare currently under construction (CASTOR, ZDC) or in the proposal \nstage (FP420). The TOTEM experiment \nsupplements CMS with several tracking devices around the IP and with near-beam proton taggers at \ndistances up to $\\pm 220$~m. \nThe kinematic coverage of the combined CMS and TOTEM apparatus is unprecedented at a\nhadron collider. It would be even further enhanced by complementing CMS with the\ndetectors of the FP420 proposal, which would add forward physics to the portfolio of\npossible discovery processes at the LHC.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Acknowledgements}\nThis work was sponsored in part by the National Science Foundation under contract ECCS-1800812. \nThis material is based upon work supported by the Google Cloud Research Credits program with the award GCP19980904.\nWe would like to sincerely thank Dr. Hassan Hijazi at Los Alamos National Laboratory for sharing a copy of his GravityX AC-OPF solver with us for comparison of our results. 
\n\\section{Background}\n\\label{sec:background}\n\n\\subsection{Homotopy Methods}\nThe Newton-Raphson (N-R) method is often used to solve the underlying non-linear equations in an AC-OPF formulation, and its convergence can be sensitive to the choice of initial guess for the variables.\nIf the starting point is outside the basin of attraction for the problem, convergence can be very slow or may not occur at all.\nA class of successive-relaxation methodologies, known as homotopy methods, was introduced to mitigate such issues.\nHomotopy is a numerical analysis method for solving a non-linear system of equations that traverses the solution space by deforming the non-linear equations from a trivial problem to the original one.\nThe homotopy method initially defines a relaxation of the original problem which is trivial to solve, and proceeds to solve a sequence of deformations that ultimately leads back to the original problem.\nSuppose $\\mathcal{F}(x)=0$ is the set of non-linear equations that we aim to solve; we then define a mapping to a trivial problem represented by $\\mathcal{G}(x)=0$.\nThe deformation from the trivial problem to the original non-linear problem is controlled by embedding a scalar homotopy factor $\\nu \\in [0,1]$ into the non-linear equations, thereby defining a sequence of problems given by (\\ref{eq:basic_homotopy}).\n\\begin{equation}\n\\label{eq:basic_homotopy}\n \\mathcal{H}(x, \\nu) = \\nu \\mathcal{G}(x) + (1-\\nu) \\mathcal{F}(x)=0,~ \\nu \\in [0,1]\n\\end{equation}\nHaving determined the solution $x^0$ to the trivial problem, $\\mathcal{H}(x,1) = \\mathcal{G}(x) = 0$, we can iteratively decrease $\\nu$ to move the system closer to the original problem, and use $x^0$ as our initial guess for the next sub-problem.\nBy incrementally decreasing $\\nu$ and solving the updated sub-problems, we traverse the solution space from the trivial problem to the original problem.\nFor this method to be effective, each sub-problem's solution should lie within the basin of attraction of the solution of the previous sub-problem, in order to exploit the quadratic convergence of N-R.\nIt is often challenging to develop a general homotopy method that ensures a proper traversal of the solution space, where there exists a feasible solution for every sub-problem $\\mathcal{H}(x,\\nu)=0$ along the path from $\\nu: 1 \\rightarrow 0 $ \\cite{Allgower}. We present a homotopy method based on a circuit-inspired intuition that ensures a feasible path. 
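As a concrete numerical illustration of the mapping in (\\ref{eq:basic_homotopy}), the following minimal Python sketch (not the solver used in this work; the toy system, Jacobians, and fixed $\\nu$ schedule are purely illustrative) traces the homotopy from the trivial problem $\\mathcal{G}(x)=0$ to the original problem $\\mathcal{F}(x)=0$, warm-starting each N-R solve with the previous sub-problem's solution.
\\begin{verbatim}
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    # Newton-Raphson for a square nonlinear system f(x) = 0.
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(jac(x), r)
    return x

def homotopy_solve(F, JF, G, JG, x_trivial, steps=20):
    # Trace H(x, nu) = nu*G(x) + (1 - nu)*F(x) = 0 from nu = 1 down to nu = 0,
    # warm-starting each sub-problem with the previous solution.
    x = np.array(x_trivial, dtype=float)
    for nu in np.linspace(1.0, 0.0, steps + 1)[1:]:  # nu = 1 is solved by x_trivial
        H = lambda x, nu=nu: nu * G(x) + (1.0 - nu) * F(x)
        JH = lambda x, nu=nu: nu * JG(x) + (1.0 - nu) * JF(x)
        x = newton(H, JH, x)
    return x

# Toy, strongly monotone 2-D system (illustrative only), with G(x) = x - x0.
b = np.array([2.0, -1.0])
F = lambda x: x + 0.5 * np.tanh(x) - b
JF = lambda x: np.eye(2) + 0.5 * np.diag(1.0 / np.cosh(x)**2)
x0 = np.zeros(2)
G = lambda x: x - x0
JG = lambda x: np.eye(2)
print(homotopy_solve(F, JF, G, JG, x0))  # converges to the root of F(x) = 0
\\end{verbatim}
The choice of $\\mathcal{G}$ and of the $\\nu$ schedule determines whether each warm start remains within the basin of attraction of the next sub-problem. 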
\n\n\\subsection{Homotopy Methods in Power Systems}\n\nA number of approaches have applied homotopy methods to solve power flow, AC-OPF, and other variants in recent years \\cite{Murray-homotopy,Pandey-IMB, Park-homotopy,Network-Stepping,Pandey-Tx-Stepping}.\nIn \\cite{Pandey-Tx-Stepping}, the authors present a circuit-theoretic homotopy that robustly solves the power flow equations by embedding a homotopy factor in the equivalent circuits of the grid components.\nThese methods are extended in the Incremental Model Building (IMB) framework \\cite{Pandey-IMB} to solve the AC-OPF optimization problem.\nThe idea behind the IMB framework is to build the grid from the ground up using a homotopy factor embedded in the AC-OPF equations.\nIMB defines a relaxed problem, $\\mathcal{G}(x)$, where the buses are almost completely shorted to one another, nearly all of the loads are removed at $\\nu=1$, and $\\nu$ is embedded in the generator limits so that the generator injections can be initially close to zero while remaining feasible with respect to the inequality constraints.\nAs a result, the relaxed network has very little current flow, so nearly all of the buses have voltage magnitudes and angles close to those of the reference bus, and a flat-start initial point can reliably be used as a trivial solution of the homotopy sub-problem at $\\nu=1$.\nTo satisfy the requirement that a feasible solution exist for every sub-problem along the homotopy path, fictitious slack current sources are introduced at each bus for sub-problems $\\nu \\neq 0$, and their injections are heavily penalized \\cite{Jereminov-feasibility}.\n\nWhile IMB shows an approach to solve AC-OPF without good initial conditions, it does not include discrete control variables in the formulation.\nEven with a continuous relaxation to these variables, their introduction can significantly increase the nonlinearity of the network flow constraint equations and make the problem very challenging to solve.\nIn this work, we present a framework that builds on IMB to include discrete control devices and solve the \\ACOPFD robustly.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nIn this paper, we developed a two-stage homotopy algorithm to robustly solve real-world AC-OPF problems while incorporating discrete control devices. The proposed approach uses fundamentals from circuit theory to design the mechanisms of the underlying homotopy methods, and it does not depend on access to good initial conditions to solve the overall problem. To evaluate this approach, we ran a series of tests for four networks containing transformers and shunts with discrete settings.\nFor two of the cases, this method performed better than a state-of-the-art solver. Furthermore, we showed that by constructing different homotopy paths, we find different locally optimal solutions for the same problem, with significantly different generation dispatch patterns. 
Therefore, we believe that the nonconvexity of the solution space of \\ACOPFDs problems warrant more investigation.\n\n\n\n\n\n\\section{AC-OPF with Discrete Variables Formulation \\ACOPFD}\n\\label{sec:formulation}\nThe paper solves the following AC-OPF with discrete control settings, which we will refer to as \\ACOPFD:\n\\begin{subequations}\n\\begin{gather}\n\\underset{x,\\xd}{\\text{minimize }}\nf_0(x, \\xd) \\\\\n\\text{subject to: }\n g(x, \\xd) =0 \\\\%, \\; i = 1, \\ldots, m.\n h(x, \\xd) \\leq 0 \\\\\n \\xd^i \\in \\mathcal{D}^i, \\; i = 1, \\ldots, n_d\n\\end{gather}\n\\end{subequations}\nThe vector $x$ consists of the continuous-valued variables of the optimization, including complex bus voltages, real and reactive power injections by generators, continuous shunts settings, and substitution variables to track transformer and line flows.\nThe vector $\\xd\\in R^{n_d}$ represents the discrete control settings that are limited to a finite set of discrete values. In this paper, these include transformer taps $\\tau$ and phase-shifters $\\phi$ and discrete shunts $B^{sh}$ but the method is not restricted to just these. Each element of $\\xd$, $\\xd^i\\in\\xd$, has an associated integrality constraint (1d) restricting each device setting to be a finite value within the set, $\\mathcal{D}^i$. The objective function (1a) is chosen depending on the purpose of the AC-OPF study, but typically represents the economic cost of power generation.\nConstraint (1b) represents the AC network constraints, which can be formulated either to enforce net zero power-mismatch or current-mismatch at all nodes.\nConstraint (1c) contains the bus voltage magnitude limits, branch and transformer thermal limits, and real and reactive power generation limits.\n\n\n\n\\section{Implementation and Evaluation}\n\\label{sec:implementation}\nTo test the effectiveness of the proposed algorithm, we run the \\ACOPFDs solver on four networks based on cases used in the ARPA-E Grid Optimization Challenge 2 \\cite{go2} that contain transformers with adjustable tap ratios or phase shifts, and discrete switched shunt banks.\nThe cases were modified to remove additional features of the GO formulation in order to focus on the efficacy of our approach for incorporating discrete control devices, and to make all generation costs linear.\nThe details of cases used are shown in Table I, and the files have been made available in a public Github repository \\cite{cases-github}.\n\n\\begin{table}[h]\n\\label{tab:case-info}\n\\caption{Properties of \\ACOPFD ~Cases Tested}\n\\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\n\\textbf{Name} & \\textbf{Buses} & \\textbf{Generators} & \\textbf{Loads} & \\textbf{Lines} & \\textbf{Discrete Devices} \\\\ \\hline\nA & 3022 & 420 & 1625 & 3579 & 1384 \\\\ \\hline\nB & 6867 & 567 & 4618 & 7815 & 925 \\\\ \\hline\nC & 11152 & 1318 & 4661 & 16036 & 1030 \\\\ \\hline\nD & 16789 & 994 & 7846 & 23375 & 2722 \\\\ \\hline\n\\end{tabular}\n\\end{table}\nIn each evaluation, $\\xdbase$ values are set to the respective device settings listed in the .raw file, but to simulate running the \\ACOPFDs without any prior knowledge of settings, a copy of each case file is created where each transformer's initial setting is listed as its median available setting, and each switched shunt bank has all switches off (0 p.u.).\nThese cases with initial settings removed are denoted with a * superscript.\n\n\\subsection{Robustness and Scalability of IMB+D}\n\nTable \\ref{tab:big-results} shows a summary of results.\n$k_{adj}=0.1$ 
was applied equally across all devices in Stage 1, but normalized by the range of the individual device's settings ($\\xdupper^i - \\xdlower^i$)\nAs a comparison point, we also evaluated the same cases using the GravityX AC-OPF solver \\cite{Gravity}, a leading submission to the GO Challenge which utilizes the IPOPT non-linear optimization tool \\cite{IPOPT}.\n``\\% Adj'' indicates how many device settings in the solution differed from their initial value (the original .raw file values in the standard cases, and the simulated unknown in the * cases).\nFor each test, to evaluate the necessity of Stage II, a simple ``round and resolve'' approach is also attempted at the end of Stage I, in which the a feasible solution is sought immediately after rounding and fixing settings. \n\nThe proposed approach is able to find solutions for all the cases, even when initial device settings are removed.\nFor Cases A and D, evaluating from the simulated unknown settings actually produces a very slightly better objective value.\nGravityX produces a slightly better objective for both versions of cases B and C, but a less optimal solution for case A and does not converge for case D. \nObserve that Stage II is not strictly necessary for cases B, C, or D, but it is necessary for Case A to converge with discrete settings, which makes sense because in this case the tap steps are much further apart, so selecting discrete settings introduces a larger disturbance.\n\\begin{table}[]\n\\centering\n\\caption{Objective solutions and best $k_{adj}$ for tested cases}\n\\label{tab:big-results}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n & \\multicolumn{3}{c|}{IMB+D ($k_{adj}=0.1$)} & \\multicolumn{2}{c|}{GravityX+IP-OPT} \\\\ \\hline\nCase & Obj & \\% Adj & Need Stage II? & Obj & \\% Adj \\\\ \\hline\nA~ & 5.377e5 & 24.9\\% & Yes & 5.561e5 & 23.8\\% \\\\ \\hline\nA* & 5.357e5 & 25.6\\% & Yes & 5.561e5 & 17.6\\% \\\\ \\hline\nB~ & 1.216e5 & 90.8\\% & No & 1.215e5 & 74.5\\% \\\\ \\hline\nB* & 1.216e5 & 93.9\\% & No & 1.215e5 & 71.7\\% \\\\ \\hline\nC~ & 5.294e5 & 77.8 \\% & No & 5.292e5 & 80.29\\% \\\\ \\hline\nC* & 5.294e5 & 82.2\\% & No & 5.292e5 & 69.4\\% \\\\ \\hline\nD~ & 3.592e5 & 74.7\\% & No & No Solution & N\/A \\\\ \\hline\nD* & 3.591e5 & 89.2\\% & No & No Solution & N\/A \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Impact of Different Homotopy Paths}\nTo construct different homotopy paths for the problem, we vary the Homotopy Stage I's adjustment penalty $k_{adj}$ parameter. We solve cases D and D* across a sweep of $k_{adj}$ values, and also at $k_{adj}=0$, meaning a penalty term is never used in Stage I.\nRecall that the term is completely removed at $\\nu_1 = 0$, so \\textit{the same final problem is being solved}, regardless of choice of $k_{adj}$. \nEssentially, all that differs is the homotopy path to the original \\ACOPFD problem. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=2.6in]{figures\/test_figure1.png}\n \\caption{Changing $k_{adj}$ affects the homotopy path taken, and thus yields different solutions}\n \n \\label{fig:stage1-graphs}\n\\end{figure}\nFig. 
\\ref{fig:stage1-graphs} shows the effect of increasing $k_{adj}$ (i.e., varying the homotopy path) on both the final solution cost and the percentage of adjustments made to devices in the solution.\nThis plot shows how it is possible to find multiple different local optima by parameterizing the penalty terms and essentially varying the homotopy path.\nFirst, we obtain the best objective value with very low penalty factors.\nAdditionally, when knowledge of base settings is available, increasing $k_{adj}$ reduces the number of adjustments.\nSurprisingly, however, we observe that for the D* simulations, increasing $k_{adj}$ does not have a large impact on the number of discrete adjustments.\nWe hypothesize this is because some devices may require certain low or high settings for the network to be feasible, and so adjustments will be made regardless of the penalty value.\n\n\\subsection{Comparison of Local Solutions}\n\nTo further investigate the impact of different local solutions on the grid dispatch, we consider three solutions for the exact same problem as defined by Case A*: two solutions generated by IMB+D by taking separate homotopy paths through the choice of $k_{adj}$, and one generated by GravityX.\nThe three solutions have generation dispatch costs in the range of \\$5.36e5-\\$5.84e5.\nHowever, more interesting insights can be gathered by looking at the actual dispatch of six large generators in Case A* for these three distinct local optima, which are shown in Fig. \\ref{fig:gen_dispatch}.\nWe notice that the dispatches for these generators vary significantly across the three solutions.\nThis would imply that in the real world, the dispatch produced by an \\ACOPFDs study can have widely varying patterns dependent on the method (e.g. GravityX vs. IMB+D) used to solve the non-convex problem, or even the choice of homotopy path within IMB+D based on the value of $k_{adj}$.\nIn practice, this could make it harder for grid operators to justify the choice of one dispatch over another, as the global minimum may not be easily obtainable.\nToday, they are able to sidestep this problem by running the convex DC-constraint based optimization with fixed settings. 
\nHowever, due to the lack of granularity and accuracy, DC-based optimizations may be insufficient for future scenarios where adjustments to discrete devices are necessary in the optimization.\n\\begin{figure}\n \\centering\n \\includegraphics[width=2.5in]{figures\/generator_dispatch.png}\n \\caption{Changing the penalty scalar $k_{adj}$ on adjustments in Stage I causes a different homotopy path to be taken and can yield starkly different local optima in the final solution.}\n \\label{fig:gen_dispatch}\n\\end{figure}\n\\section{Introduction}\n\\label{sec:introduction}\n\nA critical framework for modeling and optimizing the efficiency of today's power grid is based on the Alternating-Current Optimal Power Flow (AC-OPF) problem.\nIn the AC-OPF problem, a user-specified objective function, typically the cost of power generation, is optimized subject to network and device constraints.\nThese constraints include AC network constraints defined by Kirchhoff's Voltage and Current Laws, as well as inequality constraints representing operational limits, such as bounds on voltages, power generation, and power transfer, to ensure reliable operation of the power system.\nIt has been estimated that improved methods to model and run the US power grid could improve dispatch efficiency in the US electricity system, leading to savings between \\$6 billion and \\$19 billion per year \\cite{Cain}.\nMoreover, improved AC-OPF solution techniques can improve the reliability and resiliency of the grid, which is under increasing duress from extreme weather events, such as California's ongoing wildfires and the aftermath of the winter 2021 storm in Texas.\n\nTraditionally, the AC-OPF problem is a non-convex nonlinear problem with only continuous variables.\nHowever, many devices deployed in the grid today have controls with discrete-valued settings, and such devices are likely to become more widespread as the modernization of the grid continues.\nDevices such as switched shunt banks and adjustable transformers can assist in balancing power flows in the system, meeting resilience-focused operational constraints, and locating a more optimal or resilient operating point than one located using fixed component settings.\nThe increased flexibility from these discrete devices also allows operators to avoid or delay costly upgrades to the network while increasing resiliency during extreme events. \nGiven the significant potential benefits, a recent Grid Optimization (GO) Competition (Challenge 2), organized by ARPA-E, sought new robust approaches to AC-OPF where adjustments to discrete devices like tap changers, phase shifters, and switched shunt banks are included in the variable set \\cite{go2}.\n\nAlthough there are clear benefits to the inclusion of discrete control devices in an AC-OPF study, doing so directly results in a mixed-integer non-linear program (MINLP) that is significantly harder to solve.\nIncluding these discrete settings in the variable space could lead to searching over a combinatorially large solution space, which would be intractable to solve in practical time. \nIf prior settings are at least known, then these can be used as a starting point to begin the search for new settings.\nHowever, prior settings may not be known in some situations, such as planning or policy studies, for instance when engineers evaluate the feasibility of 50\\% renewable penetration in a future U.S. Eastern Interconnection \\cite{miso50}. 
\n\nOne approach \\cite{liu-linearization} to include these discrete variables relies on sequential linearization of the optimization problem and then handling the discrete variables using mixed-integer linear problem (MILP) techniques.\nUnfortunately, the underlying network constraints can be highly non-linear with respect to certain control settings such as transformer tap ratios and phase shifts. Therefore these methods can suffer from a significant loss in model fidelity.\nAdditionally, linear relaxations to the OPF problem can lead to physically infeasible solutions, which are extremely undesirable for a grid dispatch \\cite{Baker-DCOPF}.\n\nThe simplest approach to this obstacle is a two-stage rounding technique \\cite{Tinney} \\cite{Papalexopoulos}.\nIn this method, the discrete control settings are initially treated as continuous-valued, and a relaxed formulation of the AC-OPF problem is solved; then, these variables are fixed to their nearest respective discrete values, and the optimization is solved again with these variables held constant.\nThe first challenge with this approach is that for realistically-sized networks, solving the relaxed problem with continuous valued transformer taps, phase-shifters and switched shunts is computationally challenging, especially when good initial conditions are unavailable.\nThe second challenge is that the rounding step to map the continuous-valued settings to their nearest respective discrete values can result in a physically infeasible solution, which can be difficult to avoid when the discrete values are spaced far apart or when very many control variables are rounded at once. \nDirectly rounding to the nearest discrete value also creates a discontinuous jump in the solution space, which is problematic for any Newton-method based solver that relies on first-order derivative continuity.\n\nA number of methods have been introduced to address the two challenges above. \nRecent work \\cite{Coffrin} \\cite{Lavei} has pushed the state-of-the-art for solving AC-OPF for realistic networks, but these formulations did not include discrete variables. 
\nFor formulations that do include discrete control variables, several new approaches have been proposed \\cite{liu-penalty-function,Macfie-discrete-shunt,Capitanescu,Murray-discrete} to eliminate non-physical solutions and degradation of optimality due to rounding.\n\\cite{liu-penalty-function} proposes utilizing a penalty term in the objective function to push each relaxed control variable towards an available discrete value, so that the disturbance introduced by the rounding step is smaller.\nHowever, the use of penalty functions in optimizations with discrete variables can introduce stationary points and local minima \\cite{discrete-opt-overview}.\nAnother approach is to select subsets of relaxed discrete variables to round in an outer loop, while repeatedly solving the AC-OPF problem in between subsets until all of the settings have been fixed.\nIn \\cite{Macfie-discrete-shunt}, the authors present two methods for selecting which variables are rounded in each loop, and show these can reduce optimality degradation caused by rounding; however, unless only a single device is rounded at a time, this can introduce oscillations, since each rounding effectively adds a piecewise discontinuous function \\cite{Katzenelson}.\n\\cite{Capitanescu} uses sensitivities with respect to the discrete variables as a metric to determine when to round settings, with the help of either a merit function or a MILP solver.\nIn \\cite{Murray-homotopy}, the authors point out that time constraints may make it impractical for grid operators to make a large number of control variable changes for a single dispatch.\nTo address this, they propose introducing a sparsity-inducing penalty term to the objective function along with a line-search of the discrete variable space to find a more limited number of control variable changes that can still improve optimality compared to holding all control variables constant.\nHowever, this method inherently assumes knowledge of good settings, which might not be available in some use cases for AC-OPF, such as planning studies.\n\nWe propose a two-stage homotopy algorithm for solving AC-OPF problems that incorporates discrete control variables, can scale to large networks, and is robust to a potential lack of knowledge of prior settings. \nOur approach builds on the homotopy-based AC-OPF methodology presented in \\cite{Pandey-IMB}.\nIn the first stage, the discrete settings for the adjustable control devices are relaxed as continuous-valued variables and the optimization is solved using a robust homotopy technique.\nThe solution of the first stage is used to select discrete settings, and the respective variables are held constant thereafter. \nThen the errors induced by removing the relaxation are calculated and used in a second homotopy problem to locate a realistic solution.\n\nThe proposed novel approach can robustly determine a local optimum of any large real-world network with discrete controls without reliance on prior setting values. 
We also show that by choosing different homotopy paths, the proposed approach can obtain a variety of local minima with significantly different generation dispatches.\nIn the results, we show that the method is not only novel in its approach but also more robust than another state-of-the-art optimization tool.\n\n\n\\section{Two Stage Homotopy Method for Solving \\ACOPFD}\n\\label{sec:methodology}\nWe propose a two-stage homotopy algorithm to solve the \\ACOPFDs problem described in Section \\ref{sec:formulation}.\nThe methods described are an extension of the IMB approach discussed above, and we term the overall approach IMB+D.\n\n\n\nThe first stage, Stage I, applies a relaxation to the \\ACOPFDs problem in which we treat the discrete variables as continuous-valued by removing the integrality constraints. \nWe refer to the resulting relaxed optimization as \\ACOPFC, which is solved in Stage I.\nWe present a modeling framework for incorporating each adjustable device into the \\ACOPFC~ homotopy formulation so as to preserve the IMB concept of slowly ``turning on'' the grid as the sequence of sub-problems is traversed.\nAfter convergence of Stage I, in Stage II, the relaxed solution is used to select the nearest feasible discrete settings, and a second homotopy problem is defined to solve this problem. The local optimum of this second stage yields a solution for \\ACOPFD with the discrete-value constraints satisfied. \n\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=2.8in]{figures\/Stage_One.png}\n \\caption{A simple network with discrete devices in blue (a). In the completely relaxed network (b), the relaxation elements are shown in red. At the end of Stage I (c), a solution is found with all relaxations removed except continuous settings.}\n \\label{fig:stageI_figure}\n\\end{figure}\n\\subsection{Homotopy Stage I: Solving the Relaxed \\ACOPFD}\n\\subsubsection{Embedding general network with homotopy}\nIn the constraints of an AC-OPF problem expressed using the current-voltage formulation, the current flows across lines are linear with respect to voltages, but the voltage and flow limit equations are quadratic, and the current injections from generators and loads are highly non-linear.\nEven without the introduction of additional control devices, Newton's method may fail to converge if the initial guess for the variables is not within the basin of attraction of a solution \\cite{Murray-homotopy}. Therefore, for large systems without access to a reliable starting guess for Newton's method, we ensure convergence using the IMB method presented in \\cite{Pandey-IMB}.\nBy starting with a deformed version of the network in IMB that has very little current flow, there exists a high-voltage solution in which the bus voltages are close to one another, which is close to a flat-start guess.\nTo define a deformed network and have a smooth trajectory of intermediate deformations back to the original network, in Stage I the homotopy factor $\\nu_1$ is embedded into many of the parameters of the network's topology and devices: namely into high conductances in parallel with existing lines, the load factor, and the generation limits (see Fig. \\ref{fig:stageI_figure}.b).\n\n\\subsubsection{Continuous relaxation and separation of discrete settings}\nIn Stage I, we apply a relaxation to the discrete control setting variables and split each of them into two components in order to leverage the robust methods of the IMB framework. 
\nFirst, a relaxation removing the integrality constraints (1d) is applied to replace the discrete variables $\\xd$ with a continuous vector, $\\xdcont$.\nAs a result, the constraints (1d) are replaced by:\n\\begin{equation}\n \\xdlower^i \\leq \\xdcont^i \\leq \\xdupper^i \\text{, } i = 1, \\ldots, n_d\n\\end{equation}\nwhere $\\xdlower^i = \\min(\\mathcal{D}^i)$ and $\\xdupper^i = \\max(\\mathcal{D}^i)$, representing the minimum and maximum possible settings for a device.\n\nEven with this relaxation, adding devices that affect the flow of power across the network can significantly increase the nonlinearity of the constraint equations and the corresponding optimality conditions.\nFor example, introducing adjustable transformers causes the previously linear transformer power flow model to be nonlinear with respect to the voltage at its terminals: adding phase shifters introduces trigonometric functions, and tap changers introduce $\\frac{1}{\\tau}$ and $\\frac{1}{\\tau^2}$ terms.\nTo introduce these highly nonlinear models into the IMB framework, such that we preserve both its initial trivial form, where the entire grid is nearly shorted, and a feasible homotopy path from $\\nu_1 = 1 \\rightarrow 0$, we design three measures.\n\nFirst, each relaxed setting variable $\\xdcont^i$ is separated into two components: a ``base'' value $\\xdbase^i$ and a continuous-valued ``adjustment'' variable $\\xdadjcont^i$:\n\\begin{equation}\n\\label{split}\n \\xdcont^i = \\xdbase^i + \\xdadjcont^i \n\\end{equation}\nSplitting $\\xdcont^i$ affects its initialization and bounding, but the total value still drives the respective device behavior. \nThis step keeps the solution space of the variable $\\xdadjcont$ around 0, which we have observed empirically to improve convergence in comparison to introducing $\\xdcont$ as a variable directly; we believe this could be due to improved search directions for Newton's method, since the partial derivatives depend on the value of the control variables.\n\nSecond, to maintain the trivial shorted form of the IMB method at $\\nu_1 = 1$, we make the base setting $\\xdbase^i$ homotopy dependent such that during the early stages of homotopy it has almost no impact on the network solution. For example, a tap ratio of 1.0 p.u. on a transformer would have no effect on the system as a whole. We define this smooth deformation of the base setting $\\xdbase^i$ by embedding the homotopy factor $\\nu_1$:\n\\begin{equation}\n \\xdbase^i = \\nu_1 \\xdtriv^i + (1-\\nu_1) \\xdbase_0^i\n\\end{equation}\n$\\xdtriv^i$ is chosen as a setting value that ensures that the trivial solution for the $\\nu_1=1$ sub-problem is maintained. \n$\\xdbase_0^i$ must be chosen from within the feasible domain. A good prior setting can be used, but the median value can always be used if the user does not know one.\n\nLastly, we ensure the effective settings do not stray from their respective trivial values in the early sub-problems of Stage I by adding a homotopy-dependent penalty term to the objective function, parameterized by $\\nu_1$ and scaled by $k_{adj}$:\n\\begin{equation} \n \\label{modified_objective_kadj}\n f(x, \\xdadjcont) = f_0(x, \\xdadjcont) + \\nu_1 k_{adj} \\sum_{i=1}^{n_d} |\\xdadjcont^i|^2\n\\end{equation}\nIncluding this term encourages minimizing $|\\xdadjcont^i|$ more strongly in the early stages of homotopy. 
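The following minimal Python sketch (illustrative only, with hypothetical variable names; it is not the authors' implementation) shows how the homotopy-dependent base value above and the adjustment penalty in \\eqref{modified_objective_kadj} behave at the two ends of the Stage I homotopy path.
\\begin{verbatim}
import numpy as np

def effective_setting(nu1, x_trivial, x_base0, x_adj):
    # Relaxed setting = homotopy-dependent base value + adjustment variable.
    base = nu1 * x_trivial + (1.0 - nu1) * x_base0
    return base + x_adj

def adjustment_penalty(nu1, k_adj, x_adj_all):
    # Homotopy-scaled penalty added to the objective function.
    return nu1 * k_adj * np.sum(np.abs(x_adj_all)**2)

# At nu1 = 1 the base equals the trivial value (e.g., a 1.0 p.u. tap ratio) and the
# penalty is fully active; at nu1 = 0 the base equals x_base0, the penalty vanishes,
# and the original relaxed objective is recovered.
print(effective_setting(1.0, x_trivial=1.0, x_base0=0.9875, x_adj=0.0))       # 1.0
print(adjustment_penalty(0.0, k_adj=0.1, x_adj_all=np.array([0.02, -0.01])))  # 0.0
\\end{verbatim}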
As $\\nu_1$ is decreased, however, the penalty weight is reduced so that the adjustment variables can move more freely as necessary to satisfy the physical network constraints and decrease the primary objective function.\nNote that all penalty terms have been removed from the objective function when the \\ACOPFC~is solved at $\\nu_1=0$, so the form of the final \\ACOPFC~ problem is independent of the $k_{adj}$ value.\n\n\nTo address the likelihood of infeasible sub-problems along the homotopy path ($c(\\nu_1), \\forall \\nu_1 \\in [0,1] $) where KCL cannot be satisfied without violating some variable limits, just as in IMB, homotopy-dependent slack current injections (shown as red current sources in Fig. \\ref{fig:stageI_figure}.b) are defined at each node in the network to allow satisfaction of conservation of charge \\cite{Jereminov-feasibility}.\nThe magnitudes of the injection sources are penalized heavily in the objective function so that the sources only inject current if required for satisfaction of KCL, and the values are scaled by $\\nu_1$ so that the fictitious sources are removed entirely when $\\nu_1=0$.\n\n\\subsubsection{Solution of Stage I}\nFor each sub-problem defined in the homotopy path, the relaxed \\ACOPFC~is solved using the primal-dual interior point (PDIP) approach \\cite{Boyd}.\nThe Lagrangian for the sub-problem at any given $\\nu_1 \\in [0,1]$ is given by \n\\begin{equation}\n\\begin{aligned}\n \\label{eq:lagrangian}\n \\mathcal{L}^{\\nu_1}(\\xext, \\lambda,\\mu) = f_0^{\\nu_1}(\\xext) + \\lambda^T g^{\\nu_1}(\\xext) + \\mu^T h^{\\nu_1}(\\xext)\n\\end{aligned}\n\\end{equation}\nwhere $\\xext = [x, \\xdcont]^T$, $\\lambda$ is the vector of dual variables for the equality constraints, and $\\mu$ is the vector of dual variables for the inequality constraints.\nA local minimizer, $\\theta^* = [\\xext^*, \\lambda^*, \\mu^*]$, is sought by using Newton's method to solve the set of perturbed first-order KKT conditions:\n\\begin{equation}\n\\label{eq:KKT}\n\\begin{split}\n& \\mathcal{F}(\\theta) = \\begin{bmatrix}\n\\nabla_{\\xext} f_0(\\xext) + \\nabla_{\\xext}^T g(\\xext) \\lambda + \\nabla_{\\xext}^T h(\\xext) \\mu \\\\\ng(\\xext) \\\\\n\\mu \\odot h(\\xext) + \\epsilon\n\\end{bmatrix} = 0 \\\\\n\\end{split}\n\\end{equation}\nwhere $\\odot$ is element-wise multiplication.\nIn order to facilitate convergence and primal-dual feasibility of the solution, heuristics based on diode-limiting from circuit simulation methods are applied \\cite{Pandey-Tx-Stepping}. \nThe located $\\theta^*$ is used as the initial guess for the next \\ACOPFC sub-problem once the homotopy parameter (and thus the network relaxation) has been updated. \nThis process is repeated until $\\nu_1=0$, at which point a solution has been found to \\ACOPFC (Fig. \\ref{fig:stageI_figure}.c).\n\n\\subsection{Homotopy Stage II: Discretization}\n\nAfter determining the optimal relaxed settings for the devices, we move on to the second stage of our approach, which solves for the discrete setting values and the corresponding state of the grid using the relaxed solution from Stage I.\n\\subsubsection{Selection of discrete setting values}\nA practical discrete setting value must be chosen for each control device.\nFor each control variable, the nearest-neighbor discrete value is selected. 
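As a minimal illustration of this selection step (a sketch only, with a generic feasibility predicate standing in for the sensitivity-based check described next; it is not the authors' code):
\\begin{verbatim}
import numpy as np

def select_discrete_setting(x_relaxed, candidates, is_feasible):
    # Try candidate settings in order of distance from the relaxed value and
    # return the first one that passes the (approximate) feasibility check.
    candidates = np.asarray(candidates, dtype=float)
    order = np.argsort(np.abs(candidates - x_relaxed))
    for idx in order:
        if is_feasible(candidates[idx]):
            return candidates[idx]
    return candidates[order[0]]  # fall back to the nearest value if none passes

# Example: a tap ratio relaxed to 1.013 with available steps spaced 0.0125 p.u. apart.
taps = np.arange(0.9, 1.1001, 0.0125)
print(select_discrete_setting(1.013, taps, is_feasible=lambda t: True))  # -> 1.0125
\\end{verbatim}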
The chosen setting is evaluated to estimate whether snapping the variable to this value might result in an infeasibility that prevents convergence.\nThe sensitivities of the bus voltages with respect to the setting are calculated, and an approximate post-rounding voltage is obtained from the sensitivity vector using a first-order Taylor approximation.\nWith the predicted voltage values, we check the inequality constraints affected by the setting's perturbation.\nIf the chosen rounded value would result in an infeasibility or a violation of bounds, then the second-closest available setting is chosen and checked.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=2.8in]{figures\/Stage_Two.png}\n \\caption{Process of using the Stage I solution to discretize settings, formulate Stage II, and determine the \\ACOPFD solution}\n \\label{fig:Stage-Two-Networks}\n\\end{figure}\n\\subsubsection{Stage II Homotopy: Error Injections}\nAt the termination of Stage I, we have a solution vector $\\theta^*$ that satisfies the perturbed KKT conditions for the continuous-valued OPF:\n\\begin{equation}\n \\label{eq:KKT-sol-1}\n \\mathcal{F}(\\theta^{*}) = 0\n\\end{equation}\nBy changing the setting variables from their converged continuous values to realistic setting values ($\\xdadjcont \\rightarrow \\xdadj$), the state vector is altered from $\\theta^*$ to $\\theta^\\prime$.\nBecause of the adjustments, evaluating the KKT conditions at $\\theta^\\prime$ will result in violations of the conditions, which we will refer to as the residual vector $R$:\n\\begin{equation}\n \\label{eq:KKT-sol-res}\n \\mathcal{F}(\\theta^\\prime) = R\n\\end{equation}\nIn the case of the primal variables, we can think of $R$ as a set of independent current sources that compensate for the current mismatch at each node due to rounding the discrete device settings.\nThis idea can be extended to the dual variables as well, as the underlying equations and nonlinearities have the same form \\cite{Network-Stepping}.\nTherefore, in the network disturbed by the rounding step, the power flow constraints can be satisfied immediately after rounding by adding a current source to each of these perturbed buses, with current injection values defined by the current mismatches caused by rounding (see Fig. 
\\ref{fig:Stage-Two-Networks}.b).\nIf we define a relaxed problem in which we seek a solution to the same network but with $R$ added to the respective equations, then we already know a solution to this problem: $\\theta^\\prime$.\n\\begin{equation}\n \\mathcal{G}(\\theta^\\prime) = \\mathcal{F}(\\theta^\\prime) - R = 0\n\\end{equation}\nTherefore, we propose a second homotopy stage to find a feasible solution to the \\ACOPFDs problem after the control variables have been rounded, in which the relaxed system of equations consists of the KKT conditions \\eqref{eq:KKT} after rounding, but with $R$ added as error injections.\n\n\\begin{equation}\n\\begin{split}\n \\mathcal{H}_2(\\theta,\\nu_2) = (1-\\nu_2)\\mathcal{F}(\\theta) + \\nu_2(\\mathcal{F}(\\theta) - R) = 0\n\\end{split}\n\\end{equation}\nHere $\\nu_2$, the Stage II scalar homotopy factor, is used to gradually reduce the residual injections, tracing a continuous path to a feasible solution for $\\mathcal{F}(\\theta)$ with rounded values.\nThis avoids taking a discontinuous jump between solving \\ACOPFC and \\ACOPFD.\nWhen this homotopy problem is solved at $\\nu_2=0$, the error injections have been removed, and we have located a realistic, feasible solution to the \\ACOPFD.\n\n\\subsection{Generalization beyond IMB}\nWhile the two-stage approach in this paper is described as an extension of the larger IMB framework \\cite{Pandey-IMB}, which assumes no knowledge of prior system settings, the two-stage homotopy algorithm can be applied to other approaches for solving AC-OPF without loss of generality. \nConsider the scenario in which good initial conditions for the general network are available but optimal settings for the discrete variables ($\\xd^*$) are unknown.\nSuch a use-case may occur if a grid planner wishes to re-evaluate existing settings for discrete devices (e.g., aims to move from a feasible setting $\\xd^k$ to an optimal feasible setting $\\xd^*$), or to explore the effects of upgrading fixed devices to adjustable devices. \nIn this situation, the grid planner may want to start from a feasible discrete setting but still explore whether a more optimal operating setting exists. In this scenario, we would still separate and relax each discrete setting $\\xd^i$ according to \\eqref{split}. \nBut here, we account for this knowledge of good settings in Stage I of the algorithm by defining the base values using the previously known setting $\\xd^{k,i}$ such that $\\xdbase^{i} = \\xd^{k,i}$.\n\nTo ensure that finding a solution at $\\nu_1=1$ is simple, the objective function is modified according to \\eqref{modified_objective_kadj}, but a very high penalty value $k_{adj}$ is used.\nNow at $\\nu_1=1$, with access to good initial conditions and feasible initial discrete settings, a trivial solution is obtained first.\nHowever, as we traverse the homotopy path from $\\nu_1 = 1 \\rightarrow 0$, the adjustment values for the discrete settings take values according to the objective $f_0(x, \\xdadjcont)$, and eventually an optimal set of relaxed discrete settings is obtained at $\\nu_1 = 0$. \nTo find a feasible discrete setting, we perform Stage II as described in Section IV.B without modification. \n\n\\section{Appendix: Additional Results}\nTable III contains a larger set of results obtained using the \\ACOPFDs approach. For all these simulations, a $k_{adj}$ value of 0.1 was used. The case file can be found at \\cite{cases-github}. 
\n\\textit{Note: this table was omitted from the original PSCC submission due to space constraints.}\n\\bigbreak\n\n\\begin{table}[h]\n\\onecolumn\n\\begin{center}\n\\caption{Additional \\ACOPFDs results}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\textbf{Case} & \\textbf{Buses} & \\textbf{Tap Changers} & \\textbf{Phase Shifter} & \\textbf{Switched Shunt} & \\textbf{Objective} & \\textbf{\\% Adj} \\\\ \\hline\nA & 3022 & 981 & 4 & 399 & 5.377e5 & 24.9 \\% \\\\ \\hline\nA* & 3022 & 981 & 4 & 399 & 5.357e5 & 25.6 \\% \\\\ \\hline\nB & 6867 & 759 & 5 & 161 & 1.216e5 & 90.8 \\% \\\\ \\hline\nB* & 6867 & 759 & 5 & 161 & 1.216e5 & 93.9 \\% \\\\ \\hline\nC & 11152 & 530 & 0 & 500 & 5.294e5 & 77.8 \\% \\\\ \\hline\nC* & 11152 & 530 & 0 & 500 & 5.294e5 & 82.2 \\% \\\\ \\hline\nD & 16789 & 997 & 2 & 1723 & 3.592e5 & 74.7 \\% \\\\ \\hline\nD* & 16789 & 997 & 2 & 1723 & 3.591e5 & 89.2 \\% \\\\ \\hline\nE & 6549 & 1846 & 3 & 436 & 9.774e4 & 80.7 \\% \\\\ \\hline\nE* & 6549 & 1846 & 3 & 436 & 9.777e4 & 91.1 \\% \\\\ \\hline\nF & 14393 & 0 & 5 & 724 & 2.317e4 & 33.7 \\% \\\\ \\hline\nF* & 14393 & 0 & 5 & 724 & 2.312e4 & 33.2 \\% \\\\ \\hline\nG & 21849 & 997 & 2 & 1710 & 1.879e5 & 60.9 \\% \\\\ \\hline\nG* & 21849 & 997 & 2 & 1710 & 1.879e5 & 66.4 \\% \\\\ \\hline\nH & 31156 & 12 & 36 & 2451 & 2.009e5 & 76.3 \\% \\\\ \\hline\nH* & 31156 & 12 & 36 & 2451 & 2.009e5 & 63.9 \\% \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\end{document}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIntelligent reflecting surface (IRS) is an artificial planar array consisting of numerous reconfigurable passive elements with the capability of manipulating the impinging electromagnetic signals and offering anomalous reflections \\cite{Tan2018SRA,cuiTJ2017metasurface,Larsson2020Twocritical,Garcia2020JSACIRS_gap_scatter_reflect}.\nMany recent studies have indicated that IRS is a promising solution to build a programmable wireless environment via steering the incident signal in fully customizable ways to enhance the spectral and energy efficiency of legacy systems \\cite{RenzoJSACposition,magzineWuqq,Liaskos2018magzineIRS,Renzo2019position}.\nMost contributions in this area focus on joint active and passive procoding design with various objectives and constraints \\cite{Wuq2019TWCprecoderIRS,HY2020TWCprecoderIRS,mux2020TWCprecoderIRS,ZhouG2020TSPIRSprecoderimperfectCE,LinS2021TWCprecoderIRS}.\nThe potential gains claimed by these works highly depend on the availability of accurate channel state information (CSI).\nHowever, channel estimation is a challenging task for the IRS-assisted system because there are no sensing elements or radio frequency chains, and thus there is no baseband processing capability in the IRS.\n\n\n\n\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{model_v2.eps}\n\\caption{The IRS-assisted multiuser MISO communication system.}\n\\label{IRS_system}\n\\end{figure}\n\n\\begin{figure}\n[!t]\n \\centering\n \\subfigure[Protocol for SU-MISO system]{\n \\label{protocol:a}\n \\includegraphics[width=.6\\columnwidth]{protocol_0.eps}}\n \\subfigure[Directly extended protocol for MU-MISO system]{\n \\label{protocol:b}\n \\includegraphics[width=.6\\columnwidth]{protocol_1.eps}}\n \\caption{The conventional uplink channel estimation protocols without exploiting the common-link structure.}\n \\label{protocol_tran}\n\\end{figure}\n\nSome early-attempted works \\cite{Jensen2020SUCE,zhangruiJSACSUCE,YouxhSPLCESU} estimate the 
uplink cascaded IRS channel for single user (SU) multiple-input-single-output (MISO) systems using the protocol shown in {\\figurename~\\ref{protocol:a}}.\nIn these works, the cascaded channel is equivalently represented as a traditional $M\\times N$ MIMO channel, where $M$ and $N$ are the base station (BS) array size and the IRS size, respectively, and the sensing matrix for channel reconstruction consists of the phase shifting vectors in consecutive training timeslots.\nMany works \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE,\nZhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} directly extend the SU protocol to multi-user (MU) MISO systems, in which $K$ users transmit orthogonal pilot sequences in each training timeslot, as shown in {\\figurename~\\ref{protocol:b}}.\nBased on this protocol,\nthe on-off IRS state (amplitude) control strategy is proposed in \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} to better decompose the MU cascaded channel coefficients and ease the estimation of each user's cascaded channel.\n\nIt is pointed out by \\cite{Liuliang_CE2020TWC} that direct application of the SU protocol on an MU-MISO system fails to exploit the structural property and results in substantially larger pilot overheads.\nIntuitively, all the cascaded channels share a common BS-IRS link, and it is possible to reduce the pilot overhead since the number of independent variables is $MN+NK$ instead of $MNK$.\nOne algorithm is proposed in \\cite{DaiLLFullD} with the idea of sequentially estimating the BS-IRS channel and the IRS-user channels.\nHowever, it requires that the BS can work in full-duplex mode.\nFor the cascaded channel estimation,\na new channel estimation protocol is proposed in \\cite{Liuliang_CE2020TWC}.\nSpecifically, the cascaded channel of one reference user is first estimated based on the SU protocol, and then the other users' channels are estimated by only estimating the ratios of their channel coefficients to the reference channel, which can be referred to as the relative channels.\nThe overall training overhead is reduced from $NK$ to $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nHowever, there is an error propagation issue associated with this scheme, since a low-accuracy estimate of the reference channel may jeopardize the estimates of the relative channels.\nMoreover, some IRS elements need to be switched off while estimating the relative channels for coefficient decomposition \\cite{Liuliang_CE2020TWC}.\n\nFor IRS design, the ``off'' state means no reflection (i.e., perfectly absorbing the incident signals), and hence it is difficult to realize \\cite{Perfect_Absorption1,Perfect_Absorption2,Perfect_Absorption3} and also incurs additional implementation costs, since this state is unnecessary for data transmission after the channel estimation.\nIn addition, switching off the IRS elements causes reflection power loss, which will lower the received signal-to-noise ratio (SNR).\nSome recent works attempt to overcome this issue using ``always-ON'' training schemes.\nIn \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, cascaded channel estimation algorithms based on tensor decomposition are proposed for MU-MISO systems, without requiring selected IRS elements to be switched off, using the protocol in {\\figurename~\\ref{protocol:b}}.\nIn particular, the training phase shifts are optimized to minimize the mean squared error (MSE), and it has been verified that the 
discrete-Fourier-transform (DFT)-based training phase shifting configuration is optimal in this scenario.\nHowever, the pilot overhead is $NK$ since the protocol in {\\figurename~\\ref{protocol:b}} does not utilize the\ncommon-link property.\nIn \\cite{double_IRS}, an always-ON training scheme is proposed, which extends the protocol in \\cite{Liuliang_CE2020TWC} to the double-IRS aided system.\nHowever, the number of BS antennas needs to be equal to or larger than the number of IRS elements (i.e., $M\\geq N$) to guarantee a full-rank measurement matrix to estimate the relative channels between the reference user and the other users.\\footnote{\nAccording to the property ${\\rm{rank}}({\\bf A} \\otimes{\\bf B})={\\rm{rank}}({\\bf A} ){\\rm{rank}}({\\bf B} )$, the rank of the measurement matrix in equation (40) of \\cite{double_IRS} cannot be larger than $(K-1)M$ while the targeted rank is $(K-1)N$.\n}\nThis assumption is quite restrictive, as the number of elements of the IRS ($N$) is usually larger than the number of antennas at the BS.\n\n\nAnother critical problem is feasibility when channel statistical prior information is utilized to improve the channel estimation accuracy,\nalthough this is a common idea in conventional MIMO channel estimation.\nIn \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}, statistical knowledge of the individual BS-IRS link and IRS-user links is required. However, this information is not available in practice, since\nnone of the existing algorithms, to the best of the authors' knowledge, can reconstruct the individual channel coefficients when $M<N$ (i.e., when the IRS has more elements than the BS has antennas, $N>M$).\nWe propose a holistic solution to address the aforementioned issues.\nIn particular, a novel always-ON training protocol is designed; meanwhile, the common-link structure is utilized to reduce the pilot overhead.\nFurthermore, an optimization-based cascaded channel estimation framework, which is flexible enough to utilize more practical channel statistical prior information, is proposed.\nThe following summarizes our key contributions.\n\\begin{itemize}\n\\item {\\bf Always-ON Channel Estimation Protocol Exploiting the Common-Link Structure}:\n We propose a novel channel estimation protocol without the need for on-off amplitude control, thereby avoiding the reflection power loss.\n Meanwhile, the common-link structure is exploited and the pilot overhead is reduced to $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\n \n\n In addition, the proposed protocol is applicable with any number of elements at the IRS (also including $N \\leq M$).\n Further, it does not need a ``reference user'', and as such, the estimation performance is enhanced owing to the multiuser diversity.\n\n\n\\item {\\bf Optimization-Based Cascaded Channel Estimation Framework}:\n Since there is no on-off amplitude control, the cascaded channel coefficients are highly coupled.\n In order to exploit the common-link structure, we decompose the cascaded channel coefficients as the product of the common-link variables and the user-specific variables, and then an optimization-based joint channel estimation problem is formulated based on the maximum a posteriori (MAP) rule. The proposed optimization-based approach is flexible enough to incorporate different kinds of channel statistical prior setups. 
Specifically, we utilize the combined statistical information of the cascaded channels, which is a weaker requirement compared to statistical knowledge of the individual BS-IRS and IRS-user channels.\n Then, a low-complexity alternating optimization algorithm is proposed to achieve a local optimum solution.\n Simulation results demonstrate that the optimization-based solution with the proposed protocol achieves a gain of more than $15$ dB compared to the benchmark.\n \n\n\n\\item {\\bf Training Phase Shifting Optimization for the Proposed Protocol}:\n The phase shifting configuration can substantially enhance the channel estimation performance of the cascaded IRS channel because the phase shifting vectors are important components in the measurement matrix for channel reconstruction.\n However, traditional solutions \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} for phase shifting optimization in SU cascaded IRS channel estimation cannot be directly applied to the MU case, because the cascaded channel coefficients are highly coupled when the common-link structure is exploited.\n We propose a new formulation to optimize the phase shifting configuration, which maximizes the average reflection gain of the IRS.\n Simulation results further verify that the proposed configuration achieves a gain of more than $3$ dB compared to the state-of-the-art baselines.\n \n\n\n\\end{itemize}\n\n\n\n\n\\section{System Model}\\label{system model}\n\n\n\\subsection{System Model of MU-MISO IRS Systems}\nThis paper investigates the uplink channel estimation in a narrow-band IRS-aided MU-MISO communication system that consists of one BS with $M$ antennas, one IRS with $N$ elements, and $K$ single-antenna users,\\footnote{\n{We adopt the single-antenna-user setup here for ease of presentation. The signal model can be directly extended to the setup where users have multiple antennas by transmitting orthogonal uplink pilot sequences on different antennas.\n}}\nas illustrated in {\\figurename~\\ref{IRS_system}}.\nLet ${\\bf h}_{{\\rm d},k} \\in {\\mathbb C}^{M \\times 1}$ denote the BS-user channel (a.k.a. the direct channel) for user $k$, ${\\bf G} \\in {\\mathbb C}^{N \\times M}$ denote the common BS-IRS channel, and ${\\bf h}_{{\\rm r},k} \\in {\\mathbb C}^{N \\times 1}$ denote the IRS-user channel for user $k$.\nWe assume quasi-static block fading for all the channels, such that the channel coefficients remain constant within one channel coherence interval and are independent and identically distributed (i.i.d.) between coherence intervals. Note that the quasi-static model considers the worst-case scenario, in which the temporal correlations between blocks are not exploited. 
In practice, the pilot overheads can be further reduced if one exploits temporal correlations of the channel blocks \\cite{kalman_filter_IRS2021TVT,kalman_filter_IRS2021chinacom}, but this is outside the scope of the paper.\n\nThe received baseband signal at the BS is given by\n\\begin{equation}\\label{equ:y_model1}\n\\begin{aligned}[b]\n{\\bf y}_{t}&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf G}^{\\rm T} {\\bf \\Theta}_t {\\bf h}_{{\\rm r},k} \\right) x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere $t$ is the time index, $x_{k,t}$ is the transmit pilot symbol from user $k$, ${\\bf z}_t \\sim {\\cal{CN}}({\\bm 0},\\sigma_0^2 {\\bf I}_{M}) $ is the {additive white Gaussian noise} (AWGN), and ${\\bf \\Theta}_t \\in {\\mathbb C}^{N \\times N}$ is the IRS reflection coefficient matrix.\nIt is known that ${\\bf \\Theta}_t$ is a diagonal matrix such that ${\\bf \\Theta}_t={\\rm diag}({\\bm \\theta}_t)$, where ${\\bm \\theta}_t=[e^{\\jmath \\varphi_{t,1}},e^{\\jmath \\varphi_{t,2}},\\cdots,e^{\\jmath \\varphi_{t,N}}]^{\\rm T}$ is the phase shifting vector from the IRS.\\footnote{\n{In practice, the reflection efficiency cannot be 1, which is known as the reflection loss. However, for ease of presentation, this loss can be absorbed into the path loss of ${\\bf G}$ since it is a constant value.\n}}\n\n\n\n\n\n\n\n\\subsection{IRS Cascaded Channel Model}\\label{sec:model_cascaded}\n\nDenote the cascaded channel related to the BS's $m$-th antenna and the $k$-th user by\n\\begin{equation}\\label{equ:cascaded_channel_vector}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm r},k}) {\\bf g}_m,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf g}_m$ is the $m$-th column in ${\\bf G}=[{\\bf g}_1,\\cdots,{\\bf g}_M]$.\nThe cascaded channel over all BS antennas is given by ${\\bf H}_{{\\rm I},k}=[{\\bf h}_{{\\rm I},k,1}^{\\rm T},\\cdots,{\\bf h}_{{\\rm I},k,M}^{\\rm T}]^{\\rm T}$, and we have\n\\begin{equation}\\label{equ:cascaded_channel}\n\\begin{aligned}[b]\n{\\bf H}_{{\\rm I},k}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},k}).\n\\end{aligned}\n\\end{equation}\nSubstituting \\eqref{equ:cascaded_channel} into \\eqref{equ:y_model1}, the received signal is given by\n\\begin{equation}\\label{equ:y_model_vn}\n\\begin{aligned}[b]\n{\\bf y}_t&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf H}_{{\\rm I},k} {\\bm{\\theta}}_t \\right) x_{k,t}\n+{\\bf z}_t\n.\n\\end{aligned}\n\\end{equation}\n\nIn \\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} and\n\\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, ${\\bf H}_{{\\rm I},k}$ for all $k$ are estimated\nwithout exploiting the implicit common link structure behind the ${\\bf H}_{{\\rm I},k}$ for all $k$. As a result, there are $MNK$ variables to be estimated and this poses a heavy penalty on the required pilot overheads in the MU-MISO system.\nOn the other hand, from \\eqref{equ:cascaded_channel}, we can see that the cascaded channels $\\left\\{{\\bf H}_{{\\rm I},1}, {\\bf H}_{{\\rm I},2},\\cdots,{\\bf H}_{{\\rm I},K}\\right\\}$ all share a common BS-IRS link $\\bf G$.\nSpecifically,\n ${\\bf H}_{{\\rm I},k}$ is the multiplication of the common $\\bf G$ and the user-specific ${\\rm{diag}}({\\bf h}_{{\\rm r},k})$.\nIn other words, the cascaded channels are not independent variables. 
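\nTo make this dependence concrete, the following minimal numerical sketch (with randomly generated channels under an assumed i.i.d. Rayleigh fading model; the variable names are illustrative only and are not part of the proposed scheme) constructs every cascaded channel ${\\bf H}_{{\\rm I},k}$ from a single shared $\\bf G$ and the per-user ${\\bf h}_{{\\rm r},k}$, and counts the resulting free channel variables.\n\\begin{verbatim}\nimport numpy as np\n\nM, N, K = 8, 32, 8   # values taken from the simulation section\nrng = np.random.default_rng(0)\n\n# Common BS-IRS link and per-user IRS-user links (assumed i.i.d. Rayleigh).\nG = (rng.standard_normal((N, M)) + 1j*rng.standard_normal((N, M))) / np.sqrt(2)\nH_r = (rng.standard_normal((N, K)) + 1j*rng.standard_normal((N, K))) / np.sqrt(2)\n\n# Cascaded channel of user k: H_I,k = G^T diag(h_r,k), an M-by-N matrix.\nH_I = [G.T @ np.diag(H_r[:, k]) for k in range(K)]\nprint("each cascaded channel has shape", H_I[0].shape)  # (M, N)\n\nprint("cascaded coefficients:", M*N*K)       # if treated as independent\nprint("free channel variables:", M*N + N*K)  # with the shared BS-IRS link\n\\end{verbatim}\n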
In fact, the common link structure $\\bf G$ should be exploited in the channel estimation.\nAs a result, the total number of independent variables\nis reduced to $MN+NK$.\n\n\nWe assume ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]={\\bf 0}$ for all $k$ and $m$.\\footnote{\n{The proposed algorithm in this paper is applicable for the case when an LoS link exists and ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]\\neq{\\bf 0}$. Simply substitute ${\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m}\\right]$ into the prior distribution model in \\eqref{equ:prior}. Note that since the rank-1 LoS link usually is very strong and easier to be estimated, it will dominate the power of the channel coefficients, and the NMSE will be better than the NLoS scenario investigated in this paper.\n}}\nThe covariance of the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ is given by\n\\begin{equation}\\label{equ:cascaded_channel_statisical}\n\\begin{aligned}[b]\n{\\bf C}_m^{(k)}={\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right].\n\\end{aligned}\n\\end{equation}\nIn this paper, we focus on the case when ${\\bf C}_m^{(k)}$ is a full rank matrix for all $m$ and $k$, which is generally true in sub-6 GHz bands.\nWe design a channel estimation algorithm and phase shifting configuration scheme by utilizing knowledge of ${\\bf C}_m^{(k)}$.\nNote that in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}, the channel estimation algorithms for the cascaded channels require knowledge of the covariance of the IRS-BS link $\\bf G$ as well as the covariance of the IRS-user links ${\\bf h}_{{\\rm r},k}$. Also note that knowledge of the covariance of the cascaded channel ${\\bf C}_m^{(k)}$ is a weaker requirement compared to knowledge of the individual covariances $\\bf G$ and ${\\bf h}_{{\\rm r},k}$.\n\n\n\n\n\n\n\n\n\\section{Proposed Channel Estimation Protocol}\\label{dense_scheme}\n\n\n\n\n\\subsection{Overview of the Selected On-Off Channel Estimation Protocol}\\label{overview_onoff}\nThe selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC} is illustrated in {\\figurename~\\ref{protocol:c}}, and consists of three stages.\nIn stage I, the BS-user channels are estimated by switching off all the IRS elements.\nIn stage II, a reference user is selected, which is indexed by user $1$, and its cascaded channel ${\\bf H}_{{\\rm I},1}$ is estimated using the algorithm for SU-MISO cases \\cite{zhangruiJSACSUCE}.\nIn stage III, the other $K-1$ users' cascaded channels are estimated by exploiting the common-link property.\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.95\\columnwidth]{protocol_2.eps}\n\\caption{Selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC}.}\n\\label{protocol:c}\n\\end{figure}\n\n\nWe focus on the estimation in stage III. 
Substituting ${\\bf H}_{{\\rm I},1}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},1})$ into \\eqref{equ:y_model1}, the received signal is given by\n\\begin{equation}\\label{equ:y_model_LL}\n\\begin{aligned}[b]\n{\\bf y}_t&=\\sum_{k=1}^K {\\bf h}_{{\\rm d},k} x_{k,t}+\n{\\bf H}_{{\\rm I},1} {\\bm{\\theta}}_t x_{1,t}\\\\\n&\\qquad+\\sum_{k=2}^K {\\bf H}_{{\\rm I},1} {\\rm diag}({\\bm \\theta}_t) {\\bf h}_{{\\rm u},k} x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ for all $k=2,3,\\cdots,K$, which are the user-specific variables to be estimated in this stage after exploiting ${\\bf H}_{{\\rm I},1}$ as the common-link variable.\nIn \\cite{Liuliang_CE2020TWC}, to estimate ${\\bf h}_{{\\rm u},k}$, only the $k$-th user sends $x_{k,t}=1$ and all the other users are inactive such that $x_{j,t}=0$ for all $j \\neq k$.\nThe received signal is given by\n\\begin{equation\n\\begin{aligned}[b]\n{\\bf y}_\n&={\\bf h}_{{\\rm d},k}+ {\\bf H}_{{{\\rm I},1}} {\\rm diag}({\\bm \\theta}_t) {\\bf h}_{{\\rm u},k}\n+ {\\bf z}_t\n.\n\\end{aligned}\n\\end{equation}\nBy only switching on the first $M$ IRS elements with $[\\theta_{t,1},\\theta_{t,2},\\cdots,\\theta_{t,M}]^{\\rm T}={\\bf 1}$, the first $M$ coefficients in ${\\bf h}_{{\\rm u},k}$ can be estimated by\n\\begin{equation\n\\begin{aligned}[b]\n\\begin{bmatrix}{h}_{{\\rm u},k,1}\\\\ \\vdots \\\\ { h}_{{\\rm u},k,M}\\end{bmatrix}\n=\n{\n\\begin{bmatrix}\n H_{{{\\rm I},1},11} & \\dots & H_{{{\\rm I},1},1M}\\\\\n \\vdots & \\ddots & \\vdots\\\\\n H_{{{\\rm I},1},M1} & \\dots & H_{{{\\rm I},1},MM}\n\\end{bmatrix}\n}^{-1}\n({\\bf y}_i-{\\bf h}_{{\\rm d},k})\n.\n\\end{aligned}\n\\end{equation}\nNote that one may adopt the LMMSE estimator to achieve better performance if the covariance of ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ is available \\cite[Section V]{Liuliang_CE2020TWC}.\nIn the next timeslot, the next $M$ IRS elements are switched on with $[\\theta_{t,M+1},\\theta_{t,M+2},\\cdots,\\theta_{t,2M}]^{\\rm T}={\\bf 1}$ while the other elements are switched off to estimate the next $M$ coefficients in ${\\bf h}_{{\\rm u},k}$. The estimation continues in this way until all the coefficients in ${\\bf h}_{{\\rm u},k} $ are estimated, which finally costs $I=\\lceil \\frac{N}{M}\\rceil$ timeslots.\nThe overall pilot overhead of the protocol in \\cite{Liuliang_CE2020TWC} is $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\n\n\n\\subsection{Always-ON Channel Estimation Protocol}\\label{proposed_protocol}\nWe propose a novel always-ON channel estimation protocol without switching off\nselected IRS elements. The proposed protocol consists of two stages, as illustrated in {\\figurename~\\ref{protocol:d}}.\nIn particular, stage I contains $L_1+1$ timeslots, where $L_1 = \\lceil \\frac{N}{M}\\rceil$, and each timeslot contains $K$ samples. Stage II contains $L_2=N-L_1$ timeslots, and each timeslot contains only one sample.\nTherefore, we have $K(L_1+1)+L_2$ received samples in total. 
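\nFor concreteness, the following short sketch (illustrative only; the values $M=8$, $N=32$, and $K=8$ are taken from the simulation section) tabulates this timeslot and sample bookkeeping; note that $K(L_1+1)+L_2$ coincides with the overall pilot overhead $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$ reported below.\n\\begin{verbatim}\nimport math\n\ndef protocol_counts(M, N, K):\n    # Timeslot/sample bookkeeping of the proposed always-ON protocol (sketch).\n    L1 = math.ceil(N / M)            # stage-I timeslots (excluding timeslot 0)\n    L2 = N - L1                      # stage-II timeslots\n    samples = K * (L1 + 1) + L2      # total received samples\n    overhead = K + N + L1 * (K - 1)  # overall pilot overhead\n    return L1, L2, samples, overhead\n\nprint(protocol_counts(M=8, N=32, K=8))   # -> (4, 28, 68, 68)\n\\end{verbatim}\n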
As defined in \\eqref{equ:y_model1}, the $t$-th received sample is given by\n\\begin{equation}\\label{equ:y_model2}\n\\begin{aligned}[b]\n{\\bf y}_{t}&=\\sum_{k=1}^K \\left({\\bf h}_{{\\rm d},k}+ {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_t) {\\bf h}_{{\\rm r},k} \\right) x_{k,t}\n+{ {\\bf z}_t}\n,\n\\end{aligned}\n\\end{equation}\nwhere $t=1,2,\\cdots,K(L_1+1)+L_2$.\n\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{protocol_3.eps}\n\\caption{Proposed always-ON protocol.}\n\\label{protocol:d}\n\\end{figure}\n\n\n\\subsubsection{Received Signal Samples in Each Training Timeslot}\nTo facilitate analysis, we introduce new notations on the received signal samples within one training timeslot indexed by $\\ell=0,1,\\cdots,N$.\n\\begin{itemize}\n\\item\n{{\\bf {Stage I}} (Users send orthogonal pilot sequences ${\\bf X}$)}:\nStage I consists of training timeslots $\\ell=0,1,\\cdots,L_1$.\nDenote by ${\\bf X} \\in {\\mathbb C}^{K \\times K}$, where ${\\bf X}^{\\rm H} {\\bf X}=K{\\bf I}_K$, the orthogonal pilot sequences consisting of unit-modulus elements.\nIn the $\\ell$-th timeslot, $K$ users transmit ${\\bf X}$ by $K$ samples, and the IRS is configured by the phase shifting vector ${\\bm \\theta}_\\ell$.\nThe $K$ received samples in the $\\ell$-th timeslot are ${\\bf y}_{K\\ell+1},{\\bf y}_{K\\ell+2},\\cdots,{\\bf y}_{K\\ell+K}$.\nLet ${{\\bf Y}}_{\\ell}=[{\\bf y}_{K\\ell+1},{\\bf y}_{K\\ell+2},\\cdots,{\\bf y}_{K\\ell+K}]$. We have\n\\begin{equation}\\label{equ:y_model_stage_I}\n\\begin{aligned}[b]\n{{\\bf Y}}_{\\ell}&=\\left({\\bf H}_{\\rm d} + {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_{\\ell}) {\\bf H}_{\\rm r} \\right) {\\bm X}\n+{{\\bf Z}}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{align}\n{\\bf H}_{\\rm d}&=[{\\bf h}_{{\\rm d},1},\\cdots,{\\bf h}_{{\\rm d},K}] \\in {\\mathbb C}^{M \\times K},\\\\\n{\\bf H}_{\\rm r}&=[{\\bf h}_{{\\rm r},1},\\cdots,{\\bf h}_{{\\rm r},K}] \\in {\\mathbb C}^{N \\times K}\n\\end{align}\nare the stacked channel coefficient matrices,\n$\\ell$ is the timeslot index,\nand ${{\\bf Z}}_{\\ell}=[{\\bf z}_{K\\ell+1},{\\bf z}_{K\\ell+2},\\cdots,{\\bf z}_{K\\ell+K}]$ denotes the noise.\n\n\n\\item {{\\bf {Stage II}} (Users send pilot $\\bar{\\bf x}$, which is the first column of $\\bf X$)}:\nStage II consists of training timeslots $\\ell=L_1+1,L_1+2,\\cdots,N$.\nWe denote the first column of $\\bf X$ by\n\\begin{equation}\\label{equ:barx}\n\\bar{\\bf x}=[{\\bar x}_{1},\\cdots,{\\bar x}_{K}]^{\\rm T}.\n\\end{equation}\nIn the $\\ell$-th timeslot, the users transmit $\\bar{\\bf x}$, while the IRS is configured by ${\\bm \\theta}_\\ell$.\nThe received signal in stage II is denoted by\n\\begin{equation}\\label{equ:y_model_stage_III}\n\\begin{aligned}[b]\n\\bar{\\bf y}_{\\ell}\n&= \\left({\\bf H}_{\\rm d} + {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r} \\right) \\bar{\\bf x}+{\\bf z}_{\\ell+(K-1)L_1+K}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\bar{\\bf y}_{\\ell}={\\bf y}_{\\ell+(K-1)L_1+K}$, and $\\ell=L_1+1,L_1+2,\\cdots,N$.\n\\end{itemize}\n\n$\\bar{\\bf y}_{\\ell}$ may extend to all the $N+1$ timeslots in the protocol by\n\\begin{equation}\\label{equ:bar_y}\n\\begin{aligned}[b]\n\\bar{\\bf y}_{\\ell}=\n\\begin{cases}\n{\\bf y}_{1+K \\ell}, \\; & {\\rm for} \\; \\ell=0,\\cdots,L_1,\\\\\n{\\bf y}_{\\ell+(K-1)L_1+K}, & {\\rm for} \\; \\ell=L_1+1,\\cdots,N.\n\\end{cases}\n\\end{aligned}\n\\end{equation}\n\nThe pilot overhead in stage I is $(\\lceil \\frac{N}{M}\\rceil+1) K$, and the 
overhead in stage II is $N-\\lceil \\frac{N}{M}\\rceil $. The overall pilot overhead of the proposed protocol is $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nNote that the pilot overhead is significantly reduced by about $M$ times compared to the protocol in\n\\cite{Araujo2021JSAC_CE_PARAFAC,Mishra2019CEonoff,Elbir2020WCL_DL_CE} and\n\\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}, which require $NK$ pilots.\n\n\n\\subsubsection{Signal Pre-processing}\nIn the proposed protocol, we set ${\\bm \\theta}_0=-{\\bm \\theta}_1$ to decouple the estimation on the BS-user channel\\footnote{\n The BS-user channel can be estimated based on ${{\\bf Y}}_0$ and ${{\\bf Y}}_1$ by the linear minimum mean squared error estimator, which is similar to the methods in existing works\n\\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT,Liuliang_CE2020TWC} (See Appendix \\ref{app_Estimate_Hd}).} and on the cascaded IRS channel.\nTo facilitate the estimation of the cascaded IRS channel, signal pre-processing to remove the BS-user channel from the received signals is performed.\n\n\\begin{itemize}\n\\item \\emph{Pre-processing on ${{\\bf Y}}_\\ell$ for $\\ell=1,2,\\cdots,L_1$}:\nFor $\\ell=1$, we have\n\\begin{equation}\\label{equ:R1}\n\\begin{aligned}[b]\n{\\bf R}_{1} &= \\frac{1}{2}\\left({{\\bf Y}}_1-{{\\bf Y}}_0\\right) \\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_1) {\\bf H}_{\\rm r}{\\bf X}\n+\\tilde{\\bf Z}_1\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\tilde{\\bf Z}_1$ follow i.i.d. ${\\cal{CN}}({ 0},\\frac{1}{2}\\sigma_0^2) $.\nFor $\\ell=2,3,\\cdots,L_1$, we have\n\\begin{equation}\\label{equ:R2L1}\n\\begin{aligned}[b]\n{\\bf R}_{\\ell} &= {{\\bf Y}}_{\\ell} -\\frac{1}{2}\\left({{\\bf Y}}_0+{{\\bf Y}}_1\\right)\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r} {\\bf X}\n+\\tilde{\\bf Z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\tilde{\\bf Z}_{\\ell}$ follow ${\\cal{CN}}({ 0},\\frac{3}{2}\\sigma_0^2) $.\nNote that the difference between \\eqref{equ:R1} and \\eqref{equ:R2L1} is that they have different noise variances.\n\n\\item \\emph{Pre-processing on $\\bar{\\bf y}_{\\ell}$ for $\\ell=1,2,\\cdots,N$}:\nIn the same manner, the BS-user channel is removed from $\\bar{\\bf y}_{\\ell}$\nfor $\\ell=1,2,\\cdots,N$:\n\\begin{equation}\\label{equ:rbar}\n\\begin{aligned}[b]\n\\bar{\\bf r}_{\\ell} &= \\bar{\\bf y}_{\\ell}-\\frac{1}{2}\\left(\\bar{\\bf y}_{0}+\\bar{\\bf y}_{1}\\right)\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r}\\bar{\\bf x}\n+\\bar{\\bf z}_{\\ell}\\\\\n&={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x}) {\\bm \\theta}_\\ell\n+\\bar{\\bf z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere the elements in $\\bar{\\bf z}_{\\ell}$ follow ${\\cal{CN}}({ 0},\\frac{3}{2}\\sigma_0^2) $.\n\\end{itemize}\n\n\\begin{lemma}[Effectiveness of the proposed protocol]\\label{lemma0}\nThe cascaded channels ${\\bf H}_{{\\rm I},k}={\\bf G}^{\\rm T} {\\rm{diag}}({\\bf h}_{{\\rm r},k})$ ($k=1,2,\\cdots,K$) for all $K$ users can be perfectly recovered with probability one by adopting an orthogonal phase shifting configuration matrix ${\\bm \\Phi}=[{\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N]$ whose elements are all non-zero, if there is no noise, and\n${\\bf G}={\\bf F}_{\\rm R} {\\ddot{\\bf G}} {\\bf F}_{\\rm B}^{\\rm T}$ and ${\\bf h}_{{\\rm r},k}={\\bf F}_{\\rm R}{\\ddot{\\bf h}}_{{\\rm r},k}$ where ${\\bf F}_{\\rm B} \\in {\\mathbb 
C}^{M \\times M}$ and ${\\bf F}_{\\rm R} \\in {\\mathbb C}^{N \\times N}$ are the angular-domain bases for the BS antenna array and the IRS, respectively,\n${\\ddot{\\bf G}}=[{\\ddot{\\bf g}}_1,{\\ddot{\\bf g}}_2,\\cdots,{\\ddot{\\bf g}}_M]$, and ${\\ddot{\\bf g}}_m$ and ${\\ddot{\\bf h}}_{{\\rm r},k}$ for all $m$ and $k$ are pairwise independent following zero-mean multivariate normal distributions.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemma0}.\n\\end{IEEEproof}\n\nCompared with \\cite{Liuliang_CE2020TWC} and \\cite{double_IRS}, the proposed protocol has two main differences, which make it possible to keep all the IRS elements ON and, at the same time, to reduce the pilot overhead by utilizing the common-link structure.\nHere, we explain the intuition by supposing that an estimation algorithm similar to those in \\cite{Liuliang_CE2020TWC} and \\cite{double_IRS} is adopted, i.e., the reference channel is estimated first and then the relative channels are estimated.\nFirst, instead of requiring a specific reference user, we design a virtual reference channel ${\\bf H}_{\\rm v}={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})$, which is fair for all users and can be reconstructed using the $N$ observations in \\eqref{equ:rbar}.\nSecond, in our protocol, the relative channels can be estimated by using ${\\bf R}_{\\ell}$ in \\eqref{equ:R1} and \\eqref{equ:R2L1},\nresulting in the measurement matrix\n$[({\\bf H}_{\\rm v}{\\rm diag} ({\\bm \\theta}_1))^{\\rm T},\\cdots,({\\bf H}_{\\rm v}{\\rm diag} ({\\bm \\theta}_{L_1}))^{\\rm T}]^{\\rm T}$.\nOne can see that there are $L_1$ different ${\\bm \\theta}_{\\ell}$ in the measurement matrix instead of a fixed one as in \\cite{double_IRS}. When $L_1 \\geq \\lceil \\frac{N}{M}\\rceil$, the rank of the measurement matrix can reach $N$ under a proper phase shifting configuration, so that a reasonable estimate can be obtained.\\footnote{\nIt is seen that the selected on-off protocol in \\cite{Liuliang_CE2020TWC} can be treated as a special case of our protocol by setting $\\bar{\\bf x}=[1,0,\\cdots,0]^{\\rm T}$ and selecting different parts of the IRS elements to be ON and OFF to obtain $L_1$ different ${\\bm \\theta}_{\\ell}$, as introduced in Section \\ref{overview_onoff}.\n}\nNote that the above two-step channel estimation algorithm is only for explaining the intuition of the proposed approach. 
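\nTo illustrate this two-step intuition numerically, the following noise-free sketch (with assumed i.i.d. complex Gaussian channels and random unit-modulus training phases; it is not the estimator proposed in the next section) first reconstructs the virtual reference channel from the $N$ observations in \\eqref{equ:rbar} and then recovers the relative channels from the stacked stage-I observations in \\eqref{equ:R1} and \\eqref{equ:R2L1}.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nM, N, K = 8, 32, 8\nL1 = int(np.ceil(N / M))\n\ndef crandn(*shape):   # assumed i.i.d. complex Gaussian entries\n    return (rng.standard_normal(shape) + 1j*rng.standard_normal(shape)) / np.sqrt(2)\n\nG, H_r = crandn(N, M), crandn(N, K)\nx_bar = np.ones(K)                          # first column of an orthogonal pilot matrix\nPhi = np.exp(2j*np.pi*rng.random((N, N)))   # unit-modulus phases, columns are theta_l\n\n# Step 1: virtual reference channel from the N noise-free observations r_bar.\nH_v = G.T @ np.diag(H_r @ x_bar)\nH_v_hat = (H_v @ Phi) @ np.linalg.inv(Phi)\n\n# Step 2: relative channels from the L1 stacked stage-I measurement blocks.\nH_A = np.diag(1.0 / (H_r @ x_bar)) @ H_r\nA = np.vstack([H_v_hat @ np.diag(Phi[:, l]) for l in range(L1)])   # (L1*M) x N\nB = np.vstack([H_v @ np.diag(Phi[:, l]) @ H_A for l in range(L1)])\nH_A_hat = np.linalg.lstsq(A, B, rcond=None)[0]\n\n# The cascaded channels H_I,k = H_v diag(h_A,k) are recovered exactly.\nerr = max(np.abs(H_v_hat @ np.diag(H_A_hat[:, k]) - G.T @ np.diag(H_r[:, k])).max() for k in range(K))\nprint(err)   # ~1e-12, i.e., exact up to numerical precision\n\\end{verbatim}\nHere $L_1 M=N$, so the stacked measurement matrix is square and full rank with probability one, which is consistent with Lemma \\ref{lemma0}.\n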
In the next section, we will propose an optimization-based cascaded channel estimation algorithm that may achieve more reliable performance.\n\n\n\\section{Optimization-Based MU-Cascaded IRS Channel Estimation}\\label{sec:opt_est}\nIn this section, we propose an optimization-based channel estimation on the cascaded IRS channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$ based on the pre-processed observations $\\{{\\bf R}_{\\ell},\\bar{\\bf r}_{\\ell}\\}$ in \\eqref{equ:R1}, \\eqref{equ:R2L1} and \\eqref{equ:rbar}.\nWe consider a general decomposition on the cascaded channel, which is friendly in utilizing the channel prior knowledge\nand the common-link structure across the multiple users.\nSpecifically, we adopt the MAP\napproach to estimate\n the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$\ngiven the pre-processed observations $\\{{\\bf R}_{\\ell},\\bar{\\bf r}_{\\ell}\\}$.\nAn alternating optimization algorithm with efficient initialization is further proposed to achieve a local optimum of the MAP problem.\n\n\n\n\n\\subsection{MAP Problem Formulation}\nAs shown in \\eqref{equ:cascaded_channel_vector}, the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ can be decomposed by the common BS-IRS channel ${\\bf g}_m$ and the IRS-user channel ${\\bf h}_{{\\rm r},k}$ as follows:\n\\begin{equation}\\label{equ:cascaded_channel_vector_2v}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm r},k}) {\\bf g}_m.\n\\end{aligned}\n\\end{equation}\nHowever, the main challenge to estimate the individual ${\\bf h}_{{\\rm r},k}$ and ${\\bf g}_m$ is that in the MAP formulation, prior distributions of ${\\bf g}_m$ and ${\\bf h}_{{\\rm r},k}$ will be needed, but\nit is difficult to obtain individual covariances of ${\\bf g}_m$ and ${\\bf h}_{{\\rm r},k}$ based on the covariance of the cascaded channel.\\footnote{\nOne possible way to estimate the covariance of the cascaded channel ${\\bf C}_m^{(k)}$ is using the\n the maximum likelihood estimator $\\hat{\\bf C}_m^{(k)}=\\frac{1}{J} \\sum_{j=1}^J \\left[ \\hat{\\bf h}_{{\\rm I},k,m}(j) \\hat{\\bf h}_{{\\rm I},k,m}(j)^{\\rm H} \\right]$, where $\\{\\hat{\\bf h}_{{\\rm I},k,m}(j)\\}$ are the estimated historical cascaded channels in the past $j=1,2,\\cdots,J$ transmission frames.\n Note that similar covariances are also required by the LMMSE estimators for the selected on-off channel estimation protocol in \\cite{Liuliang_CE2020TWC} (See equations (72) and (86) in \\cite{Liuliang_CE2020TWC}).\n}\n\nTo address this issue, we consider\na more general auxiliary variable set for the cascaded channel decomposition:\n\\begin{equation}\\label{equ:set_opt}\n\\begin{aligned}[b]\n{\\cal A}=\\left\\{ \\left.\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\} \\right|\n{\\bf h}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}, \\forall k, \\forall m\n\\right\\},\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf H}_{\\rm g}=[{\\bf h}_{{\\rm g},1},\\cdots,{\\bf h}_{{\\rm g},M}]$ is the common-link variable whose $m$-th column is ${\\bf h}_{{\\rm g},m}$, and ${\\bf H}_{\\rm u}=[{\\bf h}_{{\\rm u},1},\\cdots,{\\bf h}_{{\\rm u},K}]$ is the user-specific variable whose $k$-th column is ${\\bf h}_{{\\rm u},k}$.\nOne may verify that $\\{{\\bf G}, {\\bf H}_{\\rm r}\\}\\in {\\cal A}$ according to \\eqref{equ:cascaded_channel_vector_2v}.\nBased on this, we formulate an optimization problem on ${\\bf H}_{\\rm g}$ and ${\\bf H}_{\\rm u}$ using the MAP approach, which is given 
by\n\\begin{equation*\n\\begin{aligned}[b]\n&\n{\\mathcal{P}}{(\\text{A})}\n\\quad \\max_{ {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} } \\;\nf_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n,\n\\end{aligned}\n\\end{equation*}\nwhere the objective function is given by\\footnote{\nOne may substitute different prior setups to $p({\\bf h}_{{\\rm I},k,m} )$ in \\eqref{equ:prior}.\nIn addition, if channel prior knowledge is unavailable, one may simply remove $p({\\bf h}_{{\\rm I},k,m} )$ from \\eqref{equ:obj_f},\nand the optimization becomes the maximum likelihood approach.\n}\n\\begin{equation}\\label{equ:obj_f}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n= \\sum_{m=1}^M \\sum_{k=1}^K \\ln p({\\bf h}_{{\\rm I},k,m} ) \\\\\n&\\quad+\\sum_{\\ell=1}^{L_1} \\ln p({\\tilde{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n+\\sum_{\\ell=L_1+1}^{N} \\ln p( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n,\n\\end{aligned}\n\\end{equation}\nand $\\tilde{\\bf R}_{\\ell}= {{\\bf R}}_{\\ell} {\\bf X}^{-1}$.\nNote that $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ in \\eqref{equ:obj_f} only requires the prior distribution of ${\\bf h}_{{\\rm I},k,m}$, which is given by\n\\begin{equation}\\label{equ:prior}\n\\begin{aligned}[b]\np({\\bf h}_{{\\rm I},k,m} )\\propto e^{\n- {\\bf h}_{{\\rm g},m}^{\\rm H} {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}\n},\n\\end{aligned}\n\\end{equation}\nwhere $\\propto$ denotes equality up to a\nscaling that is independent of the variables (i.e., ${\\bf h}_{{\\rm I},k,m}$ for \\eqref{equ:prior}).\nThe likelihood functions $p({\\tilde{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ and $p( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )$ are given by\n\\begin{align}\np(\\tilde{{\\bf R}}_{\\ell}| {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n&\\propto\ne^{-\\sigma_{\\ell}^{-2} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right) {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\n, \\label{equ:likelihood_R}\n\\\\\np( \\bar{\\bf r}_{\\ell} | {\\bf H}_{\\rm g}, {\\bf H}_{\\rm u} )\n&\\propto\ne^{-\\bar\\sigma_{\\ell}^{-2} \\left\\|\n\\bar{\\bf r}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell} \\right) {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n, \\label{equ:likelihood_r}\n\\end{align}\nwhere $\\sigma_{1}^2=\\frac{1}{2K}\\sigma_0^2$, $\\sigma_{\\ell}^2=\\frac{3}{2K}\\sigma_0^2$ for $\\ell=2,3,\\cdots,L_1$,\nand $\\bar\\sigma_{\\ell}^2=\\frac{3}{2}\\sigma_0^2$ for $\\ell=L_1+1,L_1+2,\\cdots,N$.\nFinally, after dropping all the irrelevant constant terms, the objective function is equivalently written as\n\\begin{equation}\\label{equ:obj_f_eq}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n=-\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right) {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\\\\\n&\\quad-\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-{\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell} \\right) {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n\\\\\n&\\quad-\\sum_{m=1}^M \\sum_{k=1}^K {\\bf h}_{{\\rm g},m}^{\\rm H} {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm 
u},k}) {\\bf h}_{{\\rm g},m} .\n\\end{aligned}\n\\end{equation}\nNote that ${\\mathcal{P}}{(\\text{A})}$ does not have a unique solution but all the solutions are equivalent for the purpose of estimation of the cascaded channel ${\\bf h}_{{\\rm I},k,m}$ for all $m$ and $k$.\n\\begin{lemma}[Equivalence of the solution of ${\\mathcal{P}}{(\\text{A})}$]\\label{eq_decomposite}\nLet ${\\bf H}_{\\rm g}^\\star$ and ${\\bf H}_{\\rm u}^\\star$ be an optimal solution of ${\\mathcal{P}}{(\\text{A})}$, then ${\\rm diag} \\left({\\bf a}\\right){\\bf H}_{\\rm g}^\\star$ and ${\\rm diag} \\left({\\bf a}\\right)^{-1}{\\bf H}_{\\rm u}^\\star$ is also an optimal solution of ${\\mathcal{P}}{(\\text{A})}$\nfor any coefficient ${\\bf a}=[a_1,a_2,\\cdots,a_N]^{\\rm T}$ with $|a_n|\\neq0$ for all $n=1,2,\\cdots,N$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemma1}.\n\\end{IEEEproof}\n\n\nAs a result, there is ambiguity in estimating individual channels from solving ${\\mathcal{P}}{(\\text{A})}$. Nevertheless, the cascaded channel is unique regardless of the coefficient ${\\bf a}$.\n\n\n\n\n\\subsection{ Channel Estimation Algorithm based on Alternative Optimization}\\label{sec:opt_est_AO}\n\nSolving ${\\mathcal{P}}{(\\text{A})}$ is difficult due to the optimization variables being coupled in the likelihood functions \\eqref{equ:likelihood_R} and \\eqref{equ:likelihood_r}.\nFortunately, we will show that ${\\mathcal{P}}{(\\text{A})}$ is actually bi-convex (see Lemma \\ref{fu_convex} and Lemma \\ref{fg_convex}), which can be solved by alternative optimization. In particular, we decompose ${\\mathcal{P}}{(\\text{A})}$ into two convex sub-problems, and\nthe optimal solutions for these sub-problems will be derived accordingly.\n\n\n\\subsubsection{Optimize ${\\bf H}_{\\rm u}$}\nWe investigate the optimization of ${\\bf H}_{\\rm u}$ while ${\\bf H}_{\\rm g}$ are fixed.\nAfter dropping all irrelevant terms, the sub-problem is given by\n\\begin{align*}\n{\\mathcal{P}}{({\\text A}_{{\\rm u}})} \\quad \\min_{ {\\bf H}_{\\rm u} }\\; f_{{\\rm u}}({\\bf H}_{\\rm u})\n,\n\\end{align*}\nwhere\n\\begin{equation}\\label{equ:obj_f_hrk}\n\\begin{aligned}[b]\n&f_{{\\rm u}}({\\bf H}_{\\rm u})\n=\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf R}}_{\\ell}- {\\bf D}_{\\ell} {\\bf H}_{{\\rm u}}\n\\right\\|^2_{\\rm F} }\\\\\n&\\quad+\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-{\\bf D}_{\\ell} {\\bf H}_{{\\rm u}} \\bar{\\bf x}\n\\right\\|^2_2 }\n+\\sum_{k=1}^K {\\bf h}_{{\\rm u},k}^{\\rm H} {\\bf C}_{{\\rm u},k} {\\bf h}_{{\\rm u},k}\n,\n\\end{aligned}\n\\end{equation}\nand\n\\begin{align}\n{\\bf D}_{\\ell}&={\\bf H}_{\\rm g}^{\\rm T} {\\rm diag} \\left({\\bm \\theta}_{\\ell}\\right), \\; {\\text{for}} \\; \\ell=1,2,\\cdots,N,\\\\\n{\\bf C}_{{\\rm u},k}&=\\sum_{m=1}^M {\\rm{diag}}({\\bf h}_{{\\rm g},m})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm g},m}), \\quad \\forall k.\n\\end{align}\n\n\n\\begin{lemma}[Convexity of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$]\\label{fu_convex}\nFor any fixed ${\\bf H}_{\\rm g}$, the objective function of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$ is a convex quadratic function of the vectorization of ${\\bf H}_{\\rm u}$, which is denoted by ${\\rm{vec}}({\\bf H}_{\\rm u})$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemmafu}.\n\\end{IEEEproof}\n\n\nBased on Lemma \\ref{fu_convex}, the optimal solution of ${\\mathcal{P}}{({\\text A}_{{\\rm u}})}$ is the root of the 
first order derivative of $f_{{\\rm u}}({\\rm{vec}}({\\bf H}_{\\rm u}))$, which is given by\n\\begin{equation}\\label{equ:opt_hrk}\n\\begin{aligned}[b]\n{\\rm{vec}}({\\bf H}_{\\rm u}^\\star)\n&= {\\bm \\Lambda}_{{\\rm u}}^{-1} {\\bm \\nu}_{{\\rm u}}\n,\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{align}\n{\\bm \\Lambda}_{{\\rm u}}&=\n {\\bf C}_{{\\rm u}}\n+{\\bf I}_K \\otimes \\left(\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right) \\notag\\\\\n&\\quad + \\left(\\bar{\\bf x}^\\ast \\bar{\\bf x}^{\\rm T}\\right)\\otimes \\left(\\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}} |\\bar{x}_k|^2 {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right)\n, \\label{equ:opt_hrk_e1}\\\\\n{\\bf C}_{{\\rm u}}&={\\rm{blkdiag}}({\\bf C}_{{\\rm u},1},{\\bf C}_{{\\rm u},2},\\cdots,{\\bf C}_{{\\rm u},K}),\n\\end{align}\nand\n\\begin{equation}\\label{equ:opt_hrk_e2}\n\\begin{aligned}[b]\n{\\bm \\nu}_{{\\rm u}}\n&={\\rm{vec}}\\left(\n\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H}\n\\tilde{{\\bf R}}_{\\ell}\n+ \\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n {\\bf D}_{\\ell}^{\\rm H} \\bar{\\bf r}_{\\ell} \\bar{\\bf x}^{\\rm H}\n\\right).\n\\end{aligned}\n\\end{equation}\n\n\n\n\n\\subsubsection{Optimize ${\\bf H}_{{\\rm g}}$}\nSimilarly, the sub-problem of optimizing ${\\bf H}_{{\\rm g}}$ is given by\n\\begin{align*}\n{\\mathcal{P}}{(\\text{A}_{{\\rm g}})} \\; \\min_{ {\\bf H}_{{\\rm g}} }\\; \\sum_{m=1}^M f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})\n,\n\\end{align*}\nwhere\n\\begin{align}\n&f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})\n=\n \\sum_{k=1}^K \\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{r}}_{\\ell,m,k}-{\\bf h}_{{\\rm g},m}^{\\rm T} {\\bf b}_{\\ell,k}\n\\right\\|^2 \\notag\\\\\n&+ \\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{r}_{\\ell,m}-\\sum_{k=1}^K \\bar{x}_k {\\bf h}_{{\\rm g},m}^{\\rm T} {\\bf b}_{\\ell,k}\n\\right\\|^2+{\\bf h}_{{\\rm g},m}^{\\rm H} {\\bf C}_{{\\rm g},m} {\\bf h}_{{\\rm g},m}\n, \\label{equ:obj_f_gm}\\\\\n&\\quad{\\bf b}_{\\ell,k}={\\rm diag}\\left({\\bm \\theta}_{\\ell}\\right) {\\bf h}_{{\\rm u},k}, \\; {\\text{for}} \\; \\ell=1,2,\\cdots,N,\\\\\n&\\quad{\\bf C}_{{\\rm g},m}=\\sum_{k=1}^K {\\rm{diag}}({\\bf h}_{{\\rm u},k})^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\rm{diag}}({\\bf h}_{{\\rm u},k}), \\quad \\forall m,\n\\end{align}\n$\\tilde{{r}}_{\\ell,m,k}$ denotes the entry in the $m$-th row and $k$-th column of $\\tilde{\\bf R}_{\\ell}$,\nand $\\bar{r}_{\\ell,m}$ denotes the $m$-th entry in $\\bar{\\bf r}_{\\ell}$.\n\n\n\\begin{lemma}[Convexity of ${\\mathcal{P}}{({\\text A}_{{\\rm g}})}$]\\label{fg_convex}\nFor any fixed ${\\bf H}_{\\rm u}$, the objective function of ${\\mathcal{P}}{({\\text A}_{{\\rm g}})}$ is a convex quadratic function of $\\{{\\bf h}_{{\\rm g},1},{\\bf h}_{{\\rm g},2},\\cdots,{\\bf h}_{{\\rm g},M}\\}$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nSee Appendix \\ref{proof_lemmafg}.\n\\end{IEEEproof}\n\n\n\nBased on lemma \\ref{fg_convex}, the optimal ${\\bf h}_{{\\rm g},m}$ is the\nroot of the first order derivative of $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$, which is given by\n\\begin{equation}\\label{equ:opt_gm}\n\\begin{aligned}[b]\n{\\bf h}_{{\\rm g},m}^\\star\n&= {\\bm \\Lambda}_{{\\rm g},m}^{-1} {\\bm \\nu}_{{\\rm g},m}\n,\n\\end{aligned}\n\\end{equation}\nfor $m=1,2,\\cdots,M$, where\n\\begin{equation}\\label{equ:opt_gm_e1}\n\\begin{aligned}[b]\n{\\bm \\Lambda}_{{\\rm g},m}&={\\bf C}_{{\\rm 
g},m}^{-1}\n+\\sum_{k=1}^K\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast {\\bf b}_{\\ell,k}^{\\rm T}\\\\\n&\\;+ \\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)^{\\rm H}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)\n,\n\\end{aligned}\n\\end{equation}\nand\n\\begin{equation}\\label{equ:opt_gm_e2}\n\\begin{aligned}[b]\n&{\\bm \\nu}_{{\\rm g},m}=\\sum_{k=1}^K\\sum_{{\\ell}=1}^{L_1} \\frac{\\tilde{{r}}_{\\ell,m,k}}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast\n+ \\sum_{\\ell=L_1+1}^{N}\\frac{\\bar{r}_{\\ell,m}}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k^{\\ast} {\\bf b}_{\\ell,k}^{\\ast}\\right)\n.\n\\end{aligned}\n\\end{equation}\n\n\n\\subsubsection{Initial Estimation on ${\\bf H}_{{\\rm g}}$}\n The quality of the solution obtained by the alternative optimization depends heavily on the initial point.\n Here, we propose\nan efficient estimator for ${\\bf H}_{{\\rm g}}$ to initialize the proposed alternative optimization algorithm.\nIn particular, we construct a special $\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\}$ pair whose elements are given by\n\\begin{align}\n{\\bf h}_{{\\rm g},m}&= {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x}) {\\bf g}_m, \\label{equ:hg1}\\\\\n{\\bf h}_{{\\rm u},k}&={\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})^{-1}{\\bf h}_{{\\rm r},k}. \\label{equ:hu1}\n\\end{align}\nSubstituting the above ${\\bf H}_{\\rm g}$ into \\eqref{equ:rbar}, we have\n\\begin{equation}\\label{equ:rbar_v2}\n\\begin{aligned}[b]\n\\bar{\\bf r}_{\\ell}\n&={\\bf H}_{\\rm g}^{\\rm T} {\\bm \\theta}_\\ell\n+\\bar{\\bf z}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nfor $\\ell=1,2,\\cdots,N$.\nTherefore, ${\\bf H}_{\\rm g}$ can be initialized by the least squares (LS) estimator as follows:\n\\begin{equation}\\label{equ:est_Hc}\n\\begin{aligned}[b]\n\\hat{\\bf H}_{\\rm g}=\\left( \\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n\\left[{\\bm \\theta}_1,\\cdots,{\\bm \\theta}_N \\right]^{-1} \\right)^{\\rm T}.\n\\end{aligned}\n\\end{equation}\nSince $\\hat{\\bf H}_{\\rm g}$ in \\eqref{equ:est_Hc} is unbiased and it has exploited most of the available observations in all the $N+1$ training timeslots,\nit will give a good initial point.\n\n\n\\subsection{The Overall Proposed Algorithm}\nThe overall proposed cascaded IRS channel estimation algorithm is summarized in Algorithm \\ref{alg:P1}.\nThe convergence of the proposed alternating optimization algorithm is analyzed in Lemma \\ref{convergence}.\n\n\\begin{algorithm}[!ht]\n\\caption{ Proposed alternating optimization algorithm for the cascaded IRS channel estimation}\n\\label{alg:P1}\n\\begin{algorithmic}[1]\n\\STATE {Initialize ${\\bf H}_{{\\rm g}}$ by \\eqref{equ:est_Hc}.}\\\\\n\\REPEAT\n\\STATE Update ${\\bf H}_{{\\rm u}}$ by \\eqref{equ:opt_hrk};\n\\STATE Update ${\\bf h}_{{\\rm g},m}$ by \\eqref{equ:opt_gm} for all $m$;\n\\UNTIL{ $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ in \\eqref{equ:obj_f_eq} converges;}\\\\\n\\STATE {Output the cascaded channel ${\\hat{\\bf h}}_{{\\rm I},k,m}={\\rm{diag}}({\\bf h}_{{\\rm u},k}) {\\bf h}_{{\\rm g},m}$ for all $k$ and $m$.}\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}[Convergence of the Proposed Alternating Optimization Algorithm]\\label{convergence}\nThe objective function $f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})$ is non-increasing in every step when ${\\bf H}_{{\\rm u}}$ or ${\\bf H}_{{\\rm g}}$ are updated, and 
the optimization iterations in \\eqref{equ:opt_hrk} and \\eqref{equ:opt_gm} converge to a local optimum\nof ${\\mathcal{P}}{(\\text{A})}$.\n\\end{lemma}\n\n\\begin{IEEEproof}\nAs shown in Section \\ref{sec:opt_est_AO}, the original problem is decomposed into two unconstrained minimization problems whose objectives are convex quadratic functions, and each subproblem has a unique optimal solution, which is derived in \\eqref{equ:opt_hrk} and \\eqref{equ:opt_gm}.\nTherefore, the whole alternating optimization algorithm will converge to a local optimum of the original problem ${\\mathcal{P}}{(\\text{A})}$ \\cite{BCD}.\n\\end{IEEEproof}\n\n\n\\emph{Remark}:\nThe complexity for updating ${\\bf H}_{{\\rm u}}$ by \\eqref{equ:opt_hrk} is ${\\cal O}(K^3 N^3+KMN^2)$. The complexity for updating ${\\bf h}_{{\\rm g},m}$ by \\eqref{equ:opt_gm} is ${\\cal O}(KN^3)$, and thus the complexity to update ${\\bf H}_{{\\rm g}}$ is ${\\cal O}(KMN^3)$. Therefore, the overall complexity of the solution is ${\\cal O}(IK^3 N^3+IKMN^3)$, where $I$ denotes the number of iterations of the alternating optimization algorithm.\\footnote{We will show in the simulations that the algorithm converges quickly, in about two or three iterations. In addition, the ${\\cal O}(N^3)$ complexity comes from the matrix inversion operation. However, since the matrices to be inverted are all Hermitian positive semi-definite, the inversions can be implemented very efficiently by advanced algorithms such as the Cholesky-decomposition-based algorithm \\cite{matrix_inverse}. }\n\n\n\\section{Training Phase Shifting Configuration}\\label{sec:Phaseshift_cofig}\n\\subsection{ Motivation of the Phase Shifting Configuration}\nThe IRS steers the incident signal toward different directions by configuring different phase shifting vectors ${\\bm \\theta}_{\\ell}$, as illustrated in {\\figurename~\\ref{theta_opt}}.\nAccording to the protocol, the cascaded channel is scanned over $N$ spatial directions in $N$ training timeslots, and a proper design of $\\{ {\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N \\}$ ensures that the received signals capture the channel information in all directions, such that good channel estimation performance can be achieved.\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.9\\columnwidth]{phase_shift_opt.eps}\n\\caption{Illustration of the impact of different phase shiftings.}\n\\label{theta_opt}\n\\end{figure}\n\n\nIn the SU-MISO scenario, the overall received measurements after removing the pilots and the BS-user channels are given by\n\\begin{equation}\\label{equ:y_su}\n\\begin{aligned}[b]\n{\\bf Y}_{\\rm{SU}}&= {\\bf H}_{{\\rm I},1} {\\bm \\Phi}\n+{\\bf Z}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bm \\Phi}=[{\\bm \\theta}_1,{\\bm \\theta}_2,\\cdots,{\\bm \\theta}_N]$. 
The LS estimator may be adopted as follows \\cite{Zhouzy2020decompositionCE}:\n\\begin{equation}\\label{equ:hatH_su}\n\\begin{aligned}[b]\n\\hat{\\bf H}_{{\\rm I},1}&= {\\bf Y}_{\\rm{SU}} {\\bm \\Phi}^{-1}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\hat{\\bf H}_{{\\rm I},1}$ denotes the estimated cascaded channel.\nThen ${\\bm \\Phi}$ is optimized by minimizing the MSE:\n\\begin{align*}\n\\min_{ {\\bm \\Phi} }\\; & {\\rm{tr}} \\left(\\left({\\bm \\Phi} {\\bm \\Phi}^{\\rm H}\\right)^{-1}\\right)\\\\\n{\\bf s.t.} \\;\n& |{\\bm \\Phi}_{i,j}|=1, \\quad \\forall i,j=1,2,\\cdots,N\n.\n\\end{align*}\nIt is proved in \\cite{Zhouzy2020decompositionCE} that the optimal value of the MSE is $1$, which can be achieved by the DFT matrix such that ${\\bm \\Phi}={\\bf F}$, where\n\\begin{equation}\\label{equ:DFT_F}\n\\begin{aligned}[b]\n{\\bf F}\n=\n\\begin{bmatrix}\n 1 & 1 & 1 & \\cdots & 1 \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{1}{N}} & e^{-\\jmath 2 \\pi \\frac{2}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{N-1}{N}} \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{2}{N}} & e^{-\\jmath 2 \\pi \\frac{4}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{2(N-1)}{N}} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 1 & e^{-\\jmath 2 \\pi \\frac{N-1}{N}} & e^{-\\jmath 2 \\pi \\frac{2(N-1)}{N}} & \\cdots & e^{-\\jmath 2 \\pi \\frac{(N-1)^2}{N}}\n\\end{bmatrix}\n.\n\\end{aligned}\n\\end{equation}\n\nFor the protocol extended from the SU case \\cite{Zhouzy2020decompositionCE,Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT} shown in {\\figurename~\\ref{protocol_tran}}, the transmit signals from users are the same in different timeslots.\nTherefore, the columns of ${\\bf F}$ may be permuted into any order, and the MSE will remain the same.\nHowever, in our proposed protocol in {\\figurename~\\ref{protocol:d}}, the transmit signals are different in stage I and stage II.\nIn particular, the received signals in stage I contribute to the estimation of both the common-link variable and the user-specific variables in the cascaded channels, while the signals in stage II contribute to the common-link variable only.\nHence, the phase shifting vectors $ {\\bm \\theta}_1, {\\bm \\theta}_2,\\cdots,{\\bm \\theta}_{L_1} $\nin stage I require additional design.\n\n\n\n\n\n\n\n\\subsection{Optimization Formulation of the Phase Shifting Configuration for MU-MISO IRS Systems}\n\nThe initial estimation of the common-link variable $\\hat{\\bf H}_{\\rm g}$ in \\eqref{equ:est_Hc} is almost the same as the estimator for the SU case shown in \\eqref{equ:hatH_su}.\nTherefore, we still adopt the DFT-based phase shifting configuration for all the $N$ timeslots. In addition, an extra steering direction ${\\bm{\\vartheta}} \\in {\\mathbb C}^{N \\times 1}$ is introduced for a more flexible design:\n\\begin{equation}\\label{equ:prop_phi}\n\\begin{aligned}[b]\n{\\bm \\Phi}&= {\\rm{diag}}({\\bm{\\vartheta}}){\\bf F}\n.\n\\end{aligned}\n\\end{equation}\nDenote by $\\vartheta_n$ the $n$-th element in ${\\bm{\\vartheta}}$. We have $|\\vartheta_n|=1$ for all $n=1,2,\\cdots,N$.\nOne can see that the value of ${\\rm{tr}} \\left(\\left({\\bm \\Phi} {\\bm \\Phi}^{\\rm H}\\right)^{-1}\\right)$ is kept at $1$ for any ${\\bm{\\vartheta}}$.\nDenote by ${\\bf f}_{\\ell}$ the $\\ell$-th column of $\\bf F$. 
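\nSince ${\\bf F} {\\bf F}^{\\rm H}=N{\\bf I}_N$ and ${\\rm{diag}}({\\bm{\\vartheta}})$ is unitary for unit-modulus ${\\bm{\\vartheta}}$, the MSE metric indeed stays at $1$; the brief sketch below (with an arbitrary randomly drawn unit-modulus ${\\bm{\\vartheta}}$; illustrative only) verifies this numerically.\n\\begin{verbatim}\nimport numpy as np\n\nN = 32\nn = np.arange(N)\nF = np.exp(-2j*np.pi*np.outer(n, n)/N)        # DFT training matrix defined above\nvartheta = np.exp(2j*np.pi*np.random.default_rng(3).random(N))  # unit-modulus steering direction\nPhi = np.diag(vartheta) @ F\n\nprint(np.trace(np.linalg.inv(Phi @ Phi.conj().T)).real)   # -> 1.0 for any such vartheta\n\\end{verbatim}\n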
The training phase shifting vector in the $\\ell$-th timeslot is given by\n\\begin{equation}\\label{equ:prop_theta_ell}\n\\begin{aligned}[b]\n{\\bm \\theta}_\\ell&= {\\rm{diag}}({\\bm{\\vartheta}}) {\\bf f}_{\\ell}\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\ell=1,2,\\cdots,N$.\nThe remaining task is to design ${\\bm{\\vartheta}}$.\n\nHowever, it is difficult to design a straightforward objective function to optimize ${\\bm{\\vartheta}}$ since the MSE of the estimated ${\\bf H}_{\\rm u}$ by the proposed algorithm is complicated.\nConsidering that we have the knowledge of the covariances ${\\bf C}_m^{(k)}$ of the cascaded channels for all $m$ and $k$,\nthe average received power of the effective IRS channel from user $k$ to the $m$-th BS antenna in timeslot $\\ell$ can be denoted by a function of ${\\bm{\\vartheta}}$, ${\\bf f}_{\\ell}$ and ${\\bf C}_m^{(k)}$:\n\\begin{equation}\\label{equ:channel_gain}\n\\begin{aligned}[b]\nQ_{\\ell,k,m} &= {\\mathbb E} \\left[ | {\\bf g}_m^{\\rm T} {\\rm{diag}}({\\bm \\theta}_\\ell) {\\bf h}_{{\\rm r},k} |^2 \\right]\\\\\n &= {\\mathbb E} \\left[ | {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_\\ell |^2 \\right]\\\\\n&= {\\mathbb E} \\left[ {\\bm \\theta}_\\ell^{\\rm H}\n\\left( {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right)^{\\ast}\n{\\bm \\theta}_\\ell \\right]\\\\\n&= {\\bm \\theta}_\\ell^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\bm \\theta}_\\ell\\\\\n&={\\bm{\\vartheta}}^{\\rm H} {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell}){\\bm{\\vartheta}}\n.\n\\end{aligned}\n\\end{equation}\nSince ${\\bf f}_{\\ell}$ and ${\\bf C}_m^{(k)}$ are known variables, the summation of $Q_{\\ell,k,m}$ over antennas $m=1,2,\\cdots,M$, users $k=1,2,\\cdots,K$ and timeslots $\\ell=1,2,\\cdots,L_1$ is a function of ${\\bm{\\vartheta}}$, which is given by\n\\begin{equation}\\label{equ:channel_gain_all}\n\\begin{aligned}[b]\nf_{\\rm B}({\\bm{\\vartheta}})&=\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M Q_{\\ell,k,m} \\\\\n&=\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n{\\bm{\\vartheta}}^{\\rm H} {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell}){\\bm{\\vartheta}} \\\\\n&={\\bm{\\vartheta}}^{\\rm H} \\left(\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell})\\right){\\bm{\\vartheta}} .\n\\end{aligned}\n\\end{equation}\nWe define\n\\begin{equation}\n\\begin{aligned}[b]\n{\\bf E}=\\left(\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K \\sum_{m=1}^M\n {\\rm{diag}}({\\bf f}_{\\ell})^{\\rm H}\n\\left( {\\bf C}_m^{(k)} \\right)^{\\ast}\n{\\rm{diag}}({\\bf f}_{\\ell})\\right).\n\\end{aligned}\n\\end{equation}\nThe optimization problem on ${\\bm{\\vartheta}}$ is formulated to maximize $f_{\\rm B}({\\bm{\\vartheta}})$:\n\\begin{equation*\n\\begin{aligned}[b]\n{\\mathcal{P}}{(\\text{B})}\\quad \\max_{ {\\bm{\\vartheta}} } \\; &\nf_{\\rm B}({\\bm{\\vartheta}})={\\bm{\\vartheta}}^{\\rm H} {\\bf E} {\\bm{\\vartheta}}\\\\\n{\\bf s.t.} \\;\n& |\\vartheta_n|=1, \\quad \\forall n=1,2,\\cdots,N.\n\\end{aligned}\n\\end{equation*}\n\n\n\\subsection{Solution for ${\\mathcal{P}}{(\\text{B})}$}\n${\\mathcal{P}}{(\\text{B})}$ is a non-convex problem due to the maximizing of a convex objective function and the unit-modulus constraints.\nWe solve ${\\mathcal{P}}{(\\text{B})}$ by the successive convex approximation (SCA) 
algorithm.\nIn particular, a surrogate problem, shown as follows, is iteratively solved:\n\\begin{equation*\n\\begin{aligned}[b]\n{\\mathcal{P}}{({\\text{B}}_i)}\\quad \\max_{ {\\bm{\\vartheta}} } \\; &\n{f}_{\\rm B}^{(i)} ({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})\\\\\n{\\bf s.t.} \\;\n& |\\vartheta_n|=1, \\quad \\forall n=1,2,\\cdots,N,\n\\end{aligned}\n\\end{equation*}\nwhere $i$ is the iteration index, $\\bar{\\bm{\\vartheta}}$ is the solution of the surrogate problem in the $(i-1)$-th iteration, and ${f}_{\\rm B}^{(i)}({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})$ is the first-order approximation of ${f}_{\\rm B}({\\bm{\\vartheta}})$ at $\\bar{\\bm{\\vartheta}}$:\n\\begin{equation}\\label{equ:surrogate}\n\\begin{aligned}[b]\n{f}_{\\rm B}^{(i)} ({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})\n&= 2 {\\rm Re} \\left\\{{\\bar{\\bm \\vartheta}}^{\\rm H} {\\bf E} {\\bm \\vartheta}\\right\\}-{\\bar{\\bm \\vartheta}}^{\\rm H} {\\bf E} {\\bar{\\bm \\vartheta}}.\n\\end{aligned}\n\\end{equation}\nOne can see that ${f}_{\\rm B}^{(i)}({\\bm{\\vartheta}},\\bar{\\bm{\\vartheta}})$ is a linear function of ${\\bm{\\vartheta}}$, and thus the optimal solution of ${\\mathcal{P}}{({\\text{B}}_i)}$ is given by\n\\begin{equation}\\label{equ:opt_theta}\n\\begin{aligned}[b]\n{\\bm{\\vartheta}}=e^{\\jmath \\angle ({\\bf E}{\\bar{\\bm \\vartheta}})}.\n\\end{aligned}\n\\end{equation}\nThe proof on the convergence of the SCA algorithm can be referred to in \\cite{SCA}.\n\n\n\n\\section{Numerical Examples}\\label{simulation}\n\\subsection{Simulation Setups}\nThis section evaluates the performance of the proposed cascaded channel estimation algorithm.\nIn particular, we consider the indoor femtocell network illustrated in {\\figurename~\\ref{indoor_8user}} in which $K=8$ users are randomly distributed in a 5 m $\\times$ 5 m square area and are served by one BS and one IRS.\nWe generate the channel coefficients according to the 3GPP ray-tracing model \\cite[Section 7.5]{3GPP} using the model parameters for the Indoor-Office scenario \\cite[Table 7.5-6]{3GPP}.\nThe system parameters for the simulations are summarized in Table \\ref{table_sim}, in which the path-loss is set according to the Indoor-Office pathloss model in \\cite[Table 7.4.1-1]{3GPP}.\n\n\n\\begin{figure}\n[!ht]\n\\centering\n\\includegraphics[width=.8\\columnwidth]{simulation_scena.eps}\n\\caption{The simulated IRS-aided $K$-user MISO communication scenario comprising of one $M$-antenna BS and one $N$-element IRS.}\n\\label{indoor_8user}\n\\end{figure}\n\n\\begin{table}[!ht]\n\\footnotesize\n\\renewcommand{\\arraystretch}{1.3}\n\\caption{Simulation Parameters}\n\\label{tablepm}\n\\centering\n\\begin{tabular}{c|c}\n\\hline\nParameters & Values \\\\\n\\hline\nCarrier frequency & 2.4 GHz\\\\\n\\hline\n Transmission bandwidth & $200$ kHz\\\\\n\\hline\nNoise power spectral density & $-170$ dBm\/Hz\\\\\n\\hline\nPath-loss for BS-IRS and IRS-user links (dB)& $40 + 17.3 \\lg d$\\\\\n\\hline\nPath-loss for BS-user link (dB)& $30 + 31.9 \\lg d+\\zeta$\\\\\n\\hline\nPenetration loss $\\zeta$ due to obstacle & 20 dB \\\\\n\\hline\n Reflection efficiency of IRS & 0.8\\\\\n\\hline\nHeight of users & 1.5 m\\\\\n\\hline\nLocation of BS & (0, 0, 3m)\\\\\n\\hline\nLocation of IRS & (0, 10m, 3m)\\\\\n\\hline\n\\end{tabular}\n\\label{table_sim}\n\\end{table}\n\n\n\n\nIn the simulation, we consider two baseline schemes to benchmark the proposed scheme.\n\\begin{itemize}\n\\item {\\bf Baseline 1 [LMMSE using the protocol in {\\figurename~\\ref{protocol:b}} 
\\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT}]}: This curve illustrates the performance of the LMMSE estimator proposed in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT}. For simplicity, we assume that the BS-user channels have already been perfectly estimated by this scheme.\n The protocol illustrated in {\\figurename~\\ref{protocol:b}} is adopted.\n In additon, for fair comparison, the number of training timeslots is set as $\\lceil \\frac{N-1}{K}\\rceil+\\lceil \\frac{N}{M}\\rceil$ such that the total pilot overhead is just slightly higher than that of the proposed scheme.\n\\item {\\bf Baseline 2 [Bilinear alternating least squares (BALS) algorithm \\cite{Araujo2021JSAC_CE_PARAFAC}]}:\n In \\cite{Araujo2021JSAC_CE_PARAFAC}, an iterative algorithm is proposed to estimate ${\\bf G}$ and ${\\bf H}_{\\rm r}$ by utilizing the PARAFAC decomposition, which adopts the same channel estimation protocol as Baseline 1. Note that due to the ambiguity issue (see Lemma \\ref{eq_decomposite} or \\cite[Section IV]{Araujo2021JSAC_CE_PARAFAC}), the BALS also cannot exactly reconstruct ${\\bf G}$ and ${\\bf H}_{\\rm r}$, and the actually estimated variable is still the cascaded channel.\n\\item {\\bf Baseline 3 [MAP modification for the BALS in \\cite{Araujo2021JSAC_CE_PARAFAC}]}: In this baseline, we make a simple modification based on the MAP optimization in this paper to further enhance the performance of the BALS algorithm in \\cite{Araujo2021JSAC_CE_PARAFAC} by exploiting the prior knowledge of the cascaded channels.\n\\item {\\bf Baseline 4 [Selected On-off protocol \\cite{Liuliang_CE2020TWC}]}: This curve illustrates the performance of the estimation algorithm in \\cite{Liuliang_CE2020TWC} based on the selected on-off channel estimation protocol shown in {\\figurename~\\ref{protocol:c}}. We assume that the covariances of ${\\bf h}_{{\\rm u},k} ={\\rm diag}({\\bf h}_{{\\rm r},1})^{-1}{\\bf h}_{{\\rm r},k}$ for all $k=1,2,\\cdots,8$ are available, and the LMMSE estimator in \\cite[Section V]{Liuliang_CE2020TWC} is adopted. In addition, we always select user $1$ as the reference user.\n\\end{itemize}\nNote that the proposed scheme and Baselines 3 and 4 have the same pilot overhead, i.e., $K+N+\\lceil \\frac{N}{M}\\rceil(K-1)$.\nWe focus on the evaluation of the performance of cascaded channel estimation, and use the normalized MSE (NMSE) as the evaluation metric, which is given by\n\\begin{equation}\n{\\rm{NMSE}}=\\frac{\\sum_{k=1}^K\\sum_{m=1}^M{\\mathbb E} \\left[\\left|{\\bf h}_{{\\rm I},k,m}-{\\hat{\\bf h}}_{{\\rm I},k,m}\\right|_2^2\\right]}\n{\\sum_{k=1}^K\\sum_{m=1}^M {\\mathbb E} \\left[\\left|{\\bf h}_{{\\rm I},k,m}\\right|_2^2\\right]}\n.\n\\end{equation}\nIn addition, based on the proposed protocol, the BS-user channel estimation can be independent of the cascaded channel estimation by applying the signal pre-processing as shown in Section \\ref{proposed_protocol}. 
This signal pre-processing operation will provide a theoretical $3$ dB gain for the BS-user channel estimation compared to the conventional solution, which shuts down the IRS to estimate the BS-user channel (see Appendix \\ref{app_Estimate_Hd}), and thus we do not compare the estimation performance for the BS-user channel in the simulations.\n\n\n\\begin{figure}\n[!t]\n \\centering\n \\subfigure[$P_{\\rm T}$ vs NMSE]{\n \\label{nmse_vs_PT:a}\n \\includegraphics[width=1\\columnwidth]{nmse_vs_pt_v2.eps}}\n \n \\subfigure[Convergence behavior when $P_{\\rm T}=15$ dBm]{\n \\label{nmse_vs_PT:b}\n \\includegraphics[width=1\\columnwidth]{converge_v2.eps}}\n \\caption{The NMSE versus transmit power when $M=8$ and $N=32$.}\n \\label{nmse_vs_PT}\n\\end{figure}\n\n\\subsection{Simulation Results}\n{\\figurename~\\ref{nmse_vs_PT:a}} illustrates the NMSE of different schemes with respect to the transmit power of users, in which the BS adopts a $4\\times2$ uniform planar array (UPA), and the IRS adopts an $8\\times4$ UPA. Thus, we have $M=8$ and $N=32$.\nThe BALS algorithm in \\cite{Araujo2021JSAC_CE_PARAFAC} achieves the worst performance since it does not exploit the channel prior knowledge.\nThe performance of the LMMSE using the traditional protocol in \\cite{Kundu2021OJCSLMMSE_DFTGOOD} and \\cite{Alwazani2020OJCSLMMSE_DFT} does not vary with the increase of $P_{\\rm T}$ since the main bottleneck is that the number of training timeslots is smaller than $N$.\nMoreover, the performance of BALS-MAP is better than that of LMMSE at a low SNR, but worsens as $P_{\\rm T}$ increases since it will reduce to BALS when $P_{\\rm T}$ is infinite.\nBased on the above observations, we can draw a conclusion that the traditional channel estimation protocol shown in {\\figurename~\\ref{protocol:b}} is not effective for exploiting the common-link structure, and thus we do not consider Baselines 2 and 3 in the remaining simulations.\nOn the other hand, it is seen that the proposed protocol with the optimization-based channel estimation algorithm achieves significant gain compared to all the baselines.\nIn addition, the phase shifting configuration by solving ${\\mathcal{P}}{(\\text{B})} $ achieves a more than 3 dB gain by steering the reflected signals in Stage I to the direction with a higher SNR compared to the random configuration baseline.\nNext, in {\\figurename~\\ref{nmse_vs_PT:b}}, we fix the transmit power $P_{\\rm T}$ at 15 dBm and show the convergence behaviors of the proposed Algorithm \\ref{alg:P1} for ${\\mathcal{P}}{(\\text{A})}$.\nOne can see that the proposed algorithm converges quickly.\nNote that although the solution without phase-shift optimization achieves a higher objective value, this does not imply it will have better performance since the objective functions of the two curves are different due to them adopting different ${\\bm{\\vartheta}}$ for the training phase shifting configuration.\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{change_M_v2.eps}\n\\caption{NMSE versus $M$, when $P_{\\rm T}=20$ dBm.}\n\\label{M_vs_NMSE}\n\\end{figure}\n\nIn {\\figurename~\\ref{M_vs_NMSE}}, we simulate the performance of different BS antenna numbers $M$ when the BS adopts the uniform linear array and the IRS is still $8\\times4$ UPA. It is seen that the NMSE of all curves increases as $M$ increases since the ratio of the channel unknowns to the training observations decreases as $M$ increases. 
Moreover, the performance gain achieved by the phase shifting configuration increases as $M$ increases. This is because when $M$ increases, the number of training timeslots in Stage I of the proposed protocol decreases, and the probability that the random configuration scheme steers to the highest SNR direction becomes lower.\n\n\n\n\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{change_N_v2.eps}\n\\caption{NMSE versus $N$, when $P_{\\rm T}=20$ dBm.}\n\\label{N_vs_NMSE}\n\\end{figure}\n\n\\begin{figure}\n[!t]\n\\centering\n\\includegraphics[width=1\\columnwidth]{CCDF_v2.eps}\n\\caption{The CCDFs for random user locations.}\n\\label{CCDF_location}\n\\end{figure}\n\nIn {\\figurename~\\ref{N_vs_NMSE}}, we simulate the NMSE of different schemes for different IRS sizes $N$. The BS is $4\\times2$ UPA, and the IRS is $N_1 \\times 8$ UPA in which $N_1$ increases from $4$ to $10$. Note that as $N$ increases, the pilot overhead increases according to the proposed protocol but the ratio of the channel unknowns to the training observations is almost fixed.\nIt is seen that the NMSE of Baselines 1 and 4 varies only a little, while the NMSE of the proposed scheme decreases as $N$ increases. This is because the channel becomes more correlated as $N$ becomes large, and the proposed scheme has a better capability of exploiting the channel prior knowledge.\n\nFinally, we investigate the impact of user locations on the estimation performance.\nIn particular, we fix $N=8 \\times 4$ and $M=4 \\times 2$, and generate $100$ snapshots for random user locations. For each snapshot, we further generate $1000$ channel realizations with independent small-scale fading to reduce the impact of other system parameters.\n{\\figurename~\\ref{CCDF_location}} plots the complementary cumulative distribution functions (CCDFs) of the NMSE for different snapshots.\nOne can see that the performance gains of the proposed scheme are irrespective of user locations.\nIn addition, we further increase $P_{\\rm T}=40$ dBm for Baseline 4 (i.e., the selected-on-off-protocol-based scheme \\cite{Liuliang_CE2020TWC}) such that it achieves a similar average NMSE to the proposed scheme with $P_{\\rm T}=20$ dBm. However, Baseline 4 achieves a much worse outage performance.\nThis is because the performance of the selected-on-off-protocol-based scheme \\cite{Liuliang_CE2020TWC} highly depends on the channel quality of the reference user, while the proposed scheme is much more robust since it does not require selecting one reference user.\n\n\n\n\\section{Conclusion}\\label{conclusion}\nIn this paper, we proposed a novel always-ON channel estimation protocol for uplink cascaded channel\nestimation in IRS-assisted MU-MISO systems.\nIn contrast to the existing schemes, the pilot overhead required by the proposed protocol is greatly reduced\nby exploiting the common-link structure.\nBased on the protocol, we formulated an optimization-based joint channel estimation problem that utilizes the combined\nstatistical information of the cascaded channels, and then we proposed an alternating optimization algorithm to solve the problem with the local optimum solution.\nIn addition, we optimized the phase shifting configuration in the proposed protocol, which may further enhance the channel estimation performance.\nThe simulation results demonstrated that the proposed protocol using the optimization based joint channel estimation algorithm achieves a more than $15$ dB gain compared to the benchmark. 
In addition, the proposed optimized phase shifting configuration achieves a more than $3$ dB gain compared to the random configuration scheme.\n\n\n\n\\appendices\n\\section{Proof of Lemma \\ref{lemma0}}\\label{proof_lemma0}\nDefine the virtual reference channel by ${\\bf H}_{\\rm v}={\\bf G}^{\\rm T} {\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})$. Based on \\eqref{equ:rbar}, we have\n\\begin{equation}\\label{equ:app0_Hv}\n\\begin{aligned}[b]\n\\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n&={\\bf H}_{\\rm v} {\\bm \\Phi}\n.\n\\end{aligned}\n\\end{equation}\nThus ${\\bf H}_{\\rm v}$ can be perfectly estimated by:\n\\begin{equation}\\label{equ:app0_Hv2}\n\\begin{aligned}[b]\n{\\bf H}_{\\rm v}= \\left[\\bar{\\bf r}_1,\\cdots,\\bar{\\bf r}_N \\right]\n{\\bm \\Phi}^{-1} .\n\\end{aligned}\n\\end{equation}\nWe further define the $K$ relative channels by ${\\bf h}_{{\\rm A},k}={\\rm diag} ({\\bf H}_{\\rm r}\\bar{\\bf x})^{-1} {\\bf h}_{{\\rm r},k}$, and ${\\bf H}_{\\rm A}=[{\\bf h}_{{\\rm A},1},{\\bf h}_{{\\rm A},2},\\cdots,{\\bf h}_{{\\rm A},K}]$.\nThen the cascaded channels become ${\\bf H}_{{\\rm I},k}= {\\bf H}_{\\rm v}{\\rm diag}({\\bf h}_{{\\rm A},k})$. Therefore, the remaining task is to perfectly estimate ${\\bf H}_{\\rm A}$.\n\nBased on the assumption on $\\bf G$ and ${\\bf H}_{\\rm r}$, we have\n${\\bf H}_{\\rm v}={\\bf F}_{\\rm B} {\\ddot{\\bf G}}^{\\rm T} {\\bf F}_{\\rm R}^{\\rm T} {\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x})$ where ${\\ddot{\\bf H}}_{\\rm r}=[{\\ddot{\\bf h}}_{{\\rm r},1},\\cdots,{\\ddot{\\bf h}}_{{\\rm r},K}]$.\nDefine ${\\bf V}=[{\\bf v}_{1},{\\bf v}_{2},\\cdots,{\\bf v}_{M}]$ which is given by\n\\begin{equation}\\label{equ:app0_V}\n\\begin{aligned}[b]\n{\\bf v}_m={\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x}){\\bf F}_{\\rm R} {\\ddot{\\bf g}}_m,\n\\end{aligned}\n\\end{equation}\nand thus ${\\bf H}_{\\rm v}={\\bf F}_{\\rm B} {\\bf V}^{\\rm T}$.\nUsing the independence of ${\\ddot{\\bf g}}_m$ and ${\\ddot{\\bf h}}_{{\\rm r},k}$, we have ${\\mathbb E}[{\\bf v}_i {\\bf v}_j^{\\rm H}]={\\bm 0}$. 
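To spell out this step: if, in addition, the angular-domain vectors ${\\ddot{\\bf g}}_m$ are zero-mean and mutually independent across $m$ (an assumption consistent with, though not restated in, the channel model above), then conditioning on ${\\ddot{\\bf H}}_{\\rm r}$ gives, for $i \\neq j$,
\\begin{equation}
\\begin{aligned}[b]
{\\mathbb E}[{\\bf v}_i {\\bf v}_j^{\\rm H}]={\\mathbb E}\\left[ {\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x}) {\\bf F}_{\\rm R}\\, {\\mathbb E}[{\\ddot{\\bf g}}_i]\\, {\\mathbb E}[{\\ddot{\\bf g}}_j]^{\\rm H}\\, {\\bf F}_{\\rm R}^{\\rm H}\\, {\\rm diag} ({\\bf F}_{\\rm R} {\\ddot{\\bf H}}_{\\rm r}\\bar{\\bf x})^{\\rm H} \\right]={\\bm 0},
\\end{aligned}
\\end{equation}
since the inner expectations over ${\\ddot{\\bf g}}_i$ and ${\\ddot{\\bf g}}_j$ vanish, while the outer expectation is taken over ${\\ddot{\\bf H}}_{\\rm r}$.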
Since all ${\\bf v}_{m}$ ($m=1,2,\\cdots,M$) follow a joint multivariate normal distribution, ${\\bf v}_{m}$ for all $m$ are pairwise independent.\nIn addition, since ${\\bf C}_m^{(k)}={\\mathbb E}\\left[ {\\bf h}_{{\\rm I},k,m} {\\bf h}_{{\\rm I},k,m}^{\\rm H} \\right]$ is full-rank for all $k$ and $m$, ${\\mathbb E}[{\\bf v}_m {\\bf v}_m^{\\rm H}]$ is also full-rank for all $m$ with properly-designed $\\bar{\\bf x}$.\n\nNext, using ${\\bf R}_{\\ell}$ in \\eqref{equ:R1} and \\eqref{equ:R2L1}, we have\n\\begin{equation}\\label{equ:app0_RL}\n\\begin{aligned}[b]\n\\tilde{\\bf R}_{\\ell} &= {\\bf F}_{\\rm B}^{-1} {\\bf R}_{\\ell} {\\bf X}^{-1}\\\\\n&={\\bf F}_{\\rm B}^{-1} {\\bf G}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm r}\\\\\n&={\\bf F}_{\\rm B}^{-1} {\\bf H}_{\\rm v} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm A}\\\\\n&={\\bf V}^{\\rm T} {\\rm diag} ({\\bm \\theta}_\\ell) {\\bf H}_{\\rm A}\n.\n\\end{aligned}\n\\end{equation}\nStacking all $\\tilde{\\bf R}_{\\ell}$, we have\n\\begin{equation}\\label{equ:app0_mesure}\n\\begin{aligned}[b]\n\\begin{bmatrix}\\tilde{\\bf R}_1\\\\ \\vdots \\\\ \\tilde{\\bf R}_{L_1}\\end{bmatrix}\n=\\begin{bmatrix}{\\bf V}^{\\rm T}{\\rm diag} ({\\bm \\theta}_1)\\\\ \\vdots \\\\ {\\bf V}^{\\rm T}{\\rm diag} ({\\bm \\theta}_{L_1})\\end{bmatrix}\n{\\bf H}_{\\rm A}.\n\\end{aligned}\n\\end{equation}\nDefine ${\\bm \\Psi}=[{\\rm diag} ({\\bm \\theta}_1){\\bf V},\\cdots,{\\rm diag} ({\\bm \\theta}_{L_1}){\\bf V}]^{\\rm T}$. It now remains to prove that ${\\text{rank}}({\\bm \\Psi})={\\rm{min}}\\{N,ML_1\\}$ with probability one, which completes the whole proof.\n\nBy permuting the columns of ${\\bm \\Psi}$, we have a new matrix\n$\\bar{\\bm \\Psi}=[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_M)\\bar{\\bm \\Phi}]^{\\rm T}$\nwhere $\\bar{\\bm \\Phi}=[{\\bm \\theta}_1,\\cdots,{\\bm \\theta}_{L_1}]$.\nThen, it is equivalent to prove that ${\\text{rank}}(\\bar{\\bm \\Psi})={\\rm{min}}\\{N,ML_1\\}$ with probability one.
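Before the formal induction argument below, the rank identity can be checked numerically. The following sketch is illustrative only and not part of the proof: the sizes are hypothetical, $\\bar{\\bm \\Phi}$ is taken as the first $L_1$ columns of an $N\\times N$ DFT matrix (one possible semi-orthogonal, unit-modulus choice), and the ${\\bf v}_m$ are drawn as i.i.d. complex Gaussian vectors.
\\begin{verbatim}
import numpy as np

# Numerical sanity check of rank(Psi_bar) = min{N, M*L1}
# (illustrative only; hypothetical sizes and channel statistics).
rng = np.random.default_rng(0)
N, M, L1 = 32, 8, 4
F = np.fft.fft(np.eye(N))          # DFT matrix: orthogonal, unit-modulus columns
Phi_bar = F[:, :L1]                # semi-orthogonal N x L1 training phase matrix
V = (rng.standard_normal((N, M)) +
     1j * rng.standard_normal((N, M))) / np.sqrt(2)
blocks = [np.diag(V[:, m]) @ Phi_bar for m in range(M)]   # each block is N x L1
Psi_bar = np.concatenate(blocks, axis=1).T                # (M*L1) x N
print(np.linalg.matrix_rank(Psi_bar), min(N, M * L1))     # expect: 32 32
\\end{verbatim}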
We prove it by induction.\nDefine $\\bar{\\bm \\Psi}_m=[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_m)\\bar{\\bm \\Phi}]^{\\rm T}$.\nSince $\\bar{\\bm \\Phi}$ is semi-orthogonal, ${\\text{rank}}(\\bar{\\bm \\Psi}_1)=L_1$ with probability one.\nLet ${\\text{rank}}(\\bar{\\bm \\Psi}_{m-1})={\\rm{min}}\\{N,(m-1)L_1\\}$ with probability one, and the remaining task is to prove ${\\text{rank}}(\\bar{\\bm \\Psi}_m)={\\rm{min}}\\{N,mL_1\\}$ with probability one.\nWe prove it by contradiction.\nConsider the case where ${\\text{rank}}(\\bar{\\bm \\Psi}_{m-1})=(m-1)L_1$, which is smaller than $N$, but ${\\text{rank}}(\\bar{\\bm \\Psi}_m)<{\\rm{min}}\\{N,mL_1\\}$.\nSince ${\\rm diag} ({\\bf v}_m) {\\bm \\theta}_i$ and ${\\rm diag} ({\\bf v}_m) {\\bm \\theta}_j$ are orthogonal for $i \\neq j$, there must exist ${\\bf x}$ which satisfies:\n\\begin{equation}\\label{equ:app0_x}\n\\begin{aligned}[b]\n[{\\rm diag} ({\\bf v}_1)\\bar{\\bm \\Phi},\\cdots,{\\rm diag} ({\\bf v}_{m-1})\\bar{\\bm \\Phi}]{\\bf x}={\\rm diag} ({\\bf v}_m) {\\bm \\theta}_\\ell,\n\\end{aligned}\n\\end{equation}\nfor some $1\\leq \\ell \\leq L_1$ such that ${\\text{rank}}(\\bar{\\bm \\Psi}_m)<{\\rm{min}}\\{N,mL_1\\}$ is true.\nHowever, since ${\\bf v}_m$ is independent of ${\\bf v}_1,{\\bf v}_2,\\cdots,{\\bf v}_{m-1}$ and has a full-rank covariance matrix, equation \\eqref{equ:app0_x} is inconsistent (i.e., has no solution) with probability one, which completes the proof.\n\n\\section{Estimation on the BS-User Channel}\\label{app_Estimate_Hd}\nBased on ${\\bm \\theta}_0=-{\\bm \\theta}_1$, we have\n\\begin{equation}\\label{equ:r_direct}\n\\begin{aligned}[b]\n{\\bf R}_{0} &= \\frac{1}{2}\\left({{\\bf Y}}_0+{{\\bf Y}}_1\\right) {\\bf X}^{-1}\\\\\n&={\\bf H}_{\\rm d}+\\tilde{\\bf Z}_0\n,\n\\end{aligned}\n\\end{equation}\nwhere $\\tilde{\\bf Z}_0$ is the noise matrix consisting of $MK$ i.i.d.
complex Gaussian variables following ${\\cal{CN}}({ 0},\\frac{1}{2K}\\sigma_0^2) $.\nLet ${\\bf r}_{{0},k}$ be the $k$-th column of ${\\bf R}_{0}$.\nThe BS-user direct channel for the $k$-th user can be estimated by the\nLMMSE\nestimator \\cite{Kay1993statisticalSP}:\n\\begin{equation}\\label{equ:hat_direct}\n\\begin{aligned}[b]\n\\hat{\\bf h}_{{\\rm d},k}={\\bf C}_{{\\rm d},k} \\left({\\bf C}_{{\\rm d},k}+ \\frac{\\sigma_0^2}{2K} {\\bf I}_M\\right)^{-1} {\\bf r}_{0,k},\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf C}_{{\\rm d},k}={\\mathbb E}\\left[ {\\bf h}_{{\\rm d},k} {\\bf h}_{{\\rm d},k}^{\\rm H} \\right]$\nis the covariance matrix of the direct channel from the BS to the $k$-th user.\nNote that, as shown in \\eqref{equ:r_direct} and \\eqref{equ:hat_direct}, the proposed protocol may achieve a $3$ dB performance gain compared to the existing works, \\cite{Kundu2021OJCSLMMSE_DFTGOOD,Alwazani2020OJCSLMMSE_DFT,Liuliang_CE2020TWC}, on the estimation of the BS-user channels since it exploits doubled observation samples.\n\n\\section{Proof of Lemma \\ref{eq_decomposite}}\\label{proof_lemma1}\nThe objective function of ${\\mathcal{P}}{(\\text{A})}$ can be denoted by the function of the cascaded channel coefficients $\\{{\\bf h}_{{\\rm I},k,m}\\}$, as follows:\n\\begin{equation}\\label{equ:obj_f_A2}\n\\begin{aligned}[b]\n&f_{{\\rm A}}({\\bf H}_{\\rm g}, {\\bf H}_{\\rm u})\n=f_{{\\rm A}}(\\{{\\bf h}_{{\\rm I},k,m}\\})\\\\\n&=-\\sum_{\\ell=1}^{L_1} \\sum_{k=1}^K\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n\\tilde{{\\bf r}}_{\\ell,k}- \\sum_{m=1}^M {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_{\\ell}\n\\right\\|^2 }\\\\\n&\\quad-\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-\\sum_{k=1}^K \\sum_{m=1}^M \\bar{x}_k {\\bf h}_{{\\rm I},k,m}^{\\rm T} {\\bm \\theta}_{\\ell}\n\\right\\|^2 }\n\\\\\n&\\quad-\\sum_{m=1}^M \\sum_{k=1}^K {\\bf h}_{{\\rm I},k,m}^{\\rm H} {{\\bf C}_m^{(k)}}^{-1} {\\bf h}_{{\\rm I},k,m} .\n\\end{aligned}\n\\end{equation}\nTherefore, if one optimal solution $\\left\\{{\\bf H}_{\\rm g}^\\star,{\\bf H}_{\\rm u}^\\star \\right\\} \\in {\\cal A}$, any $\\left\\{{\\bf H}_{\\rm g},{\\bf H}_{\\rm u}\\right\\}$ pair in set ${\\cal A}$ is an optimal solution of ${\\mathcal{P}}{(\\text{A})}$, and the lemma is proved.\n\n\n\n\\section{Proof of Lemma \\ref{fu_convex}}\\label{proof_lemmafu}\nDenote $\\ddot{\\bf h}_{\\rm u}={\\rm{vec}}({\\bf H}_{\\rm u})$. 
The objective function $f_{{\\rm u}}$ in \\eqref{equ:obj_f_hrk} is given by\n\\begin{equation}\\label{equ:proof_fu_1}\n\\begin{aligned}[b]\n&f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})\n=\\sum_{\\ell=1}^{L_1}\n{\\frac{1}{\\sigma_{\\ell}^{2}} \\left\\|\n{\\rm{vec}} (\\tilde{{\\bf R}}_{\\ell})- \\left({\\bf I}_K \\otimes {\\bf D}_{\\ell}\\right) \\ddot{\\bf h}_{\\rm u}\n\\right\\|^2_{2} }\\\\\n&\\quad+\\sum_{\\ell=L_1+1}^{N}\n{ \\frac{1}{\\bar\\sigma_{\\ell}^{2}} \\left\\|\n\\bar{\\bf r}_{\\ell}-\\left({\\bar{\\bf x}}^{\\rm T} \\otimes {\\bf D}_{\\ell}\\right) \\ddot{\\bf h}_{\\rm u}\n\\right\\|^2_2 }\n+ \\ddot{\\bf h}_{\\rm u}^{\\rm H} {\\bf C}_{{\\rm u}} \\ddot{\\bf h}_{\\rm u}\n,\n\\end{aligned}\n\\end{equation}\nwhere ${\\bf C}_{{\\rm u}}={\\rm{blkdiag}}({\\bf C}_{{\\rm u},1},{\\bf C}_{{\\rm u},2},\\cdots,{\\bf C}_{{\\rm u},K})$.\nThe second order derivative of $f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})$ is given by\n\\begin{equation}\\label{equ:proof_fu_2}\n\\begin{aligned}[b]\n\\frac{ \\partial^2 f_{{\\rm u}}(\\ddot{\\bf h}_{\\rm u})}{\\partial \\ddot{\\bf h}_{\\rm u} \\partial \\ddot{\\bf h}_{\\rm u}^{\\rm H}}\n&=2{\\bf C}_{{\\rm u}}\n+2{\\bf I}_K \\otimes \\left(\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right) \\notag\\\\\n&\\quad + 2\\left(\\bar{\\bf x}^\\ast \\bar{\\bf x}^{\\rm T}\\right)\\otimes \\left(\\sum_{\\ell=L_1+1}^{N} \\frac{1}{\\bar\\sigma_{\\ell}^{2}} |\\bar{x}_k|^2 {\\bf D}_{\\ell}^{\\rm H} {\\bf D}_{\\ell}\\right)\n,\n\\end{aligned}\n\\end{equation}\nwhich is a Hermitian positive semi-definite matrix. Thus, the lemma is proved.\n\n\\section{Proof of Lemma \\ref{fg_convex}}\\label{proof_lemmafg}\nThe second order derivative of $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ in \\eqref{equ:obj_f_gm} is given by\n\\begin{equation}\\label{equ:proof_fg_1}\n\\begin{aligned}[b]\n&\\frac{ \\partial^2 f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})}{\\partial {\\bf h}_{{\\rm g},m} \\partial {\\bf h}_{{\\rm g},m}^{\\rm H}}\n=2{\\bf C}_{{\\rm g},m}^{-1}\n+2\\sum_{k=1}^K\\sum_{\\ell=1}^{L_1} \\frac{1}{\\sigma_{\\ell}^{2}} {\\bf b}_{\\ell,k}^\\ast {\\bf b}_{\\ell,k}^{\\rm T}\\\\\n&\\quad+ 2\\sum_{\\ell=L_1+1}^{N}\\frac{1}{\\bar\\sigma_{\\ell}^{2}}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)^{\\rm H}\n\\left(\\sum_{k =1}^K \\bar{x}_k {\\bf b}_{\\ell,k}^{\\rm T}\\right)\n,\n\\end{aligned}\n\\end{equation}\nwhich is a Hermitian positive semi-definite matrix. Thus, $f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ is a convex quadratic function of ${\\bf h}_{{\\rm g},m}$. 
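As a side remark on how such convexity results are typically exploited: a quadratic $f({\\bf h})={\\bf h}^{\\rm H}{\\bf A}{\\bf h}-2{\\rm Re}({\\bf b}^{\\rm H}{\\bf h})+c$ with Hermitian positive semi-definite ${\\bf A}$ is convex, and when ${\\bf A}$ is positive definite its unique minimizer is
\\begin{equation}
{\\bf h}^\\star={\\bf A}^{-1}{\\bf b},
\\end{equation}
obtained from the first-order condition ${\\bf A}{\\bf h}-{\\bf b}={\\bm 0}$; thus each convex quadratic subproblem of this form can be solved by a single linear solve. The symbols ${\\bf A}$, ${\\bf b}$, and $c$ here are generic placeholders, not tied to the specific matrices above.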
Then, the objective function $\\sum_{m=1}^M f_{{\\rm g},m}({\\bf h}_{{\\rm g},m})$ is a convex quadratic function of $\\{{\\bf h}_{{\\rm g},1},{\\bf h}_{{\\rm g},2},\\cdots,{\\bf h}_{{\\rm g},M}\\}$.\n\n\n\n\n\n\\bibliographystyle{IEEEtran\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n The stability of quantum motion in dynamical systems,\n measured by quantum Loschmidt echo \\cite{Peres84}, has attracted much attention\n in recent years.\n The echo is the overlap of the evolution of the\n same initial state under two Hamiltonians with slight difference in the classical limit,\n $ M(t) = |m(t) |^2 $, where\n \\begin{equation} m(t) = \\langle \\Psi_0|{\\rm exp}(iHt\/ \\hbar ) {\\rm exp}(-iH_0t \/ \\hbar) |\\Psi_0 \\rangle \\label{mat} \\end{equation}\n is the fidelity amplitude.\n Here $H_0$ and $H$ are the unperturbed and perturbed Hamiltonians, respectively,\n $ H=H_0 + \\epsilon H_1 $, with $\\epsilon $ a small quantity and $H_1$ a perturbation.\n This quantity $M(t)$ is called fidelity in the field of\n quantum information \\cite{nc-book}.\n\n\n Fidelity decay in quantum systems whose classical counterparts have\n strong chaos with exponential instability, has been studied well\n \\cite{JP01,JSB01,CLMPV02,JAB02,BC02,CT02,PZ02,WL02,VH03,STB03,WCL04,Vanicek04,WL05,WCLP05,GPSZ06}.\n Related to the perturbation strength, previous investigations show\n the existence of at least three regimes of fidelity decay:\n (i) In the perturbative regime in which the typical transition matrix element is smaller than the\n mean level spacing, the fidelity has a Gaussian decay.\n (ii) Above the perturbative regime, the fidelity has an exponential decay with a rate\n proportional to $\\epsilon^2$, usually called the Fermi-golden-rule (FGR) decay of fidelity.\n (iii) Above the FGR regime is the Lyapunov regime in which $M(t)$ has usually an\n approximate exponential decay with a perturbation-independent rate.\n\n\n Fidelity decay in regular systems with quasiperiodic motion in the\n classical limit has also attracted much attention\n \\cite{PZ02,JAB03,PZ03,SL03,Vanicek04,WH05,Comb05,HBSSR05,GPSZ06,WB06,pre07}.\n For single initial Gaussian wavepacket, the fidelity has\n been found to have initial Gaussian decay followed by power law decay\\cite{PZ02,WH05,pre07}.\n\n\n Meanwhile, there exists a class of system which lies between the two classes of system mentioned above,\n namely, between chaotic systems with exponential instability and regular systems\n with quasiperiodic motion.\n One example of this class of system is the triangle map proposed by Casati and Prosen \\cite{triangle}.\n The map has linear instability with vanishing Lyapunov exponent,\n but can be ergodic and mixing with power-law decay of correlations.\n The classical Loschmidt echo in the triangle map has been studied recently\n and found behaving differently from that in systems with exponential instability\n and in systems with quasiperiodic motion \\cite{c-fid-tri}.\n This suggests that the decaying behavior of fidelity in the quantum triangle map may be\n different from that in the other two classes of system as well.\n In this paper, we present numerical results which confirm this expectation.\n\n\n Specifically, like in systems possessing strong chaos,\n in the triangle map three regimes of fidelity decay are found\n with respect to the perturbation strength: weak, intermediate and strong.\n However, in each of the three regimes, the decaying law(s) for the fidelity in the triangle map has\n 
been found different from that in systems possessing strong chaos.\n In section II, we recall properties of the classical triangle map\n and discuss its quantization.\n Section III is devoted to numerical investigations for the laws of fidelity decay\n in the three regimes of perturbation strength.\n Conclusions are given in section IV.\n\n\n \\section{Triangle map}\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0001-t1p7-N8.EPS}\n \\caption{ (color online).\n Averaged fidelity at weak perturbation, $\\sigma =10^{-4}$(solid curve),\n with average taken over 50 initial point sources chosen randomly, $N=2^{12}=4096$.\n The dashed-dotted straight line has a slope 1.7, showing that $\\log_{10}\\overline M(t)$ is approximately\n a function of $t^{1.7}$.\n For comparison, we also show two straight lines (dashed and dotted) with\n slopes 1 and 2, respectively.\n } \\label{fig-s0001-t1p7}\n \\end{figure}\n\n\n\n On the torus $(r,p) \\in {T}^2 = [-\\pi ,\\pi ) \\times [-\\pi ,\\pi )$,\n the triangle map is\n \\begin{eqnarray} \\nonumber p_{n+1} = p_n + \\alpha \\ \\text{sgn} (r_n)+ \\beta , \\hspace{1cm} (\\text{mod} 2\\pi )\n \\\\ r_{n+1} = r_n + p_{n+1} , \\hspace{1cm} (\\text{mod} 2\\pi ) \\label{map} \\end{eqnarray}\n where $\\text{sgn}(r) = \\pm 1 $ is the sign of $r$ for $r \\ne 0$ and\n $\\text{sgn}(r) =0$ for $r=0$ \\cite{triangle}.\n Rich behaviors have been found in the map:\n For rational $\\alpha \/\\pi$ and $\\beta \/\\pi$, the system is pseudointegrable.\n With the choice of $\\alpha =0$ and irrational $\\beta \/ \\pi$, it is ergodic but not mixing.\n Interestingly, for incommensurate irrational values of $\\alpha \/\\pi$ and $\\beta \/\\pi$,\n the dynamics is ergodic and mixing.\n In our numerical calculations, we take $\\alpha = \\pi^2 $ and $\\beta = (\\sqrt{5} -1)\\pi \/2$,\n {for which $(\\beta \/ \\alpha )$ is an irrational number, the golden mean divided by $\\pi$,\n and the map is ergodic and mixing.}\n\n\n The triangle map (\\ref{map}) can be associated with the Hamiltonian\n \\begin{eqnarray} H = \\frac 12 \\widetilde p^2 + V(r) \\sum_{n=-\\infty }^{\\infty } \\delta (t-nT), \\label{H} \\end{eqnarray}\n where $ V(r) = - \\widetilde \\alpha |r| - \\widetilde \\beta r$ and $T$ is the period of kicking.\n It is easy to verify that the dynamics produced by this Hamiltonian gives the map (\\ref{map})\n with the replacement $p=T\\widetilde p, \\alpha = T\\widetilde \\alpha $, and $\\beta =T\\widetilde \\beta $.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0001-001-N1.EPS}\n \\caption{ (color online).\n Averaged fidelity at three weak perturbation strengths, $\\sigma =10^{-4}$(thin solid curve), $10^{-3}$\n (dashed curve), and $10^{-2}$(thick solid curve),\n with average taken over 50 initial point sources chosen randomly, $N=2^{12}=4096$.\n The dashed-dotted straight line represents $M_1(t)$ in Eq.~(\\ref{ctgamma})\n with $\\gamma =1.7$ and $c$ as an adjusting parameter.\n Inset: Fidelity of $\\sigma =10^{-3}$ and $N=2^{n}$;\n the two curves are almost indistinguishable.\n } \\label{fig-s0001-tlog}\n \\end{figure}\n\n\n The classical map can be quantized by the method of quantization on torus\n \\cite{HB80-q-tori, FMR91,WB94,Haake}.\n Schr\\\"{o}dinger evolution under the Hamiltonian in Eq.~(\\ref{H})\n for one period of time is given by the Floquet operator\n \\begin{equation} \\label{U1} U = \\exp \\left [ -\\frac i2 ({\\hat{\\widetilde p}})^2T \\right ]\n \\exp [-i V( {\\hat r}) ], \\end{equation}\n where we set $\\hbar =1$ in 
Schr\\\"{o}dinger equation.\n In this quantization scheme, an effective Planck constant $\\hbar_{\\rm eff}=T$ is introduced.\n It has the following relation to the dimension $N$ of the Hilbert space,\n \\begin{equation} \\label{h} N h_{\\rm eff} =4\\pi^2, \\end{equation}\n hence, $\\hbar_{\\rm eff} = 2\\pi \/ N$.\n In what follows, for brevity, we will omit the subscript eff of $\\hbar_{\\rm eff}$.\n Eigenstates of $\\hat{r} $ and $\\hat p$ are discretized,\n $\\hat{r}|j\\rangle = j \\hbar |j\\rangle $ and $\\hat{p}|k\\rangle = k \\hbar |k\\rangle $,\n with $j,k =-N\/2,-N\/2+1,\\ldots ,0,1, \\ldots , (N\/2)-1$.\n Then,\n {making use of the above discussed relations among $\\widetilde p, p, T, \\widetilde \\alpha , \\alpha ,\n \\widetilde \\beta , \\beta $, in particular, $T=\\hbar $},\n the Floquet operator in Eq.~(\\ref{U1}) can be written as\n \\begin{equation} \\label{U} U = \\exp \\left [ -\\frac {i}{2\\hbar} ({\\hat{ p}})^2 \\right ]\n \\exp \\left [ \\frac{i}{\\hbar} (\\alpha |\\hat r| +\\beta \\hat r) \\right ] . \\end{equation}\n In numerical computation, the time evolution\n $ |\\psi (t)\\rangle = U^t |\\psi_0\\rangle $ is calculated by the fast Fourier transform (FFT) method.\n\n\n The fidelity in Eq.~(\\ref{mat}) involves two slightly different Hamiltonians,\n unperturbed and perturbed.\n In this paper, for an unperturbed system with parameters $\\alpha $ and $\\beta $,\n the perturbed system is given by\n \\begin{equation} \\alpha \\to \\alpha + \\epsilon \\ \\ \\ \\ \\beta \\to \\beta . \\end{equation}\n Without the loss of generality, we assume $\\epsilon \\ge 0$.\n The parameter $\\sigma =(\\epsilon \/ \\hbar )$ can be used to characterize the strength of quantum\n perturbation.\n\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-s001-01-N2.EPS}\n \\caption{ (color online).\n Variation of the averaged fidelity with $\\sigma t$ for $\\sigma =0.01, 0.02$ and 0.1,\n with average taken over 100 initial point sources chosen randomly, $N=4096$.\n The solid straight line is drawn for a comparison with linear dependence on $\\sigma t$.\n For $\\sigma = 0.02$ and 0.1, $\\log_{10} \\overline M(t)$ is approximately a linear function of $ \\sigma t$,\n before it becomes close to the saturation value.\n Inset: The distribution $P(y)$ for the action difference $\\Delta S$ at $t=40$,\n where $y=(\\Delta S -\\langle \\Delta S \\rangle )\n \/ \\epsilon $ and $\\langle \\Delta S \\rangle$ is the average value of $\\Delta S$.\n It is calculated by taking randomly $10^7$ initial points in the phase space.\n $P(y)$ does not have a Gaussian shape.\n } \\label{fig-mt-s001-01}\n \\end{figure}\n\n\n \\section{Three regimes of fidelity decay}\n\n \\subsection{Weak perturbation regime}\n\n\n Let us first discuss weak perturbation.\n As mentioned in the introduction, in systems with strong chaos in the classical limit,\n the fidelity has a Gaussian decay under sufficiently weak perturbation.\n The Gaussian decay is derived by making use of the first order perturbation theory for eigensolutions\n of $H$ and $H_0$ and the random matrix\n theory for $\\Delta E_n \\equiv E_n-E^0_n$, where $E_n$ and $E^0_n$ are\n eigenenergies of $H$ and $H_0$, respectively.\n Numerical results in Ref.~\\cite{EKW05} show agreement of the spectral\n statistics in the triangle map with the prediction of random matrix theory,\n hence, at first sight, Gaussian decay might be expected for the fidelity decay\n in the weak perturbation regime of the triangle map.\n\n\n However, our numerical results show a 
non-Gaussian decay of fidelity for small perturbation.\n An example is given in Fig.~\\ref{fig-s0001-t1p7} for $\\sigma=10^{-4}$.\n To obtain relatively smooth curves for fidelity,\n average has been taken over 50 initial point sources (eigenstates of $\\hat r$) chosen randomly.\n This figure, plotted with $\\log_{10} \\left (-\\log_{10} \\overline M(t)\\right )$ versus $ \\log_{10} t $,\n shows clearly that $\\log_{10}\\overline M(t)$ is approximately proportional to $t^{1.7}$ (the\n dashed-dotted straight line), while is far from the Gaussian case of $t^2$ and the\n exponential case of $t$ represented by the dotted and dashed lines, respectively.\n\n\n Furthermore, we found that the averaged fidelity $\\overline M(t)$ can be fitted well by\n \\begin{equation} \\label{ctgamma} M_1(t) = \\exp (-c \\sigma^2 t^{\\gamma }) \\end{equation}\n with $\\gamma \\simeq 1.7$ and $c$ as a fitting parameter.\n In Fig.~\\ref{fig-s0001-tlog}, we show fidelity decay for three different values of $\\sigma $.\n With the horizontal axis scaling with $\\log_{10}\\sigma^2 t^{1.7 }$,\n the three curves corresponding to the three values of $\\sigma $\n are hardly distinguishable in their overlapping regions (except for long times).\n Note that, to show clearly the dashed-dotted straight line which represents\n $M_1(t)$ in Eq.~(\\ref{ctgamma}),\n we have deliberately adjusted a little the best-fitting value of $c$ such that the dashed-dotted line\n is a little above the curves of the fidelity.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{s0p1-n11-n12-N3.EPS}\n \\caption{ (color online).\n Fidelity decay for $\\sigma =0.1$ and $N=2^{n}$,\n averaged over 100 initial point sources.\n } \\label{fig-s0p1-n11-n12}\n \\end{figure}\n\n\n\n In the inset of Fig.~\\ref{fig-s0001-tlog}, we show curves of fidelity for the same $\\sigma$\n but different values of $\\epsilon $ and $N$.\n The two curves are very close, supporting the assumption\n that $\\epsilon $ and $N$ appear in the form of the\n single variable $\\sigma $ as written on the right hand side of Eq.~(\\ref{ctgamma}).\n This dependence of $\\overline M(t)$ on the variable $\\sigma $ for sufficiently small $\\sigma $\n can be understood in a first-order perturbation treatment of fidelity,\n as shown in the following arguments.\n\n\n Let us consider a Hilbert space with sufficiently large dimension $N$\n and make use of arguments similar to those used in Ref.~\\cite{CT02}\n for deriving the Gaussian decay,\n but without assuming the applicability of the random matrix theory.\n It follows that, for times not very long, the averaged fidelity (averaged over initial states)\n is mainly determined by\n $\\langle \\exp (-i\\Delta \\omega_n t ) \\rangle$, where $\\Delta \\omega_n =\\omega_{n} -\\omega_n^0$\n and $\\langle \\ldots \\rangle $ indicates average over the quasi-spectrum.\n Here $\\omega_n^0$ is an eigen-frequency of the Floquet operator $U$ in Eq.~(\\ref{U})\n and $\\omega_n$ is the corresponding eigen-frequency of $(U e^{i\\sigma |r|})$.\n For large $N$, $\\langle \\exp (-i\\Delta \\omega_n t ) \\rangle$\n can be calculated by making use of the distribution of $\\Delta \\omega_n $.\n Since the two Floquet operators $U$ and $(U e^{i\\sigma |r|})$ differ by $e^{i\\sigma |r|}$,\n the distribution of $\\Delta \\omega_n $ is approximately a function of $\\sigma $.\n Then, $M(t)$ is approximately a function $\\sigma $.\n\n\n Finally, we give some remarks on the value of $\\gamma $.\n When $\\Delta \\omega_n$ has a Gaussian distribution, 
$\\overline M(t)$ has a Gaussian decay with $\\gamma =2$,\n as in the case of systems possessing strong chaos.\n In the triangle map, the non-Gaussian decay of fidelity discussed above implies\n that $\\Delta \\omega_n$ does not have a Gaussian distribution.\n Other types of distribution may predict values of $\\gamma $ different from 2, in particular,\n a L\\'{e}vy distribution would give $\\gamma <2$ in agreement with our numerical result.\n We also remark that the results here are not in confliction with numerical results of Ref.~\\cite{EKW05},\n in which only the statistics of $\\omega_n$ (not that of $\\Delta \\omega_n$) is found\n in agreement with the prediction of random matrix theory.\n\n\n \\subsection{Intermediate perturbation strength}\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-s01-1-poi-st-N4.EPS}\n \\caption{ (color online).\n Averaged fidelity of $\\sigma$ from 0.1 to 1, with average taken over 1000 randomly chosen\n initial pointer sources, $N=2^{14}=16384$.\n For $\\sigma =0.2$ and above, the averaged fidelity obeys a decaying law which is different from\n that in Eq.~(\\ref{Mt-sigma-t}), in particular, it is not a function of $(\\sigma t)$.\n } \\label{fig-mt-s01-1-poi-st}\n \\end{figure}\n\n\n\n With increasing perturbation strength, exponential decay of $\\overline M(t)$ appears\n (see Fig.~\\ref{fig-mt-s001-01}).\n For $\\sigma $ from 0.02 to 0.1, after some initial times and before approaching\n its saturation value, the fidelity decays as\n \\begin{equation} M_2(t) = \\exp (- a \\sigma t), \\label{Mt-sigma-t} \\end{equation}\n with $a$ as a fitting parameter.\n Numerically, we found that $a \\approx 0.08$.\n The decay rate is proportional to $(\\sigma t)$, unlike in the FGR decay found in systems\n with strong chaos,\n \\begin{equation} M_{\\rm FGR}(t) \\sim \\exp (-2 \\sigma^2 K_E t), \\label{FGR} \\end{equation}\n where $K_E$ is the classical action diffusion constant \\cite{CT02}.\n The curves of $\\sigma =0.02$ and 0.1 in Fig.~\\ref{fig-mt-s001-01} are quite close,\n while that of $\\sigma =0.01$ has some deviation from the two.\n This implies that the $\\exp (- a \\sigma t)$ behavior of $\\overline M(t)$ appears\n between $\\sigma =0.01$ and 0.02.\n Note that vertical shifts have been made for the two curves of\n $\\sigma =0.02$ and 0.1 in Fig.~\\ref{fig-mt-s001-01} for better comparison.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{lglgm-st2p5-s2-10-N5.EPS}\n \\caption{(color online).\n Averaged fidelity at strong perturbation,\n with average taken over 1000 randomly chosen initial Gaussian wavepackets,\n $N=2^{17}=131072$.\n $z=\\epsilon t^{2.5}\/\\hbar $ with $\\hbar $ fixed in this figure.\n The solid line represents a curve $\\exp (-c \\epsilon t^{2.5})$,\n where the fitting parameter $c$ is determined from comparison with the two curves\n of $\\sigma =2$ and 4 in the small-$z$ region.\n } \\label{fig-lglgm}\n \\end{figure}\n\n\n The origin of the non-FGR decay of fidelity\n in this regime of perturbation strength, may come from weak chaos.\n In fact, in another system which also possesses weak chaos in the classical limit,\n namely, the sawtooth map in some parameter regime, linear dependence of the decaying rate\n on $\\sigma$ has also been observed in the intermediate perturbation regime \\cite{WCL04,WL05,foot1}.\n In this regime of perturbation strength, the semiclassical theory predicts that,\n in the first order classical perturbation theory,\n the averaged fidelity is given by \\cite{WCL04}\n \\begin{eqnarray} 
\\overline M(t) \\simeq \\left | \\int d\\Delta S e^{i\\Delta S\/ \\hbar }\n P(\\Delta S)\\right |^2, \\label{Mp-ps} \\end{eqnarray}\n where\n $ \\Delta S( {\\bf p} _0 , {\\bf r} _0 ; t) = \\epsilon \\int_0^t dt' H_1[( {\\bf r} (t')]$\n is the action difference of two the classical trajectories starting at the same\n point $( {\\bf p} _0 , {\\bf r} _0)$ in the two systems,\n with $H_1$ evaluated along one of the two trajectories,\n and $P(\\Delta S)$ is the distribution of $ \\Delta S( {\\bf p} _0 , {\\bf r} _0 ; t)$.\n In systems possessing strong chaos, $P(\\Delta S)$ may have a Gaussian form, which implies\n the FGR decay for the fidelity.\n In the triangle map, $P(\\Delta S)$ is not a Gaussian distribution\n as shown in the inset of Fig.~\\ref{fig-mt-s001-01},\n hence, the fidelity does not have the FGR decay with a rate proportional to $\\sigma^2$.\n\n\n It is difficult to find an analytical expression for $P(\\Delta S)$,\n hence, we can not derive Eq.~(\\ref{Mt-sigma-t}) analytically.\n However, a qualitative understanding of the $(\\sigma t)$-dependence of $\\overline M(t)$ can be\n gained, as shown in the following arguments.\n Equation (\\ref{Mp-ps}) shows that the time-dependence of fidelity decay comes mainly from\n the dependence of $P(\\Delta S)$ on time.\n In the case of strong chaos, $\\Delta S$ behaves like a random walk, hence,\n $P(\\Delta S)$ has a Gaussian form with a width increasing as $\\sqrt t$ \\cite{CT02}.\n Since $\\Delta S \\propto \\epsilon $, the width of $P(\\Delta S)$ is a function of $(\\epsilon \\sqrt t)$;\n then, Eq.~(\\ref{Mp-ps}) gives the FGR decay of $\\overline M(t)$ which depends on $(\\sigma^2t)$.\n In the case of the triangle map, due to the linear instability of the map, it may happen\n that the width of $P(\\Delta S)$ increase linearly with $ t$\n in some situations when $t$ is not very long.\n This implies that the width of $P(\\Delta S)$ may be a function of the variable $(\\epsilon t)$.\n Then, it is possible for $\\overline M(t)$ to be approximately a function of $(\\sigma t)$.\n\n\n Equation (\\ref{Mp-ps}) predicts that, up to the first order classical perturbation theory,\n the dependence of $\\overline M(t)$ on $\\epsilon $ and $\\hbar $\n takes the single variable $\\sigma =\\epsilon \/ \\hbar $.\n Numerically we found that this is approximately correct, as shown in Fig.~\\ref{fig-s0p1-n11-n12}.\n Specifically, for fixed $\\sigma =0.1$, $\\overline M(t)$ of $N=2^{11}$ and of $N=2^{12}$ separate at about $t=15$.\n Indeed, for long times $t$, higher order contributions in the classical perturbation theory may\n need consideration and $\\overline M(t)$ may depend on $\\epsilon $ and $\\hbar $ in a different way.\n For larger $N$, hence smaller $\\hbar $, the agreement becomes better,\n e.g., $\\overline M(t)$ of $N=2^{12}$ is closer to $N=2^{13}$ than to $N=2^{11}$.\n\n\n\n When $\\sigma $ goes beyond 0.1,\n the exponential decay of $\\overline M(t)$ expressed in Eq.~(\\ref{Mt-sigma-t}) disappears,\n in particular, the dependence of $\\overline M(t)$ on $\\sigma $ and $t$ does not take the form of $(\\sigma t)$\n (see Fig.~\\ref{fig-mt-s01-1-poi-st}).\n Meanwhile fluctuations of $\\overline M(t)$ becomes larger and larger with increasing $\\sigma $\n for initial point states.\n For example, Fig.~\\ref{fig-mt-s01-1-poi-st} shows that $\\overline M(t)$ of $\\sigma =1$\n has considerable fluctuations even after averaging over 1000 initial point sources.\n Taking initial Gaussian wavepackets, the fluctuations can be much suppressed.\n\n\n 
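As a quantitative aside connecting the two formulas quoted above: if $P(\\Delta S)$ is Gaussian with mean $\\langle \\Delta S \\rangle$ and a variance taken to grow diffusively as $2\\epsilon^2 K_E t$, then Eq.~(\\ref{Mp-ps}) is the squared modulus of a Gaussian characteristic function evaluated at $1\/\\hbar$,
\\begin{equation} \\overline M(t) \\simeq \\left| e^{i\\langle \\Delta S \\rangle \/ \\hbar }\\, e^{-\\epsilon^2 K_E t\/ \\hbar^2 } \\right|^2 = \\exp (-2\\sigma^2 K_E t), \\end{equation}
which is precisely the FGR decay of Eq.~(\\ref{FGR}); a non-Gaussian $P(\\Delta S)$, as found here for the triangle map, breaks this $\\sigma^2$ scaling.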
\\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-sgt1-tavg-N6.EPS}\n \\caption{ (color online).\n Averaged Fidelity for strong perturbation, from top to bottom, $\\sigma =2,4$ and 10.\n The average is taken over 1000 initial Gaussian wavepackets chosen randomly\n and over time from $t-2$ to $t+2$. $N=2^{17}$.\n The time axis is plotted in the logarithm scale.\n It shows that the long time decay of fidelity is slower than power law decay.\n } \\label{fig-mt-sgt1-tavg}\n \\end{figure}\n\n\n\n \\subsection{Strong perturbation regime}\n\n\n The triangle map has vanishing Lyapunov exponent, hence, its fidelity may not have\n the perturbation-independent decay\n which has been observed at strong perturbation in systems possessing exponential\n instability in the classical limit\n \\cite{JP01,BC02,STB03,WCLP05}.\n To understand fidelity decay in the triangle map, it is helpful to recall results\n about the classical fidelity given in \\cite{c-fid-tri}.\n In the classical triangle map, the classical fidelity decays as\n $M_{cl}(t) \\sim \\exp (-c \\epsilon t^{2.5})$\n for initial times when $M_{cl}(t)$ remains close to one,\n and has an exponential decay $\\exp (-c' \\epsilon^{2\/5}t)$ for longer times.\n The interesting feature is that the classical fidelity depends\n on the same scaling variable $\\tau \\equiv \\epsilon t^{2.5}$ in different time regions.\n\n\n In the weak and intermediate perturbation regimes discussed in the previous sections,\n the dependence of fidelity on $\\epsilon $ and $t$ does not take the form of the single\n variable $\\tau$.\n This is not strange, because the classical limit is achieved in the limit\n $\\hbar \\to 0$, which implies $\\sigma \\to \\infty$ for whatever small but fixed $\\epsilon $.\n Therefore, it is the strong perturbation regime in which\n the decaying behavior of fidelity may have some relevance to the classical fidelity.\n Numerical results presented below indeed support this expectation.\n\n\n \\begin{figure}\n \\includegraphics[width=\\columnwidth]{mt-sgt1-tavg-140-1k-N7.EPS}\n \\caption{ (color online).\n The same as in Fig.~\\ref{fig-mt-sgt1-tavg},\n with a different scale for the horizontal axis and\n for the time interval $140 < t < 1000$.\nFor $\\sigma =4$ and 10, $\\log_{10}\\overline M(t)$ form two lines for each $\\sigma$.\nThe three solid lines represent $\\log_{10} M_3(t)$ given by Eq.~(\\ref{loglogt}),\nwith $b=9.6,9.3$, and 8.3 from top to bottom.\n } \\label{fig-mt-sgt1-tavg-140-1k}\n \\end{figure}\n\n\n Figure \\ref{fig-lglgm} shows variation of the averaged fidelity with $\\log_{10}\\epsilon t^{2.5}$,\n with average taken over 1000 initial Gaussian wavepackets chosen randomly.\n The initial decay of the fidelity of $\\sigma =2$ and 4\n are quite close to the classical prediction $\\exp (-c \\epsilon t^{2.5})$.\n For longer times, the fidelity of $\\sigma $ from 2 to 10 (with $\\hbar $ fixed)\n is approximately a function of $\\tau$, the scaling variable predicted in the classical case,\n but, the decaying behavior of fidelity\n is not the same as that of the classical fidelity, i.e., not an exponential decay.\n We found that the dependence of $\\overline M(t)$ on $\\hbar$ does not take the form of $\\tau \/ \\hbar$,\n i.e., $\\overline M(t)$ is not a function of the single variable $(\\tau \/ \\hbar )$.\n\n\n For long times, the fidelity has large fluctuations even after averaging over 1000\n initial Gaussian wavepackets.\n The fluctuations can be much suppressed, when a further average is taken for time $t$ .\n 
Specifically, for each time $t$, we take average over $\\overline M(t')$ for $t'$ from $t-2$ to $t+2$.\n The results are given in Fig.~\\ref{fig-mt-sgt1-tavg},\n which shows that the long time decay of fidelity is slower than power law decay.\n To study the decaying behavior of the slower-than-power-law decay,\n we compare it with the function\n \\begin{equation} M_3(t) = a(\\log_{10} t)^{-b}, \\label{loglogt} \\end{equation}\n with $a$ and $b$ as fitting parameters.\n In the time interval $140 < t < 1000$, the averaged fidelity can be fitted by this function,\n as shown in Fig.~\\ref{fig-mt-sgt1-tavg-140-1k}, where we plot $\\log_{10} M(t)$ versus\n $ \\log_{10} (\\log_{10} t)$.\n Further research work is needed to find analytical explanations for this slower-than-power-law decay\n of fidelity.\n\n\n\\vspace{1cm}\n\n \\section{Conclusions and Discussions}\n\n\n We present numerical results on fidelity decay in the triangle map with linear instability.\n Three regimes of fidelity decay has been found with respect to the perturbation strength:\n weak, intermediate and strong.\n At weak perturbation, the fidelity decays like $\\exp (-c \\sigma^2 t^{1.7})$.\n In the intermediate regime, the fidelity has an exponential decay\n which is approximately $\\exp (-c' \\sigma t)$.\n In the regime of strong perturbation, the fidelity is approximately a function of\n $\\epsilon t^{2.5}$\n and decays slower than power law decay for long times.\n\n\n These results show that the fidelity in the triangle map obeys decaying laws which are\n different from those in systems with strong chaos or with regular motion.\n The difference is closely related to the weak-chaos feature of the classical triangle map.\n In which way and to what extent does weak chaos influence the fidelity decay?\n This is still an open question.\n Indeed, common features of fidelity decay in systems with weak chaos, as well as\n their explanations, should be an interesting topic for future research work.\n In particular, one may note that stretch exponential decay of fidelity has also been observed\n for wave packets which initially reside in the border between chaotic\n and regular regions in mixed-type systems \\cite{WLT02}.\n\n\nACKNOWLEDGMENTS. The author is very grateful to G.~Casati and T.~Prosen\nfor valuable discussions and suggestions.\nThis work is partially supported by Natural Science Foundation of China Grant\nNo.~10775123 and the start-up funding of USTC.\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgments}\n\nThis research is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number DE-SC0021398. This paper was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. 
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.\n\n\n\n\n\\section{Equivalence of Incremental and Terminal Information Gain in sOED} \n\\label{app:incre_terminal}\n\n\\begin{proof}[Proof of \\cref{prop:terminal_incremental}]\nUpon substituting \\cref{eq:terminal1,eq:terminal_info_gN} into \\cref{eq:expected_utility}, the expected utility for a given deterministic policy $\\pi$ using the terminal formulation is\n\\begin{align}\n U_T(\\pi)&=\\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\] \\nonumber\\\\\n &= \\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\]\n \\label{e:app_UT}\n\\end{align}\nwhere recall $I_k=\\{ d_0,y_0,\\dots,d_{k-1},y_{k-1} \\}$ (and $I_0=\\emptyset$).\nSimilarly, substituting \\cref{eq:incremental1,eq:incremental2}, the expected utility for the same policy $\\pi$ using the incremental formulation is\n\\begin{align}\n U_I(\\pi)&=\\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta \\] \\nonumber\\\\\n &=\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta \\].\n \\label{e:app_UI}\n\\end{align}\nIn both cases, \n$\\mathbb{E}_{y_0,\\dots,y_{N-1}|\\pi,x_0}$ can be equivalently replaced by $\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}$ since\n\\begin{align*}\n \\mathbb{E}_{I_1,\\dots,I_N |\\pi,x_0} \\[\\cdots\\] &= \\mathbb{E}_{d_0,y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1} | \\pi,x_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{d_0|\\pi} \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,d_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,\\mu_0(x_0)} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,d_1,y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{d_1|\\pi,x_0,y_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,d_1} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,\\mu_1(x_1)} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\mathbb{E}_{d_2,\\dots,d_{N-1},y_{N-1}|\\pi,x_0,y_0,y_1} \\[\\cdots\\] \\\\\n & \\qquad\\vdots \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\cdots \\mathbb{E}_{y_{N-1}|\\pi,x_0,y_0,y_1,\\dots,y_{N-2},\\mu_{N-1}(x_{N-1})} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0|\\pi,x_0} \\mathbb{E}_{y_1|\\pi,x_0,y_0} \\cdots \\mathbb{E}_{y_{N-1}|\\pi,x_0,y_0,y_1,\\dots,y_{N-2}} \\[\\cdots\\] \\\\\n &= \\mathbb{E}_{y_0,\\dots,y_{N-1}|\\pi,x_0} \\[\\cdots\\],\n\\end{align*}\nwhere the third equality is due to the deterministic policy (Dirac delta function) $d_0=\\mu_0(x_0)$, the fourth equality is due to \n$\\mu_0(x_0)$ being known if $\\pi$ and $x_0$ are given. The seventh equality is due to $\\mu_1(x_1)$ being known if $\\pi$ and $x_1$ are given, and $x_1$ is known if $x_0$, $d_0=\\mu_0(x_0)$ and $y_0$ are given, and $\\mu_0(x_0)$ is known if $\\pi$ and $x_0$ are given, so overall $\\mu_1(x_1)$ is known if $\\pi$, $x_0$ and $y_0$ are given.\nThe eighth to second-to-last equalities all apply the same reasoning recursively. 
The last equality brings the expression back to a conditional joint expectation. \n\nTaking the difference between \\cref{e:app_UT} and \\cref{e:app_UI}, we obtain\n\\begin{align*}\n &U_I(\\pi) - U_T(\\pi)\\\\\n &=\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[\\sum_{k=1}^N \\int_{\\Theta} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}}\\,d\\theta - \\int_{\\Theta} p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}}\\,d\\theta \\]\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[ \\sum_{k=1}^N p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} - p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}} \\]\\, d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_N|\\pi,x_0}\\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\int_{I_N} p(I_N|I_{N-1},\\pi) \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_N)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,dI_N\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + \\int_{I_N} p(\\theta,I_N|I_{N-1},\\pi)\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}}\\,dI_N \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-1} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-1})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-1}|\\pi,x_0} \\[ \\sum_{k=1}^{N-2} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-2})}} \\]\\,d\\theta\\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1,\\dots,I_{N-2}|\\pi,x_0} \\[ \\sum_{k=1}^{N-3} p(\\theta|I_k)\\ln{\\frac{p(\\theta|I_k)}{p(\\theta|I_{k-1})}} + p(\\theta|I_{N-2})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_{N-3})}} \\]\\,d\\theta\\\\\n &\\qquad \\vdots \\\\\n &=\\int_{\\Theta}\\mathbb{E}_{I_1|\\pi,x_0} \\[ p(\\theta|I_{1})\\ln{\\frac{p(\\theta|I_0)}{p(\\theta|I_0)}} \\]\\,d\\theta\\\\\n &=0,\n\\end{align*}\n\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\where the third equality takes the last term from the sigma-summation and combines it with the last term, the fourth equality expands the expectation and uses $p(I_N|I_1,\\ldots,I_{N-1},\\pi) = p(I_N|I_{N-1},\\pi)$, the fifth equality makes use of $p(\\theta|I_N)=p(\\theta|I_N,\\pi)$, and the seventh to second-to-last equalities repeat the same procedures recursively. \nHence, $U_T(\\pi)=U_I(\\pi)$.\n\\end{proof}\n\n\\section{Policy Gradient Expression}\n\\label{app:pg_derive}\n\nOur proof for \\cref{thm:PG} follows the proof given by \\cite{silver2014deterministic} for a general infinite-horizon MDP. 
\nBefore presenting our proof, we first introduce a shorthand notation for writing the state transition probability:\n\\begin{align}\np(x_k \\rightarrow x_{k+1}|\\pi_w)=p(x_{k+1}|x_k,\\mu_{k,w}(x_k)).\n\\end{align}\nWhen taking an expectation over consecutive state transitions, we further use the simplifying notation\n\\begin{align}\n&\\int_{x_{k+1}}p(x_k \\rightarrow x_{k+1}|\\pi_w) \\int_{x_{k+2}} p(x_{k+1} \\rightarrow x_{k+2}|\\pi_w) \\nonumber\\\\\n&\\qquad \\cdots \\int_{x_{k+m}} p(x_{k+(m-1)} \\rightarrow x_{k+m}|\\pi_w) \\[\\cdots\\] \\,dx_{k+1} \\, dx_{k+2} \\cdots \\, dx_{k+m} \\nonumber\\\\ \n&= \\int_{x_{k+m}} p(x_k \\rightarrow x_{k+m}|\\pi_w) \\[\\cdots\\] \\, dx_{k+m}\n\\\\ \n&= \\mathbb{E}_{x_{k+m} | \\pi_w, x_k} \\[\\cdots\\].\n\\end{align}\n\nTo avoid notation congestion, below we will also omit the subscript on $w$ and shorten $\\mu_{k,w_k}(x_k)$ to $\\mu_{k,w}(x_k)$, with the understanding that $w$ takes the same subscript as the $\\mu$ function. \n\n\\begin{proof}[Proof of \\cref{thm:PG}]\n\nWe begin by recognizing that the gradient of expected utility in \\cref{eq:expected_utility_w} can be written using the V-function:\n\\begin{align}\n \\nabla_w U(w) = \\nabla_w V^{\\pi_w}_0(x_0).\\label{e:gradU_derive}\n\\end{align}\nThe goal is then to derive the gradient expression for the V-functions. \n\nWe apply the definitions and recursive relations for the V- and Q-functions, and obtain a recursive relationship for the gradient of V-function:\n\\begin{align}\n \\nabla_w V^{\\pi_w}_k(x_k) \n &= \\nabla_w Q^{\\pi_w}_k(x_k,\\mu_{k,w}(x_k)) \n \\nonumber\\\\\n &= \\nabla_w \\Bigg[ \\int_{y_k} p(y_k|x_k,\\mu_{k,w}(x_k))g_k(x_k,\\mu_{k,w}(x_k),y_k)\\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\Bigg] \\nonumber\\\\\n &= \\nabla_w \\int_{y_k} p(y_k|x_k,\\mu_{k,w}(x_k))g_k(x_k,\\mu_{k,w}(x_k),y_k)\\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\nabla_w \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\nonumber\\\\\n &= \\int_{y_k} \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} \\[ p(y_k|x_k,d_k) g_k(x_k,d_k,y_k) \\]\\Big|_{d_k=\\mu_{k,w}(x_k)} \\,dy_k \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} \\Big[ p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \n \\nonumber\\\\ \n &\\qquad\\qquad +\\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} p(x_{k+1}|x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} V^{\\pi_w}_{k+1}(x_{k+1}) \\Big] \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} \\Bigg[ \\int_{y_k} p(y_k|x_k,d_k) g_k(x_k,d_k,y_k) \\,dy_k \n \\nonumber\\\\\n &\\qquad\\qquad\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,d_k)V^{\\pi_w}_{k+1}(x_{k+1})dx_{k+1} \\Bigg]\\Bigg\\vert_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_{k+1}|x_k,\\mu_{k,w}(x_k)) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_{k}(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \n \\label{e:gradV_recursive}\\\\\n &\\qquad\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+1}(x_{k+1}) \\,dx_{k+1}. 
\\nonumber\n\\end{align}\nApplying the recursive formula \\cref{e:gradV_recursive} to itself repeatedly and expanding out the overall expression,\nwe obtain\n\\begin{align}\n &\\nabla_w V^{\\pi_w}_k(x_k) \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\int_{x_{k+2}} p(x_{k+1} \\rightarrow x_{k+2}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+2}(x_{k+2}) \\,dx_{k+2} \\,dx_{k+1} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+2}} p(x_{k} \\rightarrow x_{k+2}|\\pi_w) \\nabla_w V^{\\pi_w}_{k+2}(x_{k+2}) \\,dx_{k+2} \\nonumber\\\\\n &= \\nabla_w \\mu_{k,w}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w}(x_k)} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+1}} p(x_k \\rightarrow x_{k+1}|\\pi_w) \\nabla_w \\mu_{k+1,w}(x_{k+1}) \\nabla_{d_{k+1}} Q^{\\pi_w}_{k+1}(x_{k+1},d_{k+1})\\Big|_{d_{k+1}=\\mu_{k+1,w}(x_{k+1})} \\,dx_{k+1} \\nonumber\\\\\n &\\qquad + \\int_{x_{k+2}} p(x_k \\rightarrow x_{k+2}|\\pi_w) \\nabla_w \\mu_{k+2,w}(x_{k+2}) \\nabla_{d_{k+2}} Q^{\\pi_w}_{k+2}(x_{k+2},d_{k+2})\\Big|_{d_{k+2}=\\mu_{k+2,w}(x_{k+2})} \\,dx_{k+2}\\nonumber\\\\\n &\\hspace{2.5em}\\vdots \\nonumber\\\\\n &\\qquad + \\int_{x_{N}} p(x_{k} \\rightarrow x_{N}|\\pi_w) \\nabla_w V^{\\pi_w}_{N}(x_{N}) \\,dx_{N} \\nonumber\\\\\n &= \\sum_{l=k}^{N-1} \\int_{x_l} p(x_k \\rightarrow x_l|\\pi_w) \\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)} \\,dx_l\\nonumber\\\\\n &= \\sum_{l=k}^{N-1} \\mathbb{E}_{x_l| \\pi_w, x_k} \\[\\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)}\\] \\,dx_l,\\label{e:gradV_final}\n\\end{align}\nwhere for the second-to-last equality, we absorb the first term into the sigma-notation by using\n\\begin{align*}\n& \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\nonumber\\\\\n& \\qquad = \\int_{x_{k}} p(x_k | x_{k},\\mu_{k,w}(x_k)) \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\,dx_{k}\n\\nonumber\\\\\n& \\qquad = \\int_{x_{k}} p(x_k \\rightarrow x_{k}|\\pi_w) \\nabla_w \\mu_{{k},w}(x_{k}) \\nabla_{d_{k}} Q^{\\pi_w}_{k}(x_{k},d_{k})\\Big|_{d_{k}=\\mu_{k,w}(x_{k})} \\,dx_{k},\n\\end{align*}\nand we eliminate the last term in the summation since\n$\\nabla_w V^{\\pi_w}_{N}(x_{N})=\\nabla_w g_{N}(x_{N})=0$.\n\nAt last, substituting \\cref{e:gradV_final} into \n\\cref{e:gradU_derive}, we obtain the policy gradient expression:\n\\begin{align}\n \\nabla_w U(w) &= \\nabla_w V^{\\pi_w}_0(x_0) \\nonumber\\\\\n &= \\sum_{l=0}^{N-1} \\mathbb{E}_{x_l|\\pi_w,x_0} \\[ \\nabla_w \\mu_{l,w}(x_l) \\nabla_{d_l} Q^{\\pi_w}_l(x_l,d_l)\\Big|_{d_l=\\mu_{l,w}(x_l)} \\]. 
\\nonumber\n \\end{align}\nRenaming the iterator from $l$ to $k$ arrives at \\cref{eq:pg_theorem} in \\cref{thm:PG}, completing the proof.\n\\end{proof}\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\n\nThis paper presents a mathematical framework and computational methods to optimally design a finite number of sequential experiments (sOED); the code is available at \\url{https:\/\/github.com\/wgshen\/sOED}. \nWe formulate sOED as a finite-horizon POMDP. \nThis sOED form is provably optimal, incorporates both elements of feedback and lookahead, and generalizes the suboptimal batch (static) and greedy (myopic) design strategies. \nWe further structure the sOED problem in a fully Bayesian manner and with information-theoretic rewards (utilities), and prove the equivalence of incremental and terminal information gain setups. In particular, sOED can accommodate expensive nonlinear forward models with general non-Gaussian posteriors of continuous random variables. \n\n\n\n\nWe then introduce numerical methods for solving the sOED problem, which entails finding the optimal policy that maximizes the expected total reward.\nAt the core of our approach is PG, an actor-critic RL technique that parameterizes and learns both the policy and value functions in order to extract the gradient with respect to the policy parameters.\nWe derive and prove the PG expression for finite-horizon sOED, and propose an MC estimator. \nAccessing derivative information enables the use of gradient-based optimization algorithms to achieve efficient policy search. \nSpecifically, we parameterize the policy and value functions as DNNs, and detail architecture choices that accommodate a nonparametric representation of the Bayesian posterior belief states. Further combined with a terminal information gain formulation, the Bayesian inference becomes embedded in the design sequence, allowing us to sidestep the need for explicitly and numerically computing the Bayesian posteriors at intermediate experiments.\n\n\n\n\nWe apply the overall PG-sOED method to two different examples.\nThe first is a linear-Gaussian problem that offers a closed form solution, serving as a benchmark. We validate the PG-sOED policy against the analytic optimal policy, and observe orders-of-magnitude speedups of PG-sOED over an ADP-sOED baseline.\nThe second entails a problem of contaminant source inversion in a convection-diffusion field. Through multiple sub-cases, we illustrate the advantages of PG-sOED over greedy and batch designs, and provide insights to the value of feedback and lookahead in the context of time-dependent convection-diffusion processes. \nThis demonstration also illustrates the ability of PG-sOED to accommodate expensive forward models with nonlinear physics and dynamics. \n\n\nThe main limitation of the current PG-sOED method is its inability \nto handle high-dimensional settings. While the nonparametric representation sidesteps the need to compute intermediate posteriors, Bayesian inference is ultimately required in order to estimate the KL divergence in the terminal reward.\nThus, an important direction of future work is to improve scalability for high-dimensional inference, to go beyond the current gridding method. 
This may be approached by employing more general and approximate inference methods such as MCMC, variational inference, approximate Bayesian computation, and transport maps, perhaps in combination with dimension-reduction techniques.\n\nAnother fruitful area to explore is within advanced RL techniques\n(e.g., \\cite{mnih2015human,lillicrap2015continuous,mnih2013playing, \nschulman2017proximal}).\nFor example, replay buffer stores the experienced episodes, and training data can be sampled from this buffer to reduce sampling costs, control correlation among samples, and reach better convergence performance. \nOff-policy algorithms track two version of the policy network and Q-network---a behavior network for determining actions and a target network for learning---which have demonstrated improved sample efficiency.\nParameters of the policy and Q-networks may also be shared due to their similar features.\nFinally, adopting new utility measures, such as those reflecting goal-orientedness, robustness, and risk, would be of great interest to better capture the value of experiments and data in real-life and practical settings. \n\n\n\n\\section{Problem Formulation}\n\\label{sec:formulation}\n\n\n\\subsection{The Bayesian Paradigm}\n\nWe consider designing a finite\\footnote{In experimental design, the experiments are generally expensive and limited in number. Finite and small values of $N$ are therefore of interest. \nThis is in contrast to RL that often deals with infinite horizon.} number of $N$ experiments, indexed by integers $k=0,1,\\ldots,N-1$.\nWhile the decision of how many experiments to perform (i.e. choice of $N$) is important, it is \nnot considered\nin this paper; instead, we assume $N$ is given and fixed.\nFurthermore, let \n$\\theta\\in \\mathbb{R}^{N_{\\theta}}$ denote the unknown model parameter we seek to \nlearn\nfrom the experiments, $d_k \\in \\mathcal{D}_k\\subseteq \\mathbb{R}^{N_d}$ the experimental design variable for the $k$th experiment (e.g., \nexperiment conditions),\n$y_k \\in \\mathbb{R}^{N_y}$ the noisy observation from the $k$th experiment (i.e. experiment measurements), and $N_{\\theta}$, $N_{d}$, and $N_{y}$ respectively the dimensions of parameter, design, and observation spaces. We further consider continuous $\\theta$, $d_k$, and $y_k$, although discrete or mixed settings can be accommodated as well.\nFor simplicity,\nwe also let $N_d$ and $N_y$ be constant across all experiments, but this is not a requirement.\n\nA Bayesian approach treats $\\theta$ as a random variable. \nAfter performing the $k$th \nexperiment, its conditional probability density function (PDF) is described by Bayes' rule:\n\\begin{align}\n \\label{eq:bayes_rule}\n p(\\theta|d_k,y_k,I_k) = \\frac{p(y_k|\\theta,d_k,I_k)p(\\theta|I_k)}{p(y_k|d_k,I_k)}\n\\end{align}\nwhere $I_k=\\{ d_0,y_0,\\dots,d_{k-1},y_{k-1} \\}$ (and $I_0=\\emptyset$) is the information set collecting the design and observation records from all experiments prior to the $k$th experiment, $p(\\theta|I_k)$ is the prior PDF for the $k$th experiment,\n$p(y_k|\\theta,d_k,I_k)$ is the likelihood function,\n$p(y_k|d_k,I_k)$ is the model evidence (or marginal likelihood, which is constant with respect to $\\theta$),\nand $p(\\theta|d_k,y_k,I_k)$ is the posterior PDF. 
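As a simple illustration of \\cref{eq:bayes_rule} (a hypothetical scalar example, not a model used elsewhere in this paper), suppose $\\theta\\in\\mathbb{R}$, the prior for the $k$th experiment is Gaussian, $p(\\theta|I_k)=\\mathcal{N}(\\theta; m_k, s_k^2)$, and the likelihood is linear-Gaussian, $p(y_k|\\theta,d_k,I_k)=\\mathcal{N}(y_k; \\theta d_k, \\sigma^2)$, with $\\sigma^2$ a known noise variance. Then the posterior is again Gaussian,
\\begin{align}
p(\\theta|d_k,y_k,I_k) = \\mathcal{N}\\left(\\theta; m_{k+1}, s_{k+1}^2\\right), \\qquad
s_{k+1}^2 = \\left(\\frac{1}{s_k^2}+\\frac{d_k^2}{\\sigma^2}\\right)^{-1}, \\qquad
m_{k+1} = s_{k+1}^2 \\left(\\frac{m_k}{s_k^2}+\\frac{d_k y_k}{\\sigma^2}\\right),
\\end{align}
so in this conjugate case the belief state is summarized exactly by $(m_k,s_k^2)$; in general the posterior is non-Gaussian and must be represented more flexibly, as discussed below.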
The prior $p(\\theta|I_k)$ is then a representation of the uncertainty about $\\theta$ before \nthe $k$th experiment, and the posterior describes the updated uncertainty about $\\theta$ after having observed the outcome from the $k$th experiment.\nIn \\cref{eq:bayes_rule}, we also simplify the prior $p(\\theta|d_k,I_k)=p(\\theta|I_{k})$, invoking the reasonable assumption that knowing only the design for the $k$th experiment (but without knowing its outcome) would not affect the prior. \nThe likelihood function carries the relation between the hidden parameter $\\theta$ and the observable $y_k$, through a forward model $G_k$ that governs the underlying process for the $k$th experiment (e.g., constrained via a system of partial differential equations (PDEs)). For example, a common likelihood form is \n\\begin{align}\n y_k = G_k(\\theta, d_k; I_k) + \\epsilon_k,\n\\end{align}\nwhere $\\epsilon_k$ is a Gaussian random variable that describes the discrepancy between model prediction $G_k$ and observation $y_k$ due to, for instance, measurement noise. The inclusion of $I_k$ in $G_k$ signifies that model behavior may be affected by previous experiments. Each evaluation of the likelihood $p(y_k|\\theta,d_k,I_k) = p_{\\epsilon}(y_k-G_k(\\theta,d_k; I_k))$ thus involves a forward model solve, typically the most expensive part of the computation.\nLastly, \nthe posterior $p(\\theta|d_k,y_k,I_k)=p(\\theta|I_{k+1})$ becomes the prior for the $(k+1)$th experiment via the same form of \\cref{eq:bayes_rule}. Hence, Bayes' rule can be consistently and recursively applied for a sequence of multiple experiments. \n\n\\subsection{Sequential Optimal Experimental Design}\n\\label{sec:math_formulation}\n\nWe now present a general framework for sOED, posed as a POMDP.\nAn overview flowchart for sOED is presented in \\cref{fig:process} to accompany the definitions below.\n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.95\\linewidth]{Figures\/process.jpg}\n \\caption{Flowchart of the process involved in an $N$-experiment sOED.}\n \\label{fig:process}\n\\end{figure}\n\n\\textbf{State.} We introduce the state variable $x_k=[x_{k,b},x_{k,p}] \\in \\mathcal{X}_k$\nto be the state prior to designing and performing the $k$th experiment. \nHence, \n$x_0,\\ldots,x_{N-1}$ denote the respective states prior to each of the $N$ experiments, and $x_1,\\ldots,x_N$ denote the respective states after each of the $N$ experiments.\nThe state is an entity that summarizes past information needed for making experimental design decisions in the future. \nIt is very general and can contain different quantities deemed to be decision-relevant. \nIn our case, the state consists of a belief state $x_{k,b}$ reflecting our state of uncertainty about the hidden $\\theta$, and a physical state $x_{k,p}$ carrying other non-random variables pertinent to the design problem. \nSince $\\theta$ is not observable and can only be inferred from noisy and indirect observations $y_k$ through Bayes' rule in \\cref{eq:bayes_rule}, this setup can be viewed as a POMDP for $\\theta$ (or an MDP for $x_k$).\n\nConceptually, a \\emph{realization} of the belief state manifests as \nthe continuous posterior (conditional) random variable \n$(x_{k,b} = x'_{k,b}) = (\\theta|I_k=I_k')$, \nwhere the prime denotes realization. 
Such a random variable can be \nportrayed by, for example, its PDF, cumulative distribution function, or characteristic function\\footnote{\nIt is possible for $\\theta|I_k$'s with different $I_k'$'s to have the same PDF (or distribution or characteristic function), for example simply by exchanging the experiments. Hence, the mappings from $I_k$ to these portrayals (PDF, distribution, characteristic functions) are non-injective. This may be problematic when considering transition probabilities of the belief state, but is avoided if we keep to our root definition of belief state based on $I_k$, which remains unique.}.\nAttempting to directly represent these infinite-dimensional quantities in practice would require some finite-dimensional approximation or discretization.\nAlternatively, one can adopt a nonparametric approach and track \n$I_k$\n(from a given initial $x_0$),\nwhich then yields a \nrepresentation of $x_{k}$ (both $x_{k,b}$ and $x_{k,p}$) without any approximation\\footnote{$I_k$ collects the complete history of experiments and their observations, and is therefore a sufficient statistic for $x_k$ by definition. Hence, if $I_k$ is known, then the full state $x_k$ is equivalently represented. \nAll of these are conditioned on a given initial $x_0$ (which includes the prior on $\\theta$), but for simplicity we will omit this conditioning when writing the PDFs in this paper, with the understanding that it is always implied. }\nbut its dimension grows with $k$. However, the dimension is always bounded since the maximum number of experiments considered is finite (i.e. $k < N$).\nIn any case, the belief state space is uncountably infinite since $\\theta$ is a continuous random variable (i.e. the set of possible posteriors that can be realized is uncountably infinite).\nWe will further detail our numerical representation of the belief state in \\cref{sec:policy_net} and \\cref{sec:numerical_belief_state}.\n\n\\textbf{Design (action) and policy.} Sequential experimental design involves building policies mapping from the state space to the design space, $\\pi = \\{\\mu_k : \\mathcal{X}_k \\mapsto \\mathcal{D}_k, k=0,\\ldots,N-1\n\\}$, such that the design for the $k$th experiment is determined by the state via $d_k=\\mu_k(x_k)$. Thus, sequential design is inherently adaptive, computing designs based on the current state, which depends on the previous experiments and their outcomes.\nWe focus on deterministic policies in this study, where policy functions $\\mu_k$ produce deterministic outputs. \n\n\\textbf{System dynamics (transition function).} The system dynamics, denoted by $x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k)$, describes the transition from state $x_k$ to state $x_{k+1}$ after carrying out the $k$th experiment with design $d_k$ and observation $y_k$. For the belief state, the prior $x_{k,b}$ can be updated to the posterior $x_{k+1,b}$ via Bayes' rule in \\cref{eq:bayes_rule}. The physical state, if present, evolves based on the relevant physical process.\nWhile the system dynamics described in \\cref{eq:bayes_rule} appears deterministic given a specific realization of $d_k$ and $y_k$, it is a stochastic transition since the observation $y_k$ is random. In particular, there exists an underlying transition probability\n\\begin{align}\np(x_{k+1}|x_{k},d_{k})=p(y_k|x_k,d_k)=p(I_{k+1}|d_{k},I_{k}) =p(y_{k}|d_{k},I_{k}) = \n\\int_{\\Theta} p(y_k|\\theta,d_k, I_k)p(\\theta|I_{k})\\,d\\theta,\n\\label{eq:transition}\n\\end{align}\nwhere we \nsimplify the prior with $p(\\theta|d_k,I_k)=p(\\theta|I_{k})$. 
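\n\nFor illustration, drawing one realization of the next state according to the last equality in \\cref{eq:transition} may be sketched as follows; the prior sampler, forward model, and noise scale are placeholders and not part of our formulation.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef sample_prior():          # placeholder draw of theta ~ p(theta | I_k)\n    return rng.normal(0.0, 1.0)\n\ndef G(theta, d):             # placeholder forward model\n    return theta * d\n\nsigma = 0.1\nI_k = []                     # history {d_0, y_0, ..., d_{k-1}, y_{k-1}}\nd_k = 0.5                    # design chosen by the policy\ntheta = sample_prior()\ny_k = G(theta, d_k) + sigma * rng.normal()   # y_k ~ p(y_k | theta, d_k, I_k)\nI_kp1 = I_k + [(d_k, y_k)]   # nonparametric representation of x_{k+1}\n\\end{verbatim}\n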
\nThis transition probability is intractable and does not have a closed form. However, we are able to generate samples of the next state by sampling from the prior and likelihood, as suggested by the last equality in \\cref{eq:transition}. Hence, we have a model-based (via a sampling model) setup.\n\n\n\\textbf{Utility (reward).} We define $g_k(x_k,d_k,y_k) \\in \\mathbb{R}$ to be the immediate reward from performing an experiment. Most generally, this quantity can depend on the state, design, and observation. For example, it may simply be the (negative) cost of the $k$th experiment.\nSimilarly, we define a terminal reward $g_N(x_N) \\in \\mathbb{R}$ containing any additional reward measure that reflects the benefit of reaching a certain final state, and that can only be computed after the entire set of experiments is completed. We will provide a specific example of reward functions pertaining to information measures in \\cref{sec:information_gain}.\n\n\n\\textbf{sOED problem statement.} The sOED problem seeks the policy that solves the following optimization problem: \nfrom a given initial state $x_0$, \\begin{align}\n \\label{eq:optimal_policy}\n \\pi^\\ast = \\operatornamewithlimits{arg\\,max}_{\\pi=\\{\\mu_0,\\ldots,\\mu_{N-1}\\}}& \\qquad U(\\pi)\\\\\n \\text{s.t.}& \n \\qquad d_k = \\mu_k(x_k) \\in \\mathcal{D}_k, \\nonumber\\\\\n &\\qquad x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k),\n \\hspace{3em} \\text{for}\\quad k=0,\\dots,N-1, \\nonumber\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:expected_utility}\n U(\\pi) = \\mathbb{E}_{y_0,...,y_{N-1}|\\pi,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\]\n\\end{align}\nis the expected total utility functional. \nWhile here $x_0$ is fixed, this formulation can easily be adjusted to accommodate stochastic $x_0$ as well, by including $x_0$ as a part of $I_k$ and taking another expectation over $x_0$ in \\cref{eq:expected_utility}. \n\nOverall, our sOED problem corresponds to a model-based planning problem in RL. It is challenging for several reasons: \n\\begin{itemize}\n\\item finite horizon, where the policy functions $\\mu_k$ are different for each $k$ and need to be tracked and solved for separately; \n\\item partially and indirectly observed hidden $\\theta$ whose belief state space is uncountably infinite and also infinite-dimensional or nonparametric; \n\\item deterministic policy;\n\\item continuous design (action) and observation spaces; \n\\item transition probability intractable to compute, so that transitions can only be sampled;\n\\item each belief state transition involves a Bayesian inference, \nrequiring many \nforward model evaluations; \n\\item reward functions are information measures for continuous random variables (discussed below), which are difficult to estimate.\n\\end{itemize}\n\n\\subsection{Information Measures as Experimental Design Rewards}\n\\label{sec:information_gain}\n\nWe wish to adopt reward functions \nthat reflect the degree of success of the experiments, \nnot only\nthe experiment costs. Determining an appropriate quantity depends on the experimental goals, e.g., to achieve inference, prediction, model discrimination, etc. One popular choice corresponding to the goal of parameter inference is to maximize a measure of information gained on $\\theta$. 
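\n\nOne concrete possibility, formalized in the remainder of this subsection, is the Kullback-Leibler (KL) divergence from the prior to the posterior. Purely as an illustration, a grid-based evaluation of this quantity (assuming both densities are available on a common grid, as in the earlier sketch) could read:\n\\begin{verbatim}\nimport numpy as np\n\ndef information_gain(posterior, prior, theta_grid):\n    # D_KL(posterior || prior) on a 1D grid; where the posterior vanishes\n    # the integrand is zero, so the ratio is replaced by 1 to avoid log(0)\n    ratio = np.where(posterior > 0, posterior / prior, 1.0)\n    return np.trapz(posterior * np.log(ratio), theta_grid)\n\\end{verbatim}\n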
\nLindley's seminal paper~\\cite{Lindley1956} proposes to use the mutual information between the parameter and observation as the expected utility, and Ginebra~\\cite{Ginebra2007} provides more general criteria for a proper measure of the information gained from an experiment. \nIn the former, the mutual information is equal to the expected KL\ndivergence from the prior to the posterior. The KL divergence provides an intuitive interpretation as it quantifies the discrepancy between the prior and the posterior distributions, and thus a larger divergence corresponds to a greater degree of belief update---and hence information gain---resulting from the experiment and its observation.\n\nIn this paper, we follow Lindley's approach and demonstrate the use of KL divergence as sOED rewards,\nand present two reasonable sequential design formulations that are in fact equivalent. The first, call it the \\emph{terminal formulation}, involves lumping the information gain from all $N$ experiments into the terminal reward (for clarity, we omit all other reward contributions common to the two formulations, although it would be trivial to show the equivalence for those cases too): \n\\begin{align}\n g_k(x_k, d_k, y_k) &= 0, \\qquad k=0,\\ldots,N-1 \\label{eq:terminal1}\\\\\n g_N(x_N) &= D_{\\mathrm{KL}}\\( p(\\cdot|I_N)\\,||\\,p(\\cdot|I_0) \\) \\nonumber\\\\ &= \\int_{\\Theta} p(\\theta|I_N) \\ln\\[\\frac{p(\\theta|I_N)}{p(\\theta|I_0)}\\]\\,d\\theta.\\label{eq:terminal_info_gN}\n\\end{align}\nThe second, call it the \\emph{incremental formulation}, entails the use of the incremental information gain from each experiment in its respective immediate reward:\n\\begin{align}\n g_k(x_k, d_k, y_k) &= D_{\\mathrm{KL}}\\( p(\\cdot|I_{k+1})\\,||\\,p(\\cdot|I_k) \\) \\nonumber\\\\&= \\int_{\\Theta} p(\\theta|I_{k+1}) \\ln\\[\\frac{p(\\theta|I_{k+1})}{p(\\theta|I_k)}\\]\\,d\\theta, \\qquad k=0,\\ldots,N-1\\label{eq:incremental1}\\\\\n g_N(x_N) &= 0. \\label{eq:incremental2}\n\\end{align}\n\n\\begin{theorem} \n\\label{prop:terminal_incremental}\nLet $U_T(\\pi)$ be the sOED expected utility defined in \\cref{eq:expected_utility} subject to the constraints in \\cref{eq:optimal_policy} for a given policy $\\pi$ while using the terminal formulation \\cref{eq:terminal1,eq:terminal_info_gN}. Let $U_I(\\pi)$ be the same except using the incremental formulation \\cref{eq:incremental1,eq:incremental2}. Then $U_T(\\pi)=U_I(\\pi)$. \n\\end{theorem}\n\nA proof is provided in \\cref{app:incre_terminal}.\nAs a result, the two formulations correspond to the same sOED problem. \n\n\\subsection{Generalization of Suboptimal Experimental Design Strategies}\n\\label{sec:subopt_design}\n\n\nWe also make the connection between sOED and \nthe commonly used batch design and greedy sequential design.\nWe illustrate below that both batch and greedy designs are,\nin general, suboptimal with respect to the expected utility \\cref{eq:expected_utility}. Thus, sOED generalizes these design strategies.\n\nBatch OED designs all $N$ experiments together prior to performing any of those experiments. Consequently, it is non-adaptive, and cannot make use of new information acquired from any of the $N$ experiments to help adjust the design of other experiments. 
Mathematically, batch design seeks static design values (instead of a policy) over the joint design space $\\mathcal{D}:=\\mathcal{D}_0 \\times \\mathcal{D}_1 \\times \\cdots \\times \\mathcal{D}_{N-1}$:\n\\begin{align}\n (d_0^{\\mathrm{ba}},\\dots,d_{N-1}^{\\mathrm{ba}}) = \\operatornamewithlimits{arg\\,max}_{(d_0,\\dots,d_{N-1}) \\in \\mathcal{D}} \\mathbb{E}_{y_0,\\dots,y_{N-1}|d_0,\\dots,d_{N-1},x_0}\\[ \\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k) + g_N(x_N) \\],\\label{eq:batch}\n\\end{align}\nsubject to the system dynamics. In other words, the design $d_k$ is chosen independent of $x_k$ (for $k > 0$).\nThe suboptimality of batch design becomes clear once one realizes that \\cref{eq:batch} is equivalent to the sOED formulation in \\cref{eq:optimal_policy} with all $\\mu_k$ restricted to constant functions. Thus, $U(\\pi^{\\ast}) \\geq U(\\pi^{\\mathrm{ba}}=d^{\\mathrm{ba}})$. \n\nGreedy design is also a type of sequential experimental design and produces a policy. It optimizes only for the immediate reward at each experiment:\n\\begin{align}\n\\mu_k^{\\mathrm{gr}} = \\operatornamewithlimits{arg\\,max}_{\\mu_k} \\mathbb{E}_{y_k|x_k,\\mu_k(x_k)}\\[ g_k(x_k,\\mu_k(x_k),y_k) \\], \\qquad k=0,\\dots,N-1,\\label{eq:greedy}\n\\end{align}\nwithout being subject to the system dynamics, since the policy functions $\\mu_k^{\\mathrm{gr}}$ are decoupled. $U(\\pi^{\\ast}) \\geq U(\\pi^{\\mathrm{gr}})$ follows trivially.\nAs a more specific example, when using the information measure utilities described in \\cref{sec:information_gain}, greedy design would only make sense under the incremental formulation (\\cref{eq:incremental1,eq:incremental2}).\nThen, together with \\cref{prop:terminal_incremental}, we have \n$U_{T}(\\pi^{\\ast})=U_{I}(\\pi^{\\ast}) \\geq U_{I}(\\pi^{\\mathrm{gr}})$.\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\n\n\n\n\nExperiments are indispensable for scientific research. Carefully designed experiments can provide substantial savings for these often expensive data-acquisition opportunities. However, designs based on heuristics are usually not optimal, especially for complex systems with high dimensionality, nonlinear responses and dynamics, multiphysics, and uncertain and noisy environments. \nOptimal experimental design (OED), by leveraging a\ncriterion based on a forward model that simulates the experiment process, systematically quantifies and maximizes the value of experiments. \n\n\n\n\nOED for linear models \n\\cite{Fedorov1972,Atkinson2007} \nuses criteria based on the information matrix derived from the model, which can be calculated analytically. Different operations on this matrix form the core of the well-known alphabetical designs, such as the $A$- (trace), $D$- (determinant), and $E$-optimal (largest eigenvalue) designs. \nBayesian OED further incorporates the notion of prior and posterior distributions that reflect the uncertainty update as a result of the experiment data \n\\cite{Berger1985, Chaloner1995}.\nIn particular, the Bayesian $D$-optimal criterion generalizes to the nonlinear setting under an information-theoretic perspective \\cite{Lindley1956}, and is equivalent to the expected Kullback\u2013Leibler (KL) divergence from the prior to the posterior. \nHowever, these OED criteria are generally intractable to compute for nonlinear models\nand must be\napproximated~\\cite{Box1959,Ford1989,Chaloner1995,Muller2005,Ryan2016}. 
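\n\nFor a linear-Gaussian model these classical criteria reduce to simple matrix computations; the sketch below, with a made-up design matrix and unit noise variance, is included only to make the criteria concrete and follows one common convention.\n\\begin{verbatim}\nimport numpy as np\n\nF = np.array([[1.0, 0.5],\n              [0.2, 1.0],\n              [0.8, 0.3]])         # placeholder design (sensitivity) matrix\nM = F.T @ F                        # information matrix under unit noise variance\n\nA_val = np.trace(np.linalg.inv(M))   # A: average posterior variance (minimize)\nD_val = np.linalg.slogdet(M)[1]      # D: log-determinant of M (maximize)\nE_val = np.linalg.eigvalsh(M).min()  # E: smallest eigenvalue of M (maximize),\n                                     #    controlling the largest eigenvalue of inv(M)\n\\end{verbatim}\n\n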
With advances in computing power and a need to tackle bigger and more complex systems in engineering and science, there is a growing interest, urgency, and opportunity for computational development \nof nonlinear OED methods~\\cite{Ryan2003,Terejanu2012,Huan2013,Long2015,Weaver2016,Alexanderian2016,Tsilifis2017,Overstall2017,Beck2018,Kleinegesse2019,Foster2019,Wu2020}. \n\n\nWhen designing multiple experiments, two commonly used \napproaches are often suboptimal. The first is \\emph{batch} (or static) design: it rigidly designs all experiments together \\emph{a priori} using the aforementioned linear or nonlinear OED methods, and does not offer any opportunity to adapt when new information becomes available (i.e. no feedback). \nThe second is \\emph{greedy} (or myopic) design \n\\cite{Box1992, Dror2008, Cavagnaro2010, Solonen2012, Drovandi2013, Drovandi2014, Kim2014,Hainy2016,Kleinegesse2021}:\nit plans only for the \\emph{next} experiment, \nupdates with its observation, and repeats the design process. While greedy design has feedback, it lacks consideration for future effects and consequences (i.e. no lookahead). Hence, greedy design does not see the big picture or plan for the future. It is easy to appreciate, even from everyday experience (e.g., driving a car, planning a project), that a lack of feedback (for adaptation) and lookahead (for foresight) can lead to suboptimal decision-making with \nundesirable consequences. \n\n\n\n\n\n\n\nA provably optimal formulation of sequential experimental design---which we refer to as sequential OED (sOED)~\\cite{Muller2007,VonToussaint2011,Huan2015,Huan2016}---needs both elements of feedback and lookahead, and \ngeneralizes the batch and greedy designs. The main features\nof sOED are twofold. First, sOED works with design \\emph{policies} (i.e. functions that can adaptively suggest what experiment to perform depending on the current situation) in contrast to\nstatic design values. Second, sOED always designs for all remaining experiments, thus capturing the effect on the entire future horizon when each design decision is made. \nFormally, the sOED problem can be formulated as a {partially observable Markov decision process} (POMDP). Under this agent-based view, the experimenter (agent) selects the experimental design (action) following a policy, and observes the experiment measurements (observation) in order to maximize the total utility (reward) that depends on the unknown model parameters (hidden state). \nA belief state \ncan be further formed\nbased on the Bayesian posterior that describes the uncertainty \nof the hidden state, thereby turning the POMDP into a \nbelief Markov decision process (MDP) \\cite{littman1995learning}. \n\nThe sOED problem targeted in our paper presents an atypical and challenging POMDP: finite horizon, continuous random variables, uncountably infinite belief state space, deterministic policy, continuous designs and observations, sampling-only transitions that each involve a Bayesian inference, and information measures as rewards. Thus, while there exists an extensive POMDP literature (e.g.,~\\cite{cassandra1994acting, littman1995efficient, cassandra1998survey, kurniawati2016online, igl2018deep}), off-the-shelf methods cannot be directly applied to this sOED problem. 
\nAt the same time, attempts at sOED have been sparse, with examples~\\cite{Carlin1998,Gautier2000,Pronzato2002, Brockwell2003, Christen2003, Murphy2003, Wathen2006} \nfocusing on discrete settings \nwith special problem and solution forms,\nand either not using an information-based criterion or not adopting a Bayesian framework. \nMore recent efforts for Bayesian sOED~\\cite{Huan2015,Huan2016} employ approximate dynamic programming (ADP) and transport maps, and illustrate the advantages of sOED over batch and greedy designs. However, this ADP-sOED method remains computationally expensive.\n\n\n\n\n\n\n\n\n\n\nIn this paper, we create new methods to solve the sOED problem in a computationally efficient manner, by drawing on the state of the art in reinforcement learning (RL) \\cite{watkins1992q, sutton2000policy, szepesvari2010algorithms, mnih2015human, schulman2015trust, silver2016mastering, silver2017mastering, li2017deep, sutton2018reinforcement}.\nRL approaches are often categorized as value-based (learn value functions only)\n\\cite{watkins1992q,mnih2015human,wang2016dueling,van2016deep}, policy-based (learn policy only)\n\\cite{willianms1988toward, williams1992simple}, or actor-critic (learn policy and value functions together) \\cite{konda2000actor, peters2008natural, silver2014deterministic, lillicrap2015continuous}. \nADP-sOED~\\cite{Huan2015,Huan2016} is thus value-based, where the policy is only implicitly expressed via the learnt value functions. Consequently, each policy evaluation involves optimizing the value functions on-the-fly, a costly calculation especially for continuous action spaces. \nBoth policy-based and actor-critic methods are more efficient in this respect. \nActor-critic methods have further been observed to produce lower solution variance and faster convergence \\cite{sutton2018reinforcement}. \n\nWe adopt an actor-critic approach in this work. \nRepresenting and learning the policy explicitly further enables the use of policy gradient (PG) techniques \\cite{sutton2000policy, kakade2001natural, degris2012off, silver2014deterministic, lillicrap2015continuous, schulman2015trust, mnih2016asynchronous, schulman2017proximal, lowe2017multi, liu2017stein, barth2018distributed} that estimate the gradient with respect to policy parameters, and in turn permits the use of gradient-based optimization algorithms.\nInspired by deep deterministic policy gradient (DDPG)~\\cite{lillicrap2015continuous}, we further employ deep neural networks (DNNs) to parameterize and approximate the policy and value functions. The use of DNNs can take advantage of the potentially large number of episode samples generated from the transition simulations, and compute gradients efficiently through back-propagation. \nNevertheless, care needs to be taken to design the DNNs and their hyperparameters in order to \nobtain stable and rapid convergence to a good sOED policy, which we will describe in the paper. \n\nThe main contributions of our paper are as follows.\n\\begin{itemize}\n\\item We formulate the sOED problem as a finite-horizon POMDP under a Bayesian setting for continuous random variables, and illustrate its generalization over the batch and greedy designs.\n\\item We present the PG-based sOED (which we call PG-sOED) algorithm, proving the key gradient expression and proposing its Monte Carlo estimator. 
We further present the DNN architectures for the policy and value functions, and detail the numerical setup of the overall method.\n\\item We demonstrate the speed and optimality advantages of PG-sOED over ADP-sOED, batch, and greedy designs, on a benchmark and a problem of contaminant source inversion in a convection-diffusion field that involves an expensive forward model. \n\\item We make available our PG-sOED code at \\url{https:\/\/github.com\/wgshen\/sOED}. \n\\end{itemize}\n\nThis paper is organized as follows. \\Cref{sec:formulation} introduces the components needed in an sOED problem, culminating in the sOED problem statement. \\Cref{sec:method} describes the details of the entire PG-sOED method.\n\\Cref{sec:results} presents numerical examples, a linear-Gaussian benchmark and a problem of contaminant source inversion in a convection-diffusion field, to validate PG-sOED and demonstrate its advantages over other baselines.\nFinally, \\cref{sec:conclusions} concludes the paper and provides an outlook for future work.\n\n\n\n\n\\section{Policy Gradient for Sequential Optimal Experimental Design}\n\\label{sec:method}\n\nWe approach the sOED problem by directly parameterizing the policy functions and representing them explicitly. We then develop a gradient expression with respect to the policy parameters, so as to enable gradient-based optimization for numerically identifying optimal or near-optimal policies. Such an approach is known as the PG\nmethod (e.g., \\cite{silver2014deterministic, lillicrap2015continuous}).\nIn addition to the policy, we also parameterize and learn the value functions,\nthus arriving at an actor-critic form. \nPG contrasts with previous ADP-sOED efforts~\\cite{Huan2015,Huan2016} that \napproximate only the value functions. In those works, the policy is represented implicitly, and requires solving a (stochastic) optimization problem each time the policy is evaluated. This renders both the offline training and online policy usage computationally expensive. As we will demonstrate, PG sidesteps this requirement.\n\nIn the following, we first derive the exact PG expression in \\cref{ss:PG_exact}. We then present numerical methods in \\cref{ss:PG_numerical} to estimate this exact PG expression. In particular, this requires adopting a parameterization of the policy functions; we will present the use of DNNs to achieve this parameterization. Once the policy parameterization is established, we can then compute the PG with respect to the parameters, and optimize them using a gradient ascent procedure.\n\n\\subsection{Derivation of the Policy Gradient}\n\\label{ss:PG_exact}\n\nThe PG approach to sOED (PG-sOED) involves parameterizing each policy function $\\mu_{k}$ with parameters $w_k$ ($k=0,\\ldots,N-1$), which we denote by the shorthand form $\\mu_{k,w_k}$. In turn, the policy $\\pi$ is parameterized by $w=\\{w_k, \\forall k\\} \\in \\mathbb{R}^{N_w}$ \nand denoted by $\\pi_{w}$, where $N_w$ is the dimension of the overall policy parameter vector. 
The sOED problem statement from \\cref{eq:optimal_policy,eq:expected_utility} then updates to: from a given initial state $x_0$,\n\\begin{align}\n \\label{eq:PG_sOED}\n w^{\\ast} = \\operatornamewithlimits{arg\\,max}_{w}& \\qquad U(w)\\\\\n \\text{s.t.}& \n \\qquad d_k = \\mu_{k,w_k}(x_k) \\in \\mathcal{D}_k, \\nonumber\\\\\n &\\qquad x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k), \n \\hspace{3em} \\text{for}\\quad k=0,\\dots,N-1, \\nonumber\n\\end{align}\nwhere\n\\begin{align}\n \\label{eq:expected_utility_w}\n U(w) = \\mathbb{E}_{y_0,...,y_{N-1}|\\pi_w,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\].\n\\end{align}\nWe now aim to derive the gradient $\\nabla_{w} U(w)$.\n\nBefore presenting the gradient expression, we need to introduce the value functions. \nThe \\emph{state-value function} (or \\emph{V-function}) following policy $\\pi_{w}$ and at the $k$th experiment is\n\\begin{align}\nV_k^{\\pi_{w}}(x_k)&=\\mathbb{E}_{y_k,\\dots,y_{N-1}|\\pi_{w},x_k}\\[\\sum_{t=k}^{N-1} g_t(x_t,\\mu_{t,w_t}(x_t),y_t) + g_N(x_N)\\] \\\\\n &= \\mathbb{E}_{y_k|\\pi_w,x_k} \\[ g_k(x_k,\\mu_{k,w_k}(x_k),y_k) + V^{\\pi_w}_{k+1}(x_{k+1}) \\] \\\\\n V_N^{\\pi_{w}}(x_N) &= g_N(x_N)\n\\end{align}\nfor $k=0,\\ldots,N-1$, where $x_{k+1}=\\mathcal{F}_k(x_k,\\mu_{k,w_k}(x_k),y_k)$.\nThe V-function is the expected cumulative remaining reward starting from a given state $x_k$ and following policy $\\pi_{w}$ for all remaining experiments. \nThe \\emph{action-value function} (or \\emph{Q-function}) following policy $\\pi_{w}$ and at the $k$th experiment is\n\\begin{align}\n\\label{eq:action_bellman}\nQ_k^{\\pi_{w}}(x_k,d_k)&=\\mathbb{E}_{y_k,\\dots,y_{N-1}|\\pi_{w},x_k,d_k}\\[g_k(x_k,d_k,y_k) + \\sum_{t=k+1}^{N-1} g_t(x_t,\\mu_{t,w_t}(x_t),y_t) + g_N(x_N)\\]\n\\\\\n&=\\mathbb{E}_{y_k|x_k,d_k} \\[ g_k(x_k,d_k,y_k) + Q^{\\pi_w}_{k+1}(x_{k+1},\\mu_{k+1,w_{k+1}}(x_{k+1}))\\]\n\\label{eq:action_bellman2}\n\\\\\nQ_{N}^{\\pi_{w}}(x_N,\\cdot) &= g_N(x_N).\n\\end{align}\nfor $k=0,\\ldots,N-1$, where $x_{k+1}=\\mathcal{F}_k(x_k,d_k,y_k)$. \nThe Q-function is the expected cumulative remaining reward for performing the $k$th experiment at the given design $d_k$ from a given state $x_k$ and thereafter following policy $\\pi_{w}$. The two functions are related via\n\\begin{align}\nV_k^{\\pi_{w}}(x_k)=Q_k^{\\pi_{w}}(x_k,\\mu_{k,w_k}(x_k)).\n\\end{align}\n\n\n\n\n\\begin{theorem}\n\\label{thm:PG}\nThe gradient of the expected utility in \\cref{eq:expected_utility_w} with respect to the policy parameters (i.e. the policy gradient) is \n\\begin{align}\n \\nabla_w U(w) = \\sum_{k=0}^{N-1} \\mathbb{E}_{x_k|\\pi_w,x_0} \\[ \\nabla_w \\mu_{k,w_k}(x_k) \\nabla_{d_k} Q^{\\pi_w}_k(x_k,d_k)\\Big|_{d_k=\\mu_{k,w_k}(x_k)} \\].\\label{eq:pg_theorem}\n\\end{align}\n\\end{theorem}\nWe provide a proof in \\cref{app:pg_derive}, which follows the proof in \\cite{silver2014deterministic} for a general infinite-horizon MDP. \n\n\\subsection{Numerical Estimation of the Policy Gradient}\n\\label{ss:PG_numerical}\n\n\nThe PG \\cref{eq:pg_theorem} generally cannot be evaluated in closed form, and needs to be approximated numerically. We propose a Monte Carlo (MC) estimator:\n\\begin{align}\n \\label{eq:policy_grad}\n \\nabla_w U(w) \\approx \\frac{1}{M} \\sum_{i=1}^M \\sum_{k=0}^{N-1} \\nabla_w \\mu_{k,w_k}(x^{(i)}_k) \\nabla_{d^{(i)}_k} Q^{\\pi_w}_k(x^{(i)}_k,d^{(i)}_k)\\Big|_{d^{(i)}_k=\\mu_{k,w_k}(x^{(i)}_k)}\n\\end{align}\nwhere superscript indicates the $i$th episode (i.e. trajectory instance) generated from MC sampling. 
Note that the \\emph{sampling} only requires a given policy and does not need any Q-function. Specifically, for the $i$th episode, we first sample a hypothetical ``true'' $\\theta^{(i)}$ from the prior belief state $x_{0,b}$ and freeze it for the remainder of this episode---that is, all subsequent $y_k^{(i)}$ will be generated from this $\\theta^{(i)}$.\nWe then compute $d_k^{(i)}$ from the current policy $\\pi_w$, sample $y_k^{(i)}$ from the likelihood $p(y_k|\\theta^{(i)},d_k^{(i)},I_k^{(i)})$, and repeat for all experiments $k=0,\\dots,N-1$. The same procedure is then repeated for all episodes $i=1,\\dots,M$. The value of $M$ can be selected based on indicators such as the MC standard error, the ratio of noise level to gradient magnitude, or the validation expected utility from sOED policies produced under different $M$. \nWhile we propose to employ a fixed sample $\\theta^{(i)}$ for the entire $i$th episode, one may also choose to resample $\\theta_k^{(i)}$ at each stage $k$ from the updated posterior belief state $x_{k,b}^{(i)}$.\nThese two approaches are in fact equivalent, since by factoring out the expectations we have\n\\begin{align}\n \\label{eq:equivalency_sample_theta}\n U(w) &= \\mathbb{E}_{y_0,...,y_{N-1}|\\pi_w,x_0}\\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\] \\nonumber \\\\\n &= \\mathbb{E}_{\\theta|x_{0,b}} \\mathbb{E}_{y_0|\\pi_w,\\theta,x_0} \\mathbb{E}_{y_1|\\pi_w,\\theta,x_0,y_0} \\cdots \\nonumber\\\\\n &\\qquad \\qquad \\qquad \\cdots \\mathbb{E}_{y_{N-1}|\\pi_w,\\theta,x_0,y_0,\\dots,y_{N-2}} \\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\] \\\\\n &= \\mathbb{E}_{\\theta_0|x_{0,b}} \\mathbb{E}_{y_0|\\pi_w,\\theta_0,x_0} \\mathbb{E}_{\\theta_1|x_{1,b}} \\mathbb{E}_{y_1|\\pi_w,\\theta_1,x_{1}} \\cdots \\nonumber\\\\\n & \\qquad \\qquad \\qquad \\cdots\\mathbb{E}_{\\theta_{N-1}|x_{N-1,b}} \\mathbb{E}_{y_{N-1}|\\pi_w,\\theta_{N-1},x_{N-1}} \\[\\sum_{k=0}^{N-1}g_k(x_k,d_k,y_k)+g_N(x_N)\\],\n\\end{align}\nwhere the second equality corresponds to the episode-fixed $\\theta^{(i)}$, and the last equality corresponds to the resampling of $\\theta_k^{(i)}$. The former, however, is computationally easier, since it does not require working with the intermediate posteriors.\n\nFrom \\cref{eq:policy_grad}, the MC estimator for PG entails computing the gradients $\\nabla_w \\mu_{k,w_k}(x^{(i)}_k)$ and $\\nabla_{d^{(i)}_k} Q^{\\pi_w}_k(x^{(i)}_k,d^{(i)}_k)$. While the former can be obtained through the parameterization of the policy functions, the latter typically requires parameterization of the Q-functions as well. We thus parameterize both the policy and Q-functions, arriving at an actor-critic method. Furthermore, we adopt the approaches from Deep Q-Network (DQN)~\\cite{mnih2015human} and \nDDPG~\\cite{lillicrap2015continuous}, and use DNNs to approximate the policy and Q-functions. We present these details next. \n\n\n\n\\subsubsection{Policy Network}\n\\label{sec:policy_net}\n\nConceptually, we would need to construct individual DNNs $\\mu_{k,w_k}$ to approximate $\\mu_{k} : \\mathcal{X}_k \\mapsto \\mathcal{D}_k$ for each $k$. Instead, we choose to combine them \ninto a single function $\\mu_{w}(k, x_k)$, which then requires only a single DNN for the entire policy at the cost of a higher input dimension. Subsequently, the $\\nabla_w \\mu_{k,w_k}(x^{(i)}_k)=\\nabla_w \\mu_{w}(k,x^{(i)}_k)$ term from \\cref{eq:policy_grad} can be obtained via back-propagation. 
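\n\nTo make the role of back-propagation concrete, a minimal PyTorch-style sketch of assembling the estimator in \\cref{eq:policy_grad} is given below; the network sizes, input dimensions, and batch of state representations are placeholders, and the actual architectures are detailed in the remainder of this section.\n\\begin{verbatim}\nimport torch\n\n# Placeholder actor (policy) and critic (Q) networks\npolicy_net = torch.nn.Sequential(\n    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))\nq_net = torch.nn.Sequential(\n    torch.nn.Linear(9, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))\n\nx_batch = torch.randn(32, 8)    # stand-in for sampled (stage, state) inputs\nfor p in q_net.parameters():    # critic is held fixed during the actor update\n    p.requires_grad_(False)\n\nd_batch = policy_net(x_batch)                         # d_k = mu_w(k, x_k)\nq_vals = q_net(torch.cat([x_batch, d_batch], dim=1))  # Q_k(x_k, d_k)\n\n# Back-propagating -mean(Q) applies the chain rule grad_w mu * grad_d Q and\n# leaves (the negative of) the MC policy gradient estimate in policy_net's\n# .grad fields, ready for a descent-based optimizer\n(-q_vals.mean()).backward()\n\\end{verbatim}\n\n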
Below, we discuss the architecture design of such a DNN, with particular focus on its input layer.\n\n\nFor the first input component, i.e. the stage index $k$,\ninstead of passing in the integer directly, we opt to use a one-hot encoding that takes the form of a unit vector:\n\\begin{align}\n k \\qquad \\longrightarrow \\qquad e_{k+1}=[0,\\dots,0,\\underbrace{1}_{(k+1)\\rm{th}},0,\\dots,0]^T.\n\\end{align}\nWe choose one-hot encoding because the stage index\nis an ordered categorical variable instead of a quantitative variable (i.e. it has a notion of ordering but no notion of metric). Furthermore, these unit vectors are always orthogonal, which we observed to offer good overall numerical performance of the policy network. The tradeoff is that the dimension of representing $k$ is increased from 1 to $N$.\n\nFor the second component, i.e. the state $x_k$ (including both $x_{k,b}$ and $x_{k,p}$), we represent it in a nonparametric manner as discussed in \\cref{sec:math_formulation}:\n\\begin{align}\nx_k \\qquad \\longrightarrow \\qquad I_k=\\{d_0,y_0,\\dots,d_{k-1},y_{k-1}\\}.\n\\end{align}\nTo accommodate states up to stage $(N-1)$ (i.e. $x_{N-1}$), we use a fixed total dimension of $(N-1)(N_d+N_y)$ for this representation, where for $k < (N-1)$ the entries for $\\{d_l, y_l \\,|\\, l \\geq k\\}$ (experiments that have not happened yet) are padded with zeros (see \\cref{eq:NN_input}). \nIn addition to providing a \nstate representation without any approximation, another major advantage of such a nonparametric form can be seen under the terminal formulation in \\cref{eq:terminal_info_gN}, where now none of the intermediate belief states (i.e. $x_{k,b}$ for $k